I run W3Counter (70k sites); it measures unique users, and it has always tracked much closer to StatCounter's numbers than to NetApps'. Same for every other site that tracks this stuff. NetApps has always been an outlier, especially on its IE numbers.
The difference for NetApps is how they track unique users, particularly for countries such as China.
My understanding is that they basically take the total number of unique visitors they see from each country, then weight that number by the number of reported internet users in that country. So if NetApps sees 10 users from country A and 5 users from country B, but B has twice as many 'connected' people as A, the two countries end up with equal share (10 × 1 vs 5 × 2).
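Here's a toy sketch of that weighting as I understand it. The country names and figures are made up to match the example above; this is my reading of the methodology, not NetApps' actual code:

```python
# Hypothetical sketch of the described weighting (not NetApps' actual code).
raw_visitors = {"A": 10, "B": 5}            # unique visitors the tracker saw
internet_users = {"A": 1_000, "B": 2_000}   # reported internet users; B has 2x A

# Weight each country's raw count by its internet population, then normalize.
weighted = {c: raw_visitors[c] * internet_users[c] for c in raw_visitors}
total = sum(weighted.values())
share = {c: weighted[c] / total for c in weighted}

print(share)  # → {'A': 0.5, 'B': 0.5}: equal share, despite A sending 2x the visitors
```

The effect is that a country's final share is driven by its reported internet population, not by how much traffic the tracker actually observed from it.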
What this basically leads to is that China, which has a very large population of people who use the internet infrequently but are still counted as connected (and may be using shared computers), gets inflated numbers based on total population. This is also why the IE8 numbers are so large: it's still in wide use in China.
The NetApps numbers probably make sense if the question is 'what are the browser shares of all the individual users across the entire planet who could possibly load my website?' But when measuring browser usage by page loads or expected visitors, the numbers could very well be much closer to StatCounter's.
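For contrast, here's the same toy data measured the unweighted way, where each country's share is just its share of observed traffic (treating each unique visitor as one page load for simplicity; numbers are hypothetical):

```python
# Raw traffic share with no country re-weighting, using the same toy numbers.
page_loads = {"A": 10, "B": 5}

total = sum(page_loads.values())
share = {c: n / total for c, n in page_loads.items()}

print(share)  # A gets 2/3 and B gets 1/3 of the measured traffic
```

Same input data, very different answer: the unweighted version reflects who is actually hitting your site, while the weighted version estimates who exists out there and could.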
In that example, it's fine for A and B to get equal share, if you are measuring unique users and not usage. There is no inflation here, at least not in the methodology, which looks statistically valid to me.
Of course it could be wrong in practice, if the data used to re-weight is misleading while the unweighted raw data was more accurate. That's possible in theory, but seems less likely: there is fairly good information about internet usage in general, which is what is used to re-weight, certainly compared to the information available about browser share.