PCMark Memory Benchmark Favors GenuineIntel
javy_tahu writes "A review by Ars Technica disclosed that the PCMark 2005 memory benchmark favors the GenuineIntel CPUID. A VIA Nano CPU had its CPUID changed from the original VIA string to fake GenuineAMD and GenuineIntel strings. Score improvements of 10% and 47%, respectively, were seen. The reasons for this behavior of Futuremark's product are not yet known."
Money (Score:5, Insightful)
Easy. Intel paid them to make it that way.
Re:Money (Score:5, Funny)
Re:Money (Score:5, Insightful)
Here, sir, is the Internet, which you have won fair and square.
Re:Money (Score:5, Interesting)
If anyone can come up with a better explanation I'd be interested to hear it.
Re:Money (Score:4, Insightful)
Even if this is an unintentional error, they have certainly lost some credibility.
Re:Money (Score:5, Funny)
Mods, while you're at it, mod me +5 insightful for pointing out that the parent post was modded +4 insightful for pointing out that its parent was redundant...
Re:Money (Score:5, Informative)
If anyone can come up with a better explanation I'd be interested to hear it.
TFA offers the following:
At the very least, this suggests some incredibly sloppy coding on Futuremark's part, as the company may be enabling or disabling CPU optimizations based on a processor's vendor name in CPUID instead of actually checking CPUID for SIMD support. In this case, PCMark 2005's memory subsystem test doesn't appear to be aware that Nano supports SSE2 and SSE3, and is instead running a decidedly less-optimized code path. There are two factors, however, that make this explanation a bit difficult to swallow.
First, there's the issue of timing. PCMark 2005 was released (obviously) in 2005, and was obviously coded with an eye towards supporting current and future processors. This is standard operating procedure for Futuremark, which always builds benchmarks designed to last for at least a year, and often two. VIA's C5N-T (Nehemiah) core may have only supported MMX and 3DNow!, but the C7 launched in 2005, and that processor supported SSE2 and SSE3 from day one. Even if proper extension support wasn't built into the first version of PCM2K5, we tested version 1.2.0, and that patch was released on or around 11-29-2006.
Second, there's the issue of performance when Nano is identified as AuthenticAMD. If performance between the AMD and Intel CPUIDs was identical, there wouldn't really be a story here, but it isn't, and that's curious. Futuremark could plausibly argue that VIA's C3/C7 processors weren't exactly on the radar back in 2004-2005, but AMD and K8 certainly were, and K8 launched with full SSE and SSE2 support, with SSE3 added in 2005.
There's more, but I don't want to quote the entire article.
Re:Money (Score:5, Insightful)
Moral of the story is, when you're dealing with code like this, where it has the capacity to influence who receives billions of dollars and who doesn't, well, you can't trust it if it's closed source and not subject to public scrutiny.
Closed-source test suites cannot be trusted, shouldn't even be considered by potential purchasers, and have been misleading the public for years and years. This stands as mute evidence of that fact.
troll? really? mod up again! (Score:2, Interesting)
You've got a point there which is important to the discussion: if the source is closed, how can we know whether the test is fair?
Re:troll? really? mod up again! (Score:5, Insightful)
I disagree. At this point there is controversy. It will be explained by the vendor and people will have to either accept the explanation or not.
If it were open source, the facts of how the code behaves could be determined by third parties and publicized. We wouldn't have to take anyone's word for it.
Re:troll? really? mod up again! (Score:5, Insightful)
$randomInternetDude
If the source is open, you have multiple samples of $randomInternetDude to choose from. And it's not really random either. More like $internetDudeWithUnhealthyInterestInGameEngineProgramming, who I would expect to know a thing or two.
And you can always learn enough to verify it for yourself, if you have the source.
Better than trusting $corporatePrMan, anyway.
Re:troll? really? mod up again! (Score:4, Insightful)
Re: (Score:3)
To expand on the parent's argument:
With closed proprietary software, you get your information from one person/company (with a vested interest in convincing you to buy their product), and you have limited information that can be used to verify their claims.
With open source, you get information from a community of users and developers. But more importantly, OSS source code is available to be looked at by anyone at any time - including you or a programmer that you trust. This alone strongly discourages behin
Re: (Score:3, Insightful)
You're missing the point.
With proprietary software, you only get one entity to assure you it's legit, and that's the vendor. If the vendor is a trojan author of doom, you're screwed.
With open-source software, you get many people looking to see if there's anything sneaky going on. Since you have multiple samples, your result is more likely to be accurate. If one or more of them are trojan authors of doom, then it doesn't really matter, because the honest ones can spot and point out the malicious code.
Re: (Score:3, Insightful)
Because software isn't religion. There's a right answer and a wrong answer. You prove things.
Even if you can't look at a three-line code sample and follow the logic (which I doubt, if you actually tried), people could write a demonstration of the flaw in, for example, Ruby, which you could cut and paste into another browser window and run on someone else's computer [hobix.com] so you didn't need to worry about trojans.
If this was legit the code would look like this
CPUID = get_cpu_id
[...]
case CPUID
when 'Intel' ; e
Re:troll? really? mod up again! (Score:5, Insightful)
The problem with this argument is that with open source software, you don't just have to trust a single random guy for your information. When the source is open, it is often the case that MANY people in the online community will examine the code, and through discussion there emerges a consensus which is far more reliable than the opinion of just one random guy. That isn't to say that the community as a whole is never wrong, but it's vastly more trustworthy and reliable than just some $randomInternetDude.
Agree; but could we formalize auditing more? (Score:4, Interesting)
I agree with you.
I was wondering if there is some way we can get code audited by the community on a more formal basis, perhaps with a bounty system and a reputation system, so that one might donate to get the KDE4 code audited by me ($10), or some KDE contributor ($300), or Linus Torvalds ($10000). Then these people could develop a formal reputation system, like + or - votes on SourceforgeAuditVoting.org. They'd use their PGP signature to sign the audits.
Or something. I would view this as the next phase of the open source economy. Eventually companies might hire people with good reputations, to audit their own intra-company code.
Re: (Score:3, Informative)
By whom? Someone trustworthy? Mathematics? You're clutching at straws there, dude.
Uh, prior experience. Isn't that obvious? I don't have to trust someone to use their stuff. For example, I don't trust the people who put cracks up on gamecopyworld.com, necessarily... I just don't care. I use their stuff even though I don't trust them, and they could theoretically build up trust over time.
The point is not that open-source is inherently more trustworthy than closed source, it's that an open-source vendor who claimed that their code could do something it couldn't do would lose credibility.
Closed-source products give the vendor "credibility through obscurity", i.e. something for nothing.
Uh... any vendor who claims their product does something it can't loses credibility. And when they claim they can do something that they can, in fact, do, they gain it. This is all very elementary, and has nothing to do with closed or open source.
I have to think you are trolling, because I can't possibly believe that someone capable of typing on a keyboard could be so inept as to be unable to discern the difference in trustworthiness between an unrelated sample of the population giving something a thumbs up and a single vendor giving it a thumbs up.
There's a reason science and statistics gathering use random swaths of the population for information gathering. It provides a fairly accurate sample of the whole. Using a single
Re: (Score:3, Insightful)
Wow. Just wow. "I don't agree with you, so you must be trolling," is really rude, even for the internet. Consider not posting any more if you can't handle people who disagree with you.
It's not the fact that I disagree with you, it's the fact that you are wrong. It's not an opinion, it's a fact. Therefore, you are trolling or are mentally handicapped, because you are no longer ignorant, since you've been informed you are wrong by a number of different people.
I noticed you completely skipped over the part about how science and statistics are gathered. I put that there for a reason. It's to enlighten you to the fact that there IS A REASON wide swaths of random people are polled, and not
Re: (Score:3, Insightful)
That's ALL you ever have, from ANYONE! Hell, if that's your reason for not trusting, I damn well hope you don't trust anyone at all. Anyone you know can only give you their own word that they will continue to be trustworthy in the future. That's what trust is!!
No, you're wrong. That's not all you ever have. You completely missed the most important part: "All you have is their own word." See that? That means there is no one else to vouch for them. That's what closed source is. When it's open source, you have a whole fucking lot more than just their own word. You also have the word of every other person who has the capability to read and understand the code. If one of them is lying, you can bet another of them will raise a big stink about it. And that's totally ignoring
Re:Money (Score:5, Interesting)
Saying that a config scores 9000 points is pretty much useless. Saying that it gets an average of 40FPS in the UT3 benchmark at high detail and 1680x1050 is much more informative.
Unfortunately, this is also a little bit more complicated, and as we know, the simpler something is, the more popular it is with the dumb masses.
Re:Money (Score:4, Insightful)
Re:Money (Score:5, Insightful)
What I don't get is why game developers don't release freeware benchmark versions of their engines.
Because that would require a non-trivial amount of work for no substantive payoff?
Re:Money (Score:5, Insightful)
"Everything I don't know how to do is easy!"
Re: (Score:3, Insightful)
I've never worked in game development, but I know that at other software companies I've worked for, anything that went out the door to the consuming public (no matter how small, whether it was paid for or not) had to go through a full QA cycle, which took weeks; especially for apps that were available in more than one language.
The reason for this is simple: if it's crap (even if it's free) you lose goodwill with the customer. For paid applications, doubly so, because you may lose money on sales.
Re: (Score:3, Insightful)
Because the drivers will be optimized for these tests, rather than the game. They did it before, they will do it again (Nvidia and ATI sure weren't above it).
Re: (Score:3, Insightful)
Ok then, point me to an open source benchmarking program that's as complete, and I'll use it.
Might it just be that they got the software done as cheaply as possible, marked it as ready for release as soon as they could, and never bothered to fix what was obviously a glaring flaw?
Anyway, as an open source developer myself I don't really buy this 'open source will always be better' deal. It can only be better if the project is fortunate enough to attract quality coders and designers. There are a lot more open
Re:Money (Score:5, Insightful)
Ok then, point me to an open source benchmarking program that's as complete, and I'll use it.
glxgears.
Seriously, when they are changing the results based on the vendor name, it makes any result suspect -- which makes it pretty much useless as a benchmark. At least with glxgears, while it may not be a particularly accurate benchmark, it's at least guaranteed to be fair.
Anyway, as an open source developer myself I don't really buy this 'open source will always be better' deal.
That's not the point of this exercise.
Open source will not always make a better game, or a better office suite, or even a better text editor.
But there are some kinds of software which you need to trust, and which are difficult to verify without the source. Benchmarks are one example. SSH clients are another. For these, I would not even consider a proprietary version -- it's not about features or relative quality; open source is a necessity.
Pick me, pick me! (Score:3, Informative)
The Phoronix Test Suite [phoronix-test-suite.com].
It's Linux only, but a CPU that performs better on Linux will perform better on Windows.
Re:Money (Score:5, Informative)
I'll give 10:1 odds that Futuremark simply compiled their benchmark with Intel's C++ compiler.
I wrote a detailed explanation [slashdot.org] back in 2005 about how the Intel C++ compiler generates separate code paths for memory operations to make AMD processors appear significantly slower, and how you can trick the compiled code into believing your AMD processor is an Intel one to see incredibly increased performance. See this article [slashdot.org] for additional details.
Re:Money (Score:5, Interesting)
Yes, I remember that...
But why icc would make AMD better than "no name" beats me.
Re: (Score:2)
The devil you know?
Re: (Score:2, Insightful)
Re: (Score:3, Interesting)
Re:Money (Score:5, Interesting)
If your benchmark tool is going to use multiple code paths, then they should be configurable, so that you can benchmark different systems both using the same code and more optimal code. That way you'd get an idea of how much speedup various features provide.
As an example, take john the ripper's SSE2 support for cracking DES: on a Core 2, the SSE2 version is considerably faster, and compiling the C version never comes anywhere close regardless of compiler and flags; but on an AMD chip, compiling the generic C code with appropriate flags and a modern version of gcc produces slightly faster code.
Running the SSE2 version on a 2.3GHz Core 2 Quad achieves about 2 million c/s per core, while a 2.3GHz AMD Phenom yields about 1.6 million; compiling the C source with various flags and gcc versions makes the AMD slightly faster, while the Core 2 comes nowhere close.
Re:Money (Score:5, Insightful)
Writes an (anonymous) Intel representative.
Re: (Score:3, Interesting)
Fine, testing is one reason for two code paths; you want to make sure the generated code works on all x86 CPUs, but you don't want to thoroughly test on all of them. But that means that icc isn't suitable for compiling benchmarks, because different code is being run depending on the CPUID. The comparison is not a fair test; you have two variables (the code and the CPU) instead of one (the CPU).
Two questions:
- Why didn't Intel tell benchmark writers not to use icc? Obviously the results will be unfair if an
Re:Money (Score:4, Insightful)
And why would Intel's compiler emit code that is not x86-compliant? Code should look at cpuid feature bits, not "GenuineIntel".
Re:Money (Score:5, Insightful)
And this is partly why I generally ignore benchmark scores and look at real-world performance. It's possible for the benchmark or the hardware being benchmarked to 'cheat', or at least behave very differently and produce bogus scores. If I'm looking for a new video card, I don't look at 3DMark scores, I look at framerates in games that I play (or that use the same engine). If I'm looking for a CPU, I'll look at RAR compression times or video encoding speeds. If I'm looking for a storage solution at work, I look at file copy speeds of similar file quantities and sizes, or I/O performance of a similar database.
Re: (Score:3, Informative)
Re:Money (Score:5, Funny)
If anyone can come up with a better explanation I'd be interested to hear it.
OK, far-fetched it may be, but what if VIA paid them to do it, so that the exposé would generate a lot of free advertising and ram home the message that the Nano is faster?
Alternatively, the US military could have engineered it to distract us from the possibility that they are working for aliens and have files full of UFO data on their systems. Gosh, I'd better hack in and take a look....
"optimized" benchmark is no benchmark! (Score:3, Insightful)
I'll give you credit for coming up with a scenario that r
Re:Money (Score:5, Funny)
GenuineIntel (Score:5, Funny)
I'm a GenuineIntel, mod me 47% higher!
Re:GenuineIntel (Score:5, Funny)
Re:GenuineIntel (Score:5, Funny)
Re: (Score:3, Funny)
On the internet, no one can tell that you're a VIA Nano!
Money? (Score:4, Insightful)
Seems obvious, but follow the money trail: does PCMark get backing from Intel?
Compiler Optimization? (Score:2, Insightful)
A VIA Nano CPU has had its CPUID changed from the original VIA to fake GenuineAMD and GenuineIntel. An improvement of, respectively, 10% and 47% of the score was seen.
It sounds to me like this could possibly be explained by some kind of conditional optimization that the compiler puts in for various chips, to take advantage of differences in their designs that can improve performance.
Then again, probably not.
Re: (Score:3, Insightful)
This could all be explained if they compiled with something silly like ICC
http://www.theinquirer.net/en/inquirer/news/2005/07/13/intel-compiler-nobbles-amd-chips-claim [theinquirer.net]
Re: (Score:2, Insightful)
Yeah, except that ICC's Intel optimizations frequently improve AMD scores as well (over generic optimizations). Not always as much as they help Intel processors, but they do help some.
Re: (Score:3, Interesting)
Re:Compiler Optimization? (Score:5, Insightful)
You can't. That's why it was discovered now. Intel and AMD don't let you change the CPUID results on their CPUs. Via DOES let you change it. (You could hack the benchmark to change the checks, but then your results are invalid because you changed the benchmark code)
Either way, that's not an excuse. As Ars points out, if it is just checking for something like SSE2 the Nano has that. If you want to make an optimized code path it should be based on if a feature is reported as present or not, not who made the CPU.
It's just really REALLY fishy.
Re: (Score:2)
Re: (Score:3, Insightful)
Because, as we all know, real applications do this. Winzip has different code paths for different processors, as do Office and Photoshop, right?
In real life, you do things in the fastest GENERIC way possible. If SSE should make it faster, check for the existence of SSE, and then use it. If SSE n
Re:Compiler Optimization? (Score:4, Insightful)
Exactly, what happens when you run an AMD chip under both IDs? or an Intel?
As TFA mentions, we can't test it. AMD and Intel lock the CPUIDs on their chips; VIA doesn't. I do think AMD should do some testing in-house though, as I'm sure they could change the CPUID themselves. Though I wouldn't be surprised if they'd already tried this long ago. I know I would have. And if there were major discrepancies, we probably would have heard about it by now.
Re:Compiler Optimization? (Score:5, Informative)
Re:Compiler Optimization? (Score:5, Insightful)
Re: (Score:2)
Well, it might not be SSE2-based at all...
It could be that it recognises the Nano as an older-variant VIA chip, and thus follows a code path optimized for *that* chip...
Seeing an Intel, it follows a code path optimized for whatever chip Intel had out at the time, and thus performs better.
As an example, Cyrix processors used to have very weak floating point support, to the extent that it was sometimes quicker to run software floating point emulation. The older VIA chips may have had a similar issue, perhaps.
Re: (Score:2)
Re: (Score:2)
Because by the time the benchmark came out, the AMD K8/Hammer series was out, which had SSE2. SSE3 was added in the next update, which was before the most recent patch to the benchmark.
Either way, there is a set of flags returned by CPUID that specifically lists what features a chip has (like SSE1/2/3). To not check that would be moronic.
Moronic or Corrupt? (Score:5, Insightful)
Does it really matter whether the cause was "incredibly sloppy coding" or "Intel bribed them?" Either way, their benchmark cannot be trusted, and trustworthiness is ESSENTIAL for a benchmark. If anyone pays serious attention to this (which, having read TFA, it seems to merit), then FutureMark is toast.
Re: (Score:2)
Why is the damn benchmark checking CPUID in the first place? The thing should be CPUID agnostic, period.
Can't they design the benchmark to just ask if a CPU has a feature or not, and execute against what features the CPU reports back?
Doesn't seem so hard to me.
Re:Compiler Optimization? (Score:4, Informative)
Yes, it does: http://en.wikipedia.org/wiki/CPUID [wikipedia.org]
Re: (Score:2)
That is actually a pretty good guess.
An extreme example is the Intel compiler which used to do great optimization on Intel CPUs but really bad optimization on AMD.
The VIA chip is really new, and the code the compiler generates treats it as a P3 or P2.
Re:Compiler Optimization? (Score:5, Insightful)
In all likelihood this IS the case, but that still goes a long way toward discrediting Futuremark, as it shows their benchmarks were certainly NOT fairly tested.
Closed Source Benchmarks? (Score:3, Insightful)
It sounds to me like this could possibly be explained by some kind of conditional optimization that the compiler puts in for various chips, to take advantage of differences in their designs that can improve performance.
People are trusting closed-source benchmarks? Well, golly gee, who'd'a thunk there'd be errors, oversights, or shenanigans?
If this was used for anything more than entertainment value, any methodical person would have at least compared multiple closed-source benchmarks. If that proved to be
Re:Closed Source Benchmarks? (Score:5, Funny)
Re: (Score:2)
If this is the case, then wouldn't changing the CPUID give you better "real-life" performance too, because similarly compiled (non-benchmark) apps would also be utilising the optimisations? TFA doesn't seem to mention testing that...
Re: (Score:3, Insightful)
It sounds to me like this could possibly be explained by some kind of conditional optimization that the compiler puts in for various chips, to take advantage of differences in their designs that can improve performance.
I suppose it's possible that a VIA chip running code optimized for what the benchmark believes is an Intel CPU might perform better than the same chip running the benchmark's unoptimized code path, but as I understand it the VIA Nano is pretty entry-level; any optimizations present in it shou
Do I understand this correctly? (Score:2, Interesting)
Is this like changing the user agent in a browser?
Re:Do I understand this correctly? (Score:4, Informative)
Is this like changing the user agent in a browser?
Pretty much, yes.
Re:Do I understand this correctly? (Score:5, Funny)
GenuineIntel/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008072820 Firefox/3.0.1
I can already feel the speed!
Re:Do I understand this correctly? (Score:5, Insightful)
That's a pretty good analogy.
If Futuremark is indeed enabling CPU features based upon the CPUID, then this situation is a lot like the webpages that render incorrectly in Firefox unless the user agent is set to Internet Explorer.
Possible semi-benign explanation? (Score:5, Insightful)
This definitely requires clarification from the creator of the benchmark.
It is possible that the benchmark uses the CPUID to change how the benchmark works, for example, to work around known flaws in a given chip. If this is the case, then the problem is not "omyghoshitplaysfavorites" but rather lack of full disclosure that the benchmarks are not directly comparable across different chips. In the most benign scenario, this could be someone at the benchmark creator's shop forgetting to tell the documentation team. This is still a very serious issue, but it's not fraud.
Re: (Score:3, Interesting)
That's an insightful explanation, but IMO the benchmark is then only valid if the operating systems people use make the same allowances for the different chips.
One thing that doesn't seem to have been investigated is the permutations of the test: marking an Intel chip as a VIA, etc. If the differences show the same drop in performance as the improvement from marking a VIA as an Intel, then your explanation has effectively been disproved.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
They probably just use code compiled with optimizations for each given chip family... if they didn't, then people would be shouting about how not using the special features of a certain CPU would be unfair, etc.
So what now if Intel was the biggest desktop CPU manufacturer in the market (I know it's a stretch, but bear with me for a minute) and profiles for their CPUs (either through ICC or maybe even GCC) were just better optimized because more people put time and effort into them?
I can certainly see this being true fo
Re:Possible semi-benign explanation? (Score:4, Insightful)
And that's simply Intel favoritism.
Re: (Score:2)
It will be interesting to see how this pans out. Is FutureMark playing favourites with Intel? Or did they actually try to avoid some substandard implementations or bugs in the VIA and AMD processors?
Re: (Score:2)
Well, not really
Let's say (for simplicity) they used gcc to compile the different code paths. If all they told the compiler was to "optimize this one for Intel, optimize this one for AMD and optimize this one for VIA" and at the end you had results as those in the article then it wouldn't really be a fault of the PCMark guys.
This gets even worse if they optimized the different code paths using the respective compiler and libraries of the CPU vendor (e.g. ICC for Intel).
If ICC's optimizations were a lot better
MMX/SIMD Extensions (Score:5, Insightful)
Could it be that Futuremark uses the GenuineIntel and AMD flags to enable processor-specific extensions, and then does a whole bunch of math with those extensions and never bothers to check the result?
This would indicate some really terrible code on Futuremark's part, and VIA should be flagging those op-codes as illegal op-codes, but it might be possible that something like this could happen. It is even possible that the CPUID checks are duplicated in some library somewhere that actually gets the correct code sequence right, and the main Futuremark code disables the advanced functions of the library whenever the GenuineIntel and AMD flags are missing. Thus Futuremark may feature both code sequences that work and those that don't, and the resulting incompatibilities are what cause the issues.
Re:MMX/SIMD Extensions (Score:5, Informative)
Why are they using a benchmark they can't read? (Score:5, Insightful)
Why would you even consider running a benchmark program you don't have source code for and cannot compile yourself? (If you are worried about random compiler differences messing up the results, you can check an MD5 sum of the final binary against the published one, but it is important that you can reproduce the binary from source and you can read the source to find out what it does.)
If compilers like ICC cripple their code depending on CPUID, that will just lead all manufacturers to set CPUID to GenuineIntel, just as moronic websites (with help from Microsoft) ensured that all browsers call themselves 'Mozilla'.
Benchmark (Score:4, Insightful)
Well, PCMark 2005 is no longer good for testing processors against processors from another maker; i.e., it's only good for intra-AMD comparisons, etc.
That should be AuthenticAMD... (Score:4, Informative)
That should be AuthenticAMD, not GenuineAMD.
But that would be expecting editors to actually, you know, edit.
Numerology. (Score:4, Funny)
V+I+A == 224
G+e+n+u+i+n+e == 715
Genuine+A+M+D == 925
Genuine+I+n+t+e+l == 1223
The bigger the number, the faster the processor. And you get 20% extra when you pass 1000.
Re: (Score:3, Funny)
its AuthenticAMD, not GenuineAMD
Do not want! Ur in my numerologiez, ruining my calculationz!
CPUIDs (Score:5, Interesting)
VIA's is "CentaurHauls"
AMD's is "AuthenticAMD"
Intel's is "GenuineIntel"
There's no "VIA" nor is there "GenuineAMD".
Clearly PCMark2005 is buggy (at the best) and cannot be used to compare different CPU families in this test. At the worst it is intentionally flawed, and shouldn't be used at all.
It's a shame that not one VIA Nano review benchmarked the built-in Padlock functionality. Not one OpenSSL benchmark.
Am I missing something? (Score:3, Insightful)
Careful fraud could be much harder to spot (Score:3, Insightful)
If I were an evil fraudster at PCMark, paid by Intel to deliver worse scores to rivals, I would make sure that these rivals had no easy way of uncovering the fraud. Testing for an ID looks much more like bad code paths than like "sneaky fraud".
There is no shortage of alternative quirks that can be used to see whether a given processor belongs to one family or another. Should enough of these quirks be combined, it would be *very* hard to discover an evil-related cause.
Of course, choosing the 'bad' path given an ID may just be blatant enough to provide plausible deniability for the developers that "messed up". However, being a firm proponent of Hanlon's Razor, I would rather call it a bug than a "sponsored feature".
On the other hand, kudos to the guys at Ars who thought of changing the ID and, when the numbers did not add up, made further tests to nail down the argument, instead of just forgetting about the problem and performing a "review as usual", which would doubtless have required less effort. Yay for inquisitive hacker-reviewers.
Look at Intel's track record (Score:2)
And here's the code: (Score:3, Funny)
if(cpuid == "GenuineIntel")
{
Run_really_fast();
}
else if(cpuid == "AuthenticAMD")
{
Run_not_so_fast();
}
else
{
Run_slow();
}
Re:And here's the code: (Score:5, Funny)
No way. Vista doesn't even _try_ to run fast on any hardware.
There are lies, damn lies, (Score:2)
Re:Additional instructions (Score:5, Informative)
The CPUID instruction provides feature bits that software should use to determine which instructions are available. Using the vendor string is not a reasonable way of detecting the presence/absence of instruction set extensions like SSE.
The more obvious answer (Score:3, Interesting)
Re: (Score:2)
And why do we care about some e-penis benchmark?
If it's fast enough to play h.264 at nice high resolutions, and plays some of the fun games out there, that's all I care about.
If I need CPU power, I'd go get 4 quad cores on a server with big memory and a big HD. A 2D gfx card would be good enough for that server... See if they can do passive cooling on that gfx card, I bet not. And if I needed more CPU still, I'd use a dual-backplane mobo with a bunch of ATI/AMD graphics cards with AIGLX (or whatever the acronym is) and
Re: (Score:2)
Agreed. Back in the day, when overclocking or m/board chipsets made a tangible difference in a world where PC power trailed software requirements, benchmarks were a useful way of ensuring you were wringing the max out of your hardware. These days, almost everything is fast enough, and unless you're playing frame-rate willy-waving on Crysis or whatever, it's of no real interest. The broad brush approaches of CPU speed and/or number of CPUs are all you