
Intel Caught Cheating In 3DMark Benchmark

kdawson posted more than 4 years ago | from the try-something-smarter-than-detecting-the-app-name dept.


EconolineCrush writes "3DMark Vantage developer Futuremark has clear guidelines for what sort of driver optimizations are permitted with its graphics benchmark. Intel's current Windows 7 drivers appear to be in direct violation, offloading the graphics workload onto the CPU to artificially inflate scores for the company's integrated graphics chipsets. The Tech Report lays out the evidence, along with Intel's response, and illustrates that 3DMark scores don't necessarily track with game performance, anyway."


216 comments

Good reporting there Ric (4, Insightful)

Shadow of Eternity (795165) | more than 4 years ago | (#29728355)

Thanks for telling all of us that the best measure of hardware's in-game performance is... to benchmark it with a game.

Wonder if AMD plays fair? (1)

Taco Cowboy (5327) | more than 4 years ago | (#29728367)

Intel has cheated, can AMD avoid cheating?

Re:Wonder if AMD plays fair? (1)

bloodhawk (813939) | more than 4 years ago | (#29728467)

Don't remember AMD cheating, but then I have only recently become a big ATI fan. However, Nvidia has a long history of benchmark cheating in drivers in order to make their stuff look better than it is, and many times it was far more blatant than what Intel is doing here.

Re:Wonder if AMD plays fair? (5, Informative)

afidel (530433) | more than 4 years ago | (#29728559)

Oh, ATI was one of the first to cheat on a graphics benchmark: quack.exe [hardocp.com], anyone?

Re:Wonder if AMD plays fair? (1)

Idiomatick (976696) | more than 4 years ago | (#29728617)

Optimizing things by application is a good thing. While benchmarks are nice and all, I don't think ATI gave a shit about cheating on benchmarks. At the time, Quake 3 was the game on the cover art of every video card. It isn't shocking to see they did a few optimizations for it; I'd be surprised if they didn't. Cool read though, neat seeing groups hack little toys like that together.

Re:Wonder if AMD plays fair? (1, Insightful)

Anonymous Coward | more than 4 years ago | (#29728835)

Sorry, but I remember that all too clearly. That IS cheating. Quake 3 was used as a major benchmark at the time, and ATI didn't optimize for anything else unless there was a full-out issue with the drivers.

So, the lesson learned is that it's OK to optimize your drivers to make hardware run better for applications, but it's flat-out cheating the consumer when that optimization makes them believe that if your hardware runs that app better, it'll run just about every other app better.

THAT'S WHAT BENCHMARKING GPUs WITH GAMES IS FOR.

Re:Wonder if AMD plays fair? (0, Troll)

Idiomatick (976696) | more than 4 years ago | (#29729103)

Sounds like the fault of the benchmarking community, not ATI. Why should we be annoyed that it runs the top game of the year extra well? Being able to run Quake 3 was a major selling point. Should ATI not have made Quake 3 run faster even though they could? Hell no, that would be stupid. I suppose they could have included a sticker that said "made with Quake in mind" or something, and maybe they did, I dunno.

In any case, application-specific optimizations are a great tool. They got an extra 18% speed out of the chip with just application-specific tweaks. That's a pretty damn significant increase. Ignoring that would be a terrible decision. All graphics drivers should use this and update every few months as new games come out.

Re:Wonder if AMD plays fair? (3, Informative)

noundi (1044080) | more than 4 years ago | (#29729299)

Just to get things straight, bloodhawk said:

Don't remember AMD cheating, but then I have only recently become a big ATI fan. However, Nvidia has a long history of benchmark cheating in drivers in order to make their stuff look better than it is, and many times it was far more blatant than what Intel is doing here.

At the time of quack.exe, ATI wasn't owned by AMD; cheating or no cheating, we've got to be clear on that one.

Re:Wonder if AMD plays fair? (1)

bloodhawk (813939) | more than 4 years ago | (#29728673)

That is more quasi-cheating, similar to what Intel is doing, but as I said, I was not saying they were innocent, just that I could not remember them doing it, as until recently I just did not use or care about their products. Nvidia, on the other hand, were far more blatant with the cheating, going so far as to alter the output or not render some stuff in order to artificially inflate their numbers.

Re:Wonder if AMD plays fair? (1)

afidel (530433) | more than 4 years ago | (#29728695)

ATI altered the rendering with that hack, reducing image quality. It's really hard to get 15% better performance without doing something underhanded unless your previous drivers were beta quality.

Re:Wonder if AMD plays fair? (1)

bloodhawk (813939) | more than 4 years ago | (#29728737)

Would love to see a link to an article on that too; the quack.exe one was just about optimising for specific apps with application-specific instructions in the driver, not altering output.

Re:Wonder if AMD plays fair? (5, Informative)

0123456 (636235) | more than 4 years ago | (#29729073)

It's really hard to get 15% better performance without doing something underhanded unless your previous drivers were beta quality.

I used to work for a video card manufacturer and game and video developers often did totally retarded things which just happened to work on the cards they developed on but made the software run like crap on ours. We routinely had to implement workarounds for individual games to make them run properly on our cards.

One particular example which springs to mind -- I won't mention the developer or the game -- was an engine which used a feature which we supported in hardware but a certain other card manufacturer whose cards they used performed in software. Rather than configuring said feature once as they should have done, retarded developer repeatedly reconfigured it numerous times in the course of a single video frame, which required us to reconfigure the hardware every time -- slow as heck over an AGP bus -- whereas other card manufacturer just had to execute a few CPU instructions. We had to detect the game and disable our hardware support, so that we would fall back to software and run the retarded code much faster; in that instance there were places in the game where, far from a measly 15%, we'd literally be going from seconds per frame to numerous frames per second.

So it's quite possible to need to detect individual games or applications in order to work around retarded coding which cripples performance on your hardware. The line you shouldn't cross -- and which I don't believe we ever did -- was to render something other than what the developer intended, for example by detecting a shader used by a benchmark and replacing it with one that looked similar but didn't do as much work.

Similarly, the issue here is not Intel punting processing to the CPU when the GPU is overloaded, but the fact that they do so by detecting the name of the benchmark rather than by monitoring the GPU load and dynamically switching between hardware and software so that it would work on any application. General optimisation is fine, workarounds for retarded developers are fine, but special optimisations for benchmarks which don't affect real applications are getting pretty close to the line.

Re:Wonder if AMD plays fair? (1)

purpledinoz (573045) | more than 4 years ago | (#29729371)

I remember some time ago that people were upset that a video card manufacturer was optimizing drivers so certain games would run faster (and thus score higher on benchmarks). I welcome this. I want my games to run faster, and if the manufacturer is putting a ton of effort into optimizing their drivers so that some games will run faster, FOR FREE, then it's a boon for the customers. Optimizing for 3DMark, though, helps no one. But who actually cares about 3DMark scores anyway?

Re:Wonder if AMD plays fair? (4, Interesting)

Lloyd_Bryant (73136) | more than 4 years ago | (#29728955)

Oh, ATI was one of the first to cheat on a graphics benchmark: quack.exe, anyone?

Oh, this type of thing has been going on for a VERY long time. For example, there was the Chang Modification [pcmag.com] back in 1988 (it slowed down the system clock that was used as a timing base for the benchmark, resulting in higher benchmark scores).

Re:Wonder if AMD plays fair? (3, Interesting)

Fred_A (10934) | more than 4 years ago | (#29729163)

Oh, ATI was one of the first to cheat on a graphics benchmark: quack.exe, anyone?

Oh, this type of thing has been going on for a VERY long time.

I even remember teapot-based hacks (although not the details, unfortunately; probably something along the lines of having the teapot hardwired somewhere) back when displaying rotating GL teapots was all the rage for testing graphics hardware (ancient history, obviously). Of course, something like Quake was still the stuff of science fiction at the time.

Re:Wonder if AMD plays fair? (3, Informative)

sexconker (1179573) | more than 4 years ago | (#29728681)

I don't know all the details of when (in relation to AMD buying out ATi) but...

ATi was notorious for cheating on the IQ benchmarks: essentially using a different anisotropic filtering method for the IQ test (the good one), and then the cheating one during the other tests.

The ridiculous part was that Nvidia was caught doing a similar thing, and the outcry (in part driven by ATi calling out Nvidia) forced Nvidia to admit it and later include a driver option to select the optimization level used. When ATi was later caught doing the exact same thing, there was no outcry, there was no admission (despite proof), and there was no option in the drivers to turn off the "optimization".

I don't recall the details of why that particular optimization was considered a "cheat" and others weren't (I believe it degraded IQ to the point of looking like ass, and it was something that would never be used in-game).

This was back around the 6800 (Nvidia) vs x800 (ATi) days.

Re:Wonder if AMD plays fair? (0)

Anonymous Coward | more than 4 years ago | (#29728879)

AMD started the whole "2000+" CPU rating thing to confuse consumers over performance stats, claiming an AMD 2000+ was equivalent to an Intel 2GHz chip at the time, when it really wasn't. That's close enough to cheating.

Re:Wonder if AMD plays fair? (1)

Pentium100 (1240090) | more than 4 years ago | (#29729009)

They actually claimed that the 2000+ chip was equivalent to what the performance of a 2GHz Thunderbird (their older CPU) would be. That it also was similar to a 2GHz P4 was left to the imagination of the buyers.

Though the 2000+ Athlon XP and the 2GHz P4 were quite similar.

And now neither company uses the clock frequency for advertisement, since all the clock frequency can tell you is whether the chip in question is faster or slower than other chips in the same family, and model numbers can do that too (for example, an Opteron 275 is faster than a 270).

Re:Wonder if AMD plays fair? (1)

hairyfeet (841228) | more than 4 years ago | (#29728993)

Well, if you RTFA (I know, but I got bored) it says they tested a 785G from ATI and found that renaming did nothing, and of course it kicked Intel's butt, surprise surprise. I mean, is there anyone at this point who doesn't know Intel IGP = big can of fail, and that pretty much the only reason you see them so much is that they are dirt cheap?

But if you would like some benchmarks of actual games and BD playback using both the ATI and Intel chips, here you are [techreport.com], enjoy. As someone who recently switched from being a lifelong Intel+Nvidia guy (the bad solder fiasco put me off Nvidia, and the "bang for the buck" on the new duals and quads is just great), I was quite surprised that I was able to game for nearly a month and a half on the AMD IGP (a 780V board) before I got around to picking up a 4650. Granted, I'm not playing Crysis at 1080p, but for games I do enjoy, like FEAR and Bioshock, it was quite an enjoyable experience, with none of that "slideshow" BS that I always got even on old games with an Intel IGP.

Frankly I think the only way you could do worse than an Intel IGP would be one of the SiS IGP "offerings". I've had to deal with quite a few of those in customers' "Best Buy Specials", and for pretty much anything more than drawing the desktop it's gonna suck. But anybody who buys an Intel IGP PC and expects to do anything on it beyond watching some vids is just asking for hurt. Considering how lousy their IGPs are at... well, just about everything, is it any wonder that Intel resorted to cheating? I'm surprised, with all the blunders that Nvidia has made of late, that Intel doesn't just buy them out. At least then they would have IGPs like the Ion that are halfway decent.

Re:Good reporting there Ric (1)

cjfs (1253208) | more than 4 years ago | (#29728501)

Thanks for telling all of us that the best measure of hardware's in-game performance is... to benchmark it with a game.

Except the article clearly shows that the name of the game's executable determines frame rates in some cases. It then goes on to state:

the very same 785G system managed 30 frames per second in Crysis: Warhead, which is twice the frame rate of the G41 with all its vertex offloading mojo in action. The G41's new-found dominance in 3DMark doesn't translate to superior gaming performance, even in this game targeted by the same optimization.

This kind of offloading is definitely shady. I can't see how they'd get the driver approved.

Re:Good reporting there Ric (1)

Shadow of Eternity (795165) | more than 4 years ago | (#29728729)

"The G41's new-found dominance in 3DMark doesn't translate to superior gaming performance, even in this game targeted by the same optimization."

Re:Good reporting there Ric (5, Informative)

palegray.net (1195047) | more than 4 years ago | (#29728601)

The driver apparently detects "crysis.exe" and inflates performance metrics by offloading processing, whereas renaming the executable to "crisis.exe" gives realistic performance scores. Please RTFA before replying.
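
For readers who want to see what this kind of name-keyed behavior looks like in principle, here is a minimal C sketch. It is not Intel's actual driver code; the whitelist entries and the should_offload() helper are hypothetical, purely to illustrate why renaming "crysis.exe" to "crisis.exe" is enough to change the result.

    /*
     * Minimal sketch, not Intel's actual driver code: how an executable-name
     * whitelist can gate an optimization. The list entries and the
     * should_offload() helper are hypothetical, for illustration only.
     */
    #include <stdbool.h>
    #include <strings.h>   /* strcasecmp(); on Windows compilers use _stricmp */

    static const char *offload_whitelist[] = {
        "3dmarkvantage.exe", "crysis.exe", NULL
    };

    /* Enable the CPU-offload path only when the process name is on the list. */
    static bool should_offload(const char *process_name)
    {
        for (int i = 0; offload_whitelist[i] != NULL; i++) {
            if (strcasecmp(process_name, offload_whitelist[i]) == 0)
                return true;
        }
        return false;   /* "crisis.exe" falls through to here */
    }

One character of difference in the filename and the special path simply never engages, which is why the rename test is so effective.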

Re:Good reporting there Ric (0, Redundant)

sexconker (1179573) | more than 4 years ago | (#29728711)

How is better performance a bad thing?
Optimizing for a game is good.
Optimizing for a benchmark is bullshit.

And Crysis isn't a benchmark, no matter how much people want it to be.
It's buggy, it scales horribly, and it's simply crap in terms of efficiency.
Add on the fact that it's not a very good game (I got bored when the ice thing happened), and I wonder why people still talk about it.

People seem to like it because it stresses a system. Just because it runs like shit doesn't mean it's a good measure of your system's capabilities.
I can write a shitty 3D program that will bring the latest dual quad-core, quad-crossfire/SLI system to its knees. That doesn't mean a damned thing.

Re:Good reporting there Ric (2, Insightful)

palegray.net (1195047) | more than 4 years ago | (#29728751)

Did you actually read the article? The driver was shown to apply the same name-keyed tricks to games as to the benchmark, and to drop them when the executable was renamed. This isn't about actual optimizations as far as the GPU is concerned; it's about falsifying results by using the CPU instead.

Why couldn't you be bothered to do your research before replying?

Re:Good reporting there Ric (1)

Shadow of Eternity (795165) | more than 4 years ago | (#29728745)

"The G41's new-found dominance in 3DMark doesn't translate to superior gaming performance, even in this game targeted by the same optimization."

To me at least, that reads as "it cheats in 3DMark, but you catch it red-handed if you benchmark with a game."

Re:Good reporting there Ric (5, Funny)

Fred_A (10934) | more than 4 years ago | (#29729325)

The driver apparently detects "crysis.exe" and inflates performance metrics by offloading processing, whereas renaming the executable to "crisis.exe" gives realistic performance scores. Please RTFA before replying.

Thanks for the tip, I've now renamed all my games to "crysis.exe" and am enjoying a major speed boost. You've given my laptop a new youth!

I can finally get rid of that cumbersome i7 box with that noisy nVidia!

Eh? (2, Interesting)

Tyler Eaves (344284) | more than 4 years ago | (#29728369)

I thought offloading graphics computations to the CPU was the whole *point* of integrated video.

Re:Eh? (0)

DigiShaman (671371) | more than 4 years ago | (#29728405)

Well, if the GPU becomes saturated, I could imagine the rest of the load spilling over to the CPU (one or many cores). Obviously the GPU is more efficient at video tasks, but if the video task is the priority for the user, why not offload to the CPU as well? Makes sense to me.

Re:Eh? (3, Insightful)

Anonymous Coward | more than 4 years ago | (#29728489)

While it makes some sense, triggering the behavior using certain filenames is peculiar, to say the least.

I suppose, considering that the 3DMark tests are intended to test a hardware solution's peak performance, there is some rationale behind identifying the test executable on some list of "heavy" applications. The guidelines in which 3DMark explicitly forbids that sort of thing are clear, yes. However, in a sense the "spirit" of those guidelines is that they don't want companies trying to cheat by designing driver features/modes for the test which are not usable in actual gameplay.

Since these optimizations are (apparently) in use for actual games, it might not be such a heinous violation. Whether the other entries on their list are simply there with sinister intent, to raise the kind of doubts I've had in this post, who can say?

Still a pretty daft thing to do, but maybe it is a simple mistake rather than intentional deception.

Re:Eh? (4, Insightful)

jamesh (87723) | more than 4 years ago | (#29728603)

Well, if the GPU becomes saturated, I could imagine the rest of the load spilling over to the CPU (one or many cores). Obviously the GPU is more efficient at video tasks, but if the video task is the priority for the user, why not offload to the CPU as well? Makes sense to me.

If you do that for a benchmark app then you are not really testing (just) the performance of the graphics hardware, so turning on that optimization without disclosing it is probably not really a fair comparison of the hardware. To make it 'fair' you really need the benchmark app to be aware of the feature and able to turn it on or off under software control, or at least know whether it is enabled. I wonder if similar optimisations could be made to any 3D video driver...

In the real world, if the user wants high graphics performance and there are CPU cores doing nothing then like you said, offloading to them makes perfect sense.

Re:Eh? (4, Informative)

The MAZZTer (911996) | more than 4 years ago | (#29728507)

And here I thought the whole point of not doing video on the CPU was to offload it to a dedicated chip!

Re:Eh? (5, Insightful)

parallel_prankster (1455313) | more than 4 years ago | (#29728565)

Effectively dividing tasks among CPUs is not the issue here. They want to benchmark the GPU, and they wanna make sure you don't enable optimizations that are targeted specifically for the benchmark, which Intel was doing shamelessly.

Mod Parent Up (3, Insightful)

causality (777677) | more than 4 years ago | (#29728615)

Effectively dividing tasks among CPUs is not the issue here. They want to benchmark the GPU, and they wanna make sure you don't enable optimizations that are targeted specifically for the benchmark, which Intel was doing shamelessly.

Please mod this up; it really is that simple.

Which would make sense... (3, Insightful)

SanityInAnarchy (655584) | more than 4 years ago | (#29728599)

That was my first thought, too.

Here's the thing, though: They took 3DMarkVantage.exe and renamed it to 3DMarkVintage.exe, and much of that offloading was dropped. So this isn't a general-purpose optimization, which would make sense -- it's a targeted optimization, aimed at and enabled specifically for a benchmark, in order to get higher scores in said benchmark.

It reminds me of the days when Quake3.exe would give you higher benchmarks, but worse video, than Quack3.exe.

Re:Which would make sense... (3, Insightful)

Anonymous Coward | more than 4 years ago | (#29728727)

Here's the thing, though: They took 3DMarkVantage.exe and renamed it to 3DMarkVintage.exe, and much of that offloading was dropped. So this isn't a general-purpose optimization, which would make sense -- it's a targeted optimization, aimed at and enabled specifically for a benchmark, in order to get higher scores in said benchmark.

A practice which is explicitly forbidden per the guidelines. I know lots of Slashdotters don't read the article but I am really beginning to wonder what part of that is so hard to understand. Or maybe that's easy to understand. Maybe it's just that people can assert things that clearly didn't happen, and you find it convincing as long as they do it with confidence.

I'll (re)summarize the article. Intel quite obviously cheated by trying to artificially inflate a benchmark score, and did so in a way that was not permitted by the guidelines of the benchmark. The motive is quite clear, as such benchmarks often influence buying decisions. There's nothing ambiguous about it according to the story.

Reading some of the "debates" below, you'd think this were some complex, nuanced issue. It's amusing and kinda pathetic at the same time.

I don't really know if you can blame this one on the public schools, but you probably can as it seems to be all about the general lack of critical thinking.

Re:Which would make sense... (1)

xedd (75960) | more than 4 years ago | (#29729409)

Those who want to pretend it is a nuanced issue seem to be the Intel fanboys, (and maybe even Intel employees...)

32 bit or 64 bit? (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#29728375)

Or both? And yes, I am too lazy to RTFA

Yes but (1)

For a Free Internet (1594621) | more than 4 years ago | (#29728383)

What does this have to do with Twitter (TM) ?

Can I do this on my iPod (TM) ?

These and other questions remain unanswered... like, where is my hat? DUDE! bowel

INTEL ALWAYS DOES THIS (0)

Anonymous Coward | more than 4 years ago | (#29728391)

Why are we surprised? They are a marketing company!

Hmm... (4, Informative)

fuzzyfuzzyfungus (1223518) | more than 4 years ago | (#29728445)

On the one hand, a mechanism that uses the CPU for some aspects of the graphics process seems perfectly reasonable (whether or not it is a good engineering decision is another matter, and would depend on whether it improves performance under desired workloads, what it does to energy consumption, total system cost, etc.), so I wouldn't blame Intel for that alone.

On the other hand, though, the old "run 3Dmark, then run it again with the executable's name changed" test looks pretty incriminating. Historically, that has been a sign of dodgy benchmark hacks.

In this case, however, TFA indicates that the driver has a list of programs for which it enables these optimizations, which includes 3DMark, but also includes a bunch of games and things. Is that just an extension of dodgy benchmark hacking, taking into account the fact that games are often used for benchmarking? Or is this optimization feature risky in some way (either unstable, or degrades performance) and so only enabled for whitelisted applications?

If the former, Intel is being scummy. If the latter, I'm not so sure. From a theoretical purist standpoint, the idea that graphics drivers would need per-application manual tweaking kind of grosses me out; but if in fact that is the way the world works, and Intel can make the top N most common applications work better through manual tweaking, I can't really say that that is a bad thing (assuming all the others aren't suffering for it).

Re:Hmm... (1)

sexconker (1179573) | more than 4 years ago | (#29728723)

You'd think they'd at least learn from the past and just have a couple of hashes stored in the driver - check the hash of the exe and not just the name.

If IntelCheats.exe matches the hashes, turn on the optimizations.

Update each time 3DMark is updated - they do this anyway.
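
For what it's worth, the parent's suggestion would only take a few lines. Below is a rough C sketch of fingerprinting the executable's bytes instead of trusting its name; the FNV-1a hash and the placeholder value are mine (a real driver would presumably use something stronger), not anything Intel or Futuremark actually ship.

    /*
     * Rough sketch of the parent's idea: hash the exe and compare against known
     * fingerprints rather than matching the filename. FNV-1a is used only for
     * brevity; the value in known_hashes is a placeholder, not a real fingerprint.
     */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t fnv1a_file(const char *path)
    {
        FILE *f = fopen(path, "rb");
        if (!f)
            return 0;
        uint64_t h = 1469598103934665603ULL;       /* FNV-1a offset basis */
        int c;
        while ((c = fgetc(f)) != EOF) {
            h ^= (uint64_t)(unsigned char)c;
            h *= 1099511628211ULL;                 /* FNV-1a prime */
        }
        fclose(f);
        return h;
    }

    static bool is_known_benchmark_build(const char *exe_path)
    {
        static const uint64_t known_hashes[] = { 0x0123456789abcdefULL /* placeholder */ };
        const size_t n = sizeof(known_hashes) / sizeof(known_hashes[0]);
        uint64_t h = fnv1a_file(exe_path);
        for (size_t i = 0; i < n; i++)
            if (h == known_hashes[i])
                return true;
        return false;
    }

Note the trade-off the parent alludes to: a hash check defeats the renaming trick, but it breaks every time the target application is patched, so the driver's list would need updating with each new 3DMark build.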

Re:Hmm... (2, Interesting)

Sycraft-fu (314770) | more than 4 years ago | (#29729115)

I'm inclined to give Intel the benefit of the doubt here. Few reasons:

1) Nobody buys Intel integrated chips because of how they do on 3DMark. Nobody thinks they offer any serious kind of performance. Hell, most people are amazed to find out that these days they are good enough that you can, in fact, play some games on them (though not nearly as well as on dedicated hardware). So I can't imagine they are gaining lots of sales out of this. Remember, these are chips on the board itself. You either got a board with one or you didn't. You don't pick one up later because you liked the numbers.

2) Individual program optimization in drivers is extremely common. Some programs do things an odd way, and sometimes the vendors can figure out a way to work around it. An example would be the Unreal 3 engine and anti-aliasing in DirectX 9 mode. I don't know the details, but the upshot is it normally doesn't work. However, nVidia (and probably others) have figured out a way around this, so you can force AA on in Mass Effect and other games that don't include the controls, through the driver. The driver has a particular hack for that game to make it work. If you use a program like RivaTuner, you can mess with that sort of thing and flip the hacks on and off for various things.

3) Since Intel's integrated chips are exceedingly simple, it isn't surprising they have the CPU handle some things. I seem to recall that their older integrated chips did basically everything on the CPU, being little more than frame buffers themselves. The whole point of an integrated GPU is cheap and low power. That means it isn't going to have massive arrays of shaders to handle things. However, with a clever driver, a CPU could do some of that work. It would work particularly well in the integrated GPU case, since they use system memory.

So while I'm not sure I see the point in optimizing for 3DMark, I don't see the overall problem in specific optimizations for specific apps. If you discover that an app has a problem, and you can fix it, but that fix is not something to apply across the board, well then why not apply that fix for that app?

Re:Hmm... (0)

Anonymous Coward | more than 4 years ago | (#29729339)

if in fact that is the way the world works, and Intel can make the top N most common applications work better through manual tweaking, I can't really say that that is a bad thing

It's definitely not the way it should be. Nvidia and ATI are capable of generic drivers/chips. However, Intel is notorious for shoddy graphics chips and drivers. They are horrible hacks. Whenever a game player/developer (even with less graphically demanding titles) has a problem and lists some Intel graphics chip, the only response you can usually give is "Oh yeah, that's Intel for you. Get a real graphics card."

If anything, those cheaters (not only with this but also by claiming 3D API support that is a joke) are a serious threat to PC gaming, as they suggest 3D capabilities for Intel computers that are in fact sub-standard.

A large corporation lying? (1, Funny)

Anonymous Coward | more than 4 years ago | (#29728449)

I'm shocked shocked shocked, I tell you.

If you're too lazy to RTFA... (5, Informative)

Jonboy X (319895) | more than 4 years ago | (#29728451)

Just look at the pics. Changing the name of the executable changed the results dramatically. The driver is apparently detecting when it's running 3DMark (or certain other specific apps) and switching to some other mode to boost its scores/FPS numbers.

Re:If you're too lazy to RTFA... (0)

boshi (612264) | more than 4 years ago | (#29728491)

It seems entirely reasonable to me for them to optimize the driver to run particular programs faster if at all possible. I would only consider this cheating if the scene is not being rendered in its entirety (I think nvidia did this?), or if it somehow degraded the play experience (such as jerky with higher average frame rates versus smooth with lower average frame rates). By this logic, would the special drivers (like SLI or Crossfire) that have to be optimized per application also be cheating?

Re:If you're too lazy to RTFA... (4, Insightful)

cjfs (1253208) | more than 4 years ago | (#29728545)

It seems entirely reasonable to me for them to optimize the driver to run particular programs faster if at all possible.

Perhaps, but you definitely don't do it for the benchmark. The article quotes the 3DMark Vantage guidelines which are perfectly clear.

With the exception of configuring the correct rendering mode on multi-GPU systems, it is prohibited for the driver to detect the launch of 3DMark Vantage executable and to alter, replace or override any quality parameters or parts of the benchmark workload based on the detection. Optimizations in the driver that utilize empirical data of 3DMark Vantage workloads are prohibited.

So yes, SLI and Crossfire are a different case.

Re:If you're too lazy to RTFA... (0)

Anonymous Coward | more than 4 years ago | (#29728587)

It seems entirely reasonable to me for them to optimize the driver to run particular programs faster if at all possible. I would only consider this cheating if the scene is not being rendered in its entirety (I think nvidia did this?), or if it somehow degraded the play experience (such as jerky with higher average frame rates versus smooth with lower average frame rates).

By this logic, would the special drivers (like SLI or Crossfire) that have to be optimized per application also be cheating?

It's cheating because I went out and bought a CPU that could handle X amount of processing, and a GPU that could handle Y amount. And I had assumed that the GPU wasn't stealing clock cycles from the rest of my system; otherwise I wouldn't have bought that card, or else I'd have compensated by buying a stronger CPU. But now I'm hosed because the hardware manufacturer lied about its capability and made up for it by robbing resources from the rest of my system.

Yes, the drivers should optimize for the software being run, but they should be able to do so without thieving resources from other parts of the system. In any event, it's pretty obvious why they would choose to accelerate specific applications, like benchmarks, as opposed to accelerating actual user applications.

I don't CARE how well the car performs on the closed course, I want to know how it handles in the "wild".

Re:If you're too lazy to RTFA... (2, Insightful)

Idiomatick (976696) | more than 4 years ago | (#29728655)

Optimizing for games or rendering software makes sense. Optimizing for benchmarks seems like a pretty clear violation of the rules.

It does point out a weakness in benchmarks versus in-game tests, though. If a company spends all of their time optimizing for specific applications, then they will get lower marks in a benchmark than they would in real life. But it isn't fair to apply those optimizations to benchmarks. Lends more credence to the "top 5 games" benchmarks that Tom's Hardware or whoever uses.

Re:If you're too lazy to RTFA... (0)

Anonymous Coward | more than 4 years ago | (#29728661)

And Intel doesn't deny it. They freely admit that they switch to CPU rendering when it's faster than the (admittedly pitiful) GPU in their onboard chipsets.

Heck, if you have one of their non-X chipsets, it *ALWAYS* uses the CPU for this.

For example, try 3DMark on their "GMA 3500" chipset and on their "GMA X3500" chipset (G33 and G35 chipsets, respectively), and you'll probably get the exact same score with 3DMark under its 'proper' name, and a worse score on the X3500 when you rename the executable. Intel knows that some games do worse using the GPU, so they code the drivers to use the CPU for those games. If the driver doesn't know the game, though, it'll use the GPU.

Re:If you're too lazy to RTFA... (3, Insightful)

Eil (82413) | more than 4 years ago | (#29728697)

But see also Intel's response on page 2:

We have engineered intelligence into our 4 series graphics driver such that when a workload saturates graphics engine with pixel and vertex processing, the CPU can assist with DX10 geometry processing to enhance overall performance. 3DMarkVantage is one of those workloads, as are Call of Juarez, Crysis, Lost Planet: Extreme Conditions, and Company of Heroes. We have used similar techniques with DX9 in previous products and drivers. The benefit to users is optimized performance based on best use of the hardware available in the system. Our driver is currently in the certification process with Futuremark and we fully expect it will pass their certification as did our previous DX9 drivers.

And the rest of page 2 indicates that offloading some of the work to the CPU does, for certain games, improve performance significantly. Offhand, this doesn't necessarily seem like a bad thing. Intel is just trying to make the most out of the hardware of the whole machine. One would also do well to bear in mind that the GPU in question is an integrated graphics chipset: they're not out to compete against a modern gaming video adapter and thus have little incentive to pump their numbers in a synthetic benchmark. Nobody buys a motherboard based on the capabilities of the integrated graphics.

The question that should be asked is: What is the technical reason for the drivers singling out only a handful of games and one benchmark utility instead of performing these optimizations on all 3D scenes that the chipset renders?

Re:If you're too lazy to RTFA... (1)

Guspaz (556486) | more than 4 years ago | (#29729175)

It makes sense; you'd only want to perform these optimizations in games where you're significantly GPU bound. CPU-heavy games, such as Supreme Commander, are probably better off spending the CPU time on the game itself.

I'd see this as more laziness than anything else; it's easier to just hard-code in a list of GPU-bottlenecked games than it would be to actually have your driver auto-detect if there is idle CPU time that could be better spent on offloading.

I don't really see much of an issue with what Intel is doing, though. This behaviour represents their drivers' real-world performance and optimizations. How is it cheating to make the best use of available resources?

Heck, their future products will blur the line so much between CPU and GPU that they might well be offloading all sorts of things back and forth. And maybe they'll still be lazy about deciding what games should offload what. Is it still cheating when there's not much difference between your CPU and GPU?

Re:If you're too lazy to RTFA... (0)

Anonymous Coward | more than 4 years ago | (#29728739)

It's an optimization. The driver has a list of applications which benefit from offloading GPU work to the CPU. This benchmark happens to be one of them. I agree that this should not be done with a list of executables in the driver, but generally there's nothing wrong with using an underutilized CPU to speed up graphics. The driver should measure performance and utilization and decide based on these measurements though (and if it did, it would still accelerate the benchmark by using the CPU for graphics work, but it would do so regardless of the filename.)

Doesn't 3DMark cheat too? (3, Interesting)

iYk6 (1425255) | more than 4 years ago | (#29728459)

Is 3DMark the benchmark that will give a higher score to a VIA graphics card if the Vendor ID is changed to Nvidia?

Re:Doesn't 3DMark cheat too? (1)

afidel (530433) | more than 4 years ago | (#29728645)

That's probably a matter of the optimization path in the games that are run and would probably result in an unstable system if done for general gaming. Tricking a game into running the incorrect codepath just seems to be asking for trouble IMHO.

Re:Doesn't 3DMark cheat too? (3, Informative)

pantherace (165052) | more than 4 years ago | (#29728927)

You may be thinking of changing the CPUID on Via chips to GenuineIntel vs AuthenticAMD vs CentaurHauls.

There's one of the 'big' benchmark suites where the chip's score is roughly the same on AuthenticAMD and CentaurHauls, but gets a boost on GenuineIntel. Via's chips are the only ones with (user) changeable cpuid, so we don't know how differently IDed AMD or Intel do, but still interesting.

(First google'd link talking about it.)
http://www.maximumpc.com/article/news/pcmark_memory_benchmark_favors_genuineintel_over_authenticamd [maximumpc.com]

That's what they do for LOTS of games... (4, Interesting)

Anonymous Coward | more than 4 years ago | (#29728485)

Intel fully admits that the integrated chipset graphics aren't that great. They freely admit that they offload rendering to the CPU in some cases. This isn't a secret.

Re:That's what they do for LOTS of games... (1)

crazyjimmy (927974) | more than 4 years ago | (#29728793)

I think GAMES is the operative word here. A benchmark shouldn't be targeted in such a fashion.

Why not? (0)

PolarBearFire (1176791) | more than 4 years ago | (#29728531)

The newest GPUs have 2 billion transistors. Why wouldn't you put them to use? That's the trend anyway; even nVidia is going to release a 3-billion-transistor GPU that's able to run general programs. I'm a PC gamer; I couldn't care less if Intel or ATI or nVidia cheat on their benchmarks. In fact, they should be encouraged to release hand-coded or special drivers to improve performance in specific games.

Re:Why not? (3, Insightful)

rm999 (775449) | more than 4 years ago | (#29728607)

"they should be encouraged to release hand coded or special drivers to improve performance in specific games."

Games, sure - but it defeats the point of benchmarks by introducing a new useless variable: how optimized the driver is for that benchmark. I mean, why should 3dMarkVintage.exe be 30% slower than 3dMarkVantage.exe? How does this help anyone except Intel?

Re:Why not? (1)

BikeHelmet (1437881) | more than 4 years ago | (#29728997)

I mean, why should 3dMarkVintage.exe be 30% slower than 3dMarkVantage.exe? How does this help anyone except Intel?

And for that matter, when slowed down so that it gets the same score as an ATI IGP in 3DMark, it had one third the framerate of that ATI IGP in Crysis - an actual game!

I fully agree! These scores aren't helping anyone except Intel!

Re:Why not? (3, Informative)

BobisOnlyBob (1438553) | more than 4 years ago | (#29728623)

It's not special drivers for specific games. It's regular drivers with exceptions coded in to make them appear faster on "standardised" tests, which are meant to be an all-purpose benchmark to help consumers identify the sort of card they need (and to compare competing cards). This is cheating to increase sales among the early adopter/benchmarker crowd, impress marketing types and get more units on shelves, and is generally at the cost of the consumer.

Re:Why not? (3, Insightful)

Grishnakh (216268) | more than 4 years ago | (#29728731)

Exactly. If they want to offload GPU processing to the CPUs, then they should do that for ALL programs, not just certain ones in a list.

Re:Why not? (2, Insightful)

causality (777677) | more than 4 years ago | (#29728795)

It's not special drivers for specific games. It's regular drivers with exceptions coded in to make them appear faster on "standardised" tests, which are meant to be an all-purpose benchmark to help consumers identify the sort of card they need (and to compare competing cards). This is cheating to increase sales among the early adopter/benchmarker crowd, impress marketing types and get more units on shelves, and is generally at the cost of the consumer.

No need for a car analogy on this one. So it's like what happens when the public schools teach a generation or two in such a way that they are optimized for performance on standardized tests, and when those students eventually enter the working world, they don't know how to make change without a cash register or other calculator of some sort? The way they don't know how to deconstruct an argument? Let alone understand the importance of things like living within your means?

Why a bad hack when you are close to much more? (1)

parallel_prankster (1455313) | more than 4 years ago | (#29728549)

It's funny that Intel simply creates an INF file and uses it to detect apps and optimize for performance. I mean, if you are detecting a file name and enabling performance optimizations, why not detect the app behaviour itself and make the optimizations generic? Clearly you know the app behaviour and you know the performance optimizations work. This seems to me like a case where people were asked to ship it out fast, and instead of taking the time to plug the optimization into the tool, they just made it a hack. A really bad one too!!!

Re:Why a bad hack when you are close to much more? (2, Informative)

Grishnakh (216268) | more than 4 years ago | (#29728763)

I'm just guessing here, but maybe because offloading this work to the CPUs decreases CPU performance substantially, they don't want to make these changes generic because it'd make it look like systems with Intel video are slow, especially in any CPU-oriented benchmarks. After all, they pointed out in the article how Intel does this same thing for the "Crysis" game, but even with this offloading working, the game only got a measly 15fps with all extra effects off, which is downright unusable.

The whole thing just looks really shady.

Re:Why a bad hack when you are close to much more? (3, Interesting)

causality (777677) | more than 4 years ago | (#29728843)

It's funny that Intel simply creates an INF file and uses it to detect apps and optimize for performance. I mean, if you are detecting a file name and enabling performance optimizations, why not detect the app behaviour itself and make the optimizations generic? Clearly you know the app behaviour and you know the performance optimizations work. This seems to me like a case where people were asked to ship it out fast, and instead of taking the time to plug the optimization into the tool, they just made it a hack. A really bad one too!!!

Sure, but how hard would it actually be for a graphics driver to scan an arbitrary executable and determine a) that it's a game and b) how it will behave when executed? I suppose they could model it after the heuristic and behavioral features of some antivirus/antispyware applications, but nothing about this problem sounds trivial. There's also the question of how bloated a graphics driver you are willing to accept.

My guess is that the above concerns explain why this was a poorly-executed hack.

Re:Why a bad hack when you are close to much more? (1)

TheThiefMaster (992038) | more than 4 years ago | (#29729157)

I think he meant at runtime.
It wouldn't be hard to detect that a running application was only using one thread out of a quad-core CPU and was thrashing the GPU, and then offload some stuff to the other CPU cores.
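
A sketch of what that runtime check might look like, in C. The three query_*() helpers below are stand-ins I made up for whatever utilization counters a real driver could read from the hardware and the OS scheduler, and the thresholds are likewise arbitrary; this is not any vendor's actual heuristic.

    /*
     * Sketch of a dynamic offload heuristic: offload vertex work to idle CPU
     * cores only when the GPU is the bottleneck and the application leaves most
     * of the CPU unused. The query_*() readings are hard-coded placeholders; a
     * real driver would read hardware counters and scheduler statistics instead.
     */
    #include <stdbool.h>
    #include <stdio.h>

    static double query_gpu_utilization(void)     { return 0.97; }  /* 0.0 .. 1.0 */
    static double query_app_cpu_utilization(void) { return 0.20; }  /* fraction of all cores */
    static int    query_online_cpu_cores(void)    { return 4;    }

    static bool should_offload_dynamically(void)
    {
        double gpu   = query_gpu_utilization();
        double cpu   = query_app_cpu_utilization();
        int    cores = query_online_cpu_cores();

        /* GPU saturated, and at least one whole core's worth of CPU headroom left. */
        return gpu > 0.95 && cores > 1 && cpu < 1.0 - (1.0 / cores);
    }

    int main(void)
    {
        printf("offload to CPU: %s\n", should_offload_dynamically() ? "yes" : "no");
        return 0;
    }

The point is that a decision made this way would kick in for any GPU-bound application, renamed or not, which is exactly the property a filename whitelist lacks.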

Re:Why a bad hack when you are close to much more? (2, Interesting)

Mashi King (1636207) | more than 4 years ago | (#29728965)

Detecting application behavior dynamically is non-trivial. Commonly it is performed by instrumenting the binary, which _degrades_ the performance of the binary. The act of observation destroys the behavior to be observed, so to speak. This is why 3DMark Vantage explicitly prohibits optimizations that utilize empirical data of its workloads. _After_ you have the behavior of the application, optimization is a lot easier.

White Goodman Would be Proud (2, Interesting)

mpapet (761907) | more than 4 years ago | (#29728561)

In true White Goodman fashion, cheating is something losers come up with to make them feel better about losing.

Is Intel the 500 lb. gorilla in chipsets? Sure, and they got there by 'cheating.' Which is winning.

Ain't capitalism grand?

For all you losers who don't know who the great White Goodman is: http://www.imdb.com/title/tt0364725/ [imdb.com]

Re:White Goodman Would be Proud (2, Insightful)

Grishnakh (216268) | more than 4 years ago | (#29728789)

Is Intel the 500 lb. gorilla in chipsets? Sure, and they got there by 'cheating.' Which is winning.

To be fair, I'm pretty sure that Intel has made the highest-performing chipsets for Intel processors for quite a long time now, occasionally competing with Nvidia (who recently gave up). The other makers like VIA and SiS never had chipsets that worked as well as Intel's. Of course, this is for chipsets which don't have built-in graphics.

Intel's entries into the 3D graphics market have never been very good, only a "better than nothing" solution for low-end and corporate desktops where customers don't want a relatively expensive add-on graphics card, but want to run very basic 3D applications, such as Google Earth. The cost difference between a motherboard with a non-graphics chipset and Intel's built-in graphics is very nominal, and much cheaper than a separate Nvidia or ATI graphics card, especially when you multiply that difference by thousands of desktop systems as used in corporations. But why they even bother trying to rig benchmarks like this is beyond me. No one who's serious about graphics performance would use Intel's built-in video.

Re:White Goodman Would be Proud (2, Interesting)

Kjella (173770) | more than 4 years ago | (#29729031)

But why they even bother trying to rig benchmarks like this is beyond me. No one who's serious about graphics performance would use Intel's built-in video.

No, but there are a lot of 3D games that aren't FPS junkie, the-sky-is-the-limit craving games. For example, I liked King's Bounty which has minimum requirements of "Videocard nVidia GeForce 6600 with 128 Mb or equivalent ATI". Tales of Monkey Island says: "Video: 64MB DirectX 8.1-compliant video card (128MB rec.)". You won't find any of these in the latest AMD/nVidia review, but just pretending to have a little 3D performance can make a difference between "no 3D games at all" and "some non-intensive 3D games". Outside the hardcore gaming market it might matter.

Re:White Goodman Would be Proud (0)

Anonymous Coward | more than 4 years ago | (#29729419)

No one who's serious about graphics performance would use Intel's built-in video.

I think you are wrong here. People only think of mainstream AAA games with the latest 3D imagery when they think of the gaming industry. However, there is a HUGE casual/non-hardcore games industry.

Unfortunately these non-hardcore gamers often only have Intel chips and think they have a viable 3D graphics card.

In my opinion Intel is gunning for the mass market that can be convinced that some Intel 3D thingamajig is adequate for "casual, family-friendly gaming".

They get another foot in the door by faking benchmarks because in the end the scores matter more than the fact that they were faked.

xbitlabs was onto them.. (1, Informative)

Anonymous Coward | more than 4 years ago | (#29728581)

http://www.xbitlabs.com/articles/mainboards/display/amd785g-intelg45_6.html#sect1 [xbitlabs.com]

quote
The obtained numbers are pretty interesting. The thing is that although AMD 785G solution is ahead of Intel in 3DMark06, it falls behind the competitor in 3DMark Vantage suite. It is especially strange keeping in mind that Radeon HD 4200 is considerably more powerful than GMA X4500HD according to formally calculated theoretical performance. However, the fact is undeniable: Intel G45 chipset does produce higher 3DMark Vantage score in Windows 7. By the way, this is only true for the upcoming operating system, because Intel graphics accelerator can't repeat its success in Windows Vista. And it means that we can conclude that this sudden success demonstrated by Intel G45 can only be explained by certain driver optimizations and not the GPU architecture.

it's only cheating if... (1, Informative)

Anonymous Coward | more than 4 years ago | (#29728589)

It's nothing new that integrated/cheap GPUs use the CPU for various things. By itself, this is not cheating; it's just a subpar solution. It's only cheating if the drivers are fudging the settings per-application without telling the user. If they fudge for 3DMark and not for other applications, this might mislead the user's intuition about the GPU's performance elsewhere. The per-game profiling offered in the control panels for ATI/Nvidia is different because it can be switched off and the user is made aware of it.

graphics COprocessor (1)

Gothmolly (148874) | more than 4 years ago | (#29728611)

The whole idea of a coprocessor is to distribute the work. Oddly, Intel seems to be using the CPU as the graphics coprocessor, instead of the other way around. However, if your task is not CPU-constrained, then this actually makes sense. It's weird, and shady not to admit it, but if Intel came out and said "The following executables are not CPU constrained on processor X, therefore we shift graphics back to the CPU for improved performance", everyone would applaud them for being clever.

Re:graphics COprocessor (0)

Anonymous Coward | more than 4 years ago | (#29728951)

I gave you a +1 Insightful, but I feel I should add that the point of the benchmark is to test the 3D card, not the entire system. ("3DMark Vantage is a PC benchmark suite designed to test the DirectX10 performance of your graphics card. ") What Intel did is equivalent to strapping rocket boosters on the back of a car for a track run to make the engine performance look better... sure, the whole car goes faster but the result is still misleading.

Still, it sounds like it was a legitimate boost for several games even if it increased their benchmark performance egg-to-face ratio.

Do the optimizations work for anything else? (1)

TimTucker (982832) | more than 4 years ago | (#29728719)

I'm seeing a potential other side to this that doesn't seem to be being explored (unless I've missed something): if the optimizations are specific to .exes listed in the driver's .inf file, has anyone tried adding other games to the list (or, alternately, just renaming another executable to match one in the list)?

It would seem like an interesting turn if the optimizations are generic, but only enabled for games/applications that Intel has spent time testing them on.

Re:Do the optimizations work for anything else? (2, Interesting)

Nurf (11774) | more than 4 years ago | (#29728923)

Well, there is also the interesting tidbit that it doesn't enable those optimizations unless the CPU is an Intel CPU.

Hmm.

Re:Do the optimizations work for anything else? (4, Insightful)

edmudama (155475) | more than 4 years ago | (#29729065)

That's not interesting. How do you plan to connect a non-Intel CPU to an Intel chipset with integrated graphics?

Monopoly to cheat (-1, Troll)

Anonymous Coward | more than 4 years ago | (#29728757)

With a couple of companies dominating the market chances are that they all cheat and they don't care much if they get caught. What are you gonna do? Are you going to fire them? Drive them out of business? That's just silly... They cheat, they get caught... doesn't matter... nothing changes.

Next, please...

If you were improving the GPU for gaming... (3, Insightful)

zullnero (833754) | more than 4 years ago | (#29728873)

You'd think you'd have logic in the GPU that could determine when a certain load was being achieved, certain 3D functionality was being called, etc., and offload some work to a multicore CPU if it was hitting a certain performance threshold (as long as the CPU itself wasn't being pounded...but most games are mainly picking on the GPU and hardly taking full advantage of a quad core CPU or whatever). That makes a degree of sense...using your resources more effectively is a good thing. If that improves your performance scores, well...so what? It measures the fact that your drivers are better than the other card's drivers. That seems like fair play, from a consumer's standpoint. If the competitors can't be bothered to write drivers that work efficiently, that's their problem. Great card + bad drivers = bad investment, as far as I'm concerned. That's the real point of these benchmarking tests, anyway. It's just product marketing.

But trapping a particular binary name to fix the results? That's being dishonest to customers. They're deliberately trying to trick gamers who just look at the 3DMark benchmarks into buying their hardware, but giving them hardware that won't necessarily perform at the expected level of quality. I generally stick up for Intel, having worked there in the past as a contractor and generally liking the company and people...but this is seriously bad form on their behalf. I'm surprised this stuff got through their validation process...I know I'd have probably choked on my coffee laughing if I were on that team and could see this in their driver code.

SOP (2, Insightful)

OverflowingBitBucket (464177) | more than 4 years ago | (#29728913)

Hasn't every chipset maker - ever - been busted for fudging benchmark results at some point? Multiple times, usually?

And then they get caught out by the old exe-renaming technique.

Why do they keep trying it? The mind boggles.

I would have thought by now that a standard tool in the benchmarker's repertoire would be one that copies each benchmark exe to a different name and location and launches that, followed by a launch with the default name; and that the more popular benchmarks would have options to tweak the test ordering and methodology slightly to make application profiling difficult.
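
Something along these lines would do it; a minimal C sketch with placeholder filenames that copies the benchmark binary to a neutral name and launches both runs, leaving the score comparison to the benchmark's own output.

    /*
     * Minimal sketch of the rename-and-rerun test. The paths are placeholders;
     * comparing the two reported scores is left to the benchmark's own output.
     */
    #include <stdio.h>
    #include <stdlib.h>

    static int copy_file(const char *src, const char *dst)
    {
        FILE *in = fopen(src, "rb");
        FILE *out = fopen(dst, "wb");
        if (!in || !out) {
            if (in)  fclose(in);
            if (out) fclose(out);
            return -1;
        }
        char buf[8192];
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, in)) > 0)
            fwrite(buf, 1, n, out);
        fclose(in);
        fclose(out);
        return 0;
    }

    int main(void)
    {
        const char *real  = "3DMarkVantage.exe";  /* placeholder path */
        const char *decoy = "NotABenchmark.exe";  /* same binary, innocuous name */

        if (copy_file(real, decoy) != 0) {
            fprintf(stderr, "copy failed\n");
            return 1;
        }
        system(real);   /* run under the detectable name */
        system(decoy);  /* run under the renamed copy */
        /* A large gap between the two scores means the driver keys off the name. */
        return 0;
    }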

No organizational memory. (2, Insightful)

mac1235 (962716) | more than 4 years ago | (#29728949)

Marketing execs change all the time. Each one says "Hey! I have an idea...." The programmer who is asked to put in the cheat is not wildly enthusiastic about the idea, knows it won't work and does a quick and dirty hack.

SOC? (1)

toastar (573882) | more than 4 years ago | (#29728939)

Ya know, with the PCI bus going on-die for the i5s, it looks like this is just a first step. Next-gen chips will almost all have to have one core dedicated to graphics.

If that's the case.... (1)

ThePengwin (934031) | more than 4 years ago | (#29729113)

If that's the case:

1: Find a normal app.
2: Rename it to Crysis.exe or 3DMarkVantage.exe
3: ???
4: PROFIT.

Cheating is mandatory for corporate thugs... (1)

Bob_Who (926234) | more than 4 years ago | (#29729121)

But they're expected not to get caught. The truth will screw up the inflated stock values. Shareholders get rabid, which makes the lawyers have to work slightly longer than an hour. Just weed out the inferior ones who fail at lying and stealing and cheating like a professional capitalist, and send them off to Radio Shack in Moldova.