
Casting a Jaundiced Eye On AnTuTu Benchmark Claims Favoring Intel

timothy posted about a year ago | from the surely-there's-a-perfectly-innocent-explanation dept.


MojoKid writes "Recently, industry analysts came forward with the dubious claim that Intel's Clover Trail+ low-power processor for mobile devices had somehow seized a massive lead over ARM's products, though there were suspicious discrepancies in the popular AnTuTu benchmark used to showcase performance. It turns out that the situation is far shadier than initially thought. The benchmark version used in testing isn't just tilted to favor Intel; it seems to flat-out cheat to accomplish it. The new 3.3 version of AnTuTu was compiled using Intel's C++ Compiler, while GCC was used for the ARM variants. The Intel code was auto-vectorized; the ARM code wasn't, and there are no NEON instructions in the ARM version of the application. Granted, GCC isn't currently very good at auto-vectorization, but NEON is now standard on every Cortex-A9 and Cortex-A15 SoC, and these are the parts people will be benchmarking. But compiler optimizations are just the beginning. Apparently the Intel code deliberately breaks the benchmark's function: at a certain point, it runs a loop that's meant to be performed 32 times just once, then reports to the benchmark that the task completed successfully. Now, the optimization in question is part of ICC (the Intel C++ compiler), but was only added recently. It's not the kind of procedure you'd call by accident. AnTuTu has released an updated "new" version of the benchmark in which Intel performance drops back down 20-50%. Systems based on high-end ARM devices again win the benchmark overall, as they did previously."


Benchmarks, trustworthy? (1)

Anonymous Coward | about a year ago | (#44271371)

Of course not.

Make them do real work loads. With Monkeys.

Re: Benchmarks, trustworthy? (0)

Anonymous Coward | about a year ago | (#44271451)

Maybe if they run this story a few more times [slashdot.org] the results will change. Or maybe not.

Re:Benchmarks, trustworthy? (1)

fustakrakich (1673220) | about a year ago | (#44271741)

What became of the famous "gaussian blur benchmark"? What could be more universal? My personal favorite is hitting the "x^2" key on the calculator until it took more than two minutes to get a result.

Re:Benchmarks, trustworthy? (5, Informative)

hairyfeet (841228) | about a year ago | (#44271775)

Look up "Intel cripples compiler" and you'll see it's MUCH worse than merely tilting the benchmarks in favor of Intel. This bullshit means that ANY chip that doesn't return a CPUID vendor string of "GenuineIntel" gets royally fucked by ALL SOFTWARE that is compiled with the Intel compiler.

If you look up the above in Google you'll find a researcher who has done studies, and if that doesn't deserve antitrust scrutiny I don't know what does. He started looking into it when he found that his code would run faster on an old P4 than on a new AMD, and it is soooo nasty that if you take a VIA chip (the only chip that lets you change the CPUID) and change it from "CentaurHauls" to "GenuineIntel", it jumps nearly 30% in the benches!

So do NOT buy chips based on the benches; they are as rigged as the old "quack.exe", but this is a thousand times worse because ANY program compiled with ICC is crippled and WILL run slower on ANY non-Intel chip. So please, programmers: use GCC, or use AMD's compiler (which is based on GCC and doesn't favor one chip over another). And for those looking for a system, DO NOT buy Intel if you can help it, since you'd be supporting this kind of market-rigging bullshit. After seeing the results and seeing just how badly Intel is rigging things, I went exclusively AMD in my shop and even in my family, with NO regrets; at least this way I'm supporting a company that isn't bribing OEMs and rigging markets.

Seriously guys, don't take MY word for it; look it up. They have even rigged it in the past to push shittier chips over better ones. The guy doing the tests found that even though the early P4 was a slow-as-hell chip, when you ran a program compiled with ICC on both the P3 and the P4, surprise! The P4 would win. The same program compiled with GCC? The P3 won by over 30%.

Re:Benchmarks, trustworthy? (1)

Anonymous Coward | about a year ago | (#44271905)

Why would you use an Intel compiler on a non-Intel CPU?

Re:Benchmarks, trustworthy? (1)

Anonymous Coward | about a year ago | (#44272149)

The real question is why would you use an Intel compiler. At all. Period.

Re:Benchmarks, trustworthy? (0)

Anonymous Coward | about a year ago | (#44272171)

To get the best performance on an Intel CPU.

Re:Benchmarks, trustworthy? (1)

Anonymous Coward | about a year ago | (#44272191)

Because for Intel CPUs it is actually really good?

Re:Benchmarks, trustworthy? (0)

Anonymous Coward | about a year ago | (#44272277)

Yeah, so good they have to make the compiler cheat on any other manufacturer to slow them down. Says it all really.

Re:Benchmarks, trustworthy? (0)

Anonymous Coward | about a year ago | (#44272325)

There is nothing in Intel's compiler that slows down the competition. It's just not tuned for those CPUs. There is nothing shady about it. There is no good reason why Intel should waste resources tuning it for CPUs they don't support.

If you want the best performance on AMD CPUs then use AMD's compiler.

Re:Benchmarks, trustworthy? (0)

Anonymous Coward | about a year ago | (#44273819)

Yeah keep living the dream, shill

Re: Benchmarks, trustworthy? (0)

Anonymous Coward | about a year ago | (#44281887)

That doesn't explain how changing the CPUID on the VIA processor boosts performance. Sounds like they're specifically going out of their way, when they don't find the "GenuineIntel" string, to disable optimizations that other CPUs support just fine. That is pretty fucking shady.

Re:Benchmarks, trustworthy? (1)

tolkienfan (892463) | about a year ago | (#44283515)

Nope. If you remove the checks in the resulting compiled binary, the intel-optimized version of the code runs faster.
I'd call that shady.

Because it's x86 compatible. (0)

Anonymous Coward | about a year ago | (#44275527)

And when you build your closed source application, you compile it with only one compiler and if the market is 70% Intel, you use their compiler. And if that compiler nerfs AMD, then AMD is being unfairly nerfed because of the market imbalance. And THAT is an antitrust issue.

Re:Benchmarks, trustworthy? (1)

tolkienfan (892463) | about a year ago | (#44283477)

The intel compiler puts code in the compiled binary that does the checking. It doesn't matter whether you compile on Intel, the resulting binary is crippled and runs slower than necessary on AMD.
I worked with a guy who wrote a program to patch said binaries to remove the checking; this resulted in a nice speedup on all our boxes, since it was an AMD shop.
GP is absolutely right.

Re:Benchmarks, trustworthy? (0)

Anonymous Coward | about a year ago | (#44272013)

GCC's instruction scheduling for newer Intel CPUs is pretty bad, mostly because Intel won't release any information that would help improve the scheduler in GCC.

However, on the same Intel CPU, GCC produces slower code than ICC. It's simply not as good, which is why ICC is still popular.

Re:Benchmarks, trustworthy? (4, Insightful)

Macman408 (1308925) | about a year ago | (#44272533)

To be fair, any use of a benchmark to judge which system to buy is pretty silly. The best benchmark you can make is something that is identical to your intended workload; e.g. play a game or use an application on several systems, and see which feels better to you.

Taking some code written in a high-level language and compiling it for a platform is a great benchmark, if that's what you're going to be doing with the system. But you'd better be using the compiler you'll be using on the system. If you need a free compiler, you should test GCC on both. If you are considering buying Intel's compiler (it's not free, is it?), then add it in as another test to see if it's worth the extra outlay of cash. Intel puts a lot of work into making compilers very good on its systems, so if you're going to use the Intel compilers for Intel systems, it's perfectly valid to compare against using GCC on an ARM platform, if that's what you'd be using on ARM.

But if most of what you're running will be compiled in GCC for either platform, yes, you should absolutely test GCC on both.

That said, much of what's noted isn't necessarily intentional wrongdoing. For the example of breaking functionality, it's quite possible that the compiler made a perfectly valid optimization to get rid of 31 of the 32 loop iterations. One of my professors once told a story about how he wrote a benchmark, and upon compiling it, found that he was getting some unbelievably fast results. As in literally unbelievable - upon investigation, he discovered that the main loop of the benchmark had been completely optimized away, because the loop was producing no externally visible results. (As an example, if the loop were to do "add r3 = r2, r1" 32 times, a good compiler could certainly optimize that down to a single iteration of the loop; as long as r2 and r1 are unchanging, then you only need to do it once. Similarly, even if r1 and r2 are changing on each iteration, you need to use the result in r3 from each iteration of the loop, otherwise you could optimize it to only perform the final iteration, and the compiler could pre-compute the values that would be in r2 and r1 for that final iteration.)

So perhaps it's a bad benchmark - but I wouldn't default to calling it malicious, just that the benchmark isn't measuring what you might want it to measure. And quite frankly, most users aren't going to be doing anything that even vaguely resembles a benchmark anyway, so they really have little justification to make a buying decision based on them.

Re:Benchmarks, trustworthy? (1)

evilviper (135110) | about a year ago | (#44275383)

The best benchmark you can make is something that is identical to your intended workload; eg play a game or use an application on several systems, and see which feels better to you.

And that's exactly what benchmarks are supposed to approximate. If they aren't doing that, it's because they are bad benchmarks.

People can't go and get hands-on with every system out there, and even if they could, they couldn't just install all their own software on it and try it out for a few days... so we need some objective measurement to help narrow down the field, and give some general indication that X is fairly fast, but Y is pretty slow...

Re:Benchmarks, trustworthy? (0)

Anonymous Coward | about a year ago | (#44275659)

The optimization is valid; however, the specific compiler revision it appeared in and the timing of it, the fact that it looks like an optimization tuned to this specific benchmark with no real-world relevance, and the amount of media attention it got all suggest that categorizing it as "cheating" is the most correct conclusion.
However, the real result is that it completely discredits those who did not even notice this. In particular, the AnTuTu developers are either _completely_ clueless and didn't even notice a minor compiler revision update creating a 20% boost, or they _intentionally_ spread a broken benchmark favouring Intel.
Sorry, but I don't know why anyone would use a benchmark by guys who are either incompetent on an epic scale or fraudsters; that is just inexcusable.

Re:Benchmarks, trustworthy? (2)

hairyfeet (841228) | about a year ago | (#44275939)

I'm sorry dude, but while you started out well, you quickly ran into bullshit. Look up what I said to look up, "Intel cripples compiler", and there you WILL see the smoking gun: the Pentium 3. If your argument were valid, that it's JUST that Intel knows their own chips well, then the P3 wouldn't get penalized by ICC... but it does. And frankly, ANY compiler that uses the CPUID vendor string to judge what a chip can do instead of the feature flags? Bullshit. But switch from "CentaurHauls" to "GenuineIntel" and tada! Your chip will "magically" score 30% HIGHER than it did before the switch, the ONLY thing changed being the CPUID.

The guy who did the tests tore down the code to see what it was doing, and what Intel has done with their cripple compiler is this: ANY chip they don't want pushed, including their own P3, that doesn't report a P4-or-better CPUID gets thrown into x87 mode. That's right, NO SSE, even though BOTH Intel and AMD have had SSE for over a decade now, and ANY code could just check the CPU feature flags and know this. But this isn't about using what the chip has; it's about making sure Intel chips score higher no matter what.

So I'm sorry dude, but if that isn't grounds for antitrust then nothing is. Intel ignores CPU feature flags on ALL chips but their own and instead uses the CPUID vendor string to make sure that any non-Intel chip gets thrown into slow mode so they can win. Again, it's quack.exe all over again, only worse, because plenty of companies compile with ICC and they are helping Intel rig the market against competition.

Re:Benchmarks, trustworthy? (1)

Macman408 (1308925) | about a year ago | (#44278147)

Let me start this by saying I'm no fan of Intel - quite frankly, many of their business practices are a little suspect, and they've had some downright nasty ones before (like selling a bundle of CPU + Northbridge for less than the CPU alone, and then saying it violated the agreement if the OEM buyer decided to toss the Northbridge in the garbage in lieu of a different manufacturer's chipset.). But I don't see a slam-dunk case for antitrust in this alone.

The first reason is that there may actually be technical reasons for doing this; both Intel and other CPUs all have a variety of errata, and they generally do not match up. It may be desirable for Intel to write their compiler to take their own SSE errata into account, while using simpler (X87) support for other vendors to avoid dealing with their errata. Just an example, but there could be some totally legitimate reasons for doing what they do (while also, of course, helping them out by making everything else slower).

A second reason is that just writing a shitty compiler that only works well on your own products is not antitrust by itself, no matter how easy adding support for other products would be. I think you might have a case for antitrust if their compiler had a strong position over the market and there wasn't other good competition out there. As far as I know (which is not very far at all), that isn't the case - GCC is widely used. Companies that choose to purchase ICC may choose it because they only expect to use Intel CPUs, and are willing to pay extra for better performance on them.

Finally, is there any reason why one could not use both compilers? I'm not exactly a compilation infrastructure expert, but I would expect that it would be possible to compile the same code with different compilers and put it into the same binary. I believe this is what GCC was doing (albeit only a single compiler, but still two distinct assembly code paths) when I wrote PowerPC code and asked GCC to optimize it for both PPC 7400 and PPC 950. (I could be wrong - maybe it's only using optimizations available for both - but in any case, this shouldn't be an intractable problem.)

Now if Intel uses their CPU stronghold to sell their compiler, AND they get a stronghold in the compiler business, AND they use their compiler to ensure that their CPU stronghold is maintained/strengthened, I think that'd be a more reasonable case for calling out the antitrust goons. For now though, I think they're still missing that middle link.

Re:Benchmarks, trustworthy? (1)

hairyfeet (841228) | about a year ago | (#44279631)

Your final sentence is the case, dude: Intel gives big discounts to major software companies, hence why nearly every benchmark uses ICC. And again the smoking gun is the P3. If what you were saying were true then the P3 would NOT be penalized, since they know their own chips, but it is. Again, take the same program and run it on both a P3 and a P4 of the same speed: with ICC the P4 will get a 30% speed boost while the P3 gets a boat anchor tied to it. The same code with GCC? The P3 will win by 30%.

And as others pointed out so many times with MSFT, when you are the market leader the rules for you are different than for anybody else. The fact that the DoJ let Intel buy their way out by handing AMD 1.2 BILLION dollars frankly was a travesty of justice and just shows how worthless the DoJ is. Maybe the EU still has some fangs, but the DoJ is nothing but a bad joke.

Re:Benchmarks, trustworthy? (1)

godrik (1287354) | about a year ago | (#44272589)

Are you talking about the compiler that was checking the processor ID instead of the capabilities of the processor? That's an old story that has been fixed a long time ago.

In all fairness, compiler optimizations are close to black magic. The only reasonable way to know what is best is to test multiple compilers and see what comes out. Depending on the code, some compiler will be better than some other one. Even on Intel platforms, depending on the benchmark, sometimes GCC performs much better, sometimes ICC performs better, sometimes PGI performs better. If you care that much about the performance of an application, you've got to bench it on the target hardware with different compilers and optimization levels.

Re:Benchmarks, trustworthy? (3, Informative)

OneAhead (1495535) | about a year ago | (#44274191)

If by fixed you mean "Intel put a disclaimer [wikipedia.org] on its compiler saying [ICC] may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors", then yes, it is fixed. Otherwise, not so much. I happen to have tested ICC performance against other compilers not too long ago, and it refuses to generate AVX instructions that are reachable when running on an AMD CPU. The -xO flag [wikipedia.org] didn't help; all it did was turn off AVX altogether. Adding flags that prevent it from generating other execution paths than the AVX one didn't help either: when started, the binary would just print a clean (but false) error message that the processor doesn't support its instructions, and exit immediately. From this, I concluded that after all these years, they still check for "GenuineIntel" instead of looking at the actual capability flags. In the end, we found absolutely no way to make ICC generate AVX instructions that would be executed on an AMD processor.

Re:Benchmarks, trustworthy? (1)

hairyfeet (841228) | about a year ago | (#44275977)

Have you tried the AMD compiler yet? It's free, and according to the guy who originally found the "Intel cripple code" it does NOT favor any one chip over another but actually checks the feature flags like a compiler SHOULD, not this CPUID market-rigging bullshit. While it was originally based on GCC, they have added a bunch of optimizations and updates to make it faster and support the latest and greatest.

Here is the link [amd.com], at least I think; it's been awhile since I went looking for dev tools, but as you can see it supports C, C++, and Fortran, and unlike Intel's it doesn't use the CPUID vendor string or favor one chip over another. Maybe you could run a couple of tests and post the results; that would make a good /. follow-up article?

Re:Benchmarks, trustworthy? (1)

OneAhead (1495535) | about a year ago | (#44279875)

Our benchmarks were done with a scientific workload that is not even representative of scientific workloads in general, so I think they will be all but useless to general users. That said, we did try Open64 (not sure if it's fair to call it "the AMD compiler"), and it came out pretty good. But... so did GCC 4.7.2. To my surprise, it was way faster than the GCC 4.5 we used before, and scored virtually on par with ICC, Open64 and Portland (aka PGI). One important thing to note is that we gave GCC the -ffast-math flag, which makes a huge difference for some floating-point-heavy codes. I know the GCC documentation has a big fat warning against doing so, but ICC and PGI perform a similar set of optimizations by default, so not setting that flag would not be a fair comparison. And the numerical results were good.

Our observation that we got very similar performance with all four compilers may not transfer well to other software, though. The programs I tested are written in C and Fortran, which are relatively low-level languages without garbage collection, and scientific software is already fairly optimized at the code level, so important differences between the compilers may surface with programs written in higher-level languages with less manual optimization.

Re:Benchmarks, trustworthy? (1)

hairyfeet (841228) | about a year ago | (#44279683)

I have a question; not a programmer, so I was wondering: what advantage do you get by using AVX over SSE? Because I took a quick look at AVX and it's only supported on Bulldozer and Sandy Bridge, which would be a very VERY small portion of the chips out there. It seems to me if you wanted the widest possible support you'd use the SSE flags, since SSE4 has been around since Phenom I and Core 2.

Re:Benchmarks, trustworthy? (2)

Runaway1956 (1322357) | about a year ago | (#44276223)

I've been all AMD almost forever, for this reason among others.

http://forums.pcper.com/showthread.php?470102-Intel-s-compiler-cripples-code-on-AMD-and-VIA-chips [pcper.com] 2010

http://www.theregister.co.uk/2009/12/16/intel_ftc/ [theregister.co.uk] 2009

http://techreport.com/news/8547/does-intel-compiler-cripple-amd-performance [techreport.com] 2005

I found those three on the first page of my search results, and quit looking. Different search terms and a more determined search will find hits as old as about 1999, maybe even older. Hard to remember, but I think I first became aware of compiler cheats by Intel around 2000 or 2001. Prior to that, I naively thought that a compiler was a compiler.

Re:Benchmarks, trustworthy? (1)

hairyfeet (841228) | about a year ago | (#44293257)

Glad to see somebody else put their money where their mouth is and support market competition over market rigging. I was a big Intel chip user until the bribery and ICC scandals came out and since then I've not bought a single Intel chip at the shop and even my family and I are 100% AMD, 4 desktops, a laptop, and a notebook and they ALL run great.

The thing most folks just don't seem to realize is how much better the bang for the buck is on AMD: you can pick up a 1035T or 1045T for around $100, quads for less than $80, and I've been picking up Athlon triples for less than $55 a pop and seeing more than 70% unlock rates; you just can't beat that. Even when you can't unlock, the BFTB you get out of these chips is just nuts. My youngest is blasting through Borderlands II as I type this on a 3.3GHz Athlon triple and it just flies; the whole system with Win 7 HP, 4GB of RAM, a 640GB HDD and an HD4850? $350 after rebates.

So be sure to do as I do and point out when anybody brings up benchmarks that Intel is rigging them and point them to places like Tigerdirect and show them how truly insanely cheap they can get a system [tigerdirect.com] just by going AMD. Oh and FYI but if you know somebody that just surfs, does office work, or needs a REAL cheap HTPC? Look at the AMD E350 boards at Amazon and Tiger, you can get 'em as low as $75 [amazon.com] and with a PCI to IDE card and a 4GB RAM stick you can have a REALLY cheap upgrade for an older system or a great start for an HTPC, although I use the version with a PCI-E X16 slot on the HTPC builds. I have been using these to replace the aging P4s in SMBs and it gives you a pretty nice upgrade, dual cores and a GPU that does 1080P and for the icing on the cake the whole system uses less power under load than the P4 does at idle, just 19w for the whole board.

I was impressed enough with the performance that I sold my full-size laptop for an E350 EEE netbook, and after 3 years it still gets over 4 hours on the battery and does 1080P over HDMI, so when I'm not using it as a portable it can do double duty as an HTPC. Oh, and unlike those Atom and Celeron based netbooks, my baby has a full 8GB of RAM, which not only gives me a full GB for the graphics but, thanks to Superfetch, once I boot up ALL my applications are preloaded into memory, allowing the HDD to stay parked and save power. So if you haven't looked at 'em, give one a spin; they are pretty damned sweet.

Re:Benchmarks, trustworthy? (1)

N3tRunner (164483) | about a year ago | (#44276847)

This is why you should use more than one benchmark when testing newly released hardware, especially if you're going to write an article on your findings.

One Benchmark Bad, Multiple Benchmarks Good (0)

Anonymous Coward | about a year ago | (#44275399)

One benchmark is not trustworthy, but a range of benchmarks from a number of different sources can be quite a reliable indicator, especially if the various benchmark providers are in competition.

For example this review uses multiple benchmarks:

http://www.gsmarena.com/samsung_galaxy_tab_3_101-review-948p4.php [gsmarena.com]

The results are reasonably consistent across the CPU benchmarks, with various devices typically appearing in the same part of the list. The obvious outlier is the old version of AnTuTu.

But still... (2, Insightful)

sunking2 (521698) | about a year ago | (#44271391)

It is the suite of tools, not just the processor. If Intel offers a better processor/compiler package than is available for ARM, why shouldn't they tout it? I'm not saying they are presenting it in the correct way, but I do think they have a valid point they want to make: that with Intel you get more than a CPU, you get a heck of a lot of tool expertise. And for some people that is worth something.

Re:But still... (3, Insightful)

Anonymous Coward | about a year ago | (#44271455)

If you use ICC instead of GCC for x86, then you should use the ARMCC compiler or Keil or one of the others for ARM.

Yet it does not make this "benchmark" honest. (5, Insightful)

boorack (1345877) | about a year ago | (#44271457)

The compiler was only one of many skews in this "honest" benchmark, aside from deliberately "fixing" the benchmark code for Intel and deliberately breaking the ARM benchmark by disabling NEON. In my opinion they should run identical code, trying to maximize its performance on both platforms, and in the case of Intel use both compilers and post both results. This would lead potential customers to correct conclusions, as opposed to the bunch of lies and misinterpretations AnTuTu actually posted.

Re:Yet it does not make this "benchmark" honest. (1)

godrik (1287354) | about a year ago | (#44272637)

Well, that is stupid. You NEVER run identical code on different architectures, especially when they are not even binary compatible. You almost always optimize the code for a given architecture in the portions that are particularly important. Querying cache sizes and checking the number of hardware contexts are common things.

For instance, libcairo has a NEON-specialized code path. ffmpeg contains code paths for pretty much every single architecture out there.

Re:Yet it does not make this "benchmark" honest. (2)

dryeo (100693) | about a year ago | (#44274349)

You run configure (with various options, such as --enable-gpl for FFmpeg) && make for each platform. For benchmarking, I guess you could do make check for Cairo, but that is not a very good test, as make check needs exactly the right versions of Ghostscript, various fonts, and I don't know what else. For FFmpeg you could run make fate after downloading the samples and time it. This would be a fairly good C benchmark for various CPUs because, as you stated, there are code paths for a hell of a lot of CPUs. The OS and libc are still going to affect the results. Examples of such results, without timings, are at fate.ffmpeg.org (with similar ones at fate.libav.org).

Re:Yet it does not make this "benchmark" honest. (1)

gl4ss (559668) | about a year ago | (#44275465)

He probably meant identical in the sense that the input->output mapping is identical, which is what benching two systems should be about anyway.

Re:But still... (1)

mechtech256 (2617089) | about a year ago | (#44271465)

That's true, but in this case it looks like it is simply a broken version of a benchmark that Intel latched on to for marketing purposes: "At a certain point, it runs a loop that's meant to be performed 32x just once, then reports to the benchmark that the task completed successfully."

Re:But still... (0)

Anonymous Coward | about a year ago | (#44271695)

Actually no, it does 32 operations with only one instruction, but it still does all the work and gives the same result.

Re:But still... (0)

Anonymous Coward | about a year ago | (#44275343)

Actually no, it does 32 operations with only one instruction, but it still does all the work and gives the same result.

If the benchmark allows this kind of optimisation then it is a poorly-written benchmark. If you write a loop to test bit-manipulation operations then there should be no way to avoid doing those operations.

Re:But still... (0)

Anonymous Coward | about a year ago | (#44275449)

What a stupid comment. The whole point of benchmarks is to optimize them. If it's an important benchmark that represents real-world usage then it's common sense to make a fast implementation in hardware. And then it gets used and the benchmark is "broken"? Please.

Re:But still... (1)

MightyMartian (840721) | about a year ago | (#44271479)

They're rigging results by using parameters and optimizations useful only for the benchmarks in question. In other words, unless the only thing you use processors for is benchmarks, you have learned absolutely nothing about how this processor will work in any real world application.

Re:But still... (0)

Anonymous Coward | about a year ago | (#44271601)

The benchmark is a completely pointless one anyway. It doesn't test anything that is necessary.

(And optimising something to work really well in real life with great battery life would make it totally awful on this benchmark. Only using what it needs etc etc).

Re:But still... (1)

K. S. Kyosuke (729550) | about a year ago | (#44271633)

It is the suite of tools, not just the processor. If intel offers a better processor/compiler package than is available for arm why shouldn't they tout it?

Because you'll be stuck with the architecture for quite some time, while the SW tools may evolve faster than you think (not to mention the fact that there's always the profiler, compiler intrinsics, and inline assembly if you need top performance right here, right now, for a particular piece of code; then only your brain and the piece of silicon come into the equation, not some silly compilers).

Re:But still... (0)

Anonymous Coward | about a year ago | (#44271677)

If you just write a compiler to skew results in a known benchmark by merely pretending to complete the tasks set forth by it, you aren't making any valid points.

It seems that every company that gets gauged by a popular benchmark tries to cheat on it at some point. The trick is to not get caught.

Re:But still... (0)

Anonymous Coward | about a year ago | (#44271747)

No, actually the program does all the work it is supposed to do, and has exactly the same result as the ARM version, but the compiler is smart enough to detect that 32 operations working on 1 bit can be replaced with 1 operation working on all 32 bits at the same time.

The ARM version of the compiler is not that smart ...

Re:But still... (1)

jthill (303417) | about a year ago | (#44271797)

If you want to know why they shouldn't present honest results, it looks like you're going to have to ask them, because it seems they didn't. Until they explain why, the usual reason people put their thumb on the scale is that they know they can't win honestly.

Re:But still... (0)

Anonymous Coward | about a year ago | (#44271871)

With Intel you also get a compiler that makes your code faster by not running it.

Re:But still... (2)

gnasher719 (869701) | about a year ago | (#44271917)

It is the suite of tools, not just the processor. If intel offers a better processor/compiler package than is available for arm why shouldn't they tout it? I'm not saying they are presenting it in the correct way, but I do think they have a valid point they want to make. That with Intel you get more than a CPU, you get a heck of a lot of tool expertise. And for some people that is worth something.

Absolutely correct: you should judge the combination of processor + commonly used compiler. For example, if Apple built an iPad with an Intel processor, then any iPad app would be built with Clang for ARMv7, Clang for ARMv7s, and Clang for x86_64, and you could directly compare all three versions.

However, you must be careful. You need to check real-life code. If you run identical code 32 times and an optimising compiler figures out it only needs to run once, that's not real-life. If this is what your benchmark does, then your benchmark runs 32 times faster, but nobody cares how fast benchmarks run. People care about real applications, and the benchmark now fails at its purpose, which is to give an indication of how real applications will behave.

Re:But still... (0)

Anonymous Coward | about a year ago | (#44276297)

nBench is a laughable benchmark. There's a great likelihood that the Intel compiler is not cheating at all with respect to C99 or C11 semantics; it just works so well thanks to good compiler algorithms and raw power in static analysis. The benchmark source code modification is the fishier part.

If I were distributing benchmarking software, I'd choose something that's actually hard to fake and not a complete relic. Why doesn't AnTuTu just ditch those benchmarks that are close to pointless and replace them with real stuff?

Re:But still... (1)

bcmm (768152) | about a year ago | (#44277685)

But they aren't using the other compiler properly - their results effectively rely on the lie that they can do SIMD and ARM can not. And even without the actual dishonesty, it's a synthetic benchmark selected specially to show off their compiler/processor's strong points.

Linpack bullshitting (0)

Anonymous Coward | about a year ago | (#44271429)

Code to the test. Seen it all before.

Re:Linpack bullshitting (2)

Trepidity (597) | about a year ago | (#44271611)

At least Linpack performs actual linear algebra, so coding to that particular test will help some people with real workloads (i.e. scientific software that uses Linpack). It's definitely not representative of everyone's workload, though.

Better link (0)

Anonymous Coward | about a year ago | (#44271433)

The actual forum post by Exophase. [anandtech.com]

It's a good analysis. The code in question looks really synthetic (set or clear a large range of bits one bit at a time?), so even claims that this compiler efficiency will carry over to real-world performance fall flat on their face. And that's disregarding the fact that most Android developers will use the GCC compiler provided with the NDK.

Duplicate? (1)

Anonymous Coward | about a year ago | (#44271441)

http://hardware.slashdot.org/story/13/07/12/1558209/new-analysis-casts-doubt-on-intels-smartphone-performance-vs-arm-devices?sdsrc=rel

Re:Duplicate? (2)

Molochi (555357) | about a year ago | (#44271481)

Just the controversy. The news, buried at the bottom of the article, is that AnTuTu has a newer version that drops Intel performance back to where it was before.

To all the marketing... (0)

Anonymous Coward | about a year ago | (#44271469)

To all the marketing idiots out there who think this sort of thing will go unnoticed, think again. We engineers will see through the charade. We will do this because we are smarter than you. That is why we are engineers, and you are marketing idiots.

Fixed, apparently (3, Informative)

edxwelch (600979) | about a year ago | (#44271493)

In fairness to AnTuTu they released a new version which tries to rectify the problem:
http://www.eetimes.com/author.asp?section_id=36&doc_id=1318894& [eetimes.com]

Re:Fixed, apparently (1)

gl4ss (559668) | about a year ago | (#44271615)

yeah, but does the new version remove the optimizations from the Intel compile, or (the right way) add them to the ARM version?

seriously though.. who gives a fuck. the tests should be done with the usual android toolchain... it's not like anyone is going to use _that_ intel processor for scientific computing.

Re:Fixed, apparently (0)

Anonymous Coward | about a year ago | (#44271679)

should be done with the usual android

No!

Re:Fixed, apparently (0)

Anonymous Coward | about a year ago | (#44271691)

The optimizations were not added by AnTuTu but by Intel's ICC.

Btw, ICC has been found to hamper non-Intel x86 CPUs unfairly in the past...

Re:Fixed, apparently (0)

Anonymous Coward | about a year ago | (#44275693)

Fairness? They didn't even notice (or intentionally missed) the problem themselves. And they still seem to use ICC, so the same thing can happen again. Assuming they _really_ are innocent, they didn't even have the balls to sanction Intel for making ICC cheat, e.g. by using gcc on x86 (which would be the only sane thing anyway, since it's supposed to be an Android benchmark and the Android NDK does not support ICC).

AS ABRASH HAS STATED MANY TIMES !! (-1)

Anonymous Coward | about a year ago | (#44271555)

Cuckoo Eggs Invite Soviet Spying !!

No, wait !!

Optimize only if it matters !!

I.e., don't optimize if you can cheat !!

Speed is one thing that is not of prime importance on a mobile. Power use, or lack of use, is !! Feel free to mod this +5, insidious !!

Anyone remember the days... (2)

gTsiros (205624) | about a year ago | (#44271649)

...where companies used to rig benchmarks?

Oh right, we're still not past them.

AND WE'LL NEVER BE!

Always use real world applications, in actual, real usage. Never benchmarks.

Usefull (1)

Tim12s (209786) | about a year ago | (#44271811)

I know some ignorant people that will take these benchmarks as gospel in their righteous views.

This is what Intel's millions of PR spend achieve (-1)

Anonymous Coward | about a year ago | (#44271835)

Every technical publication and site on the Internet is in Intel and Microsoft's pocket. These mock-independent public facing entities CANNOT exist without massive amounts of cash from their sponsors. In the worst cases, their reviews are WRITTEN by Intel- in slightly less worse cases their reviews use benchmark software exclusively provided by Intel.

Intel already outspends AMD by hundreds to one in R+D to achieve what is in most cases a tiny advantage over their main competitor. Historically, Intel has lagged seriously behind AMD in technology on two occasions despite their obscene spending. Intel can only continue this strategy while they maintain their effective monopoly (AMD sales are tiny in comparison) AND the PC marketplace remains healthy. Unfortunately for Intel, the IT world is seeing seismic changes.

The INTEL TAX alone means Intel can never (NEVER) compete with ARM. I hope many of you here are aware of the insanely good quality and amazing low cost of ARM SoC parts made by the Chinese (Allwinner, Rockchip, Amlogic etc) in the last few years. These parts would be good enough for 99.9% of all laptop users if only proper Windows was allowed to run on ARM commercially. Intel needs an average selling price (ASP) 4X and more over the Chinese to simply pay its annual recurring costs.

But it gets much worse for Intel. Intel bet the farm on a new chip making concept called FinFET. While tech sites a few years back took Intel money to lie and state this was an Intel invention, FinFET was actually an industry-wide consensus about where future chip design was going. Today, largely as a result of Intel's TWO disasters with building families of chips with FinFET, confidence in this technology is at an all time low, and Intel's competitors are currently investing in FD-SOI, an amazing piece of lateral thinking that either boosts the speed OR power efficiency of transistors without the complete re-tooling FinFET requires.

So, Intel currently has a useless hand, and can ONLY bluff to try to maintain confidence in the company. However, what puzzles industry watchers, and is turning even tame tech sites against Intel, is that Intel is ignoring the traditional desktop enthusiast market, and focusing on a completely hopeless battle with ARM. By doing so, Intel is actually convincing the world that tablets are the future, accelerating the decline of the PC.

ARM has a business model that fools companies like Intel. On paper, ARM seems like a push-over (the company, not the tech). However, ARM punches FAR harder than Intel. Every time Intel tries a trick like this, the blow Intel receives in return is much more damaging. Today, many are asking "exactly where do Intel excel?"

Low power = ARM
Exciting = ARM
Cheap = ARM
High performance computing = GPU from AMD or Nvidia.
Future server parts = ARM

Current decent Microsoft Windows platform = Intel

Intel doesn't want its fate tied exclusively to the whims of failing Microsoft, but all things considered, it has no choice. The ONLY phones and tablets built with Intel technology will be by companies PAID by Intel or Microsoft to do so. When free will and commercial consideration is involved, only ARM chips will be used. And do not forget, Apple has effectively bought TSMC (the giant Taiwan chip company) to finally drop Intel from all its products.

Re: This is what Intel's millions of PR spend achi (1)

Miamicanes (730264) | about a year ago | (#44272455)

>high-performance computing

winner, no contest: Intel's best CPU, plus the best GPU money can buy. Why hobble a kick-ass GPU with a second-rate CPU?

> excitement

I don't know about YOU, but I get more excited by maximum performance than by power consumption, cost, or marketing. Winner: Intel

I don't wish AMD ill. I'd *much* rather see AMD pulling ahead of Intel & forcing both into a deathmatch for better performance. Let's not forget that AMD happily showed us that they can be as expensive & mean as Intel given half a chance. I *want* to see AMD & Intel trading the top spot every 4-8 months, like in the old days :-)

The last time I checked, neither Windows NOR Linux truly has realtime 2560x1600 120hz alpha-blended raytracing or consequence-free desktop eyecandy yet. There's still plenty of room for improvement, as long as we can slow down Microsoft & Ubuntu's mad quest to roll back performance to the nasty level of tablets and netbooks. I don't want a 1mm pseudo-notebook with virtual keyboard & cpu that spends most of its time at 700MHz... I want a 6" thick lunchbox with 3 screens (1920x1200 main, flanked by a pair of hinged 960x1200 panels), mechanical-switch keyboard (Cherry Green, or ideally buckling spring), and the fastest cpu available. With optional 24,000MAh battery & Thunderbolt-connected pseudo-netbook remote LCD+keyboard+Trackpoint, so that when I fly I can put the lunchbox under the seat & still use it ;-)

Re: This is what Intel's millions of PR spend achi (1)

Bert64 (520050) | about a year ago | (#44275795)

>high-performance computing

winner, no contest: Intel's best CPU, plus the best GPU money can buy. Why hobble a kick-ass GPU with a second-rate CPU?

Because power consumption is very important for HPC...
It might not matter so much for a single user's desktop, but when you scale to thousands of processors the extra power consumption can cost serious amounts of money, both in the power it consumes and in the extra power consumed keeping it cooled.

If your HPC workload primarily uses the GPU, then the CPU may even be sitting idle most of the time, your CPU only needs to be fast enough to keep the GPU fed with data.

Also for HPC, throughput is important... Current CPUs are much faster than the memory to which they're connected, so while some workloads can fit in the cache and run very fast, those which result in lots of memory access can be considerably slower. That's why several of the IBM top500 supercomputers use relatively slow embedded PPC CPUs: the CPU won't waste lots of its time and energy waiting for memory to catch up.

Didn't we already learn this? (0)

Anonymous Coward | about a year ago | (#44271945)

Never, never, NEVER EVER, trust intel with any kind of benchmark or pretty much anything to do with their compiler.

But didn't we already learn this lesson the last time they tried screwing us with their "special" compiler optimizations? That prompted me to quit using intel compiler and tools even on intel CPUs. I just don't need the headaches. Fool me once...fool me twice...etc.

Time for ARM to invest in GCC (2, Insightful)

citizenr (871508) | about a year ago | (#44271965)

ARM looks like a sore loser here.

>GCC isn't currently very good at auto-vectorization, but NEON is now standard on every Cortex-A9 and Cortex-A15 SoC

So the conclusion is to remove intel optimizations instead of improving ARM ones?

Re:Time for ARM to invest in GCC (1)

imgod2u (812837) | about a year ago | (#44272027)

Well, no. There are better compilers out there for ARM. Keil for one. More importantly though is the fact that real code that cares about performance won't just write a loop and let the compiler take care of it; they'll use optimized libraries (which both Intel and ARM provide).

Compiler features like auto-vectorization are neat and do improve spaghetti code performance somewhat but anyone really concerned with performance will take Intel's optimized libraries over them. So if we're going to compare performance that the end-user cares about, we'd use a benchmark that not only mimicked the functions we'd see in actual software but the libraries they use.

Re:Time for ARM to invest in GCC (0)

Anonymous Coward | about a year ago | (#44272153)

NEON isn't even standard on all A9 SoCs. Nvidia Tegra 2 devices don't have it.

Re:Time for ARM to invest in GCC (1)

RemAngel (2982891) | about a year ago | (#44272381)

ARM will invest very little in GCC now, because of GPLv3. The question should be why the benchmark used a generic compiler for ARM (gcc) versus a vendor-specific compiler for Intel (icc). Why was the ARM-produced compiler not used?

Re:Time for ARM to invest in GCC (0)

Anonymous Coward | about a year ago | (#44274131)

ARM will invest very little in GCC now, because of GPLv3.

That's utter bullshit. There is nothing special about GPLv3 vs. GPLv2 if all you care about is making a decent compiler for your line of chips.

Or are you saying that somehow the compiler optimizations are "sekrit patents" that cannot be implemented in GCC because everyone will steal them for their own processors??

Re:Time for ARM to invest in GCC (2)

RemAngel (2982891) | about a year ago | (#44278279)

ARM's business is IP, so they care about GPLv3 and the constraints it puts them under. Alternative open-source compilers such as LLVM, with less onerous licensing, are therefore more likely to be contributed to.

Re:Time for ARM to invest in GCC (0)

Anonymous Coward | about a year ago | (#44278297)

ARM sucks. OOOOOHHH it's as fast and as powerful as a 486 but uses half the power. Go fuck yourself ARM. Nobody wants a poorly suited processor just because it's "low power". If it can't get the job done, low power or not, it still sucks. Just because dopamine addicts like to play simple minded games with your shit doesn't make it great, useful, or even wanted.

Re:Time for ARM to invest in GCC (0)

Anonymous Coward | about a year ago | (#44281991)

Nobody wants a poorly suited processor just because it's "low power".

Actually, a lot of people want them. You almost certainly own a significant number of them yourself...

Re:Time for ARM to invest in GCC (0)

Anonymous Coward | about a year ago | (#44283733)

This. ARM fanboys come here to badmouth Intel for providing tools to improve their performance. If the benchmark missed this all along and now tries to correct it because they "forgot", perhaps that hints that the benchmarking is either wrong or simply useless.

NEON? (0)

Anonymous Coward | about a year ago | (#44272771)

Come on, it's 64-bit and comparable to MMX, which was introduced in the second series of the Pentium 1. That won't help you much.

Re:NEON? (0)

Anonymous Coward | about a year ago | (#44275735)

1) NEON is 128-bit (though you can use the registers as 64-bit as well if you want). And 64 bits is still 8 bytes, so with the right use-case you could still get an 8x speedup.
2) MMX was seriously broken due to not being compatible with floating-point operations or even the function call ABI. You had to switch modes before any floating-point operation, and that would take _lots_ of cycles.
3) MMX was _very_ useful and made a _huge_ difference despite those issues. Even 3DNow! did, despite being more limited in some ways.


fix it then... (1)

SuperDre (982372) | about a year ago | (#44279257)

Instead of pointing a finger and telling people about it, why not fix it and SHOW people the actual numbers with optimisations for both platforms in place, and without those optimisations on both platforms... Meanwhile, you can also say that Intel's C++ compiler is simply better than GCC, as apparently Intel's compiler already has all optimisations ON by default and GCC doesn't....