
Intel Challenges ARM On Power Consumption... And Ties

Unknown Lamer posted about 2 years ago | from the not-too-shabby dept.


GhostX9 writes "Tom's Hardware just published a detailed look at the Intel Atom Z2760 in the Acer Iconia W510 and compared it to the NVIDIA Tegra 3 in the Microsoft Surface. They demonstrate how the full Windows 8 tablet outperforms the Windows RT machine in power consumption, breaking consumption down by CPU, GPU, memory controller and display. AnandTech is reporting similar findings, but covers only CPU and GPU utilization." Despite repeated claims that x86 is beating ARM here, they look neck and neck, assuming you can make a meaningful comparison at all.


I can't believe they missed this (0)

Anonymous Coward | about 2 years ago | (#42385327)

What about floating point checks and comparisons?

Re:I can't believe they missed this (1)

cc.Scotty (94020) | about 2 years ago | (#42385345)

An understandable oversight since it's Christmas Eve.

Re:I can't believe they missed this (-1)

Anonymous Coward | about 2 years ago | (#42385421)

Ssshhhh! It's Christmas in Australia, you fool! You don't want to bring down the wrath of the Australians for not including them in your comment!!

Re:I can't believe they missed this (2)

cc.Scotty (94020) | about 2 years ago | (#42385663)

My God, I'm an insensitive clod... sorry mate.

WINDOWS 8 STILL SUCKS (-1)

Anonymous Coward | about 2 years ago | (#42385567)

But why would anyone care, if x86 means you're on Windows 8, which still sucks?

Neck AND Neck (5, Informative)

Anonymous Coward | about 2 years ago | (#42385351)

Despite repeated claims that x86 is beating ARM here, they look neck in neck.

It's neck and neck [thefreedictionary.com] .

Re:Neck AND Neck (0)

Anonymous Coward | about 2 years ago | (#42386091)

It's neck and neck.

Aussies (and some Americans) pronounce that "neck ain neck". You can't reasonably expect foreigners, or even locals, to get it right, especially given how "neck to neck" is common in North America, which adds to the confusion.

Re:Neck AND Neck (1)

Anonymous Coward | about 2 years ago | (#42386259)

Aussies do not pronounce it "neck ain neck".
It's much closer to say it's pronounced neck 'n' neck (like rock 'n' roll).

they look neck in neck (3, Informative)

magarity (164372) | about 2 years ago | (#42385353)

It's "neck and neck" as in a pair of horses very close together at the finish line.

Sigh

Re:they look neck in neck (2, Interesting)

ohnocitizen (1951674) | about 2 years ago | (#42385379)

Neck in Neck seems like a more internet appropriate version. As in a series of images tucked away in a dark corner of imgur, briefly referenced on reddit before being removed by admins. Neck in Neck - "A filthy, gritty internet version of Neck and Neck."

Re:they look neck in neck (0)

Anonymous Coward | about 2 years ago | (#42385453)

"neck 'n neck" is what he was looking for.

Re:they look neck in neck (0)

Anonymous Coward | about 2 years ago | (#42385515)

I don't think they're anywhere near the finish line, and I don't see how they're anything like horses.

I think they're more like neck in neck, you know, the sexual position.

Re:they look neck in neck (2)

ArchieBunker (132337) | about 2 years ago | (#42385645)

The author must have written the summary while standing online.

Re:they look neck in neck (0)

Anonymous Coward | about 2 years ago | (#42386171)

mod funny

Re:they look neck in neck (1)

MichaelSmith (789609) | about 2 years ago | (#42385721)

neck 'n' neck

'neck in neck'? (3, Informative)

drainbramage (588291) | about 2 years ago | (#42385355)

Oh for crying out loud: Neck and Neck.
Often used when describing two racers that are nearly even in position.

3 out of the first 4 comments are corrective geeks (-1)

Anonymous Coward | about 2 years ago | (#42385385)

and the 4th is an apologist geek.

Neck and Neck is advantage Intel (4, Interesting)

Anonymous Coward | about 2 years ago | (#42385391)

If two processors are neck and neck in power consumption and one of them is x86, it means x86 is ahead. It's got better clock speeds and it's got more software going for it than ARM. Yes, we have a lot of Android apps, but I would rather have my Windows applications than those "apps" and their private internet. Unless the neck-and-neck result is for a processor Intel does not produce any more, it's clearly advantage Intel.

Re:Neck and Neck is advantage Intel (3, Interesting)

Anonymous Coward | about 2 years ago | (#42385445)

Two processors are neck and neck. One costs $120 and the other costs $20.

Which one has a brighter future?

Especially now, since people don't need to run all sorts of software. They just need Android.

Re:Neck and Neck is advantage Intel (2, Informative)

Ocker3 (1232550) | about 2 years ago | (#42386185)

What people? Enterprise IT staff are going to buy Huge numbers of Win8 mobile devices that can authenticate to their networks at the OS level, removing the need for every app to authenticate itself. We have iPads that refuse to forget wireless accounts, meaning a user can get locked out (hitting the bad-login limit quite fast) in a few minutes, and iOS doesn't prompt the user for a corrected username/password. Apple's support for Enterprise environments has been late and shoddy, especially if you don't live in the USA. And good luck trying to print properly from an iOS device to a Server 2003-based printer, which a Lot of people still run.

I for one am going to be Really happy when I can give Surface devices to our users and swing them away from getting iPads; they're good for home use, but a Huge hassle en masse.

I've asked our corporate purchasing staff about 'droid devices; their response was: can't get a serious warranty, platforms roll over too fast, and they're Far too easy to get root access to.

Re:Neck and Neck is advantage Intel (4, Insightful)

iserlohn (49556) | about 2 years ago | (#42386355)

If the execs and the sales guys want their Apple devices, or Android devices for that matter, what the IT organization thinks is 100% irrelevant. I've seen this happening already in quite a few large organizations that aren't particularly famous for being early adopters of new tech. The next thing to go is the standard Windows image - corporate images are normally of such poor quality that people complain about them constantly.

Re:Neck and Neck is advantage Intel (2)

Lennie (16154) | about 2 years ago | (#42386471)

Those won't be buying ARM, that is for sure, because Windows RT does not support any of these things; only the Intel version does.

That also means, if it's a Bring-Your-Own-Device situation, they'll be bringing the ARM version.

This is going to be fun to watch.

Re:Neck and Neck is advantage Intel (1)

poetmatt (793785) | about 2 years ago | (#42385525)

No, it doesn't.

Why doesn't it mean x86 is ahead? Because x86 has had years of development ahead of ARM. Also because x86 uses proprietary microcode.

So having them equal means ARM is a significant benefit.

Re:Neck and Neck is advantage Intel (5, Insightful)

JimCanuck (2474366) | about 2 years ago | (#42385625)

No, it doesn't.

Why doesn't it mean x86 is ahead? Because x86 has had years of development ahead of ARM. Also because x86 uses proprietary microcode.

So having them equal means ARM is a significant benefit.

The original x86 was introduced in 1978.

The original ARM was introduced in 1985.

That is just 7 years more over the ARM, which has had 27 years of development since its first implementation. Plus, all of the /. crowd and other self-described "experts" have been saying for years that a neck-and-neck tie between them for power consumption would never happen. And well, it did, so obviously this is a win for the x86.

Re:Neck and Neck is advantage Intel (0)

Anonymous Coward | about 2 years ago | (#42386001)

That is just 7 years more over the ARM, which has had 27 years of development since its first implementation. Plus, all of the /. crowd and other self-described "experts" have been saying for years that a neck-and-neck tie between them for power consumption would never happen. And well, it did, so obviously this is a win for the x86.

HAHA. Using a Tegra 3? AKA the most power-hungry SoC out there? Why not Exynos || Snapdragon's SoC?

admiral0
(too lazy to login)

Re:Neck and Neck is advantage Intel (0)

Anonymous Coward | about 2 years ago | (#42386495)

My last post is a stupid statement derived from laziness. Actually Tegra 3 is not much more power hungry than the latest Exynos; I didn't invest even a couple of seconds in googling for the correct comparison.

admiral0
(next time don't be too lazy to login)

Re:Neck and Neck is advantage Intel? Oh, really! (-1)

Anonymous Coward | about 2 years ago | (#42386157)

Truthfully it is an advantage for Intel because of the simple fact that 99% of chip comparisons exclude the manufacturing process. For example, IBM's PowerPC A2 @ 2.3GHz, manufactured on a 45nm process, has sixteen 64-bit cores, each capable of handling four threads for a total of 64 threads, while keeping power usage under 70 watts. In comparison, Intel's i7 (Ivy Bridge) @ 2.5-4.1GHz, manufactured on a 22nm process, has four 64-bit cores, each capable of two threads for a total of 8 threads, at between 69-87 watts; furthermore, Intel's 22nm Ivy Bridge is not even close to the performance of IBM's 45nm PowerPC A2. So, hypothetically, if we iterate IBM's PowerPC A2 through successive process shrinks, each allowing a power reduction of 30%, then at 32nm the PowerPC A2 is at 49 watts, and at 22nm it's at 34 watts. Thus with Intel's i7 (Ivy Bridge) at 22nm (69-87 watts) you get 4 real cores and 4 virtual cores, but with IBM's PowerPC A2 at 22nm (34 watts) you get 16 real cores and 48 virtual cores.

PowerPC is used in applications that demand high performance, like deep packet inspection of {ALL} internet traffic to determine who is downloading porn, wireless communication processing, or any application that requires the most advanced processing capabilities at a modest price. IBM will maintain their performance lead, for they have four process shrinks left (not including half-nodes); Intel, with two process shrinks left, will continue to provide the common user a processor which they think is good enough; and ARM Ltd has nearly reached its peak. Samsung may buy it some more time at 14nm, yet Intel's Atom at 32nm nearly equals the power usage of ARM's at 28nm, and we all know ARM's performance sucks, so as ARM tries to increase performance at 22nm their power usage will go up. Given Intel's manufacturing capacity, when all fabs convert to a 22nm process Intel will flood the market with 22nm Atoms.

ARM Ltd is doomed, because the performance is not enough to hold back either x86 or PowerPC. Samsung, Apple, Microsoft, and Toshiba all have PowerPC licenses, and as 22nm and 14nm go mainstream, PowerPC becomes a more viable option because of performance.

Re:Neck and Neck is advantage Intel (1)

metalmaster (1005171) | about 2 years ago | (#42385877)

it's got more software going for it than ARM

This product comparison [microsoft.com] from Microsoft leads me to believe that applications have to be rewritten to behave correctly on Windows 8 Pro. Notice the blurb about downloading apps from the Microsoft store; it does not say you can download any plain old exe file. The mention of Windows 7 applications could refer to those that have already been rewritten to be compatible with the tablet.

If that's the case, iOS and Android apps written for the ARM architecture greatly outnumber those for x86.

Re:Neck and Neck is advantage Intel (2)

KingMotley (944240) | about 2 years ago | (#42386033)

You would be incorrect. Windows 8 Pro runs any old executable that ran on Windows 7; you don't need to recompile or anything.

Re:Neck and Neck is advantage Intel (3, Insightful)

Ocker3 (1232550) | about 2 years ago | (#42386195)

I think you're confusing Surface RT with Surface Pro. The RT uses a different chip and requires different coding. Win8 Pro runs on any machine that runs Win7.

Doesn't mean a thing (2, Interesting)

Tough Love (215404) | about 2 years ago | (#42385395)

Even if true (watch out for cognitive dissonance with respect to Intel power efficiency claims), it does not mean a thing if Intel cannot match the price. Currently something like $1 goes to ARM Holdings per chip. Let's see a bloated old monopolist get by on that.

Re:Doesn't mean a thing (1, Interesting)

Giant Electronic Bra (1229876) | about 2 years ago | (#42385467)

Nope, they get MORE than $1 a chip, which means they have more to plow back into R&D. Truthfully though, it's an interesting question, but all told unless the price is substantially different we're not talking a big deal. If you pay $5 more for your x86 tablet you won't really care, assuming it works at least as well and happens to have the features you wanted/be the brand you like/etc.

I think the question is whether Intel will be able to push the x86 design down to EXTREMELY low power levels. x86 has a lot of baggage that ARM doesn't, and there may be even newer designs out there that can push things further. Still, it seems Intel is brutally tough to compete with. That's good for us, as long as the competition exists. I'd hope they lose now and then.

Re:Doesn't mean a thing (0)

Anonymous Coward | about 2 years ago | (#42385553)

The problem is Intel doesn't want only $5 or $10 more; they want something like $20 to $30 more. Intel has had the highest margins and they want to keep it that way.

Apple will never move to x86 as long as they're pouring money into their own designs. These aren't the PPC days, when they were at the mercy of IBM or Motorola. Also, they have more money than Intel.

Others like Nvidia and Qualcomm will be content with lower margins because they'll have to be, and they're behind on process technology (as is Apple). That's OK. Intel has bleeding-edge tech and they'll want to charge more for it, but that's difficult to justify when performance is good enough and you can keep profits in house.

Re:Doesn't mean a thing (2)

Tough Love (215404) | about 2 years ago | (#42386367)

all told unless the price is substantially different we're not talking a big deal. If you pay $5 more for your x86 tablet you won't really care

You're in outer space. Intel can't get by on $5/tablet; they need at least $50, or they will soon need to sell their head office. There is no way Intel can compete with ARM's royalty structure while continuing to live in the manner to which they have become accustomed.

Are either of these processor relevant? (0)

Anonymous Coward | about 2 years ago | (#42385399)

In terms of processing power per watt, the Snapdragon S4 kills both of them handily.

Tegra 3 is ancient already, and Atom has never been a performer on any platform.

Re:Are either of these processor relevant? (0)

Anonymous Coward | about 2 years ago | (#42385475)

Oh come now. Statements like "Snapdragon S4 kills both" need to be backed up with facts, not just random conjecture.

Re:Are either of these processor relevant? (5, Informative)

EmagGeek (574360) | about 2 years ago | (#42385599)

http://www.tomshardware.com/reviews/snapdragon-s4-pro-apq8064-msm8960t,3291-4.html [tomshardware.com]

Atom isn't here, perhaps because it is too new, but it's clear from this graph that at least Tom's Hardware agrees that the Snapdragon eats Tegra's lunch.

I have a Nexus 4 (Snapdragon S4) and a Nexus 7 (Tegra 3), and the 4 is WAY, WAY faster than the 7 in almost every experience.

On the Nexus 4 I can leave a movie playing in the background and keep listening to it while I check an important email that just came in or make a move in a game of Words with my wife. Attempting the exact same thing on the Nexus 7 results in the movie skipping and the user experience slowing to a crawl.

Perhaps there are some significant architecture differences between the two, but at least from a real-world user experience standpoint, I would not characterize the OP's assertion as "random conjecture" at all.

Re:Are either of these processor relevant? (2)

default luser (529332) | about 2 years ago | (#42385759)

That's probably a combination of the piss-poor GPU on Tegra 3 (barely good enough to render one thing at a time, and you expect stutter-free multitasking?) along with the pathetic memory bandwidth (DDR3, but only a 32-bit bus).

Snapdragon S4 has neither of these issues!

Re:Are either of these processor relevant? (0)

Anonymous Coward | about 2 years ago | (#42386009)

Yeah, how much did Tom's get paid to make that bar graph in Excel?

Tom's Hardware is a joke for twiddle dicks that think they are smart.

Re:Are either of these processor relevant? (0)

Anonymous Coward | about 2 years ago | (#42386401)

Snapdragons are ground-up designs by Qualcomm, thanks to an ISA license from ARM.

Tegra is basically an ARM Cortex paired up with a GeForce.

It is interesting to see OMAP staying so close to Snapdragon though, as OMAP is again a Cortex CPU paired up with a GPU (PowerVR SGX450).

OvO hoot

i said it back in september (4, Insightful)

arbiter1 (1204146) | about 2 years ago | (#42385407)

http://www.tomshardware.com/news/intel-arm-processor-soc-atom,17476.html [tomshardware.com] When that story was posted, I said that all ARM was doing was poking the bear. Didn't take long for Intel to get there, either. Just shows you shouldn't piss off a company with a lot of $ for R&D.

Re:i said it back in september (2, Insightful)

Anonymous Coward | about 2 years ago | (#42385457)

Samsung will be presenting at ISSCC on their 28nm "big-little" design.
http://www.eetimes.com/electronics-news/4401645/Samsung-big-little--no-Haswell--Project-Denver-at-ISSCC
>Samsung will detail a 28-nm SoC with two quad-core clusters. One cluster runs at 1.8 GHz, has a 2 MByte L2 cache and is geared for high performance apps; the other runs at 1.2 GHz and is tuned for energy efficiency.

We'll need to see how it matches up to Samsung's latest 14nm prototype.
http://www.eetimes.com/electronics-news/4403838/Samsung-14nm-FinFET-test-chip-pushes-ecosystem
One of the interesting parts, aside from the smaller-geometry process, is their "big-little" low-power architecture.
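To make the big-little idea concrete, here is a toy sketch in Java of the scheduling policy such a design implies. The threshold, cluster names, and speeds are invented for illustration; real governors use far richer heuristics than this:

    // Toy model: steer work to the fast, power-hungry cluster only when
    // utilization crosses a threshold; use the efficient cluster otherwise.
    public class BigLittleToy {
        enum Cluster { BIG_1800MHZ, LITTLE_1200MHZ }

        static final double UTILIZATION_THRESHOLD = 0.6; // made-up cutoff

        static Cluster pick(double utilization) {
            return utilization > UTILIZATION_THRESHOLD
                    ? Cluster.BIG_1800MHZ
                    : Cluster.LITTLE_1200MHZ;
        }

        public static void main(String[] args) {
            System.out.println(pick(0.2)); // LITTLE_1200MHZ
            System.out.println(pick(0.9)); // BIG_1800MHZ
        }
    }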

Re:i said it back in september (1)

equex (747231) | about 2 years ago | (#42386377)

If they can pull off 14nm, it's a breakthrough. All hail Samsung.

Re:i said it back in september (1)

Lennie (16154) | about 2 years ago | (#42386513)

That's actually called: big.LITTLE :-)

Re:i said it back in september (0)

Anonymous Coward | about 2 years ago | (#42386179)

Not really; if they didn't beat a 1+ year old SoC in performance/power, it'd probably be time to give up and go home.

I wish the would concentrate on giving more speed. (0)

Anonymous Coward | about 2 years ago | (#42385419)

I don't give a flying crap how much juice it sucks; just give me a 75-gigahertz CPU and a damn drive that can keep up.
Oh, and make it AMD prices, not Intel.

Re:I wish the would concentrate on giving more spe (3, Interesting)

jamesh (87723) | about 2 years ago | (#42385811)

I don't give a flying crap how much juice it sucks; just give me a 75-gigahertz CPU and a damn drive that can keep up.
Oh, and make it AMD prices, not Intel.

I suspect you're in the minority here (as in wanting performance regardless of power consumption). For me, desktop (and laptop) processors became fast enough about 5 years ago, probably more. The laptop I'm using now is about 5 years old and any performance problems it has aren't CPU related. A hard drive that can keep up with my 1.8GHz CPU would be nice - something that could keep your proposed 75GHz running without waiting would be just a little awesome :)

Nvidia is the worst competitor (0)

Anonymous Coward | about 2 years ago | (#42385423)

for this test. They chose Nvidia just so they'd have a fair shot; Nvidia is not the best on the market regarding power consumption.

I want to see how it compares to the PowerVR SGX, present on most ARM SoCs, including Apple, Samsung, Texas Instruments, Broadcom, Freescale, etc.

Actually, I would love to see how it compares to the Qualcomm Snapdragon. People can imagine the results; that wouldn't be fair either.

Would the results be the same under Android? (5, Interesting)

obarthelemy (160321) | about 2 years ago | (#42385435)

First, those articles are very interesting; thanks to Intel for making them happen.

Second, it's a good thing that Intel is catching up. I'm not a great Intel fan (rooting for the underdogs and all that), but still, I'm impressed.

Third, isn't the OS choice biasing the results a bit? Would ARM fare better under a more ARM-oriented OS such as Android? Or is the power consumption profile, in the end, fully OS-independent?

Re:Would the results be the same under Android? (0)

Anonymous Coward | about 2 years ago | (#42385507)

Yes, the article is a little biased as far as OS tweaking goes, but ARM still has to compete on this platform; Windows isn't leaving it, and they need to either write better code for it or crank up their R&D. They still have a much better price point, which will help them for a while. Yes, this is a biased test, but it was Intel doing it, and they really proved they have made great strides in the power department.

Re:Would the results be the same under Android? (0)

Anonymous Coward | about 2 years ago | (#42385729)

wow I need to read before I post

Re:Would the results be the same under Android? (0)

Anonymous Coward | about 2 years ago | (#42385517)

Android the battery killer? It would probably do worse.

No (1)

Anonymous Coward | about 2 years ago | (#42385523)

Windows RT still runs a Windows subsystem.

Android's apps are really fragments of apps: the GUI is a different fragment from the service (the thing that does any grunt work, if needed), etc. If you don't use a GUI bit, that GUI bit never loads. If a service bit is running, its GUI bit can be, and usually is, closed.
Broadcast intents mean apps that appear to be running actually aren't always running, or even in memory. The broadcast intent fires (e.g. a minute timer, particular network events, lots of other events...), wakes up the bit of code to handle it, executes, then returns, ending the fragment if necessary.
Apps can be killed at any time, and are designed that way, hence the code is already written to handle it.
Widgets on Android aren't anything, just bitmaps; if a widget changes, it can be because an intent fired, and the tiny bit of code needed to redraw the fragment was loaded, executed, then discarded. They're not code constantly running.
Apps are memory-constrained on Android; on Windows they can grow beyond RAM, which unfortunately means paging to disk or flash. You can see why Android keeps the memory usage of apps down to a minimum given this limit, and paging is no fix when the backing store is flash, since writing to flash eats battery.
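Roughly, the broadcast-intent pattern looks like this (a from-memory Java sketch against the stock Android API; the receiver class is invented for illustration and would also need to be declared in the app's manifest):

    import android.content.BroadcastReceiver;
    import android.content.Context;
    import android.content.Intent;

    // No process sits resident waiting for this event. Android wakes the
    // class when the broadcast fires, runs onReceive(), and may then
    // discard the whole process again.
    public class ChargerReceiver extends BroadcastReceiver {
        @Override
        public void onReceive(Context context, Intent intent) {
            if (Intent.ACTION_POWER_CONNECTED.equals(intent.getAction())) {
                // Do the minimal work and return quickly; anything
                // long-running gets handed off to a service component.
            }
        }
    }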

There's lots of other things going on, but I've developed for both and there's simply no way a Windows app is going to ever achieve that, which presumably is why they're pushing Metro (I have no experience of Metro to know if it fixes this).

The Intel vs ARM test is also void, because ARM's big thing is its low-power idle, and on Android that is most of the time, since it doesn't run much. So when they're running Windows / Windows RT, they're really comparing the power draw with the processors chugging along. Just because they are comparable on Windows doesn't mean they would be on Android or iOS.

I thought when the MS Office people said they'd turned off the flashing cursor it was some sort of ironic joke, indicating how little effort they'd put into the RT port, as if they were proud of sinking RT! Really guys? You turned off the flashing cursor??? Android unloads my complete app and loads in only minimal bits of it when the front task is soaking up the processing power, and you turned off the blink??

That's a lot of words, for a simple thing (3, Interesting)

Anonymous Coward | about 2 years ago | (#42385609)

ARM draws 10% of the power of Atom at idle, and Android runs mostly at idle even when you're using it to do stuff, because it's designed that way from day one. Windows uses a lot more processing power, and 'idle' on Windows literally means not using it at all - and even then, the Atom is still drawing > 1 W.
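Back-of-envelope, taking the figures above at face value and assuming a typical (hypothetical) 25 Wh tablet battery:

    // Standby estimate from the parent's numbers: >1 W Atom idle draw,
    // roughly 10% of that for ARM. The 25 Wh battery is an assumption.
    public class StandbyMath {
        public static void main(String[] args) {
            double batteryWh = 25.0;
            double atomIdleW = 1.0;
            double armIdleW = 0.1;
            System.out.printf("Atom standby: ~%.0f hours%n", batteryWh / atomIdleW);
            System.out.printf("ARM standby:  ~%.0f hours%n", batteryWh / armIdleW);
        }
    }

On those assumptions, an order-of-magnitude difference in idle draw is the difference between roughly a day of standby and more than a week.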

Re:Would the results be the same under Android? (1)

Heir Of The Mess (939658) | about 2 years ago | (#42385607)

I think the underlying intent of the article is to show that the Microsoft Surface is a waste of time, and so it was Windows 8 focussed. They compared a Microsoft Surface with an Acer W510, and the Acer tied on power and won on performance. The Acer also runs all Windows apps, so why would you buy the Microsoft Surface over the Acer W510?

Chromebook vs Chromebook (1)

Andy Prough (2730467) | about 2 years ago | (#42385655)

would probably be a much better comparison.

technology node (5, Insightful)

blackC0pter (1013737) | about 2 years ago | (#42385441)

The only issue here is that this is not an apples for apples comparison: 40nm vs. 32nm should give a huge benefit to the 32nm Atom. We need to compare the same technology node for this to make any sense. Also, looking at the idle CPU power consumption from the AnandTech article, the Atom SoC used 10x more power.
So the real question is: what do most tablets spend the majority of their time doing? Running a benchmark at full/half speed, or sitting with the SoC idle?

Re:technology node (0)

Anonymous Coward | about 2 years ago | (#42385581)

Yep... the article is basically "32nm part available in Q4 2012 competitive with 40nm part available in Q1 2012", especially on the many benchmarks where the CPU is left idle or all work is offloaded to a video/audio accelerator.

Re:technology node (4, Insightful)

jiteo (964572) | about 2 years ago | (#42385585)

One of Intel's weapons has always been process size. So while it's not a fair comparison if you're doing science, it's a fair comparison if you're wondering what tablet to buy.

Also rather hard to hate on Intel for it (5, Informative)

Sycraft-fu (314770) | about 2 years ago | (#42385689)

Why are they a node ahead all the time? Because they spend billions on R&D. When the downturn hit, everyone in the fab business cut R&D except Intel. So now they have a 22nm fab that has been running for a while, another that just came fully online, and two 14nm fabs that'll be done soon (one on 450mm wafers).

They do precisely what geeks harp on companies to do: invest money in R&D, invest in tech. They also don't outsource production, they own their own fabs and make their own chips. Most of them are even in the United States (8 of the 11).

The payoff is that they are ahead of everyone in terms of node size, and that their yields tend to be good (because the designers and the fab people can work closely).

If other companies don't like it, the only option is to throw in heavy on the R&D front. In ARM's case, being not only fabless but actually chipless (they just license cores to other companies), they can't do that. They are at the mercy of Samsung, TSMC, Global Foundries, and so on.

Re:Also rather hard to hate on Intel for it (1)

rrohbeck (944847) | about 2 years ago | (#42386089)

And even though they're way in front technology-wise, they keep pissing everybody off with artificial market segmentation. Why?

Re:Also rather hard to hate on Intel for it (2)

TheRaven64 (641858) | about 2 years ago | (#42386219)

They also don't outsource production, they own their own fabs and make their own chips

Outsourcing production is not necessarily a bad thing, as it allows specialisation. Intel can afford it because they are a big player, but for other companies it makes sense to share the fab R&D costs with others, including with their competitors. They then compete based on their strengths (chip design), and the manufacturers compete based on their process technology.

Re:Also rather hard to hate on Intel for it (0)

Anonymous Coward | about 2 years ago | (#42386549)

They only use the "bleeding edge" process for part of the chip, resulting in a hybrid of sorts: the L2/L3 cache on Intel chips uses wider spacing than the rest of the chip.

Re:technology node (1)

Anonymous Coward | about 2 years ago | (#42385691)

Nobody is buying Windows tablets, so it's a pointless comparison. Like arguing over whether Rosie O'Donnell or Roseanne Barr has a tighter pussy.

Re:technology node (1)

nateman1352 (971364) | about 2 years ago | (#42385835)

Unfortunately this article lacks detail, but it seems that Bernstein Research considers Intel's latest smartphone designs to be as energy efficient as competitors'. [mercurynews.com] Intel's latest smartphone chip is Medfield, which is 32nm. The article does not say what chips they compared... but it would be surprising if they didn't include Qualcomm's (Snapdragon S4 @ 28nm) in the comparison. So we at least have some indirect evidence that, even at the same technology node, Intel's design is still close to ARM's. It will be interesting to see what Silvermont (22nm) brings in 2013, at which point Intel SoCs will have LTE capability (instead of 3G GSM only) as well as Ivy Bridge graphics (instead of PowerVR), and they will be quad-core. The smaller technology node paired with the new design features will probably yield an awesome smartphone/tablet platform.

Re:technology node (0)

Anonymous Coward | about 2 years ago | (#42386317)

Also, looking at the idle CPU power consumption from the AnandTech article, the Atom SoC used 10x more power.

... what AnandTech article were you looking at?
http://images.anandtech.com/reviews/SoC/Intel/CTvT3/idle-cpu.png

A look at the CPU chart gives us some more granularity, with Tegra 3 ramping up to higher peak power consumption during all of the periods of activity. Here the Atom Z2760 cores average 36.4mW at idle compared to 70.2mW for Tegra 3.

Accounting for the last 25% of the graph (where "true" idle was achieved), if the Atom were actually using 10x more power at idle, the blue trace should sit above the green trace by a significant margin (i.e., display with a gap). Instead we see the blue trace embedded within the green one, and the CPU power for the Tegra chip varies far more wildly than the Atom's. Even taking a visual average, the Atom has comparable or lower idle power; the quoted figures (36.4mW vs. 70.2mW) actually put Tegra 3 at roughly double the Atom, not the other way around.

Re:technology node (1)

Lennie (16154) | about 2 years ago | (#42386519)

Of course this wasn't an apples for apples comparison, there was no iPad ;-)

Reason: crappy NVidia GPU (2, Insightful)

Anonymous Coward | about 2 years ago | (#42385489)

Example numbers: ARM CPU 0.0038 W vs. Atom 0.02 W.
NVidia GPU 0.21 W vs. Imagination 0.11 W.
The part that wins isn't from Intel; it is available for ARM, and it is probably the part that would lose badly in any performance benchmark.
Yay for biased benchmarking.
So far Intel wins by undersizing the GPU.

A tie means Intel loses (4, Insightful)

steveha (103154) | about 2 years ago | (#42385501)

I have said it before [slashdot.org] : with ARM, you can choose from multiple, competing chip vendors, or you can license the ARM technology yourself and make your own chips if you are big enough; with x86, you would be chaining yourself to Intel and hoping they treat you well. So, if low-power x86 is neck and neck with ARM, that's not good enough.

Intel is used to high margins on CPUs, much higher than ARM chip makers collect. Intel won't want to give up on collecting those high margins. If Intel can get the market hooked on their chips, they will then ratchet up the margins just as high as they think they can.

The companies making mobile products know this, and will not lightly tie themselves to Intel. So long as ARM is viable, Intel is fighting an uphill battle.

Re:A tie means Intel loses (1)

ikaruga (2725453) | about 2 years ago | (#42385633)

Let alone backwards software compatibility: recompiling and debugging all those iOS and NDK-based Android apps all over again doesn't sound like something developers will like.

Re:A tie means Intel loses (1)

MichaelSmith (789609) | about 2 years ago | (#42385743)

It shouldn't be an issue in this day and age.

Re:A tie means Intel loses (1)

ikaruga (2725453) | about 2 years ago | (#42386227)

Lazy devs are an issue in all ages. Even something as simple as changing the target device and maybe changing a couple of parameters in a project can make people moan. Plus, resource-intensive apps may still require some low-level code. "Luckily" such apps seem to be very rare on the consumer mobile app market.

Re:A tie means Intel loses (4, Insightful)

CODiNE (27417) | about 2 years ago | (#42385791)

Actually, all those iOS apps already run on Intel; the Xcode simulator runs Intel code, not ARM code. Android also runs on Intel, but I believe most apps are emulated during development, so they might need slightly more tweaking than an iOS app to get running on Intel.

Re:A tie means Intel loses (-1)

Anonymous Coward | about 2 years ago | (#42385839)

Lol, binary translation is terrible for performance, which means it is also terrible for battery life. You don't want to do any of that on a mobile device. For a desktop attached to mains, it's obviously fine to emulate iOS/Android for development but any developer making A-level titles is not going to release an app without native support.

Re:A tie means Intel loses (1)

CODiNE (27417) | about 2 years ago | (#42385867)

Smart guy, too bad you can't read.

Re:A tie means Intel loses (0)

Anonymous Coward | about 2 years ago | (#42385639)

It's also why every company is forming its own core dev team: Samsung, Nvidia, Apple, Qualcomm. None want to be solely dependent on ARM to push CPU performance forward. And all of these companies have bigger budgets and are more able to form larger dev teams. These companies aren't standing still waiting for Intel to roll them over.

Poor comparison (4, Interesting)

markdavis (642305) | about 2 years ago | (#42385521)

Interesting that they are not comparing to a *modern* ARM chip (Cortex-A15), like the Exynos 5 (5250), or even a Qualcomm Krait S4 (perhaps the MSM8960).

So the news is that Intel has mostly caught up to an old ARM-based chip, based on designs/specs years older still, and only running under MS Windows. Yawn...

Re:Poor comparison (0)

Anonymous Coward | about 2 years ago | (#42386043)

There's no Windows 8 system that runs on Cortex-A15s yet...

Re:Poor comparison (0)

Anonymous Coward | about 2 years ago | (#42386483)

They are comparing best-in-class SHIPPING hardware, though.

Re:Poor comparison (0)

Anonymous Coward | about 2 years ago | (#42386547)

The Cortex-A15 isn't as power efficient as the A8.

Check under the hood (1)

SpaceLifeForm (228190) | about 2 years ago | (#42385559)

The main problem is likely the compiler.

ya and windows 8 still sucks (0)

Anonymous Coward | about 2 years ago | (#42385643)

So bad I bought a new PC with Windows 7 early, so I could avoid the bullshit Windows 8 that's creeping around. If this is the route MS is going, I'm glad Steam is going Linux, because this is the last PC I will get with Windows on it.
I'll build from parts to avoid the boot-lock bullshit;
that crap ought to be illegal.

And I'll add: by the time this Win 7 really is useless, Linux and Steam ought to be very ready, and I'll just drop a Linux on it.
Nvidia and ATI had better be getting smart about this move, because I know a lot of hardcore gamers that are now trying to get into Linux because they really don't like Wincrud 8.

"Neck in neck"? (-1)

Anonymous Coward | about 2 years ago | (#42385667)

The phrase is "neck and neck".

Interesting findings... (0)

Anonymous Coward | about 2 years ago | (#42385711)

...too bad both run Windows 8.

Meh.

Intel GPUs more open prospect than ARM (4, Insightful)

Morgaine (4316) | about 2 years ago | (#42385747)

One area in which Intel is significantly more open than any manufacturer in the ARM ecosystem is graphics hardware. Although Intel hasn't opened all their GPUs fully yet (from what I've read), this seems to be mostly because providing all the documentation takes time, not because they are against making everything open.

This contrasts dramatically with every single ARM licensee in existence. ARM's own Mali GPU is tightly closed (probably because Mali was a licensed technology), so the Lima team is having to reverse-engineer a Linux driver. The ARM licensees who provide GPUs seem to be either unable to open their GPU information because their GPU core has been licensed from a third party, simply uninterested in doing so, or vehemently opposed to it for alleged commercial reasons in at least a couple of cases. So the prospect of open documentation on SoC GPUs appearing from ARM manufacturers is vanishingly small.

This gives Intel at least one possible opening through which they can be fairly certain that the competition will not follow. Although that may be worth a lot to us in this community, the commercial payback from community support tends to be very slow in coming. Still, it's something that Intel might consider an advantage worth seizing in the mobile race where they're a rank outsider.

Re:Intel GPUs more open prospect than ARM (0)

Anonymous Coward | about 2 years ago | (#42386067)

Except the "Intel" graphics you're talking about is actually Imagination Technologies' PowerVR, which is closed and used in ARM chips too.

Re:Intel GPUs more open prospect than ARM (1)

xynopsis (224788) | about 2 years ago | (#42386449)

You either don't know what you're talking about or you are just plain trolling. The GPU specs [intellinuxgraphics.org] that Intel opened are for the Core HD Graphics series, which is Intel's own GPU technology and in no way related to ImgTech's PowerVR.

I am looking forward, though, to the real competition between ARM's latest and greatest and Intel's upcoming Haswell.

Apples and Oranges sometimes (2)

EmperorOfCanada (1332175) | about 2 years ago | (#42385807)

One thing to keep in mind is that the ARM is much more general purpose while the Intel chips tend to have a more complex assembly instruction set. So for adding one number to another (x=y+z) I suspect the simpler ARM architecture is going to win on power consumption. But many Intel chips have assembly instructions specifically for crazy things like AES encryption. This is used as the basis of many encryption protocols, hashing, and random number generation. So if a machine is basically serving up all encrypted data then it is possible that an Intel chip will be much faster and consume much less power while performing these operations. That depends, of course, on whether the software takes advantage of them.
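For example, bulk encryption in Java looks like the sketch below. On a CPU with AES-NI, a sufficiently recent HotSpot JVM swaps the hardware instructions into the AES path (the -XX:+UseAESIntrinsics switch controls this, as far as I know), so the identical code runs faster and draws less power than a pure-software AES. A sketch of the workload, not a benchmark:

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;

    public class AesSketch {
        public static void main(String[] args) throws Exception {
            // Generate a random 128-bit AES key.
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);
            SecretKey key = kg.generateKey();

            // Encrypt 1 MiB of zeros. With AES-NI the inner loop is
            // hardware-accelerated; without it, the JVM falls back to a
            // software implementation of the same cipher.
            Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
            cipher.init(Cipher.ENCRYPT_MODE, key);
            byte[] ciphertext = cipher.doFinal(new byte[1 << 20]);
            System.out.println("Encrypted " + ciphertext.length + " bytes");
        }
    }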

So I think this is a case where you really have to look at the significantly broken-down performance results to see if your use case fits one chip better than the other. A normal consumer example would be if your OS is encrypting your file system and using these cool Intel instructions; I suspect it would then be a night and day difference in battery drain. But the drag is that you probably have to buy a device with both chips, set up your standard configuration, and then test it out. This is generally only something an IT person about to provision a department might be expected to do.

I guess the overall benchmark is all we really have to go by, and it really doesn't tell the whole story.

Re:Apples and Oranges sometimes (3, Interesting)

willy_me (212994) | about 2 years ago | (#42385965)

One thing to keep in mind is that the ARM is much more general purpose while the Intel chips tend to have a more complex assembly instruction set. So for adding one number to another (x=y+z) I suspect the simpler ARM architecture is going to win on power consumption. But many Intel chips have assembly instructions specifically for crazy things like AES encryption. This is used as the basis of many encryption protocols, hashing, and random number generation. So if a machine is basically serving up all encrypted data then it is possible that an Intel chip will be much faster and consume much less power while performing these operations.

Not really important. The Intel chips convert assembly instructions into microcode; how they implement them internally (either dedicated hardware or reusing existing silicon) is up to them. You can't make a blanket statement like that unless Intel has specifically stated that hardware support is included. In general, the Atom series trims as much off the CPU core as possible, so don't be surprised if hardware support for some of those exotic instructions is lacking. And many ARM cores include instructions that are just as interesting, mostly for the embedded DSP market. A manufacturer with the appropriate license can include whatever instructions and dedicated hardware they want.

What likely matters more than the instructions is the included memory and cache. Intel likely includes a larger cache, which will drive up the price. Cache is usually static and has a very low power draw when not in use; by including a large cache, Intel can minimize expensive requests to memory. Also note that DIMMs have a significant constant current draw. Low-power DIMMs are available but more expensive. You can bet that Intel used the latest and greatest for their demo, while others might opt for the cheaper and slightly more power-hungry DIMMs.

This demo shows how having a process one step more advanced than the competition can make a big difference wrt power consumption. But newer ARMs will be available soon - I believe Samsung is scheduled to roll out 28nm in the near future. Intel still has a long way to go to convince manufacturers that they should pay more for what ARM can do for less.

Re:Apples and Oranges sometimes (0)

Anonymous Coward | about 2 years ago | (#42386173)

I think I just read they are down to 14nm.

DIMMs? (1)

dutchwhizzman (817898) | about 2 years ago | (#42386325)

You must mean RAM chips and even those are often on-chip on these SoC systems. The main thing here is price point: since Intel is the only manufacturer and uses a very expensive 32nm fab, their system is far more expensive to buy than a "generic" 40nm-fab ARM chip. You are right that the Intel chip is, under the hood, just as RISC as the ARM chip is. The point seems to be that by using a more expensive, smaller fab, Intel can sort of offset the extra power required for the on-the-fly translation of x86 instructions to the "native" instructions of the RISC cores in their system.

Even if that is the case, a lot of the power here was used by the Tegra SoC, which has a reputation of being a power-hungry beast and is at least one generation older than the current state-of-the-art offerings in the latest smartphones and tablets. I welcome having x86 stuff available that is easy on batteries, since that would benefit the life cycle of "classic" laptops in the future. However, winning against ARM in the smartphone and tablet market? I don't see that happening any time soon. The other way around, ARM getting into the desktop and server market - yes, that is very feasible. They are already getting into the gaming market as well, with several Android-based consoles starting to appear in the last few months. Exciting developments, and good for competition and prices.

Re:DIMMs? (1)

nojayuk (567177) | about 2 years ago | (#42386351)

"You must mean RAM chips and even those are often on-chip on these SoC systems."

Nope. A DRAM of any significant capacity (256MB or better) has a similar die size to a SoC chip. An SoC will usually have some RAM on-board for buffers, cache, maybe low-end graphics support but the main memory in tablets, phones etc. resides on separate DRAM chips. A typical 2Gb DDR3 die is about 30 sq. mm whereas the Tegra 3 with 5 cores and over a MB of cache is 80 sq. mm.

Devices like the Raspberry Pi use package-on-package construction, where the DRAM device is mounted on top of the SoC controller to save space, but they don't share the same die.

Re:Apples and Oranges sometimes (1)

TheRaven64 (641858) | about 2 years ago | (#42386233)

But many Intel chips have assembly instructions specifically for crazy things like AES encryption.

You picked a pretty poor example, as ARMv8 includes instructions for AES. You should also look at the NEON instruction set on ARM, which has a number of fairly complex floating point operations. The advantage of the microcode on an x86 chip is greater instruction density, meaning less instruction cache usage, so you can have less instruction cache, which means less power consumption. The disadvantage is that you have a significantly more complex instruction decoder, which means more power consumption. The greater instruction density was a big win against more traditional RISC architectures like SPARC, MIPS and Alpha, but is far less so against ARM. For example, address calculation on ARM is about as cheap as on x86 (complex addressing modes were a big win of CISC over RISC when compilers started to use them). In my testing, Thumb-2 code is typically about 10-20% smaller than x86, which means that Intel pays both for bigger instruction caches and for a more complex decoder.

Re:Apples and Oranges sometimes (1)

Pinhedd (1661735) | about 2 years ago | (#42386249)

Intel chips are nothing more than dressed up RISC processors. The high level CISC instructions are converted into RISC micro-ops before execution. Similarly, no one in their right mind would call ARMv7/ARMv8 "reduced"

All of that money towards the x86 (0)

Anonymous Coward | about 2 years ago | (#42385821)

All of that money for x86: if it had been thrown in for ARM, we'd have something better than what we have now.

Comparing two Windows tablets (1, Funny)

symbolset (646467) | about 2 years ago | (#42385969)

Battery life is not the reason we don't want Windows tablets. Windows tablets suck. Might as well evaluate which one makes a better skateboard.

Re:Comparing two Windows tablets (1)

rrohbeck (944847) | about 2 years ago | (#42386099)

Yup. That same shootout with Android would be way more interesting.

Shill for Intel, earn a fortune (0)

Anonymous Coward | about 2 years ago | (#42385987)

Most Internet 'Tech' sites are 'pay for play'. Tom's, AnandTech, and Xbitlabs have some of the very worst reputations for this. AMD has never been rich enough to 'pay' in the first instance. Intel, on the other hand, has a PR budget that runs to billions EVERY year.

Samsung's Exynos A15 ARM SoC parts blow away any Atom that Intel has ever built (benchmarks are easily found). To beat the A15, Intel has to use its crown jewels, the ULV Sandy Bridge parts. With its most recent core design, Intel has a significant lead, dual-core vs. dual-core A15, BUT the ARM SoC parts are about to go quad-core A15 for far less than anything Intel can afford to charge for a ULV dual-core. At quad-core, ARM will whip even Intel's latest cores at dual.

It is going to get far, far worse for Intel. The ARM core that replaces the A15 in a year's time is fully 64-bit, but at only 60% of the die size of the A15 (yes, you read that right). ARM has massively re-engineered its core, even as it gains much more functionality.

Intel has essentially written the articles for AnandTech and Tom's (they gave both sites guidelines defined to the nth degree). Tegra 3 is a joke part compared to the current leading ARM SoC designs from Samsung, Apple and Qualcomm, which is why Intel has chosen it for its comparison. Tegra 4, on the other hand, will blow the wheels off anything Intel can deliver in the next 3 years for the equivalent market.

There are rumours that Intel is about to merge with Nvidia (owners of Tegra). If the Tegra 4 lives up to its expectations AND shows a major GPU advantage over ARM and PowerVR GPU designs, the merger will be a lock. Intel cannot compete with ARM no matter what they do. The Intel TAX makes this impossible: Intel needs an insane average selling price for its parts to maintain its mega-expensive operations. Intel is currently heavily subsidising some of its ULV parts to get them into ARM-competing products (see the latest Chromebook as an example). It simply cannot afford to do this for significant numbers of chip sales.

ARM parts can come through at cost-plus. Intel parts can only sell at cost plus an unthinkably high overhead profit. The Intel tax means:
- die area is wasted on circuits translating x86 to the internal RISC machine
- the translation units waste power
- the translation units carry a high IP cost that inflates the cost of the chip
- programming the chip is inefficient, as code must be written in x86, even though the actual core uses a completely different ISA internally

Again, Tegra 3 is a poor example of an A9 ARM design (only one slow memory channel, for instance). ARM is already TWO generations beyond the A9 design, and each of these generations is a massive leap over the last. Is it any wonder Intel wants to be compared to the Tegra 3? And even then, Intel is lucky to get a draw.

Anonymous Coward writes (0)

Anonymous Coward | about 2 years ago | (#42386383)

I am currently working for ARM in Cambridge. This has been a major talking point at lunch. What happens when Intel really become serious in ARM's domain? The war isn't over, but ARM have to get serious about Intel. Who will buy ARM and invest some much needed capital?