
Intel Shows 14nm Broadwell Consuming 30% Less Power Than 22nm Haswell

timothy posted about a year ago | from the disinterested-source-of-course dept.


MojoKid writes "While on stage at IDF this week, Kirk Skaugen, Senior Vice President and General Manager of the PC Client Group at Intel, snuck in some additional information about Broadwell, the 14nm follow-up to Haswell that was mentioned during Brian Krzanich's opening-day keynote. In a quick demo, Kirk showed a couple of systems running the Cinebench multi-threaded benchmark side-by-side. One of the systems featured a Haswell-Y processor, the other a Broadwell-Y. The benchmark results weren't revealed, but power was being monitored on both systems during the Cinebench run, and it showed the Broadwell-Y rig consuming roughly 30% less power than the Haswell-Y and running fully loaded at under 5 watts. Without knowing clocks and performance levels, we can't draw many conclusions from the power numbers shown, but they do hint at Broadwell-Y's relative health, even at this early stage of the game."


30%? (2, Informative)

Anonymous Coward | about a year ago | (#44847689)

Meaningless number unless we know they are comparing at the same performance level. You could take an Ivy Bridge CPU, downclock it, and get 30% less power use too.

Re:30%? (5, Funny)

wonkey_monkey (2592601) | about a year ago | (#44847733)

In other tests, one chip was shown to use 100% less power when switched off.

Re:30%? (0)

Anonymous Coward | about a year ago | (#44848759)

awesome! i'm going to try this, i bet it'll save me so much battery

Re:30%? (0)

Anonymous Coward | about a year ago | (#44847807)

Undervolting is usually the way to reduce power usage, not reducing clock rate.

Most CPUs will undervolt enough to get a reasonable drop in power usage, and this is even easier with smaller chips, which is probably where some of the improvement is coming from. There may be some bigger architectural changes too.

Re: 30%? (4, Insightful)

Bram Stolk (24781) | about a year ago | (#44848537)

Parent is correct.
Power usage goes up with the *square* of voltage, but is *linear* in clock speed.

Frequency does not matter much; voltage does.

Re: 30%? (1)

gman003 (1693318) | about a year ago | (#44849599)

Ah, but decreasing the frequency generally allows the voltage to be decreased as well without becoming unstable.
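As a rough illustration of the point in the last few comments: dynamic CPU power scales roughly as P ≈ C·V²·f, so cutting frequency alone saves power linearly, while cutting voltage (which a lower frequency usually permits) saves it quadratically. A minimal back-of-the-envelope sketch, with made-up illustrative values rather than figures for any real chip:

```python
# Back-of-the-envelope dynamic power model: P ~ C * V^2 * f.
# All numbers below are illustrative, not real Haswell/Broadwell figures.

def dynamic_power(c_eff, voltage, freq_hz):
    """Approximate dynamic (switching) power in watts."""
    return c_eff * voltage ** 2 * freq_hz

C_EFF = 1.0e-9  # effective switched capacitance (arbitrary illustrative value)

baseline  = dynamic_power(C_EFF, 1.00, 2.0e9)  # 1.00 V @ 2.0 GHz
freq_only = dynamic_power(C_EFF, 1.00, 1.6e9)  # 20% lower clock, same voltage
dvfs      = dynamic_power(C_EFF, 0.90, 1.6e9)  # 20% lower clock AND 10% lower voltage

print(f"frequency-only scaling: {1 - freq_only / baseline:.0%} power saved")  # ~20%
print(f"voltage + frequency:    {1 - dvfs / baseline:.0%} power saved")       # ~35%
```

Dropping voltage and frequency together is what the parent and grandparent are both describing; leakage current, which scales differently, is ignored in this sketch.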

Re: 30%? (0)

Anonymous Coward | about a year ago | (#44852671)

That's only true for a constant-resistance or constant-impedance load. A processor is not a simple circuit component of constant impedance, and if it were described as a simple RLC circuit, its impedance actually DOES change with frequency.

So to be fair, your statement is kinda all sorts of wrong.

Re: 30%? (0)

Anonymous Coward | about a year ago | (#44854055)

It's not all sorts of wrong. I've seen benchmarks that show a roughly linear power scaling with frequency given a fixed voltage.

Re: 30%? (0)

Anonymous Coward | about a year ago | (#44847925)

Hence the phrase "fully loaded". RTA.

Oh Yes We Can (1)

Anonymous Coward | about a year ago | (#44847699)

Without knowing clocks and performance levels, we can't draw many conclusions from the power numbers shown

Intel Shows 14nm Broadwell Consuming 30% Less Power Than 22nm Haswell

So a processor running at an unknown speed is using less power than a different processor running at an unknown speed, not to mention several other unknown factors, and we're going to write a story about that with a specific power savings?

Re:Oh Yes We Can (0)

Anonymous Coward | about a year ago | (#44848195)

You could always estimate the speeds from how quickly the rendered Cinebench tiles appear on screen in the IDF video.

Re:Oh Yes We Can (2)

iggymanz (596061) | about a year ago | (#44848293)

you forgot the part about accomplishing an unknown amount of work on a benchmark with unknown results

Re:Oh Yes We Can (1)

wmac1 (2478314) | about a year ago | (#44849713)

Still, it makes sense that a 14nm circuit would use 30% less power than a 22nm one (I'd guess even a bit more than that would make sense).

Re:Oh Yes We Can (1)

davester666 (731373) | about a year ago | (#44851351)

Yes, please buy a new computer with the new chip in it.

For the children.

Re:Oh Yes We Can (1)

LostMyBeaver (1226054) | about a year ago | (#44854033)

Just to nitpick, wouldn't you suppose that Intel's power-consumption claim takes two chips of equal performance and specifications and says that, across the board, the new fab process alone provides 30% lower power?

So the real question isn't the 30% compared to something else. That one is easily justified: just assume a Broadwell will use 30% less power than a Haswell... same architecture, smaller die.

The question is, how fast is the 5W part?

Where would we be without Africans... (-1)

Anonymous Coward | about a year ago | (#44847701)

I wonder how many work for Intel...

Anybody?

How much does this help? (1)

FooBarWidget (556006) | about a year ago | (#44847747)

How much does lowering CPU power usage help? How much of a computer's power usage comes from the CPU, instead of the GPU, the screen, the LEDs, the disks, etc?

Re:How much does this help? (0)

Anonymous Coward | about a year ago | (#44847773)

when playing games generally the video card will use the most cpu followed by the monitor followed by the cpu, followed by hard-disk+ssd, followed by motherboard, followed my mouse, keyboard etc.

Re:How much does this help? (0)

Anonymous Coward | about a year ago | (#44847789)

that said, monitors are coming down in power usage. led monitors use less power than ccfl at the expense of some colour quality.

video cards are probably the most significant for "playing games", and hopefully where improvements will come. i suspect that the shift to lower-power memory than gddr5 should help slightly across the range of video cards, and maybe video cards can start slowing down clock speeds when they're at their fps limit or such.

Re:How much does this help? (0)

Anonymous Coward | about a year ago | (#44848791)

Just about every game ever has an FPS-cap setting, which does reduce power use.

Re: How much does this help? (1)

DigiShaman (671371) | about a year ago | (#44849825)

For nVidia cards, just set VSYNC at the global level and be done with it. LCD screens are generally capped at 60Hz, so that's a cap of 60FPS right there. Assuming you have a mid-range card and aren't running Crysis 3 or some such.

Re: How much does this help? (1)

DigiShaman (671371) | about a year ago | (#44849849)

Following up on my last statement about Crysis 3: you don't want VSYNC enabled if you're only slogging along at 26FPS in game (hardware not up to snuff).

Re:How much does this help? (2)

Khyber (864651) | about a year ago | (#44850153)

"led monitors use less power than ccfl at the expense of some colour quality"

What? Not even close. Go look at the spectrum of a white LED versus any fluorescent. A white LED is only beaten by an incandescent lamp as far as a complete light-emission range goes.

Here you go. [scoutnetworkblog.com]

Re:How much does this help? (1)

acariquara (753971) | about a year ago | (#44847853)

when playing games generally the video card will use the most cpu followed by the monitor followed by the cpu

I think you accidentally something there.

Re:How much does this help? (0)

Anonymous Coward | about a year ago | (#44848121)

I think most CPUs draw more than monitors nowadays. High end CPUs have a TDP of around 100W still, whereas modern monitors use well under 30W. For common examples look at any of Dell's LED monitors; even at 24" they're using around 20W. That said, desktop quads are dropping fast. Everything besides the GPU probably draws less than 30W at most. You can assume a single magnetic hard drive to be around 5W.

The GPU is by FAR the largest draw, albeit not a constant one. The main reason is that graphics capability is one of the few fields where load has been increasing in step with the hardware. CPUs can afford to draw very little most of the time because most of what people do is web browsing and occasionally document editing. Monitors just have to draw to a screen, at most at 120Hz. On the other hand gamers demand better graphics so GPU designers push for the most they can get out of their few hundred W. Now obviously playing older games on a high end GPU means you're going to save some power, but most people enjoy playing the latest and greatest at the highest quality their card will allow.

Re: How much does this help? (1)

UnknownSoldier (67820) | about a year ago | (#44849991)

> Monitors just have to draw to a screen, at most at 120Hz.

Incorrect.

Most monitors refresh at 60 Hz.

My Asus VG248 is a 144 Hz monitor

Re:How much does this help? (0)

Anonymous Coward | about a year ago | (#44854085)

I think most CPUs draw more than monitors nowadays. High end CPUs have a TDP of around 100W still, whereas modern monitors use well under 30W.

These Y-model chips are aimed at mobile platforms, whereas you're comparing desktop numbers. On a phone or tablet the screen is usually the biggest power consumer.

Re:How much does this help? (1)

Anonymous Coward | about a year ago | (#44847791)

If you lower the power consumption by 33% for the same performance, you can cram 150% performance into the same thermal envelope.

So I would say it's quite important.
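For anyone checking the arithmetic behind that claim, a one-line sketch; it assumes performance per watt is what improved and that the gain scales perfectly back up to the full power budget, which real silicon only approximates:

```python
# If the same work now costs 67% of the power, performance per watt rose by 1/0.67.
# Spending the full original power budget again then buys ~1.5x the performance,
# assuming perfect scaling (real chips need extra voltage at higher clocks).
power_ratio = 1 - 0.33
print(f"performance at the same TDP: {1 / power_ratio:.2f}x")  # ~1.49x
```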

Re:How much does this help? (0)

Anonymous Coward | about a year ago | (#44848085)

I think nowadays we'd care more about a potential 30% boost in battery life... assuming the other components drew nothing, anyway.

Re:How much does this help? (2, Interesting)

Anonymous Coward | about a year ago | (#44847829)

Pretty huge.

1) means smaller design, which means you can pack more in for the same power
2) simpler cooling, which means you could fit it in smaller cases

Both of those are very good because you could fit both scenarios into a production line trivially.
Larger procs go one way, smaller mobile ones the other way.

Hell, I am just surprised they are at 14nm. I never thought they could get down that low because of leakage.

Re:How much does this help? (1)

Kjella (173770) | about a year ago | (#44847901)

When you're talking about 5W SoCs there are almost no trivial uses of power: if you're visiting a JavaScript-heavy site the CPU eats power, if you're playing games the GPU eats power, if you're watching a movie the screen eats power, and the RAM, chipset and motherboard eat power too. On a 5W package I'd estimate the CPU gets 1-2W, the GPU 2-3W and the rest of the system 1W, but if you're running full tilt at 5W your battery won't last very long. From what I've understood, a lot of what Intel has been working on over the last couple of years is integration; they're used to delivering a CPU and leaving the rest up to someone else.

I'd say in mixed usage, that is to say anything not gaming or graphics heavy, the CPU is probably still the most important factor. A big bright screen is of course also a big power draw, but that's more of a fixed overhead for both Intel and ARM. And just anecdotally, without numbers, I notice a huge difference in the heat output of a smartphone or tablet when it's using the screen in a simple way, like browsing a simple site, compared to games, so the SoC has very much room for improvement. In theory, if gaming took no extra power I should be able to game as long as I could browse the net, which is clearly not the case today.

Quite a bit (1)

Sycraft-fu (314770) | about a year ago | (#44848001)

The CPU is also the GPU in low-power systems; they are integrated units. Gone is the time when integrated Intel GPUs were worthless. These days they can handle stuff quite well, even modern games at lower resolutions. The display is still a non-trivial power user too, but the CPU is a big one.

Disks aren't a big deal when you go SSD, which is what you want to do for the ultra low power units. They use little in operation, and less than a tenth of a watt when idle.

So ya, keeping CPU power low is a big thing for low power laptops. Doesn't matter so much if you have a big desktop replacement with a 17" screen, dual drives, discrete GPU and all that, but even then it can help when you are on battery. I have such a laptop and it is amazing, I can get 2-3 hours of battery life when using the iGPU and doing regular web surfing and the like. No longer do you have to have a behemoth that really has a battery only as a joke. While it needs to be plugged in to spool all the way up and run powerful stuff, it can crank down nicely for mobile use and the low power CPU is part of that.

Of course the real target isn't laptops like that, but smaller ones that just have the iGPU, have a lower power CPU, and a smaller, not as bright, screen. They can then last for a whole day on a battery no problem.

Re:Quite a bit (2)

Khyber (864651) | about a year ago | (#44850183)

"Gone is the time when integrated Intel GPUs were worthless"

Actually, here's something funny about that. You want to know why Intel GMA945/950/X3100 sucked balls?

They were all deliberately crippled by Intel. Their original spec speed was to be 400 MHz. Every desktop and laptop that had these ran them at 133/166 MHz. Unusable even for Quake 3.

But suddenly - if you fixed that clock speed issue, holy crap, you could play Q3 in OpenGL mode! Suddenly newer games like Faster Than Light run full speed instead of 2 FPS. I'm on a laptop that has an Intel GMA950, and since I installed GMABooster as well as reset the clock speed to the proper 400MHz, this system has suddenly become a LOT more usable, and it's still not getting any hotter than typical.

Re:Quite a bit (0)

Anonymous Coward | about a year ago | (#44879629)

Wow, you're a dumbass. These were low voltage / low power chipsets. Ask engineers to hit a power target, and they will! But when that low power product is a derivative of another product which can (and did) close timing at much higher frequencies, well duh, you can overclock the low power version pretty successfully. And if that boosted power consumption from "ultra low" to "moderately low" (because even the regular version is far from a graphics powerhouse), then duh, you're not gonna see high temps. None of this means someone set out to deliberately "cripple" your computer.

Re:How much does this help? (3, Informative)

bzipitidoo (647217) | about a year ago | (#44853711)

Helps a lot. But there are many factors that affect power usage.

Power supplies used to be awful. I've heard of efficiencies as bad as 55%. Power supplies have their own fans because they burn a lot of power. Around 5 years ago, manufacturers started paying attention to this huge waste of power. Started a website, 80plus.org. Today, efficiencies can be as high as 92%, even 95% at the sweet spot.

GPUs can be real power pigs. I've always gone with the low end graphics not just because it's cheap, but to avoid another fan, and save power. The low end cards and integrated graphics use around 20W, which is not bad. I think a high end card can use over 100W.

A CRT is highly variable, using about 50W if displaying an entirely black image at low resolution, going up to 120W to display an all white image at its highest resolution. An older flatscreen, with, I think, fluorescent backlighting, uses about 30W no matter what is being displayed. A newer flatscreen with LEDs takes about 15W.

Hard drives aren't big power hogs. Motors take lots of power compared to electronics, but it doesn't take much to keep a platter spinning at a constant speed. Could be moving the heads takes most of the power.

These days, a typical budget desktop computer system, excluding the monitor, takes about 80W total. That can easily climb over 100W if the computer is under load. So, yes, a savings of 5W or more is significant enough to be noticed, even on a desktop system.

Re:How much does this help? (2)

tlhIngan (30335) | about a year ago | (#44854413)

Power supplies used to be awful. I've heard of efficiencies as bad as 55%. Power supplies have their own fans because they burn a lot of power. Around 5 years ago, manufacturers started paying attention to this huge waste of power. Started a website, 80plus.org. Today, efficiencies can be as high as 92%, even 95% at the sweet spot.

They had to, because at 50% efficiency, if you wanted a 500W power supply, you're talking about drawing 1000W. And that was a practical limit, because a typical 15A outlet provides about 1650W. Don't forget a switching supply also has a horrible power factor, so 1000W could mean 1250VA. Sure, it's only consuming 1000W, but your circuit breaker has to handle the extra 250VA (just over 2A), so your computer was drawing up to 12A instantaneously. At a 0.8 power factor, this limited consumed power to around 1300W, which at 50% efficiency meant your PC could draw up to 650W. (The thing with reactive power is that it involves very real currents: your electric meter will never register it, because you "give back" that power, but all the power-handling equipment still sees its effects.)

At the same time, PC components were drawing more and more power and 650W wouldn't cut it.

So the situation was to be as-is and force everyone to install multiple circuits in their computer room, or to sharpen up efficiency.

Which they did, because at 95% efficiency and a power factor of 1.0, you can now have a power supply that nearly goes all the way - 1500W (a little less to account for component variations and such), and still be within 15A.
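A quick sketch of the wall-outlet arithmetic in the parent post. The 110 V outlet, 15 A breaker, 50% efficiency and 0.8 power factor are the comment's assumed figures, not measurements, and treating the breaker limit as an apparent-power (VA) limit is the same simplification the parent makes:

```python
# Reproduce the parent's wall-outlet arithmetic for old vs. modern PSUs.
# Assumed figures (from the comment, not measurements): 110 V outlet, 15 A breaker,
# 50% efficiency / 0.8 power factor then, ~95% efficiency / PF ~1.0 now.

VOLTS, BREAKER_AMPS = 110.0, 15.0
outlet_watts = VOLTS * BREAKER_AMPS              # ~1650 W available at the outlet

def dc_output_limit(efficiency, power_factor):
    """Max DC output the PSU can deliver without exceeding the breaker limit."""
    real_power_in = outlet_watts * power_factor   # breaker effectively limits VA
    return real_power_in * efficiency

print(f"old PSU (50% eff, PF 0.8): {dc_output_limit(0.50, 0.8):.0f} W DC")  # ~660 W
print(f"new PSU (95% eff, PF 1.0): {dc_output_limit(0.95, 1.0):.0f} W DC")  # ~1.5 kW
```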

Oh well (0)

Anonymous Coward | about a year ago | (#44847783)

Oh well

rdrand (-1, Offtopic)

pr0nbot (313417) | about a year ago | (#44847793)

Maybe there are power savings to be had in further reducing the randomness of rdrand?

Look at all the silicon used for crypto (3, Interesting)

Anonymous Coward | about a year ago | (#44847833)

Take a look at this slide; on the right is the system-on-a-chip version of their dual-core Broadwell processor:

http://hothardware.com/image_popup.aspx?image=big_idf-2013-8.jpg&articleid=27335&t=n

See how much of the chip is assigned to crypto functions? It's almost as big as one of the processor cores. All that silicon used for crypto and it's completely wasted because it cannot be trusted because of the NSA. It wouldn't surprise me if some of that silicon is NSA back door functionality because that's one heck of a lot of transistors to assign to crypto functions.

Re:Look at all the silicon used for crypto (1)

Anonymous Coward | about a year ago | (#44847909)

See how much of the chip is assigned to crypto functions? It's almost as big as one of the processor cores.

No, it's not. Better get your eyes checked.

ARM (2, Interesting)

Anonymous Coward | about a year ago | (#44847805)

ARM meanwhile has 8-core processors suitable for smartphones (and yes, they can run all 8 cores simultaneously).

What Intel needs right now is a chip *now* that uses 30% less power THAN AN EQUIVALENT ARM, with more cores and a lower price, and it also needs to be available as an SoC.

Really, saying your next chip uses 30% less power than the one you just launched means the one you just launched draws 30% too much power. Which is true, but not something to point out.

ARM vs x86 (5, Interesting)

IYagami (136831) | about a year ago | (#44847887)

There is a good comparison of ARM vs x86 power efficiency at anandtech.com: http://www.anandtech.com/show/6536/arm-vs-x86-the-real-showdown [anandtech.com]

"At the end of the day, I'd say that Intel's chances for long term success in the tablet space are pretty good - at least architecturally. Intel still needs a Nexus, iPad or other similarly important design win, but it should have the right technology to get there by 2014."
(...)
"As far as smartphones go, the problem is a lot more complicated. Intel needs a good high-end baseband strategy which, as of late, the Infineon acquisition hasn't been able to produce. (...) As for the rest of the smartphone SoC, Intel is on the right track."

The future for CPUs is going to be focused on power consumption. The new Atom core is two times more powerful at the same power levels than the current Atom core. You can see http://www.anandtech.com/show/7314/intel-baytrail-preview-intel-atom-z3770-tested [anandtech.com] :

" Looking at our Android results, Intel appears to have delivered on that claim. Whether we’re talking about Cortex A15 in NVIDIA’s Shield or Qualcomm’s Krait 400, Silvermont is quicker. It seems safe to say that Intel will have the fastest CPU performance out of any Android tablet platform once Bay Trail ships later this year.
The power consumption, at least on the CPU side, also looks very good. From our SoC measurements it looks like Bay Trail’s power consumption under heavy CPU load ranges from 1W - 2.5W, putting it on par with other mobile SoCs that we’ve done power measurements on.
On the GPU side, Intel’s HD Graphics does reasonably well in its first showing in an ultra mobile SoC. Bay Trail appears to live in a weird world between the old Intel that didn’t care about graphics and the new Intel that has effectively become a GPU company. Intel’s HD graphics in Bay Trail appear to be similar in performance to the PowerVR SGX 554MP4 in the iPad 4. It’s a huge step forward compared to Clover Trail, but clearly not a leadership play, which is disappointing."

Re:ARM vs x86 (5, Insightful)

Sycraft-fu (314770) | about a year ago | (#44848111)

Ya I think ARM fanboys need to step back and have a glass of perspective and soda. There seems to be this article of faith among the ARM fan community that ARM chips are faster per watt, dollar, whatever than Intel chips by a big amount. Also that ARM could, if they wish, just scale their chips up and make laptop/desktop chips that would annihilate Intel price/performance wise. However for some strange reason, ARM just doesn't do that.

The real reason is, of course, it isn't true. ARM makes excellent very low power chips. They are great when you need something for a phone, or an integrated controller (Samsung SSDs use an ARM chip to control themselves) and so on. However that doesn't mean they have some magic juju that Intel doesn't, nor does it mean they'll scale without adding power consumption.

In particular you can't just throw cores at things. Not all tasks are easy to split down and make parallel. You already find this with 4/6-core chips on desktops. Some things scale great and use 100% of your CPU (video encoding for example). Others can use all the cores, but only to a degree. You see some games like this. They'll use one core to capacity, another near to it, and the 3rd and 4th only partially. Still other things make little to no use of the other cores.

So ARM can't go and just whack together a 100 core chip and call it a desktop processor and expect it to be useful.
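The "can't just throw cores at things" point is essentially Amdahl's law. A small sketch; the 90%-parallel workload is an arbitrary illustrative figure, not a measurement of any particular game or encoder:

```python
# Amdahl's law: speedup from N cores when only a fraction p of the work parallelizes.
def amdahl_speedup(p, n_cores):
    return 1.0 / ((1.0 - p) + p / n_cores)

for cores in (2, 4, 8, 100):
    # Example: a workload that is 90% parallel, 10% stubbornly serial.
    print(f"{cores:>3} cores -> {amdahl_speedup(0.90, cores):.2f}x speedup")
# Even 100 cores top out below 10x when 10% of the work stays serial.
```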

Really, Intel is quite good at what they do and their chips actually are pretty efficient in the sector they are in. A 5-10 watt laptop/ultrabook chip does use a lot more than an ARM chip in a smartphone, but it also does more.

Also Intel DOES have some magic juju ARM doesn't, namely that they are a node ahead. You might notice that other companies are talking about 22/20nm stuff. They are getting it ready to go, demonstrating prototypes, etc. Intel, however, has been shipping 22nm parts in large volume since April of last year. They are now getting ready for 14nm. Not "ready" as in far-off talk: they are putting the finishing touches on the 14nm fab in Chandler, they have prototype chips actually out and testing, and they are getting ready to finalize things and start ramping up volume production.

Intel spends billions and billions a year on R&D, including fab R&D, and thus has been a node ahead of everyone else for quite some time. That alone gives them an advantage: even if all other things are equal, they have smaller gates, which gives them lower power consumption.

None of this is to say ARM is bad, they are very good at what they do as their sales in the phone market shows. But ARM fans need to stop pretending they are some sleeping behemoth that could crush Intel if only they felt like it. No, actually, Intel's stuff is pretty damn impressive.

Re:ARM vs x86 (0)

Anonymous Coward | about a year ago | (#44848281)

"In particular you can't just throw cores at things. Not all tasks are easy to split down and make parallel. You already find with with 4/6 core chips on desktops."

Watch the new 8-core Exynos demo: Android uses all 8 cores when loading a web page to filter/scale the images, and Angry Birds uses 4 (of the lower-power cores). We're talking about a 2-core chip that consumes more power and runs the *same* apps.

"Intel spends billions and billions a year on R&D, including fab R&D, and thus has been a node ahead of everyone else for quite some time."
Intel spends LESS than its ARM competitors spend combined.

Samsung's Cortex has been running on 14nm since last December:
http://www.pcmag.com/article2/0,2817,2413522,00.asp

They need to catch up a lot faster.

Re:ARM vs x86 (1)

Nemyst (1383049) | about a year ago | (#44848757)

All that article says is that they've produced test chips on 14nm. Intel is much closer to actually releasing 14nm parts than Samsung, let alone the others.

Also, Intel spends less than the sum of all the ARM manufacturers, sure, but those guys aren't exactly collaborating either. Much of their R&D is redundant (developing Krait and Exynos wasn't cheap I'm sure, yet the two chips are fairly similar in performance), which makes the comparison pointless at best, misleading at worst.

Re:ARM vs x86 (0)

Anonymous Coward | about a year ago | (#44848953)

" Intel is much closer to actually releasing 14nm parts than Samsung, let alone the others."

That remains to be seen; neither has it out, and Samsung had prototypes back in December last year.
They all cooperate via ARM. Mediatek did the 4 low-power + 4 fast cores first; Samsung then changed their chip firmware from 4 low OR 4 high to supporting the full 8, by assigning cores based on thread priority, same as Mediatek.

Intel don't have a low power version to drop down to. That's why it loses on partial tests, it's either full on, or full off and in those configs it can compete, but that's not a real world scenario.

Lots of "intel *will* do this and Intel *will* do that, but the world is leaving them behind they need to get their act together damn fast.

Re:ARM vs x86 (0)

Anonymous Coward | about a year ago | (#44849453)

Intel don't have a low power version to drop down to. That's why it loses on partial tests, it's either full on, or full off and in those configs it can compete, but that's not a real world scenario.

you're right. what intel does do is an excellent version of frequency scaling, turning off unused execution units, etc. this is better than a dedicated low-power core, because it's finer grained and there's some benefit even without software participation.
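The frequency scaling being described is exposed on Linux through the kernel's cpufreq sysfs interface, if you want to watch it on your own machine. A minimal sketch; the file names below are the standard cpufreq sysfs layout, but not every platform driver exposes every file, so missing entries are normal rather than errors:

```python
# Peek at per-core DVFS state via the Linux cpufreq sysfs interface.
# Files may be absent on some platforms/drivers; that's expected, not an error.
from pathlib import Path

for cpu in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
    cpufreq = cpu / "cpufreq"
    if not cpufreq.is_dir():
        continue

    def read(name):
        f = cpufreq / name
        return f.read_text().strip() if f.exists() else "n/a"

    print(f"{cpu.name}: governor={read('scaling_governor')} "
          f"cur={read('scaling_cur_freq')} kHz "
          f"min={read('scaling_min_freq')} max={read('scaling_max_freq')} kHz")
```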

Re:ARM vs x86 (1)

dkf (304284) | about a year ago | (#44851407)

you're right. what intel does do is an excellent version of frequency scaling, turning off unused execution units, etc. this is better than a dedicated low-power core, because it's finer grained and there's some benefit even without software participation.

What's it like when you put a real workload on it? Benchmarks are interesting and all, but the proof is always reality. (I mistrust benchmarking because it's so easy to tune things to do well in a benchmark without actually being particularly good on anything else; this has definitely happened in the past with CPU design too.)

Mind you, Intel's main problem is that there's a large and expanding market out there where their CPUs just aren't the things that people choose. People select ARM-based systems because that's what the existing software is written for and what the existing users have, and the benefit of integrating with existing platforms is very strong indeed.

Re:ARM vs x86 (0)

Anonymous Coward | about a year ago | (#44862859)

(I mistrust benchmarking because it's so easy to tune things to do well in a benchmark without actually being particularly good on anything else; this has definitely happened in the past with CPU design too.)

Can I get a citation for a case in which a CPU benchmark was so far off the actual workload that a CPU that benchmarked significantly lower outperformed one that benchmarked significantly higher?

Re:ARM vs x86 (1)

K. S. Kyosuke (729550) | about a year ago | (#44848315)

In particular you can't just throw cores at things. Not all tasks are easy to split down and make parallel.

Hah. That will change once programmers actually *learn* program algebra.

Re:ARM vs x86 (3, Funny)

garethjrowlands (694510) | about a year ago | (#44849029)

If you have a way to split all tasks down and make them parallel, could you please share it with the rest of us? If it's this 'program algebra' of which you speak, could you please provide us with a link?

Re:ARM vs x86 (0)

Anonymous Coward | about a year ago | (#44849327)

Just follow Guy Steele Jr.'s work. That's what he's working on right now (after his work on Scheme, Common Lisp, and Java).

Re:ARM vs x86 (0)

Anonymous Coward | about a year ago | (#44849107)

Nobody could possibly know anything about academic accomplishments. Pay-walls and figments of irony towers are preventing seeing anything of value unless working for an institution committed to breaking down walls and cutting through irony.

Re:ARM vs x86 (1)

phantomfive (622387) | about a year ago | (#44851485)

wtf is 'program algebra'

Re:ARM vs x86 (0)

Anonymous Coward | about a year ago | (#44869861)

Maybe he means functional programming? Anyway, that can only be parallelized to a point, as any useful program ends up doing I/O more than rarely, and I/O messes up the properties of a functional program that allow really aggressive code modification and optimization (sometimes including automatic parallelization of tasks). In the procedural, state-based programming paradigm such a task

Re: ARM vs x86 (0)

Bram Stolk (24781) | about a year ago | (#44848599)

x86 needs to die a.s.a.p. because of the legacy crap it carries.
Please look up the A20 gate phenomenon.
Or just the pain that multiple FP/SIMD implementations cause devs: mix the wrong ones and your performance crashes.

x86 architecture is hampering progress because it is so successful.

Re: ARM vs x86 (2)

garethjrowlands (694510) | about a year ago | (#44848969)

Nobody's running their x86 in a mode that's impacted by A20 any more. And hardly anybody's writing in assembler. So it doesn't matter. And for the minority who *are* writing in assembler, ARM isn't going to help them (unless they're writing ARM assembler of course).

If x86's legacy carried a significant power or performance impact, it *would* matter. But it doesn't.

Re: ARM vs x86 (0)

Anonymous Coward | about a year ago | (#44848975)

You can implement a PC architecture without the A20 gating. It just won't be IBM compatible. These days, in a tablet or PC running something that isn't a desktop OS, I doubt that's a big deal.

The FP/SIMD thing should be fixed by useful library implementations and compilers, then whored very hard. You shouldn't have to care about mixing them. That's definitely an example of "The tools should be smarter."

Re: ARM vs x86 (0)

Anonymous Coward | about a year ago | (#44848997)

ARM needs to die a.s.a.p. because of the legacy crap it carries.
Please look up the phenomenon "constant tables", universal predication, and combined alu+shift instructions.
Or just the pain that multiple high-density encodings (Thumb, Thumb2, Jazelle) cause devs: mix the wrong ones and you performance crashes.

ARM architecture is hampering progress because it is so successful.

Re: ARM vs x86 (0)

Anonymous Coward | about a year ago | (#44849503)

all that *and* arm still has multiple FP/SIMD implementations. off the top of my head 8500, vfp (multiple versions), neon, etc.
since x86 keeps all the old versions, for non-performance-critical work, just use the newest version that was around on the oldest hardware you plan to ever support. these days sse can be counted on.

Re: ARM vs x86 (2)

nateman1352 (971364) | about a year ago | (#44854027)

Actually it appears that Intel removed the A20 line starting with Haswell.

Check out page 271 of the Intel System Programmers Manual Vol. 3A from June 2013 [intel.com] . Notice the following excerpt: "The functionality of A20M# is used primarily by older operating systems and not used by modern operating systems. On newer Intel 64 processors, A20M# may be absent."

Now check out page 368 from the May 2011 [fing.edu.uy] version of that same document. In the same paragraph, the statement above is not present.

From this, we can infer that between May 2011 and June 2013 some new Intel chip dropped support for A20M#. In that timeframe, Ivy Bridge and Haswell are the only two chips that were released. Since Ivy Bridge is the same architecture as Sandy Bridge, just manufactured on 22nm, and we know that Sandy Bridge did have A20M#, I think it's a fairly safe assumption that Haswell is the first x86 chip that has _finally_ done away with A20M#. That said, it would be nice if Intel actually said in the manual which chip was the first to remove it.

Does someone have a new Haswell system that they can do a quick DOS ASM program on to verify this? Even better, if someone has an Ivybridge system we can narrow this down.

Re: ARM vs x86 (2, Informative)

Anonymous Coward | about a year ago | (#44855437)

Wikipedia claims [wikipedia.org] that "Support for the A20 gate was removed in the Nehalem microarchitecture." but does not provide a citation.

Re:ARM vs x86 (1)

Anonymous Coward | about a year ago | (#44848663)

ARM (Acorn Risk Machines) already made desktop chips (and computers) that wiped the floor with Intel's... You are just too young. It was in the 80s. Google Acorn Archimedes.

Re:ARM vs x86 (0)

Anonymous Coward | about a year ago | (#44848741)

Ooops... I meant Acorn Risc Machines. Acorn was one of the first Reduced Instruction Set Computer adopters, and yes, i think they are a big RISK to Intel desktop supremacy.

Re:ARM vs x86 (0)

Anonymous Coward | about a year ago | (#44851523)

The performance characteristics of chips in the 80s is hardly relevant to the performance characteristics today.

Re:ARM vs x86 (1)

Sycraft-fu (314770) | about a year ago | (#44875537)

Right which is why I can go out and buy one of those right now! ...

Oh wait I can't. They haven't made a desktop chip since the ARM2 in the 80s.

We are talking about the actual real world, here today, where you can buy Intel laptop, desktop, and server CPUs but not ARM CPUs in those markets.

Re:ARM vs x86 (0)

Anonymous Coward | about a year ago | (#44848711)

- Intel can't compete on price. They'll have to forget their current margins of 60%.
- Intel has redefined how they measure watts consumed by their cpus. They really want to announce a small number so they measure only the core not the "system" of the system-on-chip.
- nanometer size is PR. Everyone measures differently.
- Intel is doing so well, that 40-50% of their fabs are idle.
- Apple and Samsung have 60% of the smartphone market. And they produce their own cpus. Why would they drop their own chips in favor of Intel?

The many ARM licensees have created a perfect storm of competition for Intel. Even if Intel can match performance, they'll have to match price, too. With shrinking desktop sales (-10%/year), how many SoCs does Intel have to sell to make up for their lost desktop CPU sales? $200 per desktop CPU vs. $20 per ARM SoC.

Re:ARM vs x86 (1)

phantomfive (622387) | about a year ago | (#44849203)

- Apple and Samsung have 60% of the smartphone market. And they produce their own cpus. Why would they drop their own chips in favor of Intel?

Apple doesn't manufacture their own CPUs. I don't know why Samsung would drop their own chips in favor of Intel's, but they already have in at least one tablet.

Re:ARM vs x86 (0)

Anonymous Coward | about a year ago | (#44849775)

intel can already match performance; i am waiting for them to match battery life too before i buy a smartphone.

as for price, i would be prepared to pay even 100% more than what an iphone costs if my phone could run windows 8.1 and all those x86 desktop apps i use at work or home.

it would be a plus if it could play crysis/world of warcraft in a lower resolution (not a special smartphone version, but the real 20GB desktop version, just with reduced details); then i would not need a desktop PC, just a monitor, keyboard and a smartphone with mini-USB/mini-HDMI.

as for buying an ARM smartphone, no chance until they add windows/x86 compatibility

Re: ARM vs x86 (1)

expatriot (903070) | about a year ago | (#44851887)

14nm is expensive to make. At least double patterning, relatively low yield, and only worth it if you want an expensive high-performance part.
ARM chips are already being made at 14/20nm, so this size is not a long-term advantage for Intel.
A lot of chips are still made above 60nm because it's cheap. Some are even made at 160nm to reduce mask cost by single patterning.
There is no doubt that Intel currently makes the highest-performance parts at an equivalent power dissipation.

Re:ARM vs x86 (0)

Anonymous Coward | about a year ago | (#44851917)

So ARM can't go and just whack together a 100 core chip and call it a desktop processor and expect it to be useful.

no, they can't.

but an 8-16 core arm chip based on current 'best available' revisions, paired with a discrete gpu, media chip (encoder/decoder), sufficient ram and expansion/ports (usb, sata, lan, pcix)... arm *can* be a desktop for 80% of users.

Re:ARM vs x86 (0)

Anonymous Coward | about a year ago | (#44853639)

Also Intel DOES have some magic juju ARM doesn't, namely that they are a node ahead. You might notice that other companies are talking about 22/20nm stuff. They are getting it ready to go, demonstrating prototypes, etc. Intel however has been shipping 22nm stuff, in large volume, since April of last year. They are now getting ready for 14nm. Not ready as in far off talking about, they are putting the finishing touches on the 14nm fab in Chandler, they have prototype chips actually out and testing, they are getting ready to finalize things and start ramping up volume production.

Indeed, being a node ahead is quite valuable. Interestingly the performance gains from being a node ahead almost precisely equal the performance loss from using a horrific architecture called "x86". I'd like to see what kinds of performance processors Intel could make if they were willing to do ARM processors. I suspect they would blow everyone else away.

They don't make the claim (0)

Anonymous Coward | about a year ago | (#44848215)

Intel don't compare the power drawn by their cores to ARM's; they compare it to the current generation of their own chips. Likewise you did the same: "The new Atom core is two times more powerful at the same power levels than the current Atom core".

The Anandtech review is just wishful thinking; they compared two loads, one at idle and one at heavy load.
An ARM core turned off uses similar power to an Intel core turned off. A core at full power doesn't give a measure of performance per watt, it gives a maximum wattage. Intel can simply set the maximum wattage, making the test pointless.

You brag on behalf of Intel, because they know they have nothing to brag about yet.

Intel are still behind on graphics (0)

Anonymous Coward | about a year ago | (#44850041)

"Intel’s HD graphics in Bay Trail appear to be similar in performance to the PowerVR SGX 554MP4 in the iPad 4."

The charts in that article show iPad 4 significantly ahead of Baytrail in most offscreen tests and equal or better in the remainder, so I've no idea how the article comes to the conclusion that they are similar in performance.

[ iPad thrashes baytrail onscreen because of the lower resolution, so that doesn't count! ]

Shield matches Baytrail (0)

Anonymous Coward | about a year ago | (#44850087)

"Whether we’re talking about Cortex A15 in NVIDIA’s Shield or Qualcomm’s Krait 400, Silvermont is quicker. It seems safe to say that Intel will have the fastest CPU performance out of any Android tablet platform once Bay Trail ships later this year."

Geekbench has a 2.39 GHz Bay Trail matching a 1.91 GHz Shield, so I would hardly call that prediction "safe" given that there will be a number of new tablets out in that timeframe.

http://browser.primatelabs.com/geekbench3/compare/9041?baseline=52725 [primatelabs.com]

Re:ARM (-1)

Anonymous Coward | about a year ago | (#44847949)

Intel is more power efficient under load than any 8-core, 16-core, whatever-core chinese arm crap.

At idle it matched ARM more than a year ago.

Intel: Suck it, ARM fanboys! (0)

Anonymous Coward | about a year ago | (#44847883)

ARM is finished! In 3 years 90% of all smartphones will be x86! Mwahahah!!

What's the point of 30% saving? (0)

Anonymous Coward | about a year ago | (#44848003)

Seriously, WTF is with 30%?

I'll buy when they make it 3x as fast using only 3% of the power.

I Don't Trust Intel Enough to Switch Back to Them (0)

Anonymous Coward | about a year ago | (#44848125)

I dropped Intel when they added the black box with network activity that is the Intel Management Engine to all of their systems.

And no, I don't wear a tinfoil hat. I use lead based paint in my house instead. It's waaaaay easier.

integrated graphics solution? (1)

Lawrence_Bird (67278) | about a year ago | (#44848377)

At least that is what is implied. That is great for corporate energy use, but when will the real power hogs be addressed? Expansion video cards can use many multiples of the power consumed by the rest of the system combined.

Yawn (1)

WhoBeDaPlaya (984958) | about a year ago | (#44848541)

If the trend keeps up, we'll get a 1-3% IPC improvement, and even less overclocking headroom with Broadwell. It's absolutely disappointing that after waiting ~5 years, a fully overclocked 4770K (~4.4GHz) is only 1.37x as fast as a fully overclocked i7 920 (~4GHz).

Re:Yawn (2, Insightful)

Anonymous Coward | about a year ago | (#44849601)

The IPC has hit a brick wall. The proportion of time spent on cache misses and branch mispredictions simply is a limit.

After all, IBM's Power8 will have 8 threads/core (as announced at Hot Chips, though as far as I know there has been no article about it on Slashdot). I'm not sure 8 is very useful, but apparently on many workloads the 4 threads/core of Power7/Power7+ gives more throughput than 2 threads. Several threads per core increase aggregate IPC, but not per-thread IPC. The reason I'm doubtful that 8 threads/core matters on Power8 is that there are only 2 LSUs (load/store units), which means that each thread can only access memory every 4 cycles on average. For a RISC processor with 32 registers and 2 LSUs, 2 threads are an obvious way to keep the execution units busy, and 4 threads can certainly increase throughput, but at 8 I start to have my doubts, since they have to share the caches despite the 4(!) levels of cache: L1 and L2 are per core, L3 is mostly per core if I understand correctly, and L4 is completely shared among all cores.

For x86, Intel has not gone (yet) above 2 threads, and they seem to go rather towards a large and wide register file with AVX-512, introducing a new instruction encoding which is even more baroque (tough job, but it's Intel we are speaking of) than everything they have piled on top of the original 8086, including a new 4-byte prefix for a start. I'm also a bit doubtful about AVX-512: the instruction to save/restore FPU context has been extended and now stores/loads over 2.5kB of data, which is more context than any other processor I know of (including Itanic) and will certainly impact context-switch and signal-delivery latencies. It is also incredibly intertwined with the memory protection extensions (MPX) and adds complexity to supporting MPX.

On x86, last time I looked, there was still only one load and one store unit (somewhat less flexible than 2 general-purpose LSUs, since there are generally more loads than stores), but the big problem was with 32-bit code, which was spilling like mad because of too few available registers. amd64 (whatever Intel claims, they had to follow AMD on this one, and the NIH syndrome shows) often gives 9 more available registers (in theory 8, but position-independent code has to sacrifice a register on 32-bit), which has made the memory traffic due to register spills negligible in practice. Intel stayed far too long minimizing the importance of 64-bit support (NIH syndrome?). However, they have finally admitted that 64-bit is mainstream and 32-bit is fading away.

Trying to improve the IPC on x86 is a nightmare, because the instruction decoder is insanely complex (and the complexity is growing): they have gone from 3 instructions/clock on the PPro to 4 (only a 33% improvement in 17 years; I'm aware that this is a meaningless figure). I don't remember how instruction decoding is done on Intel processors, but I remember a description of an AMD processor in which they simply distribute the instruction stream to 16 decoders, each shifted by one byte, and then cancel the results of the decoders that were not fed an instruction starting on an instruction boundary. That's gross, and needs a lot of wasted power and transistors for no useful work. Actually some Intel processors have dual instruction caches: one encoded in x86, and one recoded to something easier to digest. Of course, this does not come for free (both in silicon area and in power consumption, coherency logic, etc.).

Re:Yawn (1)

Khyber (864651) | about a year ago | (#44850243)

"a fully overclocked 4770K (~4.4GHz) is only 1.37x as fast as a fully overclocked i7 920 (~4GHz)."

You got some benchmarks on that?

Re:Yawn (1)

WhoBeDaPlaya (984958) | about a year ago | (#44851533)

I believe it's generally accepted that Haswell ≈ 110% of Ivy Bridge, Ivy Bridge ≈ 113% of Sandy Bridge, and Sandy Bridge ≈ 125% of Nehalem, not counting special cases such as AVX2. Hence, a 4.4GHz 4770K ≈ a 5.5GHz i7 920, or ~137%.

Re:Yawn (0)

Anonymous Coward | about a year ago | (#44852125)

... that still doesn't make sense, does it?
[Nehalem] = n = 1
[Sandy Bridge] = s = 1.25n
[Ivy Bridge] = i = 1.13s
[Haswell] = h = 1.1i

1.1 * 1.13 * 1.25n = 1.55375n (per clock)

4.4 GHz / 4 GHz = 1.1

1.55375n * 1.1 = 1.709n

or, by your own per-generation numbers, a 4.4GHz Haswell should be ~71% faster than a 4GHz Nehalem
(as opposed to the claimed 37% faster)
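For anyone who wants to redo that compounding, a tiny sketch; the per-generation IPC gains and the clock speeds are the thread's rough estimates, not benchmark results:

```python
# Compound the parent's rough per-generation IPC estimates with the clock ratio.
# (These generation-over-generation figures are the thread's estimates, not benchmarks.)
ipc_gains = [1.25, 1.13, 1.10]   # Nehalem->Sandy, Sandy->Ivy, Ivy->Haswell
clock_ratio = 4.4 / 4.0          # 4770K @ 4.4 GHz vs i7 920 @ 4.0 GHz

ipc_ratio = 1.0
for gain in ipc_gains:
    ipc_ratio *= gain

print(f"per-clock: {ipc_ratio:.2f}x, overall: {ipc_ratio * clock_ratio:.2f}x")
# per-clock: 1.55x, overall: 1.71x -- which is why the claimed 1.37x doesn't line up.
```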

Power density (1)

Anonymous Coward | about a year ago | (#44850489)

Suppose they actually scaled the transistors proportionally with the 22nm to 14nm feature-size reduction. That would be a reduction to less than half the area, but still 70% of the power. That means the power density (and thus heat per unit area) would be higher, about 1.7x the old value. One of the hopes for the smaller process is to be able to run faster, which means even more power. This seems unrealistic given that current processors are already thermally limited. We are way past the point where die shrinks offer faster operation, or increases in density close to what would be expected based on the feature-size changes. It might help get some latency-sensitive stuff closer together, maybe fit some more cache or cores, and perhaps lower power and area/cost for low-power, non-thermally-limited chips, but it's not going to do a lot to get you a much faster desktop for single-threaded stuff, and only ~30% for highly multi-threaded (thermally limited) work.
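The scaling argument in this comment can be checked in a couple of lines, under the same ideal-scaling assumptions the comment makes (area shrinking with the square of the feature size, power dropping to 70%):

```python
# Ideal-scaling estimate of power density going from 22 nm to 14 nm,
# assuming area scales with the square of feature size and power drops to 70%.
old_nm, new_nm = 22.0, 14.0
area_ratio = (new_nm / old_nm) ** 2        # ~0.40x the area
power_ratio = 0.70                         # the ~30% saving from the demo
density_ratio = power_ratio / area_ratio   # watts per unit area vs. the old chip
print(f"area: {area_ratio:.2f}x, power density: {density_ratio:.2f}x")  # ~0.40x, ~1.73x
```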

setting up to lose to AMD again (1)

slashmydots (2189826) | about a year ago | (#44853259)

So that's great for mobile chips in devices with batteries, but they tend to go the same direction with their desktop chips. That opens the door for AMD to release a chip with double or triple the wattage that's less efficient but faster overall, priced at a far better speed-vs-price ratio that takes money right out of Intel's pockets. My advice to Intel: release a hyper-efficient but still 90W-ish 4.5-5.0GHz chip. AMD wouldn't get anywhere near that kind of performance.

More intel vaporware (0)

Anonymous Coward | about a year ago | (#44854183)

Like Xeon Phi, which you still can't just go out and buy; you won't be able to buy this before you die.
