
Intel Turbo Boost vs. AMD Turbo Core Explained

kdawson posted more than 4 years ago | from the highly-pressurized-gas dept.

AMD

An anonymous reader recommends a PC Authority article explaining the whys and wherefores of the Intel Turbo Boost and AMD Turbo Core approaches to wringing more apparent performance out of multi-core CPUs. "Gordon Moore has a lot to answer for. His prediction in the now-seminal 'Cramming more components onto integrated circuits' article from 1965 evolved into Intel's corporate philosophy and has driven the semiconductor industry forward for 45 years. That prediction, that the number of transistors on a CPU would double every 18 months, has pushed CPU design into the realm of multicore. But the thing is, even now there are few applications that take full advantage of multicore processers. This has led to the rise of CPU technology designed to speed up single-core performance when an application doesn't use the other cores. Intel's version of the technology is called Turbo Boost, while AMD's is called Turbo Core. The article neatly explains how these speed up your PC, and the difference between the two approaches. Interesting reading if you're choosing between Intel and AMD for your next build."


WRI (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#32092458)

I win, I am so fool of win.

Can we get.. (5, Funny)

vjlen (187941) | more than 4 years ago | (#32092484)

...Turbo switches on our workstations again like back in the day?

Re:Can we get.. (1)

Bugamn (1769722) | more than 4 years ago | (#32092550)

Oh, man, you tickled my nostalgia.

Re:Can we get.. (1)

Cryacin (657549) | more than 4 years ago | (#32092980)

I have a handle on the side of my laptop that I can crank to make it go faster.

Re:Can we get.. (1, Funny)

Anonymous Coward | more than 4 years ago | (#32093094)

Broke mine a while ago, now there's a vise-grip hanging off of one side. The real sad thing is using the original handle required less effort -- now when I come up from the basement sweating and panting with my laptop, the wife gives me funny looks.

Re:Can we get.. (4, Funny)

sznupi (719324) | more than 4 years ago | (#32092564)

Plus a straightforward way of figuring out how to best assign processes to particular cores? (which ones are faster and which are slower)
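On Linux the pinning half of that already exists by hand via taskset (util-linux); knowing which core turbo will favor is the part you'd still be guessing at. A minimal sketch -- the binary name and PID here are made up:

    taskset -c 0 ./some_game    # launch a process pinned to core 0
    taskset -cp 2 1234          # move already-running PID 1234 onto core 2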

PS. (3, Interesting)

sznupi (719324) | more than 4 years ago | (#32092604)

For that matter, can we have one more thing: a way to limit max core usage to, say, 10%? (Imagine you're playing an old game on a laptop, for example Diablo 2; many games have the unfortunate habit of consuming all available CPU power whether they need to or not, taking the battery with them.)

Re:PS. (4, Informative)

icebraining (1313345) | more than 4 years ago | (#32092626)

aptitude install cpulimit
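For example, to cap a runaway game at roughly 10% of a core (flags per cpulimit's man page; the process name and PID are just placeholders):

    cpulimit -e some_game -l 10    # limit by executable name
    cpulimit -p 1234 -l 10         # or limit an existing PID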

Re:PS. (2, Informative)

bhtooefr (649901) | more than 4 years ago | (#32092638)

It's called SpeedStep. (OK, it doesn't reduce the CPU usage, but it reduces the CPU clock speed, which is more effective.)

Re:PS. (2, Insightful)

sznupi (719324) | more than 4 years ago | (#32092748)

Yes, one can force SpeedStep setting - but the game would still be consuming all available power, preventing the CPU from going to pseudo sleep states (which is even more effective)

Re:PS. (2, Insightful)

fedcb22 (1215744) | more than 4 years ago | (#32092808)

Really? I always thought that going to a lower voltage mode was much more effective than C1/C2/C3. That's why SpeedStep is used, against normal CPU throttling.

Re:PS. (2, Interesting)

sznupi (719324) | more than 4 years ago | (#32092900)

What do you mean "that's why SpeedStep is used, against normal CPU throttling"? SpeedStep is CPU throttling; but on top of that, C-states are also highly effective, or at least Thinkpad Wiki thinks so [thinkwiki.org], and I see no reason to disbelieve them...
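If you want to check this on your own machine, recent Linux kernels expose per-core C-state residency through sysfs -- a rough sketch, and paths vary by kernel version:

    grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/name    # which C-states exist
    grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/time    # microseconds spent in each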

Re:PS. (0)

Anonymous Coward | more than 4 years ago | (#32092642)

man cpulimit [sourceforge.net]. man numactl [freshmeat.net].

Re:PS. (2, Informative)

Animaether (411575) | more than 4 years ago | (#32092816)

On the off chance that you're running on Windows:
http://mion.faireal.net/BES/ [faireal.net]
(ugly UI, does the job)

Re:Can we get.. (1, Insightful)

indre1 (1422435) | more than 4 years ago | (#32092614)

This seems like another marketing trick. I bet all the cores could easily run at the "super turbo boost" clock rates with an average heatsink and fan...

Re:Can we get.. (2, Insightful)

sznupi (719324) | more than 4 years ago | (#32092810)

But perhaps not without exceeding the amperage value for which power lines are rated...

Re:Can we get.. (3, Interesting)

Anpheus (908711) | more than 4 years ago | (#32092996)

Yes, but it draws more power and puts out more heat than they'd like. Binning is also a bigger deal than you think with CPUs. My CPU can be overclocked significantly, because I got a lucky unit, but not nearly as much as what some people get. My CPU isn't stable at the memory speeds most overclockers see online, either. So in some ways I got a good CPU; in some, it's meh.

On the other hand, there's no way I'd sell a company my CPU & motherboard at the speed I've boosted it up to. Not a chance. It's not 100% stable; there are infrequent glitches, etc. I improved my cooling, decreased my overclock, and I've still had it not wake up from S3 sleep and do a couple of other odd things.

So no, super turbo boost is not what you think it is. Is it a marketing ploy? Everything is a marketing ploy, but it's also a useful feature. Especially on laptops, where all but one core of the CPU can completely shut down and the remaining one can nearly double its clock speed.

Re:Can we get.. (4, Interesting)

postbigbang (761081) | more than 4 years ago | (#32093072)

So what you do is get people to code apps that use lighter-weight threads. Apple's GCD and FOSS ports of GCD spawn low-cost (as in overhead) threads so you can cram more in, make them smaller, and relieve part of the dirty cache memory problem in using them.

Spawn threads across cores, keep thread life simpler. Make those freaking cores actually do something. It can be done. It's just that MacOS or Linux or BSD have to be used to run the app/games.

Don't get me started on GPU threading.

Re:Can we get.. (0)

Anonymous Coward | more than 4 years ago | (#32092646)

Back in the day??? What are you talking about??? Today I was using a computer at work that only had 640k of memory and a turbo button to go with it. Amazingly, 640k was enough for me to complete the task at hand, and I expect should be enough for everyone else.

Re:Can we get.. (0)

Anonymous Coward | more than 4 years ago | (#32093082)

We could, but that button is delivered as software now. For years actually.

Re:Can we get.. (0)

Anonymous Coward | more than 4 years ago | (#32093284)

I read the article; there's nothing about a math coprocessor being implemented, so I highly doubt you'd see even marginal improvements in its performance if you switch turbo on...

Re:Can we get.. (1)

larry bagina (561269) | more than 4 years ago | (#32093416)

My netbook (Eee PC) has a low-power/high-performance button.

Huh? (5, Insightful)

Wyatt Earp (1029) | more than 4 years ago | (#32092490)

That read like two press releases pasted together. It did very little to explain what is going on beyond press-grade buzzwords.

Re:Huh? (5, Informative)

Darkness404 (1287218) | more than 4 years ago | (#32092518)

Essentially, both just detect whether other cores can be powered down, power them down, and then crank up the clock speed on the remaining cores, because heat/power doesn't matter as much when the other cores are turned off or idling in the low megahertz. AMD's solution is more of an afterthought, because their architecture is older than Intel's, while Intel's was built into the architecture.

Re:Huh? (1, Redundant)

Wyatt Earp (1029) | more than 4 years ago | (#32092556)

The description said - "This article neatly explains how these speed up your PC, and the difference between the two approaches. Interesting reading if you're choosing between Intel and AMD for your next build."

But it really doesn't have any performance information to help you choose.

Re:Huh? (2)

gparent (1242548) | more than 4 years ago | (#32093312)

Performance information?
Intel is better.
Has been that way for many years now. Yes, it's more expensive.

Re:Huh? (0)

Anonymous Coward | more than 4 years ago | (#32092548)

I think that's the point; it's meant to be a summary of the two approaches. If they're constantly referring to them in their various benchtesting articles, it makes sense to have an explainer somewhere. I actually found it quite handy: the sort of assumed knowledge that would be useful in CPU/system reviews.

Re:Huh? (3, Informative)

DeadboltX (751907) | more than 4 years ago | (#32092734)

The way I understand it (and I could be wrong) is that on a quad-core 1.6GHz i7, each core is actually capable of going up to 2.8GHz, although I'm not sure if they are all capable of going to 2.8GHz at the same time. If you run a program that can't take advantage of more than one core, and it starts maxing out that core at 100%, the CPU will increase the clock speed of that core, up to 2.8GHz, until it isn't maxed out anymore. In order to keep energy consumption and heat down, the CPU will also lower the clock speeds of the other cores as needed.

With older multi-core processors, if you had a quad core at 1.6GHz and a program that could only use one core, you effectively just had a 1.6GHz processor, in which case a dual core at 2.8GHz would be way better. With Turbo Boost you can essentially get the best of both worlds.
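A crude way to watch this happen on Linux is to keep an eye on the per-core clocks in /proc/cpuinfo while a single-threaded load runs (reported turbo frequencies aren't always exact, so treat it as a rough indicator):

    watch -n1 "grep 'cpu MHz' /proc/cpuinfo"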

Re:Huh? (0, Flamebait)

K2tech (1685250) | more than 4 years ago | (#32093212)

When will we stop referencing Moore's Law? It has about as much relevance today as Tom Watson's 1943 statement that "there is a world market for maybe five computers." Really, let it go. As a child I loved the TV show SPACE: 1999. But it's gone now and not relevant, the same as Moore's Law. No offense to the man himself; I truly do appreciate his work and discoveries. But it is time to move on. Say good night, Gracie.

Re:Huh? (4, Insightful)

oldhack (1037484) | more than 4 years ago | (#32093232)

The damn thing could (and should) have been a two-paragraph memo; it reads like it was written by a high-school kid trying to fill a page quota. Oh well -- the info, shallow as it is, is still something I didn't know before.

30 MINUTES AND ONLY THREE POSTS?? (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#32092520)

What is it with slashdot? Too much Apple crap so everyone that mattered left?

Look. You know Apple is crap. I know it's crap. Even they know it's crap, but they don't care.

Re:30 MINUTES AND ONLY THREE POSTS?? (-1, Flamebait)

Anonymous Coward | more than 4 years ago | (#32093106)

Fuck off and die, or move to some riverfront rental in Nashville or a beachfront shack in Biloxi, or move to Greece, douchebag.

No turbine, no turbo (-1, Troll)

Anonymous Coward | more than 4 years ago | (#32092538)

Why do they insist on using the word turbo? You will not find a turbine (OK, a rotor with blades) in your CPU; it does not have a turbo, no matter what the marketing folks tell you.

Re:No turbine, no turbo (1)

CorporateSuit (1319461) | more than 4 years ago | (#32092742)

It may have no turbines, but it does have an oscillator!

Cooling fan noise anyone? (1)

Twinbee (767046) | more than 4 years ago | (#32092572)

Rather than cranking up the GHz of each core to obtain more speed, I wish they'd concentrate on keeping it cool. I hate the fan noise, and multicore was a way around that because it rarely heats up with standard usage. Hence less or no cooling required.

"We've got to find some way to get that fan to rotate to annoy the users... ah I have a cunning plan..."

Re:Cooling fan noise anyone? (1, Informative)

Anonymous Coward | more than 4 years ago | (#32092692)

Try passive cooling: http://www.vonslatt.com/proj-cc.shtml

Re:Cooling fan noise anyone? (5, Informative)

washu_k (1628007) | more than 4 years ago | (#32092862)

There are a multitude of aftermarket CPU coolers which are much quieter than the stock ones from Intel or AMD. Some chips can even be run passive with the right heatsink. Take a look at the reviews on http://www.silentpcreview.com/ [silentpcreview.com]

Re:Cooling fan noise anyone? (0)

Anonymous Coward | more than 4 years ago | (#32093386)

I put my current system together 16 months ago (Core i7-920) and watched core temperatures get pretty high (79 degrees Celsius, or 174 degrees Fahrenheit). I noticed that in testing these chips, Intel used a Thermalright cooler instead of their stock boxed cooler. I got a Thermalright True 120, replaced the boxed cooler, and watched the temperature drop 10 degrees at the top end and 20 degrees at the bottom; it also takes much longer to get warm.

Re:Cooling fan noise anyone? (-1, Redundant)

value_added (719364) | more than 4 years ago | (#32093534)

There are a multitude of aftermarket CPU coolers ...

I spent several mostly unproductive years occupying my spare time (while offering up my spare dollars) by sifting through the specs, recommendations, user comments, product lifecycles, and marketing campaigns of what I'll call the Cool and Quiet industry. I say that as preface to the following comment:

    They're not coolers, they're fucking fans. Get over it.

Some are "less cheap" than others, but they're all made of plastic, they all spin, they all make noise, and they all suck. Or blow. Whatever. And eventually, they fall.

And if "noise" isn't a problem for you, you're either deaf, wear head phones, have grown accustomed to the excessive ambient din of your environment, never learned to tell the difference, live in a trailer park where classic 70s rock rules the day, are slightly retarded, or all the above.

Re:Cooling fan noise anyone? (2, Interesting)

ozbird (127571) | more than 4 years ago | (#32092984)

Do both.

I bought an Intel i7-860 recently, and the supplied HSF is barely able to keep the core temperatures under 95 deg. C with eight threads of Prime95 running. Eek!! I replaced it with a cheap Hyper TX3 cooler (larger coolers won't fit with four DIMMs fitted), and it runs at least 20 degrees C cooler under the same conditions. The supplied fan is a little noisy under full load, but for gaming etc. it's not a problem.

Turbo Boost is cute, but I've opted to overclock it at a constant 3.33GHz (up from 2.8GHz) instead for predictable performance, with no temperature or stability issues. YMMV.

Re:Cooling fan noise anyone? (4, Interesting)

SanityInAnarchy (655584) | more than 4 years ago | (#32093456)

predictable performance

Predictable power-drain, you mean, and a predictable shortening of the life of your hardware -- assuming it doesn't just overheat and underclock itself, which I've seen happen a few times.

CPU scaling has been mature for a while now, and it's implemented in hardware. Can you give me any real examples of it causing a problem? The instant I need that speed (for gaming, etc.), it's there. The rest of the time, I'd much rather it coast at 800MHz all around, especially on a laptop.
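For the curious, on a 2010-era Linux box with cpufrequtils this is all inspectable and tweakable -- a sketch, assuming that package and the usual sysfs paths:

    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor    # e.g. 'ondemand'
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq    # current speed, in kHz
    sudo cpufreq-set -g ondemand                                 # scale up only under load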

with no temperature or stability issues. YMMV.

Understatement of the year.

Overclocking is a bit of a black art, for a number of reasons. First problem: How do you know it's stable? Or rather, when things start to go wrong, how do you know if it's a software or a hardware issue? The last time I did this was taking a 1.8GHz machine to 2.7GHz. I ran SuperPi, 3DMark, and a number of other things, and it seemed stable, but occasionally crashed. Clocked it back to 2.4GHz and it crashed less often, but there were occasional subtle filesystem corruption issues -- which was much worse, because I had absolutely no indication anything was wrong (over months of use) until I found my data corrupted for no apparent reason. Finally set it back to the factory default (and turned on the scaling) and it's been solid ever since.

Second problem: Even with the same chip, it varies a lot. All that testing I did is nothing compared to how the manufacturer actually tests the chip -- but they only test what they're actually selling. That means if they're selling you a dual-core chip that's really a quad-core chip with two cores disabled, it might just be surplus, the extra cores might be fine, but they haven't tested them. Or maybe they have, and that's why they sold it as a dual-core instead of quad-core.

So even if you follow a guide to the letter, it's not guaranteed.

I'm sure you already know all of the above, but I'm at the point in my life where, even as a starving college student, even as a Linux user on a Dvorak keyboard, it's much saner for me to simply buy a faster CPU, rather than trying to overclock it myself.
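On the "how do you know it's stable" question above: one common smoke test is to load every core for a long stretch while watching temperatures. A sketch assuming the stress and lm-sensors tools are installed -- and note that passing it proves nothing, as the parent's filesystem-corruption story shows:

    stress --cpu 8 --timeout 3600 &    # hammer eight worker threads for an hour
    watch -n5 sensors                  # keep an eye on core temperatures meanwhile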

So buy a better cooler (1)

Sycraft-fu (314770) | more than 4 years ago | (#32093086)

The stock Intel coolers are designed to be economical and to meet the thermal requirements, not to be good.

I use an Arctic Cooling Freezer 7 Pro. With my Q9550 I cannot make the fan spin up past the minimum, which is about 300rpm. The Intel board I use figures the CPU should maintain about a 20 degree thermal margin, meaning run 20 degrees below its rated max. If it is running hotter than that, the fan spins faster up to the max. If it is running cooler than that, the fan throttles back as low as the minimum. Idle, the CPU is around 40 degrees below the margin. If I crank up the processing, the heat will rise, but never high enough for the fan to speed up.

In all cases, the fan is inaudible in the room.

Also, you are aware that Intel makes more efficient, lower-power CPUs, right? Have a desktop? OK, instead of a Core i7 quad core, get a Core i3 dual core. It's made on the 32nm process, lacks some high-end features (like VT-d and Turbo Boost), and runs at rather low power. It has integrated low-power graphics too, to further reduce whole-system consumption. Just don't bitch that it isn't as powerful.

For that matter, get a system with an Atom. Some of those are down near a watt in terms of usage, yet they still have the performance and features needed to run a modern OS and do things like surf the web.

You can have low power efficient systems. You can have high power super systems. They are all available on the market today. You just can't have a system that uses very little energy and is a super performer.

Re:Cooling fan noise anyone? (1)

DAldredge (2353) | more than 4 years ago | (#32093136)

Then buy one of the low voltage / extreme low voltage variants that AMD and Intel both make.

Re:Cooling fan noise anyone? (1)

rwa2 (4391) | more than 4 years ago | (#32093410)

Get a goat.

Then after it craps in your bed and chews up your linens and brays all night, get rid of it.

Your computer will seem so much cooler and quieter after you get rid of the goat!

(my current PC from 2007 is soooo much quieter than the 2002-era PC it replaced)

Summary? (1)

sakonofie (979872) | more than 4 years ago | (#32092608)

Both do the following: they detect when some cores are unused, cut power and clock speed to the less-used cores, and then increase the power/clock speed of the remaining cores. AMD does this with two fixed modes of operation, each with its own power-distribution and clock-speed settings. Intel does something more dynamic and on-the-fly.

Now I might have mis-summarized the article, but shouldn't that have been the article summary instead of a rephrasing of the article's lead?

Moore's Law? (0)

Anonymous Coward | more than 4 years ago | (#32092610)

The article seems to have absolutely no relationship to Moore's Law, or to whatever the author imagines Intel's "corporate philosophy" to be.

Why is that extraneous, off-topic garbage included in the summary?

"Apparent performance" (4, Insightful)

macshome (818789) | more than 4 years ago | (#32092618)

What's "apparent performance"? It's either faster or it's not.

Re:"Apparent performance" (3, Insightful)

asdf7890 (1518587) | more than 4 years ago | (#32092684)

What's "apparent performance"? It's either faster or it's not.

You have obviously never worked in UI design! (though in this area I don't know who/what they would be trying to fool or how they would be trying to fool them/it so your response is probably quite right)

Re:"Apparent performance" (1)

oldhack (1037484) | more than 4 years ago | (#32093252)

Ah! The infamous progress bar.

Re:"Apparent performance" (4, Insightful)

phantomcircuit (938963) | more than 4 years ago | (#32092690)

Many programs simply do not benefit from multiple cores. This technology is basically a trade off between partially disabling one core and increasing the frequency of the other core.

Re:"Apparent performance" (4, Interesting)

pwnies (1034518) | more than 4 years ago | (#32092708)

Not necessarily. If they're overclocking a single core, while underclocking the rest, it may all balance out to have an average core speed that's less than what it was. However, in doing this it may actually increase performance if there is a single app that requires a lot of CPU time (and isn't threaded). In reality the total speed of the computer is being reduced, while the performance as viewed by the user is increasing.

A better explanation (5, Informative)

Sycraft-fu (314770) | more than 4 years ago | (#32092668)

The article kinda glosses over things, so here's a more detailed explanation of how Intel's Turbo Boost works:

As stated, every core has a budget for the maximum heat it can give off and the maximum power it can use, as well as a max clock speed it can handle. However, when you look at these things, they aren't all even; one ends up being the limiting factor. So Intel said: OK, we design a chip to always run at a given speed and stay under the thermal and power envelopes. However, if it isn't running at those limits, we allow for speed increases. It can increase the speed of cores in 133MHz increments. If things go over, it throttles back down again.

This can be done no matter how many cores are active, but the fewer that are active, the more headroom there is likely to be. On desktop chips it isn't a big deal, since they usually run fairly near their speed limit anyhow, so you may see only one or two 133MHz increments at most. For laptop chips, in particular quad cores, it can be a lot more.

The Intel i7-720QM runs at 1.6GHz and has 1/1/6/9 turbo boost multipliers. That means with all four cores running, it can clock up at most one increment, to 1.73GHz. However, with only one running, it can go to 2.8GHz, nine 133MHz steps up. It allows a processor that would otherwise be too fast to reside in the laptop to go in there, with some flexibility. A desktop Core i7-930 is 2.8GHz with 1/1/1/2 turbo mode. That means it'll clock up to 2.93GHz with 2-4 cores active, and 3GHz with one. Much less flexible, since it is already running near its rated max clock speed.
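The bin arithmetic checks out if you treat the base clock as multiples of the 133MHz step (a quick sanity check, taking 1.6GHz as a 12x base multiplier):

    echo $(( 133 * (12 + 9) ))    # one core active: 2793 MHz, i.e. ~2.8GHz
    echo $(( 133 * (12 + 1) ))    # all four active: 1729 MHz, i.e. ~1.73GHz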

Now, this is not the same as SpeedStep, which is their technology to downclock CPUs when they aren't in much use. Similar idea, but purely based on how hard the CPU is being asked to work, not on whether the system can handle the higher speeds.

As an aside, I'll call BS on the "little uses multiple cores" line. Games these days are heavily going at least dual core, some even more. The reason is, if nothing else, that the consoles are that way too. The Xbox 360 has 3 cores, 2 threads each. The PS3 has a weak CPU attached to 7 powerful SPUs. On a platform like that, you learn to do parallel or your games don't look as good. The same knowledge translates to the PC.

However, there are still single-core things, hence Turbo Boost can be really useful. In laptops this is particularly the case. If the i7 quad were limited to 1.6GHz, few people would want it over one of the duals that can run at 2.53GHz or more. Just too much loss in MHz to be worth it. Now, however, it can be the best of all worlds. A slower quad, a faster dual, whatever the apps call for, it handles.

Re:A better explanation (0, Troll)

Vellmont (569020) | more than 4 years ago | (#32092938)


As an aside, I'll call BS on the "Little uses multiple cores." Games these days are heavily going at least dual core, some even more.

I'd say games qualify for "little uses". Believe it or not, most people don't use their computers for high end gaming.

Even for games, are you really certain that the extra cores are used much? Sure, the game can say it uses multiple cores, and I'm sure it actually does. But how much does it really improve performance?

The PS3 has a weak CPU attached to 7 powerful SPUs.

Heh. The PS3 is known to be notoriously difficult to program as well. How many tasks can really be split up into 7 relatively equal, independent parts?

Re:A better explanation (4, Informative)

DMalic (1118167) | more than 4 years ago | (#32093146)

The third core gives a significant performance benefit over two cores, especially since many games were originally designed for consoles and are badly ported to PCs. Unoptimized performance hogs like Grand Theft Auto demand more cores (and can use them). Just today I saw an article on Anandtech describing significant, unexpected benefits from a slower quad core over a newer, faster dual-core in gaming. http://www.anandtech.com/show/3695/the-clarkdale-experiment-mea-culpa [anandtech.com]

Re:A better explanation (1)

drsmithy (35869) | more than 4 years ago | (#32093180)

I'd say games qualify for "little uses". Believe it or not, most people don't use their computers for high end gaming.

Most people don't use their computers for high end _anything_ and whether the machine has one core or a dozen is basically irrelevant.

Re:A better explanation (2, Insightful)

petermgreen (876956) | more than 4 years ago | (#32093472)

I'd say games qualify for "little uses". Believe it or not, most people don't use their computers for high end gaming.
Afaict most people don't use their computers for anything that strains the CPU at all. Most people would be perfectly happy with a bottom of the range C2D, i3, late P4 or maybe even less as long as the system was kept free of crapware and had enough ram for their OS and applications of choice.

However of those apps that DO strain the CPU (e.g. games, video encoding, scientific software, software build systems) a lot do now have the ability to use multiple cores.

Re:A better explanation (2)

juuri (7678) | more than 4 years ago | (#32093098)

What I find interesting is that the current OSes most people use, with the exception of some real-time and big-iron custom dealies, are still built in such a monolithic way that it becomes more "profitable" to the user experience to ramp up single cores rather than have most cores running at the same speed.

With the exception of some high-demand apps like games, extensive math apps, and stuff that could or should be offloaded to GPUs, desktop OSes don't need a VERY fast single core; they instead need lots of equal cores with fast context-switch times coming from the OS.

For example, the fastest core in Win7/OSX on a desktop should be the one handling the UI, but it doesn't need to be THAT much faster than anything else. Instead, all the tiny little apps need to be sent around to as many different cores as possible when they aren't multithreaded... unfortunately, none of the current schedulers are that great at this in consumer land. Even worse, the kernels can have so much locked in that you end up with lots of things stuck on a single core that could exist elsewhere.

Such a shame that true Mach never got the switch times down, because of the huge separation of "drivers" and "kernel features". QNX definitely got this right, but they never took multicore seriously; perhaps it is a much harder problem than I am assuming.

slowing down? (0)

Anonymous Coward | more than 4 years ago | (#32092776)

What this article describes is more like "slowing cores down" to save energy, rather than "speeding up the CPU".

Re:slowing down? (0)

Anonymous Coward | more than 4 years ago | (#32092840)

Yes, except for the bit where they burn off the energy saved to overclock the cores that aren't slowed down -- did you even read the article?!

Re:slowing down? (0)

Anonymous Coward | more than 4 years ago | (#32093586)

Whoops -- forgot I was posting on /., not e)))).

Now I just wonder why the hell nobody's zinged me with "YMBNH, nobody RTFAs." yet.

Obligatory reference... (1)

Hamsterdan (815291) | more than 4 years ago | (#32092794)

But will it allow me to jump over those traffic jams?

http://www.youtube.com/watch?v=arL04K3HLMw

Why not use the extra transistors... (2, Interesting)

John Hasler (414242) | more than 4 years ago | (#32092804)

...for more cache instead of more processors? Think of something with as many transistors as a hex core but with only two cores and the rest used for L1 cache! I'd suggest lots more registers as well, but that would mean giving up on x86.

Re:Why not use the extra transistors... (1, Insightful)

maxume (22995) | more than 4 years ago | (#32092932)

They like to use cache size to segment markets (people that really need it end up paying for it).

Also, I imagine that they ran the numbers on increased cache size versus another core (but maybe they figured the second core was more marketable, rather than a better performance boost).

I do, however, have this wishy-washy impression that Intel has been selling Pentium Pros with ever-larger caches and ever-lower voltages for the last 10 years. (I'm quite certain that the Pentium III was mostly a smaller, faster Pentium Pro; the II and IV were not. I think Core was, except for there being two of them.) Somebody please either lambaste this or tell me it isn't that far off.

Re:Why not use the extra transistors... (2, Interesting)

sznupi (719324) | more than 4 years ago | (#32093128)

PII is also of the PPro lineage. And even if PII, PIII and to some degree P-M and Core1 aren't that different, there are supposed to be some notable changes with Core2 and, especially, Nehalem.

Besides, if the tech is good and it works... (look what happened when they tried "innovating" with PIV)

Re:Why not use the extra transistors... (2, Interesting)

DMalic (1118167) | more than 4 years ago | (#32093152)

It's more like a series of improvements on that model, and it works really well. Cache doesn't seem to help performance as much as you'd think it would, though. There are some Core 2 CPUs described as "Pentium Dual-Core" with a fraction of the cache which perform almost as well.

Re:Why not use the extra transistors... (1, Interesting)

Anonymous Coward | more than 4 years ago | (#32093694)

Some architectures are more dependent on cache size than others; the Pentium 4 was awful when cache-starved -- the first round of P4-based Celerons were utter dogs with only a 128k L2 cache and 400FSB, for example, while the jump from Northwood (512k L2) to Prescott (1MB L2) actually hurt performance for many workloads as the cache increase didn't really help with the increase in pipeline length and the clock speeds didn't scale up initially. By contrast, the jump in performance from 533FSB to 800FSB dual-channel between the B and C northwoods was big.

Core 2 was remarkably more tolerant of slow memory and smaller caches.

Re:Why not use the extra transistors... (4, Interesting)

glsunder (241984) | more than 4 years ago | (#32093026)

Larger caches are slower. Moving to a larger L1 cache would either require that the chip run at a lower clock rate, or increase the latency (increasing the length of time it takes to retrieve the data).

As for registers, they did increase them, from 8 to 16 with x64. IIRC, AMD stated that moving to 16 registers gave 80% of the performance increase they would have gained by moving to 32 registers.

Re:Why not use the extra transistors... (0)

Anonymous Coward | more than 4 years ago | (#32093764)

Also, with out of order processors, you've got a lot more registers "behind the scenes" than you have through the visible instruction set architecture. I think it was around 40 with the P6 (PPro, PII, P3) and around 120 starting with the P4 (I'm fairly sure that this was one of the things that was updated from the P3 to the P-M as well)

Re:Why not use the extra transistors... (1, Funny)

Anonymous Coward | more than 4 years ago | (#32093464)

Screw that.. Give me faster ram and stack it between heat pipes on top of the cpu..

Re:Why not use the extra transistors... (2, Insightful)

petermgreen (876956) | more than 4 years ago | (#32093504)

L1 and sometimes L2 caches are small not because of die area but because there is a tradeoff between cache size and cache speed. Only the lowest level cache (L3 on the i series) takes significant chip area (and it already takes a pretty large proportion on both the quad and hex core chips).
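You can see that level/size tradeoff on any recent Linux box; per-level cache sizes are exposed in sysfs (a sketch assuming the usual sysfs layout; bash brace-free globbing):

    grep . /sys/devices/system/cpu/cpu0/cache/index*/level
    grep . /sys/devices/system/cpu/cpu0/cache/index*/size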

Re:Why not use the extra transistors... (1)

SpazmodeusG (1334705) | more than 4 years ago | (#32093522)

Because the cache is shared on newer multicore processors you essentially do get more cache. Cache is the largest user of real estate on die. The added processors you get are just a bonus.

Turbo button coming back? (2, Funny)

TavisJohn (961472) | more than 4 years ago | (#32092872)

So they are bringing the Turbo Button back?

Seriously: when I was looking at laptops, two laptops were pretty much the same in specs, except one had a "Turbo" CPU and the other's CPU ran at the speed of the "boosted" one next to it...
The price difference... $20.00!!! I'll pay an extra $20 to have FULL SPEED ALL THE TIME!

Re:Turbo button coming back? (1)

broken_chaos (1188549) | more than 4 years ago | (#32093426)

No, this is automatic at the hardware level -- not a manual switch. In fact, it's more or less useless on desktop machines (as someone excellently explained above), since the speed improvements are small. On laptops with more than two cores, however, it seems to be very, very nice: a fairly easy way to get reasonably powerful parallel processing from multiple cores, fairly fast single-thread processing, and no heat levels that could damage components.

Also, if you're overclocking a desktop (which is insanely easy on any modern chip), it'll probably be the first thing you turn off. Boosting the speed of one core unpredictably can cause instabilities at higher overclocks, and it's even more pointless than normal, since you're almost certain to get much higher speeds at all times out of the chip than it would have run at even in the highest 'turbo' mode at default settings.

Re:Turbo button coming back? (1)

dwinks616 (1536791) | more than 4 years ago | (#32093524)

$20 more for what? A processor that runs hotter "ALL THE TIME", uses more battery "ALL THE TIME" and requires louder fans "ALL THE TIME". No thanks, I prefer my laptop to be cool, quiet and have long battery life. If I can save $20 in the process, that's just a bonus.

Re:Turbo button coming back? (1)

PitaBred (632671) | more than 4 years ago | (#32093588)

That turbo boosted CPU also had hyperthreading, AES-NI and a few newer instruction sets, and will last longer on battery. I can't imagine why you'd want battery on a laptop, though... it's all about teh megahurz!

Why? (2, Insightful)

Bootarn (970788) | more than 4 years ago | (#32092886)

Why this compromise? There's a huge need for developers to start thinking in terms of multicore CPUs. Offering them this solution is just postponing the inevitable. We need change now.

Re:Why? (4, Insightful)

Shikaku (1129753) | more than 4 years ago | (#32093034)

Because it's a pain in the ass and very hard for most coders.

What we need is either a simple library for threading or a new language (like Haskell) for auto-parallelization.

Re:Why? (1)

ascari (1400977) | more than 4 years ago | (#32093308)

Erlang anyone?

Re:Why? (2, Insightful)

Anonymous Coward | more than 4 years ago | (#32093494)

And more importantly, not all tasks CAN be parallelized.

Re:Why? (0)

Anonymous Coward | more than 4 years ago | (#32093766)

There are plenty of simple libraries for threading, for nearly every language imaginable. The problem is that threading is in general a terrible abstraction. It exposes all the worst aspects of concurrent non-determinism upward to application code, while hiding most of the low-level facilities present in hardware for managing it. So in practice it needs a cumbersome library built on top of the threading to hide most of what threading exposes, and construct facilities for managing concurrency some of which were hidden by the threading abstraction in the first place.

Because Intel knows their history (3, Interesting)

Animats (122034) | more than 4 years ago | (#32093826)

When Intel came out with the Pentium Pro, they had a good 32-bit machine, and it ran UNIX and NT, in 32-bit mode, just fine. People bitched about its poor performance on 16-bit code; Intel had assumed that 16-bit code would have been replaced by 1995.

Intel hasn't made that mistake again. They test heavily against obsolete software.

Question... (1)

TheSHAD0W (258774) | more than 4 years ago | (#32092944)

Does Intel's architecture adjust its management scheme based on CPU temperature? It'd be nice if having a better heat sink or a cooling system would allow the system to run even faster.

I've also been wondering why, given the new poly-core systems, we don't see a mix of CPU types in a system. Throwing a bunch of slower but less complex and therefore less expensive cores in with a few premium cores would result in a better balance, allowing the system to concentrate heavy-load apps on the faster CPUs while offloading less-intensive work onto the cheaper cores.

Re:Question... (2, Insightful)

John Hasler (414242) | more than 4 years ago | (#32092992)

> I've also been wondering why, given the new poly-core systems, we
> don't see a mix of CPU types in a system.

How would the OS decide which process to assign to which core?

Re:Question... (1, Interesting)

sznupi (719324) | more than 4 years ago | (#32093174)

Looking at the history of the CPU-time-to-running-time ratio for each process, or perhaps also at what typically causes spikes of usage, and moving the process to a faster core at that point? Plus a central DB of what to expect from specific processes.
(I'm not saying it's necessarily a good idea; just that it could be not so hard, OS-wise.)

Re:Question... (2, Funny)

TheSHAD0W (258774) | more than 4 years ago | (#32093620)

How about, every app that runs in the background or as a tray icon by default gets a cheesy core? :-P

Re:Question... (1)

sznupi (719324) | more than 4 years ago | (#32093184)

Though we do see a mix, in a different way, with GPGPU adoption.

Redundant much? (0)

Anonymous Coward | more than 4 years ago | (#32093076)

Wow - whoever wrote that article needs to go back to school. Did you know that the Intel solution is much more elegant because AMD tacked the Turbo core onto their K10 architecture they've been using for a while now?

In layman's terms (4, Funny)

digitalhermit (113459) | more than 4 years ago | (#32093190)

Just wanted to clarify some of the misconceptions about the Turbo Boost...

The technology is fairly simple. At its most basic level, we take the exhaust from the CPU fan and route it back into the intake of the system. If you're using Linux you can see the RPM increase by running 'top' (google Linux RPM for more information).

The turbo itself is a fairly simple technology. As you're aware, we can use pipes to stream the outputs of different applications together. In the case of Linux, we pipe the stdout stream to the stdin (the intake) of the turbo (tr) which increases the speed and feeds it into a different application. For example, we can increase the throughput of dd as follows:

        dd if=/dev/zero | tr rpm | tee /proc/cpuinfo

This will increase the CPU speed by feeding output from dd into the turbo (and increasing the rpm) and finally pumping it back into the CPU.

On other platforms there are some proprietary solutions. For example, take the output of Adobe AIR to HyperV to PCSpeedup! then back into the processor.

Hope this helps...

Re:In layman's terms (1)

toopok4k3 (809683) | more than 4 years ago | (#32093262)

Are you sure you don't need a filter to that pipeline? Surely /dev/zero can't be just clean 0's. All those 1 bits that get inside your cpu will eventually shorten the cpu's lifetime.

Re:In layman's terms (1)

Gothmolly (148874) | more than 4 years ago | (#32093392)

Hopefully people will not mod you offtopic.

Re:In layman's terms (0)

Anonymous Coward | more than 4 years ago | (#32093796)

The technology is fairly simple. At it's most level, we take the exhaust from the CPU fan and route it back into the intake of the system.

FYI - That's not how a turbo works.
A turbo is two turbines attached to a shaft.

Turbine 1 is spun up using exhaust gas and the connecting shaft spins turbine 2 in order to compress & force 'fresh' air into the intake.

Marketeers from the 1980s called (0)

Anonymous Coward | more than 4 years ago | (#32093206)

They want their "Turbo" bullshit back.

"Your next build" - who builds PCs anymore? (1)

Gothmolly (148874) | more than 4 years ago | (#32093334)

For $300 you can get a brand new Dell - who builds a PC anymore?

Re:"Your next build" - who builds PCs anymore? (4, Insightful)

0123456 (636235) | more than 4 years ago | (#32093348)

For $300 you can get a brand new Dell - who builds a PC anymore?

Someone who wants something better than a $300 Dell?

Re:"Your next build" - who builds PCs anymore? (1)

rwa2 (4391) | more than 4 years ago | (#32093482)

I bought one of those $400 laptops for my wife last year... a 2.2GHz dual-core Toshiba Satellite with a fairly recent Intel 4000 integrated graphics chip.

My upgraded 8-year-old 2.2GHz dual-SMP Athlon XP-M with an AGP nVidia 6800GS still blows the doors off of it for gaming. It works great as long as the games don't require DX10 or 64-bit. And it actually didn't cost all that much, since I waited to upgrade the CPUs, video, and RAM until after they got cheap.

Re:"Your next build" - who builds PCs anymore? (1)

Cadallin (863437) | more than 4 years ago | (#32093566)

Um, people who read Slashdot? Or in other words, who are you and why are you posting from someone else's account?

Take advantage of what? (1)

Yvan256 (722131) | more than 4 years ago | (#32093552)

take full advantage of multicore processers

Too bad people don't use even a single core to correct their mistakes.

Choosing between Intel and AMD? (0)

Yvan256 (722131) | more than 4 years ago | (#32093582)

Interesting reading if you're choosing between Intel and AMD for your next build.

Ah! I don't need to do such a thing! Apple decides for me*!

* please Apple, put a Core 2 Quad** in the next Mac mini!

** Before you say "Core 2 Quad is old technology", compare the processing power, the power and heat dissipation requirements and the cost of a Core 2 Quad vs the Core i3, i5 and i7, even the mobile versions.
