
Intel's Single Thread Acceleration

CmdrTaco posted about 7 years ago | from the sometimes-the-few-outweigh-the-needs-of-the-many dept.


SlinkySausage writes "Even though Intel is probably the industry's biggest proponent of multi-core computing and threaded programming, it today announced a single thread acceleration technology at IDF Beijing. Mobility chief Mooly Eden revealed a type of single-core overclocking built into its upcoming Santa Rosa platform. It seems like a tacit admission from Intel that multi-threaded apps haven't caught up with the availability of multi-core CPUs. Intel also foreshadowed a major announcement tomorrow around Universal Extensible Firmware Interface (UEFI) — the replacement for BIOS that has so far only been used in Intel Macs. "We have been working with Microsoft," Intel hinted."


Who cares? (-1, Troll)

tomocoo (699236) | about 7 years ago | (#18749397)

Does anyone care about any of this?

Re:Who cares? (1)

mfaras (979322) | about 7 years ago | (#18749465)

Well... that depends.

How good is linux's support for UEFI?

This "single thread acceleration" will have to be supported by the OS?
Does it have the potential to break a half-bad application?

--
Every time your page doesn't validate, the W3C kills a foxy.

Re:Who cares? (2, Interesting)

something_wicked_thi (918168) | about 7 years ago | (#18749635)

This "single thread acceleration" will have to be supported by the OS?

I doubt it. My reading of the article is that the CPU detects when only one core is in use and does everything itself. But, even if it does require some level of OS support, I wouldn't worry about Linux's support of it (or of UEFI, for that matter, as Linux runs quite well on Macs and Intel does a good job of supporting Linux, anyway). Linux even has support for hotplugging CPUs, so, even if it comes to that (and I doubt it will), then it should still work.

Does it have the potential to break a half-bad application?

Any change in a CPU's implementation should not be observable to anyone unless the observer knows to look for it (e.g. with the CPUID instruction). Intel won't release a chip that breaks existing apps. Besides, if you think about it, if apps work on a single-core CPU, why shouldn't they work on a dual-core CPU with one core disabled?

Overclocking? (4, Insightful)

Nuffsaid (855987) | about 7 years ago | (#18749409)

For a moment, I hoped Intel had come out with something like AMD's rumored reverse-Hyperthreading. That would be a real revolution!

Re:Overclocking? (4, Informative)

Aadain2001 (684036) | about 7 years ago | (#18749973)

I did my MS thesis on a topic very similar to this. Trust me, it's not worth it. While some applications that have inherent parallelism (image manipulation, movie encoding/decoding, etc) can see between 2x and 4x improvements when dynamically threaded, the majority of your basic applications are too linear and have too many dependencies between instructions for dynamic threading to really be worth the investment in hardware.

ACK!!! (2, Funny)

Gr8Apes (679165) | about 7 years ago | (#18750159)

Good lord, let me sell all my web, application, and DB servers then!!!! I've overpaid for 32 CPU systems!!!! ACK!!!

Re:Overclocking? (3, Funny)

Spokehedz (599285) | about 7 years ago | (#18750121)

See... I thought it was from that Red Dwarf episode, where Kryten put all the CPU time through one processor--exponentially increasing its computing power, but shortening its overall lifespan.

Holly only had 3min before she would be gone forever... And that bloody toaster had to ask if she wanted toast.

Let's hope Intel has solved this issue with their new CPUs.

I for one which welcome, in soviet russia we compute you, and PROFIT!

Re:Overclocking? (3, Funny)

baadger (764884) | about 7 years ago | (#18750927)

Lister: No, I don't want any toast. In fact, no one around here wants any toast. Not now, not ever. NO TOAST. OR muffins! We don't LIKE muffins around here! We want no muffins, no toast, no teacakes, no buns, baps, baguettes or bagels, no croissants, no crumpets, no pancakes, no potato cakes and no hot-cross buns and DEFINITELY no smegging flapjacks!

Talkie Toaster: Ahh so you're a waffle man.

..off topic... so shoot me.

complexity/stability (1)

ushering05401 (1086795) | about 7 years ago | (#18749451)

I don't follow hardware and I don't write multi-threaded apps, so I could be way off on this... But given the sheer volume of poorly designed/implemented single-threaded applications, wouldn't it be asking for trouble to rush the industry's conversion to the added complexity of multi-threaded solutions?

Y'all multi-thread developers take as much time as you want.

Regards.

Most applications will never become multi-threaded (5, Insightful)

something_wicked_thi (918168) | about 7 years ago | (#18749557)

Why should they? The advent of multicore CPUs won't actually hurt single-threaded apps. They just won't get any faster. For most things, that's fine. Legacy apps that aren't changing are most likely already fast enough. Besides, not everything can be parallelized properly, anyway. Multithreaded applications will become more popular, but I think this trend will affect new applications much more than old ones because it's just not that important. Even new apps don't necessarily need parallelization because many things are "fast enough" on a single core.

By the way, I actually hope that many things never become multithreaded. In my experience, most coders simply aren't capable of thinking threading through clearly. For many people, the concept is just too complex. Hopefully, compilers will improve to the point where many things can be parallelized without the coder having to know very much, if anything, about the threading involved, but, today, we're nowhere near that. We desperately need higher-level threading primitives in computer science.

Re:Most applications will never become multi-threa (1)

CastrTroy (595695) | about 7 years ago | (#18749769)

I think you are right. Doing something multithreaded takes a lot of extra thinking, even for someone who knows what they're doing. I took a parallel programming course, and that was where things got really fun. Things get really complicated once you start programming things to work in parallel. Not only that, a lot of apps won't really speed up a noticeable amount from using a multi-threaded architecture. I hope that compilers can help this out, because programming multiple threads is hard enough, not to even get started on parallel algorithms.

Re:Most applications will never become multi-threa (3, Insightful)

pla (258480) | about 7 years ago | (#18749811)

In my experience, most coders simply aren't capable of thinking threading through clearly

I agree completely, though you can expect to catch some flack for that one, from the hordes of poor coders who think nothing (or rather, who don't think about the implications) of splitting off another thread to boost performance (even in a single core environment). ;-)

Personally, I consider myself a damned good coder - And I avoid multithreading wherever possible. If I really need the raw CPU power, I'll usually try to model it as a full slave process before resorting to messy threading.



We desperately need higher-level threading primitives in computer science.

We've had it for decades - Just look for multiprocessor support, and you have implicit multithreaded support automatically.

As one "mature" implementation, we could all start coding in HPF. I'd personally rather gnaw my own right leg off, but, to each their own.

Re:Most applications will never become multi-threa (3, Interesting)

something_wicked_thi (918168) | about 7 years ago | (#18750103)

We've had it for decades - Just look for multiprocessor support, and you have implicit multithreaded support automatically.

Well, yes and no. I think the easiest model for multithreading today is message passing, but it doesn't suit all needs and requires you to design your app to support it from the start. Most mainstream languages (read C/C++, Java, and .NET) don't really support much beyond your basic mutex, semaphore, and monitor. There are a few other things out there that provide various ways of doing things, but none are universal and none seem to have really caught on.
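
A minimal sketch of that message-passing style in plain Java, using a BlockingQueue as the channel; the class and values are made up purely for illustration, not taken from the parent post:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class MessagePassingSketch {
    public static void main(String[] args) throws InterruptedException {
        // The queue is the only shared object; the worker owns its own state.
        BlockingQueue<Integer> inbox = new ArrayBlockingQueue<>(64);

        Thread worker = new Thread(() -> {
            long localSum = 0; // visible to this thread only, so no lock needed
            try {
                while (true) {
                    int msg = inbox.take();   // blocks until a message arrives
                    if (msg < 0) break;       // a negative value acts as a poison pill
                    localSum += msg;
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println("worker total: " + localSum);
        });
        worker.start();

        for (int i = 1; i <= 10; i++) inbox.put(i);
        inbox.put(-1); // tell the worker to stop
        worker.join();
    }
}

Because the worker never shares its running state, there is nothing for a mutex, semaphore, or monitor to protect; the coordination lives entirely in the queue.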

What we really need is either a language that can express things in such a way that the compiler can easily make good decisions about what can be parallelized, or a compiler that can do that with existing languages. I think that the latter approach may prove impossible. To make informed decisions about threading, a compiler really needs to know things about the data, and most procedural languages just don't cope with that very well.

It seems that HPF may provide some of these things already. I did a few quick Google searches and it seems interesting, but I wonder how much better it is than current work that is being done on auto-vectorization of loops and such in modern compilers. I'll have to look into that language more closely before I can really draw any conclusions. I believe that IBM has been trying to do some interesting work in this area with the Cell processor, too, and I suspect that's why Sony makes interesting statements about how the true power of the Cell will never be fully realized.

Regardless, the next decade is going to be an interesting one for compiler writers, I suspect.

Re:Most applications will never become multi-threa (4, Informative)

iangoldby (552781) | about 7 years ago | (#18750231)

What we really need is either a language that can express things in such a way that the compiler can easily make good decisions about what can be parallelized
You mean Fortran 90?

Seriously, several constructs in Fortran are designed specifically for parallel execution. The language itself makes it hard to write code that the compiler can't heavily optimise. There's a reason why variable aliasing is strongly controlled in Fortran and why function parameters have an 'intent' attribute. Then there are constructs such as WHERE, which is by its very nature implicitly a parallel set of operations.

Re:Most applications will never become multi-threa (1)

ioshhdflwuegfh (1067182) | about 7 years ago | (#18750791)

You mean Fortran 90?

Seriously, several constructs in Fortran are designed specifically for parallel execution.
Not quite. FORTRAN 90 was an attempt to freshen up FORTRAN 77 with features other languages already had (pointers, recursion, structures, dynamic memory management), but nothing in it particularly suits or enforces parallel programming. There was an attempt to make FORTRAN 90 "high performance" — HPF (High-Performance Fortran) — by adding some even older ideas, like side-effect-free constructs that help the compiler automatically vectorize certain operations (vectorization is important in scientific/engineering applications). Historically speaking, though, LISP was already better for parallelization than FORTRAN: when you add two lists with (mapcar #'+ a b), the compiler knows that all the individual additions can be done in parallel. Nowadays you can use OpenMP (which is SMP-oriented; the newest versions of gcc support it) or MPI (which is message-passing oriented) in either FORTRAN or C(++) to say where and how to parallelize; among more general programming languages there are Erlang and occam.
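
The same element-wise "add two lists, and let every addition run in parallel" idea can be sketched in Java with parallel streams; this block is only an illustration of the point, not something from the comment above:

import java.util.stream.IntStream;

public class ParallelAddSketch {
    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4, 5, 6, 7, 8};
        int[] b = {10, 20, 30, 40, 50, 60, 70, 80};

        // Each index is independent (no side effects between them),
        // so the runtime is free to split the work across cores.
        int[] sum = IntStream.range(0, a.length)
                             .parallel()
                             .map(i -> a[i] + b[i])
                             .toArray();

        System.out.println(java.util.Arrays.toString(sum));
    }
}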

Re:Most applications will never become multi-threa (0)

Anonymous Coward | about 7 years ago | (#18750505)

"Personally, I consider myself a damned good coder"

It only counts if OTHERS consider you a damned good coder.

Re:Most applications will never become multi-threa (2, Insightful)

Gr8Apes (679165) | about 7 years ago | (#18750549)

In my experience, most coders simply aren't capable of thinking threading through clearly

I agree completely, though you can expect to catch some flack for that one, from the hordes of poor coders who think nothing (or rather, who don't think about the implications) of splitting off another thread to boost performance (even in a single core environment). ;-)

Hordes of even "good coders" can't properly code multi-threaded apps. Actually, after more than a decade as a programmer, I'm not sure there are hordes of good coders. There are good coders, and a disturbingly large percentage of them do not understand concepts like multi-threading or effective techniques of fail-safe systems coding.

Personally, I consider myself a damned good coder - And I avoid multithreading wherever possible. If I really need the raw CPU power, I'll usually try to model it as a full slave process before resorting to messy threading.

You may be a good coder, but you apparently fall into the majority camp by your own admission. Not that there's anything wrong with that though. You at least realize that multi-threading isn't your thing.

We desperately need higher-level threading primitives in computer science. ... As one "mature" implementation, we could all start coding in HPF. I'd personally rather gnaw my own right leg off, but, to each their own.

As many folks pointed out to me previously, Eiffel seems to be pretty nice in this arena. I've never seen a production use of it, but who's to say it's not the next big thing? (Perhaps 3+M Java coders?)

The major issue with multi-threading remains, though, and that's identifying the parallel processes. Take a series of sequential code blocks that retrieve pieces of information from several sources. If those retrievals are independent of each other, you can run them all concurrently (in parallel) and then sequence the results together once every retrieval is done. Now the process takes the time of the longest retrieval plus assembly, versus the sum of all retrievals plus assembly. This type of process is quite common in enterprise systems working off of several DBs. Putting such code in a slave process requires inefficiently messaging results back to the calling process, and adds unnecessary overhead. This is but one case where multi-threading helps performance significantly. I'm not sure that something like Eiffel would make this code any easier to write, since the bulk of the multi-threaded work is in the design itself.
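
A hedged Java sketch of that fan-out/fan-in pattern; fetchFromSourceA/B/C stand in for the independent retrievals and are invented here for illustration only:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FanOutFanIn {
    // Placeholders for the independent lookups (e.g. different databases).
    static String fetchFromSourceA() { return "A"; }
    static String fetchFromSourceB() { return "B"; }
    static String fetchFromSourceC() { return "C"; }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(3);

        // Kick off all three retrievals concurrently...
        CompletableFuture<String> a = CompletableFuture.supplyAsync(FanOutFanIn::fetchFromSourceA, pool);
        CompletableFuture<String> b = CompletableFuture.supplyAsync(FanOutFanIn::fetchFromSourceB, pool);
        CompletableFuture<String> c = CompletableFuture.supplyAsync(FanOutFanIn::fetchFromSourceC, pool);

        // ...then assemble once the slowest one finishes: total time is roughly
        // max(retrievals) + assembly instead of sum(retrievals) + assembly.
        String assembled = a.join() + b.join() + c.join();
        System.out.println(assembled);

        pool.shutdown();
    }
}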

Re:Most applications will never become multi-threa (1)

grumbel (592662) | about 7 years ago | (#18750135)

### For many people, the concept is just too complex. Hopefully, compilers will improve to the point where many things can be parallelized without the coder having to know very much

As long as we are using C or C++, I doubt that will happen; those languages just aren't built with multi-threading in mind and thus aren't easy to parallelize. Other languages, like functional ones, make the job really easy: when you don't have side effects to worry about, executing things in parallel becomes rather straightforward. So I hope that switching to multi-core will also mean that we finally switch to more advanced languages. It's really time; so far using them hasn't made much difference in the end, but with multi-core they could have some huge advantages: not only will they lead to better code, but also to faster code.

Till that happens, multi-core should still be quite beneficial. I currently have 120 processes running; while many of them just idle most of the time, it's not that unusual to have two or more processes that want all the CPU they can get, and it's a good thing when you actually have two CPU cores to give them.

Two letters and a slash for you... (0)

Anonymous Coward | about 7 years ago | (#18751225)

...other languages like functional ones on the other side make the job really easy

I/O

Re:Most applications will never become multi-threa (0)

Anonymous Coward | about 7 years ago | (#18750201)

By the way, I actually hope that many things never become multithreaded. In my experience, most coders simply aren't capable of thinking threading through clearly. For many people, the concept is just too complex. Hopefully, compilers will improve to the point where many things can be parallelized without the coder having to know very much, if anything, about the threading involved, but, today, we're nowhere near that. We desperately need higher-level threading primitives in computer science.


Let's apply this thinking to another topic:
By the way, I actually hope that many things never become object oriented. In my experience, most coders simply aren't capable of thinking OOP through clearly. For many people, the concept is just too complex. Hopefully, compilers will improve to the point where many things can be implemented using OOP techniques without the coder having to know very much, if anything, about the architecture involved, but, today, we're nowhere near that. We desperately need higher-level OOP in computer science.

Just as with OOP, if the technology is not made available then architects and coders will never learn the concepts, because they'll have no exposure to it. Oh sure, an architect at a well-run software company *MIGHT* have the time to keep up with academic publications and the latest proposed technology, but seriously: when was the last time your employer gave you the time to pick up a new skill until it was time to actually use it? Sure, Google provides paid research time to their employees which could be used to keep abreast of new technologies and perhaps even experiment with them, but who else does? I know my employers haven't - but then again everyone regularly put in 60-70 hours per week with ongoing promises of an IPO and things finally stabilizing. The IPO never happened; the pigfucking CEO simply drew larger and larger salaries and bonuses year after year.

Re:Most applications will never become multi-threa (1)

something_wicked_thi (918168) | about 7 years ago | (#18750363)

I think this analogy is a bit disingenuous. First, OOP isn't all that complicated, and I did acknowledge that better tools that hide the complexity will help things. The other thing you have to consider is that OOP and threading are meant to achieve two separate goals. OOP is meant to simplify design whereas threading is meant to increase performance, often at the expense of simplicity. Threading is fundamentally harder than OOP because it's harder to think about, and it only matters if you need performance. OOP is a good idea to make a good design. That is always important. Threading, as it exists today, is actually at odds with correctness as it is much harder to guarantee correctness in the presence of threading. As correctness trumps performance any day ("I can make any program run instantly if it doesn't need to produce a correct result"), threading should be used sparingly. Oftentimes, you can make better performing apps by concentrating on other things, anyway.

Anyway, I don't know why I'm writing so much. I could have just substituted "brainfuck" in place of OOP and shown that, just by substituting a word, you haven't proven anything. Just because it was a good idea to use OOP doesn't make it a good idea to use threading.

BTW, where do you get your information about Google employees having paid research time? Google employees have 20% time, certainly, but that isn't "paid research time".

Re:Most applications will never become multi-threa (2, Interesting)

kartracer_66 (96028) | about 7 years ago | (#18750249)

Concurrent applications needn't be so difficult to program. Take a look at the actors model [wikipedia.org] and STM [wikipedia.org].

What's unfortunate is that we're stuck on this idea that concurrency == multiple threads w/shared state. With that approach, sure, apps will never scale. You're right, we do need higher-level threading primitives. I'm just not so sure they're all at the compiler level.
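
The actor model ends up looking much like a thread that owns its state and drains a message queue. STM is harder to show in plain Java, which has no built-in STM, but the core idea of optimistic, retried updates can at least be glimpsed with an AtomicReference and a compare-and-set loop; this is only a loose sketch of that idea (Java 16+ for the record syntax), not a real STM:

import java.util.concurrent.atomic.AtomicReference;

public class OptimisticRetrySketch {
    // Immutable snapshot of the state; each "transaction" builds a new one.
    record Account(long balance) {}

    static final AtomicReference<Account> account =
            new AtomicReference<>(new Account(100));

    // Optimistic update: read, compute, and retry if another thread
    // committed a change in the meantime. No lock is held while computing.
    static void deposit(long amount) {
        while (true) {
            Account current = account.get();
            Account updated = new Account(current.balance() + amount);
            if (account.compareAndSet(current, updated)) return;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) deposit(1); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) deposit(1); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(account.get().balance()); // prints 2100
    }
}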

Re:Most applications will never become multi-threa (1)

something_wicked_thi (918168) | about 7 years ago | (#18750519)

Thanks for that. I shouldn't have concentrated on the compiler. What I really think we need is language-level constructs, but that's certainly not what I said.

I like the actor's model. It sounds like a formalization of message-passing, which (as I mentioned cross thread), is, in my opinion, the best multiprocessing model we have so far.

Re:complexity/stability (0)

Anonymous Coward | about 7 years ago | (#18750563)

I do some coding, very minor, usually personal projects.

Yesterday, I came across a few ideas where multiple threads would be useful, and since the hardware/CPUs for this sort of thing exist (multiple cores, hyperthreading), I figured I'd look up how to actually do it. (The use cases were MPEG transcoding/encoding, 3D/ray tracing of multiple animation sequences, and DNA sequencing.)

HOLY *#@%!

Besides a near total lack of clear documentation, there is even inconsistent use of terminology it seems by those in the field. I had to dig quite a bit just to find some commentary even in some languages. For example, in perl, while it has the meat to fork and what not, it doesn't seem to do it well, and most think it's a losing proposition on perl5, suggesting to wait for perl6.

Just because you call for a thread or a fork, it also seems it doesn't necessarily mean the processor's capabilities will be used. I mean, hell, if I can't even figure out if an OS utilizes a processor correctly (ever read those notes? rather difficult to parse to find something specific many times), how the hell do you get around to coding for it? Just code it in, and cross your fingers and hope your efforts aren't wasted? Even some of the research papers on the topic were laughable; it seemed the only way even the coders knew if something worked wasn't documentation or a clear logical argument, but by setting up a machine and testing the code to see if it hopefully ran faster in proportion to what was expected.

Not only all that, but even experienced coders and applications seem to not do it so well. There was a somewhat recent article on Tom's Hardware where they said only 2 apps really seemed to use multiple cores well, and even then, neither of those 2 took advantage of more than 2 cores (4 cores showed really no proportional benefit). Hell, if the paid commercial programmers at Adobe can't even do it right, how the hell do most of us pull it off?

Say goodbye to open home built PC's... (-1, Troll)

Anonymous Coward | about 7 years ago | (#18749453)

...the only machines in the market will be closed Macs and closed Microsoft boxen.

What needs to happen is for Microsoft to make and sell their own closed boxes a la Apple, and leave the rest of us alone.

Why the surprise? (2, Interesting)

something_wicked_thi (918168) | about 7 years ago | (#18749457)

It makes perfect sense that you'd still try to speed up single-threaded applications. After all, if you have 4 cores, then any speedup to one core is a speedup to all of them. I realize that's not what this article is about. In this case, they are speeding up one at the expense of the other, but the article's blurb makes it sound like Intel shouldn't be interested in per-core speedups when that is clearly false.

Re:Why the surprise? (1)

mwvdlee (775178) | about 7 years ago | (#18749755)

I thought so too, until I actually read TFA.
This optimization essentially shuts down the other cores in order to let the remaining core perform faster.
So this optimization is counterproductive when you have applications that actually use multiple cores.

Re:Why the surprise? (1)

something_wicked_thi (918168) | about 7 years ago | (#18749809)

Hmm? I thought I said that... However, I have a problem with something in your post: this optimization doesn't hurt apps that use multiple cores because it is enabled only when one core is idle.

Sorry to be so pedantic. I'm sure you knew that, but I don't like leaving misinformation lying around for other readers, even if this is Slashdot. grin

EFI used by more than Apple (3, Informative)

EMB Numbers (934125) | about 7 years ago | (#18749471)

EFI is used by more than just Apple. For example, HP Itanium systems use EFI. By virtue of being "extensible", EFI is vastly better than the BIOS which has frankly failed to evolve since Compaq reverse engineered it in the early 1980s.

It is well past time that BIOS went to the grave.

Re:EFI used by more than Apple (4, Insightful)

pla (258480) | about 7 years ago | (#18750019)

By virtue of being "extensible", EFI is vastly better than the BIOS

Yeah... Why, that nasty ol' standard BIOS makes hardware-level DRM just so pesky. And vendor lock-in for replacement hardware? Almost impossible! Why, how will Dell ever survive if it can't force you to use Dell-branded video cards as your only upgrade option? And of course, WGA worked so well, why not include it at the firmware level? Bought an "OS-less" PC, did we? No soup for you!


Sorry, EFI has some great potential, but it has far too much potential for vendor abuse. The (somewhat) standardized PC BIOS has made the modern era of ubiquitous computers possible. Don't take a "step forward" too quickly without first looking to see if it will send you over a cliff.

Re:EFI used by more than Apple (2, Informative)

ThisNukes4u (752508) | about 7 years ago | (#18750203)

And besides, most modern OSes basically relegate the BIOS to the back burner. It's not like we're still calling BIOS interrupts from DOS anymore.

Re:EFI used by more than Apple (1)

Bozdune (68800) | about 7 years ago | (#18750379)

It's not like we're still calling BIOS interrupts from DOS anymore.

Speak for yourself! I, for one... oh, never mind.

Re:EFI used by more than Apple (3, Insightful)

99BottlesOfBeerInMyF (813746) | about 7 years ago | (#18751171)

Yeah... Why, that nasty ol' standard BIOS makes hardware-level DRM just so pesky.

Not really. It just means that improvements, DRM included, have to be hacks. Add a TPM module to a BIOS-based system and include support in the OS and it will be just as effective for MS's purposes as an EFI one. BIOS makes modern hardware a pain in the butt. The fact that DRM modules are modern hardware is sort of orthogonal to the issue.

And vendor lock-in for replacement hardware? Almost impossible! Why, how will Dell ever survive if it can't force you to use Dell-branded video cards as your only upgrade option?

Umm, Dell is not even the biggest player in a market that is not monopolized. If Dell requires Dell branded video cards and people care (most probably won't) then people will switch to a vendor that does not do this and Dell will change or die. I don't think Dell or any other PC vendor has enough influence to force such a scheme upon the existing graphics card makers. Only MS really has that much influence and I don't think they have the motivation.

Bought a "OS-less" PC, did we? No soup for you!

I don't think you have to worry about this problem unless you're running Windows on it.

Sorry, EFI has some great potential, but it has far too much potential for vendor abuse.

I disagree. I don't see that vendors will abuse this any more than they already abuse BIOS. In any case, the change is coming. You just need to decide which side of the curve you want to be on. (Typed from an EFI laptop.)

EFI (2, Informative)

Anonymous Coward | about 7 years ago | (#18749487)

Universal Extensible Firmware Interface (UEFI) -- the replacement for BIOS that has so far only been used in Intel Macs


Really. I know Google is hard to use, but even Wikipedia [wikipedia.org] would have given some detail on EFI history. (Hint: Itanium only ever used EFI). And it turns out that Macs are not even the first x86 machines to use it, either:

In November 2003, Gateway introduced the Gateway 610 Media Center, the first x86 Windows-based computer system to use EFI. The 610 used Insyde Software's InsydeH2O EFI firmware, based on the Framework. It still relied on a legacy BIOS implemented as a compatibility support module to boot Windows.

And there is always XScale.

Give slashdot editors some credit. (1)

anss123 (985305) | about 7 years ago | (#18749949)

The submitter said UEFI; it's usually only called EFI. The "Universal" must have thrown them off.

Assuming they read the submission, of course.

A Marketing Triumph (5, Informative)

sibtrag (747020) | about 7 years ago | (#18749491)

Intel's "Enhanced Dynamic Acceleration Technology" is a triumph of marketing. Notice how the focus is on the transition where one core becomes inactive and the other one speeds up. This is the good transition. The other transition, where the chip workload increases & voltage/frequency are limited to keep within a power envelope, is called "throttling" and is much disliked in the user community.

Don't get me wrong, this is valuable technology. It is important that microprocessors efficiently use the power available to them. Having a choice on a single chip between a high-performance, high-power single-thread engine & a set of lower-performance, lower-power engines has great promise. But, the way this is presented is a big victory for marketing.

Re:A Marketing Triumph (0)

Anonymous Coward | about 7 years ago | (#18749969)

I tried rereading your post several times and it looks like random babbling. Are you saying that this "good transition" is a good thing? And, if so, are you for or against Intel's "Enhanced Dynamic Acceleration Technology" !?

The other transition, where the chip workload increases & voltage/frequency are limited to keep within a power envelope, is called "throttling" and is much disliked in the user community.

WTF? I've got a Core 2 Duo that is, by default, running at 1.86 GHz. However, when the CPU's load is low and when I configure my system to allow the CPU to lower its frequency, the speed goes down to 1.6 GHz (it may not seem like much, but the reduction in power consumption is huge). Wikipedia's entry for "CPU throttling" says it's a way to reduce power consumption by lowering a CPU's frequency, which is pretty much the definition I expected to find. Not only do I love it (it's "greener") but I can decide, at will, whether to enable it or not (e.g. "echo ondemand > /proc/sys/.../cpu0/scaling_governor").

And of course it makes no fscking sense to disable it on my workstation (Core 2 Duo / 4 GB of RAM), for the technology seems to work fine: you can't notice whether the speed is going up or not. It's all transparent, for the system is very smart in deciding whether to run at full speed or not.

Care to explain why "throttling is much disliked in the user community"? Why would it be disliked? If you dislike it, can't you turn throttling off on your OS? Do you dislike the fact that your laptop, upon noticing that its battery shall soon be empty, decides to save power?

 

$ cpufreq-info | grep limit
  hardware limits: 1.60 GHz - 1.86 GHz
$ cpufreq-info | grep current
  current policy: frequency should be within 1.60 GHz and 1.86 GHz.
  current CPU frequency is 1.60 GHz (asserted by call to hardware).


(For anyone on Linux, this is on a kernel 2.6.16, so I'm using the "old" speedstep-centrino and cpufreq_ondemand modules to have automatic throttling on my Core 2 Duo. If you're using a more recent kernel you'll have to use other modules)

Note that the limits 1.6 - 1.86 are the official limits for my CPU when it's configured to run normally (i.e. I'm not overclocking that CPU); other CPUs have a wider range of operation.

I'm posting to /., and though I've got ten programs open (and a VMWare VM running) it makes no sense to run at full speed, so the two cores are running at 1.6 GHz. Should I decide to do some number crunching/kernel building/etc., the system will switch to 1.86 GHz.

I do wonder which "user community" you're talking about...

Re:A Marketing Triumph (1)

hcdejong (561314) | about 7 years ago | (#18750241)

The other transition, where the chip workload increases & voltage/frequency are limited to keep within a power envelope, is called "throttling" and is much disliked in the user community.

Who cares, when you won't notice the throttling since the throttled core was sitting idle anyway? They're not slowing down the core you're using.

Re:A Marketing Triumph (2, Informative)

Kjella (173770) | about 7 years ago | (#18750359)

Look, if you look at the benchmarks it's quite clear that you could either get the maximum clock speed *or* the big number of cores. How likely is it really that you'll have four cores, all equally at 100% load? Not unless you're doing something embarrassingly parallel better left to a cluster.

Basically, you have a thermal envelope, and power consumption rises roughly with clock speed squared. Four cores at 1 GHz burn 4*(1 GHz)^2 = 4 units of power and give you 4 GHz of aggregate processing power; one core at 2 GHz burns 1*(2 GHz)^2 = 4 units of power and gives you only 2 GHz. Same power consumption either way, so you can use more cores when possible, since they have better efficiency, and still get maximum performance for a single thread when necessary.

Thermal throttling that kicks in under heavy load, which is exactly when you need the processing power, is like an engine you can't use. A system that, within a TDP of say 100W, always makes the most of it is another thing entirely. You can eat your cake and have it too, without choosing one over the other at purchase time. What's not to like about that?

Twice the speed? (2, Insightful)

Aladrin (926209) | about 7 years ago | (#18749495)

The article suggests that this technology makes 1 core run twice as fast by basically disabling the second core for a while. They go on to 'prove' how effective it is by running a photo processing thing that they don't explain. It runs twice as fast this way.

So... If they can have 2 cores at full speed, or 1 core at double speed... WHY THE FUCK do they have 2 cores in the first place?

Re:Twice the speed? (0)

Anonymous Coward | about 7 years ago | (#18749551)

Because if you have multi-threaded applications, using 2 cores probably yields better performance than using 1 core that goes a little bit faster?

Re:Twice the speed? (2, Informative)

plasmacutter (901737) | about 7 years ago | (#18749607)

Because it's better to have separate cores with separate pipelining for multiple threads than to share a single core.

Because of pipelining, if you have to swap between tasks you actually lose a large number of instructions, which means switching tasks often on a single core is significantly worse for performance than using multiple cores.

Re:Twice the speed? (1)

something_wicked_thi (918168) | about 7 years ago | (#18750221)

Er, not quite.

The time slice of an app stays the same, so you have just as many context switches on a single- or a multi-core CPU, assuming you have more tasks than CPUs. Furthermore, performance does not scale by the number of cores you have. You have locking contention, which means that you end up being serialized for at least part of the time. In almost all cases, you're better off with one fast core than two slow ones.

Actually, it was a single benchmark running with a new technology that had nothing to do with the overclocking that produced the 2x speedup. Most likely, the benchmark they used was atypical and, furthermore, has nothing to do with the overclocking bit.

Re:Twice the speed? (1)

kannibal_klown (531544) | about 7 years ago | (#18749609)

So... If they can have 2 cores at full speed, or 1 core at double speed... WHY THE FUCK do they have 2 cores in the first place?
Well, in the long run, if they do something along the lines of the OS determining whether the present circumstances would benefit more from 1 fast core or 2 regular cores, then you get the best of both worlds. Your machine could take situations where multi-core is beneficial or jump to single-core mode.

Think of it like having a car that can transform between a sleek sports car (single core) and a heavy-duty pickup truck (multi core). Then again, the optimal scenario would be a pickup truck that can simply accelerate and handle like a sports car.

Re:Twice the speed? (0)

Anonymous Coward | about 7 years ago | (#18749695)

The photo benchmark was about "Turbo Memory", probably about having some flash memory (on system instead of pendrive?) so Vista used it as more RAM.

Re:Twice the speed? (1)

level_headed_midwest (888889) | about 7 years ago | (#18749985)

Think of quad-cores or more rather than dual cores. Having four cores at a moderate clock speed where one can get ramped up to a high clock speed will give you the large speed boost of many slower cores for multithreaded applications and a high-clock-speed single core for single-threaded applications. The four or more slower cores will beat the one higher-clocked one in multithreaded applications.

UEFI? (2, Interesting)

Noryungi (70322) | about 7 years ago | (#18749497)

While I am all for having something a bit more intelligent than BIOS to init a computer, I can't help but wonder... Does this UEFI integrate DRM functions? Is this the Trojan horse that will make all computers DRM-enabled?

Inquiring minds want to know!

Re:UEFI? (4, Informative)

KonoWatakushi (910213) | about 7 years ago | (#18750029)

Rather than answer that question, I will ask another. Why would hardware manufacturers such as Intel and AMD want to limit their market by crippling the hardware to only run certain software? It is unlikely in the extreme that open source operating systems will be locked out, and that is what really matters.

As I understand it, UEFI will enable some thoroughly nasty DRM, but only so far as the OS vendor chooses to take it. Apple and Microsoft will almost certainly make it a miserable experience for all involved, but will probably tire of shooting themselves in the feet at some point. There are alternatives, after all, and they are looking better every day.

Re:UEFI? (1)

CrackedButter (646746) | about 7 years ago | (#18750383)

I noted the anti-Apple remark. Kinda pointless when Apple have already proved you wrong by not limiting any OS on their machines: they already use this technology, and they are instead promoting alternative OSes to cohabit with OS X.

Re:UEFI? (2, Interesting)

Kjella (173770) | about 7 years ago | (#18750491)

Locked out, no. Let in, also no. Linux is going to suffer the death of a thousand needles when "secure" movies, "secure" music, "secure" webpages, "secure" e-mail, "secure" documents, "secure" networks, "secure" IM and whatnot get propagated by the 98% running DRM operating systems. I mean, look how many people are frustrated that Linux doesn't play MP3s or DVDs out of the box, no matter how little that is Linux's fault, and even though there is an easy fix.

What if the problem is ten times worse, and there is no easy fix? Are you going to say "but hey there's this open source network..." "but all my friends are on MSN" "they can come too" "...restore my Windows. Now!" and that'll be the end of Linux on the desktop as anything but a geek's toy.

Re:UEFI? (0)

Anonymous Coward | about 7 years ago | (#18750063)

Another question: what was wrong with OpenFirmware [wikipedia.org], a standard used by Sun and Apple and others? Was it just a case of NIH?

Re:UEFI? (1)

644bd346996 (1012333) | about 7 years ago | (#18750395)

UEFI makes it easier to do nasty things with a TPM, but it is not a guaranteed problem. The Intel Macs have EFI and TPMs, but all they use the TPM for is to enable OS X to confirm that it is on an Apple computer. The presence of the TPMs in intel Macs is probably just a sign that Apple didn't bother with making their own motherboard/chipset design from scratch, and instead just made the Intel designs fit their form factors.

"Caught up"? (4, Insightful)

Z0mb1eman (629653) | about 7 years ago | (#18749519)

It seems like a tacit admission from Intel that multi-threaded apps haven't caught up with the availability of multi-core CPUs.

Or maybe Intel, unlike the story submitter, knows that many apps simply do not lend themselves to multithreading and parallelism. It's not about "catching up".

Multi-core for multithreaded apps? Check.
Trying to get each core as fast as possible for when it's only used by one single-threaded app? Check.

Makes sense to me.

mod parent up (0)

Anonymous Coward | about 7 years ago | (#18749605)

Exactly... I was going to quote the exact same phrase as the one you quoted: I don't agree with that dumb "tacit admission" statement. It's really about getting the best of both worlds. Not to mention that it's not just "multi-core for multithreaded apps" but also "multi-core for correctly multithreaded OSes".

Re:"Caught up"? (0)

bfields (66644) | about 7 years ago | (#18749865)

many apps simply do not lend themselves to multithreading and parallelism.

For example?

Can't think of anyone, but... (1)

anss123 (985305) | about 7 years ago | (#18750263)

most applications, like games and emulators, benefit more from one fast core versus multiple slower ones. There are also applications such as Matlab, where ease of use takes precedence over performance.

Re:"Caught up"? (1)

Z0mb1eman (629653) | about 7 years ago | (#18751219)

Anything where it's not trivial to break up the problem space into equal chunks, work on them separately, and put them back together at the end.

One example - which I run into at work all the time: parsing large HTML (or XML, same thing really) files. Web browsers are multithreaded in the sense that they use threads for connections to the server to get different files; it's still (as far as I know) single-threaded per file as far as parsing is concerned.

Another example - games. There's obvious potential for multithreading - one thread to maintain the gameworld state, another for AI, another for physics, another to push polygons to the GPU... But since these are different tasks, rather than one task that's being computed in parallel, it's very unlikely that the threads are going to be using the CPU cores (or multiple CPUs) equally - which sounds like the whole point of what Intel is doing.

Of course, I am not an expert in any of these fields, so (factual) corrections are welcome :p

Re:"Caught up"? (1)

nine-times (778537) | about 7 years ago | (#18751059)

Also, and I feel dumb for saying this because it's so obvious (I'm not even an expert on these things), but you don't really need all of your applications to be multithreaded in order to benefit from multiple cores. I guess I'm assuming that I'm not the only person who runs multiple applications at the same time.

Of course, it's more likely that you'll be taking good advantage of 8 cores if your apps are multithreaded, but if you're running two single-threaded applications on a dual core system, shouldn't the OS be smart enough to push each one to a different core, meaning each application will run faster than if it were a single core system?

journalism at its finest (3, Funny)

gEvil (beta) (945888) | about 7 years ago | (#18749545)

Ahhh, journalism at its finest: "The new chips will be able to overclock one of the cores if the other core is not being used." Then two paragraphs later: "This is not overclocking. Overclocking is when you take a chip and increase its clock speed and run it out of spec. This is not out of spec."

That said, this seems to make perfect sense to me. If they're able to pump all that power into a single core while the other one is asleep/idle, all while keeping it within its operating parameters, then I'm all for it.

Given the modern power budgets, this makes sense (1)

CTho9305 (264265) | about 7 years ago | (#18749565)

In the past, chips were limited to a maximum voltage because of the risk of long-term damage at higher voltages. As a result, the voltage could be cranked up close to the maximum, providing high-frequency performance. Around 2004, however, OEMs started becoming concerned about cooling extremely high-power chips like Tejas, and the chip manufacturers had to start pushing power consumption back down. Now we have chips that could operate at higher frequencies if the power budget were higher. When you have multiple cores and some aren't running, the busy cores can be run at a higher frequency (and potentially voltage) without exceeding the overall power budget (which is what TFA says Intel is doing).

Re:Given the modern power budgets, this makes sens (1)

mwvdlee (775178) | about 7 years ago | (#18749941)

So basically the limiting factor in CPU design nowadays is power consumption? It can only use up to a set quantity of power, regardless of the number of cores?

Strange link (1)

leuk_he (194174) | about 7 years ago | (#18749567)

The link to "single core overclocking" states:

"This is not overclocking. Overclocking is when you take a chip and increase its clock speed and run it out of spec."

This is just a technique to stay under the specified power envelope. Nowadays the real problem isn't speed, it's power usage. Note that in single-thread mode the CPU will run fewer instructions per watt... and I guess for every 25% more CPU frequency you use something like 75% more power.

strange runon too (1)

plasmacutter (901737) | about 7 years ago | (#18749789)

Overclocking is when you take a chip and increase its clock speed and run it out of spec.
... and have to use extra cooling and forget to do so and melt the chip and get pissed and go to the store and buy a new chip and overclock it too.

Re:Strange link (1)

Carewolf (581105) | about 7 years ago | (#18749899)

It might also be a fancy keyword for shared cache, where one core can use all the cache if the other one is sleeping or not very active. Intel has previously jumped a few fences and not implemented fully shared cache unlike AMD.

Btw. robson? Rubs on, Rubs off..

Thanks for the heads up... (2, Insightful)

dintech (998802) | about 7 years ago | (#18749603)

"We have been working with Microsoft," Intel hinted."

Now I know to avoid it.

Multi-core is good for jobs (3, Interesting)

pzs (857406) | about 7 years ago | (#18749643)

As many slashdotters are in software development or something related, we should all be grateful that multi-core processors are becoming so prevalent, because it will mean more jobs for hard-core code-cutters.

The paradigm for using many types of software is pretty well established now, and many new software projects can be put together by bolting together existing tools. As a result of this, there has been a lot of hype about the use of high level application development like Ruby on Rails, where you don't need to have a lot of programming expertise to chuck together a web-facing database application.

However, all the layers of software beneath Ruby on Rails are based on single-threaded languages and libraries. To benefit from the advances of multi-core technology, all that stuff will have to be brought up to date and of course making a piece of code make good use of a number of processors is often a non-trivial exercise. In theory, it should mean many more jobs for us old-schoolers, who were building web/database apps when it took much more than 10 lines of code to do it...

Peter

Re:Multi-core is good for jobs (4, Insightful)

mr_mischief (456295) | about 7 years ago | (#18749739)

Taking advantage of multiple cores with a single-threaded per-client application just requires having more than one simultaneous user on your server. It doesn't at all require having a multi-threaded application per client. Most HTTP connections don't do anything very fancy, and really won't be helped much internally by multiple cores. The web server software itself, the database server, the fact that popular sites (or shared servers) get more than one visitor at a time, and similar concerns will make a much bigger difference with multiple cores than making a CRUD application or a blog multi-threaded.

Re:Multi-core is good for jobs (1)

pzs (857406) | about 7 years ago | (#18749841)

You're right, web applications are perhaps a poor example here, but there are many other applications that run on the average desktop that will now run faster if they can make use of a multi-core system. Somebody has to do the coding that will effect this change, and that coding will often be non-trivial.

Peter

Re:Multi-core is good for jobs (2, Informative)

Simon (815) | about 7 years ago | (#18750289)

Actually I can't think of any desktop applications that would really benefit from supporting multithreading to actually warrant the extra effort. Most desktop applications for the average person run perfectly fast as single threaded programs. And the high-end stuff like graphics and 3D rendering have supported SMP for a long long time. You could buy dual processor Pentium Pro machines back in the 90s. I don't see multicore processors fuelling demand for programmers.

--
Simon

slow down, slow down (0)

Anonymous Coward | about 7 years ago | (#18750411)

Reading your post makes it sound like a webapp, to be multi-threaded, can simply create one thread per user connecting to it. Maybe it's not what you meant, but it's how I read your post. Things are way more complicated than that. Many of the biggest websites are backed by Java (I'll just cite three: GMail, eBay, FedEx). In Java, it used to be that one new thread was created per user session... but this simply doesn't scale: it is fine for toyish websites running a few simultaneous users, but the cost of multi-threading in Java (and presumably in C# as well), after a certain number of threads, is just too high.

Making a multi-threaded Webapp by assigning a thread to each user is naive and unsustainable...
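
For contrast, a hedged Java sketch of the bounded-pool approach that app servers generally moved to instead of thread-per-session; handleRequest is just a made-up stand-in for whatever per-request work the app does:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BoundedPoolSketch {
    public static void main(String[] args) throws InterruptedException {
        // A fixed pool: thousands of queued requests, but only 8 OS threads.
        ExecutorService pool = Executors.newFixedThreadPool(8);

        for (int i = 0; i < 10_000; i++) {
            final int requestId = i;
            pool.submit(() -> handleRequest(requestId));
        }

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }

    // Stand-in for the per-request work.
    static void handleRequest(int id) {
        if (id % 1000 == 0) System.out.println("handled request " + id);
    }
}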

Re:Multi-core is good for jobs (0)

Anonymous Coward | about 7 years ago | (#18751321)

Except rails (mongrel) is single threaded full stop.

For other web servers/app servers you are correct, except if you are using an app server you are already having to deal with concurrent access to shared data. The app server managing task dispatch to threads does not eliminate most of the work.

Re:Multi-core is good for jobs (1)

GoatMonkey2112 (875417) | about 7 years ago | (#18749741)

I know that at least one higher-level programming language can make use of multiple processors relatively easily for web applications: .NET has a relatively easy way to make web-garden-based applications.

Re:Multi-core is good for jobs (1)

Verte (1053342) | about 7 years ago | (#18749927)

I don't know if I really understand what you're saying. Making Rails and other interpreters take advantage of SMP and the like is not trivial, but it's easier than having to convert all the Ruby, Python, Perl, Lisp etc. apps; you've just got to convert the engine. If the low-level functions, especially those with big loops like map and reduce, search, and regexp processing, can take advantage of more cores, you've got all the performance benefit you need. IMHO I'd like to see better vectorisation options to make this sort of low-level parallelism easier, rather than all this threading stuff. I understand it isn't always the best way, but on your average dual or quad core machine it'd be worth it.

So I'm not sure if there's all that much work in it, I guess.

Re:Multi-core is good for jobs (1)

drgonzo59 (747139) | about 7 years ago | (#18750371)

making a piece of code make good use of a number of processors is often a non-trivial exercise.


True, but many of the benefits of a multicore system don't necessarily come from running one process with shared data on multiple cores (i.e. threading) but from running multiple fairly isolated processes in parallel. One could be the Ruby interpreter, another could be your database, another a monitoring or security application, another a backup daemon, and so on.


Concurrent programming in shared data environment is very difficult. And besides, some tasks are inherently sequential like a finite state machine for example.


In other words, I don't think that I wasted my money by buying a Core 2 Duo laptop instead of a single core machine. I notice a very real performance improvement because of an extra core. I run a numerical simulation module on one core and the system and other stuff like browsing the web and office apps can have an extra core all to themselves!

Re:Multi-core is good for jobs (1)

tji (74570) | about 7 years ago | (#18751229)

Not really. Multi-core doesn't mean you need multi-threaded apps to benefit from it. Take a look at the processes running on your Linux/Mac/Windows box sometime... there are a lot of them. While process A is running on CPU 0, it doesn't need to be switched out to let process B run on CPU 1.

Web apps, like Ruby on Rails, are a good example of why multi-threading is not needed. Web servers handle many simultaneous requests, so the workload is easily divisible based on individual requests. The web server itself may be multithreaded, use multiple processes, or both, but the downstream apps like Ruby don't need to be MT.

You get a ton of benefit from multiple cores for web apps, or many other multi-process functions.

There are some examples where the CPU needed by a single process is greater than a single core (High bit-rate H.264 video comes to mind) and these apps must be carefully coded to use multithreading. But, for the vast majority of other apps, multithreading is not worth the effort/complexity.

Oh no! Not UEFI! (-1, Troll)

Anonymous Coward | about 7 years ago | (#18749677)

Oh, nuts. I hope UEFI does not become popular with PC Motherboard makers. It will just be another way for spyware creators to infect our systems, and this time it could be independent of OS!

Re:Oh no! Not UEFI! (1)

cortana (588495) | about 7 years ago | (#18750417)

If your OS allows spyware to install itself into the EFI partition then it is broken. :)

Multi-core CPUs (5, Informative)

nevali (942731) | about 7 years ago | (#18749713)

With all this talk of multi-threading on multi-core CPUs, Slashdotters appear to have forgotten that we all run multi-tasking operating systems. An OS isn't forced to schedule all of the threads of a single application between cores: it's perfectly capable of spreading several different single-threaded applications between cores, too.

And no, EFI didn't appear first on Intel Macs. Intel Macs weren't even the first x86-based machines to employ it.

come on, guys (1)

Verte (1053342) | about 7 years ago | (#18749743)

Since we've already got to write somewhat parallel code [because cores are pipelined what, 20 deep now?], it'd be nice if that parallelism extended easily to multiple cores. For example, when dividing up some function that reduces [large array -> scalar], we've already got to split execution 20 ways. It'd be nice if there were some way to split such things into however many independent threads the machine supports, at runtime.
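
A rough Java sketch of that idea; the common fork/join pool behind parallel streams sizes itself to the machine at runtime, so the same reduction splits as many ways as the hardware supports (the data here is invented for illustration):

import java.util.concurrent.ThreadLocalRandom;
import java.util.stream.LongStream;

public class RuntimeSplitReduce {
    public static void main(String[] args) {
        long[] data = ThreadLocalRandom.current().longs(1_000_000, 0, 100).toArray();

        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("splitting across up to " + cores + " cores");

        // The common fork/join pool discovers the core count itself, so this
        // same line splits 2 ways on a dual core and 8 ways on an 8-core box.
        long sum = LongStream.of(data).parallel().sum();
        System.out.println("sum = " + sum);
    }
}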

Will this help mac videocard support? (1)

badjohny (176646) | about 7 years ago | (#18750015)

I was under the assumption that the big problem with Macs supporting PC video cards was BIOS vs. EFI. Will this move allow quicker adaptation of video cards to the Mac? I would assume that if they start using UEFI just like Apple does now, it would just be an OS X driver issue, and no longer a BIOS vs. EFI issue.

Where are the EFI video cards and raid cards? (1)

Joe The Dragon (967727) | about 7 years ago | (#18750177)

You will need them to boot an EFI system: as with a Mac Pro, when you put in a non-EFI card you get no video until you boot Windows.
And a non-EFI RAID card may not be able to boot in an EFI system.

Re:Where are the EFI video cards and raid cards? (1)

drinkypoo (153816) | about 7 years ago | (#18750819)

In theory, couldn't you have an extension that provided a driver that would work until boot? They do like to talk about how extensible EFI is...

Re:Where are the EFI video cards and raid cards? (1)

Joe The Dragon (967727) | about 7 years ago | (#18750985)

But you need to have that extension in the card's ROM

Re:Where are the EFI video cards and raid cards? (1)

drinkypoo (153816) | about 7 years ago | (#18751201)

But you need to have that extension in the card's ROM

Why?

And on top of that, most add-in cards today are firmware upgradable.

But can you explain precisely why it would have to be in the ROM? It would have to be there to not have to be somehow installed, I'll grant you that...

Re:Where are the EFI video cards and raid cards? (1)

ivan256 (17499) | about 7 years ago | (#18751313)

Only if you want the new video card to be "Plug and Play"...

Otherwise the driver can live on the system. Even on the system disk.

Ominous (0)

Anonymous Coward | about 7 years ago | (#18750199)

> "We have been working with Microsoft," Intel hinted."

What a shame. All Microsoft care about is locking down the PC platform and further entrenching their malware. What's in this for OSX, BSD and Linux users (as opposed to Microsoft prisoners)?

EFI = no XP (1, Interesting)

Anonymous Coward | about 7 years ago | (#18750267)

By convincing Intel to make Santa Rosa EFI-Only, MS can ensure that none of their pesky users will be able to install XP on it.

Nothing like using monopoly influence to prop up sales of your latest OS that no one really needs or wants.

Re:EFI = no XP (0)

Anonymous Coward | about 7 years ago | (#18750459)

> MS can ensure that none of their pesky users will be able to install XP on it.

Sure they can, they just run under QEMU on linux.

Thread accell. about power or temperature? (1)

hcdejong (561314) | about 7 years ago | (#18750319)

I suspect this isn't so much about power as it is about temperature. With a dual-core chip, you expect both cores to contribute 50% to the heat load. If one core's throttled back, you can overdrive the other core without the chip overheating.

Below max clock vs. TDP (2, Informative)

AcidPenguin9873 (911493) | about 7 years ago | (#18751005)

What this amounts to is taking a part that is qualified to run at, say, 2.8GHz, and selling it with a default clock of 2.2GHz in order to meet TDP. Then, when one core is disabled, you crank up the other core's clock to 2.8GHz and stay within TDP. This sounds like a good idea for mobile computing, since power (i.e. battery life) is by far the most important thing. But for servers, I think you'd want to sell as many chips as you can with the highest rated clock freq, since those are higher margin.