Inside Intel's Next Generation Microarchitecture 116
Overly Critical Guy writes "Ars Technica has the technical scoop on Intel's next-generation Core chips. As other architectures move away from out-of-order execution, the from-scratch Core fully adopts it, optimizing as much code as possible in silicon, and relies on transistor size decreases--Moore's Law--for scalability."
Core Duo == Article Duo! (Score:5, Funny)
Re:Core Duo == Article Duo! (Score:1)
Re:Core Duo == Article Duo! (Score:3, Funny)
totally, just sum it up in one post:
New Intel architecture- Smaller, faster, better!
New Intel architecture- Smaller, faster, better!
Is this a PURPOSEFUL dupe? (Score:1, Troll)
AMD Vs Intel: Round 9 (Score:4, Interesting)
I believe that AMD had this technology [wikipedia.org] before Intel ever started in on it. Yes, I know it wasn't really commercially available on PCs but it was there. And I would also like to point out a nifty little agreement between IBM and AMD [pcworld.com] that certainly gives them aid in the development of chips. Let's face it, IBM's got research money coming out of their ears and I'm glad to see AMD benefit from it and vice versa. I think that these two points alone show that AMD has had more time to refine the multicore technology and deliver a superior product.
As a disclaimer, I cannot say I've had the ability to try an Intel dual core but I'm just ever so happy with my AMD processor that I don't see why I should.
There's a nice little chart in the article but I like AMD's explanation [amd.com] along with their pdf [amd.com] a bit better. As you can see, AMD is no longer too concerned with dual core but has moved on to targeting multi core.
Do I want to see Intel evaporate? No way. I want to see these two companies go head to head and drive prices down. You may mistake me for an AMD fanboi but I simply was in agony in high school when Pentium 100s cost an arm and a leg. Then AMD slowly climbed the ranks to become a major competitor with Intel--and thank god for that! Now Intel actually has to price their chips competitively and I never want that to change. I will now support the underdog even if Intel drops below AMD, just to ensure stiff competition. You can call me a young idealist about capitalism!
I understand this article also tackles execution types and I must admit I'm not too up to speed on that. It's entirely possible that OOOE could beat out the execution scheme that AMD has going but I wouldn't know enough to comment on it. I remember that there used to be a lot of buzz about IA-64's OOOE [wikipedia.org] processing used on Itanium. But I'm not sure that was too popular among programmers.
The article presents a compelling argument for OOOE. And I think that with a tri-core or higher processor, we could really start to see a big increase in sales using OOOE. Think about it: a lot of IA-64 code comes to a point where the instruction stalls as it waits for data to be computed (in most cases, a branch). If there are enough cores to compute both branches from the conditional (and a third core to evaluate the conditional), then where is the slowdown? This will only break down on a switch-style statement or when several if-thens follow each other successively.
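The "compute both branches" idea above can be sketched in Python. This is purely illustrative -- real hardware would do this with speculative execution resources, not OS threads -- and `eager_branch` and its arguments are made-up names for the sketch:

```python
from concurrent.futures import ThreadPoolExecutor

def eager_branch(cond_fn, then_fn, else_fn):
    """Evaluate the condition and both branch bodies concurrently,
    then keep whichever result the condition selects -- a toy model
    of dedicating one core to each branch and one to the conditional."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        cond = pool.submit(cond_fn)
        then_r = pool.submit(then_fn)
        else_r = pool.submit(else_fn)
        # All three run in parallel; the losing branch's work is discarded.
        return then_r.result() if cond.result() else else_r.result()

# Toy usage: the branch outcomes were already computed by the time
# the condition resolved, so no one waited on the conditional.
result = eager_branch(lambda: 3 > 2, lambda: "then", lambda: "else")
```

The catch the parent alludes to: nested or successive branches multiply the number of speculative paths, and the discarded work is pure waste.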
In any case, it's going to be a while before I switch back to Intel. AMD has won me over for the time being.
What gives? (Score:1, Funny)
The mods gotta loosen up a little. Sheesh.
Re:AMD Vs Intel: Round 9 (Score:2)
Re:AMD Vs Intel: Round 9 (Score:1)
Re:AMD Vs Intel: Round 9 (Score:2)
All that said, there are several applications where the Intel Architecture whips AMD's, the top two being:
MS Office and s
Re:AMD Vs Intel: Round 9 (Score:2)
Re:AMD Vs Intel: Round 9 (Score:2)
Re:AMD Vs Intel: Round 9 (Score:2)
I believe that AMD had this technology before Intel ever started in on it.
What technology a
Re:IA-64 is an in-order execution processor (Score:1)
The real technology is... (Score:4, Funny)
Re:The real technology is... (Score:2, Funny)
Re:The real technology is... (Score:2)
Let's be perfectly honest here... (Score:1, Insightful)
Re:Let's be perfectly honest here... (Score:2, Funny)
I have a great idea (Score:1, Funny)
Seriously this is gonna be so cool, slashdot will never be the same again!
No no, not good enough (Score:1, Troll)
Israel (Score:1, Interesting)
Re:Israel (Score:3, Insightful)
no it doesn't. only mentions country - not culture. are you suggesting that only Semites live in Israel? or maybe only Semites could obtain PhDs in Israel?
I think your reference to semitism is plain OOO.
actually, your "joke" about a Check Point firewall actually implies racism.
Re:Israel (Score:2)
Re:Israel (Score:1)
So many of them came from Levittown, LI.
KFG
Re:Israel (Score:2)
During the Middle Ages, while gentiles pushed their smart sons into the priesthood and celibacy, the smart Jews became rabbis and had lotsa kids.
[/flamebait]
The Izzies have had to become really smart because they're surrounded by people who'd like nothing better than to push them into the sea. As a matter of fact, when they got military gear from the States, the manufacturers often came back and asked them exactly *what* they did with the electronics; it might have had to do with the 88-2 kill r
Re:Israel (Score:1)
Re:Israel (Score:2, Insightful)
One odd thing is that the US imports many scientists with attractive grants, resulting in an exodus of European scientists (probably from other countries too, I just know Europe). Of course, since September 11th, getting a visa has become hard and thus less scient
Re:Israel (Score:1)
For comparison, the US Navy lost 2 planes [wikipedia.org] to Syrian SAMs in just one raid in '83.
Re:Israel (Score:3, Informative)
Re:Israel (Score:2)
Is this an attempt to prove the saying that if a lie is often repeated, it becomes true?
Intel First to Ship Dual core [internetnews.com]
I don't care how you spin it, your statement was a lie bordering on AMD fanboyism.
Re:Israel (Score:2)
Re:Israel (Score:2)
Twice. (Score:1)
Since this is a dupe (Score:4, Interesting)
Why is it important that Intel is embracing OOOE while everyone else is moving away?
Re:Since this is a dupe (Score:5, Informative)
In-order execution doesn't require all that special silicon and therefore frees up die space.
So one approach is to try to make your one processor as efficient as possible at executing instructions.
Another approach is to make your processor relatively simple, and get lots of them on the die so you can have many threads at once.
I personally prefer multiple cores, because I think there is plenty of room for parallelism in software. However, this guy is basically claiming that Intel is trying to get both: more cores and smarter cores. They're relying on Moore's law to shrink the size of their out-of-order execution logic so that they can fit more smart cores on the die.
Re:Since this is a dupe (Score:1)
Also don't forget that with multiple cores you're introducing a host of new problems such as scheduling, cache coherency,
Re:Since this is a dupe (Score:2)
I believe disk transfers are mostly done using DMA; the processor isn't really executing a loop to copy data (check your CPU usage during a copy)... the deadlocking, I think, probably has more to do with the I/O interface being choked.
You are right about the amount of available parallelism, though; architects/designers simply don't know of any good way to use all the real estate o
Re:Since this is a dupe (Score:1)
And yes, dumping the pipeline due to page faults or cache misses is a big deal. Miss penalties are a huge deal in any system. Most nowadays just go do something else if a program faults (assuming there's something else to do).
Re:Since this is a dupe (Score:1)
Re: (Score:1)
Re:Since this is a dupe (Score:1)
Re:Since this is a dupe (Score:2)
Re:Since this is a dupe (Score:1)
Re:Since this is a dupe (Score:5, Informative)
Some software simply doesn't parallelize well. Processors like Cell and Niagara will take a very ugly beating from Core architecture based processors in that case.
Then there's coarse-grained parallelism: tasks operating independently with modest requirements to communicate between themselves. For these workloads, cache sharing probably guarantees scalability. Going even further, there are embarrassingly parallel tasks which need almost no communication between different processes -- such is the case for many server workloads, where each incoming user spawns a new process, which is assigned to a different core each time, keeping all the cores full. This type of parallelism ensures that multicore (even when taken to the extreme, as in Sun's Niagara) will succeed in the server space. The desktop equivalent is multitasking, which alone can't justify the move to multicore.
Now for fine-grained parallelism. Take the evaluation of an expression a = b + c + d + e. You could evaluate b + c and d + e in parallel, then add those together. The architecture best suited for this type of parallelism is the superscalar processor (with out-of-order execution to help extract extra parallelism). Multicore is powerless to exploit this sort of parallelism because of the overhead. Let's see:
Essentially, putting synchronization aside for the moment (which is really the most expensive part of this), it takes a few dozen cycles to compute a result in one core and forward it to another. Also, if this were done on a large scale, the communication channel between cores would become clogged with synchronization data. Hence it is completely impractical to exploit any sort of fine-grained parallelism in a multicore setting. Contrast this with superscalar processors, which have execution units and data buses specially tailored to exploit this sort of fine-grained parallelism.
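A deliberately unfair toy in Python makes the point: here thread creation and joining stand in for inter-core forwarding and synchronization, and they cost orders of magnitude more than the single additions they coordinate. All names are hypothetical; this is a sketch of the overhead argument, not a benchmark:

```python
import threading
import time

def add(dst, i, x, y):
    dst[i] = x + y

b, c, d, e = 1, 2, 3, 4

# "Multicore" version: farm b+c and d+e out to two threads, then combine.
t0 = time.perf_counter()
partial = [0, 0]
threads = [threading.Thread(target=add, args=(partial, 0, b, c)),
           threading.Thread(target=add, args=(partial, 1, d, e))]
for t in threads:
    t.start()
for t in threads:
    t.join()
a_threaded = partial[0] + partial[1]
threaded_time = time.perf_counter() - t0

# Plain sequential version: the work a superscalar core overlaps for free,
# with no explicit synchronization at all.
t0 = time.perf_counter()
a_seq = b + c + d + e
seq_time = time.perf_counter() - t0
```

Both compute the same value, but the coordination cost of the threaded version dwarfs the arithmetic -- the same shape of argument as the few-dozen-cycle inter-core forwarding cost above, just at a much coarser scale.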
Unfortunately, this sort of fine-grained parallelism is the easiest to exploit in software, and mature compiler technology exists to take advantage of it. To fully exploit the power of multicore processors, the cooperation of programmers will be required, and for the most part they don't seem interested (can you picture a VB codemonkey writing correct multithreaded code?) I hope this changes as new generations of programmers are brought up on multicore processors and multithreaded programming environments, but the transition is going to be turbulent.
Straying a bit off-topic... Personally, I don't think multicore is the way to go. It creates an artificial separation of resources: i.e. I can have 2 arithmetic units per core, so 4 arithmetic units on a die, but if the thread running on core 1 could issue 4 parallel arithmetic instructions while the thread running on core 2 could issue none, both of core 1's arithmetic units would be busy on that cycle, leaving 2 instructions for the next cycle, while core 2's units would sit idle, despite the availability of instructions from core 1 just a few millimeters away. The same reasoning is valid for caches, and we see most multicore designs moving to shared caches, because it's the most efficient solution, even if it takes more work. It is only natural to extend this idea to the sharing of all resources on the chip. This is accomplished by putting them all in one big core and adding multicore functional
So basically what you're saying is... (Score:1)
Re:Since this is a dupe (Score:3, Interesting)
T
Lots of n^2 was changed to n in submission. (Score:2)
It's register rename, choosing which instruction goes next, etc... these increase as n^2 when the core width changes.
Re:Lots of n^2 was changed to n in submission. (Score:2)
High risk in medical terminology means a statistically significant risk higher than average. This means that 1 in 6 babies have a risk that is outside the margin of error. Most likely this means that 1 in 6 babies have a 1 percent chance of brain damage. So roughly 0.16 percent of babies actually have some form of brain damage that can be attributed to coal pollution.
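The parent's arithmetic, spelled out (with the 1% figure being the parent's assumption, not an established fact):

```python
# "1 in 6 babies are high risk" combined with an assumed 1% chance of
# damage within that group gives the overall rate the parent estimates.
high_risk_fraction = 1 / 6
risk_if_flagged = 0.01

overall = high_risk_fraction * risk_if_flagged
# overall is about 0.00167, i.e. roughly 0.17% -- about 1 baby in 600.
```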
I do agree that 1 out of every 600 babies damaged by pollution is
Re:Since this is a dupe (Score:2)
Not sure what you mean here, but if you're talking about my estimate of the costs of exchanging information between cores, remember that this is due to the lack of bypass structures between cores, the need for explicit synchronization code, and the rather inefficient method of sharing data through the cache. Once hardware is dedicated to it, even in la
Re:Since this is a dupe (Score:2)
Basically, in every process generation you have to reduce the length of each wire by 0.7x, or have half as many wires, in order to keep the delay per mm the same, since RC delay increases when scaling wires smaller. The latency of moving data around increases all the time.
Your transistor budget may go up, but the area you can reach at a reasonable clock speed per cycle goes down.
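The parent's rule of thumb, worked through numerically (the 0.7x linear shrink per generation is the classic scaling assumption, not a measured figure):

```python
# Hypothetical per-generation scaling, following the parent's rule of thumb.
scale = 0.7                              # linear feature-size shrink

# Transistor density roughly doubles: same area holds ~1/0.49 = ~2x devices.
transistor_density_gain = 1 / scale**2

# But to keep delay per mm constant, wire lengths must shrink by the same
# 0.7x, so the chip area reachable in one clock cycle roughly halves.
reachable_area = scale**2
```

So the transistor budget and the single-cycle reachable area move in opposite directions -- which is exactly the tension between "one huge shared core" and "many small cores" in this subthread.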
Here's a hint, even in a good condition
Only on slashdot would that have been insightful... (Score:3)
He's a programmer who doesn't need to think about those things.
n^2 or n^3 algorithms (in terms of power and area) are used in MOST parts of the core. So when the guy recommends that in the next generation, instead of having 4 cores, we have a single core, he's suggesting one core which is twice as wide as one of those 4 cores.
A large fraction of code is pointer chasing; a large fraction of code has ILP equal or
Re:Since this is a dupe (Score:1)
Neither feature improves the speed of assembly language programs. Out of order execution does not assist code that has been written to run fast.
On-chip cache does not help such code as much as plain old on-chip memory would.
Therefore Intel's and AMD's focus on on-chip complexity is to favor Windows Benchmark programs.
The fa
Re:Since this is a dupe (Score:2)
Get with the program. Sheesh
Re:Since this is a dupe (Score:1)
Re:Since this is a dupe (Score:5, Informative)
The good thing about in-order execution is that it keeps the actual silicon simple and uses fewer transistors. This keeps costs down, and engineers have more die space to "spend" on other features, such as more cores or more cache.
The bad thing about in-order execution is that your compiled, highly-optimized-for-a-specific-CPU code will only really perform its best on one particular CPU. And that's assuming the compiler does its job well. Imagine a world where AthlonXPs, P4s, P-Ms, and Athlon64s were all highly in-order CPUs. Each piece of software out there in the wild would run on all of them but would only reach peak performance on one of them.
(Unless developers released multiple binaries or the source code itself. While we'd HAVE source code for everything in an ideal world, that just isn't the case for a lot of performance-critical software out there such as games and commercial multimedia software.)
As a programmer, I like the idea of out-of-order execution and the concept of runtime optimization. Programmers are typically the limiting factor in any software development project. You want those guys (and girls) worrying about efficient, maintainable, and correct code... not CPU specifics.
I'd love to hear some facts on the relative performance benefits of runtime/compiletime optimization. I know that some optimizations can only be achieved at runtime and some can only be achieved at compiletime because they require analysis too complex to tackle in realtime.
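On the compile-time side, the core trick a compiler plays for an in-order target is static instruction scheduling: separate dependent instructions so the pipeline doesn't stall waiting on a just-produced value. A minimal sketch in Python -- the `schedule` function and its greedy policy are illustrative inventions, not any real compiler's algorithm, and it assumes the dependence graph is acyclic:

```python
def schedule(instrs, deps):
    """Greedily order instructions so that, where possible, no instruction
    immediately follows the one that produces its operands.
    instrs: list of instruction names, in program order.
    deps:   name -> set of producer names it depends on."""
    done, order, last = set(), [], None
    remaining = list(instrs)
    while remaining:
        # Prefer a ready instruction that doesn't consume the previous result.
        pick = next((i for i in remaining
                     if deps.get(i, set()) <= done
                     and last not in deps.get(i, set())),
                    None)
        if pick is None:
            # No such instruction: accept the stall.
            pick = next(i for i in remaining if deps.get(i, set()) <= done)
        remaining.remove(pick)
        done.add(pick)
        order.append(pick)
        last = pick
    return order

# "b" consumes "a"'s result; the independent "c" is hoisted between them
# so an in-order pipeline has useful work while "a"'s result is in flight.
order = schedule(["a", "b", "c", "d"], {"b": {"a"}})
```

An out-of-order core performs essentially this reordering at runtime, per-CPU and with live latency information, which is why the binary doesn't have to be retuned for each chip.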
Re:Since this is a dupe (Score:2)
That may have mattered in previous iterations of CPU hardware, but haven't the last f
Re:Since this is a dupe (Score:2)
You can have two processors that implement the exact same instruction set, yet have entirely different performance characteristics.
Of course, this happens even with complex out-of-order cores. With simpler, in-order cores, the difference really grows. You need to tightly couple your code (typically via compiler optimizations, unless you're hand-coding a
Re:Since this is a dupe (Score:2)
The Athlon, while instruction-set compatible with previous CPUs, had two multipliers on chip and only one shift
Re:Since this is a dupe (Score:2)
Usually said programmers sell out efficient code claiming that the framework has been tested and worked on by a lot of people, blah blah blah. The truth is that two good programmers will churn out roughly the same number of bugs per 1000 lines of code
Re:Since this is a dupe (Score:3)
Re:Since this is a dupe (Score:2)
I'd seen the odd reference to LLVM in the past, but I'd never seen a succinct description of its benefits until now. Thanks for the informative reply.
Re:Since this is a dupe (Score:2)
Not really. The best case for any in-order processor is to have dependent instructions as far apart from each other as possible. From this state, no amount of re-ordering instructions by an OoO processor will give any performance benefit. Similarly, no in-order pipeline will be particularl
Re:Since this is a dupe (Score:2)
You're assuming that the definition of "dependent instructions" is the same for every in-order processor sharing the same instruction set. I think that's a highly suspect assumption!
Different theoretical in-order x86 CPUs would surely differ in terms of execution units and other factors.
Re:Since this is a dupe (Score:2)
Re:Since this is a dupe (Score:2)
(Unless developers released multiple binaries or the source code itself. While we'd HAVE source code for everything in an ideal world, that just isn't the case for a lot of performance-critical software out there such as games and commerical multimedia software.)"
This isn't an issue that couldn't
Re:Since this is a dupe (Score:2)
Re:Since this is a dupe (Score:2)
You certainly made a great point here, though. To be honest, I'm not sure of the answer. I was banking on it being "both".
I'm going on various (admittedly secondhand) things I've heard about Xbox360/PS3 development along with several whitepapers I've read. Creating c
Re:Since this is a dupe (Score:2)
Re:Since this is a dupe (Score:2)
The real problem with dupes (Score:5, Insightful)
If I see an article I've already read at the top of the page I QUIT READING.
This has happened to me several times over the number of years I've read this site. Then I end up coming back and realizing it was a dupe and that I missed several interesting articles inbetween.
SO FOR THE LOVE OF GOD READ YOUR OWN WEBSITE.
Re:The real problem with dupes (Score:1)
I'm one of those people who just read summaries and decide not to click on the link when it doesn't interest me. Seeing people say "dupe" leads me to think this article was worth posting twice.
Or Ars wasn't pleased with the ad-clicks from the p
Giving up the 'smart compiler' concept? (Score:4, Interesting)
Well, those "better compilers" don't seem to be falling from the sky, and AMD is beating Intel in work/MHz because of it.
Is Intel finally deciding "screw it, we'll make the CPU so smart, that even the crappiest compiled code will run smoothly" ?
Re:Giving up the 'smart compiler' concept? (Score:2)
Two cores? Me likee. (Score:2)
GHz (Score:2)
Re:GHz (Score:1)
However, Conroe has been announced to hit speeds as high as 3 GHz (or higher) for Intel's next Extreme Edition part. We may see speeds that high for the server version of Conroe (Woodcrest) as well.
Power6 delivers where Intel has failed... (Score:1)
Now, frequency isn't everything, but performance scaling is nearly linear if you hold the pipeline depth constant. (And scale
Re:GHz (Score:2, Interesting)
You never could do that in the first place. Within a CPU family, it used to be possible. (With Intel's naming schemes today, I can't do it anymore either!) Compare a P-III 500MHz to a P-III 1GHz and you knew that the latter was approximately twice as fast. A 2GHz AMD Athlon XP was approximately twice as fast as a 1GHz AMD Ath
Moore's law isn't a law at all. (Score:5, Interesting)
Anyway, I do understand a bit about how it all works. OOOE has amazing potential, but in the end you can only optimize things so much. The idea is to break up instructions in such a way that you can effectively multi-thread a task not originally designed for multi-tasking. A neat idea, I must say, with definite potential. However, you will run into a lot of instructions that it can't figure out how to break up, or which can't be broken up to begin with. If they continue to run with this technology they will improve upon both situations, but the nature of machine instructions leads me to believe that this idea may not take them far, to be brutally honest.
Let's not forget that one of the biggest competitors in processors that focus on SIMD is kind of fading now. Apple is going to x86 architecture with all their might (and I must say I'm impressed at how smoothly they are switching -- it's actually exciting most Apple fans rather than upsetting them), and I think I read they will no longer even be producing anything with PowerPC-style chips, which I suppose isn't good for the people who make them (maybe they wanted to move on to something else anyway?). At this point it looks like it's more and more just mobile devices that benefit from this style of chip, primarily because, between their lack of need for higher speeds and their overall design to use what they have efficiently, they use very little power and do what they do well in a segment like that.
Multi-threading, however, is a viable solution today and in the future as well. It just makes sense really. You start to run into limitations on how fast the processor can run, how many transistors you can squeeze on there at once, power and heat limitations, etc.; however, if you stop at those limits and simply add more processors, you don't really have to design the code all THAT well to take advantage of it and keep the growth continuing in its own way. I can definitely see multicore having a promising future with a lot of potential for growth, because even when you hit size limitations for a single core you can still squeeze more in there. Plus, I wonder if multicore couldn't work in a multi-processor setup? If it can't today, won't it in the future? Who knows; there are limits on how far you can go with multi-core, but those limits are far beyond single core's, and I really feel they are more promising than relying on smart execution on a single core running at around the same speed. In the end, a well designed program will be splitting up instructions on an SMP/multicore system much like the OOOE will try to do. While the OOOE may be somewhat better at poorly designed programs (ignoring for a moment the advantages that multithreading provides to a multitasking OS, since even on a minimal setup a bunch of other stuff is running in the background) overa
SIMD going out of style? (Score:1)
Re:SIMD going out of style? (Score:1)
Re:Moore's law isn't a law at all. (Score:1)
Well, I work for a major computer manufacturer (think top 3, they also make very nice printers [market leaders you might say]) and the Enterprise
WHY PREVIEW??? (Score:1)
Re:Moore's law isn't a law at all. (Score:1)
It sounds like you're advocating the oft-mentioned point that games are the main thing that will benefit. Well, this is true, but, there are some business or non-gaming oriented things where people will see the differences as well, and these shouldn't be discounted either. Firstly, we're going to need those things like MMX I guess. MS is determined that o
Re:Moore's law isn't a law at all. (Score:1)
Re:Moore's law isn't a law at all. (Score:3, Informative)
Which has been realized for about the past 20 years. Exploiting Instruction Level Parallelism (which requires an out-of-order-execution processor) has gotten us to where we are today. We're reaching the limits of what ILP can buy us, so the solution is to put more cores on a chip.
It may be possible to integrate OOOE into a multicore.
It is possible, and every single Intel multicore chip has done it. Same with IBM's Power5s. For general-purpose multicore processors, that is the norm.
dupe tagging solution! (Score:3, Insightful)
Alright mod me offtopic, but if
Re:dupe tagging solution! (Score:1)
Not based on PM (Score:2)
Who keeps perpetuating this stupidity, and when did we as a culture lose the ability to look past shiny things shown to us by guys in lab coats? The cores rock, the previous cores rock, they are not the same.
Just because Merom is more like the PM than the P4 means all of squat.