
Inside Intel's Core i7 Processor, Nehalem

Soulskill posted more than 5 years ago | from the upgrades dept.


MojoKid writes "Intel's next-generation CPU microarchitecture, which was recently given the official processor family name of 'Core i7,' was one of the big topics of discussion at IDF. Intel claims that Nehalem represents its biggest platform architecture change to date. This might be true, but it is not a from-the-ground-up, completely new architecture either. Intel representatives disclosed that Nehalem 'shares a significant portion of the P6 gene pool,' does not include many new instructions, and has approximately the same length pipeline as Penryn. Nehalem is built upon Penryn, but with significant architectural changes (full webcast) to improve performance and power efficiency. Nehalem also brings Hyper-Threading back to Intel processors, and while Hyper-Threading has been criticized in the past as being energy inefficient, Intel claims their current iteration of Hyper-Threading on Nehalem is much better in that regard." Update: 8/23 00:35 by SS: Reader Spatial points out Anandtech's analysis of Nehalem.


First Post (-1)

Anonymous Coward | more than 5 years ago | (#24713935)

First Post

Re:First Post (-1, Offtopic)

Anonymous Coward | more than 5 years ago | (#24713955)

Gratz on being the first loser.

Re:First Post (1, Interesting)

Anonymous Coward | more than 5 years ago | (#24714917)

What's with the Hebrew? [72.14.205.104] Nehalem? Are these the chips Mossad uses to accelerate the backdoor access to the Israeli-coded crypto cyphers? :-)

Re:First Post (1)

gnasher719 (869701) | more than 5 years ago | (#24717747)

What's with the Hebrew? Nehalem? Are these the chips Mossad uses to accelerate the backdoor access to the Israeli-coded crypto cyphers? :-)

Nehalem is a small town in Oregon, USA.

yeah, yeah, yeah.. they said this the last time.. (4, Insightful)

Anonymous Coward | more than 5 years ago | (#24713979)

The problem with hyperthreading is that it fails to deal with the fundamental problem of memory bandwidth and latency in the x86 architecture. It's true, some apps will see a 20% or better improvement in performance, but most won't see anything more than a marginal increase.

Still, if you can safely enable Hyper-Threading without slowing down your system (unlike the last time we went through this), we should consider it a success. Hopefully, QuickPath will provide the needed memory improvements.

I for one... (1, Insightful)

Anonymous Coward | more than 5 years ago | (#24714141)

I for one welcome the death of the FSB and all that, but yet again it means a new motherboard, a new CPU socket, and new DDR3 memory. Better save up!

Re:I for one... (0)

turgid (580780) | more than 5 years ago | (#24717069)

I for one welcome the death of the FSB and all that, but yet again it means a new motherboard, a new CPU socket, and new DDR3 memory

I did that a couple of years back. I've gone from a single-core Athlon 64 at 2.0GHz to a dual-core at 2.6GHz, and I'll be going quad-core in a month or two, all with the same motherboard and RAM (and everything else). OK, I might buy some faster RAM, but in theory I could still use the old stuff.

HyperTransport came out in 2003. It's 2008 and Intel has only just got its competitor out.

If Intel has got the hyperthreading right this time (proper SMT, like Sun's Niagara, not the Pentium 4 version), that will be quite a performance advantage, since it will help to hide memory latency.

Re:yeah, yeah, yeah.. they said this the last time (0, Offtopic)

negRo_slim (636783) | more than 5 years ago | (#24714463)

Still, if you can safely enable Hyper-Threading without slowing down your system (unlike the last time we went through this), we should consider it a success.

Aye, I remember the joys of the first HT chips, back when Tom's Hardware was less of a clusterfuck of a webpage. I do remember Intel saying that although it wouldn't be found on the next chips, they did in fact plan on using the technology in one form or another eventually.

Re:yeah, yeah, yeah.. they said this the last time (1)

Jorophose (1062218) | more than 5 years ago | (#24714811)

Yes, and it was included in Atom (which looks very much like a Pentium 4 on a 45nm process) before it was reintroduced in Nehalem.

Re:yeah, yeah, yeah.. they said this the last time (4, Insightful)

thecheatah (977630) | more than 5 years ago | (#24716401)

The problem you describe can also apply to having multiple cores. If you read the article, you'll see they have taken MANY steps to prevent it. For one, they use DDR3 memory. Another thing is that they have much more intelligent prefetching, mixed with the loop-detection thingy (the loop stream detector). The cache size and design itself allows many applications to run.

The problem you describe is also a problem for the OS's scheduler. It should understand the architecture it is running on: the types of caches, the way each processor shares them, etc. Thus, it only makes sense to use Hyper-Threading if 1. you are simply out of cores (even then, the choice to use HT cores is iffy), or 2. a single application has spawned multiple threads. Even then you have to take into account the availability of other cores that share the L2 or L3 cache.

I personally think the intelligent prefetching and the loop-detection thingy need more tests and statistics thrown at them.

Like you say, there are some applications that take advantage of HT; let them take advantage of it, while we write smarter OSes that understand the problems with doing so.

Maybe the processor needs a feedback mechanism so the OS can understand the best way to schedule tasks.

I don't know much about CPUs :-p, just what I've read and learned in school.
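
As a concrete illustration of the topology awareness being asked for here, a minimal Linux-specific sketch (my own, not anything from the article): the kernel publishes which logical CPUs are Hyper-Threading siblings, and a topology-aware program or scheduler can read that and place threads accordingly.

<ecode>
/* ht_topology.c -- Linux-specific sketch: discover which logical CPUs
 * are Hyper-Threading siblings, and pin the calling thread to one
 * logical CPU. Illustrative only.
 * Build: gcc -O2 -pthread ht_topology.c -o ht_topology
 */
#define _GNU_SOURCE
#include <sched.h>
#include <pthread.h>
#include <stdio.h>

/* Print the logical CPUs that share a physical core with `cpu`. */
static void print_siblings(int cpu)
{
    char path[128], buf[64];
    snprintf(path, sizeof path,
             "/sys/devices/system/cpu/cpu%d/topology/thread_siblings_list",
             cpu);
    FILE *f = fopen(path, "r");
    if (f && fgets(buf, sizeof buf, f))
        printf("cpu%d shares a core with logical CPUs: %s", cpu, buf);
    if (f)
        fclose(f);
}

/* Pin the calling thread to one logical CPU, e.g. to keep two busy
 * threads on separate physical cores rather than on two HT siblings. */
static int pin_to_cpu(int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof set, &set);
}

int main(void)
{
    print_siblings(0);
    if (pin_to_cpu(0) == 0)
        printf("pinned to cpu0\n");
    return 0;
}
</ecode>

Run on an HT-capable machine, print_siblings(0) shows which logical CPU shares cpu0's physical core, which is exactly the information a scheduler needs before it decides to co-schedule two threads there.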

Re:yeah, yeah, yeah.. they said this the last time (4, Informative)

TheRaven64 (641858) | more than 5 years ago | (#24717407)

The problem with hyperthreading is that it fails to deal with the fundamental problem of memory bandwidth and latency

The entire point of SMT (of which HT is an implementation) is that it helps hide memory latency. If one thread stalls waiting for memory, the other gets to use the CPU. Without SMT, a cache miss stalls the entire core. With SMT, it stalls one context, but the other can keep executing until it hits a cache miss of its own, which hopefully doesn't happen until the first one has resumed.
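
To make the latency-hiding effect concrete, here is the classic pointer-chasing experiment as a minimal sketch (mine, not the poster's; Linux, GCC, and pthreads assumed, and the CPU numbers in the taskset example below are hypothetical and machine-dependent). Each thread walks a randomly permuted cycle, so nearly every load is a cache miss; if two such threads share one SMT core, the aggregate rate should be well above a single thread's, because each thread's stalls leave the core free for the other.

<ecode>
/* smt_chase.c -- illustrative sketch: each thread walks a randomly
 * permuted cycle, so nearly every load is a cache miss.
 * Build: gcc -O2 -pthread smt_chase.c -o smt_chase
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NODES (1 << 23)   /* 8M entries (64 MB), far larger than any cache */
#define STEPS (1 << 22)   /* dependent loads per thread */

static size_t *next_idx;  /* next_idx[i] = successor of node i */

static void build_random_cycle(void)
{
    size_t *perm = malloc(NODES * sizeof *perm);
    next_idx = malloc(NODES * sizeof *next_idx);
    if (!perm || !next_idx) exit(1);
    for (size_t i = 0; i < NODES; i++) perm[i] = i;
    for (size_t i = NODES - 1; i > 0; i--) {        /* Fisher-Yates shuffle */
        size_t j = (size_t)rand() % (i + 1);
        size_t t = perm[i]; perm[i] = perm[j]; perm[j] = t;
    }
    for (size_t i = 0; i < NODES; i++)              /* link into one big cycle */
        next_idx[perm[i]] = perm[(i + 1) % NODES];
    free(perm);
}

static void *chase(void *arg)
{
    size_t p = (size_t)arg;                         /* per-thread start node */
    for (long s = 0; s < STEPS; s++)
        p = next_idx[p];                            /* serialized cache misses */
    return (void *)p;                               /* defeat dead-code elimination */
}

int main(int argc, char **argv)
{
    int nthreads = (argc > 1) ? atoi(argv[1]) : 2;
    if (nthreads < 1 || nthreads > 16) return 1;
    pthread_t tid[16];
    build_random_cycle();
    for (int i = 0; i < nthreads; i++)
        pthread_create(&tid[i], NULL, chase,
                       (void *)((size_t)i * 1234567u % NODES));
    for (int i = 0; i < nthreads; i++)
        pthread_join(tid[i], NULL);
    printf("%d thread(s) x %d dependent loads each\n", nthreads, STEPS);
    return 0;
}
</ecode>

Compare "time ./smt_chase 1" against two threads pinned to the two hardware threads of one physical core (for example "taskset -c 0,4 ./smt_chase 2"; the sibling numbering varies by machine).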

Re:yeah, yeah, yeah.. they said this the last time (1)

nicuramar (869169) | more than 5 years ago | (#24718189)

Of course you do realize that there have been quite a lot of improvements in the front end, resulting in drastically improved memory bandwidth? I believe this is part of the justification for bringing SMT back to their CPUs. Also, QuickPath doesn't really compare directly to anything in Core, since Core parts have an off-die common memory controller.


DNF (2, Funny)

suck_burners_rice (1258684) | more than 5 years ago | (#24713985)

Meh. I'm still waiting for multicore quantum computing. Or at least something that can execute code that doesn't exist yet, so I can play Duke Nukem Forever. Actually, what I really want is a processor that can execute code by its spirit, rather than its letter, so buggy code will work correctly anyway. :-)

Re:DNF (4, Funny)

the eric conspiracy (20178) | more than 5 years ago | (#24714391)

You probably also want a user interface that does what you mean, not what you said.

Re:DNF (1)

Fred_A (10934) | more than 5 years ago | (#24717721)

You probably also want a user interface that does what you mean, not what you said.

There might still be tarballs of DWIM floating around. Maybe he could port that.

The name is still dumb. (2, Insightful)

Anonymous Coward | more than 5 years ago | (#24713995)

'nuff said?

Re:The name is still dumb. (2, Insightful)

Anonymous Coward | more than 5 years ago | (#24716267)

As a matter of fact, the technology was called Simultaneous Multithreading (SMT) when it was developed by Digital Equipment and the University of Washington, long before Intel marketeers got their hands on it.

That old question (2, Funny)

AndroidCat (229562) | more than 5 years ago | (#24714063)

Is it 3.999999999 more accurate?

Re:That old question (0)

Anonymous Coward | more than 5 years ago | (#24714625)

Don't bring up that old joke. I'm typing this on a computer with a Core i6.99999999999999 RIGHT NOW and it's perfectly accurate!

only the super high desktops have QuickPath and (4, Interesting)

Joe The Dragon (967727) | more than 5 years ago | (#24714077)

Only the super-high-end desktops have QuickPath and triple-channel DDR3, and the bigger joke is that there will be two different single-CPU desktop sockets.

The mobile parts will not have QuickPath either.

All AMD CPUs use HyperTransport, all AMD desktops use the same socket, and the upcoming AM3 CPUs will work in the older AM2+ boards. Also, with AMD you can choose from more than one chipset vendor, while with Intel it looks like you will be locked into an Intel chipset.

Re:only the super high desktops have QuickPath a (5, Funny)

doyoulikeworms (1094003) | more than 5 years ago | (#24714249)

I'm pretty sure the parent post was written by a machine. Turing test: failed.

Re:only the super high desktops have QuickPath a (4, Interesting)

beakerMeep (716990) | more than 5 years ago | (#24714475)

Take a deep breath. It's OK if AMD and intel both have good chips. The question really comes down to the brand of salsa anyways.

Re:only the super high desktops have QuickPath a (1, Funny)

Anonymous Coward | more than 5 years ago | (#24714483)

>Only the super-high-end desktops have QuickPath and triple-channel DDR3

So, you're saying that Intel is also supplying marijuana with these systems?

Sold!

Re:only the super high desktops have QuickPath a (3, Insightful)

moozh84 (919301) | more than 5 years ago | (#24714595)

You won't be locked into an Intel chipset. Obviously NVIDIA will be making chipsets for Nehalem processors. So with Intel processors you will have Intel and NVIDIA chipsets. With AMD processors you will have AMD and NVIDIA chipsets. It won't be much different than it currently is, except most likely VIA will completely drop out of the market in favor of other ventures.

Re:only the super high desktops have QuickPath a (1)

Jorophose (1062218) | more than 5 years ago | (#24714827)

Obviously? I really doubt NVIDIA will be able to make chipsets for Intel. And if they can, they'll be crap ones, even worse than typical nForces, because they won't have QuickPath.

Re:only the super high desktops have QuickPath a (1)

darien (180561) | more than 5 years ago | (#24716041)

Nvidia won't be competing with the initial X58 chipset, but they do plan to start supporting Nehalem at some point after launch.

Re:only the super high desktops have QuickPath a (4, Interesting)

hairyfeet (841228) | more than 5 years ago | (#24714695)

Actually, I don't know if they are cutting their own throat or not, but I have noticed I'm building a lot more AMD machines lately. And for the first time since the old K6-2 (IIRC, they were the 400MHz ones), I am actually looking at building an AMD box for myself. The price of AMD dual-cores has just gotten so cheap that I can cut a good 35% off the cost by going AMD. And for most folks the X2 series has enough power that anything more is frankly overkill. But as always, this is my $0.02, YMMV.

Re:only the super high desktops have QuickPath a (2, Informative)

jessedorland (1320611) | more than 5 years ago | (#24716187)

At this point CPU brands don't matter much, because they are as fast as we need them to be. An OS such as Windows does not fully use all the cores of a CPU, and most games are not designed to benefit from dual-core or quad-core processors.

Re:only the super high desktops have QuickPath a (2, Informative)

RightSaidFred99 (874576) | more than 5 years ago | (#24715117)

Unfortunately, AMD's "advanced technology" in HyperTransport doesn't help them win anywhere but in multi-socket servers. Intel's FSB is plenty sufficient for single-socket desktops. So... what's your point again?

Re:only the super high desktops have QuickPath a (1)

gnuman99 (746007) | more than 5 years ago | (#24716371)

Really? Try a quad-core with some memory-intensive apps.

Re:only the super high desktops have QuickPath a (1)

nicuramar (869169) | more than 5 years ago | (#24718209)

Only the super-high-end desktops have QuickPath and triple-channel DDR3, and the bigger joke is that there will be two different single-CPU desktop sockets.

The mobile parts will not have QuickPath either.

...but then, they won't have as much to use QuickPath for either.

Power efficiency is the new "it" (5, Interesting)

Kjella (173770) | more than 5 years ago | (#24714087)

Nehalem is really the realization of what many Slashdotters have claimed before: the typical user doesn't need that much more performance. Both datacenters and laptop users ask for the same thing, power efficiency, and Intel delivers. The Atom is another part of the strategy, even though it's currently coupled with a very inefficient chipset.

The thing is, today we have the knowledge and complexity to fire up kilowatt systems and more, but they're costly to run. Certainly there are the extreme hardcore gamers who won't mind running the hottest, most power-hungry quad-CrossFire system, but they're few and far between. Laptop users think battery life. Desktop users think electricity costs. The result is Nehalem, which promises to deliver a lot more performance per watt.

If the practice is as good as the theory, AMD is unfortunately in deep shit. They've always been good at delivering OK processors at an OK price, but power efficiency has really only been their strength compared to the NetBurst (P4) processors, not the P3 or the Cores. If it amounts to "yeah, your processors are cheaper, but they cost more to operate", things will fall apart, which is sad since ATI is really doing fine. The 48xx series are kick-ass cards; I just hope they can keep up the competition against Intel...

AMD is big on cost and with Intel forcing you to (0)

Joe The Dragon (967727) | more than 5 years ago | (#24714179)

AMD is big on cost, and with Intel forcing you to use their chipset, costs will go up. You can get an AMD 790GX/780G board with SidePort RAM for about $100 and up (less for boards without it), and GeForce boards with good onboard video are the same price. Add 4GB of RAM for under $100, with quad-cores starting at $150, triple-cores at about $100, and dual-cores at $50, and you can build a nice system for a low cost. A board with 64-128MB of onboard video RAM will be good for Vista, and is better than an Intel board that uses system RAM and has slower onboard video.

780G is also very power efficient (4, Insightful)

tknd (979052) | more than 5 years ago | (#24714347)

See here [tomshardware.com]

I know it's a Tom's Hardware article, but the results are consistent with what people have been posting in the Silent PC Review forums. I do think that with a better chipset and a laptop-style power supply the Atom platform could get down below 20 watts, but for now Intel is not making those boards, or even allowing Atom platforms to have fancy features like PCI Express. In fact, with the older AMD 690G chipset, some people at Silent PC Review were able to build sub-30-watt systems.

Re:780G is also very power efficient (1)

Bert64 (520050) | more than 5 years ago | (#24717545)

The Eee 901 seems to draw somewhat less than 20 watts when running from AC...

Re:AMD is big on cost and with intel forceing you (0)

Anonymous Coward | more than 5 years ago | (#24716895)

Uh, could you repeat that in English, using more than one period, instead of one big, long, incomprehensible run-on sentence full of spelling errors? As much as I try, my parser is choking on it.

Re:Power efficiency is the new "it" (0, Troll)

NicknamesAreStupid (1040118) | more than 5 years ago | (#24714201)

You obviously haven't run Vista (smart move). It is going to take more cores than an Apple orchard to get that OS off the ground. In this fat world of ours, Vista has the BMI of a googolplex.

Re:Power efficiency is the new "it" (4, Funny)

Anonymous Coward | more than 5 years ago | (#24714305)

Here we go, jumping the gun before we hear what Jerry has to say...

Re:Power efficiency is the new "it" (-1)

Anonymous Coward | more than 5 years ago | (#24714641)

Jerry should start with... morons that can't get Vista up and running with moderate hardware are just that. :D

My hardware is above average, but not close to cutting edge. The Vista box I have flies. Yes, I have tweaked it a bit. Sooo, either I am a genius (supra-genius), or I am a beautiful and unique snowflake. Is Vista perfect? No way. Is it total crap? No way. But you couldn't tell from the /. fanboi base.

Sorry, the FUD that goes through here irks me sometimes.

Re:Power efficiency is the new "it" (5, Insightful)

DigiShaman (671371) | more than 5 years ago | (#24714263)

I've always thought that the biggest problem with AMD was the fact that their marketing is non-existent. Maybe they should start an "AMD Inside" campaign similar to Intel's. All I know is that their brand name is fading into oblivion... and fast.

Re:Power efficiency is the new "it" (2, Insightful)

Pulzar (81031) | more than 5 years ago | (#24715645)

Intel has money to burn, so they can afford prime-time TV commercials... The question is -- is the return on investment worth it? Your average Joe will buy whatever Dell/HP offers them in the right price range. The ones who are looking for a specific CPU are generally informed enough not to be swayed by TV commercials.

Re:Power efficiency is the new "it" (-1, Troll)

canuck57 (662392) | more than 5 years ago | (#24714471)

Nehalem is really the realization of what many Slashdotters have claimed before: the typical user doesn't need that much more performance. Both datacenters and laptop users ask for the same thing, power efficiency, and Intel delivers. The Atom is another part of the strategy, even though it's currently coupled with a very inefficient chipset.

True. The users have jet engines on Volkswagen chassis right now. For Vista? Whoa, that is nuts. I want more performance in the new processors without the OS baggage, thank you.

The thing is, today we have the knowledge and complexity to fire up kilowatt systems and more, but they're costly to run. Certainly there are the extreme hardcore gamers who won't mind running the hottest, most power-hungry quad-CrossFire system, but they're few and far between. Laptop users think battery life. Desktop users think electricity costs. The result is Nehalem, which promises to deliver a lot more performance per watt.

If you are a laptop user, an X2 (AMD) is by far the best. Video chipset aside, get NVIDIA, as ATI sucks. Mind you, I haven't bought ATI for a while, what with their anti-open-source stance and poor driver support for products like the ATI Video Blunder.

If the practice is as good as the theory, AMD is unfortunately in deep shit. They've always been good at delivering OK processors at an OK price, but power efficiency has really only been their strength compared to the NetBurst (P4) processors, not the P3 or the Cores. If it amounts to "yeah, your processors are cheaper, but they cost more to operate", things will fall apart, which is sad since ATI is really doing fine. The 48xx series are kick-ass cards; I just hope they can keep up the competition against Intel...

Actually, I think AMD produces good processors and blundered by acquiring ATI. ATI, while real good last century, lost it. They became anti-Linux, support in OSes like BSD and Solaris dwindled, and they wouldn't even answer emails. The ATI Video Blunder just topped the cake.

I will not buy ATI until the package says Solaris, Linux and Windblows supported.

BTW, you should get Firefox; it tells you where your spelling mistakes are and is a safer, more portable browser. 3.0 is really a hoot.

Re:Power efficiency is the new "it" (0)

Anonymous Coward | more than 5 years ago | (#24714589)

AMD, now owning ATI, has gone ahead and started to open up ATI cards. They're much more open-source friendly than NVIDIA now.

Re:Power efficiency is the new "it" (5, Interesting)

Kneo24 (688412) | more than 5 years ago | (#24714755)

You are behind the times. ATI cards, as far as price vs. performance goes, are spanking NVIDIA's cards with moon rocks. I think a big helping hand in that is that, for whatever reason, AMD said to them, "make better drivers, or else!"

Also, AMD has gone the route of trying to be more open-source friendly with their cards, more so than NVIDIA.

You just can't go wrong with owning a current-generation Radeon card right now.

Re:Power efficiency is the new "it" (1, Informative)

Anonymous Coward | more than 5 years ago | (#24717447)

I have been using NVIDIA graphics hardware for the past 2+ years (before that I had an ATI 9600 XT, another good value-for-money card at the time, and more NVIDIA cards going back to the pre-GeForce days).

Recently I got myself an ATI 4850 card, primarily because of the open-sourcing of the drivers.

I also got a 4870 card for a friend of mine (gaming + office-related work).

I run Vista on my system, whereas my friend dual-boots Vista/XP.

We have both had blue screens due to the driver at least once so far (running Catalyst 8.8, the latest), and under Vista the system has had to recover from graphics driver issues.

It is nice to have a good piece of hardware that is very good value for money, but the current Windows drivers have not been very stable so far (both XP and Vista).

As I don't do much graphics work in Linux, I can't comment on that.

Re:Power efficiency is the new "it" (1)

canuck57 (662392) | more than 5 years ago | (#24717863)

You are behind the times. ATI cards, as far as price vs. performance goes, are spanking NVIDIA's cards with moon rocks. I think a big helping hand in that is that, for whatever reason, AMD said to them, "make better drivers, or else!"

Also, AMD has gone the route of trying to be more open-source friendly with their cards, more so than NVIDIA.

You just can't go wrong with owning a current-generation Radeon card right now.

Nice sell. And yes, I did suspect my original post was going to get modded -1.

But the fact remains: has ATI released code or a how-to for, say, the ATI TV Wonder USB 2.0? Last I checked, the answer was no. There were plenty who bought it only to find bad drivers; it would not even work with MCE! But I just checked, and there is finally some better support... unfortunately a year too late for me, as I junked it.

Maybe AMD changed ATI? Just before ATI was bought, they did a pretty crappy job of supporting Linux and the *BSDs. I still remember the rant at the time that open source developers wouldn't even get an email answer from them. How quickly we forget.

I know AMD, the underdog, now owns ATI, but it didn't do anything for their share price either. ATI is not on my buy list until I see the portability, price and performance for myself.

Re:Power efficiency is the new "it" (1)

blahplusplus (757119) | more than 5 years ago | (#24714887)

"The thing is, today we have the knowledge and complexity to fire up kilowatt systems and more - but they're costly running. Certainly there's the extreme hardcore gamers who won't mind running the hottest, most powerhungry quad crossfire system, but they're few and far between."

I think this is a misinformed statement personally, not intentionally as a slant against you but, gamers are one of the few driving the technology in many key area's of research : World simulation, A.I., etc, "Games" are misnomers for the enormous amount of subjects in which "games" (simulations) are advancing our knowledge by leaps and bounds. Not only that many of us contribute our CPU power to mass distributing computing projects (Set@home, folding@home, etc) that help the people who are designing massive parralel internet computing via GPU's and CPU's in it's own right, which is really in it's infancy. I'd love to see shared computing in OS's by default and turned on for things like folding@home, and things we genuinely need like more scientific research, with an "opt out" button, should anyone not want to do so.

Much of the CPU/GPU power in the world goes unused for the electricity they ocnsume. I'd love to see when computers are idle in such a way that they are naturally used to solve problems by default when the computer is idling.

Most people are too stupid or ignorant to figure out how to donate their time or setup internet computing to help speed up research in many areas. I've wondered why microsoft hasn't done this with their screen-savers, with certain organizations like medicine, biology, physics, enregy, etc.

Next, all games are serious hardcore engineering and simulations. I've thought about modelling economic phenomena via sattelite and have it read directly into a 'game', so that we can see it in real time and using susbtitution study the flows of money as fields of energy. So you can actually observe the behaviour of money and slow down the transactions over time, like how you can with an MP3 or wave file with MP3 playing software that allows you to adjust the speed of the song, pause, go back and forth in time.

There is not enough visualization of what is going on IMHO in many areas of research, math is merely a description of geometric and spatial relationships in the real world, anything that has structure of any kind (information, etc) is geometric whether this is realized or not, the fact is most people are not great at symbolic processing, but they are very good at what comes naturally: Vision.

Our computers IMHO are in the dark ages, butttons, widgest, etc... they can't recognize our voices, predict what we will type or say, and most importantly they can't even act as secretaries or organizers of our life...

In short the suck! Computers will one day be able to do teh job of secretaries, and clerks, and wouldn't that be great if programming reached such an amazing level that we could have software agent's do the grunt work for us instead of having to waste time doing all the boring shit because the computer is too stupid?

There is never enough computing power, and people who think so are painfully naive. computing power = more powerful applications = more power to make things easier to use and asbtract away the machine and have hte machine interact in more human and autonomous ways instead of a machine a slave to fixed programming.

Sooner or later we will have programs that program themselves, evolve and adapt themselves, and we will be amazed at the stuff that they can do, this won't happen without the hardware.

Re:Power efficiency is the new "it" (0)

Anonymous Coward | more than 5 years ago | (#24716019)

math is merely a description of geometric and spatial relationships in the real world

You have it backwards... and this paragraph ruins your speech. You're like one of those people who try to come up with a new theory of light particles because it "makes sense" to them.

Re:Power efficiency is the new "it" (-1, Flamebait)

blahplusplus (757119) | more than 5 years ago | (#24716769)

"You have it backwards... and this paragraph ruins your speech. You're like one of the people who tries to come up with a new theory of light particles because it "makes sense" to yourself."

I am a mathematician doing research in the areas of logic and cognitive linguistics thank you very much. I do research and I'm far far ahead of your lower cognitive status. Tell me when you look at a piece of paper with a black square on it, how do you know the distinct square is different from the all white paper? Distinction is the creation of the concept of object, all conceptions are derived from the world of nature's geometry. If you don't believe such you are simply ignorant and quite mad and miseducated. Most importantly I can demonstrate it logically. Come wise one, come to our group... let us see how wise you are.

http://groups.yahoo.com/group/lawsofform/ [yahoo.com]

"I know you won't believe me, but the highest form of Human Excellence is to question oneself and others."--Socrates

http://www.lawsofform.org/ [lawsofform.org]
http://www.boundarymath.org/ [boundarymath.org]

Re:Power efficiency is the new "it" (3, Interesting)

distantbody (852269) | more than 5 years ago | (#24716001)

Nehalem is really the realization of what many Slashdotters have claimed before... ...power efficiency, and Intel delivers.

Putting the cringe-worthy PR tone aside (are you connected to Intel in any way?), the lowest-clocked 'mainstream desktop' Bloomfield CPU (running at 2.66 GHz, 45nm, quad-core) has a TDP of 130W! Now, efficient or not, that is one hot-and-sweaty processor, which makes me wonder: if Nehalem truly does have '1.1x~1.25x / 1.2x~2x the single/multi-threaded performance of the latest Penryn ('Yorkfield', 2.66GHz, 45nm, quad-core, 95W TDP) at the same power level', why wouldn't they let the efficiency gains carry the performance increase for the same TDP?

Look, I may or may not be missing something, but I have been reading plenty of (uncomfortably positive, perhaps bankrolled) material on Nehalem, yet I can't shake the perception that, with a huge TDP increase, the return of hyperthreading and the cannibalization of L2 cache for L3 cache, Nehalem seems far more Pentium 4 than Penryn.

Not on the desktop it isn't (5, Informative)

Chemisor (97276) | more than 5 years ago | (#24717641)

> Desktop users think electricity costs.

Bullshit. The difference between a 130W Nehalem and a 65W Core 2 is 65W. That works out to about 1.56 kWh per day, which is 11 cents per day (at 7c/kWh) or $39/year if you run the computer 24/7. Most people turn the computer off when it's not in use, and 8 hours per day is more likely: about 3 cents per day and maybe $10/year. I'd say the cost is entirely negligible, especially when you compare it to your $80/month Comcast bill.

Here we go again (2, Interesting)

PingXao (153057) | more than 5 years ago | (#24714185)

Hyperthreading. I thought I was getting an ultra-tech processor when I bought my Dell 8400 some years back, with its 3.2 GHz hyperthreaded, power-sucking P4. Once all the reviews and independent technical evaluations and benchmarks were in, it was revealed that outside of a few niche application areas, hyperthreading wasn't all that great.

It's a good sign that Nehalem is also focusing on lowering power usage, the reason Intel finally had to abandon its Tejas plans (the old 8400's Prescott P4 was a juice junkie). But why return to a feature like hyperthreading that has been thoroughly debunked? New software being written is still struggling with SMP, multiple cores, and threads running in parallel. Why gum up the works even more with a questionable feature? It makes very little sense to me.

One justification would be if it had the potential to significantly reduce rendering times in animation and CGI applications. I thought Intel's mid-term plans were to go towards many-core processors (many more than 4 or even 8). Maybe hyperthreading is just a way to kick software designers in the arse, because software that can really take advantage of multi-threading is scarce. It's really quite amazing how much the hardware has outstripped the ability of software to keep up.

Re:Here we go again (5, Interesting)

Traiano (1044954) | more than 5 years ago | (#24714333)

Don't assume that because Hyper-Threading failed with NetBurst, it is forever doomed to fail. The primary problem with that architecture was that stages along the pipeline didn't support multiple threads, so any thread context switch forced a flush of NetBurst's very, very long pipeline. Intel's next generation of pipelines tracks multiple threads at all stages, which makes the prospect of HT much more attractive.

Re:Here we go again (3, Interesting)

Waffle Iron (339739) | more than 5 years ago | (#24714997)

Hyperthreading can make a lot of sense in some circumstances. Sun pushed hyperthreading to its limits to achieve very impressive energy efficiency for certain niche workloads with its Niagara CPUs and derivatives. (IIRC, up to 128 threads per chip.)

Re:Here we go again (3, Informative)

salimma (115327) | more than 5 years ago | (#24716545)

8 threads per core in Niagara 2; you get up to 64 threads, as the chip is available with 4, 6 or 8 cores.

Re:Here we go again (5, Insightful)

tftp (111690) | more than 5 years ago | (#24714363)

It's really quite amazing how much the hardware has outstripped the ability of software to keep up.

It's not amazing at all. Most desktop applications are single-threaded because you, the operator, are single-threaded. MS Word could enter words on all 100 pages of your book simultaneously, but you aren't able to produce them. An audio player could decode and play 100 songs to you at the same time, but you want to listen to one song at a time...

I can see niche desktop applications where multiple threads are of use. For example, GIMP (or Paint.NET or Photoshop) could apply your filter to 100 independent squares of the photo if you have 100 cores. However, the gain would be tiny, the extra coding labor would be considerable, and you still need to stitch these squares together... all to gain a second or two on a rare filter operation?

The most effective use of multiple cores today is either in servers or in finite-element modeling applications.
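
For what it's worth, the stitching cost can be almost nothing if each thread writes a disjoint band of one shared buffer. A minimal pthreads sketch of my own (not from the comment), assuming a simple 8-bit grayscale image and a per-pixel brightness filter:

<ecode>
/* tile_filter.c -- sketch: apply a per-pixel filter in parallel by
 * splitting the image into horizontal bands, one per thread.
 * No stitching needed: the bands are disjoint slices of one buffer.
 * Build: gcc -O2 -pthread tile_filter.c -o tile_filter
 */
#include <pthread.h>
#include <stdio.h>

#define W 4096
#define H 4096
#define NTHREADS 4

static unsigned char img[W * H];   /* 8-bit grayscale, shared buffer */

struct band { int y0, y1; };

static void *brighten(void *arg)
{
    struct band *b = arg;
    for (int y = b->y0; y < b->y1; y++)
        for (int x = 0; x < W; x++) {
            int v = img[y * W + x] + 40;       /* the "filter" */
            img[y * W + x] = (v > 255) ? 255 : v;
        }
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    struct band bands[NTHREADS];
    for (int i = 0; i < NTHREADS; i++) {
        bands[i].y0 =  i      * H / NTHREADS;   /* disjoint row ranges */
        bands[i].y1 = (i + 1) * H / NTHREADS;
        pthread_create(&tid[i], NULL, brighten, &bands[i]);
    }
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    printf("done\n");
    return 0;
}
</ecode>

The catch, as noted elsewhere in this thread, is that a point operation like this is memory-bound, so past a few cores the DRAM, not the CPU, becomes the limit.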

Re:Here we go again (5, Insightful)

Anonymous Coward | more than 5 years ago | (#24714563)

It's not amazing at all. Most desktop applications are single-threaded because you, the operator, are single-threaded....

That's a pretty simplistic view. Other than the obvious historical reasons, I believe that most applications are single-threaded because the languages and tools for writing non-trivial, robust multi-threaded applications are lagging far behind the capability to run them.

Re:Here we go again (3, Insightful)

Mycroft_VIII (572950) | more than 5 years ago | (#24714915)

Games, and 3D rendering in general; games are a big, common app that can utilize good multi-threading.
And multiple cores? The OS alone runs many things at once; then you've got your drivers, the applications, the widgets, the viruses (hey, they're processes too, some people just have a bit of prejudice :)), the BitTorrent client running in the background, and the list goes on.

Mycroft

Re:Here we go again (1)

AaronLawrence (600990) | more than 5 years ago | (#24716255)

You've trotted out the same old arguments.

Games are in fact one of the ONLY things on consumer PCs that make heavy use of the hardware. Some people also edit video, or play HD video on their desktop. A small fraction do other 3D tasks. Of course these particular apps can use lots of CPU, but they always have.

The rest of it is trivial. In case you hadn't noticed, most modern OSes sit there using less than 1% of the CPU most of the time. Sure, there are occasional bursts of activity, but these are rare and usually related to other things that are already demanding.

Even the viruses are usually limited far more by the internet connection (how much spam can they send) than by the CPU.

Re:Here we go again (1)

Mycroft_VIII (572950) | more than 5 years ago | (#24716575)

Oh, I wasn't arguing it's necessary, just useful and more efficient to have multiple threads running at the same time.
And modern GPUs can help a lot with some of those tasks.
I probably should have pointed out that my own perspective might be a tad skewed, as I run 3D rendering apps (well, Poser mostly) that can easily peg all four cores and slam my RAM, so naturally I'm all for bigger, better, faster, cheaper in computers.
Besides, look at the laser: it was a solution looking for a problem at first, and it has since found a few nails.

Mycroft

Re:Here we go again (1)

serviscope_minor (664417) | more than 5 years ago | (#24716337)

For example, GIMP (or Paint.NET or Photoshop) could apply your filter to 100 independent squares of the photo

I think GIMP does. On my machine, it's been using more than 100% CPU (i.e., more than one core) according to top. Anyone else noticed this with recent versions? If you're editing 250-megapixel images, it makes a big difference.

multi-cores for the i/o (1)

Joseph_Daniel_Zukige (807773) | more than 5 years ago | (#24716577)

One core for the mouse, one core for the display... no, make that two. Two cores for the SATA and four cores for the USB 3.

That's the way the monkey goes, ...

Re:Here we go again (0)

Anonymous Coward | more than 5 years ago | (#24717049)

It's not amazing at all. Most desktop applications are single-threaded because you, the operator, are single-threaded.

So you're saying I can't read Slashdot and listen to music at the same time?

Re:Here we go again (3, Insightful)

TheRaven64 (641858) | more than 5 years ago | (#24717451)

It's not amazing at all. Most desktop applications are single-threaded because you, the operator, are single-threaded. MS Word could enter words on all 100 pages of your book simultaneously, but you aren't able to produce them.

Absolute nonsense. Most applications have inherently parallel workloads that are implemented in sequential code because context switching on x86 is painfully expensive.

Consider your example of a word processor. It takes a stream of characters and commands. It runs a spelling checker, and possibly a grammar checker, in the background. It runs a layout and pagination algorithm. Both of these can also be subdivided into parallel tasks. If you insert an image, it has to decode the image in the background. Then we get to the UI, updating the view of the document via scrolling and so on while the model is not being modified.
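
As a sketch of the shape this takes (my illustration, not TheRaven64's code; the "dictionary lookup" is a stand-in): the editing thread pushes each word onto a small queue and a background thread consumes it, so typing never waits on the checker.

<ecode>
/* spellq.c -- sketch: hand work from an input thread to a background
 * checker thread via a mutex/condvar queue, the usual shape of
 * "spell check in the background" in a word processor.
 * Build: gcc -O2 -pthread spellq.c -o spellq
 */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define QSIZE 16

static char queue[QSIZE][64];
static int head, tail, count, done;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

static void submit(const char *word)   /* called by the "UI" thread */
{
    pthread_mutex_lock(&lock);
    /* note: no full-queue handling; a real editor would block or grow */
    strncpy(queue[tail], word, 63);
    tail = (tail + 1) % QSIZE;
    count++;
    pthread_cond_signal(&nonempty);
    pthread_mutex_unlock(&lock);
}

static void *checker(void *arg)        /* background spell checker */
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0 && !done)
            pthread_cond_wait(&nonempty, &lock);
        if (count == 0 && done) { pthread_mutex_unlock(&lock); break; }
        char word[64];
        strcpy(word, queue[head]);
        head = (head + 1) % QSIZE;
        count--;
        pthread_mutex_unlock(&lock);
        printf("checked: %s\n", word); /* stand-in for a dictionary lookup */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, checker, NULL);
    submit("Nehalem");
    submit("hyperthreadding");
    pthread_mutex_lock(&lock);
    done = 1;                          /* tell the checker to drain and exit */
    pthread_cond_signal(&nonempty);
    pthread_mutex_unlock(&lock);
    pthread_join(tid, NULL);
    return 0;
}
</ecode>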

Re:Here we go again (1)

maraist (68387) | more than 5 years ago | (#24717697)

As an enterprise multi-threaded programmer, I'm going to beg to differ... The reason being that people THINK single-threaded when they program. I know because I've had to retrain lots of entry-level programmers who give little to no thought to race conditions or synchronized routines.

But also, in an MT environment there is a LOT that even a trivial text editor can be doing in the background. The more complex the task you are performing, the more real-time analyzers editor writers can dream up.

Take MS Word: you have grammar checking, but what about background Googling to do FACT checking? In programming we have Ctrl-Space, and in bash we have tab completion. This is all easy enough to do in a single thread, but what if the data isn't deterministic? What if it's contextually relevant? Then having a background thread do deep-seated analysis after every keystroke makes the completion operation more than just "redo this thing I told you to remember"; it's a literal co-pilot, where you can trust a separate mind to make a small decision for you, one that is submitted for your approval.

This is a VERY complex task, and is obviously highly sensitive to the nature of your work. But it applies to story writing, formal documentation, technical specifications, functional specifications, test plans, business plans, Excel data analysis, and emailing. To say nothing of programming.

I have 4 gigs of memory and the fastest, widest set of CPUs I can get my hands on, and that's MOSTLY because of my trivial text editor (for which I often fall back to vim if I'm remote, or gedit/kompare). And I do ZERO graphics work.

Tools are limited only by our imaginations. AI will never take off if people don't harness these abilities in small, practical, everyday ways.

Re:Here we go again (0)

Anonymous Coward | more than 5 years ago | (#24718123)

It's not amazing at all. Most desktop applications are single-threaded because you, the operator, are single-threaded.

Who says that the application that I'm looking at is the only one? Have you done a 'ps' on your system lately? How many PIDs come up?

Re:Here we go again (0)

Anonymous Coward | more than 5 years ago | (#24718191)

because you, the operator, are single-threaded. MS Word... An audio player...

With two programs running at the same time, I believe that I, the operator, am multi-threaded.

Re:Here we go again (4, Informative)

JorDan Clock (664877) | more than 5 years ago | (#24714453)

After reading the overview from Anandtech, it's clear that Hyper-Threading is far more efficient on Nehalem than it could ever have hoped to be on the P4. Nehalem has better caches, better access to memory, and a much wider core, and Hyper-Threading allows it to do more with each clock. I highly suggest reading Anandtech's breakdown of Nehalem. It is very comprehensive and does a great job of explaining things in fine detail.

Re:Here we go again (2, Interesting)

Anonymous Coward | more than 5 years ago | (#24716131)

The Nehalem architecture is designed to maximize performance for a given power level. If you happen to be running a legacy application that cannot take advantage of all the cores, the unused cores will go into a low-power state and the cores in use will overclock themselves until the selected power envelope is reached.

I, for one, welcome our new automatic overclocking overlords.


how much is enough? (4, Informative)

Tumbleweed (3706) | more than 5 years ago | (#24714341)

At this point, as long as I can watch HD video without any noticeable slowdowns, I'm good. A GPU or integrated video solution that can do that, plus an energy-efficient CPU, is really all I'm interested in now. The software issues with the 4500HD are disappointing, but hopefully it's *just* a software issue this time, and can be fixed soon enough.

Then again, that's just me; I'm not a gamer or video editor.

Re:how much is enough? (1)

arkane1234 (457605) | more than 5 years ago | (#24714629)

So pretty much you're saying that since you do stuff that can be done with relatively old hardware, there should be no more upgrading for more capabilities?

Re:how much is enough? (3, Insightful)

PitaBred (632671) | more than 5 years ago | (#24715435)

He's saying that there's no killer application for the general user that requires upgrading to the latest and greatest. Gamers, sure, but they're a SMALL minority of computer users. Multi-threading and more cores than we have now don't really do anything for the average person. Until they do, these updates will be received with lukewarm approval. It won't be like the original Pentium again.

there is never enough ... (2, Interesting)

boorack (1345877) | more than 5 years ago | (#24717097)

It's just that software does not keep up with hardware advances. There are many semi-AI or AI things I would like to have running on my PC. The classic example is indexing images or videos: being able to query "show me all pictures where my girlfriend wears a watch on her left hand", etc.

My favorite would be a robot that cleans up my house. Not just hoovering or cleaning a floor: also tidying higher-standing things, and recognizing what is useful, what is rubbish, and what I should decide about myself before it gets tossed out. That kind of robot would also alert me that something needs to be repaired (like a leaking roof), fix simple things (leaking pipes?), and generally take care of my property, maintaining and fixing things early enough, taking care of all the living plants, etc. And I would rather talk to this device in natural language than program it by clicking or writing some kind of bizarre script ;)

That kind of thing certainly needs enormous computational power. You need to recognize objects in images coming from its sensors (be they cameras, laser/infrared sensors, etc.), solve the kinematic and dynamic equations of robot arms in real time, and have some advanced AI, both for basic problems of geometry and moving objects and for more sophisticated reasoning, including some non-trivial ontology-like database (so the robot won't shut a plant in a cabinet and let it die). So you need to crunch incredible amounts of data without consuming too much power. I think current designs still need some work to keep up with that kind of workload.

Gene pool comment (2, Interesting)

blahplusplus (757119) | more than 5 years ago | (#24714447)

"completely new architecture either. Intel representatives disclosed that Nehalem 'shares a significant portion of the P6 gene pool,"

That's like saying equations share a significant portion of numbers gene pool. It's all geometry when you get down to it. I mean really, there are going to be certain circuit geometries that are always good to use and whom you can't totally get away from.

Re:Gene pool comment (4, Insightful)

AcidPenguin9873 (911493) | more than 5 years ago | (#24715361)

I'm not sure what you mean by geometries. SRAM arrays, flops, random logic, carry-lookahead adders, Wallace-tree multipliers (building blocks of processors) generally look similar across all high-performance ASICs over the past 15 years. Circuit geometries themselves have almost certainly changed completely since P6 days - 45nm is a hell of a lot smaller than 350nm, and the rules governing how close things can be have almost certainly changed.

I think what the article really means is that Nehalem shares a lot of the architectural concepts and style of the P6: similar number of pipe stages, similar number of execution units, similar decode/dispatch/execute/retire width (I think Core 2/Penryn/Nehalem are 4 and P6 was 3), similar microcode, etc. Of course enhancements and improvements have been made in things like the branch predictor, load-store unit, and obviously the interconnect/bus...but if you look at Nehalem closely enough, and indeed if you look at Pentium M, Core 2, Penryn too, you can see the architecture of the P6 as an ancestor.

Re:Gene pool comment (1)

blahplusplus (757119) | more than 5 years ago | (#24716781)

"I'm not sure what you mean by geometries."

In terms of existent structure, surface, or energy, what isn't geometry? What isn't a shape that has existent structure and can be detected?

If you can't answer that, then you'll know :)

Will OS X's Snow Leopard use HT more? (4, Insightful)

Nova Express (100383) | more than 5 years ago | (#24714635)

Given how closely Apple has worked with Intel before and after the processor switch from PowerPC, I wonder how much more Hyper-Threading aware OS X 10.6 (AKA Snow Leopard) will be. After all, it's supposed to be a "tuning" release focused on full 64-bit performance across the OS, so it wouldn't surprise me to see OS X 10.6 get much greater speed gains from HT on Nehalem than Vista does, especially given Anandtech's description of how Vista screws up Turbo mode [anandtech.com] on Penryn-based systems. (And of course, MS won't go back and put hyperthreading awareness in XP at all...)

Re:Will OS X's Snow Leopard use HT more? (1)

gnasher719 (869701) | more than 5 years ago | (#24718063)

Given how closely Apple has worked with Intel before and after the processor switch from PowerPC, I wonder how much more Hyper-Threading aware OS X 10.6 (AKA Snow Leopard) will be?

I don't think an operating system actually needs very much support for Hyperthreading.

Of course the OS needs to know about Hyper-Threading and not schedule two threads to run on the same core while any other core is completely unused (so hyperthreading would only be used if the number of running threads exceeds the number of cores). And if different threads have different priorities, you would want to use hyperthreading for threads with low priority and a full core for a thread with higher priority. If the OS reports CPU usage statistics, you might want to count time spent running hyperthreaded a bit lower.

Apart from that, I don't think there is much to do. Mac OS X already knows that processors are not completely symmetric, so a programmer can say that two threads should run on cores that are close together (better with lots of communication between the threads) or on cores that are far apart (better for independent threads). It should probably be possible to turn hyperthreading off for things like profiling and measuring performance, because hyperthreaded timings and timings without HT cannot be compared.
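
For the curious, the "close together / far apart" hint mentioned here is the Mach thread affinity API that appeared in Mac OS X 10.5. A minimal sketch (mine, not gnasher719's): threads sharing a non-zero affinity tag are hinted to run on cores that share a cache, while distinct tags hint the opposite.

<ecode>
/* affinity_tag.c -- sketch: Mac OS X (10.5+) affinity-set hints.
 * Threads with the same non-zero tag are hinted to run "close"
 * (shared cache); different tags hint "far apart".
 * Build: gcc -O2 affinity_tag.c -o affinity_tag
 */
#include <mach/mach.h>
#include <mach/thread_policy.h>
#include <pthread.h>
#include <stdio.h>

static void set_affinity_tag(int tag)
{
    thread_affinity_policy_data_t policy = { tag };
    kern_return_t kr = thread_policy_set(
        pthread_mach_thread_np(pthread_self()),
        THREAD_AFFINITY_POLICY,
        (thread_policy_t)&policy,
        THREAD_AFFINITY_POLICY_COUNT);
    if (kr != KERN_SUCCESS)
        fprintf(stderr, "thread_policy_set failed: %d\n", kr);
}

static void *worker(void *arg)
{
    (void)arg;
    set_affinity_tag(1);   /* same tag as its partner: keep us close */
    /* ... communicate through shared memory here ... */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
</ecode>

Note it is only a hint: the kernel remains free to ignore it, which fits the parent's point that most HT handling belongs in the OS, not in applications.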

Nehalem? (0)

Gothmolly (148874) | more than 5 years ago | (#24714757)

Isn't that one of the books of Mormon?

Re:Nehalem? (2, Interesting)

Perf (14203) | more than 5 years ago | (#24716899)

Nah, it's named after a river in Oregon, which, in turn, is named after a Native American tribe.

Re:Nehalem? (0, Troll)

kellogs (1345891) | more than 5 years ago | (#24717053)

Shut it, all of you. It definitely comes from "ne halim", slang for "we're gonna eat ourselves". Lols, if they only knew how that sounds in my language! ^|^

Intel Will Regret This (4, Interesting)

Louis Savain (65843) | more than 5 years ago | (#24714805)

More than any other organization, Intel knows that multithreading is bad. Lots of smart people, such as Professor Edward Lee [berkeley.edu] (the head of U.C. Berkeley's Parallel Computing Lab), have warned Intel of the disaster down the road. It is time for Intel and everybody else to make a clean break with the old stuff. There is an infinitely better way to design and program parallel computers that does not involve the use of threads at all. Instead of Penryn, Intel should have picked something similar to the Itanium, which has a superscalar architecture [wikipedia.org]. A sequential (scalar) core has no business doing anything in a parallel multicore processor. Intel will regret this. Sooner or later, a competitor will read the writing on the wall and do things right, and Intel and the others will be left holding an empty bag. To find out the right way to design a multicore processor, read Transforming the TILE64 into a Kick-Ass Parallel Machine [blogspot.com].

Re:Intel Will Regret This (1, Insightful)

Anonymous Coward | more than 5 years ago | (#24715269)

Yeah, that's what Intel thought as well, ten years ago. Many valuable lessons were learnt.
They're still continuing the Itanium line, I'd guess primarily for the research value and to save face, but I don't think they're particularly eager to face the ridicule they'd get from committing all their mistakes a second time.

Re:Intel Will Regret This (1)

Bert64 (520050) | more than 5 years ago | (#24717937)

Well, Itanium was a good idea, and getting away from the legacy cruft of x86 would be a good thing, but in this case competition and closed-source software are stifling progress...

Competition, because people won't migrate until there is a clear-cut case to do so, or until they are forced... Apple was able to transition its users from m68k to PPC and then to x86 because there was no other way forward... Had a third party been producing clones, people would have chosen the path of least resistance and stuck with the clones. The same is true of x86: with AMD and VIA still producing compatible chips, Intel was eventually forced to follow AMD and implement AMD's 64-bit extensions.

Same with closed-source software: existing binary software won't run on a new architecture (or will run poorly through emulation), so users won't buy the new architecture since it doesn't run their programs, and vendors won't want to port their software to an architecture that hasn't got enough users to make it profitable. What little closed-source software has been ported to IA64 was mostly due to deals with HP and Intel.

Re:Intel Will Regret This (2, Interesting)

paradigm82 (959074) | more than 5 years ago | (#24717663)

Intel's CPUs have been superscalar since the P6 (Pentium Pro). They can execute 3-4 instructions per clock under optimal conditions (yes, all the way through the pipeline). They have out-of-order execution, speculative execution, register renaming, etc. However, there's a limit to how much you can execute in parallel at the instruction level.

Could you elaborate on what Intel's CPUs are missing and what Edward Lee was warning about?
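
That instruction-level limit is easy to see for yourself. In this sketch (my own, not the poster's), both functions perform the same additions, but the first is one long dependency chain while the second keeps four independent chains in flight for the out-of-order core:

<ecode>
/* ilp_demo.c -- sketch: one dependency chain vs. four independent ones.
 * On a superscalar out-of-order core, sum4 typically runs several times
 * faster than sum1, even though both perform the same additions.
 * Build: gcc -O1 ilp_demo.c -o ilp_demo   (-O1 to avoid auto-vectorization)
 */
#include <stdio.h>
#include <time.h>

#define N 4096           /* 32 KB of doubles: fits in L1, so memory isn't the limit */
#define REPS 100000

static double a[N];

static double sum1(void)                 /* serial chain: s depends on s */
{
    double s = 0.0;
    for (int r = 0; r < REPS; r++)
        for (int i = 0; i < N; i++)
            s += a[i];
    return s;
}

static double sum4(void)                 /* four independent chains */
{
    double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    for (int r = 0; r < REPS; r++)
        for (int i = 0; i < N; i += 4) {
            s0 += a[i];
            s1 += a[i + 1];
            s2 += a[i + 2];
            s3 += a[i + 3];
        }
    return s0 + s1 + s2 + s3;
}

static double bench(double (*f)(void))
{
    clock_t t0 = clock();
    volatile double keep = f();          /* volatile: keep the call alive */
    (void)keep;
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void)
{
    for (int i = 0; i < N; i++) a[i] = 1.0;
    printf("1 chain : %.3f s\n", bench(sum1));
    printf("4 chains: %.3f s\n", bench(sum4));
    return 0;
}
</ecode>

The speedup of sum4 over sum1 is roughly the floating-point add latency the core can hide, and it flattens out once the chains saturate the available execution units: that is the ILP wall the parent describes.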

Re:Intel Will Regret This (1)

Great Blue Heron (1225522) | more than 5 years ago | (#24717985)

Perhaps the GP should have included a link to "The Problem with Threads" by Edward A. Lee, IEEE Computer, 39(5):33-42, May 2006, also available here [berkeley.edu] as UC Berkeley EECS Technical Report No. UCB/EECS-2006-1.

ECC? (1, Insightful)

Anonymous Coward | more than 5 years ago | (#24715613)

Now that the memory controller will be in the CPU, does that mean they'll enable ECC RAM support for their consumer-level systems, the same way most AMD boards do?

The idea of using 4GB or more with no error correction just doesn't interest me.

QuickPath? HyperTransport? (2, Interesting)

sam0737 (648914) | more than 5 years ago | (#24715845)

QuickPath sounds a lot like AMD's HyperTransport. Three links per CPU and an integrated memory controller is exactly what AMD has been doing for a long, long time.

20 bits wide and 25.6 GB/s per link? HyperTransport was already capable of delivering 41.6 GB/s per link in 2006 (according to Wikipedia).
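
For comparison's sake, here is the arithmetic behind those two figures as they are usually quoted (my back-of-the-envelope check, not something from the article; both numbers count the two directions of a link together). QuickPath at 6.4 GT/s carries 16 data bits (2 bytes) of each 20-bit link per direction: 6.4 GT/s × 2 B × 2 directions = 25.6 GB/s. HyperTransport 3.0 at 2.6 GHz, double-pumped and 32 bits wide: 5.2 GT/s × 4 B × 2 directions = 41.6 GB/s. So the two numbers are at least computed on the same basis.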

Well. . . (0)

Anonymous Coward | more than 5 years ago | (#24716183)

I dunno whether this is common knowledge yet (bracing for a karma hit if it is), but the big deal with the new processors should not be that they will have completely different sockets. I happen to know someone who knows someone who knows an engineer who's designing a cooling system for a server that uses one of these new CPUs. The huge architecture change is partly due to the fact that the cores in these new procs will self-scale their own clocks and voltages (SpeedStep) to an extent never before seen (thus the need for a more reactive cooling system). They're also almost preposterously power efficient.

Re:Well. . . (1)

Joseph_Daniel_Zukige (807773) | more than 5 years ago | (#24716605)

If they're so power efficient, why do they need much of a cooling system at all?

If the efficiency means a reactive cooling system, are we going to waste the saved energy pumping the coolant?

Yeah, I'm being facetious, but I get the feeling someone is checking off boxes on a feature list instead of slowing down to do real engineering. The only company that has succeeded with that is Microsoft, and they only succeeded in bleeding the industry dry and abandoning us for the highwaymen.

Re:Well. . . (1)

Bert64 (520050) | more than 5 years ago | (#24717745)

The idea is that the cores will scale independently of each other, so that if you are running a single-threaded app on a quad-core CPU, three cores will shut down and the remaining core will overclock itself...
Most multi-core CPUs are clocked lower than single-core chips can be, so this is a way of recovering some single-threaded performance.
