
Intel Details Nehalem CPU and Larrabee GPU

ScuttleMonkey posted more than 5 years ago | from the business-is-war dept.


Vigile writes "Intel previewed the information set to be released at IDF next month, including details on a wide array of technology for server, workstation, desktop and graphics chips. The upcoming Tukwila chip will replace the current Itanium lineup, offering about twice the performance at a cost of 2 billion transistors, while Dunnington is a hexa-core processor built on the existing Core 2 architecture. Details of Nehalem, Intel's next desktop CPU core with an integrated memory controller, show a return of HyperThreading-like SMT, a new SSE4.2 extension, and a modular design that features optional integrated graphics on the CPU as well. Could Intel beat AMD at its own "Fusion" plans? Finally, Larrabee, the GPU technology Intel is building, was verified to support OpenGL and DirectX upon release, and Intel provided information on a new extension called Advanced Vector Extensions (AVX) for SSE that would improve graphics performance on the many-core architecture."



Nehalem? Larrabee? (2, Interesting)

thomasdz (178114) | more than 5 years ago | (#22778240)

Heck, I remember when "Pentium" came out and people laughed

Re:Nehalem? Larrabee? (4, Informative)

TechyImmigrant (175943) | more than 5 years ago | (#22778270)

They are code names, not product names.

Intel has a rich collection of silly code names.

Re:Nehalem? Larrabee? (4, Informative)

Have Blue (616) | more than 5 years ago | (#22778514)

Most of Intel's codenames are names of real places [wikipedia.org].

Re:Nehalem? Larrabee? (5, Funny)

Anonymous Coward | about 6 years ago | (#22779846)

Most of Intel's codenames are names of real places.

How long until they release a chip named after Intercourse, PA,
Or my favourite, Wankers Corner, OR?

AMD will join the fun and look to France for inspirational place names, such as Condom, Tampon and Herpes

Not to be outdone, poor old Amiga Inc finally release a new computer named after the village of Shittington, in the UK, with an update scheduled for 2025 named after Mount Buggery in Australia.

Re:Nehalem? Larrabee? (1)

hairyfeet (841228) | about 6 years ago | (#22780004)

Hey, don't forget Bald Knob and Pigskillet, AR. I think my state has to be in the top 10 for stupid town names. Which is good, since we're 49th in everything else. The only thing that saves us from dead last is the consistent suckage of Mississippi. Go Mississippi!

Re:Nehalem? Larrabee? (1)

mrbluze (1034940) | more than 5 years ago | (#22778626)

Larrabee has been around for ages. I remember how in Get Smart he worked for Control, but in the end he quit to join IBM. Now we find him again, this time at Intel. I think I'm having another one of those headaches again.

Re:Nehalem? Larrabee? (0)

Anonymous Coward | more than 5 years ago | (#22778766)

Larrabee? Wasn't that the superhero with the suction cup ears?

Re:Nehalem? Larrabee? (2, Informative)

glitch23 (557124) | more than 5 years ago | (#22778716)

They typically (maybe all) come from various kinds of places (e.g. mountains [McKinley]) in the northwest portion of North America. You'll notice many sound similar, such as Tukwila and Willamette.

OT-sig (0)

Anonymous Coward | about 6 years ago | (#22780106)

There are parallel examples, like marsupials that parallel other sorts of mammals that are placental. They look and act roughly similar and occupy similar niches in their local environment.

Re:Nehalem? Larrabee? (-1, Flamebait)

Anonymous Coward | more than 5 years ago | (#22778306)

Nehalem just sounds so durned Zionist, I feel like I'm putting the Hz on Palestinians just by buying one.

Re:Nehalem? Larrabee? (-1, Troll)

Anonymous Coward | more than 5 years ago | (#22778592)

I was thinking similar but I couldn't have phrased it any better than you. Too bad I have no mod points.

Re:Nehalem? Larrabee? (3, Interesting)

slew (2918) | more than 5 years ago | (#22778410)

Nehalem? Larrabee?
Heck, I remember when "Pentium" came out and people laughed

Heck, I remember when "Itanium" came out and people laughed...

But before they laughed, I remember a bunch of companies folded up their project tents (sun, mips, the remains of dec/alpha). I'm not so sure companies will do the same this time around... Not saying this time Intel doesn't have their ducks in a row, but certainly, the past is no indication of the future...

Re:Nehalem? Larrabee? (4, Interesting)

TheRaven64 (641858) | more than 5 years ago | (#22778686)

But before they laughed, I remember a bunch of companies folded up their project tents (sun, mips,
I think you are mistaken. MIPS still exists, but SGI stopped using it. HP killed both PA RISC and Alpha, but they co-developed Itanium, so it isn't entirely surprising. Sun kept developing chips, and currently hold the performance-per-watt crown for a lot of common web-server tasks.

Re:Nehalem? Larrabee? (1)

drinkypoo (153816) | about 6 years ago | (#22779714)

Just for the record, Sun canceled their last product line because they flailed on completing it in a timely fashion, and by the time it came out it would have been dramatically outdated already. So instead they canned it; brought out broader versions of existing chips rather than deeper, new processors; and built x86-64-based systems in the meantime.

Re:Nehalem? Larrabee? (1)

theendlessnow (516149) | about 6 years ago | (#22780008)

Heck, I remember when "Itanium" came out and people laughed...

Whoa! You mean they stopped laughing? Now that's major news!

Re:Nehalem? Larrabee? (2, Interesting)

Kamokazi (1080091) | more than 5 years ago | (#22778526)

These are code names, not product names. They will probably all be Core 2(3?), Xeon, etc.

Gflargen and Blackeblae (0)

deathtopaulw (1032050) | more than 5 years ago | (#22778264)

Haven't they heard of numbers?
You know... cpu 5?

Re:Gflargen and Blackeblae (5, Informative)

iknownuttin (1099999) | more than 5 years ago | (#22778314)

Haven't they heard of numbers?

You can't trademark numbers. When AMD started releasing "x86" numbered processors, Intel filed suit and lost. The judge stated that you can't trademark numbers. It's such an old case, this is what I found in the last 10 minutes regarding Intel and trademarking numbers [theinquirer.net].

I'm tired and too lazy to find the actual lawsuit.

Re:Gflargen and Blackeblae (1)

91degrees (207121) | more than 5 years ago | (#22778520)

Would be nice if there was some sort of pattern to the naming though. Pentium, Pentium II, Pentium III and Pentium 4 made it clear which one was newer (although the shift to arabic numerals was a little inconsistent). I have no idea where the other processors fit into this pattern.

Re:Gflargen and Blackeblae (2, Interesting)

ChronoReverse (858838) | more than 5 years ago | (#22778576)

Well, it went from Core, to Core 2. I'd presume these new chips would get the "Core 3" moniker.

Re:Gflargen and Blackeblae (1)

compro01 (777531) | more than 5 years ago | (#22778618)

They then went Pentium D, Core/Core Duo, Core 2 Duo/Core 2 Quad, though I dunno what they're gonna call this bunch.

Re:Gflargen and Blackeblae (0)

Anonymous Coward | more than 5 years ago | (#22778666)

bah, pentium itself was a naming scheme fuckup too imho

pentium clearly references the fact that the pentium 1 was supposed to be the 586. They should have named the pentium pro (P6 architecture, get it?) the sexium :P (P2 would have been Septium or Heptium, P3 Octium etc..)

The core scheme works OK imho, core just means CPU, 2 means 2nd generation micro-architecture, solo/duo/quad means number of cores. I fully expect nehalem to be called Core 3 Quad, or something to that effect (lest intels marketing drones fuck it up), and 'sandy bridge' should be Core 4

Re:Gflargen and Blackeblae (1)

Naughty Bob (1004174) | more than 5 years ago | (#22778734)

I remember the days when a youngster could go to bed dreaming of Longhorn firing bits through a tweaked Sexium....

Re:Gflargen and Blackeblae (1)

ozmanjusri (601766) | about 6 years ago | (#22780668)

dreaming of Longhorn firing bits through a tweaked Sexium.

Well, at least with Longhorn there's no way it's going to be premature.

Re:Gflargen and Blackeblae (5, Funny)

tzot (834456) | more than 5 years ago | (#22778700)

Pentium, Pentium II, Pentium III and Pentium 4 made it clear which one was newer (although the shift to arabic numerals was a little inconsistent).
Someone sent an email to the Intel board of directors, allegedly from CIA, beginning with "Dear Sirs: it has come to our attention that you label your products with arabic numerals."

It took them a while to get that it was a joke.

Re:Gflargen and Blackeblae (2, Insightful)

DiEx-15 (959602) | about 6 years ago | (#22780172)

Now, please keep in mind my understanding of the law is next to "naive" but here is my understanding:

For something to be considered "trademarkable" there has to be some form of association with the trademark. For example: Mickey Mouse and the Walt Disney castle are trademarks of Walt Disney because when you see or hear them, you conjure up images of Disney. Now if Intel could prove such links with numbers, perhaps there is a chance. HOWEVER, the reason this has been (and always will be) a total exercise in futility is that numbers can't generate the same iconic images as words or pictures. Numbers are numbers and signify values, not property or anything tangible. Granted, there are trademarks with numbers in them, but usually they have a letter or two thrown in. That is where it goes from just numbers to a word - a word with numbers in it. That is when it can be trademarked.

What Intel is trying to do is go "If you use 10206 as a name for something, we will sue!" The problem is:
1) I will sue Intel because that is part of a story I have and have proof I beat them to. (Although that is totally off the real topic here & I would meet with their pit bull lawyers)
2) If you got 10206 as a math answer, how would the law differentiate between it and Intel's property?
3) If 10206 was part of a formula, bar code, serial number, part number, etc., how would the system know if it is a violation of trademark laws?

Think about this - the number 42 is a part of the Hitchhiker's Guide story. I can safely use "42" in anything I want because it's a number, AS LONG AS I don't go and say "it's the meaning of life," BECAUSE then it would have an association. Now as far as Intel goes, they can't say "the number is associated with our chips" because there is such a weak (at best) association between a number and something physical (the chip). Mostly I think the law has told Intel, "Whatever. The numbers look more like a serial number than a trademark-worthy thing." That is why Intel can't get its wishes.

Anyways, that is my ten cents (my two cents is free...) and I could totally be wrong here. However that is my understanding.

Intel Vs. AMD? (3, Insightful)

Naughty Bob (1004174) | more than 5 years ago | (#22778268)

Could Intel beat AMD in its own "Fusion" plans?
Intel is hugely advanced on AMD at this point, however, without AMD we wouldn't be seeing these releases. Hurray for the market, I guess....

Re:Intel Vs. AMD? (1)

iknownuttin (1099999) | more than 5 years ago | (#22778336)

Intel is hugely advanced on AMD at this point, however, without AMD we wouldn't be seeing these releases. Hurray for the market, I guess...

Hell yeah! Without AMD, we'd all be on x86 technology. Although, there is/was Motorola. Wouldn't it be nice to run multiple time line(s) scenarios?

Re:Intel Vs. AMD? (1)

renegadesx (977007) | more than 5 years ago | (#22778352)

So far AMD's Phenom processors are not outperforming the Core 2 Quads as expected, but at the same time AMD has set up an architecture that will make it easier to expand to more cores.

This could give AMD an advantage beyond quad cores; however, I am sure Intel is hard at work to make sure they stay in the lead.

Re:Intel Vs. AMD? (0)

Anonymous Coward | more than 5 years ago | (#22778756)

Yeah, kinda like how the Nehalem can have up to eight cores.

Re:Intel Vs. AMD? (2, Interesting)

WarJolt (990309) | more than 5 years ago | (#22778362)

Intel has expensive, really fast multi-core processors.
AMD's 64-bit processing is better. Depending on the type of processing you're doing, that could mean a lot.
We all know what a debacle Intel's integrated graphics were in the past. I'm not sure they should be using that as a marketing point.
Since AMD acquired ATI, I would assume AMD's integrated graphics will be far superior.

NVIDIA's stock price hasn't been doing so well in the last couple of months. Could this mean a return of integrated graphics? I'd bet my money on AMD, who already owns one of the big players.

Re:Intel Vs. AMD? (1)

Naughty Bob (1004174) | more than 5 years ago | (#22778436)

I am nobody's fanboy, but I kinda believe that if/when the transition to 64-bit picks up speed, Intel would miraculously produce something fairly crushing.

Your post makes me think that Intel will attempt a take-over of Nvidia, hostile or otherwise. But I have no knowledge in this area.

Re:Intel Vs. AMD? (3, Interesting)

eggnoglatte (1047660) | more than 5 years ago | (#22778534)

Your post makes me think that Intel will attempt a take-over of Nvidia, hostile or otherwise. But I have no knowledge in this area.
Why would they? Intel already has the biggest GPU market share (about 50% or so), and they achieve that with integrated graphics, which are arguably the way of the future. My guess is that NVIDIA will become the SGI of the early 21st century - they'll cater to a high-speed niche market. Too bad, actually; I kind of like their cards (and they have by far the best 3D Linux performance).

Re:Intel Vs. AMD? (1)

Naughty Bob (1004174) | more than 5 years ago | (#22778630)

they'll cater to a high-speed niche market
They don't already?

To your general point- I fully agree, what with my 8800 equipped Ubuntu box 'n all.

Re:Intel Vs. AMD? (0)

Anonymous Coward | about 6 years ago | (#22780784)

How can something that never was return?

Re:Intel Vs. AMD? (2, Interesting)

Joe The Dragon (967727) | more than 5 years ago | (#22778378)

But AMD has better onboard video, and their new chipset can use side-port RAM.

Video on the CPU may be faster, but you are still using the same system RAM, and that is not as fast as the RAM on a video card, which is dedicated to it alone.

Re:Intel Vs. AMD? (2, Insightful)

Naughty Bob (1004174) | more than 5 years ago | (#22778590)

Video on the CPU may be faster, but you are still using the same system RAM, and that is not as fast as the RAM on a video card, which is dedicated to it alone.
Nobody could argue against that, but the two approaches currently solve different problems. If the drift is toward an all-in-one solution, then the drift is toward less capable but cheaper tech. Most gamers are console gamers; perhaps the chip makers are coming to the conclusion that dedicated GPUs for the PC are a blind alley (a shame, IMHO).

Re:Intel Vs. AMD? (2, Interesting)

Joe The Dragon (967727) | about 6 years ago | (#22779134)

With things like Vista, do you really want to give up 128-256 MB of system RAM, plus the bandwidth needed for it, just for Aero? The on-chip video should have side-port RAM like the new AMD chipset can use, or maybe 32 MB+ of on-chip RAM/cache for video. Just having a DDR2/3 slot or slots with their own channels would be better than using the same ones used for system RAM, though the RAM on video cards is faster still.

Also, console games don't have mods, user maps, and other add-ons; they also don't have that many free games.

You don't have many MMORPGs on them, and the Xbox is pay-to-play online versus free on the PC and PS3.

Re:Intel Vs. AMD? (1)

Naughty Bob (1004174) | about 6 years ago | (#22779198)

I just don't think the lack of side port ram is a deal breaker is all. It would be nice, but system ram is cheap, and it looks like the cool developments are in other directions. I'd be happy to be proved wrong.

True enough about the restricted nature of console gaming, but don't expect that to inform 'Big Silicon' in its future decisions. Money is their only friend.

Re:Intel Vs. AMD? (1, Informative)

Anonymous Coward | about 6 years ago | (#22779422)

Given that you can purchase a system with 3-4 GB quite cheaply, and by the time these processors come to market the price will go down (though it's unlikely that computers will commonly come with over 4 GB until 64-bit is more reliable, or until there's some application that you actually need 4 GB for), it stands to reason that if you purchase a computer with one of these processors, it will come with 4 GB of RAM. As someone who runs Vista with 3 GB, I can assure you that it is enough for any needs I have found (I seldom get over 70% usage, and if I do, at least 20% is Firefox and Pidgin and their memory leaks, or I'm running a game). (No IM client should hog 170+ MB of memory, especially with only AIM running and only a few windows open, but I have it lying around, so I don't mind too much.)

32-bit Vista also has a ceiling of about 3.5 GB if I remember correctly (the extra ~512 MB goes to memory-mapped devices). If you have 4 GB of RAM in a Vista system, the system will not show some of it, and it isn't an amount being dedicated to integrated graphics.
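The ~3.5 GB figure falls out of 32-bit physical addressing arithmetic: device mappings claim part of the 4 GB address space, and RAM behind those mappings is unreachable. A sketch with made-up reservation sizes (the real split varies by chipset and BIOS):

```python
ADDR_SPACE = 4 * 1024**3  # 2**32 bytes addressable by a 32-bit OS

# Hypothetical device reservations; actual sizes depend on the machine
mmio = {
    "graphics aperture": 256 * 2**20,
    "PCI / firmware / APIC mappings": 256 * 2**20,
}

visible_ram = ADDR_SPACE - sum(mmio.values())
print(visible_ram / 2**30)  # 3.5 -- the GB of RAM the OS can actually reach
```

With 4 GB installed, whatever physical memory sits behind those reserved ranges simply has no address left for the OS to use.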

Re:Intel Vs. AMD? (1)

ZachPruckowski (918562) | about 6 years ago | (#22780600)

In 2009, even the lowest-end laptops are gonna have 2 GB of RAM. 128 MB for video is small potatoes for a consumer machine. Sure, you could add some extra RAM on the side, but that's $20 on a $800 machine (which sounds small, but is a decent slice of the profits). And adding 32 MB on die for video wouldn't help free up bandwidth or all that much RAM (because it's not large enough to not overflow into main RAM, and it's got to get to the Display Adapter somehow either way).

The reason Intel has so much of the market share is because of price - Intel's IG solutions (and low-end chipsets overall) are the cheapest on the market. The chip-integrated GPU will be popular because it can cut those costs further. For the 95% of the populace who doesn't need more than Vista+Office+web+email, the goal now is to make their computers lighter, cheaper, and more efficient. Most users can't tell the difference between a 3870 and an Intel IG (given 2+ GB of RAM, which is soon to be normal), simply because the most graphically intensive thing they do is Flip 3D.

Re:Intel Vs. AMD? (2, Interesting)

0111 1110 (518466) | more than 5 years ago | (#22778832)

without AMD we wouldn't be seeing these releases.
Actually this seems a bit disingenuous to me. Intel released Penryn way before they had to. Intel (the hare) was so far ahead of AMD (the tortoise) with the 65nm Core 2 that they could have sat back and relaxed for a while, saving R&D costs while waiting for AMD to catch up at least a little. I mean look at Nvidia for a perfect counterexample. Most people believe that they already have a next gen GPU ready but that they are sitting on it until they have someone to compete with besides themselves. To a lot of people that seems to make sense. Especially if you *only* care about making as much money as possible and don't care about being a technology leader. The only problem I have with that logic is that you will be losing sales from upgraders as well as allowing your competition to get closer to you so that you cannot price as high. But obviously Nvidia seems to feel that the savings in R&D costs and not competing with their own products is enough to justify it. Of course there is always the possibility that Nvidia is just not ready with their new tech yet, but not many people seem to believe that.

Instead of waiting for some real competition, Intel released Penryn more or less right on schedule when the only competition they had was their own 65nm processors. Of course the quad cores are only just being released now, but they are still releasing them way before AMD has anything to really compete with them. People make all kinds of cynical statements about business methods without even considering corporate culture. Has it ever occurred to anyone that Intel simply may not believe in only releasing new tech when they absolutely have no choice due to competition?

I'm not saying competition is not a good thing, but I don't think AMD is presenting much competition to Intel at the moment. AMD is in big trouble and Intel is well aware of that fact. I just don't think that it is competition that is driving Intel forward. Competition may affect their pricing, but I think Intel would keep right on with their two year tick tock cycles and process shrinks even if AMD folded tomorrow.

Re:Intel Vs. AMD? (2, Insightful)

Naughty Bob (1004174) | about 6 years ago | (#22778952)

It is only about the money. All decisions ultimately come back to that. With Penryn, huge fabricating plants were coming online, and they couldn't have justified (to shareholders) not following through. That it kept Intel's jackboot firmly on the AMD windpipe was in that instance a happy sweetener.

Re:Intel Vs. AMD? (1)

Zeinfeld (263942) | about 6 years ago | (#22779264)

Actually this seems a bit disingenuous to me. Intel released Penryn way before they had to. Intel (the hare) was so far ahead of AMD (the tortoise) with the 65nm Core 2 that they could have sat back and relaxed for a while, saving R&D costs while waiting for AMD to catch up at least a little. I mean look at Nvidia for a perfect counterexample. Most people believe that they already have a next gen GPU ready but that they are sitting on it until they have someone to compete with besides themselves.

There are plenty of ultra-high-end folk who would pay mondo dollars for a faster GPU regardless of whether there was a faster one from the competition. More likely, NVIDIA recognizes that they have pushed the edge lately as far as power output (heat) goes, and they could use a minimum-feature-size shrink before they take the next step.

GPUs are pretty tricky heat wise because you can light up a lot more of your chip real estate with active circuits. Intel are not just putting masses of cache on their chips for speed, they need some less thermally intensive areas on the chip.

Re:Intel Vs. AMD? (1)

mako1138 (837520) | about 6 years ago | (#22779754)

Think further back. A few years ago, the Opteron and the Athlon64 took a big bite out of Intel's market share. That happened because Intel was arrogantly chasing higher clocks (P4) and awkward architectures (Itanium). The tick-tock strategy was adopted in response to AMD's success. If AMD hadn't embarrassed Intel so badly, I doubt we'd be seeing such rapid product cycles today.

Though, 45nm processors are currently in short supply. They're usually sold out, and are marked up considerably.

http://techreport.com/discussions.x/14323 [techreport.com]

TPM (0)

Anonymous Coward | more than 5 years ago | (#22778304)

So.. which one of these is going to be Intel finally slipping an integrated TPM into the chipset so that everyone can benefit from owning an expensive cable box/console, rather than a real computer?

Re:TPM (2, Insightful)

trickonion (943942) | more than 5 years ago | (#22778338)

I don't understand your comment. I, like many other people, don't like the idea of TPM, and from your post it seems you are sarcastically agreeing with me (via the word "slipping"). You also, however, say we can get the advantage of owning an expensive cable box (which I could actually see as an advantage, if you already have one in your house).
Your post confuses me (or I'm being retarded; this has happened twice before in my life, along with the 3 times I've been wrong), and it forces me to conclude that you, AC, are in fact a woman and are using feminine wiles.

Re:TPM (3, Funny)

Xtravar (725372) | more than 5 years ago | (#22778450)

it forces me to conclude, that you, AC are in fact a woman and are using feminine wiles.
INTRUDER ALERT!!!! Sound the alarms!! We've got a code 159. Get me a traceroute, now!!!

Still waiting for Mumbai (0, Flamebait)

heroine (1220) | more than 5 years ago | (#22778320)

It's time for Intel to use Indian names, considering they're all designed in India.

Re:Still waiting for Mumbai (0)

Anonymous Coward | more than 5 years ago | (#22778356)

Didn't Core 2 come from Israel?

Please stop naming after WA and OR places (0)

Anonymous Coward | more than 5 years ago | (#22778396)

Tukwila? WTF? Tukwila is a hole. The ONLY thing there worth noting is Southcenter Mall. What next - Kent, Renton, Auburn, or maybe Tacoma or Lakewood? Maybe just confuse the hell out of everyone who's not a native and grew up more than 20 miles from this place, and name it Puyallup. Not even the news anchors or reporters around here can get it right when they start.

Re:Please stop naming after WA and OR places (1)

Neil Hodges (960909) | about 6 years ago | (#22778876)

I don't know; Puyallup has a ring to it not-too-different from the other chip codenames. For all we know, it's already been used.

If you want obscure, try Stanwood, Smokey Point, or Granite Falls. They all sound like CPU names, but are cities most people have never even heard of.

More Integrated Garbage? (4, Insightful)

immcintosh (1089551) | more than 5 years ago | (#22778464)

So, this Larrabee, will it be another example of integrated graphics that "supports" all the standards while being too slow to be useful in any practical situation, even basic desktop acceleration (Composite / Aero)? If so, I've gotta wonder why they even bother rather than saving some cash and just making a solid 2D accelerator that would be for all intents and purposes functionally identical.

Re:More Integrated Garbage? (1)

frieko (855745) | more than 5 years ago | (#22778544)

Intel GMA950 does compositing just fine in Leopard. Compiz works plenty fast on it too, though it's buggy as hell.

Re:More Integrated Garbage? (1)

GXTi (635121) | more than 5 years ago | (#22778546)

What they need to do is make some discrete graphics cards. They seem to have a clue when it comes to making their hardware easy to work with from Linux; if only the cards had more horsepower they'd be a favorite in no time.

Re:More Integrated Garbage? (4, Insightful)

Kamokazi (1080091) | more than 5 years ago | (#22778566)

No, far, far, from integrated garbage. Larrabee will actually have uses as a supercomputer CPU:

"It was clear from Gelsinger's public statements at IDF and from Intel's prior closed-door presentations that the company intends to see the Larrabee architecture find uses in the supercomputing market, but it wasn't so clear that this new many-core architecture would ever see the light of day as an enthusiast GPU. This lack of clarity prompted me to speculate that Larrabee might never yield a GPU product, and others went so far as to report "Larrabee is GPGPU-only" as fact.

Subsequent to my IDF coverage, however, I was contacted by a few people who have more intimate knowledge of the project than I. These folks assured me that Intel definitely intends to release a straight-up enthusiast GPU part based on the Larrabee architecture. So while Intel won't publicly talk about any actual products that will arise from the project, it's clear that a GPU aimed at real-time 3D rendering for games will be among the first public fruits of Larrabee, with non-graphics products following later.

As for what type of GPU Larrabee will be, it's probably going to have important similarities to what we're seeing out of NVIDIA with the G80. Contrary to what's implied in this Inquirer article, GPU-accelerated raster graphics are here to stay for the foreseeable future, and they won't be replaced by real-time ray-tracing engines. Actually, it's worthwhile to take a moment to look at this issue in more detail."

Shamelessly ripped from:

http://arstechnica.com/articles/paedia/hardware/clearing-up-the-confusion-over-intels-larrabee.ars/2 [arstechnica.com]

Re:More Integrated Garbage? (2, Interesting)

donglekey (124433) | about 6 years ago | (#22780102)

Very interesting and I think you are right on the money. 'Graphics' is accelerated now, but the future may be more about generalized stream computing that can be used for graphics (or physics, or sound, etc) similar to the G80 and even the PS3's Cell (they originally were going to try to use it to avoid having a graphics card at all). This is why John Carmack thinks volumetrics may have a place in future games, why David Kirk thinks that some ray tracing could be used (not much, but don't worry it wouldn't really bring that much to the game anyway) and why Ageia created a company made to be sold before Intel, AMD/ATI, and Nvidia got into the stream processing business and beat them at their own game. Imagine all the what ifs you can think of in the video game world and they will start to become plausible over the next decade (but forget about ray tracing, it wouldn't be a good use of power at 100x the speed that we have now).

Ummmmm, no (4, Interesting)

Sycraft-fu (314770) | more than 5 years ago | (#22778620)

First off, new integrated Intel chipsets do just fine for desktop acceleration. One of our professors got a laptop with an X3000 chip, and it does quite well in Vista. All the eye candy works and is plenty snappy.

However, this will be much faster since it fixes a major problem with integrated graphics: Shared RAM. All integrated Intel chipsets nab system RAM to work. Makes sense, this keeps costs down and that is the whole idea behind them. The problem is it is slow. System RAM is much slower than video RAM. As an example, high end systems might have a theoretical max RAM bandwidth of 10GB/sec if they have the latest DDR3. In reality, it is going to be more along the lines of 5GB/sec in systems that have integrated graphics. A high end graphics card can have 10 TIMES that. The 8800 Ultra has a theoretical bandwidth over 100GB/sec.

Well, in addition to the RAM not being as fast, the GPU has to fight with the CPU for access to it. All in all, it means that RAM access is just not fast for the GPU. That is a major limiting factor in modern graphics. Pushing all those pixels with multiple passes of textures takes some serious memory bandwidth. No problem for a discrete card, of course; it'll have its own RAM just like any other.

In addition to that, it looks like they are putting some real beefy processing power on this thing.

As such, I expect this will perform quite well. Will it do as well as the offerings from NVIDIA or ATI? Who knows? But this clearly isn't just an integrated chip on a board.
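The bandwidth gap described above is easy to sanity-check: peak memory bandwidth is just transfer rate times bus width. The parts and clocks below are illustrative picks in the same ballpark as the comment's figures, not numbers from the article.

```python
def peak_bandwidth_gb_s(transfers_per_sec, bus_width_bits):
    """Theoretical peak memory bandwidth: transfers/sec * bytes per transfer."""
    return transfers_per_sec * (bus_width_bits / 8) / 1e9

# Single-channel DDR3-1333 on a 64-bit bus, as an integrated GPU might share
ddr3 = peak_bandwidth_gb_s(1333e6, 64)    # ~10.7 GB/s
# GDDR3 at 2160 MT/s on an 8800 Ultra-class 384-bit bus
gddr3 = peak_bandwidth_gb_s(2160e6, 384)  # ~103.7 GB/s

print(round(ddr3, 1), round(gddr3, 1))
```

That roughly 10x gap exists before the integrated GPU even starts contending with the CPU for the same bus, which is the comment's point.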

Re:More Integrated Garbage? (1)

Funk_dat69 (215898) | more than 5 years ago | (#22778770)

Larrabee could be more of a hedge against IBM's Cell and NVIDIA's GPUs for high-computation workloads, with the addition of graphics being a page from NVIDIA's book: get gamers to fund their HPC conquests.

Re:More Integrated Garbage? (1)

Joe The Dragon (967727) | about 6 years ago | (#22779284)

Part of the slowdown comes from having to use system RAM. At least AMD got that right by letting their new chipset with built-in video use side-port RAM. There are motherboards with it coming soon, and it should be nice to see how much of a speedup that gives you with onboard video alone and with Hybrid CrossFire.

Re:More Integrated Garbage? (0)

Anonymous Coward | about 6 years ago | (#22780646)

Are you people running the same chip I am? It says "Intel" on the side and everything? I've got an X3000 "integrated" GPU here, and it runs fine. It's autodetected in Debian and has a great free driver. Composite works fine, and I can even play OpenArena (Quake3 engine) on my 24" widescreen LCD with most of the features turned on and at a pretty good framerate, even. In terms of correctly supporting OpenGL features on Xorg, it's at least as good as my ATI, and far better than my older Matrox card.

Speaking of which, had an ATI card before this, and the Intel chip absolutely crushes it. Things improve so fast that a modern integrated GPU is far faster than an AGP card from just a couple years ago. I'm a bit of a graphics geek, and I understand that integrated graphics just ain't "cool", but I'm hard-pressed to come up with an actual reason to pay a bunch of money when my built-in graphics chip does both more than I need, and far more than the last graphics card I bought.

Are you guys running ancient drivers, or trying to play Doom 7 at 2560x1600, or do you just like complaining about products that don't have l33t names like "X1900 XTX"? As far as I'm concerned, the Intel graphics are just about perfect. I wish everything on my system worked this well.

HyperThreading (2, Interesting)

owlstead (636356) | more than 5 years ago | (#22778568)

"Also as noted, a return to SMT is going to follow Nehalem to the market with each core able to work on two software threads simultaneously. The SMT in Nehalem should be more efficient than the HyperThreading we saw in NetBurst thanks to the larger caches and lower latency memory system of the new architecture."

Gosh, I hope it is more effective, because in my implementations I actually saw a slowdown instead of an advantage. Even then, I'm generally not happy with hyper-threading. The OS and applications simply don't see the difference between two real cores and a hyper-threaded core. If I run another thread on a hyper-threaded core, I'll slow down the other thread, and that might not always be what you want to see happening. IMHO, the advantage should be over 10-20% for a desktop processor to even consider hyper-threading, and even then I want that BIOS option back that disables hyper-threading.

I've checked and both the Linux and Vista kernel support a large number of cores, so that should not be a problem.

Does anyone have any information on how well the multi-threading works on the multi-core Sun niagara based processors?
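One crude way to see whether SMT helps or hurts a given workload is to time the same CPU-bound job at one worker, at the physical-core count, and at the logical-core count — a minimal sketch (the workload and sizes are arbitrary illustrations; multiprocessing is used to sidestep Python's GIL, and real SMT measurements would also need thread pinning):

```python
import multiprocessing as mp
import os
import time

def burn(n):
    """A small CPU-bound task: sum of squares below n."""
    acc = 0
    for i in range(n):
        acc += i * i
    return acc

def timed_run(workers, work_per_worker=200_000):
    """Wall time to run one `burn` job per worker in parallel."""
    start = time.perf_counter()
    with mp.Pool(workers) as pool:
        pool.map(burn, [work_per_worker] * workers)
    return time.perf_counter() - start

if __name__ == "__main__":
    logical = os.cpu_count() or 1
    # If the jump from half the logical cores to all of them buys little
    # (or costs time), SMT is not helping this workload.
    for w in (1, max(1, logical // 2), logical):
        print(f"{w:2d} workers: {timed_run(w):.3f}s")
```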

Re:HyperThreading (2, Interesting)

jd (1658) | more than 5 years ago | (#22778824)

This is why I think it would be better to have virtual cores and physical hyperthreading. You have as many compute elements as possible, all of which are available to all virtual cores. The number of virtual cores presented could be set equal to the number of threads available, equal to the number of register sets the processor could describe in internal memory, or to some number decided by some other aspect of the design. Each core would see all compute elements, and would use them as needed for out-of-order operations. The primary idea would be to hide the multithreading of the chip from the OS, yet take advantage of being able to multithread. In addition to that, however, if one core can't exploit multiple threads but another core can exploit many, then you don't waste compute elements or slow things down by not making resources available.

(Since a compute element is designed for one specific task, you end up with a maximum number of supportable virtual cores equal to the number of pools times the number of elements in each pool. The minimum number of cores would be determined by the maximum number of threads generated by any instruction supported. If the CPU was really smart, it could "hotplug" CPUs to increase and reduce the number of cores that appear to the operating system, so that if there's a heavy, sustained use of the threading, the CPU doesn't try to overcommit resources.)

Good and bad (1)

Bullfish (858648) | more than 5 years ago | (#22778608)

While these processors may end up being great, in the end they may very well push AMD over the edge if you consider that AMD's new processors get clobbered by Intel's old processors. In the end, unless AMD pulls a rabbit out of their hat by the end of the year, this may either be the last innovation Intel makes for a while, or the last affordable one. As consumers we owe AMD a vote of thanks for driving Intel to the level they are at now.

Why the brick wall? (1)

Reality Master 101 (179095) | more than 5 years ago | (#22778640)

I can't even find the clock speed in that article, which means we're STILL probably stuck at 3.5 GHz +/- 0.5 GHz, where we've been stuck for what, three, four years? What the hell happened? If we're still shrinking components, why are we not seeing clock speed increases?

Re:Why the brick wall? (1)

glitch23 (557124) | more than 5 years ago | (#22778768)

I can't even find the clock speed in that article, which means we're STILL probably stuck at 3.5 GHz +/- 0.5 GHz, where we've been stuck for what, three, four years? What the hell happened? If we're still shrinking components, why are we not seeing clock speed increases?

Intel's current designs basically focus on what I'd consider horizontal scaling instead of vertical. That is, they increase the number of cores while running each at a lower frequency, which makes up for not raising the clock speed; in addition, they run cooler. You aren't losing ground: if the Core 2 Duos weren't more efficient and didn't provide better performance, Intel wouldn't be beating AMD's ass with them. You now have up to 4 cores in a single package, each running at 2-3 GHz (not sure of the exact number for the high-end Extreme chips), instead of a single core or even 2 cores running at a theoretical 4 or 5 GHz. The performance difference may or may not be much, but it is still more with horizontal scaling than vertical, not to mention better on power requirements.

Re:Why the brick wall? (4, Informative)

djohnsto (133220) | more than 5 years ago | (#22778772)

Because power generally increases at a rate of frequency^3 (that's cubed). Adding more cores generally increases power linearly.

For example, let's start with a single-core Core 2 @ 2 GHz. Let's say it uses 10 W (not sure what the actual number is).

Running it at twice the frequency results in a (2^3) = 8X power increase. So, we can either have a single-core 4 GHz Core 2 at 80 W, or we can have a quad-core 2 GHz Core 2 at 40 W. Which one makes more sense?
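That tradeoff can be written down directly (using the poster's cubic rule of thumb and hypothetical 10 W baseline; real chips depend on voltage scaling, so treat the exponent as an assumption):

```python
def scaled_power(base_watts, freq_ratio, exponent=3):
    """Power after scaling frequency, using the P ~ f**exponent rule of thumb."""
    return base_watts * freq_ratio ** exponent

base = 10.0  # hypothetical single-core Core 2 @ 2 GHz

single_4ghz = scaled_power(base, 2.0)  # one core at double the clock: 2**3 = 8x power
quad_2ghz = 4 * base                   # four cores, clock unchanged: linear in cores

print(single_4ghz, quad_2ghz)  # 80.0 40.0
```

Same nominal throughput doubling either way (ignoring software parallelism), at half the power for the multi-core route.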

Re:Why the brick wall? (0)

Anonymous Coward | about 6 years ago | (#22779518)

I thought the formula for CPU power usage was capacitance x voltage^2 x frequency. Your basic argument remains intact, though: it remains very difficult to scale frequencies up while keeping any power efficiency. The lower the voltage, the less difference there is between a 1 and a 0, so the more precisely you have to read them.

Intel initially wanted/expected the Pentium 4 line to top out around 8-10 GHz, and built the architecture on that notion. It was significantly flawed: 3.8 GHz was the highest frequency they reached at factory settings, and I'm sure even that required extensive cooling. I forget the record for overclocking a P4, but I don't think it even reached 8 GHz, and that was using liquid nitrogen to cool the CPU. It still exploded after 30 seconds or so, leaving a hole in the motherboard where the CPU should have been.

So yes, Intel has wisely decided to back off from the GHz race, especially as most consumers have enough computing power and are more concerned with performance per dollar and per watt.

Re:Why the brick wall? (1)

realmolo (574068) | about 6 years ago | (#22779916)

Well, performance-wise, a single-core 4GHz Core 2 makes more sense.

We still don't have much software that can really take advantage of multiple cores. A single core running at 4GHz is going to be MUCH faster on almost every benchmark than 2 cores running at 2GHz each.

But, it doesn't matter. Multi-cores are the future, and we need to figure out a way to take advantage of them.

Re:Why the brick wall? (0)

Anonymous Coward | about 6 years ago | (#22778852)

While clockrate has stagnated, performance per clock has increased. That's not a bad thing, especially considering Intel's past clock-fraud.

Re:Why the brick wall? (5, Informative)

TheSync (5291) | about 6 years ago | (#22779102)

1) We've hit the "Power Wall", power is expensive, but transistors are "free". That is, we can put more transistors on a chip than we have the power to turn on.

2) We also have hit the "Memory Wall", modern microprocessors can take 200 clocks to access DRAM, but even floating-point multiplies may take only four clock cycles.

3) Because of this, processor performance gain has slowed dramatically. As of 2006, performance was a factor of three below the traditional doubling every 18 months that occurred between 1986 and 2002.

To understand where we are, and why the only way forward now is parallelism rather than clock speed increases, see The Landscape of Parallel Computing Research: A View from Berkeley [berkeley.edu].
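The cost of the "Memory Wall" in point 2 can be quantified with the standard average-memory-access model (the 200-cycle penalty is the figure quoted above; the base CPI, miss rate, and memory references per instruction are made-up illustrative values):

```python
def effective_cpi(base_cpi, miss_rate, miss_penalty_cycles, mem_refs_per_instr=0.3):
    """Average cycles per instruction once cache-miss stalls are charged in."""
    return base_cpi + mem_refs_per_instr * miss_rate * miss_penalty_cycles

# A 1.0-CPI core where just 2% of memory references miss to 200-cycle DRAM:
cpi = effective_cpi(1.0, 0.02, 200)
print(cpi)  # 2.2 -- more than half the cycles are spent waiting on memory
```

A 2% miss rate more than doubling the effective CPI is why bigger caches and memory-level parallelism matter more than another few hundred MHz of clock.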

Re:Why the brick wall? (0)

Anonymous Coward | about 6 years ago | (#22779904)

Here's what I would like to know: Smalltalk is usually slower than comparable languages, but with all these cores its performance should be on par with other languages.

Welcome to minusdot, where you mod as low as you can to show your approval. You get a bonus just for showing up.

F*K, look around! (-1, Offtopic)

davFr (679391) | more than 5 years ago | (#22778682)

While /. people are wanking off over the next CPU from Intel, people die in Tibet because they want to be free. And not free as in beer, you nerds.

Re:F*K, look around! (1)

zakeria (1031430) | more than 5 years ago | (#22778728)

You're right, we should all take the problems of the world upon us and go bonkers!!! Go save a whale, you knob! People die for stuff all over the world, including in the USA and UK.

Re:F*K, look around! (0)

Anonymous Coward | about 6 years ago | (#22779140)

The difference is, this CPU stuff is actually interesting

Re:F*K, look around! (1)

MobileTatsu-NJG (946591) | about 6 years ago | (#22779812)

While /. people are wanking off the next CPU from Intel, people die in Tibet because they want to be free. And not free as beer, you nerds.
Please tell me how I can save a life over there.

Anti-Trust Question... (2, Interesting)

dosh8er (608167) | more than 5 years ago | (#22778694)

... because I simply _don't_ trust any company/companies with market share as vast as Intel's (yeah, I know, the "Traitorous Eight" [wikipedia.org]). Apparently, AMD has had a lot of legal beef with Intel in the past; in fact, they used to be best buds, until Intel snaked AMD out of some business with IBM. I know it's only a matter of time before Intel outwits AMD in mass sales of processors (esp. in the desktop/laptop field... I personally LOVE the power saving on my dual-core; 3.5 hrs avg. on a battery is GREAT for the powerhouse that it is), but what can AMD do? Merge with ATI... oops, already been done. So is AMD restricted to the GPU market for the rest of its (profitable) life?

I can see this going two ways:
1) Intel forces AMD outta business. AMD ends up liquidating its stock/technology to foreign companies (read: outside USA).
2) AMD Brings an Anti-Trust case against Intel for 'unfair practices' or some crap (IANAL).

However, there is ALWAYS the possibility that Intel pulls another Pentium Bug [wikipedia.org]. Remember the mid-to-late '90s? (God, how _could_ we _forget_ the '90s!?) Either way, AMD needs to diversify their R&D and/or look for more lucrative business opportunities (whatever that means), or - the winner IMHO - work with IBM on this power-saving crusade.

What do you think, Slashdot crowd?

Re:Anti-Trust Question... (1)

Bacon Bits (926911) | about 6 years ago | (#22779800)

The Pentium Bug isn't going to happen again. Or rather, it still happens but it doesn't matter.

Since the Pentium, all Intel (and AMD) processors have used microcode. That is, there is a layer of abstraction between machine code that the processor executes and the actual electronic logic on the chip. It's a layer between the physical processor and Assembly. What it allows you to do is provide bug fixes for processor design errors. It's slightly slower because it's an extra decode operation, but it allows for much more complex and risky designs because you can fix errors you find after the chip is in production.

Microcode updates are loadable by the BIOS or by the OS, and are loaded each time the PC is powered on. BIOS updates are preferable, of course, but those "drivers" you can download for AMD processors, and certain updates available through OS vendors (http://support.microsoft.com/kb/936357), apply microcode updates from the OS.

closing the FSB loophole (1)

0111 1110 (518466) | about 6 years ago | (#22779088)

While this new architecture sounds amazing, and I am planning to upgrade in about a year when this is released, is anyone else a bit worried about the overclocking potential of Nehalem? Intel sells their high-end $1000+ 'Extreme' CPUs with an unlocked multiplier and, other than a higher bin, that is really their only selling point. I remember the days before Intel started locking down the multipliers; lots of people thought it might spell the end of overclocking. But of course it turned out that FSB overclocking, although limited by RAM and chipset, was a perfectly viable alternative, and so overclockers were freed from Intel's pricing structure. This seems like an opportunity for Intel to add value to their more profitable high-binned parts by closing the FSB loophole and leaving overclockers no way to do their thing. Could it be that by next year only the rich will be able to afford a bleeding-edge CPU?

dual monitor support? (1)

rhavenn (97211) | about 6 years ago | (#22779228)

So, will Intel finally release a dual-DVI setup then? I love nVidia, but their lack of FreeBSD x64 support, and the fact that I really have no need for an nVidia card outside of dual-monitor support, has me searching high and low for a decent dual-DVI setup that works with Xorg drivers and has 3D/DRI out of the box: something that lets me use some 3D effects and do basic 3D programming without stuttering like a mofo or switching to "software rendering" mode :(

FYI: ATI lost me as a customer with their many years of zero Linux support and not to mention they still don't support FreeBSD. I won't use them except for some integrated server boards where it doesn't matter.

No forgiveness for ATI (1, Insightful)

BrunoUsesBBEdit (636379) | about 6 years ago | (#22779636)

ATI lost me as a customer with their many years of zero Linux support and not to mention they still don't support FreeBSD. I won't use them except for some integrated server boards where it doesn't matter.

No forgiveness for ATI. I think we need to stay loyal to the companies that first showed us respect and show us the most respect today. Intel has poured resources into Linux and Xorg. When we are able to offload all HD video decoding from our CPUs to our GPUs, it will be Intel that makes that possible. For years ATI and nVidia have taunted the MythTV community with $25 512MB video cards that could easily handle HD video if only the manufacturers would support us. This is a grievance I can't easily let go of.

Re:dual monitor support? (2, Interesting)

dpokorny (241008) | about 6 years ago | (#22779680)

Effectively all of Intel's chipsets support dual digital outputs. Many mobile chipsets support 5+ unique outputs. Just take a look at the spec sheets available at developer.intel.com. It's a question of the motherboard manufacturers -- they need to put one or more sDVO transmitters on the motherboard to support the physical DVI connectors.

There is a standard called ADD+ that allows you to connect the transmitters via an AGP or PCIe card; however, given that drivers are validated with specific transmitters, it's unusual to find ADD+ cards outside of driver development groups or validation teams.

However, if you can find an ADD+ card with a pair of common transmitters such as the Chrontel CH7307, then you can get your dual DVI outputs.

(Not speaking as an official representative of Intel Corporation)

Re:dual monitor support? (1)

dbIII (701233) | about 6 years ago | (#22780224)

You are not going to see an open nvidia driver until the silly software patent mess vanishes. You could continue to punish nvidia for the sins of SGI et al or you could just download their drivers.

The Giant is awakened (3, Interesting)

markass530 (870112) | about 6 years ago | (#22779266)

I say this as an admitted AMD fanboy, and in hopes that they can make a comeback, to once again force Intel into a frenzy of research and development. I can't help but imagine that AMD execs are saying something along the lines of Isoroku Yamamoto's famous post-Pearl Harbor WWII quote: "I fear all we have done is to awaken a sleeping giant." It's all gravy for consumers, so one can't help but be happy at the current developments. However, to ensure future happiness for consumers, one must also hope for an AMD comeback.

the year 3000 (0, Redundant)

rice_burners_suck (243660) | about 6 years ago | (#22779276)

when are they gonna come out with a processor capable of performing the way processors will in, say, the year 3000?

Re:the year 3000 (0)

Anonymous Coward | about 6 years ago | (#22779536)

Um .. the year 3000, maybe?

What's with the Hebrewlish? (1)

flyingfsck (986395) | about 6 years ago | (#22779888)

These Intel Hebrewlish names are getting really hard to pronounce.

"Hebrew English is to be helpings and not to be laughings at."

prefix confusion... (1)

drew (2081) | about 6 years ago | (#22780414)

Given that processors with four cores are called "quad-core", shouldn't a six core processor be a "sexa-core" processor? Calling a six core processor "hexa-core" would imply that a processor with four cores should be called "tetra-core."
