
Larrabee Based On a Bundle of Old Pentium Chips

ScuttleMonkey posted about 6 years ago | from the making-old-new-again dept.

Intel

arcticstoat writes "Intel's Pat Gelsinger recently revealed that Larrabee's 32 IA cores will in fact be based on Intel's ancient P54C architecture, last seen in the original Pentium chips, such as the Pentium 75, in the early 1990s. Each of these 32 cores will feature a 512-bit wide SIMD (single input, multiple data) vector processing unit."


286 comments

first post (-1, Offtopic)

Anonymous Coward | about 6 years ago | (#24089941)

God, people are annoying, just figured I'd try it just once :)

Pentium 75? (5, Funny)

Anonymous Coward | about 6 years ago | (#24089949)

Ah the dreams of the past, a beowulf cluster of old computers come to life :)

Re:Pentium 75? (5, Funny)

Divebus (860563) | about 6 years ago | (#24090177)

Making math errors at blazing speeds...

Re:Pentium 75? (-1, Troll)

Anonymous Coward | about 6 years ago | (#24090403)

Ironically, the people who made these lame jokes the most (Apple fanbois) now advocate Intel chips as being the best. Yet another example of "do as I say, not as I do" from the Apple camp.

Re:Pentium 75? (3, Insightful)

merreborn (853723) | about 6 years ago | (#24090465)

Making math errors at blazing speeds...

Ironically, the people who made these lame jokes the most (Apple fanbois) now advocate Intel chips as being the best. Yet another example of "do as I say, not as I do" from the Apple camp.

I know I'm wasting my time responding to such a blatant troll, but there's nothing hypocritical about saying that the original Pentium 1 was a pretty bad chip and the Core 2 Duo is a pretty great one.

Failing to reliably perform basic floating point ops is pretty embarrassing. But Intel's come a long way since then.

Re:Pentium 75? (4, Funny)

StikyPad (445176) | about 6 years ago | (#24091283)

Oh, it performed them reliably... just reliably wrong.

Re:Pentium 75? (2, Funny)

bluefoxlucid (723572) | about 6 years ago | (#24090719)

I advocate ARM as the best. :(

Re:Pentium 75? (5, Informative)

Anonymous Coward | about 6 years ago | (#24090913)

I don't care if you're a C64 fanboi, Pentiums made mistakes. Apple had nothing to do with it. Read here [wikipedia.org].

And this also from the same source... "In June 1994, Intel engineers discovered a flaw in the floating-point math subsection of the Pentium microprocessor. Under certain data dependent conditions, low order bits of the result of floating-point division operations would be incorrect, an error that can quickly compound in floating-point operations to much larger errors in subsequent calculations. Intel corrected the error in a future chip revision, but nonetheless declined to disclose it."
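For the curious, the canonical reproduction fits in a few lines of C. The operands below are the widely circulated FDIV test case; the "bad" value in the comment is what flawed P5 parts reportedly returned, so treat this as a sketch of the check rather than a spec:

    /* Classic FDIV test case. A flawed Pentium reportedly returned
       ~1.333739068 for the division below; the correct quotient is
       ~1.333820449. Any fixed or modern FPU prints the correct value. */
    #include <stdio.h>

    int main(void) {
        double x = 4195835.0, y = 3145727.0;
        printf("%.9f\n", x / y);   /* expect 1.333820449 */
        return 0;
    }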

Re:Pentium 75? (1)

i.of.the.storm (907783) | about 6 years ago | (#24091323)

Really? I don't think Apple fanbois [sic] would know that much about Pentium math errors, judging from their apparent age levels... On a more serious note, I think it's just CPU geeks who make those jokes, not Mac fans.

Re:Pentium 75? (4, Funny)

BUL2294 (1081735) | about 6 years ago | (#24090435)

Oh, don't worry about that. Games will just be more interesting. For example, that 3D monster you're trying to hack to death with a chainsaw will now suddenly shift to a different part of the screen... Or maybe you'll get a cool color-cycling effect from some incorrectly calculated values...

"Intel Graphics Inside--it's all in good fun!"

Re:Pentium 75? (5, Funny)

Lemmeoutada Collecti (588075) | about 6 years ago | (#24091239)

You mean my FPS will behave like World of Warcraft now? Wonderful!

Re:Pentium 75? (0)

Anonymous Coward | about 6 years ago | (#24090453)

32 cores at a time!

What the hell is Larrabee? (4, Insightful)

vondo (303621) | about 6 years ago | (#24089953)

A little context might help. This isn't the Inquirer for god's sake.

Re:What the hell is Larrabee? (5, Informative)

Darkness404 (1287218) | about 6 years ago | (#24089985)

Larrabee is the codename for a discrete graphics processing unit (GPU) chip that Intel is developing as a revolutionary successor to its current line of graphics accelerators. The video card containing Larrabee is expected to compete with the GeForce and Radeon lines of video cards from NVIDIA and AMD/ATI respectively. More than just a graphics chip, Intel is also positioning Larrabee for the GPGPU and high-performance computing markets, where NVIDIA and AMD are currently releasing products (NVIDIA Tesla, AMD FireStream) which threaten to displace Intel's CPUs for some tasks. Intel plans to have engineering samples of Larrabee ready by the end of 2008, with public release in late 2009 or 2010.[1]

According to Wikipedia http://en.wikipedia.org/wiki/Larrabee_(GPU) [wikipedia.org]

Re:What the hell is Larrabee? (4, Insightful)

TransEurope (889206) | about 6 years ago | (#24090337)

Also interesting: Intel expects a maximum power consumption of at least 300 watts. Personally, I expect nothing from that thing. The ancient technology of the cores and the prospect of building a system that feeds and cools a 300-watt hotspot don't make these cards my favourite choice yet. I'm very sceptical about Intel's attempt at a high-end graphics board. I really can't imagine that old first-generation Pentium cores will be able to compete with modern stream processing units. I wonder why Intel didn't at least choose some RISC design, maybe the i960.

Re:What the hell is Larrabee? (4, Insightful)

lorenzo.boccaccia (1263310) | about 6 years ago | (#24090561)

Also consider that at least Intel's last three attempts at building a high-end graphics board failed miserably, and are now almost a recurring joke.

Sorry for the drunken English.

Re:What the hell is Larrabee? (1)

davidsyes (765062) | about 6 years ago | (#24091193)

"I#m very sceptic about Intes try of making a high end graphic board."

That exemplifies why I feel burned for buying a laptop with an Intel video chip. Next time, I'll get another make, an ADD-ON chip that still is affordable, or in the $600 range of laptop.

Re:What the hell is Larrabee? (-1, Troll)

bluefoxlucid (723572) | about 6 years ago | (#24090787)

Then this is a fucking bad design.

If you wanted a massively parallel specialized architecture, you should have used a specialized chip. If you wanted something more generic, you should have used ARM (a 600MHz XScale peaks at 0.5W of power! And it performs multiple insns per clock due to the efficient pipelining inherent in the ARM ISA...).

32 x 600MHz@0.5W == 19.2GHz@16W, woo!

Re:What the hell is Larrabee? (5, Informative)

ciroknight (601098) | about 6 years ago | (#24091365)

Yes, 32 x 600MHz x 1 MIPS/MHz @ 0.5W == 19.2 GIPS @ 16W.

Meanwhile...

32 x ???MHz (unknown, but likely 900+ to be competitive with current designs) x 3+ MIPS/MHz + 32 x 512-bit SIMD units = OMGWTFHAX @ 300W.

Seriously. The "Pentium" base of this design is damned near irrelevant. At this point, all it's doing there is scheduling execution on the SIMD units. If you've seen any modern GPU design, it's basically hugely parallel cores attached to a few "director" cores that put everything where it needs to go. The original Pentium is probably the most powerful CPU with the least complicated design on the process, and with the least amount of legacy MMX cruft.

Re:What the hell is Larrabee? (3, Informative)

jandrese (485) | about 6 years ago | (#24090005)

According to TFA, it's a graphics card that Intel is making to compete with Intel and ATI. I'm guessing it's going to be highly optimized for ray tracing, given Intel's statements in the past. Total power consumption estimates are jaw-dropping; TFA estimates around 300W.

Re:What the hell is Larrabee? (1)

jandrese (485) | about 6 years ago | (#24090019)

Obviously they're competing with nVidia and ATI, not Intel and ATI. Geez, even mandatory previews don't always work.

Re:What the hell is Larrabee? (0)

Anonymous Coward | about 6 years ago | (#24091373)

Wait, are previews mandatory now? Since I habitually preview it could be true without my having noticed, so I'll purposely try to post this without previewing first.

Posting AC to get this OT comment a score of 0.

Re:What the hell is Larrabee? (4, Insightful)

poetmatt (793785) | about 6 years ago | (#24090349)

Not only is the power draw absurd, but ATI can already do 100% native ray tracing [techpowerup.com], which crushed Intel big time.

I welcome Intel trying to push for market share, but it's going to be many generations before Intel can play catch-up on graphics cards. Specifically, by the time we get to 32+GB of RAM and you can afford a couple of gigs for graphics (at which point we'll probably need 4+ gigs for graphics), the performance of an integrated solution will still be lacking. For anything graphics-intensive, graphics bandwidth and memory requirements currently grow far faster than general processing requirements.

Re:What the hell is Larrabee? (4, Informative)

Joce640k (829181) | about 6 years ago | (#24090369)

Not quite...

Larrabee is a general-purpose number cruncher with a high degree of parallelism.

NVIDIA/ATI are moving towards making their graphics cards capable of running general purpose code. Intel is coming from the other side, moving a general purpose parallel-compute engine towards doing graphics.

Yes, it's a subtle difference, and yes, they'll meet in the middle; it's just a question of angles.

Intel wants the parallel compute market more than it wants the graphics card market so that's who it's pitching this at.

Re:What the hell is Larrabee? So, then want Kaos (0)

davidsyes (765062) | about 6 years ago | (#24091241)

and they want to keep Kontrol... They want to shag the field with Austen-sible power CON-sumptions? So, do they want to *86* or DEEP-SIX ATI & nVidia and others?

Re:What the hell is Larrabee? (1)

clampolo (1159617) | about 6 years ago | (#24090051)

A little context might help. This isn't the Inquirer for god's sake.

It's Intel's graphics chip for competing with nvidia. They are moving into this turf because nvidia is attempting to use their CUDA technology to make the CPU less important.

So it's only natural that Intel is fighting back.

Re:What the hell is Larrabee? (5, Funny)

KlomDark (6370) | about 6 years ago | (#24090095)

It's one of the larger cities in Wyoming. Get with it. ;)

Re:What the hell is Larrabee? (5, Funny)

Anonymous Coward | about 6 years ago | (#24090141)

It's one of the larger cities in Wyoming. Get with it. ;)

Only if you have a head cold.

Re:What the hell is Larrabee? (0, Troll)

andphi (899406) | about 6 years ago | (#24090381)

That's impossible! Everyone knows that no one lives in Wyoming. The population is bovine, all the way down.

Re:What the hell is Larrabee? (0)

Tumbleweed (3706) | about 6 years ago | (#24090573)

It's one of the larger cities in Wyoming. Get with it. ;)

I suppose it depends on what you define as a 'city'. By my definition, there ARE no 'cities' in Wyoming, only some large towns. The largest town in Wyoming has about 55k people, right? That ain't a city, son.

Big mountains, though - the Grand Tetons are *way* more impressive than the Rockies.

The first and only time I ever saw a tequila lollipop (WITH WORM) was at a gas station outside of Gillette, Wyoming. That's the kind of experience that sticks with a man.

Re:What the hell is Larrabee? (1)

je ne sais quoi (987177) | about 6 years ago | (#24091387)

The three largest "cities" are: Cheyenne -- 56k, Casper -- 50k, Laramie -- 26k. Total population of the state is 522k, yet it's the 10th largest state by area.

Re:What the hell is Larrabee? (1, Informative)

dpiven (518007) | about 6 years ago | (#24090875)

Uh, I think you're talking about LARAMIE, not Larrabee.

Manycore GPU (5, Interesting)

DrYak (748999) | about 6 years ago | (#24090293)

Larrabee [wikipedia.org] is going to be Intel's next creation in the GPU world: a many-core GPU with the following peculiarities:

- Fully compatible with the x86 instruction set (whereas other GPUs use different architectures, with instruction sets that aren't as well suited to general computing). Thus, Larrabee could *also* be used as a many-core main processor (if popped into a QuickPath socket) to execute a good multicore OS. That's not achievable with any current GPU (both ATI's and nVidia's completely lack some control structures - both are unable to use subroutines, so everything must be inlined at compile time).

- Unlike most current Intel x86 CPUs, it features a shallow pipeline, executing instructions in order. Hence Larrabee (and Silverthorne, which shares these characteristics) has been regularly compared with old Pentiums (which also share them) ever since the initial announcement, including in TFA.

- It features more cores with narrower SIMD: 32 cores, each able to handle 16 32-bit floats simultaneously. By comparison, nVidia's CUDA-compatible GPUs have at most 16 cores, but each can execute 32 threads over 4 cycles and keep up to 768 threads in flight. This enables Larrabee to cope with slightly more divergent code than traditional GPUs and makes it a good candidate for stuff like GPU-accelerated ray tracing.

Hence all the recent technical demos running Quake 4 ray traced, mentioned on /.

That's what Intel tells you, anyway.

Now, the old and experienced geek will also notice that Intel has so far only made press releases and technical demos running on plain regular multi-chip, multi-core Intel Core systems (just promising that the real chip will be even better than the demoed stuff).

Meanwhile, ATI and nVidia are churning out new "half"-generations every 6 months.

And the whole Larrabee thing is starting to sound like vaporware.
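As a rough illustration of that third point, here's what a 16-lane, 32-bit-float SIMD unit amounts to, written as plain scalar C (a sketch only - Larrabee's actual vector instruction set hasn't been published):

    /* Models a 512-bit SIMD register as 16 x 32-bit float lanes.
       Each pass of the outer loop covers the work one wide vector
       instruction would do. Assumes n is a multiple of LANES. */
    #define LANES 16   /* 512 bits / 32 bits per float */

    void saxpy_wide(const float *x, const float *y, float *out, int n) {
        for (int i = 0; i < n; i += LANES)           /* one "vector op" per pass */
            for (int lane = 0; lane < LANES; lane++)
                out[i + lane] = 2.0f * x[i + lane] + y[i + lane];
    }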
 

Re:What the hell is Larrabee? (1)

sexconker (1179573) | about 6 years ago | (#24090851)

Ironically, if you had been reading The Inq for the past forever, you'd know what Larrabee was.

Re:What the hell is Larrabee? (1)

Kamokazi (1080091) | about 6 years ago | (#24091399)

lrn2google

Seriously. I hit an article on Slashdot probably about once a week where I am not entirely sure what they are talking about, so I look the damn thing up. Usually it's some particle physics voodoo, a new/obscure programming language or concept, or an acronym for something I knew by another name. Slashdot covers a wide variety of very technical topics; they can't be expected to elaborate on them all.

Sounds good! (2, Funny)

Anonymous Coward | about 6 years ago | (#24089989)

Sounds great, as long as you don't plan on doing any floating point math [wikipedia.org] on it!

Re:Sounds good! (0, Redundant)

resonance378 (1169393) | about 6 years ago | (#24090061)

Curses! Beaten to the punch! I was in fact going to point to the same article. I remember my neighbor getting all upset about this in his brand new Pentium base system.

Re:Sounds good! (3, Funny)

h4rm0ny (722443) | about 6 years ago | (#24090391)


Hey, only Intel provides you with a floating point that really floats - why, you never know where it's going to end up! Now that's floating! :D

Re:Sounds good! (2, Funny)

Anonymous Coward | about 6 years ago | (#24090485)

Intel, Intel, give me your answer do,
Going hazy, can't divide three by two.
My answers I can't see 'em,
They're stuck in my Pent-i-um,
So you'd look great
If you would make
A functional FPU.

(best sung by mid-'90s speech synthesisers)

Spock comes to mind... (1)

geekmansworld (950281) | about 6 years ago | (#24089993)

"Stone knives and bearskins"

Pentiums? (3, Funny)

h4rm0ny (722443) | about 6 years ago | (#24090003)


This is just unbelievably good news. After all this time, I get to start telling Pentium jokes again! I never thought I would!

Re:Pentiums? (5, Funny)

Anonymous Coward | about 6 years ago | (#24090117)

Intel... where quality is job 0.9995675!

Re:Pentiums? (2, Funny)

Red Flayer (890720) | about 6 years ago | (#24090309)

This is just unbelievably good news. After all this time, I get to start telling Pentium jokes again! I never thought I would!

This is slashdot. You didn't need something like this to beat the Pentium dead horse... or for that matter, any dead horse.

In other words,

In Soviet Russia, floating-point arithmetic messes up Pentium

Netcraft confirms, Pentium is undead. Brainssss!

Imagine a Beowulf cluster of these.

Et cetera, ad infinitum.

Re:Pentiums? (0)

Anonymous Coward | about 6 years ago | (#24090395)

You forgot ??? and Profit!, you insensitive clod.

Re:Pentiums? (1)

Tumbleweed (3706) | about 6 years ago | (#24090633)

Hey, you insensitive clod ... you forgot the Natalie Portman and the hot grits! (and the welcoming of the new overlords and all the bases that are belonging to us).

SIMD = Single Instruction, Multiple Data (4, Informative)

Joce640k (829181) | about 6 years ago | (#24090011)

Get your acronyms right....

now accepting memes for fdiv bugs (-1, Redundant)

Anonymous Coward | about 6 years ago | (#24090031)

Imagine, a single chip containing a beowulf cluster of fdiv bugginess...

In Soviet Russia fdiv bugs you on 32 cores.

In Korea, old cores for old people.

eh, I give up

tm

I'm no expert but (4, Funny)

Gat0r30y (957941) | about 6 years ago | (#24090065)

The card features one 150W power connector, as well as a 75W connector. Heise deduces that this results in a total power consumption of 300W,

Um, that just doesn't seem to quite add up to me.

Re:I'm no expert but (5, Informative)

tlhIngan (30335) | about 6 years ago | (#24090129)

The card features one 150W power connector, as well as a 75W connector. Heise deduces that this results in a total power consumption of 300W,

Um, that just doesn't seem to quite add up to me.

Power can come from multiple sources. In this case, you have a 150W power connector (probably an 8-pin PCIe one) and another 75W one (a 6-pin PCIe). The remaining 75W comes from the PCIe slot itself.

Nothing terribly unusual - a number of cards are coming out in configurations like this, and 300W for a video card is starting to become the norm, depressing as it is.

Re:I'm no expert but (1)

Gat0r30y (957941) | about 6 years ago | (#24090213)

Thanks for clarifying, and you are right, 300W is out of control for a graphics card. On the upside, maybe I won't game so much anymore because of the electricity bill.

Re:I'm no expert but (4, Funny)

MightyMartian (840721) | about 6 years ago | (#24090411)

Or from the loss of mental acuity due to serious RF interference melting your brain.

"Look at da pretty colors..."

Re:I'm no expert but (1)

Yvan256 (722131) | about 6 years ago | (#24090499)

My Core 2 Duo Mac mini + ViewSonic VP171s are both listed at 30-35W average.

Hearing about video cards requiring power connectors AND wasting 300W of power just seems insane to me.

Not to mention the power for the CPU, RAM, hard drives, LCD, etc. And since all of this crap generates heat, some of you are also paying double/triple since you run the AC to counter the heat.

Re:I'm no expert but (0)

Anonymous Coward | about 6 years ago | (#24090789)

300 W is absurd. 240 W should be enough for any GPU.

Re:I'm no expert but (2, Informative)

i.of.the.storm (907783) | about 6 years ago | (#24091403)

...and 300W for a video card is starting to become the norm, depressing as it is.

Not really, die shrinks have been actually driving down power consumption. If you look at this page: http://www.guru3d.com/article/radeon-hd-4850-and--4870-crossfirex-performance/3 [guru3d.com] you can see that the latest generation Radeon 4850 and 4870 consume much less power than the power hungry peaks set by the 2900XT. The 4850 system uses less than 300W at full load. That's pretty damn impressive considering the ridiculous amount of performance it puts out.

Re:I'm no expert but (0)

Anonymous Coward | about 6 years ago | (#24090171)

This would be in addition to the power it draws through the PCIe interface, remember.

Re:I'm no expert but (0)

Anonymous Coward | about 6 years ago | (#24090191)

Plus the 75W that the PCIe bus supplies.

Re:I'm no expert but (0, Redundant)

Futile Rhetoric (1105323) | about 6 years ago | (#24090229)

The card would also draw some power from the PCI-E slot.

Re:I'm no expert but (0, Redundant)

tixxit (1107127) | about 6 years ago | (#24090289)

The PCIe bus itself supplies up to a max of 75W. So, 150 + 75 + 75 = 300.

Re:I'm no expert but (4, Funny)

h4rm0ny (722443) | about 6 years ago | (#24090319)

Um, that just doesn't seem to quite add up to me.

It does if you work it out on a Pentium I [wikipedia.org] :D

Re:I'm no expert but (2, Funny)

Chyeld (713439) | about 6 years ago | (#24091199)

The card features one 150W power connector, as well as a 75W connector. Heise deduces that this results in a total power consumption of 300W

Um, that just doesn't seem to quite add up to me.

Seeing as it's based on a cluster of Pentiums, did you really expect it to add up?

Re:I'm no expert but (-1, Redundant)

Anonymous Coward | about 6 years ago | (#24091427)

He added them up on an old Pentium...

Weird Al was right.... (2, Funny)

kannibul (534777) | about 6 years ago | (#24090139)

It really is all about the Pentiums.

Imagine a... (0, Redundant)

dave562 (969951) | about 6 years ago | (#24090147)

Beowulf Cluster of Pentium 75s!!!

Doh! Intel already beat me to it.

good. (4, Insightful)

apodyopsis (1048476) | about 6 years ago | (#24090153)

good. sounds like a sensible engineering decision.

on the basis that:
- the design is well known, understood, and has had rigorous testing in the field
- they will no doubt fix any understood errors first
- it limits the R&D to the multicore section

as long as the chip performs well for the silicon overhead, they should feel free to cram as many in as they want.

seems perfectly sensible to me.

32 Pentiums 75? (2, Funny)

Anonymous Coward | about 6 years ago | (#24090209)

Core 1: 4195835/3145727 = 1.33382
Core 2: 4195835/3145727 = 1.33382
Core 3: 4195835/3145727 = 1.33382
Core 4: 4195835/3145727 = 1.33382
.
.
.
Core 31: 4195835/3145727 = 1.33382
Core 32: 4195835/3145727 = mmm... 1.33374? Oh, f*ck!

I doubt it (5, Interesting)

Bender_ (179208) | about 6 years ago | (#24090219)

I doubt it. Maybe they mentioned the Pentium as an example to explain an in-order superscalar architecture, as opposed to more modern CPUs.

- There is a lot of overhead in the P54C to execute complex CISC operations that are completely useless for graphics acceleration.

- The P54C was manufactured in a 0.6-micron BiCMOS process. Shrinking this to 0.045-micron CMOS (more than 100x smaller!) would require a serious redesign up to the RTL level. Circuit design has to evolve with process technology.

- A lot more...

The "Core" chips were based on the Pentium III (1)

Joce640k (829181) | about 6 years ago | (#24090409)

...and the Pentium III was basically the same as the Pentium Pro.

If Intel is going backwards then why not go all the way back to the original Pentium? Makes sense to me.

Re:The "Core" chips were based on the Pentium III (3, Informative)

TeknoHog (164938) | about 6 years ago | (#24091007)

The PPro was the first Intel processor that was RISC internally, with translation from x86, whereas the original Pentium and the P-MMX were pure CISC. This is the main reason I seriously doubt they'd use the P54C in Larrabee.

I don't quite agree (1)

dreamchaser (49529) | about 6 years ago | (#24090419)

It's more likely that they are taking basic design concepts. It says "based on", not "clone of". By optimizing away some of the overhead you mention with more modern architectural techniques, they can both keep it simple and capitalize on modern optimizations.

Re:I doubt it (3, Interesting)

Enleth (947766) | about 6 years ago | (#24090493)

It's unlikely but not impossible - don't forget that the Pentium M and, subsequently, the Core line of processors were based on the Pentium III Coppermine, whereas the Pentium 4 NetBurst architecture developed in the meantime was abandoned completely. Going back to the Pentium I would be a bit extreme, but it's possible that they meant some basic design principles of the Pentium I, not the whole core as it was. Maybe they will make something from scratch but keep it similar to the original Pentium's inner RISC core, or maybe redo it as a vector processor, or hell knows what. It was a citation from a translated interview with some press monkey, so you can expect anything.

Check your math (1)

argent (18001) | about 6 years ago | (#24090513)

It's only 13x smaller. :)

Re:Check your math (1)

Laglorden (87845) | about 6 years ago | (#24090731)

13x13 = 169 or 177 for larger values of 13

Re:I doubt it (4, Interesting)

Chip Eater (1285212) | about 6 years ago | (#24090567)

A process shrink, even a deep one like 0.6 um to 45 nm, shouldn't require too many RTL changes if the design was done right. But I don't think they are using "soft" or RTL cores; most likely this P54C was a custom design. Shrinking a custom design is a lot more tedious, which might help explain why they chose such an old, small core.

Yes, "based on" seems to be the key phrase (3, Insightful)

mbessey (304651) | about 6 years ago | (#24090709)

Obviously they're not just going to slap a bunch of Pentium cores on there and call it good. But the high-level design can probably start off with the P54, and just rip out stuff that doesn't need to be supported, possibly including:

Scalar floating-point, 16-bit protected mode, real mode, operand size overrides, segment registers, the whole v86 mode, the i/o address space, BCD arithmetic, virtual memory, interrupts, #LOCK, etc, etc.

Once you've done that, you'll have a much simpler model to synthesize down to an implementation. And with a slightly-modified compiler spec, you can crank out code for it with existing compilers, like ICC and GCC.
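As a minimal sketch of that last step: GCC can already emit original-Pentium code via its stock -march=pentium target (a real flag); the stripped-down core it would feed here is, of course, this thread's speculation:

    /* Hypothetical kernel for the stripped-down P54-class core sketched
       above. Integer-only, consistent with scalar FP being ripped out.
       Build (assumption: a plain cross-compile, no special toolchain):
           gcc -O2 -m32 -march=pentium -c kernel.c */
    void blend(const unsigned char *src, unsigned char *dst, int n) {
        for (int i = 0; i < n; i++)
            dst[i] = (unsigned char)((src[i] + dst[i]) / 2);  /* 50% blend */
    }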

Re:I doubt it (3, Interesting)

georgewilliamherbert (211790) | about 6 years ago | (#24090739)

One does not "shrink" a chip by taking photomasks and shrinkenating. One redoes the design / layout process, generally. The P5 series went from 0.8 um to 0.25 um over its lifetime (through Tillamook), stepping through 0.6, 0.35, and finally 0.25 um.

It was 148 mm^2 at 0.6 um, so the process shrink should bring it down to a floorplan of around a square millimeter or so a core. Not sure how big the die will be for Larrabee, but the extra space will probably support the simple wide data unit per core and more cache. If the SIMD is simple it could be another 3-4 million transistors / 1 square mm or so. For a 100 mm^2 chip that gives you another 30 mm^2 or so for I/O and cache (either shared, or parceled out to the cores).
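That floorplan guess checks out under ideal square-law scaling (a sketch; real ports never scale this cleanly):

    #include <stdio.h>

    int main(void) {
        double linear = 0.60 / 0.045;      /* 0.6 um -> 45 nm: ~13.3x linear */
        double area   = linear * linear;   /* ~178x by area */
        printf("linear %.1fx, area %.0fx, core ~%.2f mm^2\n",
               linear, area, 148.0 / area);   /* ~0.83 mm^2 per core */
        return 0;
    }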

Bill Watterson process (4, Funny)

DragonHawk (21256) | about 6 years ago | (#24091069)

One does not "shrink" a chip by taking photomasks and shrinkenating.

'course not. You use a transmogrifier. In the industry, it is known as the "Bill Watterson" process.

It can also be used to turn photomasks into elephants, which, while less profitable, is immensely entertaining if the operator didn't see you change the setting.

Re:I doubt it (1)

Brain_Recall (868040) | about 6 years ago | (#24091149)

You do realize they have automated tools to take Verilog source (or whatever they use) and throw it onto silicon. Sure, it probably won't run at the clock frequency you would get with hand-tuned circuits, but it'll work.

Re:I doubt it (5, Informative)

ratboy666 (104074) | about 6 years ago | (#24091153)

The original Pentium (which went to 166MHz at the end, not just 75MHz) used U and V execution pipes. No translation to micro-ops, and no "out of order". Indeed, there shouldn't be a need for that in Larrabee anyway, given the number of cores. It would almost be better to get rid of the V pipe and add SIMD instead.

Your comments on CISC are a bit off-base; the idea is to execute shaders in x86 machine code. They can be simple (limited flow control) or complex (general CPU/GPU).

"Out-of-order" (i.e. Pentium Pro and later) is not so good with that many cores doing that kind of work. It would get the hardware into a lot of trouble. Better to keep it simple and add more cores.

A better starting point would probably have been ARM, but that would lose the compatibility edge. If Larrabee works, it will take the GP-GPU market by storm. It needs:

1 - to publish itself as a NUMA-access CPU (add a bit to tell the OS what it is for)
2 - compiler optimizations for the particular CPU architecture, preferably broken into two pieces:
2a - "straight line" shader code
2b - branching code
3 - a guide to the new NUMA characteristics.

With that in place, a standard (BSD/Linux) OS will be able to use it for regular jobs, or for those special "I need the SIMD unit" jobs. The biggest hassle is trying to split control of those new CPU units between OpenGL and the regular scheduler (this is a kernel hack that Intel will have to make). It would be easier to jam this into OpenSolaris, but that isn't anywhere near popular enough.

Don't you want your video card to assist compiling large source when not gaming/modeling? Why not?

And, a few "extra" points:

- Intel already has an optimizing compiler for the P54C architecture, and we have gcc.
- The architecture, including the U/V pipelines, only used 3.1 million transistors.
- A GeForce 7800 GTX has 302 million transistors -- 100x the number of the original Pentium processor.

So, I would think that using 32 shrunk "Pentium Classic" cores would be quite feasible -- you need some (lots) of logic to ensure that they can all access their respective memories, and the general SIMD implementation will take quite a bit of real estate as well. There is probably a budget of 600M transistors (wild-ass guess) for Larrabee, an estimate derived from the power consumption estimates.

The gate-size shrink should result in higher speeds. There may be a danger in the complex instruction interpretation routines, but these can be corrected. The single-cycle instructions are already a (more or less) synchronous design, and should scale trivially.

Anything I am missing?

I, for one, am looking forward to buying a desktop super-computer with Larrabee.

So can we expect another "rounding error" debacle? (0)

Anonymous Coward | about 6 years ago | (#24090255)

Careful, Intel! Don't base these on core designs that are TOO old!

Or do y'all still think math is like playing horseshoes?

Marko DeBeeste (3, Funny)

Marko DeBeeste (761376) | about 6 years ago | (#24090321)

Larrabee is the Chief's cousin

Re:Marko DeBeeste (3, Informative)

sconeu (64226) | about 6 years ago | (#24090595)

I can't believe it took this long for someone to find the "Get Smart!" reference.

Would you believe.... 39 posts?
How about 20?

How about one FRIST POST and an In Soviet Russia?

This may be the ultimate victory... (1)

TransEurope (889206) | about 6 years ago | (#24090373)

... of the A20 gate!

how exciting! (-1, Troll)

Anonymous Coward | about 6 years ago | (#24090383)

I seriously don't care about Larrabee or what the hell kind of architecture it has. It is going to be a pile of EPIC FAIL just like any of their other GPUs.

Interesting choice... (2, Interesting)

Antony T Curtis (89990) | about 6 years ago | (#24090593)

If anyone remembers those old original Pentiums, their 16-bit processing sucked - so much that a similarly clocked 486 could outperform them. I guess it would be reasonably trivial for Intel to slice off the 16-bit microcode on this old chip to make a "pure" 32-bit-only processor. I am sure they will be using the designs with a working FPU... but for many visual operations, occasional maths errors would largely go unnoticed. Remember when some graphics chip vendors were cheating on benchmarks by reducing quality... and how long it took for people to notice?

Although, if I had Intel's resources and was designing a 32-core CPU, I would probably choose the core from the later 486 chips... I don't think a graphics pipeline processor would benefit much from the Pentium's dual instruction pipelines, and I doubt it would be worth the silicon real estate. The 486 has all the same important instructions useful for multi-core work - the CMPXCHG instruction debuted on the 486.

Re:Interesting choice... (1)

Pinback (80041) | about 6 years ago | (#24091083)

Yup, it's confirmed. We're getting 32 i960 cores in one chip. Dust off those floating-point-on-integer libraries.

That isn't a graphics card, it's 32 LaserJet brains on one card.

Marketing Math (3, Insightful)

fpgaprogrammer (1086859) | about 6 years ago | (#24090617)

From TFA "Heise also claims that the cores will feature a 512-bit wide SIMD (single input, multiple data) vector processing unit. The site calculates that 32 such cores at 2GHz could make for a massive total of 2TFLOPS of processing power."

I don't see how they get to 2 TFLops.

512-bit = 64 bit * 8 way SIMD or 32 bit * 16 way SIMD. Let's go with the bigger of these two and say we are performing 16 single Floating point operations per clock-cycle per core. 16 operations per clock-core * 32 cores * 2 Billion clocks per second = 1024 Single Precision GFlops. It looks more like 512 Double Precision GFlops for 300 Watts which means a DP Teraflop on Larabee will cost you 513 Dollars a Year [google.com] at 10 cents/kWH. If we're considering single precision, we can cut this in half to 257 dollars per years per single precision teraflop.

Compare to Clearspeed which offers 66 DP GFLops at 25 Watts costing 332 dollars [google.com] for a sustained DP teraflop for a year.

even the NVidia Tesla has better performance at single precision: you can buy 4 SP TFlops consuming only 700W or 5.7 GFLops/Watt, for an annual power budget of 153 dollars [google.com] .
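All of the above cost arithmetic in one place, as a sketch (every input is a figure quoted in this comment, not a measured number):

    #include <stdio.h>

    /* Dollars per year to feed one sustained TFLOP at 10 cents/kWh. */
    static double usd_per_tflop_year(double watts, double tflops) {
        return watts / 1000.0 * 24 * 365 * 0.10 / tflops;
    }

    int main(void) {
        printf("Larrabee DP:   $%.0f\n", usd_per_tflop_year(300.0, 0.512)); /* ~513 */
        printf("Larrabee SP:   $%.0f\n", usd_per_tflop_year(300.0, 1.024)); /* ~257 */
        printf("ClearSpeed DP: $%.0f\n", usd_per_tflop_year(25.0, 0.066));  /* ~332 */
        printf("Tesla SP:      $%.0f\n", usd_per_tflop_year(700.0, 4.0));   /* ~153 */
        return 0;
    }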

Re:Marketing Math (1)

hattig (47930) | about 6 years ago | (#24090979)

And with the nVidia or ATI/AMD option, you can get it now.

Not "late 2009 or maybe 2010" like Larrabee.

Both companies' current cards do 1 TFLOP or more in single-precision mode. By 2010 they'll have doubled that.

In addition, assuming it will run at 2GHz is another massive leap. A single Silverthorne core consumes a couple of watts at 1.6GHz, and that's without a 512-bit SIMD unit attached to it, which may or may not run at that speed and will consume quite a bit of power itself.

Intel might get ahead if they can run graphics on X cores, physics on Y cores, and other GPGPU work on Z cores at the same time - I believe that both AMD's and nVidia's designs have context-switching issues in this regard right now. Both also have dedicated extra hardware for graphics support, so I presume Intel will have that extra non-generic functionality as well...

Oh, and regarding graphics, does anyone here trust Intel's driver team to come up with working drivers for a new architecture straight away? It usually takes them a year after hardware release. Don't hold your breath.

Re:Marketing Math (2, Insightful)

David Greene (463) | about 6 years ago | (#24091279)

I don't see how they get to 2 TFLOPS. 512-bit = 64-bit * 8-way SIMD, or 32-bit * 16-way SIMD. Let's go with the bigger of these two and say we are performing 16 single-precision floating point operations per clock cycle per core. 16 operations per clock per core * 32 cores * 2 billion clocks per second = 1024 single-precision GFLOPS.

Most likely there is a muladd unit, which would double the peak FLOPS: a multiply-add counts as two floating point operations, so 1024 GFLOPS becomes roughly the quoted 2 TFLOPS.

correction, 31.874582034 cores (0, Redundant)

swschrad (312009) | about 6 years ago | (#24090669)

our precise calculations at Intel suggest that partial core technology has great potential.

Uh, isn't that true of the Core CPUs too? (0)

Anonymous Coward | about 6 years ago | (#24090745)

I get the feeling this is supposed to be shocking news, but I must be missing something important. Isn't the Core microarchitecture also based on the original Pentium? I mean, I thought it was a redesign of the Pentium M series which was derived from the Pentium III which evolved from the Pentium II...and we know where that came from.

Larabee supposedly has 32 cores (1)

scourfish (573542) | about 6 years ago | (#24090989)

But when I run CPU-Z on the system, it only reports 31.33374 cores

Re:Larabee supposedly has 32 cores (1)

scourfish (573542) | about 6 years ago | (#24090999)

And my processor has spelling errors

Why does intel keep re-using past designs... (1)

hyperz69 (1226464) | about 6 years ago | (#24091045)

First the Core tech was based off the pre-NetBurst architecture, and now this. In 5 years Intel will announce a 4096-core 80386 for your sound card or something. ;P

Re:Why does intel keep re-using past designs... (1)

Quattro Vezina (714892) | about 6 years ago | (#24091273)

That's because NetBurst was architecturally inferior to even the original P5 Pentium. If it were possible to overclock a 486 to 3+ GHz, it would perform about the same as a NetBurst chip.

The older technology was better in every way.

Compare with Niagara 2 and 3, and Cell (3, Interesting)

hattig (47930) | about 6 years ago | (#24091137)

Right. It clearly isn't using the Pentium design, but a Pentium-like design.

To that, they will have added SMT, because (a) in-order designs adapt to SMT well because they have a lot of pipeline bubbles and (b) there will be a lot of latency in the memory system and SMT helps hide that. I would assume 4 way SMT, but maybe 8. Larrabee will therefore support 128 or 256 hardware threads. nVidia's GT280 supports 768.

The closest chip I can think of right now is Sun's Niagara and Niagara 2 processors, except with a really beefy SIMD unit on each core, and a large number of cores on the die because of 45nm. I think Niagara 3 is going to be a 16 core device with 8 threads/core, can anyone confirm?

Note that this is pretty much what Sony wanted with Cell, but Cell was 2 process shrinks too early. 45nm PowerXCell32 will have 32 SPUs and 2 PPUs (whereas Larrabee looks like it is matching an equivalent of a weak-PPU with each SPU equivalent). It could run at 5GHz too... power/cooling notwithstanding.

I already thought of this.. (2, Interesting)

greywire (78262) | about 6 years ago | (#24091247)

At least 20 years ago, I thought: hey, with the density and speed of transistors these days, and with RISC being popular, why not go all the way and make a chip with literally hundreds of (wait for it..) Z80 CPUs?

Of course, I and others dismissed the idea as being just slightly ludicrous. But then, at the time, I also thought there would eventually be Amiga emulators and interpreted versions of the C language, and I was called crazy for thinking that too...

Why Not 486's (1)

Nom du Keyboard (633989) | about 6 years ago | (#24091293)

Why not 486 cores? Then you could put 4X as many of them on your die. They already include integral FP and 1 op/cycle for most instructions.

bugs aplenty (1)

SendBot (29932) | about 6 years ago | (#24091335)

Ha! Anyone remember the f00f bug [wikipedia.org]?

I learned how to embed machine code into C and ran amok halting university systems with that for a little while.

Or what about that floating point bug [wikipedia.org]?

FDIV Errata (1)

Nom du Keyboard (633989) | about 6 years ago | (#24091467)

Will it include the FDIV bug X32?