
Intel Shows Off 80-core Processor

Zonk posted more than 6 years ago | from the next-up-a-skillion-core-system dept.


thejakebrain writes "Intel has built its 80-core processor as part of a research project, but don't expect it on your desktop any time soon. The company's CTO, Justin Rattner, held a demonstration of the chip for a group of reporters last week. Intel will be presenting a paper on the project at the International Solid State Circuits Conference in San Francisco this week. 'The chip is capable of producing 1 trillion floating-point operations per second, known as a teraflop. That's a level of performance that required 2,500 square feet of large computers a decade ago. Intel first disclosed it had built a prototype 80-core processor during last fall's Intel Developer Forum, when CEO Paul Otellini promised to deliver the chip within five years.'" Update: 06/01 14:37 GMT by Z : This article is about four months old. We discussed this briefly last year, but search didn't show that we had discussed it in February.


But... (2, Funny)

Anonymous Coward | more than 6 years ago | (#19351077)

Does it run Linux?

cue (5, Funny)

russ1337 (938915) | more than 6 years ago | (#19351097)

Cue the 'needed to run Vista' jokes....

Re:cue (2, Funny)

NickCatal (865805) | more than 6 years ago | (#19351121)

Finally I can realize my dream of playing 500 instances of Quake 3 on one machine!

Re:cue (0)

jellomizer (103300) | more than 6 years ago | (#19351171)

Unfortunately, Vista is not designed to be parallel enough to really handle it well; you may get the performance of an 8-core system (running Vista on a Mac Pro via Boot Camp), but above that I doubt you will see a speed improvement. Other applications, if designed correctly, will probably see better performance, though, if they take advantage of, say, 64 cores and leave the rest for the OS and other apps.

Re:cue (1, Insightful)

benzapp (464105) | more than 6 years ago | (#19352071)

Do you have any evidence of this? Are you saying that if I write an application to execute 16 threads simultaneously on 16 different processors on a machine running Vista, that application will not see any speed increase over running that 16-thread application on an 8-processor machine?

Why would this be? And what is with the mac pro nonsense? Do you really think only apple makes 8 core machines?

Re:cue (0, Flamebait)

Ngarrang (1023425) | more than 6 years ago | (#19351699)

This 80-core CPU might explain why Micro$oft wants to rewrite the Windows kernel to be more threaded. New instruction set, new OS. Old apps would run in a VM.

And, we may be seeing the processor for the XBox 720. Or XBox 1040. Micro$oft has always shown a preference for Intel, even though they have supported AMD chip extensions as well.

Re:cue (0)

Anonymous Coward | more than 6 years ago | (#19351789)

> And, we may be seeing the processor for the XBox 720. Or XBox 1040.

I'd like to see it for the XBox 1040-EZ.

Re:cue (1)

saboola (655522) | more than 6 years ago | (#19351841)

The Xbox 360 uses a triple-core PowerPC-based CPU; what makes you think they would go back to Intel?

Re:cue (1, Flamebait)

homeobocks (744469) | more than 6 years ago | (#19351925)

Oh, I get it! You spell "Microsoft" with a "$" replacing the "s" because Microsoft likes money! Then you write some shallow technical-sounding drivel around it to legitimize your flagrant adolescent fanboyism as Slashdot's trademark pseudo-intellectual circle-jerk! Clever!

Re:cue (2, Funny)

Ngarrang (1023425) | more than 6 years ago | (#19351977)

Oh, I get it! You spell "Microsoft" with a "$" replacing the "s" because Microsoft likes money! Then you write some shallow technical-sounding drivel around it to legitimize your flagrant adolescent fanboyism as Slashdot's trademark pseudo-intellectual circle-jerk! Clever!
Karma was meant to be burned, not whored.

Re:cue (1, Funny)

Anonymous Coward | more than 6 years ago | (#19352345)

1040? How'd you come up with that number?

Re:cue (1)

kimvette (919543) | more than 6 years ago | (#19352339)

Considering how slow Vista is compared to Win2K and XP, perhaps in this case you meant "queue?"

My ROFLcore processor goes.. (1)

GonzoTech (613147) | more than 6 years ago | (#19351101)

My ROFLcore processor goes soi soi soi soi soi soi soi soi.. and allows me to 40 box world of warcraft.

Older Story (2, Informative)

Maddog Batty (112434) | more than 6 years ago | (#19351105)

Older story on this here: http://hardware.slashdot.org/article.pl?sid=06/09/26/1937237 [slashdot.org]

Sure would be nice to have a play with it once they have worked out how to program it...

Re:Older Story (1)

ajanp (1083247) | more than 6 years ago | (#19351767)

Oh snap, you just slashdotted the slashdotters.

Spidey Sense tingles. I sense a conspiracy involving the slashdot admins posting a dupe of an old slashdot article because Intel dishes out the $$ for all those ads. Follow the example of your digg brethren and revolt! REVOLT before it's too late!!!!

Whatever... (0)

Anonymous Coward | more than 6 years ago | (#19351111)

80 times the bullshit SPEC scores the computing world will look forward to from Intel.

IA64 (4, Insightful)

AvitarX (172628) | more than 6 years ago | (#19351119)

I remember when IA64 was the next huge supercomputer on a chip 5 years off.

It didn't work out too well for Intel.

Re:IA64 (5, Funny)

ciroknight (601098) | more than 6 years ago | (#19351253)

I remember when Pentium was the next huge chip from Intel that was a few years off.

I guess we all know how that one turned out.

Re:IA64 (1)

drinkypoo (153816) | more than 6 years ago | (#19351693)

Also Pentium IV - piece of shit, still pretty damned fast, sold like mad. Sometimes Intel's failures are great successes.

It may be known as "a teraflop", but... (5, Informative)

91degrees (207121) | more than 6 years ago | (#19351123)

It's known incorrectly.

The measurement is "FLOPS". Floating Point Operations Per Second. It's an acronym. The 'S' is part of the acronym. Hence even if you only have one of them, it's still a FLOPS. And it's capitalised.

Strictly speaking it should be "trillion FLOPS" as well, since it's not an SI unit, but my pedantry is limited.

Re:It may be known as "a teraflop", but... (1)

peterpi (585134) | more than 6 years ago | (#19351173)

A trillion flopses?

Re:It may be known as "a teraflop", but... (5, Funny)

KIFulgore (972701) | more than 6 years ago | (#19351219)

Trickses flopses... precious...

One flop is enough for me... (1)

drgonzo59 (747139) | more than 6 years ago | (#19351227)

Itanic, anyone? [wikipedia.org]

Re:It may be known as "a teraflop", but... (5, Insightful)

Anonymous Coward | more than 6 years ago | (#19351191)

The measurement is "FLOPS". Floating Point Operations Per Second.
But that spells "FPOPS".

If we're going to be speaking strictly, get it right:

FLoating point Operations Per Second

Re:It may be known as "a teraflop", but... (0)

Anonymous Coward | more than 6 years ago | (#19352335)

Almost, but not quite right.

FLoating point OPeration(s) per second

a single FLOP
or multiple FLOPS

You are right, but... (0)

Anonymous Coward | more than 6 years ago | (#19351711)

You'll sound like a complete dolt talking about the new Intel processor that can reach "one teraflops". The letter 's' denotes a plural in many, many Latin-derived languages, so strictly speaking the real mistake was made by whoever originally coined the acronym.

It's not unlike the .gif thing. The public isn't wrong for pronouncing it with a hard 'g'. The creator was wrong for failing to align the pronunciation with the spelling.

Re:You are right, but... (1)

91degrees (207121) | more than 6 years ago | (#19352051)

Well, I don't think there's ever been an electronic computer that took more than a second to manage a floating point operation. Hence, just say how many FLOPS you have. A trillion flops makes more sense to more people anyway. Not everyone knows the SI prefixes. Outside of electronics, even the Mega prefix is unusual.

Re:It may be known as "a teraflop", but... (1)

drinkypoo (153816) | more than 6 years ago | (#19351793)

Wow, you got modded up. See if you're on a roll by next explaining the difference between Mebibytes and Megabytes :D

Re:It may be known as "a teraflop", but... (0)

Anonymous Coward | more than 6 years ago | (#19351853)

FLOP = Floating Point OPeration

In English, we don't actually capitalise acronyms; capitalisation is for non-acronym initial-letter abbreviations. Laser == acronym; AMD != acronym. By the way, good on you for spelling "capitalise" with an S.

Teraflop = trillion flops. "Teraflop" is collectively a singular.

Teraflop per second is perfectly valid. A teraflop CPU runs at a teraflop per second in the same way that a gigabit network card runs at a gigabit per second.

Furthermore, SI quantifiers are allowed to be used on non-SI units.

See, unlike 91degrees, my pedantry is not limited ;)

Correction (0)

Anonymous Coward | more than 6 years ago | (#19352069)

Oops - what I meant to say was FLOP = FLoating-point OPeration.

Re:It may be known as "a teraflop", but... (0, Redundant)

johnw (3725) | more than 6 years ago | (#19352067)

The measurement is "FLOPS". Floating Point Operations Per Second. It's an acronym.
If we're going to be pedantic, that would be FPOPS.

Re:It may be known as "a teraflop", but... (1)

jagilbertvt (447707) | more than 6 years ago | (#19352319)

Funny how you capitalized FPOPS and not FLOPS. It's really FLoating Point Operations Per Second

beowulf cluster? (4, Funny)

Anonymous Coward | more than 6 years ago | (#19351127)

can you imagine.

mod parent up (0)

Anonymous Coward | more than 6 years ago | (#19351273)

mod parent up

Re:beowulf cluster? (5, Funny)

jollyreaper (513215) | more than 6 years ago | (#19351477)

beowulf cluster?

can you imagine.
Yeah, man. Or what if Intel codenamed their next processor Beowulf? *inhales, holds breath, exhales slowly, smoke twisting lazily* Can you imagine a Beowulf cluster of Beowulfs or did I just blow your mind?

Re:beowulf cluster? (0)

Anonymous Coward | more than 6 years ago | (#19352085)

can you imagine.
Yeah. Put Vista on that and you'd have a beowulf clusterfuck!

Cell? (0)

Anonymous Coward | more than 6 years ago | (#19351133)

Intel's research chip has 80 cores, or "tiles," Rattner said. Each tile has a computing element and a router, allowing it to crunch data individually and transport that data to neighboring tiles.

That "multiple non-general-purpose cores" approach sounds a lot like the Cell design to me.

Deja vu all over again (3, Insightful)

$RANDOMLUSER (804576) | more than 6 years ago | (#19351135)

"Intel CEO promises to deliver magical new uber processor within five years".

Stop me if you've heard this one before...

OK: "Stop, for the love of God!" (1)

drgonzo59 (747139) | more than 6 years ago | (#19351257)

Itanium [wikipedia.org]

Re:Deja vu all over again (3, Funny)

Fozzyuw (950608) | more than 6 years ago | (#19351295)

"Intel CEO promises to deliver magical new uber processor within five years".

Great, now I have to wait five years before I buy my next computer, because nothing else will compare unless it's got 80 cores. My dual core looks soooo small now. =(

Re:Deja vu all over again (1)

Hoi Polloi (522990) | more than 6 years ago | (#19352121)

Will this save me from getting owned in online games? 50000 FPS should do the trick.

Re:Deja vu all over again (0)

Anonymous Coward | more than 6 years ago | (#19352219)

Never mind, I've heard it will come with a copy of Duke Nukem Forever and Windows CAIRO!

core 2 duo has a higher transistor density? (2, Insightful)

doombringerltx (1109389) | more than 6 years ago | (#19351137)

Intel used 100 million transistors on the chip, which measures 275 millimeters squared. By comparison, its Core 2 Duo chip uses 291 million transistors and measures 143 millimeters squared.
Maybe it's just because I haven't had my morning coffee yet, but is that a typo?

Re:core 2 duo has a higher transistor density? (2, Insightful)

Andy Dodd (701) | more than 6 years ago | (#19351157)

80 cores means there are probably quite a lot of on-chip interconnects between the cores.

Re:core 2 duo has a higher transistor density? (3, Informative)

imsabbel (611519) | more than 6 years ago | (#19351349)

Core 2 has 2 or 4 MByte of L2 cache. One bit of cache is 6 transistors, so more than 200 million of those 291 million transistors are high-density cache. (The density of cache is a lot higher than that of logic, which the 80-core CPU is almost solely made of.)

(Btw, I fucking HATE the "millimeters squared" expression. It's square millimeters. 275 mm squared would be over 750 cm^2.)
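The parent's arithmetic checks out; a quick sketch, assuming the classic 6-transistor SRAM cell and the 4 MB cache variant (a reply in this thread notes Intel's actual cell may differ):

```python
# Transistor budget of a 4 MB L2 cache built from classic 6T SRAM cells.
cache_bytes = 4 * 2**20              # 4 MByte variant of Core 2
cache_bits = cache_bytes * 8
transistors = cache_bits * 6         # 6 transistors per bit cell
print(transistors)                   # 201326592: over 200 of the 291 million

# The units gripe: "275 millimeters squared" read literally is (275 mm)^2.
print((275 / 10) ** 2)               # 756.25 cm^2, vs. a 275 mm^2 die
```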

Re:core 2 duo has a higher transistor density? (1)

DrDitto (962751) | more than 6 years ago | (#19351935)

Intel does not use the classic 6-transistor SRAM cell. Their SRAM technology is cutting-edge and a couple generations beyond everybody else.

It's possible... (3, Informative)

mbessey (304651) | more than 6 years ago | (#19351849)

I thought that was a little weird, too. But the 80-core chip could simply have more wires (and therefore, fewer transistors). Given that they mention that there are routing elements between the cores, it's possible that a lot of the chip's real estate is taken up by massive busses between adjacent cores.

Another explanation might be that they didn't want to waste the time/expense to come up with an optimized layout, or that they intentionally spaced things out to make testing easier.

Comparison? (0)

Anonymous Coward | more than 6 years ago | (#19351141)

That's a level of performance that required 2,500 square feet of large computers a decade ago..
Why compare this to 10 year old technology?

Re:Comparison? (1)

$RANDOMLUSER (804576) | more than 6 years ago | (#19351229)

How many Libraries of Congress is that?

Frak everything, we're doing 80 blades (5, Funny)

jollyreaper (513215) | more than 6 years ago | (#19351143)

I'm sorry but when I see these competitions I always come back to this Onion piece. A classic.

http://www.theonion.com/content/node/33930 [theonion.com]

Fuck Everything, We're Doing Five Blades

By James M. Kilts
CEO and President,
The Gillette Company

Would someone tell me how this happened? We were the fucking vanguard of shaving in this country. The Gillette Mach3 was the razor to own. Then the other guy came out with a three-blade razor. Were we scared? Hell, no. Because we hit back with a little thing called the Mach3Turbo. That's three blades and an aloe strip. For moisture. But you know what happened next? Shut up, I'm telling you what happened--the bastards went to four blades. Now we're standing around with our cocks in our hands, selling three blades and a strip. Moisture or no, suddenly we're the chumps. Well, fuck it. We're going to five blades.

Sure, we could go to four blades next, like the competition. That seems like the logical thing to do. After all, three worked out pretty well, and four is the next number after three. So let's play it safe. Let's make a thicker aloe strip and call it the Mach3SuperTurbo. Why innovate when we can follow? Oh, I know why: Because we're a business, that's why!

You think it's crazy? It is crazy. But I don't give a shit. From now on, we're the ones who have the edge in the multi-blade game. Are they the best a man can get? Fuck, no. Gillette is the best a man can get.

What part of this don't you understand? If two blades is good, and three blades is better, obviously five blades would make us the best fucking razor that ever existed. Comprende? We didn't claw our way to the top of the razor game by clinging to the two-blade industry standard. We got here by taking chances. Well, five blades is the biggest chance of all.

Here's the report from Engineering. Someone put it in the bathroom: I want to wipe my ass with it. They don't tell me what to invent--I tell them. And I'm telling them to stick two more blades in there. I don't care how. Make the blades so thin they're invisible. Put some on the handle. I don't care if they have to cram the fifth blade in perpendicular to the other four, just do it!

You're taking the "safety" part of "safety razor" too literally, grandma. Cut the strings and soar. Let's hit it. Let's roll. This is our chance to make razor history. Let's dream big. All you have to do is say that five blades can happen, and it will happen. If you aren't on board, then fuck you. And if you're on the board, then fuck you and your father. Hey, if I'm the only one who'll take risks, I'm sure as hell happy to hog all the glory when the five-blade razor becomes the shaving tool for the U.S. of "this is how we shave now" A.

People said we couldn't go to three. It'll cost a fortune to manufacture, they said. Well, we did it. Now some egghead in a lab is screaming "Five's crazy?" Well, perhaps he'd be more comfortable in the labs at Norelco, working on fucking electrics. Rotary blades, my white ass!

Maybe I'm wrong. Maybe we should just ride in Bic's wake and make pens. Ha! Not on your fucking life! The day I shadow a penny-ante outfit like Bic is the day I leave the razor game for good, and that won't happen until the day I die!

The market? Listen, we make the market. All we have to do is put her out there with a little jingle. It's as easy as, "Hey, shaving with anything less than five blades is like scraping your beard off with a dull hatchet." Or "You'll be so smooth, I could snort lines off of your chin." Try "Your neck is going to be so friggin' soft, someone's gonna walk up and tie a goddamn Cub Scout kerchief under it."

I know what you're thinking now: What'll people say? Mew mew mew. Oh, no, what will people say?! Grow the fuck up. When you're on top, people talk. That's the price you pay for being on top. Which Gillette is, always has been, and forever shall be, Amen, five blades, sweet Jesus in heaven.

Stop. I just had a stroke of genius. Are you ready? Open your mouth, baby birds, cause Mama's about to drop you one sweet, fat nightcrawler. Here she comes: Put another aloe strip on that fucker, too. That's right. Five blades, two strips, and make the second one lather. You heard me--the second strip lathers. It's a whole new way to think about shaving. Don't question it. Don't say a word. Just key the music, and call the chorus girls, because we're on the edge--the razor's edge--and I feel like dancing.

Re:Frak everything, we're doing 80 blades (1)

grev (974855) | more than 6 years ago | (#19351263)

Not even close.

Adding more blades to a razor doesn't have much of an effect on "performance", but increasing the core count on a processor theoretically raises processing power exponentially.

Re:Frak everything, we're doing 80 blades (1)

Evangelion (2145) | more than 6 years ago | (#19351343)


I think you mean theoretically linearly, and in practice logarithmically.
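Concretely, Amdahl's law gives the ideal ceiling: if a fraction p of the work parallelizes, n cores yield a speedup of 1/((1-p) + p/n). A sketch; the 90% parallel fraction is an illustrative assumption, not a measured figure:

```python
def amdahl_speedup(n_cores, parallel_fraction=0.9):
    """Ideal speedup when parallel_fraction of the work scales with cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

for n in (1, 2, 8, 80):
    print(n, round(amdahl_speedup(n), 2))
# 80 cores give only ~9x here; the limit as n grows is 1/(1-0.9) = 10x
```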

Re:Frak everything, we're doing 80 blades (0)

Anonymous Coward | more than 6 years ago | (#19351359)

You humorless bastard!

Exponential (1)

drgonzo59 (747139) | more than 6 years ago | (#19351373)

O RLY?

Add 2 cores, you get 4x TFlops, add 4 -- get 16x, add 8 you get 256x TFlops. Why stop there, add 80 you get 2^80 = 1208925819614629174706176x more TFlops. Heck, add 1000 cores and you can simulate the universe! Well, I am putting my life's savings into Intel stock!

Re:Frak everything, we're doing 80 blades (1)

jollyreaper (513215) | more than 6 years ago | (#19351379)

Not even close.

Adding more blades to a razor doesn't have much of an effect on "performance", but increasing the core count on a processor theoretically raises processing power exponentially.
I'll wait for the benchmarks on finished products. Remember how Intel drove the whole "more MHz = faster computer" thing for the longest time? Engineering will try to explain how things work to Marketing, but they'll just run with what they can grasp, even if they don't have a good grip. As I understand it, additional cores help for some applications but don't really help for others. Sorry, I've just been bullshitted by marketing types too many times to take the whole "80 cores" thing as an automatic good.

Re:Frak everything, we're doing 80 blades (1)

avronius (689343) | more than 6 years ago | (#19351429)

The onion article was about competition, not performance. It was a satiric look at how one company feels pressured to innovate in an arena where continuously adding *features* will not have a significant impact on the performance of the product. You can't simply substitute Intel for Gillette and assume that the remainder of the article would mesh up.

It was an interesting parody. Perhaps not entirely on topic, but related.

And it was freakin' hilarious.

Re:Frak everything, we're doing 80 blades (0)

Anonymous Coward | more than 6 years ago | (#19351285)

Man, that's the funniest thing I've read in ages. Thanks for brightening my day.

Re:Frak everything, we're doing 80 blades (1)

Peet42 (904274) | more than 6 years ago | (#19351655)

Ditto. :-)

Re:Frak everything, we're doing 80 blades (1)

liquidpele (663430) | more than 6 years ago | (#19351613)

Haha....
You'll love this then.
octaginator 8 blades [youtube.com]

-Reece

Re:Frak everything, we're doing 80 blades (0)

Anonymous Coward | more than 6 years ago | (#19351963)

My razor has something like 80 cutting blades; however, it is split into 3 rotating discs, so I still win.

Gillette Fusion (1)

The New Andy (873493) | more than 6 years ago | (#19351741)

The onion is wrong - they actually went to 6 blades. And no I'm not kidding [gillettefusion.com] . Yes, I am bending the truth a bit (the 6th blade is on the back of the razor, to try to slice your hand when you change the blades)

Re:Frak everything, we're doing 80 blades (0)

Anonymous Coward | more than 6 years ago | (#19351753)

Re:Frak everything, we're doing 80 blades (2, Insightful)

jollyreaper (513215) | more than 6 years ago | (#19351949)

I find this even funnier:

http://money.cnn.com/2005/09/14/news/fortune500/gillette/ [cnn.com]
Well shit, they should just rename the Onion to The Daily Prophet. Remember that little bit they did about Bush after the first time he was (s)elected, "Our Long National Nightmare of Peace and Prosperity Is Over?" http://www.godlessgeeks.com/BushNightmare.htm [godlessgeeks.com] Here it is with links to all the jokes that came true. Shit!

No x86 (1)

Ramble (940291) | more than 6 years ago | (#19351159)

It's hard for me to be too impressed; with a specialised chip you can do almost anything. This isn't different from the claims from a small company that they can make chips run at 10GHz, oh, but it's not x86.

Build an x86 prototype and I'll worship at the altar of Intel for years to come.

Re:No x86 (0)

Anonymous Coward | more than 6 years ago | (#19351545)

Modern chips are usually RISC machines with x86 emulation hardware.

Re:No x86 (1)

drinkypoo (153816) | more than 6 years ago | (#19351891)

this isn't different from the claims from a small company that they can make chips run at 10GHz, oh, but it's not x86.

And that is somehow supposed to be a bad thing?

Not only a dupe... but of an old story (5, Insightful)

CajunArson (465943) | more than 6 years ago | (#19351167)

It must be a really slow news day. From the dateline:

Published: February 11, 2007

Not to mention that Slashdot (even Zonk) Covered this LAST YEAR [slashdot.org] .
But that's OK, I'm sure Slashdot gave insightful and cogent coverage of real events that actually matter to geeks on this site, you know, like the release of a new major version of GCC [gnu.org]
Oh wait.... that (like a bunch of other actually interesting stories) would be in the aptly-named, sir not appearing on this website category due to it not making enough banner revenue.

Re:Not only a dupe... but of an old story (1)

Valtor (34080) | more than 6 years ago | (#19351687)

An important new feature of GCC 4.2 is its support of OpenMP [openmp.org]. In this age of multi-core CPUs, this is a must. It's even supported in MS Visual Studio 2005. OpenMP is the way to go IMHO; if you don't know yet what it is, you've got to check it out...
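OpenMP itself is a set of compiler directives for C, C++ and Fortran (e.g. `#pragma omp parallel for`). As a rough stdlib-Python analogue of the same fork-join, parallel-loop shape (the function names here are illustrative, not from any post above):

```python
# The fork-join pattern that OpenMP's "parallel for" expresses.
# (CPython threads only overlap I/O-bound work because of the GIL;
# OpenMP in C/C++ gets true CPU parallelism from this same structure.)
from concurrent.futures import ThreadPoolExecutor

def body(i):
    return i * i            # the loop body, run across worker threads

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(body, range(8)))   # fork, map, join

print(results)              # [0, 1, 4, 9, 16, 25, 36, 49]
```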

all that aside, The monitor on the last pic? wtf? (1)

tehtest (995812) | more than 6 years ago | (#19352173)

Gee, I'm showing CUTTING EDGE STATE OF THE ART TECHNOLOGY! on a POS dell monitor that has to be 3 years old? WTF?

Imagine a... (5, Funny)

simong (32944) | more than 6 years ago | (#19351181)

oh.

Re:Imagine a... (1)

asCii88 (1017788) | more than 6 years ago | (#19352115)

Someone please explain to me why this is +5 funny.

AMD's response (3, Insightful)

drgonzo59 (747139) | more than 6 years ago | (#19351201)

This is a nice move by Intel. I wonder what AMD's plans are...81 cores?

Besides, with most software being single-threaded, I don't know if a consumer will immediately need more than 4 cores for a while. I can still see software companies trying to come up with ways to keep all 80 cores busy... "Well, they need at least 20 anti-virus processes, 10 genuine advantage monitors, and we'll install 100 shareware applications with cute little icons in the task bar by default. There, that should keep all the cores nice and warm and busy -- our job is done!"

But in all seriousness, I would expect some extremely realistic environmental physical simulations (realtime large n-body interactions and perhaps realtime computational fluid dynamics)...now that's something to look forward to!

Re:AMD's response (1)

mstahl (701501) | more than 6 years ago | (#19351355)

I think one of the possible uses for an 80-core CPU that nobody's really talked about is multiple redundancy. If one core should somehow get fried, you have 79 left.

Re:AMD's response (1)

drgonzo59 (747139) | more than 6 years ago | (#19351457)

If that's the case, might as well connect all unused heatsinks to a griddle and I'll fry my bacon and eggs in the morning on them. If some of them burn up or get too hot it's "ok", I got 60 others waiting...

Or... you could just have two and order a CPU to be delivered when one burns up. In fact that's what happens on some mainframes. If a part fries, the machine calls "home" and the company will send a replacement immediately. Sometimes the administrator will find out something went bad only when the replacement part already arrived at the door.

Re:AMD's response (1)

ciroknight (601098) | more than 6 years ago | (#19351459)

AMD's response was buying ATi in order to work on their future chip, "Fusion", which will incorporate somehow a GPU-type accelerator on-die or at least in-package with a traditional x86 CPU.

GPUs already have "many cores", if you can really call them that: 16 ROPs, 80 texture units, 64 shader cores, etc. Intel's approach is a much simpler architecture (in fact, "too simple" right now, the cores are practically feature-less), but DAAMIT's approach makes more business sense (re-use what we've already got vs. invent something new).

Re:AMD's response (1)

drgonzo59 (747139) | more than 6 years ago | (#19351593)

That makes sense. GPUs sometimes have a higher transistor count than CPUs...

The problem with Fusion is if they kill the add-on graphics and you just buy one Fusion processor that costs, say, $400 to plug into the CPU slot. In the meantime, NVIDIA releases their new generation board and Intel releases a new generation CPU. The consumer can choose to upgrade one or the other or both, but an AMD customer is stuck with just one expensive part and would have to upgrade it as one piece.

Perhaps in the future the CPU, the graphics card and the memory will all be on one giant module. You get high performance but not the ability to customize individual components.

Ob (2, Funny)

rlp (11898) | more than 6 years ago | (#19351301)

In Soviet Russia, Intel's 80 core processor imagines a Beowulf cluster of you!

Re: Ob (1)

FST777 (913657) | more than 6 years ago | (#19351585)

But does it run...

ah, to hell with all these "obs"!

Re:Ob (1)

jjsavage (1075197) | more than 6 years ago | (#19352207)

After a grueling 15 seconds of Googling, I can't find out what 'ob' means. Is it 'obscure'?

100 cores on the chip on the wall... (1)

Earl The Squirrel (463078) | more than 6 years ago | (#19351317)

I wonder how long before our kids are singing,

100 cores on the chip on the wall, 100 cores on the chip
take one down, pass it around
99 cores on the chip on the wall....

ummm.... (0, Redundant)

mstahl (701501) | more than 6 years ago | (#19351331)

Imagine a beowulf cluster of these?

desktop version (1)

Ep0xi (1093943) | more than 6 years ago | (#19351375)

maybe in a hundred years i could afford an 80 core Celeron with the HUGE amount of 1Mb L2 cache

teraflop_s_ (1)

Lobais (743851) | more than 6 years ago | (#19351401)

'The chip is capable of producing 1 trillion floating-point operations per second, known as a teraflop'
Should probably be "known as a teraflops".
You don't say two fps, one fp either, right?

For the love of god... (2, Insightful)

tomstdenis (446163) | more than 6 years ago | (#19351409)

and all that is holy on this sacred Earth ...

This isn't a general purpose processor. Think "cell processor" on a larger scale. You wouldn't be running your firefox or text editor on this thing. You'd load it up and have it do things like graphics processing, ray tracing, DSP work, chemical analysis, etc...

So stop saying "we already don't have multi-core software now!!!" because this isn't meant for most software anyways.

Tom

Re:For the love of god... (3, Insightful)

ciroknight (601098) | more than 6 years ago | (#19351561)

"This isn't a general purpose processor."

...Yet. You're right that these cores are incredibly simplistic, so much so that they make DSPs look functional, but really what's going on here is a science project to develop the on-chip network, not so much the CPU cores. Intel envisions lifting the networking component out of this design and applying it to various different cores, so that a general computing core can be mixed in with DSP cores and other "Application Specific Accelerator" cores.

So no, this model you're not going to be running Firefox or your text editor on (in fact, I doubt you even _could_ do this, these cores currently are very, very stripped down in their capacity to do work, to where they're basically two MACs tied to a small SRAM and a "network adapter"), but never-say-never, this style of chip is right around the corner.

Not useful yet.. (4, Interesting)

CockroachMan (1104387) | more than 6 years ago | (#19351471)

It's useless to keep putting more cores into a processor when we still don't have a decent parallel programming paradigm.

80 cores is an absurd number; with the level of parallelism in today's programs, most of the cores would be idle most of the time.

Re:Not useful yet.. (1, Funny)

drinkypoo (153816) | more than 6 years ago | (#19351751)

80 cores is an absurd number; with the level of parallelism in today's programs, most of the cores would be idle most of the time.

Depends on the task. It might be useful for a webserver or something. Besides, most single CPUs are idle most of the time (unless people are running folding@home, or are part of a really complex botnet)

Re:Not useful yet.. (1)

Crazy Taco (1083423) | more than 6 years ago | (#19351759)

In addition, mathematical models show that more than 16 cores actually HURT performance in most applications. This chip would likely only be useful if we had a better parallel programming paradigm (maybe Intel built the chip so they could study that) and a specific application that could use tons of parallel processors. For most applications, 80 cores will probably hurt performance, and at the very least most of them will be idling, wasting power doing NOPs.

Re:Not useful yet.. (1)

ciroknight (601098) | more than 6 years ago | (#19351987)

"For most applications, 80 cores will probably hurt performance, and at the very least will be idling and wasting power doing NOPs most of the time."

We can throw away the whole idea of NOPs with this chip and the technologies we have today. If we're not using a core, simply turn it off; it's just wasting energy, and you can turn it back on pretty damn quickly (~100s of ns) if you need it. Since these cores are virtually cache-less already, there's none of the to-memory penalty that today's enormous cores pay when they drop to lower energy states, and it makes sense to have an on-chip load balancer to make sure no one core gets overtaxed or stays on more than any other.

Lastly, we don't need to change the apps as much as people think we do. For the applications that make sense to parallelize, great, do it. For the ones that don't, don't; it's not going to hurt anything. You still get N.m GHz speeds on your mono-core app, and truthfully, that's likely more than your app will ever need, especially for apps that spend most of their time I/O-bound (word processors on the keyboard, web browsers on the network).

Re:Not useful yet.. (1)

ciroknight (601098) | more than 6 years ago | (#19351765)

I really don't understand this idea that we "can't use multiple cores yet" because we lack some magical, mythical programming model that will make this all come alive instantly. The fact is, we've had the necessary models for decades now. Adding multiple cores doesn't necessarily mean we need to change our programs at all; rather, it means we need to change our operating systems.

To the point: right now, we typically schedule applications in time slices, to virtually stretch one processor across every process we have running. With multi-core, we need to make scheduling more granular and assign better core affinity (to the point where the operating system can address specific cores directly, always running the same application on the same core). Every task gets its own core, and can then use threading (or spawn another process) to request more core time if necessary.
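That kind of explicit core affinity is already exposed to user space on Linux (a minimal sketch; `os.sched_setaffinity` is Linux-only, and core 0 is just an example):

```python
import os

# Pin the current process to core 0 (Linux-specific call;
# the core number here is only an illustration).
os.sched_setaffinity(0, {0})
print(os.sched_getaffinity(0))
```

An OS scheduler doing this per task is the same mechanism, just applied automatically.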

Ordinary desktops have been parallel for a long, long time; we've just hidden it from the users and from the programmers because the timing and scheduling are hideously complex. The whole idea of the operating system was to hide this complexity from users in the first place, and to put it instead on the shoulders of smarter, better software that has been combed over and refined. To the point: our OSes have emulated parallel machines because we didn't have real ones; real-time (or at least near-real-time) multitasking would be impossible otherwise. Now we have parallel machines, and we can stop emulating them, or at least minimize the need to.

Re:Not useful yet.. (0)

Anonymous Coward | more than 6 years ago | (#19352135)

Build it and they will come.

Isn't this just (1)

Colin Smith (2679) | more than 6 years ago | (#19351487)

A transputer [wikipedia.org], which was around in the 80s?

Hmm has it really taken 20 years of research or ... Wonders... 20 years later... Patents...?

 

look closer (1)

icebones (707368) | more than 6 years ago | (#19351537)

The article says that it won't run x86, i.e. no XP/Vista, but if you look very closely at the picture of what they showed at the demo, the XP taskbar appears to be at the bottom of the monitor. Hmmm.

Re:look closer (0)

Anonymous Coward | more than 6 years ago | (#19351913)

XP is probably running on a separate system that is monitoring the CPU, rather than running on the CPU itself.

Re:look closer (1)

CogDissident (951207) | more than 6 years ago | (#19351953)

You do know XP already has a dual-core OS (XP 64-bit edition), and simply expanding it to run on more than 2 cores doesn't sound like too much of a trick. Fully utilizing all of those is a different issue, but getting it to simply "run" a 64-bit program doesn't seem impossible.

Re:look closer (1)

DaveV1.0 (203135) | more than 6 years ago | (#19352223)

One word: Emulation.

Core count competition? (1)

Cctoide (923843) | more than 6 years ago | (#19351775)

No, mine's bigger!

Hey I've got a program it can run (1)

mstahl (701501) | more than 6 years ago | (#19351815)

perl -e 'fork while 1;'


There ya go. Think of it as a benchmark. How long can the 80-core processor run that without dying?

Anyone else think (1)

mandark1967 (630856) | more than 6 years ago | (#19352183)

...the heatsink for this puppy will need a fan using an engine from a V22 Osprey to cool all those cores?

Cool! A Minnie Driver/Anne Hathaway love scene. (1)

Impy the Impiuos Imp (442658) | more than 6 years ago | (#19352263)

"64 processors is enough for anybody!" >:(