
Intel Squeezes 1.8 TFlops Out of One Processor

Hemos posted more than 7 years ago | from the that's-a-lotta-juice dept.

Intel 168

Jagdeep Poonian writes "It appears as though Intel has been able to squeeze 1.8 TFlops out of one processor with a power consumption of 62 watts." The AP version of the story is mostly the same; a more technical examination of TeraScale is also available.


Oblig. (2, Funny)

Anonymous Coward | more than 7 years ago | (#17982206)

Imagine a Beowulf cluster of those!!

Re:Oblig. (5, Interesting)

niconorsk (787297) | more than 7 years ago | (#17982300)

It's quite fun to consider that when the original joke was made, the processing power of that Beowulf cluster would probably have been quite close to the processing power of the processor discussed in the article.

Re:Oblig. (1)

Vandilzer (122962) | more than 7 years ago | (#17982352)

They did RTFA:

"However, considering the fact that just 202 of these 80-core processors could replicate the floating point performance of today's highest performing supercomputer, those power consumption numbers appear even more convincing: The Department of Energy's BlueGene/L system, rated at a peak performance of 367 TFlops, houses 65,536 dual core processors."

Re:Oblig. (2, Interesting)

Anonymous Coward | more than 7 years ago | (#17982684)

It is simply not true that you could replace today's fastest computer with this kind of technology and get the same performance. These new Intel CPUs are really difficult to program efficiently. You would only get good performance on certain problem sets.

Re:Oblig. (2, Interesting)

PitaBred (632671) | more than 7 years ago | (#17985364)

Because it doesn't take special problem sets and programming on the current supercomputers?

Re:Oblig. (1)

utopianfiat (774016) | more than 7 years ago | (#17985312)

Christopher Lloyd was really freaking out about the fact that it required 1.21 Jiggawatts of power.

Both cool and useless for 99% of computing (4, Insightful)

tomstdenis (446163) | more than 7 years ago | (#17982226)

The trick, as with SPEs, is finding ways to use them efficiently in as many tasks as possible.

I'm glad to see Intel is using their size for more than x86 core production though.

Tom

Re:Both cool and useless for 99% of computing (1)

billsoxs (637329) | more than 7 years ago | (#17982332)

I read an article in the morning paper (probably AP) where they said it might not make it out of the development stage. As I understand it, what they have done is add a high-k dielectric to the gate stack, greatly reducing power consumption. So it might still be x86 architecture, but it will do a lot of work for very little power at standard (3 GHz) frequencies.

99% is exaggerated (4, Interesting)

Anonymous Coward | more than 7 years ago | (#17982496)

The first thing that jumped out at me was the presence of MACs. They are the heart of any DSP. So, this chip is good for raw computation, although not necessarily general processing. As other posters have pointed out, this chip could become a very cool GPU. It should also be awesome for encryption and compression. Given that the processor is already an array, it should be a natural for spreadsheets and math programs such as Matlab and Scilab. Having a chip like this in my computer just might obviate the need for a Beowulf cluster. :-)
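For anyone who hasn't written DSP code: a MAC is a multiply-accumulate, and the inner loop of an FIR filter, one multiply-accumulate per tap per output sample, is the canonical use. A minimal C sketch; the function and buffer layout are illustrative, not from the article:

#include <stddef.h>

/* One output sample of an FIR filter: ntaps multiply-accumulates. */
float fir(const float *x, const float *h, size_t ntaps)
{
    float acc = 0.0f;
    for (size_t i = 0; i < ntaps; i++)
        acc += x[i] * h[i];   /* one MAC per tap */
    return acc;
}

Dedicated MAC units let hardware issue that inner operation every cycle, which is why their presence signals a DSP-like design.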

The title is misleading (5, Informative)

xoyoboxoyobo (945657) | more than 7 years ago | (#17982302)

That's not 62 watts at 1.8 teraflops; that's 62 watts at 3.16 GHz. FTFA: "Intel claims that it can scale the voltage and clock speed of the processor to gain even more floating point performance. For example, at 5.1 GHz, the chip reaches 1.63 TFlops (2.61 Tb/s) and at 5.7 GHz the processor hits 1.81 TFlops (2.91 Tb/s). However, power consumption rises quickly as well: Intel measured 175 watts at 5.1 GHz and 265 watts at 5.7 GHz. However, considering the fact that just 202 of these 80-core processors could replicate the floating point performance of today's highest performing supercomputer, those power consumption numbers appear even more convincing: The Department of Energy's BlueGene/L system, rated at a peak performance of 367 TFlops, houses 65,536 dual core processors."
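Sanity-checking the quote is straightforward; everything below uses only the figures quoted above, nothing else:

#include <stdio.h>

int main(void)
{
    /* 202 chips at peak vs. BlueGene/L's rated 367 TFlops */
    printf("202 chips x 1.81 TFlops = %.1f TFlops (vs. 367 for BlueGene/L)\n",
           202 * 1.81);
    /* Efficiency drops sharply as the clock rises */
    printf("Watts per TFlop: %.0f at 5.1 GHz, %.0f at 5.7 GHz\n",
           175 / 1.63, 265 / 1.81);
    return 0;
}

The first line comes out to about 366 TFlops, so the 202-chip comparison checks out; the second shows why quoting 62 W next to 1.8 TFlops is misleading.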

Re:The title is misleading (-1, Offtopic)

The Warlock (701535) | more than 7 years ago | (#17982444)

FLOPS != Hertz.

Re:The title is misleading (1, Insightful)

Anonymous Coward | more than 7 years ago | (#17982542)

Good for you for understanding that. Now if only you would make an effort to try to understand what xoyoboxoyobo wrote. (Hint: Nowhere in his comment does he equate flops with hertz.)

Re:The title is misleading (0)

Anonymous Coward | more than 7 years ago | (#17984030)

No, flops obviously don't equal hertz. What The Warlock meant, and what you hopefully inferred, is that there is no fixed linear relationship between them in general (no conversion coefficient, like gallons to liters). However, for a given software procedure, with an implementation for a given ABI, on hardware implementing said ABI, where the procedure operates entirely within circuitry controlled by the clock in question (e.g. Intel's majestically scheduled benchmark deep within their secret testing facilities, on the proc that's the subject of TFA), the relationship is mighty darn close to a straight line. AND

if you had read xoyoboxoyobo's excerpt of TFA, "For example, at 5.1 GHz, the chip reaches 1.63 TFlops and at 5.7 GHz the processor hits 1.81 TFlops", you would have where he equates flops with hertz.

All xoyoboxoyobo was trying to do was point out that the article description as posted was wrong/misleading, because it quoted the power consumption at 3.16 GHz alongside the work output at 5.7 GHz, which takes more than four times the power. But on /. everything is fish tales anyway; 4x is nothing, we're used to seeing articles about what Vista was supposed to do for the last 4+ years. oooooh BURN!
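That "straight line" is easy to verify from the excerpt: divide the quoted FLOPS by the quoted clock and you get a nearly constant FLOPs-per-cycle figure. The split into 80 cores at roughly 4 FLOPs per cycle each is an illustrative assumption, not something the excerpt states:

#include <stdio.h>

int main(void)
{
    /* For fixed hardware, FLOPS = FLOPs-per-cycle x clock. Solve both points. */
    printf("%.1f FLOPs/cycle at 5.1 GHz\n", 1.63e12 / 5.1e9);  /* ~319.6 */
    printf("%.1f FLOPs/cycle at 5.7 GHz\n", 1.81e12 / 5.7e9);  /* ~317.5 */
    /* Both ~320: consistent with, e.g., 80 cores x ~4 FLOPs per cycle
       (an assumption for illustration), i.e. performance linear in clock. */
    return 0;
}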

Re:The title is misleading (0)

Anonymous Coward | more than 7 years ago | (#17985164)

aww, did he hurt your feelings there, The Warlock? :(

I see that as a feature... (3, Funny)

StressGuy (472374) | more than 7 years ago | (#17982502)

Get the bugs worked out by Xmas and you could sell it as a 1.81 TFlop Easy-Bake Oven.

{...I need more sleep...}

Re:The title is misleading (0)

Anonymous Coward | more than 7 years ago | (#17983042)

Sure the title is misleading!

My calculations are:

The chip appears to be 2.5 inches on a side... that's about 6.25 sq. in. of area... which means that a flop is about 3.47 pico-square-inches! Which is nowhere in the article!

Boy, those flops are small these days!

Re: The title is misleading (1)

Dolda2000 (759023) | more than 7 years ago | (#17983186)

Furthermore, I think it's kind of weird to say that it's "one processor". It may be one chip, but is a processor defined by its die? Since it's an 80-core chip, isn't it more accurate to say that it's 80 CPUs on one die, just as a dual-core chip is rather two CPUs on one die? It's not as if it isn't impressive, but I think it's kind of misleading to say that it's just one processor.

Re: The title is misleading (1)

TeknoHog (164938) | more than 7 years ago | (#17985642)

I wonder the same whenever some marketing genius mentions a dual-core processor. Of course, processors didn't have cores until Intel innovated the Core architecture ;)

Take it a step further..... (1)

bostons1337 (1025584) | more than 7 years ago | (#17982304)

Now if they can only find a way to lessen its thirst for volts they could make it useful for the masses.

Imagine a Beowulf cluster... (-1, Offtopic)

drachenfyre (550754) | more than 7 years ago | (#17982310)

of these things. Oh wait. It IS its own Beowulf cluster...

Just imagine (2, Insightful)

andyck (924707) | more than 7 years ago | (#17982386)

"Intel" "Introducing the NEW CORE 80, personal laptop supercomputer running Windows waste my ram and cpu cycles SP2 edition" But seriously this looks interesting for the future. Now we just need software to fully utilize multicore processors.

Re:Just imagine (1)

TheUni (1007895) | more than 7 years ago | (#17983092)

Core 80? Psh. I'm waiting for Core2 80...

Though i'm tempted to wait for Core-Quad 80 extreme.... 320 cores!

Re:Just imagine (0)

Anonymous Coward | more than 7 years ago | (#17984902)

Yeah, we need to make software that can utilize multiple processors/cores. We can call it ... "Same time Many Processors" ... "SMP" for short...

bus speed (0)

Anonymous Coward | more than 7 years ago | (#17982412)

bus speed *cough* bus speed *cough* bus speed

Re:bus speed (0)

Anonymous Coward | more than 7 years ago | (#17982522)

That was informative. Now would you care to elaborate on which bus you are referring to? Are you saying "wow, it has a 256GB/s internal bus"? Or are you saying "well, regardless of how many TFlops the thing can do, it's still faced with dealing with an outside world saddled with slow buses"? If it's the latter, fine; this is just one of their research projects, and one can probably safely assume that the creators of PCI are busily working on more advanced buses on one front, and on optimizing compilers to minimize bus latency effects on the software front.

What kinds of apps does this make reasonable? (4, Interesting)

DoofusOfDeath (636671) | more than 7 years ago | (#17982462)

Does this permit the practical use of any truly breakthrough apps?

Does it suddenly make previously crappy technologies worthwhile? E.g., does image recognition or untrained speech recognition become a mainstream technology with this new processing power?

Re:What kinds of apps does this make reasonable? (5, Funny)

truthsearch (249536) | more than 7 years ago | (#17982554)

Does it suddenly make previously crappy technologies worthwhile?

Vista?

(Sorry, couldn't resist.)

Re:What kinds of apps does this make reasonable? (5, Funny)

DoofusOfDeath (636671) | more than 7 years ago | (#17982624)

Clippy?

"It looks like you're writing a five-page essay on the role of the Judicial branch during periods of famine in the late 1850's."

Re:What kinds of apps does this make reasonable? (2, Informative)

Heembo (916647) | more than 7 years ago | (#17984116)

I used to teach a 5th grade computer class, and please do not underestimate the power of Clippy(tm). I would instruct my students to remove Clippy, as I have done out of habit for so long, but they would rebel. I recall at least several classes where Clippy hypnotized my class (and kept them preoccupied and easy to deal with).

Re:What kinds of apps does this make reasonable? (1)

hackstraw (262471) | more than 7 years ago | (#17985096)

I recall at least several classes where Clippy hypnotized my class (and kept them preoccupied and easy to deal with).

Leave that to the experts.

That is what drugs and TV are for.

Re:What kinds of apps does this make reasonable? (0, Offtopic)

danlock4 (1026420) | more than 7 years ago | (#17985198)

I used to teach a 5th grade computer class, and please do not underestimate the power of Clippy(tm). I would instruct my students to remove Clippy, as I have done out of habit for so long, but they would rebel. I recall at least several classes where Clippy hypnotized my class (and kept them preoccupied and easy to deal with).
*sniffle* The things 5th graders get to use these days! Why, when I was a lad, we didn't get access to computers until 6th grade, but we learned BASIC programming, darn it! (The school had about four CBM PET machines with built-in green monochrome CRTs.) There was no Clippy to waste our time!

Re:What kinds of apps does this make reasonable? (1)

Heembo (916647) | more than 7 years ago | (#17985358)

Sorry man, I was teaching them VBA programming, but they wanted clippy. Parents rule in private schools.

Re:What kinds of apps does this make reasonable? (1)

riskeetee (1039912) | more than 7 years ago | (#17984602)

Duke Nukem Forever!

Re:What kinds of apps does this make reasonable? (4, Interesting)

Frumious Wombat (845680) | more than 7 years ago | (#17982650)

Atomistic simulations of biomolecules. Chain a bunch of those together, and you begin to simulate systems on realistic time scales. Higher-resolution weather models, or faster and better processing of seismic data for exploration. Same reason that we perked up when the R8000 came out with its (for the time) aggressive FPU. 125 MFlops/proc@75MHz [netlib.org] was nothing to sneeze at 15 years ago. If they can get this chip into production in usable quantities, and if it has the throughput, then they're on to something this time.

Of course, this could just be a single-chip CM2 [wikimedia.org]; blazingly fast but almost impossible to program.

Re:What kinds of apps does this make reasonable? (2, Interesting)

Intron (870560) | more than 7 years ago | (#17982698)

Realtime, photorealistic animation and speech processing? Too bad AI software still sucks or this could pass a Turing test where you look at someone on a screen and don't know whether they are real or not.

Re:What kinds of apps does this make reasonable? (1)

CubicleView (910143) | more than 7 years ago | (#17982908)

I "seriously" doubt that this could be used to pass a turning test. The noise and heat from the fan sink keeping a 250+ watt processor cool would be a dead giveaway. If I recall correctly though I don't think you need a fancy avatar for the robot/computer/whatever to pass the turning test. It's more of a black box approach where all that matters is what the box says not how it says it.

Re:What kinds of apps does this make reasonable? (5, Funny)

Intron (870560) | more than 7 years ago | (#17983368)

Sorry, your post made me realize that a sophisticated processor is unnecessary. It's already difficult to tell whether a message is from a human or just a randomly generated string of nonsense.

Re:What kinds of apps does this make reasonable? (2, Insightful)

vertinox (846076) | more than 7 years ago | (#17984108)

Does this permit the practical use of any truly breakthrough apps?

From my understanding, with that many cores the OS could simply allocate one application per core.

But the OS has to support that feature, or have applications that know how to call on unused cores.

From my understanding, Parallels for OS X only uses one core, and picks whichever core gives the best performance.

Of course, there are also applications that could be programmed to use all the cores at once if they needed to do scientific calculations or something like ray tracing.

wahoo (1)

DaMattster (977781) | more than 7 years ago | (#17982478)

I gotta get me one of these. This lends new credence to the Staples Easy Button approach to major scientific and engineering problems. "That was easy!"

EIGHTY Cores??? (4, Funny)

rwyoder (759998) | more than 7 years ago | (#17982500)

64 cores should be enough for anybody.

Re:EIGHTY Cores??? (0)

Anonymous Coward | more than 7 years ago | (#17982906)

How many times has this dead horse been beaten? 64 times should be enough for anybody.

Re:EIGHTY Cores??? (1)

mr_mischief (456295) | more than 7 years ago | (#17983926)

Build me a home computer that supports five to forty times the memory of all its competitors and then make fun of the PC.

Seriously, when IBM and Microsoft released the IBM 5150 PC and MS-DOS/PC-DOS, the Apple II+ had 48k expandable to 64k, the Atari 600XL had 16k and the 800XL had 64k, Commodore hadn't yet released the Commodore 64, leaving them with the 5k VIC-20, and the Tandy Color Computer 1 had 32k. Most of those systems had 8-bit processors in the one-megahertz range. The IBM had a processor which did similar work per clock, and it ran at 4.77 MHz.

Sure, IBM and Microsoft could have had the foresight to support 16 megabytes or even 64 gigabytes of memory using the same software as their 256k and 512k offerings that were upgradeable to 640k. I doubt they knew how successful the platform would be. At the time, computers leapfrogged each other and people bought whole new platforms from completely different companies. Data was moved from one system to another by floppy or cassette tape if you were lucky, or by hand if you weren't. IBM probably had no idea the same platform would be overhauled so many times back in 1981.

IBM was probably mostly interested in getting companies using their mainframes and midrange systems to stick with the brand anyway. It's like Harley Davidson and John Deere selling golf carts or like Ralph Lauren making bedsheets. Those companies want you to think of them whenever you think of anything related to their core business. I'm betting Harley Davidson doesn't make a lot of money on golf carts, but even at break-even it's better than seeing a Kawasaki golf cart owner go and buy a Kawasaki motorcycle.

IBM had the right product at the right time in the 5150, and it took off. The 640k limit seemed silly once people knew what EMS and XMS were. Since OS/2, Windows, Linux, and most other operating systems put the processor into protected mode and don't use the BIOS after they finish loading, it's no longer an issue. Now it's two or four cores, 32 or 64 bits, 100Mb or 1000Mb networking, and hundreds of gigabytes on your hard drive. It's Windows XP vs. Vista vs. Linux vs. OS X vs. BSD vs. whatever. It's no longer 64k vs. 640k, twin 360k floppies vs. single 160k floppies, 40 columns vs. 80 columns, RS-232 vs. RS-485 serial ports (with many manufacturers not doing either), 4 colors vs. 16, and a 5-meg hard drive being an expensive option.

So please, can we give some credit where it's due, and get over a bit of shortsightedness on a product that's five years older than Blake Ross?

Wow, I can't wait! (3, Funny)

cciRRus (889392) | more than 7 years ago | (#17982598)

Gonna get one of these. That should bump up my Vista Experience score.

Re:Wow, I can't wait! (2, Funny)

daeg (828071) | more than 7 years ago | (#17982668)

Except there won't be any Vista drivers. Damn!

Real-time Ray Tracing? (5, Interesting)

Dr. Spork (142693) | more than 7 years ago | (#17982728)

When I read about this I didn't get all worked up, since I imagined it would be almost impossible for realistic applications to keep all 80 cores busy and get the teraflop benefits. But then I read about the possibility of using this for real-time ray tracing, and got very intrigued!

Ray tracing is embarrassingly parallel, and while I'm no expert, two teraflops might just be enough computing power to do a pretty good job at scene rendering, maybe even in real time. To think this performance would be available from a standard 65nm die that draws 62 watts... that really could make a difference to gamers!

Re:Real-time Ray Tracing? (2, Interesting)

Vigile (99919) | more than 7 years ago | (#17982862)

Yep, that's one of the things that got me excited about it as well. Did you also read the article on ray tracing on the same pcper.com site, by a German guy who made a Quake 4 ray-tracing engine?

http://www.pcper.com/article.php?aid=334 [pcper.com]

Re:Real-time Ray Tracing? (0)

Anonymous Coward | more than 7 years ago | (#17982968)

Firstly, as addressed elsewhere, this chip can do 1.8 TFlops, and it can run at 62W, but not both at the same time. At full speed it uses a lot more power.

Secondly, "a big difference"? I play a lot of games, and I always complain that CGI lighting (in movies too) looks pretty crap, but I can't really say that the lighting in games truly affects the enjoyment. It also seems like ray tracing just isn't worth it in a value-per-FLOP sense; even if these chips were available, they'd probably be better used the way current graphics hardware is, rendering pixel-shader effects and the like.

Re:Real-time Ray Tracing? (1)

tcas (691110) | more than 7 years ago | (#17983204)

I'm sorry, but this comment is really crazy:

Firstly, there are hundreds of computation-intensive applications that can keep 80 cores busy: environmental modeling, protein folding... anything that currently uses a supercomputer.

Secondly, why is the parallelizable nature of ray tracing embarrassing?! It's parallelizable exactly because each ray is computed independently of other rays - I don't see what is embarrassing or surprising about that.

Finally, talking about the application to consumer gaming shows that you completely missed the point of this story. Way down the line... another whole generation of hardware later, maybe you'll have 80 cores on your home computer. But it's a little early to be thinking about that right now.

Re:Real-time Ray Tracing? (5, Informative)

ispeters (621097) | more than 7 years ago | (#17983282)

Secondly, why is the parallelizable nature of ray tracing embarrassing?! It's parallelizable exactly because each ray is computed independently of other rays - I don't see what is embarrassing or surprising about that.

It's embarrassing because "Embarrassingly parallel" [wikipedia.org] is the technical term for problems like ray tracing. It's a parallelizable problem wherein the concurrently-executing threads don't need to communicate with each other in order to complete their tasks so the performance of a parallel solution scales almost perfectly linearly with the number of processors that you throw at the problem.

Ian
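To make the definition concrete, here is a minimal pthreads sketch; shade() is a made-up stand-in for tracing one ray. Each thread fills only its own band of rows and never communicates with the others, which is exactly why the speedup scales nearly linearly with cores:

#include <pthread.h>
#include <stdio.h>

#define W 640
#define H 480
#define NTHREADS 8

static float img[H][W];

static float shade(int x, int y)          /* stand-in for tracing one ray */
{
    return (float)(x ^ y) / (float)W;
}

static void *render_band(void *arg)
{
    int t = (int)(long)arg;
    for (int y = t * (H / NTHREADS); y < (t + 1) * (H / NTHREADS); y++)
        for (int x = 0; x < W; x++)
            img[y][x] = shade(x, y);      /* no locks, no shared writes */
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, render_band, (void *)t);
    for (int t = 0; t < NTHREADS; t++)
        pthread_join(tid[t], NULL);
    printf("done\n");
    return 0;
}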

Re:Real-time Ray Tracing? (1)

VE3MTM (635378) | more than 7 years ago | (#17983516)

"Embarrassingly parallel" is a term for such problems, where each step or component is independent and requires no communication.

http://en.wikipedia.org/wiki/Embarrassingly_parallel [wikipedia.org]

Re:Real-time Ray Tracing? (1)

fitten (521191) | more than 7 years ago | (#17983810)

It's parallelizable exactly because each ray is computed independently of other rays - I don't see what is embarrassing or surprising about that.


As others have said, "embarrassingly parallel" isn't a derogatory term, any more than "greedy algorithm" is.

Re:Real-time Ray Tracing? (0)

Anonymous Coward | more than 7 years ago | (#17984266)

As others have said, "embarrassingly parallel" isn't a derogatory term, any more than "greedy algorithm" is.

So, if others have said it, why did you feel the need to reply and add nothing to the conversation?

Re:Real-time Ray Tracing? (1)

fitten (521191) | more than 7 years ago | (#17985586)

Dunno... perhaps you could answer that question yourself, now ;)

Re:Real-time Ray Tracing? (1)

Linker3000 (626634) | more than 7 years ago | (#17984356)

Ooh look - three replies in a row - parallel!! - explaining the definition of a term related to parallel processing.

Is something going to explode now?

Re:Real-time Ray Tracing? (1, Insightful)

Anonymous Coward | more than 7 years ago | (#17984294)

While the ray tracing algorithm is embarrassingly parallel, I would imagine memory access is not. Having 80 cores accessing pretty much the same data (mainly textures) could be a problem. Perhaps procedurally generating textures would solve this. Perhaps caching is enough. I'm no ray tracing expert so please correct me if I'm wrong here.

Re:Real-time Ray Tracing? (1)

schmiddy (599730) | more than 7 years ago | (#17984948)

I'd just like to point out that, yes, it would be great to do real-time ray tracing with such powerful processors. Last week I was up until 6 in the morning waiting for a 2+ hour render of a reasonably simple scene to finish. Yeah, these procs would be great... if someone would just write a parallelizable version of POV-Ray for Linux. Before someone jumps in to point to the few ports out there, let me head you off:

A distributed version of POV-Ray exists using the MPI library [demon.co.uk], but it's based on the pretty old 3.1 branch (POV-Ray is on a 3.6 beta right now). This is important because even the newest POV-Ray betas have pretty vanilla features compared to some of the other experimental branches (like MegaPOV) that include things like motion blur to simulate moving objects, etc. I haven't even tried MPI POV because I like playing around with must-have toys like radiosity.

A version that looks really good for Windows (bleh..) and is based off the 3.6 branch is SMPov [it-berater.org]. I really, really, really wish someone would port this to Linux so that I could have a chance to play.

And finally, there is a patch to POV-Ray that will work on Linux using the PVM library, and it will work with the 3.5 branch. Sounds good, until you read the Howto [sourceforge.net]. Quoting directly: "Radiosity is not working. The resulting image looks like a mosaic. The energy bias for each block is different because the radiosity equation is not globally resolved correctly."

I suppose someone's going to tell me I should just do it myself. *Sigh*. I'm actually learning Erlang right now to learn more about distributed processing. Maybe, someday..

Sounds like a cellular automata machine (0)

Anonymous Coward | more than 7 years ago | (#17982742)

The architecture is very much like how one might build a cellular automata machine, albeit with FPUs instead of lookup tables.

As an example, check out CAM-8: http://www.ai.mit.edu/projects/im/cam8/ [mit.edu]
It dated from 1993 or so; it took at least a 1 GHz Pentium III to match its cellular automata performance, if I recall correctly.

Tflops all over the place. (2, Funny)

tocs (866673) | more than 7 years ago | (#17982746)

I hope they can get them back in.

I for one welcome our new Android overlords... (5, Informative)

doomy (7461) | more than 7 years ago | (#17982788)

33 of these CPUs should be more than enough to construct Lt. Cmdr Data [wikipedia.org].

Re:I for one welcome our new Android overlords... (1)

pimpimpim (811140) | more than 7 years ago | (#17983230)

If I follow the wiki link, most of the "information" there seems to come from an episode from 1989 [wikipedia.org]. Apart from the sad thing that some people actually treat these data as real, the fun thing is that the scriptwriters who made up these fictional specs apparently did a pretty good job: the computer specifications they invented would still be out of reach for normal PCs 20 years later.

Re:I for one welcome our new Android overlords... (0)

Kjella (173770) | more than 7 years ago | (#17985418)

Apart from the sad thing that some people actually treat these data as real, the fun thing is that the scriptwriters who made up these fictional specs apparently did a pretty good job: the computer specifications they invented would still be out of reach for normal PCs 20 years later.

Well, for one, it's 18 by now; secondly, they're supposed to be 300 years in the future. How's being off by a factor of 10 particularly impressive? In another 20 years, you'll see old sci-fi nerds making gags about having more processor power than Data.

exaflop computers? (2, Insightful)

peter303 (12292) | more than 7 years ago | (#17982956)

Since petaflops are likely by the end of the decade, it's time to imagine exaflops in 2020.

Pinch one loose (0, Offtopic)

jrmiller84 (927224) | more than 7 years ago | (#17983034)

"Intel Squeezes 1.8 TFlops Out of One Processor"

In other news, AMD pinches a 1.9 TFlops loaf out of one processor

What is the point for 80 cores on the FSB (2, Insightful)

Joe The Dragon (967727) | more than 7 years ago | (#17983072)

The FSB will be a big bottleneck, even more so with the CPU needing to use it to get to RAM. You would need about 3-4 FSBs, with 1-2 MB of L2 per core, to make it fast.
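A back-of-envelope sketch of the concern. Both inputs are assumptions for illustration: a roughly 10.7 GB/s front-side bus (a high-end figure for the era) and a worst case of one 4-byte operand fetched from RAM per FLOP; on-chip caches exist precisely to avoid that worst case.

#include <stdio.h>

int main(void)
{
    double flops = 1.81e12;        /* peak figure from the article */
    double bytes_per_flop = 4.0;   /* assumption: one fresh float per FLOP */
    double fsb = 10.7e9;           /* assumption: ~10.7 GB/s bus */
    printf("Needed: %.0f GB/s, available: %.1f GB/s -> %.0fx short\n",
           flops * bytes_per_flop / 1e9, fsb / 1e9,
           flops * bytes_per_flop / fsb);
    return 0;
}

Even if caches absorb 99% of accesses, a single bus is still several times too slow, which is the poster's point about needing multiple FSBs (or, as later designs did, on-die memory controllers).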

Re:What is the point for 80 cores on the FSB (0)

Anonymous Coward | more than 7 years ago | (#17983592)

I don't see any mention of what type of memory or bus architecture they are using with this, but I think it's fair to assume that their chip architects aren't complete asshats, and that they've given it sufficient memory bandwidth to keep the cores all fed! ;-)

Re:What is the point for 80 cores on the FSB (0)

Anonymous Coward | more than 7 years ago | (#17983712)

Hello fool? Did you read any of the articles?

Re:What is the point for 80 cores on the FSB (1)

daniel_gustafsson (744665) | more than 7 years ago | (#17983948)

What is the point of commenting on an article you haven't read?

Re:What is the point for 80 cores on the FSB (1)

RightSaidFred99 (874576) | more than 7 years ago | (#17985282)

Umm, guh? This chip is an experimental chip and won't see the light of day for years. The FSB doesn't have years left. Ergo, this is a non sequitur - FSB has nothing to do with this chip.

hm (1)

UPZ (947916) | more than 7 years ago | (#17983134)

And here I was, about to buy a Core 2 Duo P.O.S.

More room for bloatware..... (1, Insightful)

madhatter256 (443326) | more than 7 years ago | (#17983224)

Yep. The only way to really use this effectively is to load it up with lots of bloatware. Imagine the tons of ads one could finally serve with this type of CPU! doubleclick.net would seriously love this.

On average, people still effectively use processing power equivalent to that of an 800 MHz Pentium 3 for basic stuff (and I'm just talking about word processing, email, internet, no gaming). Why would someone need a quad-core CPU and a crappy video card just for surfing the net, typing, etc.?

In reality, that is what will ultimately happen: just lots of stuff running in the background without us really noticing it. The speed and extra cores make it easier to hide spyware, because you won't notice any slowdown when the spyware loads, whereas on an older PC you will notice when something is running in the background, since it slows the machine down considerably. Bloatware will end up becoming tolerable when these types of CPUs start being put in desktop PCs. People will get used to it, much as most people tolerate spam in their email.

Re:More room for bloatware..... (1)

Udderdude (257795) | more than 7 years ago | (#17983652)

Yes, people will really tolerate random popups and keyloggers that steal passwords/credit card information. What?

Nope, think about what the other IHV's are doing. (1, Interesting)

Anonymous Coward | more than 7 years ago | (#17983710)

This clearly isn't for CPUs. It's for building GPUs, and more importantly for Intel to get a part of the huge growing market demand for general-purpose programming on GPUs. We'll have to call them something other than GPUs in 5-10 years, as they'll do all sorts of other jobs too.

IBM saw this coming and went with the Cell, AMD saw this coming and bought ATI, and NVIDIA already has a card with all these shader units. Intel would be stupid not to respond. They've already admitted a discrete GPU part is on the way (http://www.reghardware.co.uk/2007/01/23/intel_discrete_gpu_return).

Only the other day there was a story (either The Register or The Inquirer; AFAIK it has since been deleted) about their GPU part being a whole bunch of in-order x86 cores on a chip. The pieces of the jigsaw are slotting together. That would make GPGPU programming easy for many. Intel wants to move the x86 architecture onto GPUs.

Ah well, I wonder when we'll get that story confirmed. Intel is clearly up to something... I think we'll know what shortly. All in all it spells trouble for NVIDIA, being left out of the CPU part of the equation, with Intel, AMD, and in some respects IBM all having combos.

Anon because I've signed way too many NDAs...

Re:Nope, think about what the other IHV's are doin (1)

ThosLives (686517) | more than 7 years ago | (#17985042)

The interesting question is, if you take a special-purpose processor (GPU) and turn it into a general-purpose processor, which was the wrong classification initially?

Only on slashdot... (1)

icydog (923695) | more than 7 years ago | (#17984284)

Only on slashdot will you find a post complaining about how bad of an idea an 80-core processor is. (On a side note, I'll finally be able to open PDFs in less time than it takes to go to the bathroom and back.)

Narrow Minded (4, Insightful)

Deltronica (1063232) | more than 7 years ago | (#17983364)

Many comments on this post are centered around the processor's use as a personal computing solution. There is much more to computing than PCs! When viewed alongside specialized programming technology, bioinformatics, neurology, and psychology, this (rather large) leap in processing power brings AI to yet another level, and continues the law of accelerating returns. I'm not saying "oh wow, now we can have human-like AI"; I'm just saying that the ability to process 1.8 TFlops is nothing to scoff at. Personal computing is inane and almost moot when compared to the other applications that new processors may pave the way for. Know your facts, but use your imagination.

Re:Narrow Minded (0)

Anonymous Coward | more than 7 years ago | (#17984332)

Spot on. As well as climate simulations, think brain simulations. (Sorry no mod points.)

You won't notice a performance difference... (4, Funny)

Dekortage (697532) | more than 7 years ago | (#17983450)

They've already allocated 40 cores to the RIAA and MPAA for DRM processing, 30 cores to NSA/Homeland Security surveillance of all your computing activities, and 6 cores to combat spam and phishing. In the end, there is no net gain in performance over today's processors. Sorry.

(tongue firmly planted in cheek)

About time... (5, Funny)

nadamucho (1063238) | more than 7 years ago | (#17983530)

Looks like Intel finally put the "80" in 80x86.

WOW. How do you program it? (1)

Absolut187 (816431) | more than 7 years ago | (#17983538)

I was reading an article about this on the BBC
http://news.bbc.co.uk/2/hi/technology/6354225.stm [bbc.co.uk]
From that article:

There are already specialist chips with multiple cores - such as those used in router hardware and graphics cards - but Dr Mark Bull, at the Edinburgh Parallel Computing Centre, said multi-core chips were forcing a sea-change in the programming of desktop applications.
How is this done?
Take an RTS game like Starcraft, for example.
Would there be one core assigned to AI pathfinding, one to collision detection, one to network packet creation, one to graphics, etc.?

I'm not the best programmer in the world, but how the heck would you utilize 80 cores?

Re:WOW. How do you program it? (1)

Splab (574204) | more than 7 years ago | (#17984174)

Easy, you use something like CSP where just about everything is a thread.

Re:WOW. How do you program it? (1)

caffeinemessiah (918089) | more than 7 years ago | (#17984318)

There exists a moderately sized computing world outside of games. 80 cores, as you have pointed out, are clearly not directed towards gamers, or even personal computing at the moment. I would personally love one of these for my simulations, but I can use up absolutely any number of cores without too much trouble. If you want to extend it to games, it isn't very hard to imagine. As someone else mentioned, with a handful of cores you could probably do real-time ray tracing, which is naively parallel and can eat up any number of cores. You'd get photorealistic graphics, probably indistinguishable from real life. Throw in 40 cores there to run that big-screen plasma TV. Throw in a couple of cores for AI, which is much more than path-finding. You can have any number of learning algorithms that monitor every aspect of your gameplay and build better agents. In fact, throw in ..say..a dozen cores and you'd have enemies that might actually seem intelligent. A few more cores to track millions of particles and objects in the game.

Remember that a lot of algorithms can be parallelized to use any number of cores (it gets inefficient after a point, but there definitely is an initial speedup).

Re:WOW. How do you program it? (1)

pizza_milkshake (580452) | more than 7 years ago | (#17984460)

With compilers/tools meant for programming it. Before virtual memory, programmers had to program for their machine's RAM size and manually manage their memory using "overlays" (or so I've read), but now this concept seems horrid to younger programmers. A generation from now, programmers will read about how computers used to have only one logical core and think it ludicrous.

My uninformed, amateur guess is that functional languages will become more popular for programming massively multi-core machines (this coming from a C programmer). They will start to become faster than imperative languages because their workloads can be more easily recognized and farmed out to multiple cores.

OpenMP (1)

S3D (745318) | more than 7 years ago | (#17984468)

I'm not the best programmer in the world, but how the heck would you utilize 80 cores?

OpenMP hides multithreading from the developer and makes parallelization completely transparent. A couple of OpenMP directives can parallelize a complex loop with no effort from the developer at all. That is especially easy in physical simulation and AI. http://en.wikipedia.org/wiki/OpenMP [wikipedia.org]
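A minimal sketch of that claim; the loop and data are made up, but the single pragma is genuinely all it takes (compile with, e.g., gcc -fopenmp):

#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void)
{
    static double a[N], b[N];
    double sum = 0.0;

    /* One directive splits the iterations across all available cores
       and safely combines the per-thread partial sums. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        a[i] = i * 0.5;
        b[i] = i * 0.25;
        sum += a[i] * b[i];
    }
    printf("sum = %g, threads available = %d\n", sum, omp_get_max_threads());
    return 0;
}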

Re:WOW. How do you program it? (1)

MajinBlayze (942250) | more than 7 years ago | (#17984524)

Take a game like Starcraft:
As mentioned earlier, ray tracing is embarrassingly parallel; each core can render a few hundred pixels, making real-time ray tracing possible at 30fps.

AI: some strategy applications of AI are parallel, e.g. figuring out several possible paths at once; as the path branches, more cores can be used to determine the best possible approach.

Each unit can have its own AI (probably more useful in FPS games).

And finally: there is more to computing than Starcraft. (Sorry, Korea.)

Re:WOW. How do you program it? (1)

deviousalex (1059572) | more than 7 years ago | (#17985076)

Who says each game would have to utilize all 80 cores? Multi-core processors make other things possible, such as encoding videos in the background while playing a video game. With something like this, everyone could easily run a game server while playing the game on the same computer with no slowdown. You could run a CS server, a UT server, and a couple of other game servers for friends, all while playing one of those games!

Re:WOW. How do you program it? (0)

Anonymous Coward | more than 7 years ago | (#17985318)

OS X automatically uses as many cores as it can find, splitting the job between them. When someone swapped the two Core 2 Duos in a Mac Pro for two quad-core processors, it was able to see and use every core it had.

Re:WOW. How do you program it? (1)

radish (98371) | more than 7 years ago | (#17985874)

Thread-based programming really isn't that hard, particularly where you have a problem space which can be split up into discrete chunks of work. Example: a Photoshop blur filter. Just divide the image up into (overlapping) chunks and blur each piece on a different thread. Another example: digital audio. Put each VST instrument on its own thread. Once your apps are well threaded (and in many cases they already are), you can simply rely on the OS to schedule them over however many cores are available. For example, I write server code on my desktop box (single core w/hyperthreading) and it runs perfectly happily on the 64-core production servers, just faster.

Of course this is simplifying things a bit, and it is hard to get the very best performance from any given environment, but you can make a big difference quite easily.


to stuff with FLOPs! (1)

Anonymous Cowpat (788193) | more than 7 years ago | (#17983786)

I want something that will do 1.8 trillion integer operations per second (single-threaded). This simulation is taking 5 hours per run on this A64 3200+. Give me 1.8 TIOPs and I'll be listening.

The /. conundrum (1)

Sebastopol (189276) | more than 7 years ago | (#17983818)

You can't call this useless while supporting NVIDIA's or ATI's stream computing; they are the same thing.

This is the future of CPUs: everyone is doing it, and with graphics manufacturers heading down this path, it makes for a very interesting future.

not first; Connection Machine, MasPar (1)

peter303 (12292) | more than 7 years ago | (#17983998)

Others have built large-scale parallelism in the past, such as Thinking Machines and MasPar. Those were not fully general CPUs (they lacked full floating point, for instance). Plus, those companies could only develop new generations on a 3-5 year time scale, so general-purpose workstations and clusters had almost caught up by then. Having a "major" back large-scale parallelism may finally lift the curse.

Anyone notice? (1)

treeves (963993) | more than 7 years ago | (#17984114)

One year later, and /. has updated their Intel logo to the new one?

Pfft (2, Funny)

elmedico27 (931070) | more than 7 years ago | (#17984148)

I'll believe it when I see a Beowulf cluster of these things

Dear Intel: (1)

rbarreira (836272) | more than 7 years ago | (#17984398)

please use a power of two for the number of cores. Base 10 sucks.

Sincerely, /. nerds

Sorry (2, Funny)

rbarreira (836272) | more than 7 years ago | (#17984472)

Sorry, I obviously meant "Base 1010 sucks"...

Wonder why this hasn't been mentioned yet ?? (1)

ashren (965884) | more than 7 years ago | (#17984528)

Anyone for Duke Nukem Forever??

Is 1.8TFLOPS really that much though? (0, Troll)

OfNoAccount (906368) | more than 7 years ago | (#17985128)

Xbox360 = ~1TFLOPS
PS3 = ~2.18TFLOPS
According to Wikipedia [wikipedia.org]

Also, why does the article compare to a BlueGene variant, when in supercomputer terms it's really competing against things like MDGRAPE-3 which are already in the PFLOP range?

Re:Is 1.8TFLOPS really that much though? (1)

RightSaidFred99 (874576) | more than 7 years ago | (#17985350)

1.8 _Real_ TFLOPS is a lot. The 360 and PS3 have 1/2.18 "fake" TFLOPS.

110C??! (1)

Some_Llama (763766) | more than 7 years ago | (#17985154)

Did anyone else see that?

"Even more impressive, this chip is able to achieve incredibly high clock speeds on modest power usage. Running on a 1.0v current at 110 degrees C the tile maximum frequency is 3.13 GHz while at 1.2v the tiles can run at 4.0 GHz."

That would be about 230°F; would Peltier coolers be mandatory?

Good luck on the compiler (1)

cfulmer (3166) | more than 7 years ago | (#17985786)

As the article points out, this is a VLIW (Very Long Instruction Word) design: in effect, each instruction word is broken up into chunks, with a chunk going to each processor. This means you can end up with some bizarre situations: what happens, for example, if one processor needs to jump to one location in memory and the other 79 don't? Effectively, your compiler would need to realize this and arrange for the instructions at that memory location for the other 79 processors to be the same. (In reality, I don't think you'd do this; that processor would probably just sit and wait for the others.) This is not the equivalent of having two cores, each able to run independently.

The real bottleneck here is the compiler, not the processor, because the compiler has to pick up on implicit parallelism in the code and dole it out among the available cores. While it's possible to help the compiler by using a language where the programmer specifies the parallelism, if you think about it, that's the opposite of the direction computer languages have taken over the last 20 years.

The biggest problem this technology has is that it is expensive compared with a compute cluster, which can scale easily and can be more easily programmed. The main cases where the cluster won't do better are those where each core needs results from other cores so frequently that the message-passing overhead is too high.
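To make "implicit parallelism" concrete: a compiler for a machine like this has to prove that loop iterations are independent before it can spread them across units. A minimal C sketch, with illustrative arrays not tied to the article:

#include <stddef.h>

void independent(float *c, const float *a, const float *b, size_t n)
{
    /* No iteration depends on another: the compiler can freely
       schedule these multiplies across parallel units. */
    for (size_t i = 0; i < n; i++)
        c[i] = a[i] * b[i];
}

void dependent(float *acc, const float *a, size_t n)
{
    /* Loop-carried dependence: iteration i needs iteration i-1,
       leaving the compiler almost nothing to parallelize. */
    for (size_t i = 1; i < n; i++)
        acc[i] = acc[i - 1] * a[i];
}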