"Intrepid" Supercomputer Fastest In the World

CmdrTaco posted more than 6 years ago | from the bravely-and-quickly dept.

Supercomputing 122

Stony Stevenson writes "The US Department of Energy's (DoE) high performance computing system is now the fastest supercomputer in the world for open science, according to the Top 500 list of the world's fastest computers. The list was announced this week during the International Supercomputing Conference in Dresden, Germany. IBM's Blue Gene/P, known as 'Intrepid,' is located at the Argonne Leadership Computing Facility and is also ranked third fastest overall. The supercomputer has a peak performance of 557 teraflops and achieved a speed of 450.3 teraflops on the Linpack application used to measure speed for the Top 500 rankings. According to the list, 74.8 percent of the world's supercomputers (some 374 systems) use Intel processors, a rise of 4 percent in six months. This represents the biggest slice of the supercomputer cake for the firm ever."
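Two of the summary's figures can be sanity-checked with quick arithmetic (a sketch using only the numbers quoted above):

```python
# Sanity-check two figures from the summary above.
peak_tflops = 557.0        # Intrepid's theoretical peak
linpack_tflops = 450.3     # measured Linpack speed (Rmax)

efficiency = linpack_tflops / peak_tflops
print(f"Linpack efficiency: {efficiency:.1%}")        # -> 80.8%

intel_share = 374 / 500
print(f"Intel share of the list: {intel_share:.1%}")  # -> 74.8%
```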

122 comments


So ... let met be the first to ask ... (5, Funny)

YeeHaW_Jelte (451855) | more than 6 years ago | (#23858557)

... will it run Vista with everything on?

Re:So ... let met be the first to ask ... (5, Funny)

cp.tar (871488) | more than 6 years ago | (#23858625)

... will it run Vista with everything on?

Sure it will.

As long as you don't run any programs.

Re:So ... let met be the first to ask ... (4, Interesting)

Gewalt (1200451) | more than 6 years ago | (#23858633)

No. Supercomputers tend not to have much in the way of graphics cards, and Vista will not fall back to software rendering. But for a mere $500, you could upgrade the beast so it could.

Re: Your sig: (0)

m.ducharme (1082683) | more than 6 years ago | (#23860055)

I see what you did there....

Re: Your sig: (1)

Alpha Whisky (1264174) | more than 6 years ago | (#23863235)

Really? I would say his sig implies he's a sockpuppet since a UID > 1.2e6 wouldn't have been around in 1999 (not as a moderator anyways).

Re: Your sig: (1)

m.ducharme (1082683) | more than 6 years ago | (#23863539)

Fair enough. I was referring to the Inciteful/Insightful pun, however.

Re:So ... let met be the first to ask ... (0)

PenguinBob (1208204) | more than 6 years ago | (#23860765)

But to really get smoothness, you need $3,000 for tri-SLI!

Re:So ... let met be the first to ask ... (1)

arktemplar (1060050) | more than 6 years ago | (#23861875)

Well, I'd argue that point. Though this one doesn't, I know there are people at IBM looking into the Tesla GPUs. They pack a mean punch and now come with native support for double-precision arithmetic; the only downsides are power usage and form factor. With each one able to perform at about 200 GFLOPS, even a couple of hundred of them would be extremely useful.

Re:So ... let met be the first to ask ... (2, Funny)

dubloe7 (966214) | more than 6 years ago | (#23858669)

I don't know, but Crysis looks amazing on it.

Re:So ... let met be the first to ask ... (4, Funny)

Conspiracy_Of_Doves (236787) | more than 6 years ago | (#23859143)

Please, be realistic. These guys are computer engineers, not miracle workers.

Honestly.. (1)

Junta (36770) | more than 6 years ago | (#23860809)

No, it really cannot. Since it isn't x86 based, it doesn't meet the minimum requirements for Vista.

Re:So ... let met be the first to ask ... (2, Funny)

awpoopy (1054584) | more than 6 years ago | (#23861189)

... will it run Vista with everything on?
Nothing does.

Remember when Apple used to compete in this... (1)

klubar (591384) | more than 6 years ago | (#23861293)

When the Mac G5 came out, Apple trumpeted the "new supercomputer". I think there were even a couple of projects that put together a Mac supercomputer out of a cluster of G5 servers; perhaps it even made the list for a while. Looking at the list now, I didn't see even one Apple machine listed (you'd think someone would have clustered a bunch of Apple/Intel servers).

Another example of the reality distortion field...remember the "first 64-bit desktop" and "the thinnest laptop*"?

*Ports and DVD drive not included.

Re:Remember when Apple used to compete in this... (1)

billcopc (196330) | more than 6 years ago | (#23861567)

I'm surprised they didn't claim "first Intel processor-based computer".

Re:Remember when Apple used to compete in this... (1)

Arkham (10779) | more than 6 years ago | (#23861855)

http://www.top500.org/node/13224 [top500.org]

This was the cluster. At its peak it was #14 on the top 500 list.

There's no reason to believe that Apple systems (XServes, etc) couldn't be used for a supercomputer cluster, but since they now use the same Xeon processors as everyone else, there's no compelling reason to choose them over another vendor of similar hardware.

Re:So ... let met be the first to ask ... (0)

Anonymous Coward | more than 6 years ago | (#23861357)

Actually according to the OS list, 5 of the systems run Windows:

Windows 5 1.00 % 159264 211320 25472

Re:So ... let met be the first to ask ... (1)

Ilgaz (86384) | more than 6 years ago | (#23861741)

MS started to get very active in supercomputing lately, and their buddies started writing about how non-existent they are in that field.

http://news.search.yahoo.com/news/search?p=Microsoft+supercomputing&c= [yahoo.com]

Of course, as long as they don't re-invent Unix, the planet is safe. I can't picture people cleaning the registry of an atomic explosion simulator.

what? where? (1)

Gewalt (1200451) | more than 6 years ago | (#23858587)

What happened to Blue Gene M, N and O?

Re:what? where? (3, Funny)

cp.tar (871488) | more than 6 years ago | (#23858655)

What happened to Blue Gene M, N and O?

I'm more concerned about A, C, G and T.

Re:what? where? (1)

Mick Malkemus (1281196) | more than 6 years ago | (#23858977)

Agreed. DNA computing just sounds too damn weird... Like, after I die, will my DNA be used for a computer, and will "I" be in there anywhere???

Re:what? where? (2, Funny)

cp.tar (871488) | more than 6 years ago | (#23859471)

This raises another question: how much porn can fit into one DNA molecule?
And should we store it in female DNA, just to be on the safe side?

Re:what? where? (0)

Anonymous Coward | more than 6 years ago | (#23859901)

What? No! I go for porn to AVOID women. Damn you and your fancy-schmancy ideas!

Re:what? where? (4, Informative)

Henriok (6762) | more than 6 years ago | (#23861413)

The L in Blue Gene/L stands for Lawrence Livermore National Laboratory, the site of the first installation.
The P in Blue Gene/P stands for "Petaflops", the target performance.
The Q in Blue Gene/Q is probably just the letter after P.
The C in Blue Gene/C stands for "cellular computing"; that project has since been renamed Cyclops64.

Linpack? So does it run Linux? (2, Funny)

cp.tar (871488) | more than 6 years ago | (#23858603)

Apparently, not necessarily. [netlib.org] It's just some Fortran routines.

So much for that joke.
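Indeed: LINPACK began as Fortran linear-algebra routines, and the benchmark's core task is solving a dense system Ax = b by Gaussian elimination (the original DGEFA/DGESL routines). A minimal pure-Python sketch of that operation, for illustration only; real benchmark runs use heavily tuned code:

```python
# Linpack's core task: solve a dense linear system A x = b by Gaussian
# elimination with partial pivoting. Pure-Python sketch for illustration.

def solve(A, b):
    n = len(A)
    # work on copies so callers' data is untouched
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n):
        # partial pivoting: bring the largest pivot in column k to row k
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # back substitution on the upper-triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[2.0, 1.0, 1.0], [4.0, -6.0, 0.0], [-2.0, 7.0, 2.0]]
b = [5.0, -2.0, 9.0]
print(solve(A, b))  # -> [1.0, 1.0, 2.0]
```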

Re:Linpack? So does it run Linux? (3, Informative)

k_187 (61692) | more than 6 years ago | (#23859661)

Actually, Intrepid does run Linux, according to the list.

Re:Linpack? So does it run Linux? (3, Funny)

ZephyrXero (750822) | more than 6 years ago | (#23859775)

Better question: Does Intrepid run Intrepid [ubuntu.com] ?

Third post (-1, Troll)

Anonymous Coward | more than 6 years ago | (#23858611)

bwahahah!

Perhaps even more importantly (3, Informative)

SpaFF (18764) | more than 6 years ago | (#23858617)

This is the first time a system on the TOP500 has passed the Petaflop mark.

Re:Perhaps even more importantly (2, Informative)

bunratty (545641) | more than 6 years ago | (#23858693)

"The supercomputer has a peak performance of 557 teraflops."

This is the first time a system on the TOP500 has passed the Petaflop mark.
Or 0.557 petaflops, but who's counting?

Re:Perhaps even more importantly (5, Informative)

clem.dickey (102292) | more than 6 years ago | (#23858849)

Or 0.557 petaflops, but who's counting?

You were misled by a terrible headline. The 0.557 petaflop computer is the fastest *for open science.* Roadrunner, at Los Alamos, tops the list. It does 1 petaflop.

Petaflops (2, Informative)

Henriok (6762) | more than 6 years ago | (#23861445)

..or more correctly: 1 Petalops. Can't leave the trailing "s" out, it stands for "second". "Floating point operations per" doesn't mean much.

Re:Petaflops (0, Offtopic)

ran-o-matic (667054) | more than 6 years ago | (#23861559)

The lack of any sort of post editing sux, doesn't it :)

Re:Petaflops (0)

Anonymous Coward | more than 6 years ago | (#23862065)

.. or even more correctly: 1 Petaflops. Can't leave the "f" out, as you noted, it stands for "floating". "Point operations per" doesn't mean much.
--

- J.

Re:Perhaps even more importantly (1)

SpaFF (18764) | more than 6 years ago | (#23858873)

RTFA. Yes, the supercomputer being discussed in the article has a peak performance of 557 TF, however it is number 3 on the TOP500 list. Number one on the TOP500 list is now over 1PF.

Re:Perhaps even more importantly (1)

cp.tar (871488) | more than 6 years ago | (#23858703)

Let me know when a system not on the list passes the petaflop mark.
That will be newsworthy.

Re:Perhaps even more importantly (1)

Dak RIT (556128) | more than 6 years ago | (#23859665)

Extrapolating from the performance development chart [top500.org], which shows a 10-fold increase about every 4 years (desktop computers should be pretty similar), and assuming top desktop computers today hit around 100 gigaflops, you can expect we'll hit that sometime around 2024.
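The parent's estimate works out (a sketch assuming a clean 10x every 4 years from a 100-gigaflop desktop in 2008):

```python
import math

desktop_flops_2008 = 100e9   # the parent's ~100 gigaflop desktop estimate
target_flops = 1e15          # one petaflop

# A 10x increase every 4 years means 4 years per factor-of-ten gap:
years = 4 * math.log10(target_flops / desktop_flops_2008)
print(2008 + years)   # -> 2024.0
```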

Re:Perhaps even more importantly (0)

Anonymous Coward | more than 6 years ago | (#23859763)

But SkyNet is supposed to be online by 2011!

frost pist? (-1, Offtopic)

Anonymous Coward | more than 6 years ago | (#23858619)

damn, if only i had one of these.

Unclassified speed (1, Funny)

Anonymous Coward | more than 6 years ago | (#23858667)

The supercomputer has a peak performance of 557 teraflops and achieved a speed of 450.3 teraflops on the Linpack application used to measure speed for the Top 500 rankings.

And that's the unclassified speed. Just imagine how fast it can really go! Just like the SR-71!

Re:Unclassified speed (1)

Icegryphon (715550) | more than 6 years ago | (#23861429)

It is so fast that, like Aurora [wikipedia.org], you can't even see it.

Supercomputer (4, Funny)

sm62704 (957197) | more than 6 years ago | (#23858699)

Computer scientists building the monstrosity admit that it still isn't powerful enough to run VISTA with all the bells and whistles turned on.

George Broussard says that when the next generation of this machine reaches the desktop, Duke Nukem 4ever will be released. "Really", he said, "The game's been finished for over five years now. We're just waiting for a powerful enough computer to play it on."

Sources say that besides computitng power, DNF is waiting for the holographic display. The US Department of Energy's (DoE) high performance computing system lacks a holographic display.

Gamers were reportedly disappointed in the news, although most said the price of the DoE's new computer wouldn't faze them. "After all" one said, "you have to have a decent machine to play any modern game!"

Re:Supercomputer (1)

pitchpipe (708843) | more than 6 years ago | (#23861435)

Sources say that besides computitng power, DNF is waiting for the holographic display.

Wow, I'm waiting for that display too!

Re:Supercomputer (1)

sm62704 (957197) | more than 6 years ago | (#23862505)

Ok, I'll admit it; that typo was accidental. I wish it was on purpose ;)

Does not compute (4, Informative)

UnknowingFool (672806) | more than 6 years ago | (#23858721)

The title says: "'Intrepid' Supercomputer Fastest In the World" for open science while the article says "IBM's Blue Gene/P, known as 'Intrepid', is located at the Argonne Leadership Computing Facility and is also ranked third fastest overall." There needs to be some clarification. Roadrunner [networkworld.com] is considered the fastest in the world and is also built for the DOE. I'm guessing that Roadrunner is used exclusively by Los Alamos and is not available for open science while Intrepid is.

Re:Does not compute (1)

schklerg (1130369) | more than 6 years ago | (#23858901)

Well, the top500 website even shows Roadrunner as the fastest here [top500.org]. Blue Gene/P looks to be #2.

Re:Does not compute (1)

bockelboy (824282) | more than 6 years ago | (#23859253)

Roadrunner is open for unclassified research for 6 months IIRC, but will be reserved for classified research for the rest of the time.

I'd really, really, love to learn about the programming practices one follows for a computer like that.

Re:Does not compute (1)

Poorcku (831174) | more than 6 years ago | (#23859885)

Roadrunner has AMD's Opterons in it. So while Intel has the most, AMD has the fastest.

Only partially true (2, Informative)

Nursie (632944) | more than 6 years ago | (#23860889)

It's made of tri-blade clusters: one Opteron blade to do IO and various other mundane things, and then two Cell PowerXCell 8i (I think I have that right) blades to do the heavy lifting.

Nonsense (1)

Caboosian (1096069) | more than 6 years ago | (#23858815)

I've got my 2500+ XP overclocked to 1.9GHz. Beat that!

Imagine a beowulf cluster of these (0)

Anonymous Coward | more than 6 years ago | (#23858867)

Someone had to say it, this is slashdot after all.

The actual list (5, Informative)

Hyppy (74366) | more than 6 years ago | (#23858895)

Top500 [top500.org] has the actual list. Would have been nice to have this in TFA or TFS.

Inaccurate Summary (2, Informative)

Anonymous Coward | more than 6 years ago | (#23858963)

The title line of the summary isn't accurate - Intrepid is not the world's fastest supercomputer, just the fastest for 'open science'.

Love the fine print. (1)

kiehlster (844523) | more than 6 years ago | (#23859001)

I was thinking the Intrepid was the "Fastest in the World", but actually it's the fastest for open science. The DoE owns the top three on the list [top500.org]. Why do they need so many? If you're protecting the nation's energy, why not set an example and use less of it?

Re:Love the fine print. (1)

sm62704 (957197) | more than 6 years ago | (#23860525)

If you're protecting the nation's energy, why not set and example and use less of it?

Because the less energy there is, the more the DoE is needed. They have to protect their cushy jobs, you know.

Answer: nuclear bombs (1)

mangu (126918) | more than 6 years ago | (#23863827)

The DoE owns the top three on the list. Why do they need so many?

Because they simulate nuclear bombs, now that actual testing is forbidden by international treaty.

"Fastest supercomputer" an overused phrase. (1)

Skreech (131543) | more than 6 years ago | (#23859009)

Good for open science.

But this is yet another article that uses the phrase "fastest supercomputer" for attention, because somewhere in the article it can qualify which of the dozens of lists it's on. We have a new "fastest supercomputer" of some speed or other almost every week. See Roadrunner [slashdot.org].

"Fastest supercomputer uses Slashdot"
The fastest supercomputer in Skreech's living room has posted a post on Slashdot.

I don't understand. (1)

Kyokushi (1164377) | more than 6 years ago | (#23859019)

The top500 list [top500.org] clearly shows that Roadrunner is #1. What's this one then?

Key words: "For open science" (3, Informative)

LighterShadeOfBlack (1011407) | more than 6 years ago | (#23859425)

The top500 list [top500.org] clearly shows that Roadrunner is #1. What's this one then?
I'll let TFA answer this one:

IBM's Blue Gene/P, known as 'Intrepid', is located at the Argonne Leadership Computing Facility and is also ranked third fastest overall.
In other words I don't really know why this is news. I don't think anything has changed about its position recently (other than Roadrunner becoming #1 a few weeks back).

Cliche (0)

Anonymous Coward | more than 6 years ago | (#23859059)

Achievement aside, isn't the name a cliche? Yeah, naming the thing is probably the last thing in their minds, but do they really have to scrape the bottom of the barrel and come up with "Intrepid"? How many times have people come up with something fast and then decided "Oh, we'll call it, 'The Intrepid'!"

Re:Cliche (3, Funny)

Gewalt (1200451) | more than 6 years ago | (#23859409)

You keep using that word. I do not think it means what you think it means.

gDefine: Intrepid [google.com]

Re:Cliche (2, Funny)

sm62704 (957197) | more than 6 years ago | (#23860293)

Achievement aside, isn't the name a cliche?

Intrepid can refer to: [wikipedia.org]
  • Chevrolet Intrepid, the International Motor Sports Association GT Championship car, which raced from 1991 to 1993
  • William Stephenson, the Canadian World War II spymaster whose code name was Intrepid
  • Dodge Intrepid, the automobile
  • Intrepid Games, a satellite company of the computer game developer Lionhead Studios, now disbanded
  • The Lunar module of the 1969 Apollo 12 lunar landing mission
  • Several real and fictional ships named USS Intrepid
    • USS Intrepid (1798), was an armed ketch captured as a prize by the US Navy on 23 December 1803 and later exploded in the harbor of Tripoli 4 September 1804
    • USS Intrepid (1874), was an experimental steam torpedo ram commissioned 31 July 1874 and sold 9 May 1892
    • USS Intrepid (1904), was a training and receiving ship launched 8 October 1904 and sold 20 December 1921
    • USS Intrepid (CV-11), was an aircraft carrier launched 26 April 1943 and decommissioned 15 March 1974. Intrepid opened as a museum in New York City during August 1982 and is designated as a National Historic Landmark
    • The fictional Star Trek Starfleet includes a line of Intrepid-class starships
      • USS Bellerophon (NCC-74705) Transports Vice Admiral William Ross, Dr. Julian Bashir and Section 31 operative Luther Sloan from Deep Space Nine to Romulus in the Star Trek: Deep Space Nine episode "Inter Arma Enim Silent Leges".
      • USS Voyager (NCC-74656)
  • Several ships named HMS Intrepid
    • The first Intrepid was a third rate ship of the line captured from the French in 1747.
    • The second Intrepid was a third rate ship of the line built in 1770.
    • The sixth Intrepid was an Apollo class cruiser which was sunk as a blockship in the Zeebrugge raid.
    • The seventh Intrepid, was an I class destroyer launched in 1936, that served in World War II and was sunk by air attack in 1943.
    • The eighth Intrepid (L11), launched 1964, was a landing platform dock that served in the Falklands War.
  • An American Civil War military balloon aircraft named Intrepid (balloon aircraft)
  • Union of Border Worlds ship BWS Intrepid in Wing Commander IV: The Price of Freedom
  • US-22 America's Cup Intrepid (yacht)
  • The Intrepid Sea-Air-Space Museum in Manhattan
  • Intrepid Ibex, the codename for the 8.10 (October 2008) in-development release of the Ubuntu Linux operating system
  • Intrepid Travel, Australia based small group adventure company.
  • Intrepid Kart, an Italian kart chassis manufacturer
I guess they could have called it the "dauntless", but I'm not sure why anyone would give a supercomputer either name. A ship, sure, but you would think they would use a name that was a synonym for "speedy" for a supercomputer, not "fearless".

Roadrunner? (0)

Anonymous Coward | more than 6 years ago | (#23859117)

Wait. How's the #3 machine the fastest? No, I didn't RTFA 'cause I already read one this morning on Roadrunner:

http://top500.org/list/2008/06/100

Oh, I suppose the "for open science" disclaimer handles that. Way to sensationalize another headline /.

What's the framerate? (1)

aaaaaaargh! (1150173) | more than 6 years ago | (#23859217)

What framerate does Crysis have on this machine with all settings maxed out?

This article is wrong or confused (1)

Minter92 (148860) | more than 6 years ago | (#23859275)

I work with Argonne and am involved with the HPC world. Sadly, this article doesn't include a link to the actual top500 list, which would clear this mess up:
http://www.top500.org/lists/2008/06 [top500.org]

The #1 computer, the one over a petaflop, is RoadRunner at Los Alamos.

#2 is a Blue Gene machine from the DOE.

#3 is Intrepid at Argonne.

It's not clear to me how they could be so wrong in the article.

Top500 list, speculation, and private companies (1, Insightful)

painehope (580569) | more than 6 years ago | (#23859419)

Firstly, the Top 500 list is the "authoritative" list, released each year at the Supercomputing Convention. Until then, nothing is really official. Though the list has its own flaws, mostly from vendors submitting flawed benchmarks and/or guesswork.


Secondly, the real benchmark is the application. Some algorithms run better on some platforms and worse on others. Period. Unless you are running a highly specialized set of applications - and nothing but - the rule of thumb is "design the best system you can, that has the best overall performance for the majority of codes, and if it excels in one area, great". Of course, most supercomputing is FP intensive, so anything that has an excellent FPU architecture will probably be your best bet. And don't forget bottlenecks, like storage, network, memory, etc.


And, last but not least, remember that there are a lot of private companies with incredibly large systems. And most of those companies do not advertise their processing capabilities. Universities and government labs do, private industry generally doesn't. Of course, private industry is often trapped by the portability of their applications to new architectures, thus rendering the use of a system like Blue Gene/P useless to them.

Obligatory (0)

Anonymous Coward | more than 6 years ago | (#23859429)

"Imagine a cluster of these...."

Re:Obligatory (1)

JCSoRocks (1142053) | more than 6 years ago | (#23859827)

It's a Beowulf cluster, not just a cluster! You AC types can't even get the memes right anymore. What's Slashdot coming to!?

Fastest computer in the world. (1)

nurb432 (527695) | more than 6 years ago | (#23859501)

That we know about. I bet that somewhere behind closed doors there is nearly unlimited funding and a faster machine.

Booooring (4, Interesting)

Reality Master 101 (179095) | more than 6 years ago | (#23859507)

I liked it (back in the Old Days) when supercomputer rankings were based on linear, single-processor performance. Now it's just about how much money you can afford to spend putting a lot of processors in a single place. That was a real test of engineering. By the current standards, Google (probably) has the largest supercomputer in the world.

Unfortunately, single core performance seems to have hit the wall.

Wroooong (4, Informative)

dk90406 (797452) | more than 6 years ago | (#23859915)

Even in the Old Days, supercomputers had multiple processors.

--
In 1988, Cray Research introduced the Cray Y-MP®, the world's first supercomputer to sustain over 1 gigaflop on many applications. Multiple 333 MFLOPS processors powered the system to a record sustained speed of 2.3 gigaflops. --
The difference today is that almost all supercomputers use commodity chips, instead of custom designed cores.

Ohh - and the IBM one is almost half a million times faster than the 20-year-old '88 Cray model.
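The ratio is easy to check (a sketch; 1.026 petaflops is Roadrunner's June 2008 Linpack result, set against the Y-MP's 2.3 gigaflop sustained record):

```python
cray_ymp_flops = 2.3e9        # Cray Y-MP sustained record, 1988
roadrunner_flops = 1.026e15   # Roadrunner Linpack Rmax, June 2008

speedup = roadrunner_flops / cray_ymp_flops
print(f"{speedup:,.0f}x in 20 years")   # roughly 446,000x
```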

Re:Booooring (1, Insightful)

Anonymous Coward | more than 6 years ago | (#23860107)

So how come Roadrunner has half the number of processors of Intrepid, but is twice as fast?

In supercomputing it's all about bandwidth. It always was and always will be. That's also why Google isn't on the list - a bunch of off the shelf hardware sucks at bandwidth.

Re:Booooring (1)

javilon (99157) | more than 6 years ago | (#23860187)

One day Google's supercomputer will wake up to consciousness and we will all be his slaves.

Re:Booooring (0)

Anonymous Coward | more than 6 years ago | (#23860627)

One day Google's supercomputer will wake up to consciousness and we will all be his slaves.

I, for one, welcome...

Never mind. Too easy.

Re:Booooring (1)

ch-chuck (9622) | more than 6 years ago | (#23861141)

Actually, that would make a great sci-fi movie plot with a message about the evils of overly intrusive marketing. A large advertising media firm, with a corporate purpose of promoting products for shareholder profit, gradually accumulates CPU sentience as the company grows, eventually reaching consciousness and taking the fulfillment of its mission to extremes (like V'Ger returning from space), resulting in the eventual annoyance of every consumer on the planet.


Re:Booooring (4, Informative)

Salamander (33735) | more than 6 years ago | (#23860499)

That was a real test of engineering. By the current standards, Google (probably) has the largest supercomputer in the world.

Sorry, but no. As big as one of Google's several data centers might be, it can't touch one of these guys for computational power, memory or communications bandwidth, and it's darn near useless for the kind of computing that needs strong floating point (including double precision) everywhere. In fact, I'd say that Google's systems are targeted to an even narrower problem domain than Roadrunner or Intrepid or Ranger. It's good at what it does, and what it does is very important commercially, but that doesn't earn it a space on this list.

More generally, the "real tests of engineering" are still there. What has changed is that the scaling is now horizontal instead of vertical, and the burden for making whole systems has shifted more to the customer. It used to be that vendors were charged with making CPUs and shared-memory systems that ran fast, and delivering the result as a finished product. Beowulf and Red Storm and others changed all that. People stopped making monolithic systems because they became so expensive that it was infeasible to build them on the same scales already being reached by clusters (or "massively parallel systems" if you prefer). Now the vendors are charged with making fast building blocks and non-shared-memory interconnects, and customers take more responsibility for assembling the parts into finished systems. That's actually more difficult overall. You think building a thousand-node (let alone 100K-node) cluster is easy? Try it, noob. Besides the technical challenge of putting together the pieces without creating bottlenecks, there's the logistical problem of multiple-vendor compatibility (or lack thereof), and then how do you program it to do what you need? It turns out that the programming models and tools that make it possible to write and debug programs that run on systems this large run almost as well on a decently engineered cluster as they would on a UMA machine - for a tiny fraction of the cost.

Economics is part of engineering, and if you don't understand or don't accept that then you're no engineer. A system too expensive to build or maintain is not a solution, and the engineer who remains tied to it has failed. It's cost and time to solution that matter, not the speed of individual components. Single-core performance was always destined to hit a wall, we've known that since the early RISC days, and using lots of processors has been the real engineering challenge for two decades now.

Disclosure: I work for SiCortex, which makes machines of this type (although they're probably closer to the single-system model than just about anything they compete with). Try not to reverse cause and effect between my statements and my choice of employer.

Almost (1)

flaming-opus (8186) | more than 6 years ago | (#23864115)

"It's [google's data center] good at what it does, and what it does is very important commercially, but that doesn't earn it a space on this list."

This is the only false statement in your posting. Google's data centers are, in fact, a huge pile of Intel/AMD processors connected with a couple of lanes of gigabit Ethernet. True, they are not designed for HPC, and therefore cannot compete with real supercomputers on REAL HPC applications. However, the top500 list is generated using Linpack, and Linpack is a terrible representation of performance on real HPC applications: it almost exclusively rewards FP ALU throughput, scales almost perfectly on multicores, requires very little of the interconnect, and has very modest memory bandwidth needs. Linpack is about the only HPC application that would work on Google's data center. I bet they could put together some pretty good scores if they wanted to, but those machines are too busy making money to run silly benchmarks.

Otherwise, you're spot on, though it would help if you'd take the chip off your shoulder.

real measure (2, Insightful)

flaming-opus (8186) | more than 6 years ago | (#23861051)

Well, the real measure of the fastest computer has a lot to do with what software you want to run on it. In the case of the top500 list, Linpack scales almost perfectly as you add processor cores, and makes very limited demands on network speed, memory bandwidth, or single-processor performance. Other codes really can't scale past 16 processors, so these massive processor jumbles don't amount to a hill of beans for them.

Most codes are somewhere in between. The larger the machine gets, the more effort has to be put into designing the software to actually use all the hardware.

Re:Booooring (1)

raddan (519638) | more than 6 years ago | (#23861233)

Interconnecting all of those cores is a real engineering challenge. The basic problem is covered in elementary discrete mathematics books. These guys are most definitely still pushing the envelope.

Re:Booooring (0)

Anonymous Coward | more than 6 years ago | (#23862949)

If you look at the early ACM and IEEE research papers, every company had its own ideas on how performance could be improved: parallel processing using vector units (SIMD), multiple general-purpose CPUs (MIMD), superscalar pipelined instruction sets, custom ASICs. Unfortunately, anything with custom silicon ended up needing a custom communication network, custom chassis and custom racks.

All of these innovations have been incorporated into the current generation of CPUs. An active research group cannot justify spending time and resources trying to reinvent existing technology when it's not their core research area and existing off-the-shelf products can do the job just as well.

A couple of the areas of active research in CPU design that have still to be solved are asynchronous computing (eliminating all the chip real estate dedicated to clock circuitry) and wireless interprocessor communication (either through optical or RF links).

Exactly -- Parallelism isn't everything (1)

GPS Pilot (3683) | more than 6 years ago | (#23863149)

Many problems simply can't be parallelized. 95% of the time, throwing more cores at the problem doesn't help me. I hope per-core performance picks up a little pretty soon.
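That observation is Amdahl's law: if part of the work is inherently serial, adding cores runs into a hard ceiling. A minimal sketch:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Speedup when only parallel_fraction of the work can use extra cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# With 95% of the work parallelizable, speedup is capped at 1/0.05 = 20x,
# no matter how many processors you throw at it:
for n in (4, 16, 256, 65536):
    print(f"{n:>6} cores -> {amdahl_speedup(0.95, n):.1f}x")
```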

A supercomputer under your desk (1)

wvmarle (1070040) | more than 6 years ago | (#23859595)

It seems to me, at least superficially, that supercomputers these days do not use the fastest processors around. I'm sure there are processors faster than the Intels. They just use more of them.

Quite smart, as using commodity processors must save a lot of money compared to specialised processors. And I suppose it may make programming easier as the compilers for the architecture are there already, and are very mature in development.

But then, what we now call an average desktop was a supercomputer twenty years ago. And what's now merely capable of running Vista was called a supercomputer ten years ago, or at the very least a very powerful, specialised workstation.

But does it do warp 9.95? (0)

Anonymous Coward | more than 6 years ago | (#23859605)

I thought the Intrepid main computer used neuro-gel packs?

Read the fine print! (1)

Mesa MIke (1193721) | more than 6 years ago | (#23859895)

It says, "... for open science."

Here's [lanl.gov] the actual Fastest Computer in the World.

Intel (1)

genican1 (1150855) | more than 6 years ago | (#23859897)

Yeah, like IBM would use Intel chips in their top of the line supercomputers! They use Power chips, and not even very fast ones at that.

Re:Intel (1)

Ilgaz (86384) | more than 6 years ago | (#23861425)

How could Intel PR attach itself to a story that should be about the first ever documented petaflop supercomputer, made possible by IBM's low-power-per-MHz PowerPC processors and AMD processors?

Gotta respect such PR and sold-out tech journalists (!).

Where's Steele? (1)

CoolCucumber (183650) | more than 6 years ago | (#23860093)

Does anyone know why Purdue's 'Steele [purdue.edu]' system isn't on the list?

But how much power does it use? (1)

1sockchuck (826398) | more than 6 years ago | (#23861009)

This year the top 500 also tracks how much power is used by each system. Systems under development at Oak Ridge National Lab will reportedly have annual power bills of more than $30 million when they debut in 2012. See ComputerWorld [computerworld.com] and Data Center Knowledge [datacenterknowledge.com] for more.

Beowulf Cluster of PS3s (4, Interesting)

Doc Ruby (173196) | more than 6 years ago | (#23861085)

The supercomputer has a peak performance of 557 teraflops and achieved a speed of 450.3 teraflops on the Linpack application


The PS3's RSX video chip [wikipedia.org] from nVidia does 1.8 TFLOPS on specialized graphics instructions. If you're rendering, you get close to that performance. The PS3's CPU, the Cell [wikipedia.org], gets a theoretical 204 GFLOPS from its more general-purpose (than the RSX) onchip DSP-type SPEs, plus some more from its onchip 3.4 GHz PPC. A higher-end Cell with 8 SPEs (the PS3's Cell has only 7, less one for "chip utilities") delivers about 100 GFLOPS on Linpack 4096x4096. Overall a PS3 has about 2 TFLOPS, so 278 PS3s have a theoretical peak equal to this supercomputer. But they'd cost only $111,200. YMMV.
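A quick back-of-the-envelope check of the arithmetic above (the ~2 TFLOPS per console and ~$400 list price are the poster's figures, not official specs):

```python
import math

# Poster's figures: Intrepid's peak vs. a PS3's rough theoretical peak.
INTREPID_PEAK_TFLOPS = 557
PS3_PEAK_TFLOPS = 2.0
PS3_PRICE_USD = 400

# Consoles needed to reach Intrepid's peak, and the total sticker price.
consoles = math.ceil(INTREPID_PEAK_TFLOPS / PS3_PEAK_TFLOPS)
cost = consoles * PS3_PRICE_USD

print(f"{consoles} PS3s at ~${cost:,}")
```

Rounding up gives 279 consoles rather than the poster's 278, but either way the total is on the order of $110k, nowhere near a supercomputer budget.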

Re:Beowulf Cluster of PS3s (1)

Tweenk (1274968) | more than 6 years ago | (#23861837)

The only problem is that the RSX chip is inaccessible from Linux due to the hypervisor, and can only be utilized by games from Sony. So you actually get 1/10 of your stated 2 TFLOPS in supercomputing applications. This is because the PS3 itself is sold close to production cost or even at a loss, and Sony's real profit comes from games. Allowing the RSX to be used for supercomputing would destroy their business model.

Re:Beowulf Cluster of PS3s (1)

arktemplar (1060050) | more than 6 years ago | (#23862117)

Connecting them together, plus the fudge factor for making sure it's sustained performance and not the theoretical peak, would add the rest of the $110 million.

PS: yes, those are very expensive 'wires' used to 'connect' them.

Re:Beowulf Cluster of PS3s (0)

Anonymous Coward | more than 6 years ago | (#23863311)

It's all about Linpack; the nVidias and Cells of the world won't bench anywhere near their theoretical FLOP rating using Linpack.

Re:Beowulf Cluster of PS3s (1)

Doc Ruby (173196) | more than 6 years ago | (#23863603)

The Cell scored 50% of its theoretical rating on Linpack, at 100GFLOPS, as I said. For $400.

Headline (1)

Mistah Bunny (1256566) | more than 6 years ago | (#23861667)

That's a misleading headline. It's the fastest "open science" supercomputer. Roadrunner has it beat.

The summary is wrong - both Intel and AMD together (3, Interesting)

Tweenk (1274968) | more than 6 years ago | (#23861709)

It's not Intel chips that have a 74.8% share, it's x86 chips, which are produced by both AMD and Intel. In fact, 7 of the top 10 systems use x86 hardware: the 4 faster ones use AMD Opterons (the Crays are also Opteron-based) while the 3 slower ones use Xeons.
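The summary's 74.8 percent figure is just the system count over the list size, so the correction above only changes who gets credit for it, not the number itself:

```python
# 374 of the 500 listed systems use x86 processors (Intel and AMD combined).
x86_systems, total_systems = 374, 500
share = 100 * x86_systems / total_systems

print(f"{share:.1f}% of the Top 500 run x86")
```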

I swear I will not hurt anyone... (1)

vorlich (972710) | more than 6 years ago | (#23861853)

Does Sarah Connor know about this?

Beowulf cluster of... (0)

Anonymous Coward | more than 6 years ago | (#23862293)

They made a supercomputer out of a Beowulf fluster of Dodge Intrepids? I knew I shouldn't have settled for the Stratus!

Distributed computing is the new thing. (0)

Anonymous Coward | more than 6 years ago | (#23862803)

These expensive behemoths are a thing of the past in terms of raw computing power. I believe the SETI@home network to have the most power at its fingertips. Of course, there are still tasks that can't be distributed easily so there is still justification for the existence of these sauroids. Let's hope the john DOEs put it to good use and don't just run nukulah bomba osama simulations on it...

There are only 12 supercomputers this year (1)

peter303 (12292) | more than 6 years ago | (#23862837)

I define a supercomputer as the top order of magnitude of speed. That would be 100 to 1000 teraflops in mid-2008, or 12 computers.
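The cutoff rule above is easy to sketch: keep only the systems within one order of magnitude (10x) of the fastest entry. The Rmax values below are illustrative stand-ins, not the real June 2008 list:

```python
# A "supercomputer" per the poster's definition: any system whose Linpack
# score is within 10x of the fastest machine on the list.
def supercomputers(rmax_tflops):
    top = max(rmax_tflops)
    return [r for r in rmax_tflops if r >= top / 10]

# Illustrative Rmax scores in TFLOPS (not the actual Top 500 entries).
sample = [1026, 450.3, 205, 133, 106, 77, 51, 40]
print(supercomputers(sample))
```

With a ~1 PFLOPS leader, the cutoff lands at roughly 100 TFLOPS, which is how the poster arrives at only a dozen or so "real" supercomputers.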