
SGI & NASA Build World's Fastest Supercomputer

timothy posted more than 9 years ago | from the but-does-it-run-windows dept.

Silicon Graphics

GarethSwan writes "SGI and NASA have just rolled out the new world number one fastest supercomputer. Its performance test (LINPACK) result of 42.7 teraflops easily outclasses the previous mark set by Japan's Earth Simulator of 35.86 teraflops AND that set by IBM's new BlueGene/L experiment of 36.01 teraflops. What's even more awesome is that each of the 20 512-processor systems runs a single Linux image, AND Columbia was installed in only 15 weeks. Imagine having your own 20-machine cluster?"
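For context on that benchmark figure: a LINPACK (HPL) score is just the fixed operation count of a dense n x n linear solve divided by wall-clock time. A minimal sketch of that accounting, using the standard HPL flop-count formula; the problem size and runtime below are made-up illustration values, not Columbia's actual run parameters:

<ecode>
# LINPACK/HPL accounting: the score is the fixed flop count of an
# n x n dense solve, divided by wall-clock time.
def hpl_teraflops(n, seconds):
    """Standard HPL operation count: 2/3*n^3 + 2*n^2 flops."""
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return flops / seconds / 1e12

# Hypothetical run parameters, for illustration only: a problem of
# order one million solved in ~4.3 hours scores ~43 teraflops.
print(f"{hpl_teraflops(n=1_000_000, seconds=4.3 * 3600):.1f} TFLOPS")
</ecode>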


Got the premier comment! (-1, Troll)

Anonymous Coward | more than 9 years ago | (#10638138)

Who wants frosty piss?

Re:Got the premier comment! (0)

Anonymous Coward | more than 9 years ago | (#10638252)

from floppy peckers?

hmmmm...... (4, Funny)

commo1 (709770) | more than 9 years ago | (#10638139)

Let's see them predict the weather.....

Re:hmmmm...... (5, Funny)

Anonymous Coward | more than 9 years ago | (#10638243)

Today we predict a high of +3 Funny, with localised Trolling.

Tomorrow looks like developing a slight rise in Insightful posts, but a drop in overall Informative. "First Post" will remain as a constant pattern.

That's nothing... (5, Funny)

Anonymous Coward | more than 9 years ago | (#10638145)

...when they hit the "TURBO" button on the front of the boxes they'll really scream.

Re:That's nothing... (5, Informative)

jm92956n (758515) | more than 9 years ago | (#10638258)

when they hit the "TURBO" button on the front of the boxes they'll really scream.

They did! According to a CNET article [com.com], they "quietly submitted another, faster result: 51.9 trillion calculations per second" (equivalent to 51.9 teraflops).

Read on to the next paragraph (5, Interesting)

jd (1658) | more than 9 years ago | (#10638393)

There it talks of a third run, at 61 teraflops, slightly over the 60 teraflops predicted.


Ok, so we have Linux doing tens of teraflops in processing, FreeBSD doing tens of petabits in networking, ... What other records can Open Source smash wide open?

Re:Read on to the next paragraph (5, Informative)

Troll-a-holic (823973) | more than 9 years ago | (#10638460)

From the article -

NASA Secures Approval in 30 Days
To accelerate NASA's primary science missions in a timely manner, high-end computing experts from NASA centers around the country collaborated to build a business case that Brooks and his team could present to NASA headquarters, the U.S. Congress, the Office of Management and Budget, and the White House. "We completed the process end to end in only 30 days," Brooks said.


Wow. That's incredibly fast, IMHO.

As the article mentions, I suppose NASA owes this to the success of their 512-processor Kalpana system [nasa.gov], named in honor of the late astronaut Kalpana Chawla.

And look at this --

"In some cases, a new Altix system was in production in as little as 48 hours," said Jim Taft, task lead, Terascale Applications Group, NASA. "This is starkly different from implementations of systems not based on the SGI architecture, which can take many months to bring to a reliable state and ready for science."

w00t! That's like super-fast in terms of development time. Good job, NASA. Way to go.

And what about the other companies mentioned in the article?

In addition to Intel Itanium 2 processors, the Columbia installation features storage technology from Brocade Communications and Engenio Information Technologies, Inc., memory technology from Dataram Corporation and Micron Technology, Inc. and interconnect technology from Voltaire.

I've not heard of any of them other than Voltaire - are they well known in this area, or are they defense/NASA contractors of some kind?

Re:Read on to the next paragraph (1, Informative)

Anonymous Coward | more than 9 years ago | (#10638462)

FreeBSD hasn't broken any networking records.

This time there really is a turbo button! (5, Informative)

Dink Paisy (823325) | more than 9 years ago | (#10638297)

This result was from the partially completed cluster, at the beginning of October. At that time only 16 of the 20 machines were online. When the result is taken again with all 20 machines, there will be a sizeable increase in that lead.

There's also a dark horse in the supercomputer race; a cluster of low-end IBM servers using PPC970 chips that is in between the BlueGene/L prototype and the Earth Simulator. That pushes the last Alpha machine off the top 5 list, and gives Itanium and PowerPC each two spots in the top 5. It's amazing to see the Earth Simulator's dominance broken so thoroughly. After so long on top, in one list it goes from first to fourth, and it will drop at least two more spots in 2005.

20 system cluster?!? (5, Funny)

Emugamer (143719) | more than 9 years ago | (#10638146)

I have one of those... in a spare room!

Who cares about a 20 system cluster, I want one 512-processor machine!

or 20, I'm not that picky

Everyone needs one! (5, Funny)

Dzimas (547818) | more than 9 years ago | (#10638147)

Just what I need to model my next H-bom... uhh... umm.... I mean render my next feature film. I call it "Kaboom."

Re:Everyone needs one! (4, Funny)

polecat_redux (779887) | more than 9 years ago | (#10638347)

Just what I need to model my next H-bom... uhh... umm.... I mean render my next feature film. I call it "Kaboom."

Not to be pedantic, but the correct term is "Freedom Bomb".

or (2, Funny)

The Islamic Fundamen (728413) | more than 9 years ago | (#10638150)

The World Series outcome!

Re:or (0)

Anonymous Coward | more than 9 years ago | (#10638192)

It's gonna be the Red Sox. 4-0 top of the 7th, up 2-0 in the series.

Re:or (1)

over_exposed (623791) | more than 9 years ago | (#10638199)

Been watching the games? It's gonna be Boston...

OMFG!!!!! (-1)

CodeWanker (534624) | more than 9 years ago | (#10638152)

They named it Colombia. So, I guess that means it'll crash faster than any other computer in history.

Ways you are wrong (3, Informative)

RealProgrammer (723725) | more than 9 years ago | (#10638228)

Computer superclusters don't even have O-rings.

They don't carry schoolteachers.

They don't fly in the air.

This runs Linux, not Windows. It won't crash.

Re:Ways you are wrong (2, Funny)

WormholeFiend (674934) | more than 9 years ago | (#10638342)

Computer superclusters don't even have O-rings

so it's not water-cooled?

[didn't RTFA]

Re:Ways you are wrong (0)

Anonymous Coward | more than 9 years ago | (#10638358)

Wrong shuttle.
You are referring to Challenger.
But then, a lack of facts generally doesn't bother most people.

Re:Ways you are wrong (1)

RealProgrammer (723725) | more than 9 years ago | (#10638428)

>Challenger

Oh, yeah. Oops. I actually watched that from Japan when I was in the service. Not happy.

Re:OMFG!!!!! (0)

Anonymous Coward | more than 9 years ago | (#10638234)

They named it Colombia.

Colombia? They're going to use it to predict cocaine production? Well, that certainly explains the Genesis crash...

Hmm... Nasa? (-1, Flamebait)

AcidFnTonic (791034) | more than 9 years ago | (#10638155)

NASA, eh? Well, hopefully Linux keeps this project from crashing, unlike previous NASA work...

Wow---- (5, Funny)

ZennouRyuu (808657) | more than 9 years ago | (#10638159)

I bet gentoo wouldn't be such a b**ch to get running with all of that compiling power behind it :)

and thats only 4/5 of the performance! (3, Informative)

m00j (801234) | more than 9 years ago | (#10638161)

According to the article it got 42.7 teraflops using only 16 of the 20 nodes, so the performance is going to be even better.

One is a parity bit... (3, Funny)

NotQuiteReal (608241) | more than 9 years ago | (#10638287)

... um never mind.

RAEM (redundant array of expensive machines) just doesn't ring right - too close to REAM.

Intent of NASA... (1)

Faustust (819471) | more than 9 years ago | (#10638162)


The major question is what does NASA hope to accomplish with this new setup?

With all of the new private space industry, NASA has been set free to explore the further reaches of space. The question is, where will they go next?

Re:Intent of NASA... (3, Funny)

SenatorTreason (640653) | more than 9 years ago | (#10638200)

Seti@Home. They'll be in the Top 10 in no time!

Re:Intent of NASA... (1)

OverlordQ (264228) | more than 9 years ago | (#10638230)

Well in TFA you would see this quote:
"Also significant is the number one," added Brooks, "because with just one of Columbia's 20 Altix systems, we've reduced the time required to perform complex aircraft design analysis from years to a single day."

The obligatory phrases... (0)

techmuse (160085) | more than 9 years ago | (#10638163)

Imagine a beowulf cluster of those...
In Soviet Russia, LINPACK simulates YOU.
All your nodes are belong to us.

Re:The obligatory phrases... (1)

over_exposed (623791) | more than 9 years ago | (#10638215)

You forgot: "Yeah, but does it run Linux?"

Re:The obligatory phrases... (1)

kst (168867) | more than 9 years ago | (#10638425)

All your nodes are belong to us.

I think you mean
All your node are belong to us.

And after further cooperation with Redmond... (4, Funny)

ferrellcat (691126) | more than 9 years ago | (#10638165)

...they were *almost* able to get Longhorn to boot.

Re:And after further cooperation with Redmond... (1)

onya (125844) | more than 9 years ago | (#10638395)

OMFGLOL

kind of like you *almost* made a funny?

its not the hardware thats important (5, Funny)

fender_rock (824741) | more than 9 years ago | (#10638168)

If the same software is used, it's not going to make weather predictions more accurate. It's just going to give them the wrong answer, faster.

Re:its not the hardware thats important (3, Interesting)

khayman80 (824400) | more than 9 years ago | (#10638268)

Well, maybe what makes the weather models inaccurate is the grid size of the simulations. If you try to model a physical system with a finite-element type of approach and set the grid size so large that it glosses over important dynamical processes, it won't be accurate.

But if you can decrease the grid size by throwing more teraflops at the problem, maybe we'll find that our models are accurate after all?
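To put rough numbers on that trade-off: in a 3D model, refining the grid spacing by a factor r multiplies the cell count by r cubed, and a stability-limited (CFL) time step adds roughly another factor of r. A back-of-the-envelope sketch of that scaling argument, a simplification rather than any particular weather code:

<ecode>
# Rough cost scaling for refining a 3D grid: cells grow as r^3 and a
# CFL-limited time step forces ~r times as many steps.
def relative_cost(r):
    """Cost multiplier when grid spacing shrinks by a factor of r."""
    return r**3 * r  # r^3 more cells, r more time steps

for r in (2, 4, 8):
    print(f"{r}x finer grid -> ~{relative_cost(r)}x the work")
# 2x -> 16x, 4x -> 256x, 8x -> 4096x: why resolution eats teraflops.
</ecode>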

Re:its not the hardware thats important (1)

fender_rock (824741) | more than 9 years ago | (#10638336)

Perhaps, but that might not always be the case. The other day, /. had an article on that old computer running the new Mac OS. It took about a week to get through some of the boot process, but it still worked fine. The only real way to test would be to run the new cluster and the older system with the same data and see what data is output. If they are the same, then new software should be written or the weather models need slight fine-tuning. If the results are different, then you would be correct, and the problem would be limited by the resources available to the computer. But speed doesn't necessarily always guarantee accuracy.

Re:its not the hardware thats important (3, Interesting)

chriguhose (676441) | more than 9 years ago | (#10638299)

I'm not an expert on this, but your statement is in my opinion not completely true. Weather forecasting is a little bit like playing chess. One has a lot of different paths to take to find the best solution. Increased computing power allows for "deeper" searches and increases accuracy. My guess is that more accuracy requires exponentially more computing power. Comparing the Earth Simulator to Columbia makes me wonder how much accuracy has increased in this particular case.

Re:its not the hardware thats important (1)

Hatta (162192) | more than 9 years ago | (#10638386)

The question is whether the limiting factor is the amount of data we have on the system, or how much we can do with that data. Weather is a fundamentally chaotic system, with sensitive dependence on initial conditions. So eventually any inaccuracy in the data will be amplified and throw off our predictions. But then again, we have a lot of data already, with satellites and weather balloons and airplanes flying through hurricanes, and so forth. Maybe we can squeeze a little more knowledge out of this data with this computer.
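That sensitive dependence is easy to demonstrate with the Lorenz system, the classic toy model of convection (a sketch, not a weather model): two trajectories started a hair's width apart stay together for a while and then diverge completely.

<ecode>
# Two Lorenz trajectories starting 1e-10 apart, integrated with a
# simple Euler step: the gap grows roughly exponentially.
def step(x, y, z, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-10, 1.0, 1.0)  # tiny error in the initial condition
for i in range(1, 40001):
    a, b = step(*a), step(*b)
    if i % 10000 == 0:
        print(f"t={i * 0.001:5.1f}  |x_a - x_b| = {abs(a[0] - b[0]):.3e}")
</ecode>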

20? Try 10240 (0)

thegoofeedude (771803) | more than 9 years ago | (#10638170)

20 Machines with 512 processors? I think of that more as 10240 machines, not just twenty. Impressive!

Re:20? Try 10240 (1)

Tet (2721) | more than 9 years ago | (#10638385)

20 Machines with 512 processors? I think of that more as 10240 machines, not just twenty. Impressive!

You may think that, but you'd be wrong. It's 20 machines. After all, you don't think of a 100 CPU Sun E15K as 100 machines, or even a dual CPU desktop as two machines. SSI on Linux has come a long way...

Re:20? Try 10240, no 2560, make it 20 after all. (3, Interesting)

anon mouse-cow-aard (443646) | more than 9 years ago | (#10638412)

Uhm... well, 2560 motherboards, 'cause they're quad-CPU... the Altix uses SGI C-bricks, which were built to house 4 IA64 CPUs per brick. OTOH... no... really, it is 20 machines with 512 processors each, because the memory is globally shared (all processors have access to all the memory, albeit at different latency and performance: NUMA, Non-Uniform Memory Access) and a single Linux kernel is running on the whole thing.
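On any NUMA Linux machine you can inspect that node layout directly through sysfs; a small sketch (the /sys/devices/system/node files are the stock Linux interface, and a single-image 512-CPU Altix would list far more nodes than a desktop):

<ecode>
# List each NUMA node and the CPUs local to it, via Linux sysfs.
import glob
import os

for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    with open(os.path.join(node, "cpulist")) as f:
        print(f"{os.path.basename(node)}: CPUs {f.read().strip()}")
</ecode>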

In other news... (2, Funny)

thedogcow (694111) | more than 9 years ago | (#10638172)

SGI & NASA now have developed a computer that will be able to run Longhorn.

Re:In other news... (0)

Anonymous Coward | more than 9 years ago | (#10638399)

Hahaha!!!!!

Do they really want to name it Columbia? (-1, Flamebait)

xxxJonBoyxxx (565205) | more than 9 years ago | (#10638173)

Do they really want to name it Columbia?

Isn't that French for "crash-and-burn"?

Imagine.... (0)

wolfemi1 (765089) | more than 9 years ago | (#10638174)

....A beowulf cluster of.... holy crap!

I think you meant to say... (2, Funny)

spineboy (22918) | more than 9 years ago | (#10638335)

A Beowolf cluster of Beowolf clusters....

ARRRGGGHHHH PEOPLE'S HEADS ARE EXPLODING!!!

Now you know that there's some engineer with access to this thing thinking about how he can jump to the front of SETI@HOME.

Yeah, but can it do... (-1, Troll)

mosel-saar-ruwer (732341) | more than 9 years ago | (#10638177)


...42.7 teraFirstPosts?

Photos of System (5, Informative)

erick99 (743982) | more than 9 years ago | (#10638178)

This page [sgi.com] contains images of the NASA Altix system. After reading the article I was curious as to how much room 10K or so processors take up.

Interesting Facts (4, Informative)

OverlordQ (264228) | more than 9 years ago | (#10638181)

1) This was fully deployed in only 15 weeks.
(Link [sgi.com] )

2) This number was using only 16 of the 20 systems, so a full-system benchmark should score even higher.
(link [sgi.com] )

3) The attached storage holds 44 LoCs (Libraries of Congress) (link [sgi.com] )

Re:Interesting Facts (1)

chriguhose (676441) | more than 9 years ago | (#10638391)

According to http://das.doit.wisc.edu/misc/top500.jpg/ [wisc.edu] this number was reached using 16 boxes containing 504 processors each.

A little further down in the same source (6th position, to be exact) one can find another measurement they made using 8 x 512 processors; the result was 19.56 TFlop/s.

Nah....... (1)

KenwoodTrueX (825304) | more than 9 years ago | (#10638184)

I wonder if something like the SETI project results in a supercomputer even faster than this? Millions of desktops linked together could be. Something like that is probably not counted, though (although I still consider it a supercomputer myself).


Imagine a... (2, Funny)

Anonymous Coward | more than 9 years ago | (#10638185)

...single node of these...

oh wait, sorry, Cray deja-vu :-)

Finally.... (0)

Anonymous Coward | more than 9 years ago | (#10638193)

...a computer that can run Doom at 60 FPS.

Ok, what is the point of this? (-1, Troll)

Bold Marauder (673130) | more than 9 years ago | (#10638196)

Really, given the fact that most popular computers have enough processing power to handle anything, and the fact that clustering technology has evolved and is usable in case they aren't... what is the point of the "super computer"? As a layman, it seems pretty obvious to me that there's no need for this; it's just ego and publicity which is going to eventually just DRIVE UP TAXES.

So, again, what is the point, exactly?

Re:Ok, what is the point of this? (0)

Anonymous Coward | more than 9 years ago | (#10638219)

You're kidding, right?

Re:Ok, what is the point of this? (1)

Cryect (603197) | more than 9 years ago | (#10638220)

Because most popular computers don't have enough processing power to handle anything.

Sure they have plenty of processing power if you aren't running complex simulations, but if you are doing any type of scientific simulation it's not hard to design a simulation that can bring a supercomputer to its knees.

Re:Ok, what is the point of this? (1)

Bold Marauder (673130) | more than 9 years ago | (#10638261)

Sure they have plenty of processing power if you aren't running complex simulations, but if you are doing any type of scientific simulation it's not hard to design a simulation that can bring a supercomputer to its knees.
Ok, I expected someone would say that, and that's fine. But isn't that exactly the scenario that clustering technologies have been created to be used in?

I seriously have a hard time imagining what kind of problem could not be solved with a cluster of Pentium 4s, each with 4-5 CPUs (for a total of approx 12-15 GHz each).

It certainly can't be a very commonly occurring one.

Re:Ok, what is the point of this? (1)

Chrispy1000000 the 2 (624021) | more than 9 years ago | (#10638327)

Try to plot the future of the solar system, with 10k+ objects of 1+ km diameter (guessing) for any significant length of time, factoring in merely the gravitational pull of all the objects upon one another. You'd be hard pressed to calculate that on your little network there. The thing is, they are not even factoring in solar flares, etc, etc.

Or what about something that can predict solar flares, or even create a reasonably working model of the sun? All the convection currents and magnetic field simulations would bring your system to its knees.

There's quite a few reasons why they need this much power, but, as you said, it's not exactly a large percentage. But then again, these things aren't all that common, either.

Re:Ok, what is the point of this? (1)

TechnologyX (743745) | more than 9 years ago | (#10638394)

Or, try to find a pattern in the stock market.

Just watch out for those pesky ants and Rabbis

(Before you mod me troll or whatever, it's a reference to the movie Pi [imdb.com], which strangely had very little to do with pi)

Re:Ok, what is the point of this? (0)

DAldredge (2353) | more than 9 years ago | (#10638376)

Stock Market sims for one...

Re:Ok, what is the point of this? (1)

Dr Tall (685787) | more than 9 years ago | (#10638251)

The point is that your first statement isn't entirely true. Sure, a popular computer *can* do everything, but how long it takes to do something is another matter. Simulation programs exist (for things such as human heart beats) that tie up hours of processing time on supercomputers, let alone on your personal "popular computer". Finally, I really don't think a lone supercomputer is going to raise your taxes significantly compared to, hrm... say a war?

Re:Ok, what is the point of this? (1)

synthparadox (770735) | more than 9 years ago | (#10638259)

Predicting weather, analysis of objects in conditions where hundreds and thousands of variables are present, etc.

I remember my dad worked on the Grand Challenge project [umn.edu] .

A single timestep took around an hour and took up around 60 nodes on an Origin 2000 system (I think that's what it was at the time). He did his processing at the MSC (Minnesota Supercomputing Institute). But with faster computers doing more calculations, research basically takes less time and money.

Re:Ok, what is the point of this? (4, Funny)

dagur (821323) | more than 9 years ago | (#10638293)

Yes what is the point? We all know the resulting answer is going to be 42.

Re:Ok, what is the point of this? (4, Insightful)

servognome (738846) | more than 9 years ago | (#10638414)

Really, given the fact that most popular computers have enough processing power to handle anything, and the fact that clustering technology has evolved and is usable in case they aren't... what is the point of the "super computer"?
The supercomputer is a cluster (10k+ processors in 20 nodes).
Not all applications/computations scale by just adding computers to the cluster.
An example would be solving for z: x=84+19, y=5*3, z=x+y
The ultimate solution z is limited by the speed x & y can be solved. You can have an individual computer solve for x and another for y in parallel. But no matter how many more computers you add, none of them can solve z until x & y are solved first, and none of them would speed up the computation of x & y.
After a certain scale, you do not get the benefits of parallel processing, so the only way to speed things up is to make each individual computer faster.
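The parent's example is easy to phrase in code. A sketch using Python's standard concurrent.futures: x and y can be computed concurrently, but z blocks on both results, so workers beyond two are wasted.

<ecode>
# x and y can be solved in parallel, but z must wait for both;
# the dependency chain, not the worker count, sets the pace.
from concurrent.futures import ThreadPoolExecutor

def solve_x():
    return 84 + 19   # x = 103

def solve_y():
    return 5 * 3     # y = 15

with ThreadPoolExecutor(max_workers=8) as pool:  # 6 workers sit idle
    fx, fy = pool.submit(solve_x), pool.submit(solve_y)
    z = fx.result() + fy.result()  # blocks until BOTH are done

print(z)  # 118
</ecode>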

Here's the current list... (4, Funny)

daveschroeder (516195) | more than 9 years ago | (#10638208)

Prof. Jack Dongarra of UTK is the keeper of the official list in the interim between the twice-yearly Top 500 lists:

http://www.netlib.org/benchmark/performance.pdf [netlib.org] See page 54.

And here's the current top 20 [wisc.edu] as of 10/26/04...

mankind has finally created... (-1, Redundant)

Anonymous Coward | more than 9 years ago | (#10638222)

...the minimum doom3 hardware requirement

Re:mankind has finally created... (1, Informative)

Anonymous Coward | more than 9 years ago | (#10638236)

Cool...something that won't slow to a crawl while playing Sims 2.

windows (0, Flamebait)

fender_rock (824741) | more than 9 years ago | (#10638227)

Too bad it's not running Windows. They could set a world record for fastest Windows crash after install. Mine's only a few minutes; imagine twenty 512-CPU systems!

NEC's seems to be faster (1, Informative)

Anonymous Coward | more than 9 years ago | (#10638231)

Just wanted to remind you of an earlier post on slashdot [slashdot.org] about NEC's SX-8 which has peak performance of 65 TFlops. Now, which one is the fastest?

Re:NEC's seems to be faster (3, Informative)

toby (759) | more than 9 years ago | (#10638288)

NEC's is announced, this one is installed.

PowerPC just got 0wned! (-1, Flamebait)

Anonymous Coward | more than 9 years ago | (#10638233)

Half the processors, and _more_ speed!
PowerPC sure looks good on paper, but doesn't do so well in the real world.

http://das.doit.wisc.edu/misc/top500.jpg

Re:PowerPC just got 0wned! (1)

aristotle-dude (626586) | more than 9 years ago | (#10638447)

Uh, 20 x 512 = 10240 Itanium 2 processors.

The System X cluster contained 1150 machines containing 2 CPUs each which equals 2300 CPUs in total. You were saying? Not to mention you are comparing an expensive Server CPU with a desktop/workstation CPU.

Why don't we wait for IBM to build a Power 4+ or Power 5 super cluster?

NASA.org? (5, Funny)

lnoble (471291) | more than 9 years ago | (#10638244)

Wow, I didn't know the NewAdvancedSearchAgent had such an interest or budget for supercomputing. I'd think they'd be able to afford their own web server, though, instead of being parked at domainspa.com and having to fill their entire page with advertisements.

Try NASA.GOV.

Re:NASA.org? (1)

lnoble (471291) | more than 9 years ago | (#10638306)

Fixed that oddly quick, for a slashdot editor.

What is the stumbling block? (5, Insightful)

Dancin_Santa (265275) | more than 9 years ago | (#10638245)

Why does it take so long to build a super computer, and why do they seem to be redesigned each time a new one is desired?

It's a little like how Canada's and France's nuclear power plant systems are built around standardized power stations, cookie-cutter if you will. The cost to reproduce a power plant is negligible compared to the initial design and implementation, so the reuse of designs makes the whole system really cheap. The drawback is that it stagnates the technology, and the newest plants may not get the newest and best technology. Contrast this with the American system of designing each power plant with the latest and greatest technology. You get really great plants each time, of course, but the cost is astronomical and uneconomical.

So too, it seems, with supercomputers. We never hear about how these things are thrown into mass production, only about how the latest one gets 10 more teraflops than the last, and all the slashbots wonder how well Doom 3 runs on it or whether Longhorn will run at all on such an underpowered machine.

But each design of a supercomputer is a massive success of engineering skill. How much cheaper would it become if, instead of redesigning the machines each time someone wants to feel more manly than the current speed champion, the current design were rebuilt for a generation (in computer years)?

Re:What is the stumbling block? (1)

Dr Tall (685787) | more than 9 years ago | (#10638278)

But then what are the engineers supposed to do? Bored engineers like making new supercomputers.

Although I joke, I do see your point. Perhaps it would be wiser if we left our current supercomputer designs alone for a while until we really need an upgrade. Maybe they could spend some of their time fixing Windows instead?

Re:What is the stumbling block? (1)

Doppler00 (534739) | more than 9 years ago | (#10638309)

Here's one reason it takes so long: you have to construct a special building to put a large supercomputer in. That could take several months to years to complete. You can't just set up computers in any old warehouse; you need the proper power, air conditioning systems, cable conduits, etc...

Bringing pre-manufactured supercomputers into the building is probably the easiest step.

Re:What is the stumbling block? (0)

Anonymous Coward | more than 9 years ago | (#10638433)

Because supercomputing is about pushing the envelope. You don't push the envelope by reusing old designs; you incorporate new research into building a better machine. This technology is tested on the big projects and trickles down to the enterprise and/or consumer level sooner or later. Do you suppose anybody would do this if the numbers didn't add up financially?

As an aside, I'd much rather live with a hodge-podge of power plants, some cutting edge, some old, than a uniformly ancient, unsafe grid of socialist-designed power stations.

Re:What is the stumbling block? (2, Interesting)

kst (168867) | more than 9 years ago | (#10638440)

Why does it take so long to build a super computer ...
It doesn't. [rocksclusters.org]

Which NASA is this again? (1)

wviperw (706068) | more than 9 years ago | (#10638266)

Ermm, which NASA are we talking about again?

National Aeronautics and Space Administration [nasa.gov]
New Advanced Search Agent [nasa.org]

Re:Which NASA is this again? (1)

wviperw (706068) | more than 9 years ago | (#10638277)

Guess I am too late, the munchkins already switched the article link to the .gov site.

Re:Which NASA is this again? (1)

jd (1658) | more than 9 years ago | (#10638432)

I suspect the latter. The space agency is busy building a catapult large enough to send astronauts to Mars.

will soon be surpassed... (4, Informative)

Doppler00 (534739) | more than 9 years ago | (#10638283)

by a computer currently being set up at Lawrence Livermore National Lab: 360 teraflops [zdnet.com]

The amazing thing about it is that it's built at a fraction of the cost/space/size of the Earth Simulator. If I remember correctly, I think they already have some of the systems in place for 36 teraflops. It's the same Blue Gene/L technology from IBM, just at a larger scale.

Cost (5, Interesting)

MrMartini (824959) | more than 9 years ago | (#10638302)

Does anyone know how much this system cost? It would be interesting to see how good a teraflops-per-million-dollars ratio they achieved.

For example, I know the Virginia Tech cluster (1,100 Apple Xserve G5 dual 2.3GHz boxes) cost just under $6 million and runs at a bit over 12 teraflops, so it gets a bit over 2 teraflops per million dollars.

Other high-ranking clusters would be interesting to evaluate in terms of teraflops per million dollars, if anyone knows any.
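The arithmetic for the Virginia Tech figures above, as a sketch; the inputs are the parent's approximate numbers, not official pricing:

<ecode>
# Price/performance from the parent comment's approximate numbers.
def tflops_per_million(tflops, cost_dollars):
    return tflops / (cost_dollars / 1e6)

# ~12.25 TFLOPS for just under $6M:
print(f"{tflops_per_million(12.25, 5.9e6):.2f} TFLOPS per $1M")  # ~2.08
</ecode>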

What happened to NEC's new Vector Supercomputer (1)

alphan (774661) | more than 9 years ago | (#10638311)

65 > 43

It was here on slashdot last week [slashdot.org] , IIRC. :)

Re:What happened to NEC's new Vector Supercomputer (0)

Anonymous Coward | more than 9 years ago | (#10638422)

According to the press release [linuxworld.com.au] , 65 teraflops is only the predicted theoretical performance; it hasn't actually been built and tested in Real Life.

Not fully true (2, Informative)

ValiantSoul (801152) | more than 9 years ago | (#10638332)

They were only using 16 of those 20 servers. With all 20 they were able to peak at 61 teraflops. Check the article [com.com] at CNET.

Ya know... (5, Funny)

Al Al Cool J (234559) | more than 9 years ago | (#10638343)

It's getting to the point where I'm going to have to call shenanigans on the whole freakin' planet. Am I really supposed to believe that an OS started by a Finnish university student a decade ago and designed to run on a 386 is now running the most powerful computer ever built? I mean, come on!

Seriously, am I on candid camera?

The important question is... (1)

comrade009 (797517) | more than 9 years ago | (#10638352)

Can it play Doom 3?

My proposed use of this super computer.... (4, Funny)

chicagozer (585086) | more than 9 years ago | (#10638353)

Emulating a Centris 650 running Mac OS X at 2.5 GHz.

This is a surprising development. Congrats to SGI (1)

PenguinOpus (556138) | more than 9 years ago | (#10638354)

This is very surprising. SGI has been waning for the last several years, and the top spot on the supercomputer list has been static for two years, waiting for someone to build something better than the Earth Simulator at a reasonable price. For them to get 80% of the machines working in 15 weeks and get 42 TFlops out of it is very impressive.

Congratulations to the remaining engineering team at Silicon Graphics!

Yes, but... (1)

aardwolf204 (630780) | more than 9 years ago | (#10638380)

Yes, but does it run linux?

Re:Yes, but... (0)

Anonymous Coward | more than 9 years ago | (#10638421)

Yes it does. RTFA.

Yes (0)

Anonymous Coward | more than 9 years ago | (#10638442)

Just about all super computers do.

70.93 TeraFLOPs (5, Interesting)

chessnotation (601394) | more than 9 years ago | (#10638382)

Seti@home is currently reporting 70.93 teraflops. It would be Number One if the list were a bit more inclusive.

In Soviet Russia, Beowulf Clusters Imagine You! (0)

Nova Express (100383) | more than 9 years ago | (#10638445)

Sorry, but under the Mandatory Cliche Consolidation Act of 2004, it had to be said.

I feel so dirty now...

And Itanium2 takes the lead! (1)

Thaidog (235587) | more than 9 years ago | (#10638446)

Damn who would have thought? Only SGI could make that possible!

And with all that power (1)

krray (605395) | more than 9 years ago | (#10638455)

And with all that power they feel the need to .ZIP their .JPG images which actually shaved off an entire 4K on this single 848K file. Wow. I should have thought of that.

Columbia [sgi.com]

Where can I get one? (1)

Anhaedra (760705) | more than 9 years ago | (#10638458)

Where can I get one of these machines, how much will it cost, and how do they score on 3DMark 2005?