
$208 Million Petascale Computer Gets Green Light

samzenpus posted more than 5 years ago | from the that's-a-lot-of-solitaire dept.

Supercomputing

coondoggie writes "The 200,000-processor-core system known as Blue Waters got the green light recently, as the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications (NCSA) announced they have finalized the contract with IBM to build the world's first sustained-petascale computational system. Blue Waters is expected to deliver sustained performance of more than one petaflop on many real-world scientific and engineering applications. A petaflop equals about 1 quadrillion calculations per second. The processors will be coupled to more than a petabyte of memory and more than 10 petabytes of disk storage. All of that memory and storage will be globally addressable, meaning that processors will be able to share data from a single pool exceptionally quickly, researchers said. Blue Waters is supported by a $208 million grant from the National Science Foundation and will come online in 2011."


imagine... (5, Funny)

spiffmastercow (1001386) | more than 5 years ago | (#24866369)

nah, nevermind

Yes, but the article doesn't address a few questio (0, Redundant)

Anonymous Coward | more than 5 years ago | (#24866417)

  1. does it run Linux?
  2. has anyone contemplated creating a Beowulf cluster of these things?
  3. will this, like a 115MPH fastball, obsolete all the other computers in the world in activities like first posting?

Re:Yes, but the article doesn't address a few ques (4, Funny)

QuantumRiff (120817) | more than 5 years ago | (#24866737)

It will not run 32-bit Linux, so of course the admins in charge are going to bitch about the lack of Adobe Flash support.

Re:Yes, but the article doesn't address a few ques (2, Funny)

KGIII (973947) | more than 5 years ago | (#24867205)

Nah but it will finally run Vista.

oblig. (0, Offtopic)

fuocoZERO (1008261) | more than 5 years ago | (#24866419)

Skynet ring a bell? Mark your calendars 2011... (yeah... so maybe I just got done watching the Terminator movies...)

Re:oblig. (2, Informative)

Bill, Shooter of Bul (629286) | more than 5 years ago | (#24866509)

No. This is Urbana, Illinois. HAL 9000 [wikipedia.org] would be more appropriate.

Re:oblig. (1)

QuantumG (50515) | more than 5 years ago | (#24866519)

Ya know there's a tv series now? Season 2 is about to start.

enjoy ;) [mininova.org]

Re:oblig. (1)

Ilgaz (86384) | more than 5 years ago | (#24867275)

I wouldn't be surprised if the actual delivery date was 2012 and some government official said, "IBM guys, there is a possibility that the Terminator freaks and the Mayan 2012 freaks will join forces; change it to 2011."

Look what CERN had to deal with, and is still dealing with, on the Large Hadron Collider :)

But... (0, Redundant)

Light303 (1335283) | more than 5 years ago | (#24866427)

does it run Vista? ... oh wait!

Re:But... (0)

Anonymous Coward | more than 5 years ago | (#24866543)

does it run Vista? ... oh wait!

Probably not powerful enough to meet Vista's minimum requirements.

$208 Million Petascale Computer Gets Green Light (5, Funny)

Naughty Bob (1004174) | more than 5 years ago | (#24866445)

I'm glad they've given it a green light.

Imagine having all that computer power, and not even knowing if it was switched on!

Re:$208 Million Petascale Computer Gets Green Ligh (1, Funny)

Anonymous Coward | more than 5 years ago | (#24866477)

Too bad it wasn't a red light with all those geeks around... I know I'd be interested!

Re:$208 Million Petascale Computer Gets Green Ligh (0)

Anonymous Coward | more than 5 years ago | (#24866483)

That sure would have somebody seeing red! (yuk, yuk, yuk)

Re:$208 Million Petascale Computer Gets Green Ligh (0)

Anonymous Coward | more than 5 years ago | (#24866587)

Insightful? It's [at least an attempt] at humor.

Re:$208 Million Petascale Computer Gets Green Ligh (1)

DieByWire (744043) | more than 5 years ago | (#24868059)

I'm glad they've given it a green light.

Me too. It seems like Urbana sure learned their lesson about giving big, powerful computers red lights [slashfilm.com] about seven years ago. [wikipedia.org]

You know, that IS impressive but... (2, Funny)

Xaedalus (1192463) | more than 5 years ago | (#24866469)

Can it figure out how to brew the 'perfect' cup of coffee?

Re:You know, that IS impressive but... (2, Informative)

InlawBiker (1124825) | more than 5 years ago | (#24866533)

I think you meant tea. [wikipedia.org]

Re:You know, that IS impressive but... (1)

Ungrounded Lightning (62228) | more than 5 years ago | (#24867285)


You know, that IS impressive but... Can it figure out how to brew the 'perfect' cup of coffee?

I think you meant tea.

No, I think Xaedalus meant the [girlgeniusonline.com] perfect cup [girlgeniusonline.com] of coffee [slashdot.org] .

Fixed links: Re:You know, that IS impressive but.. (1)

Ungrounded Lightning (62228) | more than 5 years ago | (#24867451)

(That was SUPPOSED to be a preview while I got the links right. Here it is with the links fixed. View them in order...)

You know, that IS impressive but... Can it figure out how to brew the 'perfect' cup of coffee?

I think you meant tea.

No, I think Xaedalus meant the [girlgeniusonline.com] 'perfect' [girlgeniusonline.com] cup [girlgeniusonline.com] of coffee [girlgeniusonline.com] .

Naive question... (2, Interesting)

religious freak (1005821) | more than 5 years ago | (#24866475)

Yes, I know this is probably a very naive question, but has anyone here actually had the privilege of working on one of these things? I mean, what do they actually use this for?

I think it's awesome, but are there any concrete advancements that can be attributed to having access to all this computing power?

Just wondering...

Re:Naive question... (2, Funny)

Naughty Bob (1004174) | more than 5 years ago | (#24866507)

I mean, what do they actually use this for?

I think it has been designed to run IE8 beta 2.

Re:Naive question... (1)

religious freak (1005821) | more than 5 years ago | (#24866557)

Yeah, I hear it might even be able to run Vista... that is once they upgrade the memory.

Re:Naive question... (4, Funny)

Glith (7368) | more than 5 years ago | (#24866593)

Come on now. Let's be serious. They're trying to play Crysis.

Re:Naive question... (3, Interesting)

Ilgaz (86384) | more than 5 years ago | (#24867035)

Did you know that a very credible FAQ mentions Apple purchased a Cray for manufacturing/design and someone actually saw them emulate MacOS on that monster?

http://www.spikynorman.dsl.pipex.com/CrayWWWStuff/Cfaqp3.html#TOC23 [pipex.com]

I bet they tried some games too :)

Re:Naive question... (1)

BiggerIsBetter (682164) | more than 5 years ago | (#24867999)

Did you know that a very credible FAQ mentions Apple purchased a Cray for manufacturing/design and someone actually saw them emulate MacOS on that monster?

When told that Apple bought a super computer to design their next Mac, Seymour Cray replied, "That's odd, I'm using a Mac to design my next supercomputer."

Re:Naive question... (1)

Ilgaz (86384) | more than 5 years ago | (#24868127)

Cray and Steve Jobs thought along interestingly similar lines.

Consider that nobody (except armed guards and some top-clearance people) would ever actually see the supercomputer, and yet the guy still used a Mac laptop to display a Macromedia-powered animation on its case; you can easily conclude the two were very alike. That thing you see on the machine is actually a Mac PowerBook http://www.spikynorman.dsl.pipex.com/CrayWWWStuff/Criscan/t3d_fr.jpg [pipex.com] . Poor thing displays a single animation all its life :)

Re:Naive question... (2, Funny)

cibyr (898667) | more than 5 years ago | (#24868247)

I bet they tried some games too :)

Nonsense! Everyone knows there aren't any games for mac :P

Re:Naive question... (1)

Ilgaz (86384) | more than 5 years ago | (#24868419)

I was about to add the famous "Breakout, Super Breakout" joke to my post, but I forgot the other games' names.

Using a Mac myself and knowing how evil Mac moderators can be has nothing to do with it, of course ;)

Re:Naive question... (1)

Dutchmaan (442553) | more than 5 years ago | (#24867235)

Now now... let's be totally fair... They're trying to play Crysis on Vista!

Re:Naive question... (3, Funny)

adona1 (1078711) | more than 5 years ago | (#24868071)

Actually, no, they're future proofing their computer for Duke Nukem Forever :)

Re:Naive question... (0)

Anonymous Coward | more than 5 years ago | (#24867861)

This is really funny!

Re:Naive question... (4, Interesting)

serviscope_minor (664417) | more than 5 years ago | (#24866675)

I don't use one myself, but I know people involved with supercomputers. They are used for large simulations. Often this comes down to solving large systems of linear equations, since at the inner step finite-element methods need solutions to these large equation systems. The point is, the larger the computer, the larger the grid you can have. This means simulating a larger volume, or simulating the same volume in more detail (think, for example, of weather systems).

As for concrete advancements? I'm not in the biz, so I don't know, but I expect so. Apparently they're also used for stellar simulations, so I expect our knowledge of the universe has been advanced. I would be surprised if they haven't seen duty in global warming simulation too.
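To make that concrete, here is a toy version of the kind of linear system such codes spend their time on: a 1-D finite-difference discretization of -u'' = 1, small enough to solve densely with NumPy. This is purely illustrative; real codes use sparse, parallel solvers on billions of unknowns.

```python
import numpy as np

# 1-D finite-difference analogue of the big sparse systems described above:
# discretize -u''(x) = 1 on (0, 1) with u(0) = u(1) = 0 on n interior points,
# which yields the classic tridiagonal system A u = f.
n = 100
h = 1.0 / (n + 1)
A = (np.diag(np.full(n, 2.0))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1)) / h**2
f = np.ones(n)
u = np.linalg.solve(A, f)

# The exact solution is u(x) = x(1 - x)/2; this second-order scheme
# reproduces it to roundoff because the solution is a quadratic.
x = np.linspace(h, 1.0 - h, n)
print(float(np.abs(u - x * (1.0 - x) / 2.0).max()))
```

A bigger machine simply lets you shrink h, or move to 3-D, which is the point about grid size above.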

Re:Naive question... (4, Informative)

Deadstick (535032) | more than 5 years ago | (#24866781)

Weather modeling comes to mind, both terrestrial and space.

rj

Re:Naive question... (4, Informative)

mikael (484) | more than 5 years ago | (#24866869)

These machines are used for simulations involving aerodynamics and hydrodynamics, quantum electrodynamics (QED), or electromagnetohydrodynamics. All of these simulations require that a mathematical model be constructed from a high-density mesh of data points (2048^3). Blocks of such points are allocated to individual processors. Because of this, each processor must be able to communicate at high speed with its neighbours (up to 26 neighbours with a cubic mesh).

Usually, the actual per-element calculations will take up less than a page of mathematical equations, but they require high precision, so the data values will be 64-bit floating-point quantities. A single element might require 20 or more variables. Thus the need for so many processors and a high clock speed.
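A single-process sketch of that block layout, assuming nothing beyond NumPy: each processor owns a sub-cube of the mesh and, before each stencil update, fills a one-cell "ghost" border that on a real machine would arrive from neighbouring processors over the interconnect (here np.pad fakes the exchange).

```python
import numpy as np

def stencil_step(block):
    """One 6-neighbour averaging sweep over a 3-D block.

    The np.pad call stands in for the halo exchange: on a distributed mesh
    those border cells would be received from neighbouring ranks.
    """
    g = np.pad(block, 1, mode="edge")
    return (g[:-2, 1:-1, 1:-1] + g[2:, 1:-1, 1:-1] +
            g[1:-1, :-2, 1:-1] + g[1:-1, 2:, 1:-1] +
            g[1:-1, 1:-1, :-2] + g[1:-1, 1:-1, 2:]) / 6.0

block = np.random.rand(16, 16, 16)   # one rank's tiny share of a 2048^3 mesh
out = stencil_step(block)
print(out.shape)                     # (16, 16, 16): shape is preserved
```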

Re:Naive question... (5, Funny)

bh_doc (930270) | more than 5 years ago | (#24868639)

electromagnetohydrodynamics

And quantum electroptical tomographics. See, I can make shit up, too...

Re:Naive question... (1)

pablomme (1270790) | more than 5 years ago | (#24866877)

Yes, I know this is probably a very naive question, but has anyone here actually had the privilege of working on one of these things? I mean, what do they actually use this for?

The one application I know this computer is going to run is quantum Monte Carlo [wikipedia.org] , which is an electronic-structure method. QMC is intrinsically parallel due to its stochastic nature, but the degree of parallelism involved here requires further breakdown of the algorithm. There are quite a few research groups putting effort into this.

Other applications, if I am not mistaken, are also meant to be highly parallelizable, possibly nearing the boundary of embarrassingly parallel [wikipedia.org] tasks. This is probably to make sure that the resource is used to its full extent.
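That embarrassingly parallel shape is easy to see in a toy example (plain Python, nothing to do with real QMC codes): independent random streams do all the work, and only the final average requires any communication.

```python
import random

def worker(seed, samples):
    """Estimate pi by dart-throwing with a private, seeded RNG stream."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / samples

# Each call is fully independent, so each could run on its own core or node;
# here they simply run in a loop.
estimates = [worker(seed, 50_000) for seed in range(8)]
pi_hat = sum(estimates) / len(estimates)
print(pi_hat)
```

Stochastic methods inherit exactly this structure, which is why they scale so readily; the hard part, as noted above, is decomposing the rest of the algorithm.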

Re:Naive question... (5, Informative)

Ilgaz (86384) | more than 5 years ago | (#24866961)

Do you notice that neither the USA nor Russia blows up a portion of the planet to test nuclear weapons anymore? Is it because the planet is so peaceful that further research is not required? Unfortunately, no.

These monsters can simulate a gigantic nuclear explosion at the molecular level.

Or, for peaceful purposes, they can actually simulate that New Orleans storm based on real-world data and pinpoint exactly what would happen.

Re:Naive question... (2, Funny)

blantonl (784786) | more than 5 years ago | (#24867391)

Or, for peaceful purposes, they can actually simulate that New Orleans storm based on real-world data and pinpoint exactly what would happen.

Right.

That's why the City of New Orleans evacuated to Baton Rouge.

Re:Naive question... (1)

Ilgaz (86384) | more than 5 years ago | (#24868367)

I remember FEMA claiming that they had predicted this would happen and reported it to the government, but the government didn't care.

My post sounded like you would use a supercomputer only to do evil things, so I tried to balance it with New Orleans. In fact, every nuclear explosion avoided as a result of supercomputer simulation is a positive thing in itself. They will keep stupidly designing/testing the weapons anyway.

Re:Naive question... (1)

QuantumRiff (120817) | more than 5 years ago | (#24866977)

Think of the number of open tabs you could use in Google's new Chrome browser! With separate processes for each tab, they could have the entire internet open at once!

Re:Naive question... (1)

florescent_beige (608235) | more than 5 years ago | (#24868081)

I mean, what do they actually use this for?

Very detailed solutions of nonlinear field equations. The kind of thing that aerothermodynamics deals with.

If someone comes out of the woodwork who happens to be a cross between Alan Turing and Kelly Johnson, maybe that person could use a machine like this to design a combined cycle turbo/ram/scramjet and then Richard Branson could use it to power a real spaceship, not something that's just called a spaceship.

It's not that crazy to imagine a talented individual could simulate all the expensive work on scramjets that NASA has done over the last 15 years and maybe design a workable SSTO air-breathing flying thingy. I'd love that. Someone has to make Constellation look like the sad dinosaur it really is.

The real obstacle to that would be the simulation software. That's why it'll take a genius or two.

Re:Naive question... (4, Informative)

Rostin (691447) | more than 5 years ago | (#24868189)

I'm working on a PhD in chemical engineering, and I do simulations. I occasionally use Lonestar and Ranger, which are clusters at TACC, the U. of Texas' supercomputing center. Lonestar is capable of around 60 TFLOPS and Ranger can do around 500-600 TFLOPS. A few users run really large jobs using thousands of cores for days at a stretch, but the majority of people use 128 or fewer cores for a few hours at a time.

My research group does materials research using density functional theory, which is an approximate way of solving the Schroedinger equation. Each of our jobs usually uses 16 or 32 cores, and takes anywhere from 5 minutes to a couple of days to finish. Usually we are interested in looking at lots of slightly different cases, so we run dozens of jobs simultaneously.

The applications are pretty varied. Some topics we are working on -
1) Si nanowire growth
2) Si self-interstitial defects
3) Au cluster morphology
4) Catalysis by metal clusters
5) Properties of strained semiconductors

How many human brains is that? (1)

QuantumG (50515) | more than 5 years ago | (#24866493)

Apparently, by 2020, personal computers will have the same processing power as the human brain (Kurzweil 2005). My personal computer has 2 cores, my friend's personal computer has 8 cores, so let's say 4 cores is the average. Cores double every, what, 18 months? In the next 12 years there are 144 months, which is 8 doublings. So what's that, 1024 cores? So this computer is, clearly, 195 times smarter than a human!

Or maybe raw processing power just isn't a good indication of how near or far the Singularity is, ya think?

Re:How many human brains is that? (1)

religious freak (1005821) | more than 5 years ago | (#24866575)

Perhaps this is what the funding is actually for... Nah, that'd be giving the government way too much credit, right?

Re:How many human brains is that? (1)

Joe The Dragon (967727) | more than 5 years ago | (#24866699)

There are other limits to the system's power, like RAM bandwidth and size, and HD size and speed.

Re:How many human brains is that? (1)

QuantumG (50515) | more than 5 years ago | (#24866715)

Hehe, actually, the problem is a lack of *software*.

Re:How many human brains is that? (0)

Anonymous Coward | more than 5 years ago | (#24867071)

Estimates for the human brain are about 1-10 petaflops. So this super computer can, maybe, almost match the complexity of your brain.

Re:How many human brains is that? (1)

bussdriver (620565) | more than 5 years ago | (#24867619)

I'm familiar with the paper. He ballparks CPU-to-human simulation, and I seem to remember somebody else placing it around 2032.

The big issue often ignored is that neural networks are NETWORKS more than anything else: you can have as many transistors as you like, but if you cannot handle the ballpark number of interconnects (10**14?), with most of them moving data in parallel, it will be a very slow simulation. The brain is massively PARALLEL, which is how it gets away with running as slowly as it does.

CPU evolution greatly impacts the estimates, and that is hard to predict even within its own realm. Once multimedia is less of a driving force (merging the CPU, DSP, and GPU was predictable 10 years ago), I suspect AI applications will start to pick up enough to drive the market, once some better software exists.

The needs are so unlike everything else that I don't see most of the advances being of much benefit unless AI becomes a big enough area in its own right.

Re:How many human brains is that? (2, Interesting)

Surt (22457) | more than 5 years ago | (#24868001)

2020 seems unlikely. A reasonably accurate real-time synaptic simulation can run maybe 100 neurons on a high-end PC today, probably fewer. A human brain has about 100 billion neurons, so we're a factor of 1 billion short in computation. Last time I checked, GPUs had not yet been used in neuron simulation, so I'll even grant that we may be 1000 times better off than that. That still leaves a millionfold improvement needed to match the brain, or roughly 20 more generations of computer hardware; at a generous 18 months per generation, that puts us 30 years out, at 2038.

I will be seriously surprised if an even vaguely accurate simulation of the human brain is running before 2050.
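For what it's worth, the arithmetic above is internally consistent given its stated (and generous) assumptions: 100 neurons per PC, ~10^11 neurons per brain, a possible 1000x from GPUs, and one doubling every 18 months.

```python
import math

neurons_brain = 100_000_000_000      # ~10^11 neurons, as assumed above
neurons_per_pc = 100                 # real-time synaptic simulation today
shortfall = neurons_brain / neurons_per_pc    # a billion-fold gap
after_gpus = shortfall / 1000                 # grant a 1000x GPU speedup
doublings = math.ceil(math.log2(after_gpus))  # hardware generations needed
years = doublings * 18 / 12                   # at 18 months per doubling
print(doublings, years)              # 20 doublings, 30.0 years -> ~2038
```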

Re:How many human brains is that? (1)

QuantumG (50515) | more than 5 years ago | (#24868075)

Kurzweil is of the opinion that the study of brain scanning leads to optimization of the algorithms used in the brain so that they run faster and better on digital computers. So when he says 2020, he means that the hardware will be commonly available to run these optimized algorithms at sufficient speed to reach human capabilities, and he goes on to say that the algorithms will be ready by then, as there are all these practical uses for them before we even get to the point where we can combine them into a working human-level intelligence.

I, on the other hand, am of the opinion that we don't know how to make these algorithms and that we could be studying the brain for centuries before we do, and that there may be other algorithms, completely unlike the ones used by the human brain, which we can't discover by studying the human brain and yet are likely to be discovered much earlier if proper resources are dedicated to the search. I agree with him that the hardware of 2020 will be sufficient to run these algorithms at good speed, but I believe that will only matter LONG after 2020, since we need to be able to run these things with bad performance before we start optimizing them, and that means we need much more processing power to try out our theories.

A little low? (0)

Anonymous Coward | more than 5 years ago | (#24866497)

Didn't Blue Gene/L have almost 500 TFlops sustained back in 2007? 1 PFlop by 2011 seems a little... slow.

Perhaps the architectural differences will allow for substantially higher real world performance, but by raw numbers it doesn't seem as if it's moved up very much.

Re:A little low? (0)

Anonymous Coward | more than 5 years ago | (#24866525)

Actually the NSA beat a petaflop 6 years ago.

Re:A little low? (0)

Anonymous Coward | more than 5 years ago | (#24866551)

I don't think BGL was sustained. This is proposing 1 PFlop sustained for a variety of apps. But it does seem like someone could easily trump this before '11.

A little slow? (1)

nanostuff (1224482) | more than 5 years ago | (#24866521)

Didn't Blue Gene/L do nearly 500 TFlops sustained in 2007? Doubling that by 2011 seems a little... slow. Perhaps the architectural difference will have more substantial benefits in real world performance, but by the given numbers alone, it seems like a disappointing upgrade.

Re:A little slow? (0)

Anonymous Coward | more than 5 years ago | (#24866815)

This is 1 petaflop in real applications (useful to scientists), not just in the simple Linpack benchmark (not useful to most scientists). The Blue Waters machine will have a peak of something like 10 or 20 PFlops. This would make it something like 32 times as powerful as the big Blue Gene/L machine.

Re:A little slow? (1)

nanostuff (1224482) | more than 5 years ago | (#24867123)

What is a real application in this case? It seems to me like performance could vary wildly from one "real world" application to another, so it may not be as simple as saying 1 petaflop of real performance. Peak in this case would indicate a much larger performance difference, but again, it's not a particularly meaningful number. But if it runs Crysis, that's all I need to know.

Help (-1, Troll)

Anonymous Coward | more than 5 years ago | (#24866531)

my goatse hurts. what should i do?

More crap code (2, Insightful)

kramulous (977841) | more than 5 years ago | (#24866539)

Cool thing about the globally addressable petabyte: people writing really crappy code who don't bother thinking about their memory layout can just thrash away. And who cares about pipeline stalls?

I find it funny that the people who have never been formally trained in writing a language (mathematics, and science in general) write the best code, while the majority of the IT people I see write the most appalling code I've ever seen. I think it has something to do with the fact that the science people don't pretend to know everything and are much more willing to learn something new, while the IT people already know everything.

Re:More crap code (1, Insightful)

Anonymous Coward | more than 5 years ago | (#24866945)

Cool thing about the globally addressable petabyte. That way people writing really crappy code that don't bother thinking about their memory storage can just thrash away. And who cares about pipeline stalls.

I doubt they can just write crappy code. It's very unlikely that all this memory is on a single bus, so the more distant a core is from the memory it's addressing, the slower that access is.

It's a little bit like putting a video card with 1GB of lightning-quick video RAM in your computer. That VRAM is part of your computer's address space, so you can certainly use it just like regular RAM. But talking to it over the PCI-E bus is slow as fuck compared to talking to main memory. This is why game developers spend a lot of time ensuring that all the textures in a scene can fit in a typical video card's local RAM. It has nothing to do with whether the GPU itself can handle that level of detail without slowing down: it's because your framerate will drop from 50 to 2 the instant you have to transfer a texture from main memory to VRAM.

I think it has something to do with the fact that the science people don't pretend to know everything and are much more willing to learn something new while the IT people already know everything.

It's because science people aren't familiar with the programming aphorism: "Code first, optimize second, if at all." Optimized code is more difficult to write, more difficult to understand, more difficult to make bug-free, more difficult to maintain, and more difficult to extend to include new features. It's also more difficult to port, which is a big problem because optimized code is dependent not just on a particular processor family or manufacturer, but sometimes on the microcode level of a specific stepping of a processor model.

Being a little more kind to the science types, it's probably also because most scientific problems are very well-defined before they get to the stage where computers get involved. So it might be quite acceptable for a scientific program to run on only one machine and produce correct output for a certain narrow type of input. In that case -- and considering that the highly paid research scientist and his team of grad students might be sitting on their thumbs until they get back the results of the computations -- it can certainly make sense to expend significant effort on optimization.

For the most part, though, it's more important for a program to be usable, featureful, secure, and correct (more or less in that order) than it is for it to run fast. Only when poor performance begins to impact those primary concerns should you begin to optimize.

Re:More crap code (1)

Ilgaz (86384) | more than 5 years ago | (#24867099)

The Cray FAQ mentions supercomputers running at 99% load all the time. I think they still don't have the luxury of wasting memory. It's just that the programs they run actually need (or will need) such massive memory.

I understand your point, but I don't think they let the "buy more RAM" idiots use such supercomputing power.

Remember, Mathematica on OS X was the first 64-bit-enabled code on the PPC G5, since the scientists actually needed maxed-out G5s (8 GB, and 16 GB on the Quad G5).

Re:More crap code (1)

kramulous (977841) | more than 5 years ago | (#24867333)

but I don't think they let the "buy more RAM" idiots use such supercomputing power.

There are a few of those idiots around here. They're infecting the system with their 'document classification' and are completely unwilling to acknowledge that there are other techniques for dealing with large, "dense" (usually only 10% filled, in these cases!) matrices. It's hilarious when they start telling the linear and non-linear algebraic mathematicians that they don't understand the complexities.

Here's a great example: finding various subsets of "1-2-3" in "1-2-3-2-4-5-1-7-6" (but gigabytes of the stuff, stored in a MySQL database), with the code in Java, the data represented as Strings, and a String.split("-") performed inside very large nested loops (runtime measured in months on a single CPU).

Turning them away is not an option. I get frustrated when you then have some other dudes who will milk every single flop and parallelise, but who can't have more than 30% of the machine because the other morons have submitted hundreds of jobs.

Just need to vent ;)
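A sketch of the fix for the subset example above, in Python for brevity (names are illustrative, and I've assumed "subsets" means contiguous runs; the parse-once point stands either way): convert each dashed string to integers a single time, then scan, instead of calling split inside nested loops.

```python
def parse(seq):
    """Parse a dashed token string like "1-2-3-2-4" into a list of ints, once."""
    return [int(tok) for tok in seq.split("-")]

def count_runs(haystack, needle):
    """Count contiguous occurrences of needle within haystack (lists of ints)."""
    n, m = len(haystack), len(needle)
    return sum(1 for i in range(n - m + 1) if haystack[i:i + m] == needle)

data = parse("1-2-3-2-4-5-1-7-6")        # parsed once, up front
print(count_runs(data, parse("1-2-3")))  # 1
```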

1 PB of shared memory !?! (1)

Mad Hughagi (193374) | more than 5 years ago | (#24867957)

I don't know where TFA got the "globally addressable PB". I think someone was misquoted.

I can't find any mention of it on the NCSA web pages, and no shared-memory system exists at this level, ccNUMA or otherwise (NASA Ames has a 4 TB Altix system, which is evidently the largest in the world that is publicly acknowledged).

Software distributed shared memory hasn't really gone anywhere either, so I think someone was fantasizing when they wrote the article... globally accessible filesystems, sure, but shared memory is something else altogether...

Re:More crap code (1)

Crazy Taco (1083423) | more than 5 years ago | (#24868659)

I find it funny how the people who have never been formally trained with writing in a language (Mathematics, and just science in general) write the best codes while the majority of the IT people I see write the most appalling code I've ever seen.

Actually, most IT people don't have any formal training at all. Most of them are hacks who got into their jobs on the basis of family connections, a year at community college, time at a help desk (especially military help desks), or reading a couple of books. Most IT people DO write horrible code, as you point out, but I just haven't observed them getting formal training, at least not what I'd call formal training. A day-long ASP.NET "training" (if they even do that much) is not the same as getting degrees in computer engineering and computer science. As someone with degrees in both those fields, I get embarrassed when people say "he's in IT" or "he's the new IT guy". Call me a programmer, please. IT == hack;

the science people don't pretend to know everything and are much more willing to learn something new while the IT people already know everything.

This part is true.

Re:More crap code (1)

Crazy Taco (1083423) | more than 5 years ago | (#24868729)

I do want to add one thing as my post above may have sounded too harsh towards IT people... I was referring to them as hacks only in instances where they start writing huge applications or designing big databases without learning how to write code or do database design first. IT people often know much more than I do about keeping a desktop running or a network up, and for that I'm grateful.

And there's nothing wrong with community college either, or even no degree, if you've made the effort to learn your skill well. But I have relatives in IT who don't, and who insult my brother-in-law, who actually did take the time to go to school and really learn our trade. So hearing about arrogant IT guys who write horrible code really struck a nerve. I hate coming in and cleaning up their mess, and then being told to my face that "my degree is meaningless and just a piece of paper".

But by all means people, if you want to code and are willing to take the time to learn to do it right, have at it. I would be thrilled to see more of you, degree or not.

Petascale? (0)

Anonymous Coward | more than 5 years ago | (#24866541)

I understand PetaBytes, and Petaflops, but what is a Petascale?
Something to measure the mass of really large objects (like a small planet)?

but will it run... (1)

presidenteloco (659168) | more than 5 years ago | (#24866545)

Vista fast enough?
Oh I forgot, that would cost 200 peta-dollars,
so maybe they won't use vista.

So ? (0)

Anonymous Coward | more than 5 years ago | (#24866547)

So what kind of framerate will it have in Quake?

Not so sure its the first (1)

ThatRandomPerson (1357053) | more than 5 years ago | (#24866625)

http://en.wikipedia.org/wiki/IBM_Roadrunner [wikipedia.org] So wouldn't that make this the second?

Re:Not so sure its the first (2, Interesting)

Phat_Tony (661117) | more than 5 years ago | (#24866835)

Yeah, that was my thought. Roadrunner at Los Alamos sits at the top of the Top 500 list [top500.org] with an Rmax of 1,026,000. I don't know enough about benchmarks to distinguish between "Rmax" and "sustained petascale," but it is achieving over a petaflop. Maybe someone here can tell us more about Linpack [top500.org] vs. whatever they're using for this new one. I notice the article linked in the story mentions Roadrunner at the end, but without saying how it compares in speed. It doesn't seem to say by what specific measure this new computer's speed surpasses a petaflop.

Re:Not so sure its the first (2, Informative)

DegreeOfFreedom (768528) | more than 5 years ago | (#24867485)

Blue Waters will be the first to deliver a sustained petaflop on "real-world" applications, meaning various scientific simulations [uiuc.edu]. Specifically, the program solicitation [nsf.gov] required prospective vendors to explain how their proposed systems would sustain a petaflop on three specific types of simulations, one each in turbulence, lattice-gauge quantum chromodynamics, and molecular dynamics.

Granted, Roadrunner was the first machine to deliver a petaflop on the Linpack benchmark [netlib.org] (though certainly IBM's own implementation of it). The benchmark does nothing more than set up and solve a system of linear equations. Roadrunner solved a system of 2,236,927 equations (in other words, it had a 2,236,927-by-2,236,927 coefficient matrix) in 2 hours.

But Blue Waters is planned to deliver a petaflop on real applications, which, unlike Linpack, don't sustain more than 80% of theoretical peak; these applications are lucky to get near 20%.
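As a sanity check on those Linpack numbers: solving a dense N-by-N system by LU factorization costs roughly (2/3)N^3 + 2N^2 floating-point operations (the standard Linpack operation count), so Roadrunner's run can be checked on the back of an envelope. The matrix dimension and the ~2-hour runtime are the figures quoted above; the rest is arithmetic.

```python
# Back-of-the-envelope check of the Linpack run described above.
# Solving a dense N x N linear system by LU factorization takes roughly
# (2/3)*N**3 + 2*N**2 floating-point operations (the standard Linpack count).

N = 2_236_927        # matrix dimension from Roadrunner's run
seconds = 2 * 3600   # ~2 hours of runtime

flops = (2.0 / 3.0) * N**3 + 2.0 * N**2
rate_pflops = flops / seconds / 1e15

print(f"total operations: {flops:.3e}")
print(f"sustained rate:   {rate_pflops:.2f} petaflops")
```

The rate comes out right around one petaflop, consistent with Roadrunner's reported Rmax.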

Star Trek "Data" rated at 60 Teraflops (4, Interesting)

peter303 (12292) | more than 5 years ago | (#24866649)

I just saw "The Measure of a Man" episode on the Star Trek Labor Day marathon. Data has a speed of 60 teraflops and 100 petabytes of storage. That used to seem large in the late 1980s. (It's the episode where Data goes on trial over whether he is a machine or sentient.)

But he was positronic (0)

Anonymous Coward | more than 5 years ago | (#24866749)

And we all know that positrons are substantially more powerful than electrons. Therefore, these electronic petaflops are no measure for Data's positronic teraflops.

Re:Star Trek "Data" rated at 60 Teraflops (1)

QuantumG (50515) | more than 5 years ago | (#24867075)

Demonstration of the triumph of software over hardware!

I believe it was Minsky who said that a 486 could run a human-level intelligence, if only we knew the algorithm, but I can't seem to remember where he said it. Maybe I need new RAM!

Re:Star Trek "Data" rated at 60 Teraflops (1)

vivin (671928) | more than 5 years ago | (#24867081)

Bytes? I thought they used "Quads" as a measurement of storage...

Re:Star Trek "Data" rated at 60 Teraflops (2, Interesting)

Bones3D_mac (324952) | more than 5 years ago | (#24867095)

About a decade or so ago, I remember someone very crudely trying to ballpark the amount of storage that would be needed to contain the raw data of the entire human brain complete with a lifetime of experience at around 10 terabytes. Needless to say, that seems incredibly unlikely by today's standards.
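That 10-terabyte figure is easy to stress-test with a crude synapse count. Every number below is an order-of-magnitude assumption for illustration, not something from the comment above:

```python
# Crude estimate of raw brain storage via synapse count.
# Every figure here is an order-of-magnitude assumption.

neurons = 1e11              # ~100 billion neurons
synapses_per_neuron = 1e4   # ~10,000 synapses per neuron
bytes_per_synapse = 4       # assume one 32-bit weight per synapse

total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
print(f"~{total_bytes / 1e15:.0f} petabytes")
```

On those assumptions the total lands in the petabyte range, hundreds of times the 10-terabyte ballpark, which is one way to see why that old estimate looks low.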

Even if something like this were possible (storage notwithstanding), the data itself would likely be unusable until we sufficiently understood just how our brains work with their own data enough to create a crude simulation to act as an interpreter. And, even with that, it's probably safe to assume that each brain sampled will likely have highly unique methods of storage and recall, each requiring their own custom-built brain-simulation interpreter.

Somehow, I don't think we'll be seeing anything close to this happening within our lifetimes short of violating our ethics regarding the rights of human life. Basically, something to the effect of strapping someone down while we inject their brain with nanobots designed to disassemble the brain one cell at a time, and then emulate the cell that was just removed, until the entire brain has been replaced with a nanobot driven substitute. (Only with a few added features to allow communication with external devices.)

Re:Star Trek "Data" rated at 60 Teraflops (1)

QuantumG (50515) | more than 5 years ago | (#24867231)

Meh, if you really want to throw teraflops at it, wait until we have enough processing power to simulate a human embryo growing to a fetus. That'll tell you a whole heck of a lot. From that you can use non-invasive NMRI to get data which you can infer structure from.. and if you actually understand that structure then you won't have to do any simulation, you can transcode it into something more appropriate for a digital computer. Basically, it all comes down to software because if you're just going to replicate the hardware of the brain, then why not just use a brain.

Speaking of human rights violations, how about hooking a few terabytes of storage up to a newborn. With an appropriate connector its developing brain should make use of the storage, and by studying that you can learn all sorts of nifty stuff. Of course, this will likely make you all squeamish, so let's say it's a baby monkey.. or a mouse. Although it's not nearly as interesting.

Re:Star Trek "Data" rated at 60 Teraflops (1)

Profane MuthaFucka (574406) | more than 5 years ago | (#24867297)

The novelization of the movie "Star Trek II: The Wrath of Khan" described a portable data pack, probably about the volume of a briefcase, that held 50 megabytes. The plot point was that when Khan raided the space station looking for the Genesis data, the scientists transferred it to this 50 megabyte portable memory and beamed down to the Genesis cave. They left a game running they had written instead. They were hoping the pretty graphics would fool Khan long enough to buy a little time.

At that time - 1982 - the biggest computer game ever written was some kind of Time Travel adventure game which filled 6 Apple II floppy disks.

Re:Star Trek "Data" rated at 60 Teraflops (1)

kabocox (199019) | more than 5 years ago | (#24867435)

Data has a speed of 60 Teraflops and 100 petabytes of storage.

Data is just pure bloat then... there have been many other fictional AIs that fit in mere K. There are times when I think that we could have a 100 yottaflop, 100 googolflop, or 100 googolplexflop computer and still not have developed AI.

I wonder... (1)

greymond (539980) | more than 5 years ago | (#24866657)

what their tech person's blood elf or tauren will look like?

Movie (0)

Anonymous Coward | more than 5 years ago | (#24866661)

Movie here [uiuc.edu] (79MB .mov).
NCSA page here [uiuc.edu] .

IBM again? Typical stupid government. (-1, Flamebait)

Anonymous Coward | more than 5 years ago | (#24866765)

Why don't they use Macs? Don't they know that Virginia Tech built one of the world's fastest supercomputers using nothing but Macs? Why spend all that money on expensive and proprietary IBM equipment when you can use off-the-shelf Apple parts and put them together cheaper, faster, and with better usability than any other solution?

Yet another example of idiot government morons spending MY TAX MONEY on useless boondoggles. That's why I'll be voting Ron Paul, personally.

Skynet (0)

Anonymous Coward | more than 5 years ago | (#24866807)

I'm really looking forward to see what Skynet can do for us.

It's said... (2, Interesting)

jd (1658) | more than 5 years ago | (#24866861)

...Apple used to use a Cray to design their new computers, whereas Seymour Cray used an Apple to design his.

More compute power is nice, but only if the programs are making efficient use of it. MPI is not a particularly efficient method of message passing, and many implementations (such as MPICH) are horribly inefficient. Operating systems aren't exactly well-designed for parallelism on this scale, with many benchtests putting TCP/IP-based communications ahead of shared memory on the same fripping node! TCP stacks are not exactly lightweight, and shared memory implies zero copy, so what's the problem?

Network topologies and network architectures are also far more important than raw CPU power, as that is the critical point in any high-performance computing operation. Dolphinics is quoting 2.5 microsecond latencies, Infiniband is about 8 microseconds, and frankly these are far far too slow for modern CPUs. That's before you take into account that most of the benchmarks are based on ping-pong tests (minimal stack usage, no data) and not real-world usage. I know of no network architecture that provides hardware native reliable multicast, for example, despite the fact that most problem-spaces are single-data, most networks already provide multicast, and software-based reliable multicast has existed for a long time. If you want to slash latencies, you've also got to look at hypercube or butterfly topologies, fat-tree is vulnerable to congestion and cascading failures - it also has the worst-possible number of hops to a destination of almost any network. Fat-tree is also about the only one people use.
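Those latency figures can be dropped into the usual alpha-beta cost model for a point-to-point message, T(s) = alpha + s/beta, to see how thoroughly small messages are latency-bound. The 2.5 µs and 8 µs are the numbers quoted above; the 1 GB/s link bandwidth is purely an illustrative assumption:

```python
# Alpha-beta model for one point-to-point message: T(s) = alpha + s/beta.
# Latencies are the figures quoted above; 1 GB/s bandwidth is an assumption.

def transfer_time(size_bytes, alpha_s, beta_bytes_per_s=1e9):
    """Modeled time to move one message of size_bytes across the link."""
    return alpha_s + size_bytes / beta_bytes_per_s

for name, alpha in [("Dolphin, 2.5 us", 2.5e-6), ("Infiniband, 8 us", 8e-6)]:
    # Crossover size where the bandwidth term equals the latency term;
    # below this, wire speed barely matters.
    crossover_bytes = alpha * 1e9
    ping8_us = transfer_time(8, alpha) * 1e6
    print(f"{name}: 8-byte message {ping8_us:.2f} us, "
          f"latency-dominated below ~{crossover_bytes / 1000:.1f} KB")
```

On this model an 8-byte ping over the 8 µs link costs essentially the full 8 µs no matter how fast the wire is, which is why ping-pong benchmarks say so little about bandwidth-heavy real workloads.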

There is a reason you're seeing Beowulf-like machines in the Top 500 - it's not because PCs are catching up to vector processors, it's because CPU count isn't the big bottleneck and superior designs will outperform merely larger designs. Even with the superior designs out there, though, I would consider them to be nowhere even remotely close to potential. They're superior only with respect to what's been there before, not with respect to where skillful and clueful engineers could take them. If these alternatives are so much better, then why is nobody using them? Firstly, most supercomputers go to the DoD and other Big Agencies, who have lots of money where their brains used to be. Secondly, nobody ever made headlines off having the world's most effective supercomputer. Thirdly, what vendor is going to supply Big Iron that will take longer to replace and won't generate the profit margins?

(Me? Cynical?)

Re:It's said... (1)

mrsteveman1 (1010381) | more than 5 years ago | (#24867169)

"That's before you take into account that most of the benchmarks are based on ping-pong tests (minimal stack usage, no data) and not real-world usage."

Seems fine to me. I put all my new systems through the ping-pong test, sometimes i even win.

Re:It's said... (1)

Ilgaz (86384) | more than 5 years ago | (#24867183)

I can easily say that the Apple and Cray connection is a valid claim, since a very high-profile Cray guy confirms it in the Cray FAQ:

http://www.spikynorman.dsl.pipex.com/CrayWWWStuff/Cfaqp3.html#TOC23 [pipex.com]

The FAQ also explains why a Beowulf can't match a supercomputer for certain tasks.

What makes me wonder is what really happened to the Connection Machine, which was a massive break from the Von Neumann architecture. It is like a plane compared to a car. How come they didn't evaluate such an invention?
http://en.wikipedia.org/wiki/Connection_machine [wikipedia.org]

Or is it like hybrid kernels which e.g. Apple took good side of Mach Microkernel but also added monolithic stuff?

Re:It's said... (3, Informative)

Bill Barth (49178) | more than 5 years ago | (#24867615)

You could not be more wrong.

Considering that we've got SDR IB with under 2 microseconds latency for the shortest hops (and ~3 for the longest), I think you need to go update your anti-cluster argument. :) The problems with congestion in fat trees have virtually nothing to do with latency. Yes, massive congestion will kill your latency numbers, but given that you don't get cascades and other failures causing congestion without fairly large bandwidth utilization, latency is the least of your worries at that point. Furthermore, the cascades you talk about also aren't common except in extremely oversubscribed networks or in the presence of malfunctioning hardware. We do our best to use properly functioning hardware and to have no more than 2:1 oversubscription (with our largest machine not being oversubscribed at all).

MPICH ain't that bad (heck, MPICH2, even just its MPI-1 parts, might be considered pretty good by some). MPI as a standard for message passing is fine. I'd love to hear what you think is wrong with MPI and see some examples where another portable message-passing standard does consistently better. Though it's a bit like C or C++ or Perl in that there are lots of really bad ways to accomplish things in MPI and a handful of good ones. It's low-level enough that you need to know what you're doing. But if you believe anyone who tells you they have a way to make massively parallel programming easy, I've got a bridge you might be interested in.

Finally, I don't know of much in the way of a "supercomputer" that's using TCP for its MPI traffic these days, so you can put that old saw out to pasture as well.

Can You Imagine (2, Funny)

nickswitzer (1352967) | more than 5 years ago | (#24866873)

The amount of porn you can download with this thing? Isn't that the number one thing the computer has evolved for?

Don't worry (2, Funny)

EEPROMS (889169) | more than 5 years ago | (#24866895)

in 40 years some kid will laugh at your pathetic attempt at geek coolness when you mention the Bluewater and say "wow your old, Im amazed anyone needed a warehouse just for one petaflop even my Wango-matic game cube has 50 petaflops"

Can't take another 40 (4, Funny)

Main Gauche (881147) | more than 5 years ago | (#24867255)

in 40 years some kid will laugh at your pathetic attempt at geek coolness when you mention the Bluewater and say "wow your old..."

Forty more years of the kids saying "your"? Kill me now! :)

But will it be able to do...(fill in blank) (1)

DougF (1117261) | more than 5 years ago | (#24866921)

My taxes? My laundry?

F@H is already past 2.5 Petaflops (2, Interesting)

Anonymous Coward | more than 5 years ago | (#24867017)

Folding@Home easily trounces this puny supercomputer.

And Damnit, it's AMERICAN (0)

Anonymous Coward | more than 5 years ago | (#24867149)

Sorry if I come off as Ameri-centric, but it's been a long damn time since I've heard "Biggest *** now planned" or "New big science project for..." and it turns out to be in the USA. It used to always be like that, but recently it's been all EU/Dubai/China or something. At least occasionally some things are still done big in the USA.

modC up (-1, Offtopic)

Anonymous Coward | more than 5 years ago | (#24867245)

Anonymous Coward (0)

Anonymous Coward | more than 5 years ago | (#24867469)

You all think this is funny. But come 2011, the year the Terminator is supposed to come, we'll see who's laughing then... it all starts with a supercomputer that plays chess, and then all of a sudden it wants to poop and play basketball. At that point in time, I'll be laughing with my hands in my pants and my fingers on a stick.

Imagine a... (0, Redundant)

bakedpatato (1254274) | more than 5 years ago | (#24867579)

oh never mind.

200 000 processors (1)

bigplrbear (1179259) | more than 5 years ago | (#24867831)

but can it run Crysis?

Freecell? (1)

idlehanz (1262698) | more than 5 years ago | (#24867983)

The people who applied for the grant are looking forward to "the best game of freecell ever!"

Power Consumption? (1)

Ascoo (447329) | more than 5 years ago | (#24868275)

While it's not always an issue to those that care about the final result, I wonder how much power this sucker is gonna drain from the local power grid. I can see it now.. A prof making a call to the local power company, telling them, "Please be advised, we're gonna black out Urbana-Champaign for the next couple of minutes." Too bad they decommissioned that research nuclear reactor [uiuc.edu] they had back in '98. While it never was used for electricity, it probably could have been fitted. Then again, with all the wind they're used to, maybe a couple of wind turbines would be sufficient.
