
NSF Announces Supercomputer Grant Winners

samzenpus posted more than 7 years ago | from the I-can't-allow-you-to-do-that-dave dept.

Supercomputing 82

An anonymous reader writes "The NSF has tentatively announced that the Track 1 leadership class supercomputer will be awarded to the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign. The Track 2 award winner is University of Tennessee-Knoxville and its partners." From the article: "In the first award, the University of Illinois at Urbana-Champaign (UIUC) will receive $208 million over 4.5 years to acquire and make available a petascale computer it calls "Blue Waters," which is 500 times more powerful than today's typical supercomputers. The system is expected to go online in 2011. The second award will fund the deployment and operation of an extremely powerful supercomputer at the University of Tennessee at Knoxville Joint Institute for Computational Science (JICS). The $65 million, 5-year project will include partners at Oak Ridge National Laboratory, the Texas Advanced Computing Center, and the National Center for Atmospheric Research."

If anyone makes a Terminator joke (1, Funny)

Anonymous Coward | more than 7 years ago | (#20164105)

I will kill you.

Re:If anyone makes a Terminator joke (3, Funny)

locster (1140121) | more than 7 years ago | (#20164871)

Ahh what the heck - A terminator walks into a bar... Barman: Why the mimetic polyalloy face? Terminator: I'm a T-1000 terminator from the future, sent to kill Sarah Connor.

Re:If anyone makes a Terminator joke (1)

MiniMike (234881) | more than 7 years ago | (#20165551)

I think you're still safe from him...

Re:If anyone makes a Terminator joke (1)

reverseengineer (580922) | more than 7 years ago | (#20165763)

Wrong movie. They're building one of the supercomputers in Urbana, Illinois, which means that the HAL Plant [wikipedia.org] must finally be operational, just a few years behind schedule.

Re:If anyone makes a Terminator joke (1)

Ortega-Starfire (930563) | more than 7 years ago | (#20169185)

I won't, but I could make some Deus Ex jokes! Well, I guess I better not.

Hot and Exciting??? (-1, Offtopic)

Anonymous Coward | more than 7 years ago | (#20164137)

This story is as hot and exciting as Nancy Reagan's underwear.

Re:Hot and Exciting??? (1)

Chilled urine. (1132739) | more than 7 years ago | (#20164813)

This story is as hot and exciting as Nancy Reagan's underwear.

Ooh baby!

No i didn't RTFA (1)

sumdumass (711423) | more than 7 years ago | (#20164141)

But is this the same award everyone was pissed about because it was going to IBM?

I'm curious whether that was a separate award, whether it was false information, or whether they changed their minds afterwards.

universities or IBM? (3, Informative)

Will the Chill (78436) | more than 7 years ago | (#20164197)

I think that the NSF funds it, the universities get to run the research, and IBM gets to build the machine.

http://hardware.slashdot.org/article.pl?sid=07/08/06/0547226 [slashdot.org]

-WtC

*please insert sig for 2 more minutes*

Re:universities or IBM? (1)

shots47s (538099) | more than 7 years ago | (#20164455)

The "Blue River" machine is the IBM system that was referred to in the previous submission. Generally, the universities pick vendors to work with, for building a machine of this size and capability is well beyond the capability of any university. Last year, the University of Texas, Austin won a machine working with the vendor Sun.

Re:universities or IBM? (1)

Orville (104680) | more than 7 years ago | (#20169181)

"The University" won't have the only input into the selection of vendors: both of these projects are intended for a national audience as part of the NSF's Cyberinfrastructure program.

Of particular interest: the NSF "Track 2" machine is built to complement the capabilities of TeraGrid, and is also being built in a location (Oak Ridge Laboratory) that the DOE is using to build their own Petascale machine.

http://www.gcn.com/online/vol1_no1/40250-1.html [gcn.com]

(Both will rely on the readily available, federally administered Tennessee Valley Authority power system, interestingly enough.)

Re:No i didn't RTFA (0)

Anonymous Coward | more than 7 years ago | (#20164345)

The acrimony earlier in the week arose because the names of the winning institutions were apparently explicitly listed in some documents posted on the NSF web site related to the NSB meeting earlier this week. Apparently some of the competing supercomputer centers are unhappy that this information was accidentally leaked ahead of the NSB meeting; now that it's over, though, this seems rather like sour grapes. The awards apparently aren't final just yet [tgdaily.com], but they are expected to be approved.

I, for one, welcome our... (0, Redundant)

Will the Chill (78436) | more than 7 years ago | (#20164149)

top-500 petaflops-scoring hal-9000 overlords!

-WtC

*please insert sig for 2 more minutes*

Imagine a... (0)

Anonymous Coward | more than 7 years ago | (#20164165)

Imagine a Beowulf cluster of those!

I approve (5, Funny)

weak* (1137369) | more than 7 years ago | (#20164189)

I'm glad we have the NSF out there supporting the development of faster and faster supercomputers. Pretty soon these machines will be able to locate the correct Sarah Connor in the phone book on the first try.

Re:I approve (0)

Anonymous Coward | more than 7 years ago | (#20164319)

Didn't you see the first AC post? He's going to kill you now for mentioning the movie, probably after getting blown apart by large-gauge shotgun shells and reforming outside of his prison cell bars.

Only if Dr. Chandra replaces Mr. Langley. (0)

Anonymous Coward | more than 7 years ago | (#20164607)

Otherwise there may be some issues to resolve first.

Hail Alma Mater! (1)

jwalter1 (1140107) | more than 7 years ago | (#20164215)

... ever so true ...

Re:Hail Alma Mater! (1)

reverseengineer (580922) | more than 7 years ago | (#20164453)

...We love no other,
So let our motto be
Victory, Illinois! Varsity!

Petascale? (0, Flamebait)

inKubus (199753) | more than 7 years ago | (#20164221)

Petascale (n) - a unit of measure equivalent to the dead weight of all the cats used to test lipstick by getting it rubbed on their eyes over a calendar year.

Re:Petascale? (1)

locster (1140121) | more than 7 years ago | (#20164725)

no no no, petascale just means computers that are typically about 10^15 mm across. I was going to get one but I just don't have the space.

wow... (4, Informative)

djupedal (584558) | more than 7 years ago | (#20164325)

"Infinite: Bigger than the biggest thing ever and then some. Much bigger than that in fact, really amazingly immense, a totally stunning size, real "wow, that's big," time. Infinity is just so big that by comparison, bigness itself looks really titchy. Gigantic multiplied by colossal multiplied by staggeringly huge is the sort of concept we're trying to get across here."

Re:wow... (2, Funny)

weak* (1137369) | more than 7 years ago | (#20164391)

"Infinite: Bigger than the biggest thing ever and then some. Much bigger than that in fact, really amazingly immense, a totally stunning size, real "wow, that's big," time. Infinity is just so big that by comparison, bigness itself looks really titchy. Gigantic multiplied by colossal multiplied by staggeringly huge is the sort of concept we're trying to get across here."
Who gave this guy E?

Re:wow... (1)

Barny (103770) | more than 7 years ago | (#20164479)

The late Mr Adams I believe :)

a question we all wanna know (1)

ILuvRamen (1026668) | more than 7 years ago | (#20164407)

So what kind of processors are in the new one (and the old ones)? Are there just tons of basic, high-end Intel chips in it, or are they custom chips, unlike any other, built to do special jobs? I seriously don't know.

Re:a question we all wanna know (1)

Barny (103770) | more than 7 years ago | (#20164499)

Hrmm, maybe the new Sun chip? since it has the memory controllers and 10gig-E built in already?

Why would they want to use a chip known for its bad interconnecting tech (fsb is so last century) :P

Re:a question we all wanna know (1)

Chilled urine. (1132739) | more than 7 years ago | (#20165991)

Hrmm, maybe the new Sun chip? since it has the memory controllers and 10gig-E built in already?

Oh yah, I lurve Sun Chips! Very tastee!

Re:a question we all wanna know (1)

777v777 (730694) | more than 7 years ago | (#20171537)

Sun -- Weren't they the ones who built Niagara without any floating point units? Sure seems useful to me...

A good guess would be POWER7 (1)

raftpeople (844215) | more than 7 years ago | (#20164581)

So what kind of processors are in the new one (and the old ones)? Are there just tons of basic, high-end Intel chips in it, or are they custom chips, unlike any other, built to do special jobs? I seriously don't know.

Given that IBM is scheduled to deliver a multi-petaflops supercomputer to DARPA based on the POWER7 in 2010, it seems like a good guess that IBM would use the same technology for this one due in 2011, if they are the ones building it.

Obligatory... (0, Troll)

Seismologist (617169) | more than 7 years ago | (#20164575)

Yeah but does it run Linux?

Sorry, I couldn't resist; it was going to come up anyway... I can probably say the supercomputer won't run anything made by Microsoft. Who knows, maybe the next M$ OS version will have minimum system requirements of a supercomputer.

Are these machines actually used? (1)

flyingfsck (986395) | more than 7 years ago | (#20164611)

I can't help but wonder whether these super machines are ever actually used to capacity. Since they are housed at universities, I suspect that some professor runs two or three stupid little mental exercises on them and then they just sit there, glomping electricity and gathering dust.

Re:Are these machines actually used? (1)

haluness (219661) | more than 7 years ago | (#20164657)

As far as UIUC is concerned, they have some top notch people who are pushing computing to the limits with long time-scale molecular dynamics runs. And they're not doing it to model a few atoms either. Klaus Schulten has been doing some very impressive work on simulating protein dynamics. Take a look at http://www.ks.uiuc.edu/Overview/KS/research.html [uiuc.edu] .

So I have a feeling that the new machines are going to be humming right along.

Re:Are these machines actually used? (0)

Anonymous Coward | more than 7 years ago | (#20164777)

Shame on you for the self-plug, Klaus Schulten! But seriously I agree, the parent is just ignorant.

Re:Are these machines actually used? (2, Interesting)

Entropius (188861) | more than 7 years ago | (#20164697)

My PhD advisor does computational quantum chromodynamics on supercomputers. Quantum chromodynamics is the current theory of the nuclear force. Unfortunately, nobody can actually calculate all that much with it because the math is too hard, but we think it's the right theory because of some symmetry arguments. One of the big challenges at the moment in high-energy theory is to actually see what QCD predicts. Basically, the perturbation + renormalization approach that worked so well for quantum electrodynamics doesn't work on QCD, because the coupling between quarks becomes strong at large distances (the flip side of their "asymptotic freedom" at short distances): the potential between two quarks grows without bound as you separate them, until it's big enough that you wind up color-polarizing the vacuum and creating a wad of quarks and gluons if you try to separate two quarks.

Since perturbation theory doesn't work, the only way to get answers out of the thing is to solve the equations numerically on a lattice using Monte Carlo methods. To do this requires, as you probably guessed, Big Fucking Computers.
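For the curious, here is what "Monte Carlo on a lattice" looks like in miniature. This is only a toy sketch with made-up parameters: it updates a 2D Ising model with the Metropolis rule, whereas real lattice QCD updates SU(3) gauge matrices on a 4D lattice with far fancier algorithms. The basic pattern of sweeping the lattice and accepting or rejecting local changes is the same, though.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define L 64          /* toy lattice size, chosen arbitrarily */
#define SWEEPS 1000
#define BETA 0.44     /* inverse temperature, near the 2D Ising critical point */

static int spin[L][L];

int main(void) {
    for (int i = 0; i < L; i++)            /* cold start: all spins up */
        for (int j = 0; j < L; j++)
            spin[i][j] = 1;

    for (int s = 0; s < SWEEPS; s++)
        for (int i = 0; i < L; i++)
            for (int j = 0; j < L; j++) {
                /* sum of the four neighbors, with periodic boundaries */
                int nb = spin[(i+1)%L][j] + spin[(i+L-1)%L][j]
                       + spin[i][(j+1)%L] + spin[i][(j+L-1)%L];
                int dE = 2 * spin[i][j] * nb;   /* energy cost of flipping this spin */
                /* Metropolis rule: always accept downhill moves, sometimes uphill */
                if (dE <= 0 || rand() / (double)RAND_MAX < exp(-BETA * dE))
                    spin[i][j] = -spin[i][j];
            }

    long m = 0;                             /* magnetization: a simple observable */
    for (int i = 0; i < L; i++)
        for (int j = 0; j < L; j++)
            m += spin[i][j];
    printf("magnetization per site: %f\n", (double)m / (L * L));
    return 0;
}

Scale the state up from a 64^2 grid of single bits to an 84^3x144 grid of complex matrices, and you can see where the Big Fucking Computers come in.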

Re:Are these machines actually used? (0)

Anonymous Coward | more than 7 years ago | (#20164831)

The last sentence says it all! :)

Re:Are these machines actually used? (1)

Alpha830RulZ (939527) | more than 7 years ago | (#20165269)

We have a new jargon winner, Ladies and Gentlemen! The parent should either be modded informative or bullshit, and I can't tell which, which I'm finding pretty amusing. Mod me toasted.

Re:Are these machines actually used? (1, Insightful)

Anonymous Coward | more than 7 years ago | (#20165317)

Actually, he uses surprisingly little jargon, considering how much stuff he COULD have thrown in there. (QCD even has its own custom-built supercomputers - see QCDOC) Then there are specific algorithms, approaches, etc. All in all, he sounds like he knows what he's talking about and summed it up pretty well, unsurprisingly since he's a grad student in the field, it seems. ... And he's right, the solution is big fucking computers. :)

Re:Are these machines actually used? (2, Informative)

DegreeOfFreedom (768528) | more than 7 years ago | (#20165995)

In fact, a lattice QCD problem was one of the model problems for the Track 1 proposals. Proposers had to "provide a detailed analysis of the anticipated performance of the proposed system on the following set of model problems...A lattice-gauge QCD calculation in which 50 gauge configurations are generated on an 84^3*144 lattice with a lattice spacing of 0.06 fermi, the strange quark mass m_s set to its physical value, and the light quark mass m_l = 0.05*m_s. The target wall-clock time for this calculation is 30 hours." Full details here [nsf.gov] .

This is a Big F-ing Problem that does in fact require Big F-ing Computers to solve. To meet the target time would require at least a petaflop of sustained performance; hence the inclusion of this problem in the call for proposals. The other model problems came from CFD and molecular dynamics, and there was a wide range of smaller required problems as well.
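(For scale, that 30-hour target at a sustained petaflop works out to about 10^15 flop/s × 30 × 3600 s, i.e. roughly 1.1 × 10^20 floating-point operations for that single model problem.)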

Now, none of this explains how these machines will really be used, or to what end. Nevertheless, I can vouch for such large machines being used under heavy load to solve very large problems. Poke around any of the national supercomputing labs' websites, and you should be able to find at least plenty of news releases, if not papers.

Re:Are these machines actually used? (3, Informative)

Minter92 (148860) | more than 7 years ago | (#20165481)

I worked as a system engineer on the supercomputers at NCSA from 97 till 2000. Once they are up and stable they are pretty much pushed to the limits. The users are constantly pushing for more procs, more memory, more storage. They'll use every flop they can get.

Re:Are these machines actually used? (1)

itamblyn (867415) | more than 7 years ago | (#20165669)

Yes

Re:Are these machines actually used? (1)

Kohath (38547) | more than 7 years ago | (#20168321)

I'm wondering why we need government grants to develop computers now. There are many companies that make computers. They'll make a fast one for you if you order it.

There are also real projects from the NSA and other government branches that need fast computers. Why not a specific grant to develop a computer for a specific application rather than just a "make a fast supercomputer"?

Should we have grants for "make a tastier fast-food french fry" next?

Re:Are these machines actually used? (1)

Orville (104680) | more than 7 years ago | (#20169125)

There are also real projects from the NSA and other government branches that need fast computers. Why not a specific grant to develop a computer for a specific application rather than just a "make a fast supercomputer"?

Because specific projects usually have a very finite lifetime, and supercomputing resources are terrifically expensive: that's why the NSF has "Cyberinfrastructure" as a major project. Researchers will apply for computer time as part of the normal grant process: current facilities are already being heavily used, and with the TeraGrid project (and the science gateways) it's hoped that use will increase for a broader range of scientific inquiry.

Re:Are these machines actually used? (1)

mabraham (517277) | more than 7 years ago | (#20178399)

A company like Sun or IBM eventually does get paid to make a fast computer... the body that won the grant just gets the right to make the nuts-and-bolts decisions about what sort of computer, how big, where it lives, etc. They're the ones with the experts on these topics on their payroll, not the granting agency or the manufacturers. The reason there's a big grant up for grabs is that the sort of work that gets done on these machines is all paid for through government research grants. For buying computers, rather than parcel out a million bucks here and there to individual researchers (whose primary expertise is doing their research, not maintaining computing facilities), they opt to fund several large central computing resources and tell researchers to apply for time there. A decentralized high-performance computing model sucks because if one machine is sitting empty for whatever reason, other people can't jump in and use the valuable CPU cycles. On a central machine, they can, and the issue is how to resolve scheduling issues to give everyone a fair go :-)

In other news, programmer suicides up... (1)

Quadraginta (902985) | more than 7 years ago | (#20164711)

Since it's going to be massively parallel, it's only 500 times more powerful than some other computer if it has a beautifully parallelized problem to solve.

I've programmed computers scientifically for twenty-odd years, and one thing I've found is that massively parallel computers are very difficult to use efficiently, except when you're solving one of the relatively few problems which are obviously parallelizable and yet have interesting results. For example, solving 500 million tic-tac-toe games simultaneously is certainly impressive, but not very interesting. Solving a championship chess match is interesting, but it's not obvious at all how to do it well with 500 million simultaneous calculations. Therein lies the heart of the difficulty.

Part of the problem is undoubtedly that we find it hard to think in parallel. We solve problems step by step, like a scalar machine. It's extremely difficult to even imagine what it would be like to solve a problem "all at once," in a fully parallel way, with each important factor simultaneously influencing all others.

So if you ask me, I'd say this is, for all of its Pyramid of Cheops grandeur, a second-rank research tool, for use in bashing to death problems that have well-defined, known algorithms for their solution. The real frontier is going to be people who noodle around on small systems figuring out how to "think" in parallel, who develop novel parallelizable ways to solve problems.

Re:In other news, programmer suicides up... (1)

XHIIHIIHX (918333) | more than 7 years ago | (#20164795)

Yeah sure fine, but first we gotta get Vista to boot.

Re:In other news, programmer suicides up... (1)

DivineOmega (975982) | more than 7 years ago | (#20180097)

You are attempting to use CPU core 153. Cancel or Allow? Allow.

You are attempting to use CPU core 154. Cancel or Allow? Allow.

...

Re:In other news, programmer suicides up... (2, Interesting)

OldChemist (978484) | more than 7 years ago | (#20164819)

You make a good point. It is now possible to buy a quad core from Dell for about $750 (or less) to play around with. However, as mentioned earlier in this discussion, the work of Klaus Schulten at Illinois is quite instructive. His program NAMD (not another molecular dynamics program) has been designed from the ground up to scale well on many processors. This program does a lot better in this respect than most other md programs out there, although this will no doubt change. So don't despair about this being a second rank research tool. There are some folks poised to take good advantage of it. I do strongly agree with your point that fundamental advances can still be made on small systems.

Re:In other news, programmer suicides up... (1)

Quadraginta (902985) | more than 7 years ago | (#20173207)

Well...OK, and I know the Schulten work quite well from my time at UIUC. It's certainly impressive in many ways.

But my suggestion is that fundamental advances will only be made on small, cheap systems. See, a machine like this is so expensive that it's very hard to justify doing blue-sky goofball things on it, which will almost certainly turn out to be dumb ideas. You usually have to write a proposal, and the committee usually won't risk massive resources on an idea that is shaky, speculative as heck, or screwball.

But of course, a truly new and powerful idea does look screwball at first. (Otherwise, some other smart guy would have already come up with it.) This is one reason the graphical web browser was not invented at Microsoft. Who would spend the money required to develop such a piece of software, when there were no graphic-intensive web pages out there for it to use, and apparently no demand for one? No one sensible, with a bottom line to protect. Only some undergraduate dreamer (Andreessen). For that matter, the development of computer simulation itself is instructive: it was not respectable in its early days, and most academics thought there was very little you could learn by doing a computer simulation. It was Bernie Alder noodling around on Livermore's big computers while no one paid attention that finally came up with something amazing that convinced everyone that computer simulation by God made a lot of sense. Sure, we can all see it now -- 20/20 hindsight and all that -- but if Alder and Wainwright had been bigshot academic scientists in the glare of publicity using very expensive public resources, I'll bet they would never have risked doing something as apparently nutty as simulating hard-sphere fluids.

So who is going to come up with new and powerful ways to solve problems in parallel? Not, I think, the people using the World's Fastest And Most Expensive Parallel Computer(TM). Those folks can't afford to be seen goofing around, making mistake after mistake while they're learning. Instead it will be someone of whom you've never heard, screwing around on a $5000 32-node cheapie cluster because that represents such a trivial investment that no one minds if he does apparently stupid, apparently pointless things on it all day.

Re:In other news, programmer suicides up... (1)

mabraham (517277) | more than 7 years ago | (#20178527)

I'm a grad student in molecular dynamics, and I know Klaus, his code and his work. While you do want good scaling performance with respect to the number of processors, that's not a useful measure of the quality of the implementation compared with other programs. Total throughput of a given system on a given number of processors is a much better indicator. Why? Well if I write code that sucks on one processor, but which gets less-sucky fast when I add more processors to the problem (why this can happen is a technical thing), then my scaling is going to look pretty damn fine. It's still missing the point where you want each processor working optimally to maximise throughput. GROMACS (www.gromacs.org) is widely regarded as the fastest molecular dynamics code because of its heavy use of assembler-optimized inner loops. The glue is written in C, and this is comparable with NAMD's C++. The parallel decomposition is not the rate-limiting step in parallel MD - it's still heavily compute-bound, and that's where GROMACS still wins.

Re:In other news, programmer suicides up... (1)

OldChemist (978484) | more than 7 years ago | (#20179113)

Thanks for your comments. You make a good point that scaling alone is not enough if the code being scaled is inefficient. So you are probably aware that a good thing to check is how many "seconds" of some standard simulation can be done per computing unit. Although Gromacs is supposed to be the fastest gun in the West - and it is, on a single processor - it doesn't scale very well, at least in my experience. This may be due to the kind of machines I have access to. You may want to look at the Gromacs web site where some examples of scaling are listed: http://www.gromacs.org/content/view/26/39/ [gromacs.org] For "large" jobs, it is probably best to do some tests to decide which program to use. There are, of course, other reasons one might prefer NAMD (Schulten's system) to Gromacs, such as its integration with VMD (the Illinois group's graphics program) or the possibility of interacting with a simulation "on the fly" using a haptic device... Ciao, OC

Re:In other news, programmer suicides up... (0)

Anonymous Coward | more than 7 years ago | (#20165177)

I agree with many of your points, but I recall the words of some of the guys working on BG/L at Livermore who said that, in a sense, you face very different issues when talking about scaling on thousands of processors (let alone tens of thousands or even hundreds of thousands) than you do on small-scale parallel systems. Without a doubt, progress still needs to be made at the low end, but I'm glad to see it also being tackled at the high end - on the whole, we gain more knowledge this way.

Some might say it's all the same, but that would be a bit like saying microbiology and system ecology are the same - they have the same underlying principles, but the scales are completely different. Nanoengineering and civil engineering could perhaps be another example.

All that said, the system's usefulness isn't tied simply to its processor count - if I had a 32K processor system (for example), but my codes only scaled to 4K nodes, that still means I can run eight different datasets on 4K nodes each and fully utilize the system. Hardly a second rate tool if you have the need for cycles, and in these days, who doesn't?

Re:In other news, programmer suicides up... (1)

Vader82 (234990) | more than 7 years ago | (#20165333)

I'll have to disagree with you there. There are plenty of people who can think completely in parallel but are limited by programming languages and all the tedium. For example, the concept of "take this pile of rocks from here and get it there" isn't inherently serial. I'd say 99% of people can grasp that you can do one rock at a time (hands), 10 rocks at a time (shovel), 100 rocks at a time (wheelbarrow) or 10,000 rocks at a time (bulldozer). Maybe that's too simplistic for your tastes, but most concepts aren't severely more convoluted.

In my opinion, the biggest hindering factor is that we've got to make the threading explicit. I know I'm talking about the "mythical parallelizing compiler" here, but if you could say "factor all the numbers between 1 and 100 billion" instead of "for(i = 1; i < 100000000000; i++){ factor(i); }" we'd be tons better off. But what language lets you do that? None that I'm aware of, though using a functional language might make that easier.

At any rate, the first computer cost $1M and there was, according to IBM, only a market of about 10. Nobody was thinking about making things run in parallel for quite some time. Since no language has ever been designed from the ground up around the assumption of "this could run on anywhere from N processors down to 1," no language lets you easily capture, and thus exploit, parallelism.
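For what it's worth, directive-based tools get you partway there today. Below is a minimal sketch using OpenMP; factor() and factor_range() are hypothetical names invented for illustration, and the schedule clause is just one reasonable choice. The point is that the iterations are independent, so the same source can run on anywhere from one core to thousands - roughly the "N down to 1 hardware" property asked for above.

#include <omp.h>

void factor(long long n);   /* hypothetical; assumed to be defined elsewhere */

void factor_range(long long n_max) {
    /* iterations are independent, so the runtime may spread them across
       however many threads the hardware provides */
    #pragma omp parallel for schedule(dynamic, 1024)
    for (long long i = 1; i <= n_max; i++)
        factor(i);
}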

A single bulldozer is a serial device. (2, Insightful)

mosel-saar-ruwer (732341) | more than 7 years ago | (#20168557)


Throwing a bunch of rocks at a single bulldozer is a serial act.

The parallel problem is to get a fleet of 100 bulldozers or 1000 bulldozers or 10,000 bulldozers simultaneously attacking a pile of rocks so that:

A) The bulldozers aren't constantly colliding with one another, and

B) When the bulldozers back off to avoid colliding with one another, they aren't all just sitting around twiddling their thumbs, needlessly burning diesel fuel [not to mention "prevailing" union wages & time value of the loan which was used to purchase the bulldozers], while waiting in endlessly long lines until it's time for their turn [finally!] to take a whack at the pile of rocks, and so that

C) The inefficiencies of B) aren't so great that it's actually counterproductive to have introduced the extra bulldozers in the first place.

Re:In other news, programmer suicides up... (1)

Quadraginta (902985) | more than 7 years ago | (#20172875)

I'd say you illustrate my point, that thinking "in parallel" is unnatural and difficult.

First of all, your problem with the rocks is what we in the business call trivially parallelizable. You solve it like this:

#include <stdio.h>

int move_one_rock(void);   /* assume this is defined somewhere */

int main() {
    int n, result;

    printf("Enter number of rocks: ");
    scanf("%d", &n);

    result = move_one_rock();

    return(n * result);
}
Secondly, there are plenty of resources to let you program a trivial thing like unrolling a loop with no history dependencies. In fact, you don't even need to rewrite your code. If you have something like this:

for (long long i = 1; i < 100000000000LL; i++) {
    /* important point: this step does not depend on the results of any other step */
    factor(i);
}
(Note the index needs a 64-bit type: 100 billion overflows a 32-bit int.) You can just throw that puppy through any modern compiler capable of parallelizing and it will be parallelized at the machine code level. No programmer thought required. But this, too, is an example of a trivially parallelizable program. There's no need for processors to communicate with each other at intermediate stages in the factoring, for example.

The nature of a difficult and interesting problem for which you'd like a parallel solution is one in which (1) you have many degrees of freedom, which you've got in your examples, but (2) the degrees of freedom are all strongly coupled (influence each other), which neither of your examples has.

All interesting problems in many-body physics have this quality. For example, why do proteins fold up the way they do? The degrees of freedom are the positions and velocities of all the atoms in the system, protein as well as water molecules, and the strong interactions are chemical bonds and the nonbonded forces between atoms. Another example: how do we make machine vision that recognizes objects as quickly and reliably (under differing light conditions, et cetera) as the human eye/brain combination? The degrees of freedom are the color and intensity of the individual pixels, and the strong interactions are the fact that an object is defined by edges, shadows, et cetera, and each of these things is a certain arrangement of pixels.

If you think about it, I hope you'll realize that it is inherently very, very difficult to write good algorithms for solving these kinds of problems. The limitation, IMHO, having worked in this field for a while, is not the hardware, and not even the software, but the wetware -- our ability to dream up algorithms to solve these problems. We know they exist: the human brain has a clock speed of 1 kHz, max, but it can solve the object recognition problem faster than the fastest computer with any number of processors you like. How? We don't know. We can't even imagine, yet. And that's the true frontier.
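To make the "strongly coupled" point concrete: once the degrees of freedom influence each other, each processor owns only a patch of them and has to trade boundary values with its neighbors every single step. Here is a minimal sketch of a 1D ghost-cell ("halo") exchange in MPI; the array layout, NLOCAL, and the function name are illustrative assumptions, not anyone's actual code:

#include <mpi.h>

#define NLOCAL 1024   /* cells owned by this rank (illustrative) */

/* u has NLOCAL+2 entries: u[0] and u[NLOCAL+1] are ghost cells
   mirroring the neighbors' boundary values */
void exchange_halos(double *u) {
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int left  = (rank + size - 1) % size;   /* periodic neighbors */
    int right = (rank + 1) % size;

    /* send rightmost owned cell right; receive left ghost from the left */
    MPI_Sendrecv(&u[NLOCAL], 1, MPI_DOUBLE, right, 0,
                 &u[0],      1, MPI_DOUBLE, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    /* send leftmost owned cell left; receive right ghost from the right */
    MPI_Sendrecv(&u[1],        1, MPI_DOUBLE, left,  1,
                 &u[NLOCAL+1], 1, MPI_DOUBLE, right, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}

The more tightly coupled the problem, the more steps like this dominate the runtime - which is exactly what separates a machine like this from a pile of PCs.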

Re:In other news, programmer suicides up... (1)

MiniMike (234881) | more than 7 years ago | (#20165827)

There are more benefits to having a computer like this than being able to run 1 massive job. There may not be many problems which would use all of this at once, but there are dozens which can use maybe 5-10% of it. Or a model may use a small number of nodes, but need to be run multiple times with varying parameters. With a cluster like this, you can run them all at once. And maybe someone will figure out how to use the whole thing effectively, before it's obsolete!

Re:In other news, programmer suicides up... (1)

Quadraginta (902985) | more than 7 years ago | (#20173087)

That's a total waste of resources. The big cost in a machine like this is the lightning-speed interconnect between the processors and the fancy memory management that lets processors share memory in various ways. The cost of the processors themselves is, by comparison, trivial. Your kind of problem, a lot of small jobs running without knowledge of each other, is easily handled by a lot of small computers, and for a lot less money.

The only justification for a piece of hardware like this is a problem that needs all of the processors to even move forward. And it had better provide some truly surprising results, too, something you wouldn't have imagined had you simply solved a smaller system and scaled up in some obvious way. For example, we already know why argon freezes by simulating a few hundred atoms of the stuff. We learn nothing new by simulating a few hundred million atoms of argon.

Re:In other news, programmer suicides up... (0)

Anonymous Coward | more than 7 years ago | (#20168535)

Directly from the press release, "The system, to be allocated under normal TeraGrid policy, will permit researchers to use high-resolution, multiscale/multiphysics simulations for such tasks as studying the properties of proteins at the atomic scale; understanding the complexities of the brain; determining the fundamental properties of elementary particles; modeling natural disasters, and understanding the delicate balance of processes that are responsible for the global climate and its variation over time. The project includes several activities in education and outreach including efforts aimed at broadening the participation of women and minorities in science and engineering."

Re:In other news, programmer suicides up... (1)

Quadraginta (902985) | more than 7 years ago | (#20173261)

...aaaaand directly into the round file, with all the other glossy brochures written in marketspeak.

I've written quite enough of this fluff myself to be even a tiny bit impressed.

NSF (1, Funny)

Anonymous Coward | more than 7 years ago | (#20164717)

Why is the National Sanitation Foundation funding supercomputers?

http://www.nsf.org/ [nsf.org]

Re:NSF (1)

777v777 (730694) | more than 7 years ago | (#20171481)

Go look at the proposal. This machine is for the sole purpose of performing revolutionary computational science. They want scientific breakthroughs from this machine. You have to be trying for those types of problems to get any time on this machine, according to the CFP (I think).

Birth of HAL 9000 (2, Funny)

AP2005 (922788) | more than 7 years ago | (#20164727)

though it would be at least 6 years too late.

Kind of an Inside Joke (1)

jmcharry (608079) | more than 7 years ago | (#20164885)

Instead of Blue Water, which is singularly inappropriate for a university located 900 miles from the nearest, wouldn't Boneyard be more appropriate?

Re:Kind of an Inside Joke (0)

Anonymous Coward | more than 7 years ago | (#20165601)

I believe the name Blue Waters relates to the consortium of collaborating universities and research labs, most of which are near Lake Michigan....

Re:Kind of an Inside Joke (1)

Ritchie70 (860516) | more than 7 years ago | (#20165863)

Don't you mean "paved over drainage ditch"?

Re:Kind of an Inside Joke (1)

Thundersnatch (671481) | more than 7 years ago | (#20211483)

Instead of Blue Water, which is singularly inappropriate for a university located 900 miles from the nearest, wouldn't Boneyard be more appropriate?

I dunno... Lake Michigan is pretty freaking big, and pretty freaking blue. At least from my personal observations. It's only about 100 miles from UIUC.

And yeah, I know what "Blue Water" means in the Navy world, but then again, the Navy does a lot of training on Lake Michigan.

So, how much (1)

captnitro (160231) | more than 7 years ago | (#20164961)

.."Blue Waters," which is 500 times more powerful than today's typical supercomputers. The system is expected to go online in 2011.

But how much more powerful is it than supercomputers in 2011? :)

Re:So, how much (1)

PPH (736903) | more than 7 years ago | (#20165033)

Can it run Vista?

Re:So, how much (1)

counterfriction (934292) | more than 7 years ago | (#20165457)

I suppose, definitively, one.

Grant Check (1)

PPH (736903) | more than 7 years ago | (#20164983)

I took my grant check straight to the bank. They refused to cash it. When I asked why, they pointed out that it has N.S.F. written right on the front.

Petascale (1)

Duncan3 (10537) | more than 7 years ago | (#20165005)

Oh my, 1 PFLOPS... that's not [stanford.edu] that big anymore. 4 years from now they should be talking 20+ PFLOPS at least.

I'm very interested in their bandwidth numbers and architecture, which they do not mention.

Re:Petascale (1)

MadUndergrad (950779) | more than 7 years ago | (#20165533)

Well, they did say petascale. It could be, say, 10 or 20 PFLOPS.

Re:Petascale (1)

scheme (19778) | more than 7 years ago | (#20165663)

Oh my, 1 PFLOPS... that's not that big anymore. 4 years from now they should be talking 20+ PFLOPS at least.

There's a huge difference between a distributed system offering 1 PFLOPS and a tightly integrated system offering a fast interconnect and a petaflop of computing power. It's kind of like saying a semi truck isn't all that impressive because you have a fleet of cars with the same total storage capacity. That's great until you need to move a large container or block of stuff that can't be parceled out...

Re:Petascale (1)

Duncan3 (10537) | more than 7 years ago | (#20173055)

Of course they are completely and totally different things! Which is why I want to know bandwidth numbers and topology. But promising to do a PFLOP in 2011 for that kind of money is not good.

I'm sure we'll be hearing more, and it will be a very nice machine.

Re:Petascale (0)

Anonymous Coward | more than 7 years ago | (#20165689)

Does denigrating the work of others (work which, in all likelihood, you have no idea how to accomplish) give you an erection?

Re:Petascale (0)

Anonymous Coward | more than 7 years ago | (#20166547)

Nice troll, AC.

The parent poster should, of all people, know better than to make the kind of statement he did. Distributed computing and large-scale shared memory machines are not only suited for different tasks, they also scale much differently and require completely different code architectures. (When the proposals for this project went out, they were accompanied by numerous articles describing methods of converting existing scientific code to meet Petascale requirements.)

The parent poster designed and implemented some pretty impressive distributed computing projects; one would expect he'd refrain from snide comments based on an intentionally misleading premise. However, a quick google search will let you decide for yourself if that's his style or not.

Re:Petascale (1)

shots47s (538099) | more than 7 years ago | (#20167631)

The machine is supposed to be designed to give a sustained performance of 1 PFLOPS. The chart in the link you provided above shows peak performance. Most very efficient algorithms use roughly 20-40% of the peak performance of a machine, a problem that is exacerbated when one goes to large parallel systems. So the machine will have to have a peak performance that is much greater than its sustained performance in order to achieve this.
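For example, at 25% efficiency, sustaining 1 PFLOPS implies roughly 4 PFLOPS of peak hardware; even at an optimistic 40%, it would take 2.5 PFLOPS peak.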

Re:Petascale (1)

Duncan3 (10537) | more than 7 years ago | (#20173011)

Actually that chart is sustained as well.

supercomputers suck! (0, Offtopic)

yoprst (944706) | more than 7 years ago | (#20165041)

Mainframes are so much better for managing my pr0n collection

one could only hope (1)

thatskinnyguy (1129515) | more than 7 years ago | (#20165621)

Could you imagine a Beowulf Cluster of these? Something's gotta run Web 3.0!

Re:one could only hope (0)

Anonymous Coward | more than 7 years ago | (#20165697)

Could you imagine a Beowulf Cluster of these? Something's gotta run Web 3.0!

Or Vista, even!