
DOE Asks For 30-Petaflop Supercomputer

Soulskill posted about 2 years ago | from the go-big-or-go-home dept.

Supercomputing 66

Nerval's Lobster writes "The U.S. Department of Science has presented a difficult challenge to vendors: deliver a supercomputer with roughly 10 to 30 petaflops of performance, yet built from energy-efficient multi-core architecture. The draft copy (.DOC) of the DOE's requirements provides for two systems: 'Trinity,' which will offer computing resources to Los Alamos National Laboratory (LANL), Sandia National Laboratories (SNL), and Lawrence Livermore National Laboratory (LLNL) during the 2016-2020 timeframe; and NERSC-8, the replacement for the current NERSC-6 'Hopper' supercomputer, first deployed in 2010. Hopper debuted at number five on the Top500 list of supercomputers, and can crunch numbers at the petaflop level. The DOE wants a machine with between 10 and 30 times Hopper's performance, able to support a single compute job that could take up over half of the available compute resources at any one time."


2013 will be the year of Linux... (0)

Anonymous Coward | about 2 years ago | (#42562843)

...on 30-Petaflop Supercomputers!

mmh... (1)

BarfooTheSecond (409743) | about 2 years ago | (#42563199)

... I've heard rumors that Microsoft will participate in the tender and propose their "HPC" solution based on Windows 8 (well, it's for all computer platforms, they say)

Re:mmh... (0)

Anonymous Coward | about 2 years ago | (#42564139)

Will not happen; so many of the codes that run on these big machines are designed for a Linux environment. Windows 8 will never run on a DOE supercomputer.

Re:mmh... (1)

unixisc (2429386) | about 2 years ago | (#42566543)

If they are based on x64, they can. But the best way to do it would be to build them from Itanium, and then have either Debian or FreeBSD (preferably DragonFly BSD) running on it.

Re:mmh... (1)

alexandre_ganso (1227152) | about 2 years ago | (#42580173)

Hum, why BSD? He mentioned Linux not because it is the best solution ever (which it might or might not be), but because a lot of petaflop-capable code was written specifically to run on it, and because the big names (IBM, Cray) fully support it. In fact, I don't remember ever using a BSD-based supercomputer. The Top500 only shows one machine, at 0.1 petaflop, running a BSD-based OS. Search for OS here: []

Re:mmh... (1)

BarfooTheSecond (409743) | about 2 years ago | (#42567841)

Forgot to mention I was kidding :-)

Sure, Windows 8 will never run on such big machines. It will never run on my PC either, btw :-)

btw: is Windows kernel still limited to 64 cpus? (I'm not talking clusters and the like, but "single image")

Re:mmh... (0)

Anonymous Coward | about 2 years ago | (#42608831)

Windows Datacenter supports up to 64 sockets and 256 virtual processors.

However, Windows hardware requirements are not based on the capability of the OS but on the capability of the biggest machine they were able to test it with. So in reality, Windows Datacenter and HPC probably support much more than 64 sockets / 256 cores.

So . . . (1)

Anonymous Coward | about 2 years ago | (#42562917)

What are they going to use this machine for? Hopefully it's not Skyrim.

Re:So . . . (0)

Anonymous Coward | about 2 years ago | (#42563119)

What are they going to use this machine for? Hopefully it's not Skyrim.

Whatever, it'll be cheaper than an iMac anyways.

Re:So . . . (4, Informative)

godrik (1287354) | about 2 years ago | (#42563161)

These machines are most likely going to be replacements for the ones we already have. NERSC presents the projects that run on its computing infrastructure on its web site [1]. You can see on the first page the projects that are currently running jobs and what they are doing. For instance, this project [2] is about designing artificial photosynthetic cells. If you are interested, just check the projects they are funding.

[1] []
[2] []

Re: artificial photosynthetic cells (3, Insightful)

neonsignal (890658) | about 2 years ago | (#42564485)

Are you really claiming that a computer being run by Los Alamos and called Trinity [] is primarily going to be used for alternative energy?

Re: artificial photosynthetic cells (3, Funny)

AHuxley (892839) | about 2 years ago | (#42564753)

Little Red Blogger from the Hood asks:
"What a deep budget you have," ("The better to educate you with"),
"Goodness, what big networks you have," ("The better to save you taxes by networking with"),
"And what big transformers you have!" ("The faster to compute for you with"),
"What big results you have," ("The better to nuke you with!")

Re:So . . . (2)

Orp (6583) | about 2 years ago | (#42567545)

I am an early user on the Blue Waters petaflop machine. Mean time to failure for such a huge machine becomes a real issue when you have about 700,000 cores, who knows how many spinning hard drives, and all that network infrastructure. However, my research collaborators and I have managed to get jobs through that take on the order of 12 hours of wallclock time without a hardware fault, which is amazing IMO. I do wonder whether we can simply continue to expand the same basic computing infrastructure to a 10-20 PFLOPS machine. There will have to be redundancy built into the hardware of any such machine, so that if a compute node goes offline its work is seamlessly offloaded to another. Writing fault-tolerant massively parallel code is possible but very challenging, and most scientists won't or can't do it.
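Back-of-the-envelope, the MTBF worry above follows directly from treating node failures as independent: the system-level MTBF shrinks linearly with component count. A quick sketch, with purely illustrative numbers (not Blue Waters specs):

```python
# If node failures are independent and roughly exponential, the
# system-level MTBF is just the per-node MTBF divided by node count.
# The numbers below are hypothetical assumptions for illustration.

def system_mtbf_hours(node_mtbf_hours: float, num_nodes: int) -> float:
    """Mean time between failures of the whole system."""
    return node_mtbf_hours / num_nodes

node_mtbf = 50 * 365 * 24   # assume each node fails about once in 50 years
nodes = 25_000              # a machine on the order of Blue Waters

print(system_mtbf_hours(node_mtbf, nodes))  # roughly 17.5 hours between failures
```

So even with very reliable individual nodes, a full-scale machine sees a failure every few hours, which is why a 12-hour fault-free run is impressive.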

Re:So . . . (5, Informative)

Raul654 (453029) | about 2 years ago | (#42563243)

Back when I worked for the Supercomputing group at Los Alamos, the supercomputers were categorized into 'capacity' machines (the workhorses where they did most of the work, which typically run at near full utilization) and 'capability' machines (the really big / cutting-edge / highly unstable machines that exist to push the edge of what is possible in software and hardware; one example application would be high-energy physics simulation). It sounds like these machines fall into the latter category.

Re:So . . . (1)

religious freak (1005821) | about 2 years ago | (#42564123)

I don't know. Personally whenever I see machines with specs like these I get the idea that the only practical application would be advanced AI.

Yes, I know the NNSA and others use this type of hardware to simulate physical environments and nuclear events but I just can't help but think there's a pretty good possibility our government is racing toward advanced AI systems. These computer folks are some of the best in the world and they know as well as anyone what an advancement in weapons tech an AI would be. At least that's what I'd be telling my superiors if I were working in these high computing departments.

Re:So . . . (0)

Anonymous Coward | about 2 years ago | (#42572425)

You can publicly see what workloads are running on NERSC-6 in realtime here:

Re:So . . . (1)

alexandre_ganso (1227152) | about 2 years ago | (#42585913)

There is a lot of different research that benefits from these kinds of machines. Mind you, the machine will hardly be running a single program at 30 PFLOPS scale, but instead dozens of smaller jobs at the same time, and economy comes with the scale. It's simpler to scale your job from 10,000 processors to 1 million on the same machine than to run the smaller job at one site and then port it to the big one. Besides, give people 30 PFLOPS and the physics, math, and biology departments will ask for 50 :)

Re:So . . . (1)

IAmR007 (2539972) | about 2 years ago | (#42564957)

Even if the names don't get changed, they still get upgraded a lot. The power costs are so significant (several million dollars a year) that running a system that's more than a couple of years old is completely unfeasible. For example, I have an account on HECToR, which has gone through four upgrades since it was first built in 2008 (one of them a two-part upgrade): 11,328 2.8 GHz cores, to 22,646 2.3 GHz cores, to 44,544 cores, to 90,112 2.3 GHz cores (with RAM upgrades along the way, for a total of 90 TB now).

Re:So . . . (1)

davester666 (731373) | about 2 years ago | (#42563259)

C&C for SkyNet!

They just discovered (4, Funny)

Janek Kozicki (722688) | about 2 years ago | (#42562929)

bitcoins? :)

Re:They just discovered (-1, Flamebait)

steelfood (895457) | about 2 years ago | (#42564891)

It's a hedge against the U.S. dollar, what with four more years of Obama printing money like it's made of paper.

Re:They just discovered (0)

Anonymous Coward | about 2 years ago | (#42566715)

That was so blindingly ignorant and off topic you should be both ashamed and nervous for your personal property.

Department of Science? (4, Informative)

Shag (3737) | about 2 years ago | (#42562939)

Oh, if only science were elevated to Department status, with a cabinet-level secretary!

I think you mean Department of Energy [] , Office of Science [] .

Re:Department of Science? (0)

Anonymous Coward | about 2 years ago | (#42564605)

This isn't really wrong - it's common within the government to refer to bureaus within a department as "Department of..." (even if that isn't their official name). Each agency has its own conventions for it.

Re:Department of Science? (1)

ILongForDarkness (1134931) | about 2 years ago | (#42564967)

Because clearly energy, which is a part of science, is more important than all of science :)

Re:Department of Science? (1)

SpaceCracker (939922) | about 2 years ago | (#42569331)

Congressmen have already decided on creating the Department of Science. They're just looking for intelligent designers to do the job...

requirements (0)

Anonymous Coward | about 2 years ago | (#42562961)

It also needs to make a good cup of coffee.

30 petaflops? (0)

Anonymous Coward | about 2 years ago | (#42562995)

Ha......Not even enough to run windows.

GPU for peta flops (0)

Anonymous Coward | about 2 years ago | (#42562997)

So how many GPU for a Peta Flop ?

Re:GPU for peta flops (1)

GiganticLyingMouth (1691940) | about 2 years ago | (#42563155)

According to NVIDIA, the peak single-precision floating point throughput of a Tesla K20X is 3.95 TFLOPS. If, for simplicity, we round up to 4 TFLOPS and assume that it operates at peak efficiency (which is unlikely), you'd need 250 of them. Of course, that also assumes linear scaling with the number of processing units (also unlikely). And this would be for 1 PFLOPS, mind you; the DOE wants 10-30.

So how many GPU for a Peta Flop ?

Not to be overly pedantic, but PFLOPS means 10^15 FLOP/sec, so saying "Peta Flop" doesn't make sense - that would just be 10^15 floating point operations, which without a timescale (seconds) is meaningless.
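The GPU-count arithmetic above can be sketched as follows, using NVIDIA's quoted 3.95 TFLOPS peak rather than the rounded 4 (a hypothetical back-of-the-envelope, assuming perfect linear scaling, not a real system sizing):

```python
# How many Tesla K20X-class GPUs would a given peak single-precision
# target require? Assumes perfect scaling, which real codes never get.
import math

K20X_TFLOPS = 3.95  # NVIDIA's quoted peak single-precision throughput

def gpus_needed(target_pflops: float) -> int:
    """Minimum GPU count to reach target_pflops at peak rate."""
    return math.ceil(target_pflops * 1000 / K20X_TFLOPS)

print(gpus_needed(1))   # 254 GPUs for 1 PFLOPS peak
print(gpus_needed(30))  # 7595 for the DOE's upper target
```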

Re:GPU for peta flops (0)

Anonymous Coward | about 2 years ago | (#42563455)

Ridiculous, it has always been measured in flops/sec, that's a given. Your comment is meaningless.

Re:GPU for peta flops (1)

GiganticLyingMouth (1691940) | about 2 years ago | (#42564129)

From Wikipedia []: FLOPS stands for "FLoating-point Operations Per Second", also written flop/s. The /sec is part of the acronym. Hence "FLOPS/sec" would be floating point operations per second per second, which probably isn't what you meant. Likewise, "FLOP" denotes the execution of one floating point operation; there's no time metric, and it is equally meaningless.

Have I got the supercomputer for you! (0)

Anonymous Coward | about 2 years ago | (#42563007)

Well if you need to use my E7500+5570 you can but you're going to have to pay me in better hardware.

How about a (1)

saphena (322272) | about 2 years ago | (#42563109)

Large cluster of Raspberry Pis

Re:How about a (4, Informative)

VorpalRodent (964940) | about 2 years ago | (#42563189)

This is Slashdot - I believe the meme you're looking for is "Beowulf cluster", and such a cluster of these things would probably even meet the recommended specs for Crysis.

Re:How about a (1)

mug funky (910186) | about 2 years ago | (#42563351)

but does it run linux?

Re:How about a (0)

Anonymous Coward | about 2 years ago | (#42564025)

No it doesn't, you snob... It runs on clustered NeXTs developed by Apple's Steve-J; before that it was the Amiga.

Jest aside - 30 PFLOPS is a bucket-load of power; hell, Watson is only ~80 TFLOPS and beat Ken.

Re:How about a (1)

alexandre_ganso (1227152) | about 2 years ago | (#42585971)

Mostly because of the network. Although the CPUs (or the whole system, in the case of the Pi) are cheap, the inter-node communication is SLOW. And this gets worse with scale. So what starts out bad (the Pi's network) gets much worse at larger scale.

That said, it is an excellent test bed for teaching parallel computing - EXACTLY because it scales so badly, the bad effects are exaggerated.

Yes, but does it run Linux? (0)

Anonymous Coward | about 2 years ago | (#42563219)

Do they want it to run that Icky Linux (again!) or do they want it to run windows? We are all very eager to find out about this one!

I just love that 10-30 range. So trivial... (0)

Anonymous Coward | about 2 years ago | (#42563231)

DOD: We would like to place an order for a few trillion dollars' worth of jets...
COMPANY: How many would you like?
DOD: Oh, not sure... Say, 10-30 should do us fine... Do you have them in bright blue? We like bright blue.

DOE already has 2 of them... (2)

WhitePanther5000 (766529) | about 2 years ago | (#42563297)

27 Petaflops at Oak Ridge
20 Petaflops at Lawrence Livermore []

Make that 3 of them (1)

WhitePanther5000 (766529) | about 2 years ago | (#42563317)

If their bottom line is 10 Petaflops, they have a 10 Petaflop one at Argonne too.

Isn't New Mexico selling a supercomputer? (1)

dr_leviathan (653441) | about 2 years ago | (#42563323)

Maybe the DOE should bid on that supercomputer being liquidated [] by the US state of New Mexico.

Re:Isn't New Mexico selling a supercomputer? (2)

l0ungeb0y (442022) | about 2 years ago | (#42563735)

Yeah -- but that is only spec'd at 172 TFlops, a long way away from 30 PFlops.

Not a problem, buy one today (1)

Henriok (6762) | about 2 years ago | (#42563383)

What's the problem? They can buy one that fits their needs today. A variety of designs that will deliver this kind of performance are available right now from the likes of Cray and IBM.

Re:Not a problem, buy one today (0)

Anonymous Coward | about 2 years ago | (#42564379)

I'm not sure those current designs can provide the required sustained performance and scalability under the magical 20 MW power envelope.
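The 20 MW figure pins down the required efficiency directly: 30 PFLOPS in 20 MW works out to 1.5 GFLOPS per watt sustained. A quick check (simple unit arithmetic, not a claim about any vendor's hardware):

```python
# Efficiency implied by a performance target under a power cap:
#   (PFLOPS * 1e15 FLOP/s) / (MW * 1e6 W), expressed in GFLOPS/W.

def required_gflops_per_watt(pflops: float, megawatts: float) -> float:
    """GFLOPS per watt needed to hit pflops within megawatts."""
    return (pflops * 1e15) / (megawatts * 1e6) / 1e9

print(required_gflops_per_watt(30, 20))  # 1.5 GFLOPS/W sustained
print(required_gflops_per_watt(10, 20))  # 0.5 GFLOPS/W for the low end
```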

Re:Not a problem, buy one today (1)

dutchwhizzman (817898) | about 2 years ago | (#42565333)

It's not that such machines don't already exist; it's that they have to spend government money. Since they are spending that money, they have to get the taxpayers their money's worth, so they put out a "tender" so suppliers can compete to offer the best deal. To prevent the personal preferences of people in power, bribes, and such, tenders are usually rather strict in their requirements and procedures. This is a lot of tax money, so it gets a lot of attention. Your local community probably puts out tenders for a contract to fix the holes in the road each year, or to put a new roof on the public school three streets down. Those don't make it to Slashdot, but they work the same way.

this means the NSA already has one (1)

decora (1710862) | about 2 years ago | (#42563451)

and we won't learn about it until James Bamford writes another book. . .

Re:this means the NSA already has one (1)

NothingMore (943591) | about 2 years ago | (#42564585)

There are already three at national labs that are > 10 PFLOPS: ORNL's Titan, LLNL's BG/Q, and ANL's BG/Q. The story behind this is that they want it cheaper than the crazy price tags those machines carry.

Re:this means the NSA already has one (1)

AHuxley (892839) | about 2 years ago | (#42565317)

At 200 MW? A 65-megawatt 'power' upgrade seems to be quoted in some press too. Take off some power for the protester-zapping electric fence and see how many chips you can power and cool in that MW range :) []

Petaflops (0, Redundant)

eulernet (1132389) | about 2 years ago | (#42563809)

10 petaflops is the minimum to run Windows 8 smoothly.

They ask for 30 petaflops, probably to run at least 3 other processes.

Re:Petaflops (1)

asifyoucare (302582) | about 2 years ago | (#42566089)

Ten petaflops for the OS.

Ten petaflops for McAfee.

Ten petaflops for the actual work.

They want to crack all your aes encryption (0)

Anonymous Coward | about 2 years ago | (#42564075)

say no

Puts the core in politically core-rect. (1)

Impy the Impiuos Imp (442658) | about 2 years ago | (#42564179)

Fastest on earth, "yet filled with energy-efficient multi-core architecture." :rolleyes

These are at cross-purposes. Do they want fastest on Earth, or pretty fast, but efficient, which is already driven by market mechanisms?

"Hey! Multi-core and multi-cultural both have 'multi' in it! Can we have multi-cultural architecture, too? How much extra is that?"

Re:Puts the core in politically core-rect. (2)

bazmonkey (555276) | about 2 years ago | (#42564675)

Fastest on earth, "yet filled with energy-efficient multi-core architecture." :rolleyes

These are at cross-purposes. Do they want fastest on Earth, or pretty fast, but efficient, which is already driven by market mechanisms?

No, it's not. Today's supercomputers are thousands upon thousands of times faster than those of decades past, but are NOT taking up thousands of times more space or electricity.

Hopper is 16,000 nodes and two PFLOPS. Cray can't just make 10 of them, put 'em together, and consider the order filled. Efficiency is a LOT of the challenge in building the world's fastest computers.

System Architecture (2)

Required Snark (1702878) | about 2 years ago | (#42564501)

They have a comparatively large number of existing codes (around 600) that run with no GPU acceleration. They want to keep this code base without modifying it much, so they are not going to use any coprocessors that aren't integrated with the CPU. According to the article:

They could have built such a machine, but it would have required either discrete accelerators (a programming model they would rather skip) or something more proprietary like the Blue Gene platform (an architecture they have avoided). The hope is that by 2015, they will be able to get something on the exascale roadmap, but with a programming model that is reasonably friendly to CPU-based codes.

That most likely means integrated heterogeneous processors like NVIDIA's "Project Denver" ARM-GPUs, AMD's x86-GPU APUs, or whatever Intel brings to the table with integrated Xeon Phi coprocessing. Although more complex than a pure CPU solution from a software point of view, the integrated designs at least avoid the messy PCIe communication and the completely separate memory space of the accelerator device.

Note that one of the possible contenders is ARM with an integrated GPU. Slashdot readers are generally hostile to the idea of ARM for servers or HPC, but it is going to happen. Making the Top 100 list in the future will require more and more attention to FLOPS per watt, and ARM has a basic advantage over the legacy-oriented x86 architecture. Being dismissive of ARM is just as much a fanboy attitude as being rabidly for any other architecture.

Re:System Architecture (1)

alexandre_ganso (1227152) | about 2 years ago | (#42586113)

Don't forget that x86 comprises five of the top 10, with the rest being PowerPC-based (BG/Q and POWER7). Other contenders have a much better chance in this market than in, say, the workstation market.

Titan (1)

cmdr_tofu (826352) | about 2 years ago | (#42564623)

Am I wrong in thinking that this is not dramatically faster than Titan (27 PFLOPS peak)? []

The specifications in the doc are interesting nonetheless!

This Has To Be A Joke ! (0)

Anonymous Coward | about 2 years ago | (#42564851)

Really ?

What in the fuck would DoE do with such a machine?

1) arrive at the Blue Screen Of Death in a nanosecond

2) be able to upgrade to Windows 7 from Windows XP (their fabled draft is .DOC)

3) perhaps do some service to the DoD and provide Pentascale Toilet Internet Services on the new Dreadnoughts, of which DoD originally planned for 34 but now can only afford 3.


Sad But True

How do you keep hardware from killing your work? (1)

PeterM from Berkeley (15510) | about 2 years ago | (#42565081)

A question I wonder about: I guess "10-30 petaflops" of a standard multi-core architecture would require > 1M computational cores. Suppose you're running a code on 500,000 cores.

What is the mean time to failure of a core, or of some other piece of hardware required for that core to work? With 500k cores, I'd expect one to die every few hours or even minutes. Either that, or a random bit-flip from a cosmic ray.

Given that, how do you finish a computation that takes more than an hour or so? And how do you guarantee the integrity of that computation?

Do you have three cores working every piece of the problem and have them "vote" on the result, taking the majority opinion in the INEVITABLE case of disagreement? That sort of implies that to have 10 petaflops of effective computing power, you need 30 petaflops of actual computing power, plus all the overhead/hardware required for voting.

Integrity of the results for such large computations just seems like a very difficult problem!
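In practice, checkpoint/restart rather than triplicated voting is the usual answer to the question above: periodically save state to disk and roll back after a failure. Young's classic approximation gives a near-optimal checkpoint interval from the checkpoint cost and the system MTBF. A sketch with purely illustrative numbers:

```python
# Young's approximation for the near-optimal checkpoint interval:
#   tau ~= sqrt(2 * checkpoint_cost * system_mtbf)
# The numbers below are illustrative assumptions, not real machine specs.
import math

def optimal_checkpoint_interval(checkpoint_minutes: float,
                                mtbf_hours: float) -> float:
    """Near-optimal time between checkpoints, in hours."""
    return math.sqrt(2 * (checkpoint_minutes / 60) * mtbf_hours)

# Assume writing a checkpoint takes 10 minutes and the full machine
# fails about once every 8 hours.
tau = optimal_checkpoint_interval(10, 8)
print(round(tau, 2))  # ~1.63 hours between checkpoints
```

The trade-off is intuitive: checkpoint too often and you waste time writing state; too rarely and you lose too much work per failure.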


Re:How do you keep hardware from killing your work (0)

Anonymous Coward | about 2 years ago | (#42566169)

You assume that they spend all this money to run a computer program that does something useful, and the accuracy is actually important to anyone.

I hope DoE will -model- LFTRs, to speed approvals (1)

ivi (126837) | about 2 years ago | (#42566047)

Liquid Fluoride THORIUM Reactors (LFTRs) could get a leg up toward earlier construction approvals if DoE puts some supercomputers to the task of modeling them mathematically, e.g., to help bring them on-line sooner.

Or... we can let India and/or China do all that... and buy the completed technology from them after they've done it.

Re:I hope DoE will -model- LFTRs, to speed approva (1)

alexandre_ganso (1227152) | about 2 years ago | (#42586411)

Another good thing is that by having these more "friendly" reactors, you can power more supercomputers! It's a win-win situation

embarrassing (1)

znrt (2424692) | about 2 years ago | (#42570033)

The U.S. Department of Science issues a public document in an obsolete, closed, proprietary document format? Come on. Even the most clueless consultant in business uses at least .docx. It's even the default, ffs.

Go sell them a darned coffee machine with a "100peta!" sticker on it; they won't tell the difference.

Double standards (1)

Cinnaman (954100) | about 2 years ago | (#42571139)

It's pretty cynical that western governments want to tax harmless carbon dioxide (e.g. in Australia) and limit our energy consumption by constantly jacking up the rates, yet build extremely power-hungry installations to crunch all the data needed to surveil citizens and build profiles of them.

How much of the universe can it simulate? (1)

elucido (870205) | about 2 years ago | (#42572581)

If it's less than that of a human hair, they will need more processing power.
