
10-Petaflops Supercomputer Being Built For Open Science Community

Soulskill posted about 3 years ago | from the go-big-or-go-home dept.


An anonymous reader tips news that Dell, Intel, and the Texas Advanced Computing Center will be working together to build "Stampede," a supercomputer project aiming for peak performance of 10 petaflops. The National Science Foundation is providing $27.5 million in initial funding, and it's hoped that Stampede will be "a model for supporting petascale simulation-based science and data-driven science." From the announcement: "When completed, Stampede will comprise several thousand Dell 'Zeus' servers with each server having dual 8-core processors from the forthcoming Intel Xeon Processor E5 Family (formerly codenamed "Sandy Bridge-EP") and each server with 32 gigabytes of memory. ... [It also incorporates Intel 'Many Integrated Core' co-processors,] designed to process highly parallel workloads and provide the benefits of using the most popular x86 instruction set. This will greatly simplify the task of porting and optimizing applications on Stampede to utilize the performance of both the Intel Xeon processors and Intel MIC co-processors. ... Altogether, Stampede will have a peak performance of 10 petaflops, 272 terabytes of total memory, and 14 petabytes of disk storage."
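A quick sanity check on the announced figures: the exact node count isn't stated ("several thousand"), but dividing the total memory by the 32 GB per server implies roughly 8,700 nodes. Everything below is inference from the announcement's own numbers, not an official count.

```python
# Back-of-envelope check on the announced Stampede figures.
# The server count is inferred from total memory / memory per server,
# since the announcement only says "several thousand" -- an estimate,
# not an official figure.

TOTAL_MEMORY_GB = 272 * 1024       # 272 TB of total memory
MEMORY_PER_SERVER_GB = 32          # per the announcement
CORES_PER_SERVER = 2 * 8           # dual 8-core Xeon E5 per server

servers = TOTAL_MEMORY_GB // MEMORY_PER_SERVER_GB
xeon_cores = servers * CORES_PER_SERVER

print(f"implied servers: {servers}")        # -> implied servers: 8704
print(f"implied Xeon cores: {xeon_cores}")  # -> implied Xeon cores: 139264
```

The Xeon cores alone would fall well short of 10 petaflops; most of the peak comes from the MIC co-processors.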




First post! (-1, Offtopic)

GameboyRMH (1153867) | about 3 years ago | (#37494060)

My First Posting through Fast Reactions experiment is a success!

Would sound more impressive... (1)

Moheeheeko (1682914) | about 3 years ago | (#37494068)

If they used AMD 16-core processors, it'd be a Stampede of Bulldozers.

Re:Would sound more impressive... (1)

GameboyRMH (1153867) | about 3 years ago | (#37494088)

If they used power-efficient ARM CPUs it could have been a Stampede of Hummingbirds.

Re:Would sound more impressive... (1)

fuzzyfuzzyfungus (1223518) | about 3 years ago | (#37494284)

I cringe at the amount of interconnect silicon that clustering such comparatively lightweight processors would require. The 32-bit address space would no doubt be a hit, as well...

Re:Would sound more impressive... (2)

Junta (36770) | about 3 years ago | (#37494326)

Don't bring technology concerns into a decision based on the neatest sounding name.

Re:Would sound more impressive... (1)

SuricouRaven (1897204) | about 3 years ago | (#37496206)

On the upside, much easier on power and cooling. x86 can win on performance-per-cycle, but ARM still wins on performance-per-watt.

Re:Would sound more impressive... (1)

fuzzyfuzzyfungus (1223518) | about 3 years ago | (#37496652)

Yeah, I'd just be curious to see the performance/watt numbers once you factor in all the assorted glue silicon required to get the mess talking to itself.

High speed network interconnects can get a little toasty themselves, and the amount of glue logic/core would be rather higher with the smaller, fewer-cores-per-socket ARM beasties.

They might still win, I don't have numbers one way or another; but networking isn't free...(it would be interesting, of course, to see some ARM HPC design that fabbed a zillion cores onto a die the size of a Xeon, with very fast networking between them; but a just-a-bunch-of-SoCs design might be pretty tepid.)

Re:Would sound more impressive... (1)

SuricouRaven (1897204) | about 3 years ago | (#37497084)

Depends how much interconnect you need. Some tasks need hardly any, while others can saturate multi-gigabit links with ease. As this is a general-purpose supercomputer, it'll have to be specced to handle the worst of loads... so high-capacity interconnect of some form.

An extreme case would be brute-force crypto, in which the inter-node traffic is so low the entire supercomputer could be quite easily built on 10base2.

Looks like a cluster (3, Insightful)

LordAzuzu (1701760) | about 3 years ago | (#37494100)

Not a supercomputer

Re:Looks like a cluster (1)

GameboyRMH (1153867) | about 3 years ago | (#37494186)

In there a distinct difference between the two?

Re:Looks like a cluster (1)

GameboyRMH (1153867) | about 3 years ago | (#37494210)

Don't ask me how I hit the N on the other side of the keyboard -_-

Re:Looks like a cluster (2)

fuzzyfuzzyfungus (1223518) | about 3 years ago | (#37494342)

Because the best available CPUs are only so fast, and logic boards only so large, both supercomputers and clusters end up being lots-and-lots-of-cards-connected-with-some-mixture-of-backplanes-and-cables at some point.

There's a smooth-ish order of progression in terms of interconnect speed and latency (i.e. SETI@home is a cluster, but inter-node bandwidth is tiny and latency can be in the hundreds of milliseconds; a cheapo commodity cluster using the onboard GigE ports has better bandwidth and lower latency; Myrinet or InfiniBand better again, but more expensive; certain proprietary fabrics tighter still, if even more expensive).

The sharp dividing line, though, is probably whether or not the system runs (or at least is capable of running; some may be carved up for sharing purposes) a single system image.

In this cluster, it sounds like each 2-socket node boots up like a standard computer, and then starts chatting over the network. In a single system image setup, all the CPUs and RAM are visible as a unified address space and collection of cores. Under the hood, there may be a lot of chatter going over cables rather than over a single logic board; but, so far as the software is concerned, it is all one computer.

Re:Looks like a cluster (1)

multimediavt (965608) | about 3 years ago | (#37499358)

SETI@home, although an embarrassingly parallel task, is not a cluster. Each client processes independent discrete data irrespective of the results of another client. There is no MPI, so all you have is a bunch of machines running the same serial software on different data. Clusters can be used for such a thing, but it's a horrible waste of money on interconnects, as there is no message passing. It's like saying a computer lab with all the same software on the machines is a "cluster" because all the machines are on a network. Nope. Doesn't work that way.
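The pattern described here can be sketched as follows; `analyze` is a hypothetical stand-in for a client's work (SETI@home's real signal analysis is of course far more involved). Every work unit is independent, so workers never exchange messages and no interconnect is exercised.

```python
from multiprocessing import Pool

def analyze(chunk):
    # Stand-in for crunching one independent work unit.
    return sum(chunk) % 97

if __name__ == "__main__":
    # Discrete, independent data: no result depends on another client's.
    work_units = [list(range(i, i + 100)) for i in range(0, 1000, 100)]
    with Pool(4) as pool:
        results = pool.map(analyze, work_units)  # zero inter-worker traffic
    print(results)
```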

Re:Looks like a cluster (1)

David Greene (463) | about 3 years ago | (#37495438)

Yes, the network.

Re:Looks like a cluster (1)

Lunix Nutcase (1092239) | about 3 years ago | (#37494216)

I know you're trolling but most supercomputers these days are computing clusters.

Re:Looks like a cluster (1)

Anonymous Coward | about 3 years ago | (#37494360)

From Wikipedia:

Today, parallel designs are based on "off the shelf" server-class microprocessors, such as the PowerPC, Opteron, or Xeon, and coprocessors like NVIDIA Tesla GPGPUs, AMD GPUs, IBM Cell, FPGAs. Most modern supercomputers are now highly-tuned computer clusters using commodity processors combined with custom interconnects.

Re:Looks like a cluster (1)

carnivore302 (708545) | about 3 years ago | (#37494746)

Imagine a beowulf cluster of these clusters!

Re:Looks like a cluster (1)

galanom (1021665) | about 3 years ago | (#37503640)

Yeah, imagine how many fps Quake II will achieve!

Explain (1)

multimediavt (965608) | about 3 years ago | (#37499306)

[title] Looks like a cluster [/title]

Not a supercomputer

Are you saying this because it is not a single system image, shared memory machine or because you just don't think distributed memory clusters are supercomputers?

I ask because I have built supercomputers and I find your comment puzzling, at best.

Re:Looks like a cluster (0)

Anonymous Coward | about 3 years ago | (#37501324)

Perhaps not in some people's strict use of the term, but a large portion of the TOP500 list has this HPC architecture.

Obligatory (0)

Anonymous Coward | about 3 years ago | (#37494152)

Obligatory comment saying to just buy $27 million of Amazon EC2 time.

Re:Obligatory (2)

hawguy (1600213) | about 3 years ago | (#37494568)

Assuming you want to keep all of your compute nodes busy all the time, EC2 is not a good value.

They say they'll have several thousand servers. I don't know what a Zeus server is, but let's assume it's a 1U, 2 socket server and that they'll have 2000 of them. That will give them 2000 * 2 * 8 = 32,000 cores of CPU.

That's equivalent to 32000 / 4 = 8000 Amazon EC2 Quadruple Extra Large instances. Spot pricing right now matches Reserved instance pricing, $0.56/hour, so for $27M, they can get $27M / 8000 / 0.56 = 6026 hours, or 251 days of equivalent compute power.

If each server (plus network + storage/backup) costs $10,000 (A dual CPU 6 Core Xeon X5675 Dell R410 costs $5K retail), you've spent $20M on hardware. You'll need 50 42U racks to house your servers. Budget $1000/month for each rack, or $50K/month on coloc fees. So in one year you're spending around $600K in coloc fees, leaving $6.4M leftover for salaries and other overhead. (you'll end up needing a few extra racks to hold storage and network gear plus miscellaneous non-compute node servers)

So, $27M on EC2 gets you around 8 months of compute time. $27M in hardware gets you a full year of compute time and next year "only" costs you $600K excluding salaries.

Amazon is only a great deal if you're small enough to not want to manage your own servers, or your demand is variable and you can avoid paying for unused computing capacity that is only there to handle peak loads.
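The back-of-envelope comparison above can be reproduced as plain arithmetic; every input is the comment's own assumption (2,000 nodes, a Quadruple Extra Large counted as 4 cores' worth, $0.56/hour), not an official figure.

```python
BUDGET_USD = 27_000_000
NODES = 2000                          # assumed node count
CORES = NODES * 2 * 8                 # dual 8-core sockets -> 32,000 cores
EC2_INSTANCES = CORES // 4            # Quadruple XL equivalents -> 8,000
RATE_PER_HOUR = 0.56                  # assumed spot/reserved price

hours = BUDGET_USD / EC2_INSTANCES / RATE_PER_HOUR
days = hours / 24

print(f"about {days:.0f} days of equivalent EC2 compute for the budget")
```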

Re:Obligatory (1)

rgbatduke (1231380) | about 3 years ago | (#37495378)

Well said, sir! Now, if you can only build a small script that will repost that automatically to /. whenever somebody claims that EC2 is a good deal for tasks that will, in fact, keep all of your compute nodes busy all of the time (since your argument scales rather well)...


What is a Dell 'Zeus' server? (3, Informative)

hawguy (1600213) | about 3 years ago | (#37494170)

The article mentions that it's using Dell 'Zeus' servers, but the only information I can find about those servers online is that they are being used to build this cluster.

What is a Dell 'Zeus' server?

Re:What is a Dell 'Zeus' server? (1)

the linux geek (799780) | about 3 years ago | (#37494348)

A server with the new Sandy Bridge Xeons and the MIC Larrabee coprocessors.

Re:What is a Dell 'Zeus' server? (1)

Baloroth (2370816) | about 3 years ago | (#37494498)

Judging from the name, it's a server that shoots sparks and sleeps around a lot.

Stop mumbling (0)

Anonymous Coward | about 3 years ago | (#37494716)

it's a server that shoots sparks and sleeps around a lot

If it's a windows server, then just come out and say it.

Re:What is a Dell 'Zeus' server? (1)

Anonymous Coward | about 3 years ago | (#37495234)

It's a codename for a server based on the Xeon E5 processors, which aren't currently announced or generally available.

Re:What is a Dell 'Zeus' server? (1)

SCVirus (774240) | about 3 years ago | (#37497374)

It's a server used in the Stampede cluster.

Computers? In texas? (-1)

Anonymous Coward | about 3 years ago | (#37494184)

Wait, so they finally got computers in Texas?

That's some impressive HW (-1)

Anonymous Coward | about 3 years ago | (#37494208)

10 petaflops, 272 terabytes of total memory, and 14 petabytes of disk storage.

That's almost enough to install and run Vista!

Re:That's some impressive HW (1)

cat5 (166434) | about 3 years ago | (#37494390)

OK, OK.. I'll bite: But can it run Crysis... and imagine a Beowulf clust... nevermind!

What happens to 'old' supercomputers? (1)

hsmyers (142611) | about 3 years ago | (#37494416)

While I applaud (and always do) advances in supercomputers, it raises the question of what happens to the previous generation(s). I'd love to get my hands on even one of the blade-based boxes in your usual configuration. Might not be good for the projected tasks in modern proposals, but they would be more than good enough for my modest needs. Anyone know how the surplus process works?

Re:What happens to 'old' supercomputers? (1)

danbuter (2019760) | about 3 years ago | (#37494548)

I wouldn't be surprised if they are destroyed, especially if they have ever been used for any kind of military computing. Or maybe the main scientists have some seriously kick-ass home computers.

Re:What happens to 'old' supercomputers? (0)

Anonymous Coward | about 3 years ago | (#37495668)

Nope. I may model nuclear explosive packages with 3D multi-physics codes by day, but each evening I go home to the same crap PCs built with parts from Newegg just like everyone else. It doesn't help that the Republicans, in their infinite wisdom, have frozen our salaries...

Re:What happens to 'old' supercomputers? (1)

GameboyRMH (1153867) | about 3 years ago | (#37494554)

The old computers probably just get sent to a scrap yard in China.

Actually, that makes you wonder what happens when they land there...

Re:What happens to 'old' supercomputers? (1)

S-100 (1295224) | about 3 years ago | (#37495008)

There is a market for used "supercomputers". Yale recently purchased one. It was number 146 in the list of top 500 supercomputers, and they got it for a fraction of the cost when new.

NSF Blue Waters project reboot? (0)

Anonymous Coward | about 3 years ago | (#37494428)

Is this NSF's replacement for their failed attempt to get the Blue Waters computer up and running?
Well... I mean, IBM's failed attempt to predict that manufacturing costs would be lowered enough to make their Blue Waters bid feasible.

Re:NSF Blue Waters project reboot? (1)

Troy Baer (1395) | about 3 years ago | (#37496054)

No, this is an NSF Petascale "Track 2" project like TACC's earlier Ranger system or NICS' Kraken system, whereas Blue Waters was/is the NSF Petascale "Track 1" project. Same basic idea, slightly different pots of money.

(Disclaimer: I work for NICS.)



LOL! (1)

DaMattster (977781) | about 3 years ago | (#37494532)

Will it come with its own nuclear power plant to provide the necessary energy to power it? :)

Re:LOL! (1)

The Immutable (2459842) | about 3 years ago | (#37495110)

8500 computers at, let's high-ball it, 1000 watts each (maybe they're running SLI'd Quadros or something for visualization) comes to 8.5 megawatts. Considering the site it's at will probably be the size of a small neighborhood, that's not a huge amount.
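As arithmetic (both inputs are guesses: the ~8,500 node count can be had by dividing 272 TB of total memory by 32 GB per server, and 1 kW per node is a deliberate high-ball):

```python
nodes = 8500             # rough estimate: 272 TB / 32 GB per server
watts_per_node = 1000    # deliberately high-balled
megawatts = nodes * watts_per_node / 1_000_000
print(megawatts)         # -> 8.5
```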

Sounds like a sweet machine to run boinc apps on (1)

mrflash818 (226638) | about 3 years ago | (#37494626)

Sounds like a sweet machine to run boinc apps on.

I know you specifically looked for this (0)

Anonymous Coward | about 3 years ago | (#37494740)

Obligatory bitcoin comment.

...Fuck bitcoins

Re:I know you specifically looked for this (2)

ae1294 (1547521) | about 3 years ago | (#37495664)

Obligatory bitcoin comment.

...Fuck bitcoins

Yes, a new meme needs to be born....


Possible Application (0)

Anonymous Coward | about 3 years ago | (#37495182)

This will be great for the new Open Hardware nuclear bomb we are building.

Impressive if it were built today. (3, Informative)

flaming-opus (8186) | about 3 years ago | (#37495504)

By 2013, 10 petaflops will be a competent, but not astonishing system. Probably top 10-ish on the top500 list.

The interesting part here will be the MIC parts from Intel, and whether they perform better than the graphics cards everyone is putting into supercomputers in 2011 and 2012. The thought is that the MIC (Many Integrated Cores) design of Knights Corner is easier to program. Part of this is because the cores are x86-based, though you get little performance out of them without using vector extensions. The more likely advantage is that the cores are more similar to CPU cores than what one finds on GPUs. Their ability to deal with branching code and scalar operations is likely to be better than GPUs', though far worse than contemporary CPU cores'. (The MIC cores are derived from the Pentium P54C pipeline.)

In the 2013 generation, I don't think the distinction between MIC and GPU solutions will be very large. The MIC will still be a coprocessor attached to a fairly small pool of GDDR5 memory, and connected to the CPU across a fairly high-latency PCIe bus. Thus, it will face most of the same issues GPGPUs face now; I fear that this will only work on codes with huge regions of branchless parallel data, which is not many of them. I think the subsequent generation of MIC processors may be much more interesting. If they can base the MIC core off of Atom, then you have a core that might be plausible as a self-hosting processor. Even better if they can place a large pool of MIC cores on the same die as a couple of proper Xeon cores. If the CPU cores and coprocessor cores could share the memory controllers, or even the last cache level, one could reasonably work on more complex applications. I've seen some slides floating around the HPC world which hint at Intel heading in this direction, but it's hard to tell what will really happen, and when.

Re:Impressive if it were built today. (1)

Anonymous Coward | about 3 years ago | (#37495696)

This is what AMD is doing, lol. Once again, Intel gets scooped by a few years by a company that knows how to plan ahead.

Re:Impressive if it were built today. (0)

Anonymous Coward | about 3 years ago | (#37498500)

AMD's glory days were in 2002/3. They produce lots of great news releases and whitepapers, but they are getting eaten alive by Intel. AMD has had to retreat to the low end of the mainstream market and try to undercut Intel on price. Bulldozer is an acknowledgement of this truth and will cement this relationship for several years to come.

For the average PC, AMD has to try to sell the message of "we have great upgradability! We can upgrade you from one underperforming CPU to another, somewhat less disappointing part!"

The main thing AMD has going for them is that the mainstream processor market is no longer quite as performance sensitive as it used to be. However this still means that the average AMD machine will either have to be upgraded sooner, or replaced sooner, than the average Intel machine.


Anonymous Coward | about 3 years ago | (#37496028)

You bought a Dell!

56 gigabit InfiniBand (1)

soldack (48581) | about 3 years ago | (#37496376)

They claim they will use 56 gigabit InfiniBand. Has anyone tested Mellanox's FDR adapters and switches? From what I understand, that is 14 gigabit per lane over 4x cabling. I remember all the problems just getting 10 gigabit to work over 4x 2.5 gigabit copper. I imagine this must use fiber to get any distance from the server to the switch.

Their ASIC seems to support only 36 ports. Building a 2000-node network with 36-port switches will take a lot of interconnected switches. I wonder what topology they are going to use. Is anyone building bigger switches based on many interconnected 36-port ASICs?
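One common answer is a folded-Clos (fat-tree) topology; the standard k-ary fat-tree capacity formulas give a feel for why multiple switch tiers are needed with 36-port ASICs. This is a generic sketch, not a claim about the actual Stampede fabric.

```python
def fat_tree_hosts(k, tiers):
    """Hosts supported by a non-blocking fat tree of k-port switches."""
    if tiers == 2:
        # Leaf/spine: at most k leaves, each with k/2 host ports and k/2 uplinks.
        return k * (k // 2)
    if tiers == 3:
        # Classic 3-tier k-ary fat tree supports k^3 / 4 hosts.
        return k ** 3 // 4
    raise ValueError("only 2- and 3-tier trees sketched here")

print(fat_tree_hosts(36, 2))  # -> 648   (too few for ~2,000+ nodes)
print(fat_tree_hosts(36, 3))  # -> 11664 (comfortably enough)
```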

Re:56 gigabit InfiniBand (1)

Bill Barth (49178) | about 3 years ago | (#37496446)

The 36-port part is the ASIC. The switch boxes have a lot more ports.

Re:56 gigabit InfiniBand (1)

multimediavt (965608) | about 3 years ago | (#37499440)

We had no problem getting (at the time) the largest 10 gigabit InfiniBand installation running at VT in 2003 for System X. Fabric optimization was the hardest part, but we worked with a couple of vendors and were able to get an optimized fabric manager in place within a few months. I think the copper limit is still between 15 m and 20 m. Best cables we got were from Gore. We were using 64-port switches throughout to begin with and then moved to smaller leaf switches (24 port) and larger backbone switches (288 port). This allowed us to connect 16 nodes per leaf switch (2 switches per rack) and maintain only 2:1 oversubscription to the backbone. It also allowed for a better fabric overall, and performance was much improved.
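The 2:1 figure follows directly from the port split described (numbers as given in the comment: 24-port leaf switches with 16 node ports each, the remaining ports as uplinks):

```python
leaf_ports = 24
node_ports = 16                          # 16 nodes per leaf switch
uplink_ports = leaf_ports - node_ports   # 8 ports up to the backbone
ratio = node_ports / uplink_ports
print(f"{ratio:.0f}:1 oversubscription")  # -> 2:1 oversubscription
```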

Re:56 gigabit InfiniBand (1)

soldack (48581) | about 3 years ago | (#37506038)

I know about this... I worked on SilverStorm's Fabric Manager while I was there. I remember going into the VT System X room and seeing piles of bad cables from the earlier setup. If I remember correctly, the very first network had more switch ASICs than hosts... both were around 2000 or so. I think the first switches used 8-port ASICs internally. We made massive improvements to our fabric scan time and reaction time to moving cables, nodes going down, etc. This was a good thing, because the non-SilverStorm IB switches that were there at the start were having failures all the time. I believe System X eventually moved to SilverStorm IB switches (those 288s and 24-port switches). That 288 was fun to work on. Moving to 24-port-ASIC-based switches really cuts down on fabric scan and setup time.

But (2)

jirikivaari (2468926) | about 3 years ago | (#37496752)

Can we play NetHack on it?