
IBM To Build 3-Petaflop Supercomputer

Soulskill posted more than 3 years ago | from the onward-and-upward dept.


angry tapir writes "The global race for supercomputing power continues unabated: Germany's Bavarian Academy of Science has announced that it has contracted IBM to build a supercomputer that, when completed in 2012, will be able to execute up to 3 petaflops, potentially making it the world's most powerful supercomputer. To be called SuperMUC, the computer, which will be run by the Academy's Leibniz Supercomputing Centre in Garching, Germany, will be available for European researchers to use to probe the frontiers of medicine, astrophysics and other scientific disciplines."


73 comments


but (-1)

Anonymous Coward | more than 3 years ago | (#34543708)

One only needs a few MHz to do a first post.

Key takeaways from the summary (1)

Anonymous Coward | more than 3 years ago | (#34543712)

Key takeaways from the summary:

1. IBM will be responsible for a large 'SuperMuck'.
2. It will be used for probing by the Europeans.

Re:Key takeaways from the summary (0)

Anonymous Coward | more than 3 years ago | (#34546362)

Key takeaways from the summary

And if you read the complete article, the only other interesting tidbit is that the CPUs are water-cooled. No information on the type of Xeons, the amount of memory, or the type of I/O. Still InfiniBand, or 10Gb Ethernet?

From another article: "SuperMUC will [...] feature 3 PetaFlop/s peak performance, 320 TeraBytes of main memory and 12 PetaBytes of permanent storage. SuperMUC will be comprised of more than 110,000 processor cores, delivered by the aforementioned Intel Xeon processors,"

With the 14,000 processors from the original article, that means they will use 8-core Xeon CPUs; I'd guess Sandy Bridge.
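A quick sanity check of that arithmetic (figures taken from the quoted articles; the exact SKU is, as noted, a guess):

```python
# ~110,000 cores spread over ~14,000 Xeon processors implies roughly 8 cores
# per socket, consistent with the 8-core Sandy Bridge guess above.
cores = 110_000
sockets = 14_000
cores_per_socket = cores / sockets
print(round(cores_per_socket, 2))  # a little under 8
```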

And for some pictures: http://wapedia.mobi/de/LRZ_Garching_bei_M%C3%BCnchen + http://wapedia.mobi/de/SuperMUC

i now play chess vs the world (1)

chronoss2010 (1825454) | more than 3 years ago | (#34543718)

Chess anyone. What is this for ...no really....My question is WHY?

Re:i now play chess vs the world (1)

ravenspear (756059) | more than 3 years ago | (#34543728)

The SkyNet central core.

They realized that computers in office buildings and dorm rooms aren't powerful enough.

Re:i now play chess vs the world (3, Insightful)

Anonymous Coward | more than 3 years ago | (#34543762)

1: it's a dick-wagging contest to have the best supercomputer in the world.

2: huge supercomputers can be leased out to cycle-hungry organizations the same way one would lease office space in a skyscraper.

3: each incremental advancement represents overcoming various hurdles faced by all computing technology; the simple needs of common folk will become that little bit easier as a part of our constant forward march in technological advancement.

Re:i now play chess vs the world (1)

markass530 (870112) | more than 3 years ago | (#34550308)

I refuse to believe the Chinese have a bigger dick than us.

Re:i now play chess vs the world (1)

hardwarefreak (899370) | more than 3 years ago | (#34557168)

1: it's a dick-wagging contest to have the best supercomputer in the world.

Absolutely correct. This was clearly demonstrated by the accelerated funding and freebie process that occurred when NASA was building its first SGI Altix monster, including emergency meetings with the Governor of California's office, the DOE director's office, Intel, and SGI. The build of that machine would normally have taken a year using the usual method of assembly, construction, and testing. They cut it to less than six months with one goal in mind, and it wasn't science. It was to get a sufficient number of Linpack runs in, and tune for a final couple of "peak" runs, simply so they could send results to Dongarra before the deadline for the upcoming TOP500 list.

They thought they were going to take the #1 spot, because they weren't paying attention to their "competition", which IIRC included the first BlueGene machine from IBM (which took the #1 spot on that list). The NASA "Columbia" Altix supercomputer ended up at #3 on that list. Intel and SGI lost a *bunch* of money on that system just to get the #1 spot on the list. And they didn't. Meaning they lost all that money for nothing. For Intel this didn't mean much. For SGI, well, we all know Rackable bought them for only $45 million--less than the price of the Columbia machine. This wasn't the first "deeply discounted" system SGI shipped, and when you add up all these deals, you understand their near bankruptcy situation, and sale for a song to Rackable. That was a very sad deal...

Re:i now play chess vs the world (0)

Anonymous Coward | more than 3 years ago | (#34544152)

To fight against a beowulf cluster of kasparovs!
His clones are in mass production as we speak.

Imagine if they overclocked.. oh wait. (2)

Coldegg (1956060) | more than 3 years ago | (#34543720)

This looks like a pretty awesome setup they have. I'm glad that the US has a few supercomputer projects planned for 2012 that will possibly bring the somewhat elusive #1 title back our way. We'll have to see, the competition as always is pushing the envelope and by that time who knows what else could be in the works from China, etc.

Anyways, pre-grats to the Germans for their new machine. Is anybody familiar with the hot-water cooling tech developed by IBM, as mentioned in the article?

Re:Imagine if they overclocked.. oh wait. (1)

Neil Boekend (1854906) | more than 3 years ago | (#34557840)

They recently published this [discovery.com]. It's about cooling with relatively hot water (60 degrees Centigrade / 140 F). They probably researched this because hotter water is cheaper to cool with air.
Also, in 2008 they published this [ibm.com], a solution for cooling inside stacked dies.

Re:Imagine if they overclocked.. oh wait. (1)

Coldegg (1956060) | more than 3 years ago | (#34563438)

Great info, thanks for your response!

3 Petaflops (0)

Anonymous Coward | more than 3 years ago | (#34543764)

So how long would that take to sort 3 petafiles?

Re:3 Petaflops (1)

ravenspear (756059) | more than 3 years ago | (#34543768)

with bubblesort

You may laugh but the bubble sort may soon be back (0)

Anonymous Coward | more than 3 years ago | (#34543804)

At a high enough petaflop rate, bubble sort becomes viable for all data sets :-)

Re:You may laugh but the bubble sort may soon be b (1)

Anonymous Coward | more than 3 years ago | (#34544168)

Indeed. Since bubble sort only swaps consecutive elements, and every outer loop proceeds over the dataset in linear order, it ensures 1) maximal locality of reference for the best possible use of cache and 2) very predictable memory access, allowing the processor to take advantage of cache read-ahead. No other sorting algorithm gets even close to using the memory hierarchy with such efficiency.

Re:You may laugh but the bubble sort may soon be b (0)

Anonymous Coward | more than 3 years ago | (#34544448)

Gimmicks like locality of reference produce at most a multiplicative speed-up. How is that going to enable an O(n^2) algorithm to beat an O(n log n) algorithm? It's not like it's trillions of times faster to
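A sketch of that point (my own illustration, not code from the thread): even a large constant-factor cache advantage cannot rescue an O(n^2) sort, because bubble sort's comparison count grows quadratically while an O(n log n) sort's budget grows almost linearly.

```python
import math
import random

def bubble_sort_counted(a):
    """Classic bubble sort; returns the sorted list and the comparison count."""
    a = list(a)
    comparisons = 0
    n = len(a)
    for i in range(n):
        for j in range(n - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons

for n in (256, 2048):
    _, comps = bubble_sort_counted([random.random() for _ in range(n)])
    # This bubble sort always does exactly n(n-1)/2 comparisons; n log2(n) is
    # the rough budget of a good comparison sort. An 8x larger input costs
    # ~64x more bubble-sort comparisons but only ~11x more n log n budget.
    print(n, comps, round(n * math.log2(n)))
```

So even if every bubble-sort comparison were, say, 50x cheaper thanks to perfect cache behavior and prefetching, the quadratic term wins once n is large enough.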

Re:3 Petaflops (3, Funny)

gmhowell (26755) | more than 3 years ago | (#34543772)

So how long would that take to sort 3 petafiles?

Hi, I'm Stone Phillips with Dateline NBC...

Re:3 Petaflops (2)

ravenspear (756059) | more than 3 years ago | (#34543936)

My name is Chris Hansen, you insensitive clod!

Re:3 Petaflops (2)

gmhowell (26755) | more than 3 years ago | (#34544208)

I bow to your superior knowledge on the cataloger of petafiles.

Re:3 Petaflops (1)

Aeternitas827 (1256210) | more than 3 years ago | (#34544334)

Epic sequence. And no C-C-C-Combo-Breaker over 2h07m to boot.

what are you doing here? (1)

Joe The Dragon (967727) | more than 3 years ago | (#34545450)

what are you doing here?

Not POWER7, Not BlueGene, (3, Interesting)

pip-PPC (46392) | more than 3 years ago | (#34543776)

From the article:

"The system will use 14,000 Intel Xeon processors running in IBM System x iDataPlex servers."

IBM has two in-house HPC platforms that could both reach 3 PFLOPS (BlueGene/Q and POWER7), but instead they're building a Xeon cluster. I'm surprised that they would want to put a machine near the top of the TOP500 that isn't a full-on IBM effort--maybe IBM Germany is the contractor, and they don't have the R&D expertise? Or maybe the Xeon cluster is cheaper and easier to program and maintain?

Re:Not POWER7, Not BlueGene, (1)

Anonymous Coward | more than 3 years ago | (#34543838)

Didn't IBM help the Germans build such a machine back in the 1930s? As I remember, it was to be used for counting people and general census taking...

Re:Not POWER7, Not BlueGene, (0)

Anonymous Coward | more than 3 years ago | (#34543966)

Didn't IBM help the Germans build such a machine back in the 1930s? As I remember, it was to be used for counting people and general census taking...

God, this comment is so full of win!

Re:Not POWER7, Not BlueGene(BlueGene/Q) (3, Informative)

Required Snark (1702878) | more than 3 years ago | (#34543880)

Here is a look at the guts of the IBM next generation BlueGene/Q. http://www.theregister.co.uk/2010/11/22/ibm_blue_gene_q_super/page2.html [theregister.co.uk]

The Sequoia super that Lawrence Livermore will be getting in 2012 — IBM said it'd be in late 2011 back when the deal was announced in February 2009, so there's been some apparent slippage — will consist of 96 racks and will be rated at 20.13 petaflops. Argonne National Laboratory said back in August that it wanted a BlueGene/Q box, too, and it will have 48 racks of compute drawers for a total of 10 petaflops of floating-point power.

Both the Chinese machine and the German machine are not cutting edge designs. They represent what you can do with near commodity hardware and good but not fully custom packaging. They may look like top end machines today, but by 2012 they will not be in the top ten.

Re:Not POWER7, Not BlueGene(BlueGene/Q) (1)

mwvdlee (775178) | more than 3 years ago | (#34544056)

Hah! 20 petaflops may look like top end machines in 2012, but by 2015 they will not be in the top ten.

Re:Not POWER7, Not BlueGene(BlueGene/Q) (1)

Kilrah_il (1692978) | more than 3 years ago | (#34544430)

Bullshit, 3 petaflops should be enough for anyone.

Re:Not POWER7, Not BlueGene(BlueGene/Q) (1)

timeOday (582209) | more than 3 years ago | (#34547752)

Both the Chinese machine and the German machine are not cutting edge designs. They represent what you can do with near commodity hardware and good but not fully custom packaging.

The cooling system sounds genuinely innovative and beneficial, if successful:

It will also use a new form of cooling that IBM developed, called Aquasar, that uses hot water to cool the processors, a design that should cut cooling electricity usage by 40 percent, the company claims.

"SuperMUC will provide previously unattainable energy efficiency along with peak performance by exploiting the massive parallelism of Intel's multicore processors and leveraging the innovative hot water cooling technology pioneered by IBM. This approach will allow the industry to develop ever more powerful supercomputers while keeping energy use in check," said Arndt Bode, chairman of the Leibniz Supercomputing Centre board of directors, in a statement.

Re:Not POWER7, Not BlueGene, (2)

dimethylxanthine (946092) | more than 3 years ago | (#34543884)

Don't be too proud of this technological monster that's about to be constructed. 3 petaflops is insignificant next to the size of the kickback somebody will receive in the process of purchasing 14,000 CPUs.

Re:Not POWER7, Not BlueGene, (1)

Anonymous Coward | more than 3 years ago | (#34552896)

Don't try to frighten us with your sorcerer's ways, Lord Dimethylxanthine. Your sad devotion to that ancient religion has not helped you conjure up the stolen wikileaks cables, or given you clairvoyance enough to find the Rebels' hidden data cent..gghggghgg

Re:Not POWER7, Not BlueGene, (1)

Anonymous Coward | more than 3 years ago | (#34543912)

A big benefit of Xeons is that you can supplement them with off-the-shelf GPUs. Currently there aren't really a lot of high-performance GPUs available for the POWER platform. It would be possible to port them, but I'm not sure it would be worthwhile for either IBM or NVidia to do so.

Re:Not POWER7, Not BlueGene, (1)

Junta (36770) | more than 3 years ago | (#34545794)

However, this config makes no mention of GPUs, so it's probably moot. If you are saying they may upgrade these later, I would be surprised if they are using systems with enough space to accommodate GPUs if not doing it up front. Most of these configurations, regardless of vendor, go to half the CPU density to make room (space, power, cooling wise) for gpus. When dealing with a scale of 14k cpus, you generally pick the config up front and don't bother going back for piece-wise upgrades.

The only way to tell if a config is thinking about GPUs 'down the line' is if they go ahead and put the parts in. Going x86 by itself isn't enough to give any sort of playground for GPGPU computing, you must have decent GPUs to even begin.

Re:Not POWER7, Not BlueGene, (0)

Anonymous Coward | more than 3 years ago | (#34544788)

x86 architecture was a specific requirement for the system. BlueGene and Power 7 are both efficient designs for this scale, but the software ecosystem is much smaller. Most applications need to be ported first, which can sometimes be challenging (or impossible if you don't have the source code).

The unique thing about this system (and the reason IBM was probably selected) is the warm water cooling system it will employ.

This system will be one of several affiliated with PRACE. I would expect other systems will offer an alternative CPU architecture.

Re:Not POWER7, Not BlueGene, (1)

Junta (36770) | more than 3 years ago | (#34545728)

Many don't care about the architecture because all their work is not hard to redo per-architecture. Those will jump on Itanium, POWER, Sparc, or whatever architecture the vendor has that hits the sweet spot. You can tell those as their top500 entries from year to year frequently jump architectures. These are also customers most amenable to jumping on the GPU bandwagon, despite the fact they are more painful to program for and require particular care and feeding to avoid becoming memory bottlenecked.

Many others don't have the luxury. In this case I suspect x86 was simply a hard requirement. Also note the lack of mention of GPUs. If going for the cheapest way to get to some petaflop number, this isn't going to be it. This is clearly based on requirements involving a consistent development environment (it also happens to be the most fungible set of vendors, since swapping between AMD and Intel CPUs is trivial, whereas swapping between AMD and nVidia GPUs is currently not, in practice, for GPGPU).

Number 9 on the current list is a Blue Gene in Germany, so it seems IBM has plenty of BlueGene type talent in Germany, but they'll take money from customers on the terms the customer dictates even if they can't sell BlueGene or POWER7.

I hope they rename it. (2)

Ismellpoop (1949100) | more than 3 years ago | (#34543782)

SuperMUC is not cool on any level. Kind of makes my spine tingle with grossness actually.

Why I love Moore's law (2)

alvinrod (889928) | more than 3 years ago | (#34543796)

I absolutely love Moore's law. Think that this is an insanely awesome amount of computational power? Just wait around for 10-15 years and we'll likely have that same order of magnitude in our personal computers. Just look back at the supercomputer list from a decade ago [top500.org] and notice that right now we have hardware capable of getting similar performance. The best Intel processors can put out over 100 GFLOPS. Graphics cards are closer to 1TFLOPS.

Another way of looking at it is that we'll have a similar amount of power in our phones, tablets, etc. that we have in our desktops right now. Super computers are going to get even more super and the types of problems that are expensive to solve today continue to get cheaper. I'm still a young man, but given how far things have come since I was born, I can't help but wonder what the world will be like when I'm many years further along the road. If for no other reason than the vast amount of computational power that's available to us.
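That projection can be sanity-checked with a back-of-the-envelope doubling calculation (a sketch; the 2-year doubling period and the ~2010 baseline figure are my assumptions, not from the comment):

```python
import math

start_tflops = 1.0            # assumed: a high-end desktop GPU circa 2010
target_tflops = 3000.0        # SuperMUC's 3 PFLOPS
doubling_period_years = 2.0   # assumed Moore's-law cadence
doublings = math.log2(target_tflops / start_tflops)
years = doublings * doubling_period_years
print(round(years, 1))  # on the order of two decades for a desktop to catch today's top machine
```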

Re:Why I love Moore's law (3, Insightful)

NewtonsLaw (409638) | more than 3 years ago | (#34543824)

So our desktop computers will wait thousands of times faster than they do today... for the next keystroke or mouse button-press. :-)

Re:Why I love Moore's law (2)

sakdoctor (1087155) | more than 3 years ago | (#34543830)

Moore's law is ok.

I prefer that law, forget the name right now, that says that as computational power increases, windows will require ALL of it to run, greatly increasing demand for CPU and RAM, and lowering the cost of hardware just behind the curve for the rest of us.

Re:Why I love Moore's law (4, Funny)

FeepingCreature (1132265) | more than 3 years ago | (#34543862)

Moore's law is ok. I prefer that law, forget the name right now, that says that as computational power increases, windows will require ALL of it to run, greatly increasing demand for CPU and RAM, and lowering the cost of hardware just behind the curve for the rest of us.

As Intel giveth, Microsoft taketh away.

Re:Why I love Moore's law (1)

Memnos (937795) | more than 3 years ago | (#34546060)

I have a cat. I routinely exceed ten petaflops daily, but it screws up her neural net processing big time.

Re:Why I love Moore's law (4, Informative)

Esvandiary (1302095) | more than 3 years ago | (#34543996)

I believe May's Law is the one you're referring to; a corollary to Moore's Law, stating that software efficiency halves every 18 months (or two years).

Re:Why I love Moore's law (1)

Hognoxious (631665) | more than 3 years ago | (#34545042)

I don't know if it's the efficiency that falls, or just that all the extra power gets used on 3D-shaded semi-transparent smooth-scrolling menus.

Why yes Vista, I did glance at you!

Re:Why I love Moore's law (2)

alienzed (732782) | more than 3 years ago | (#34543896)

Computational power is definitely growing, but so are the requirements of the software that runs on them. Microsoft Word isn't any faster today than it was ten years ago, yet our computers are many times more powerful and more capable. Isn't it amazing to think that ten years ago, it was the year 2000. Time really does fly.

Re:Why I love Moore's law (1)

Aeternitas827 (1256210) | more than 3 years ago | (#34544438)

Overall, it's not necessarily the amount of computational power available that makes things happen; the extra power simply helps things happen faster. Having Petaflop systems available certainly helps out with the insanely complex tasks (like those in the medical and physics fields); the trickle-down of those developments also make for the nice shiny toys that we call phones. End of the day, though, to the casual user it means nearly nothing, except for when it affects the price tag and/or the performance of their Flash-based games.

In essence, the things we've found in the last, say, 18 years (base assumption, being that you call yourself a 'young man'--you could be a tad on either side; it's not a term I've used to describe myself (at 24 currently) in 4-5 years) would have been found with the systems of the day, given sufficient time and ingenuity, without significant improvements in hardware capabilities. The fact that the hardware is being pushed to the absolute limits is a byproduct of what I would consider to be human nature--to be the biggest, the best, the fastest, regardless of the goal; it's a natural progression of things, and where it has industrial implications, why not map those to the consumer sector? It certainly doesn't hurt (much), once it's proven to work well.

This then causes the software end of things to be developed in a way that maximizes its effectiveness with the hardware given--push the envelope to the furthest acceptable extent, even where it's not strictly necessary for the user's goal (though that's not what the hardware was really developed for). This is where the consumer comes into play in the whole scheme. They see numbers and what their system is still able to do (even though it's not much more than their previous system, in most cases); the fancy numbers are bigger, and because the software keeps things where they are, the hardware must accelerate more to keep the consumers in awe.

In closing, this progression is meaningless, given indefinite time; it's necessary because of the need for consumer awe, and we only ever got to that point because the consumers want that awe. Nothing that has happened here is really that awesome; the capability was always there, the speed was not, and the speed really only occurred because consumers were told that they needed more power to do everything that they always did before, further driving the bleeding edge a little bit higher than it was.

Re:Why I love Moore's law (0)

Anonymous Coward | more than 3 years ago | (#34545952)

Ah but it's not all the same. There are some applications that require speed that we don't quite have yet. For all the things we have learned so far, we will see later that Adam has not yet truly bitten the apple.

Re:Why I love Moore's law (1)

inode_buddha (576844) | more than 3 years ago | (#34546124)

Back in 1983, I had an Atari 800 and a Kaypro. They prolly had more power than the computers used to land on the moon (and I remember that too). In 1969, my Dad was doing his PhD in fluid dynamics on an IBM with 64k of core memory. My calculator blows the old mainframe away (though the mainframe did useful work for 25 yrs).

Re:Why I love Moore's law (1)

Memnos (937795) | more than 3 years ago | (#34546138)

True, but right now there are many orders of magnitude more processing of information occurring in just a few hundred of the cells in most parts of your own body.

Re:Why I love Moore's law (1)

ziggyzaggy (552814) | more than 3 years ago | (#34550726)

What's sad is the ability of bloat in the OS and applications to reduce our ever-more-powerful machines to the same sluggishness.

Re:Why I love Moore's law (1)

Fulcrum of Evil (560260) | more than 3 years ago | (#34550936)

Moore's law is over with: transistor density isn't jumping like it used to, and we're running into thermal limits with current designs. That's why you see four cores and more on the desktop.

germany (-1)

Anonymous Coward | more than 3 years ago | (#34543854)

fuck yeah.

Re:germany (3, Funny)

Anonymous Coward | more than 3 years ago | (#34544364)

Germany, FUCK YEAH!
Coming again, to save the mother fucking day yeah,
Germany, FUCK YEAH!
Federal parliamentary republic is the only way yeah,
Computations your game is through cause now you have to answer too,
Germany, FUCK YEAH!
Das Land der Dichter und Denker,
Germany, FUCK YEAH!
What you going to do when we come for you now,
it's the dream that we all share; it's the hope for tomorrow

FUCK YEAH!

BMW, FUCK YEAH!
Mercedes, FUCK YEAH!
Porsche, FUCK YEAH!
Engineering, FUCK YEAH!
Efficiency, FUCK YEAH!
Claudia Schiffer, FUCK YEAH!
Mozart, FUCK YEAH!
Bach, FUCK YEAH!
Einstein, FUCK YEAH!
Heisenberg, FUCK YEAH!
Max Planck, FUCK YEAH!
Von Braun, FUCK YEAH!
Berlin, FUCK YEAH!
Bismarck, FUCK YEAH!
German food ... (Fuck yeah, Fuck yeah)

Re:germany (1)

mcneely.mike (927221) | more than 3 years ago | (#34544774)

Only need to say one word...
German beer...Fuck yeah!

And Claudia Schiffer.
(Sorry, my word count program isn't working right now.)

How should we measure supercomputers now? (3, Informative)

Entropius (188861) | more than 3 years ago | (#34543888)

Once upon a time, supercomputers were bunches of general-purpose CPUs, and you made them faster by connecting up more of them.

Now people have realized that massively parallel special-purpose chips (like Cell and, even more so, GPUs) can be used to do general-purpose computing, and have started to add those to clusters. But those chips have a lower bandwidth:flops ratio than the x86 etc. CPUs that have been historically used; the gap between a computer's "peak" FLOPS (on an ideal job with no communication requirements to either other nodes or to memory) and the performance it actually achieves is wider using something like CUDA than on a standard supercomputer. CUDA machines are so bandwidth-limited that people use rather harebrained data compression schemes to move data from place to place, just because all the nodes have extra compute power lying around anyway, and the bottleneck is in communication. (The example that comes to mind is sending the coefficients of the eight generators of an SU(3) matrix rather than just sending the eighteen floats that make up the damn matrix. It's a lot of work to reassemble, relatively speaking, but it's worth it to avoid sending a few bits down the wire.)

CUDA is wonderful, and my field at least (lattice QCD) is falling over itself trying to port stuff to it. Even though it falls far short of its theoretical FLOPS, it's still a hell of a lot faster than a supercomputer made of Opterons. But we shouldn't fool ourselves into thinking that you can accurately measure computer speed now by looking at peak FLOPS. It makes the CUDA/Cell machines look better than they really are.
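The SU(3) trick mentioned above can be made concrete (a sketch with NumPy/SciPy, my own illustration rather than lattice-QCD production code): a 3x3 complex matrix is 18 reals, but any SU(3) element can be regenerated on the receiving node from just 8 real coefficients of the Gell-Mann generators.

```python
import numpy as np
from scipy.linalg import expm

s3 = 1 / np.sqrt(3)
# The eight Gell-Mann matrices: Hermitian, traceless generators of SU(3).
GELL_MANN = np.array([
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
    [[s3, 0, 0], [0, s3, 0], [0, 0, -2 * s3]],
], dtype=complex)

def su3_from_coeffs(c):
    """Rebuild a 3x3 SU(3) matrix (18 reals) from 8 real coefficients."""
    H = np.tensordot(c, GELL_MANN, axes=1)  # Hermitian, traceless combination
    return expm(1j * H)                     # hence unitary with determinant 1

c = np.random.default_rng(0).normal(size=8)
U = su3_from_coeffs(c)
print(np.allclose(U @ U.conj().T, np.eye(3)))  # unitarity check
print(np.isclose(np.linalg.det(U), 1.0))       # determinant-1 check
```

Sending the 8 coefficients instead of the 18 floats trades bandwidth for a matrix exponential on the receiving end, which is exactly the bandwidth-for-compute trade described above.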

Re:How should we measure supercomputers now? (4, Informative)

afidel (530433) | more than 3 years ago | (#34544022)

No, the computers that the term supercomputer was coined for were all special-purpose vector machines that couldn't even run an OS; they had to be fronted by a management processor. Only much later were clusters of commodity machines (often with specialized interconnects for high bandwidth and low latency) accepted as contenders for the name. Now with Cell and GPUs we are getting back to fast vector machines with a management computer in front, but now the front-end computer is capable of computations (at least in the case of the GPGPU machines) and each machine is a few rack units instead of a couple of racks.

Oh, and the measure you are looking for is the Rmax to Rpeak ratio, which will tell you how efficient the machine is (at least for LINPACK, which may or may not track with your own code, depending on how chatty it is in comparison to the benchmark).
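For what it's worth, that metric is just a ratio (the numbers below are illustrative placeholders, not a real TOP500 entry):

```python
rpeak_pflops = 3.0  # theoretical peak, e.g. SuperMUC's advertised figure
rmax_pflops = 2.4   # hypothetical sustained LINPACK result
efficiency = rmax_pflops / rpeak_pflops
print(f"{efficiency:.0%}")  # sustained fraction of peak
```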

Re:How should we measure supercomputers now? (0)

Anonymous Coward | more than 3 years ago | (#34545766)

Er, not quite. Cell and GPUs have much higher memory bandwidth than same-generation CPUs. The PCI-e bus connecting accelerator to host is a bottleneck, but a worse bottleneck is the network connecting nodes. And that affects all clusters, homogeneous or heterogeneous. But the general idea is correct: FLOPS is a poor measure when most codes are limited by network or memory latency or bandwidth.

Re:How should we measure supercomputers now? (1)

Memnos (937795) | more than 3 years ago | (#34546296)

Perhaps, just maybe, we could adopt a very fuzzy metric instead of precision without accuracy. Say, for example: what important questions can they help us answer? As amorphous as that criterion will be, it will likely stimulate smart engineers to do whatever is necessary to get tangible, valuable benefits.

Re:How should we measure supercomputers now? (0)

Anonymous Coward | more than 3 years ago | (#34549946)

> the term supercomputer was coined for were all special purpose vector machines that couldn't even run an OS

That's quite an exaggeration. All but the first one or two Cray machines were delivered with the Cray OS, and Livermore lab ported their own OS from the CDC Cyber machines to the Crays without too much trouble (the assembly language of the Cray was close to the CDC's, since Seymour designed both). There was an attached front-end machine, but it was for monitoring and maintenance (maybe interfacing to terminals, I don't recall).

They were batch operating systems, but so were most mainframe OSes at that time. You could type JCL commands interactively if you were sitting at an operator console.

There were other machines that required a front-end which ran most or all of the OS and handled program loading and maybe I/O, but these tended to be considered an integral part of the system. (The Connection Machine is a good example.)

Re:How should we measure supercomputers now? (1)

Anubis350 (772791) | more than 3 years ago | (#34544042)

I was at the TOP500 talk at SC10 a few weeks ago, and that's their biggest issue right now. Hell, even core counts are contentious (how many cores does a Tesla C2050 have? Nvidia would say 448, the Top500 guys would say 56). The HPL benchmark does work, with some porting, on GPUs (I know firsthand; I used Nvidia's CUDA build of it for benchmarking for the SC10 cluster challenge), but there's a lot of division I've seen among both vendors and researchers as to whether it's the best benchmark going forward.

Re:How should we measure supercomputers now? (3, Interesting)

chichilalescu (1647065) | more than 3 years ago | (#34544050)

I had to really think about measuring the efficiency of a simulation, and I came up with a single answer: money. I was at a lecture about gyrokinetic simulations, and when I heard about the amount of resources being used for some simulations, I asked "how much does one of these simulations cost, in euros?". Luckily for me, the guy knew (large simulations cost on the order of thousands), and he also knew how much an experiment on ITER will cost (on the order of a million); his argument was "it's obviously efficient to run a thousand simulations and pick the most relevant set of parameters for an experiment afterwards".

I think this is the way to go when comparing supercomputers: "In order to simulate experiment X, we needed N1 euros for the developers, and N2 euros of electricity to run the code on a machine that cost N3 euros to build". If you want to be thorough, add some maintenance costs. It's a bit complicated, because developers might actually be researchers, and it's not very clear how much of their time goes into writing code... but we don't really have a better way of measuring efficiency.
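That comparison can be written down directly (all figures below are hypothetical placeholders, not the lecture's actual numbers):

```python
def campaign_cost_eur(dev_eur, electricity_eur, machine_eur, machine_share):
    """Total cost to answer a question by simulation: developer time plus
    electricity plus the amortized share of the machine's build cost."""
    return dev_eur + electricity_eur + machine_eur * machine_share

simulations = campaign_cost_eur(dev_eur=50_000, electricity_eur=5_000,
                                machine_eur=80_000_000, machine_share=0.001)
experiment_eur = 1_000_000  # order of magnitude quoted above for one ITER experiment
print(simulations, simulations < experiment_eur)
```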

Re:How should we measure supercomputers now? (0)

Anonymous Coward | more than 3 years ago | (#34550812)

The validity and usefulness of LINPACK has been argued for years (decades). Dense linear algebra is a particularly good match to
cache-based microprocessors, since there are algorithms that have a relatively high compute:bandwidth ratio. But many other
important scientific computing problems (the target market for supercomputers) aren't so lucky (differential equations, for example).

Years ago, when the Japanese Earth Simulator computer took the #1 place on the Top500 and held it for an unprecedented time,
it was mostly because of the memory speed that the machine had. The high bandwdith internal switches helped a lot too.

20 and 10 Pflops by then (1)

Henriok (6762) | more than 3 years ago | (#34544146)

By 2012 IBM will have built at least two Blue Gene/Q systems capable of 20 and 10 Pflops each. The "Sequoia" at Lawrence Livermore National Laboratory and "Mira" at Argonne National Laboratory. There should be plenty of petascale supercomputers in a variety of configurations and architectures by 2012.

Welcome CookieMonsterComputingOverlords! (1)

martin-boundary (547041) | more than 3 years ago | (#34544154)

To be called SuperMUC, the computer, which will be run by the Academy's Leibniz Supercomputing Centre in Garching, Germany,

No doubt named after the delicious Leibniz [wikipedia.org] cookies, mmmm, mmmm.

Re:Welcome CookieMonsterComputingOverlords! (1)

nutshell42 (557890) | more than 3 years ago | (#34551666)

No, it's after the guy who got into an epic row with Newton over the invention of calculus, which led to English mathematicians rallying around Newton's method in an upswell of patriotic fervor.

It took English mathematics a century to recover. So the next time you hear someone criticize something because "that's the way they do it in France" remember: Good artists copy, great artists steal (a bon mot I just came up with, pretty brilliant if I may say so)

the answer to life the universe and everything (2)

DanielGr (1958008) | more than 3 years ago | (#34544286)

Hopefully we can get the Germans to get the answer to this question.

Re:the answer to life the universe and everything (1)

mcneely.mike (927221) | more than 3 years ago | (#34544784)

And the winner is... 42!
Go Canada!

Re:the answer to life the universe and everything (0)

Anonymous Coward | more than 3 years ago | (#34544874)

The answer has been given, the germans will be looking for zee kwestion now.

Most powerful supercomputer? (1)

SirThe (1927532) | more than 3 years ago | (#34545082)

This wouldn't even come close to being the most powerful supercomputer, what with Blue Waters coming out in 2011.

May have already been done (1)

HeckRuler (1369601) | more than 3 years ago | (#34546036)

It's gonna run super-a-muc

Not in the top 10, certainly not number 1 (2)

Meeni (1815694) | more than 3 years ago | (#34550252)

This November's list already reached 2.5 Pflops. A machine delivering 3 Pflops two years from now will not even be in the top 10. The long-term trend is to reach 5 to 10 Pflops by mid-2012. http://www.top500.org/list/2010/11/100 [top500.org]

Beowulf (0)

Anonymous Coward | more than 3 years ago | (#34550592)

Man, imagine a beowulf cluster of these.

FLOPS ? Why not go with .... (0)

Anonymous Coward | more than 3 years ago | (#34556114)

FLOPS ? FLOPPY ? Why do these ppl label HIGH PERFORMANCE with the direct opposite ?
Why not FLIPS ??? Floating Point Instructions Per Second ? Instead of Operations ... :-(

If you want PERFORMANCE ... Make a MANLY statement. Like MINE is FLIPS bigger / faster / longer than yours ...

C'mon Abrev. Daemon ... give us the good stuff ...

Germany? (0)

kmoser (1469707) | more than 3 years ago | (#34556746)

Hmmm, last time the Germans had a device capable of world-class encryption they almost won a world war. Almost.