
Windows Cluster Hits a Petaflop, But Linux Retains Top-5 Spot

timothy posted more than 3 years ago | from the heating-up-the-whole-outdoors dept.

Supercomputing

Twice a year, Top500.org publishes a list of supercomputing benchmarks from sites around the world; the new results are in. Reader jbrodkin writes "Microsoft says a Windows-based supercomputer has broken the petaflop speed barrier, but the achievement is not being recognized by the group that tracks the world's fastest supercomputers, because the same machine was able to achieve higher speeds using Linux. The Tokyo-based Tsubame 2.0 computer, which uses both Windows and Linux, was ranked fourth in the world in the latest Top 500 supercomputers list. While the computer broke a petaflop with both operating systems, it achieved a faster score with Linux, denying Microsoft its first official petaflop ranking." Also in Top-500 news, reader symbolset writes with word that "the Chinese Tianhe-1A system at the National Supercomputer Center in Tianjin takes the top spot with 2.57 petaflops. Although the US has long held a dominant position in the list, things now seem to be shifting, with two of the top spots held by China, one by Japan, and one by the US. In the Operating System Family category, Linux continues to consolidate its supercomputing near-monopoly with 91.8% of the systems — up from 91%. High Performance Computing has come a long way quickly. When the list started as a top-10 list in June of 1993, the least powerful system on the list was a Cray Y-MP C916/16526 with 16 cores driving 13.7 Rmax GFLOP/s. This is roughly the performance of a single midrange laptop today."
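To put that closing comparison in numbers, here is a back-of-the-envelope calculation (a sketch in C, using only the Rmax figures quoted above; the "one doubling per year" framing is this editor's arithmetic, not a claim from the article):

<ecode>
#include <math.h>   /* log2; link with -lm */
#include <stdio.h>

int main(void) {
    double cray_1993   = 13.7e9;   /* Cray Y-MP C916: 13.7 GFLOP/s, June 1993 list */
    double tianhe_2010 = 2.57e15;  /* Tianhe-1A: 2.57 PFLOP/s, November 2010 list  */
    double years       = 17.5;     /* June 1993 to November 2010                   */

    double speedup = tianhe_2010 / cray_1993;
    printf("speedup:   %.0fx\n", speedup);                       /* ~187,591x      */
    printf("doublings: %.1f per year\n", log2(speedup) / years); /* ~1.0 per year  */
    return 0;
}
</ecode>

In other words, the gap between the 1993 list's weakest entry and the 2010 list's strongest works out to roughly one performance doubling per year.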


229 comments

Ha! (-1, Troll)

Anonymous Coward | more than 3 years ago | (#34224364)

Zing-a-dong-dillo!

Petaflops per second? (5, Interesting)

mattventura (1408229) | more than 3 years ago | (#34224372)

2.57 petaflops per second

floating point operations per second per second?

Re:Petaflops per second? (4, Funny)

Shikaku (1129753) | more than 3 years ago | (#34224390)

I'd say Google datacenters accelerate at about that rate.

Re:Petaflops per second? (1)

Flytrap (939609) | more than 3 years ago | (#34225184)

I'm not sure that Google's data centres could qualify as a single super computer with each node solving a different part of the same problem...

Re:Petaflops per second? (3, Funny)

shogun (657) | more than 3 years ago | (#34225486)

I'm not sure that Google's data centres could qualify as a single super computer with each node solving a different part of the same problem...

World domination isn't a single problem?

Re:Petaflops per second? (1)

Mitchell314 (1576581) | more than 3 years ago | (#34225574)

No, there are many, many, many aspects to trying to take over the world. Trust me, 80% of it is middle management work (henchman pay-scale management alone is 20%. They kept on bugging me about wanting a better retirement plan).

Re:Petaflops per second? (0)

Anonymous Coward | more than 3 years ago | (#34225894)

I wouldn't consider my world domination as a problem, no.

Re:Petaflops per second? (1)

cheater512 (783349) | more than 3 years ago | (#34225524)

Erm... yes, that's how clusters work. It's not possible to build a single machine fast enough to solve many problems.

Re:Petaflops per second? (4, Funny)

K. S. Kyosuke (729550) | more than 3 years ago | (#34224414)

One petaflop, two petaflops... (Anyway, I didn't know that MS has already shipped so many flops...)

Re:Petaflops per second? (-1, Flamebait)

Anonymous Coward | more than 3 years ago | (#34224554)

I don't think they've ever shipped a flop based on their sales. Check the flop in your pants. It's never shipped anything.

Re:Petaflops per second? (1)

K. S. Kyosuke (729550) | more than 3 years ago | (#34224592)

Check the flop in your pants. It's never shipped anything.

I know for sure that it hasn't, I don't have to check. Given my seasickness, I would immediately notice any attempt at getting into maritime cargo transport business on my flop's part.

Re:Petaflops per second? (3, Interesting)

History's Coming To (1059484) | more than 3 years ago | (#34224426)

I can only presume this is a manifestation of Moore's Law: the curve is now so steep that the computers are accelerating as they're running. Or maybe it's a typo ;)

I'm willing to bet that the top end is going to become less and less relevant and we're going to be judging processors more and more by their "flops-per-watt" and "flops-per-dollar" ratings. We're already in a position where clusters of commercial games machines make more sense than a traditional supercomputer for many applications, and I dread to think how much energy could be harvested from these using some efficient heat exchangers.

Re:Petaflops per second? (5, Funny)

Barefoot Monkey (1657313) | more than 3 years ago | (#34224448)

2.57 petaflops per second

floating point operations per second per second?

Well-spotted. It appears that this particular supercomputer gets faster the longer it is left running. Clearly the reason that it ran faster with Linux than with Windows was because in the latter case it needed to be restarted after every Patch Tuesday, thus limiting the potential speed increase to 6.88 zettaflops.

Re:Petaflops per second? (1)

Kjella (173770) | more than 3 years ago | (#34225904)

thus limiting the potential speed increase to 6.88 zettaflops.

If it could become a million times faster by installing Windows, there's something very very wrong with the world.

Trying too hard to be pedantic (0, Troll)

symbolset (646467) | more than 3 years ago | (#34224476)

FLoating point OPerationS per second. Now if you'll excuse me I need to go to the ATM machine if I can remember my PIN number.

Re:Trying too hard to be pedantic (0)

X0563511 (793323) | more than 3 years ago | (#34224610)

The unit is "flop" - the s indicates plurality - you don't say 1.2 petaflops, you say 1.2 petaflop (vs 2.2 petaflops).

It works the same way as a hertz.

Re:Trying too hard to be pedantic (1, Funny)

Anonymous Coward | more than 3 years ago | (#34224642)

Ooooh, that hertz!

Re:Trying too hard to be pedantic (0)

Anonymous Coward | more than 3 years ago | (#34225134)

No, the s indicates seconds...

Re:Trying too hard to be pedantic (1)

FrootLoops (1817694) | more than 3 years ago | (#34225396)

Both flop and flops [wikipedia.org] are acronyms, meaning either "FLoating-point OPeration" or "FLoating point OPerations per Second". The summary uses the former.

Since ignoring the words "per second" is unambiguous, I'd say it's mildly incorrect. Since this whole discussion takes the focus off the content (...a near record-breaking Windows cluster...), I'd say it's quite incorrect.
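For what it's worth, the two expansions differ only by a time unit: one is a count of work, the other a rate. A toy example (hypothetical numbers) of keeping them separate:

<ecode>
#include <stdio.h>

int main(void) {
    /* "flop"  = one FLoating-point OPeration: a count of work done. */
    /* "flops" = FLoating-point OPerations per Second: a rate.       */
    double operations = 2.57e15;  /* hypothetical work done in a run */
    double seconds    = 1.0;      /* hypothetical wall-clock time    */

    printf("%.2f petaflops\n", operations / seconds / 1e15);
    return 0;
}
</ecode>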

Re:Trying too hard to be pedantic (2, Informative)

Kilrah_il (1692978) | more than 3 years ago | (#34225028)

Actually, flops [wikipedia.org] means exactly what the OP said: Floating point operations per second.

So some kid on wiki got it wrong. (1)

symbolset (646467) | more than 3 years ago | (#34225372)

Probably got confused with MIPS. Hopefully somebody here will wander over and fix it. I don't have time to get involved in a wikipedia edit war today.

BTW: if you were looking for funny mods you could have gone down the SI Pebiflops fork, or the onion-belt milliflops fork. This line isn't going to get you there.

Re:So some kid on wiki got it wrong. (1)

TheRaven64 (641858) | more than 3 years ago | (#34225440)

MIPS also means Millions of Instructions Per Second, so MIPS per second would be similar nonsense. Unless MIPS means Microprocessor without Interlocked Pipeline Stages, in which case MIPS per second would... also be nonsense, but an entirely more confusing kind of nonsense.

Re:Petaflops per second? (0)

Anonymous Coward | more than 3 years ago | (#34224976)

I like this one too:

Windows Cluster Hits a Petaflop

rofl

Won't somebody please think of the licensing cost? (-1, Troll)

zill (1690130) | more than 3 years ago | (#34224388)

because the same machine was able to achieve higher speeds using Linux

Well, duh.

Frankly I don't see how you can "supercompute" on Windows at all, with UAC and Clippy popping up every other hour and whatnot.

Re:Won't somebody please think of the licensing co (1)

Peach Rings (1782482) | more than 3 years ago | (#34224530)

Linux isn't exactly an ultrascalable high-performance OS either. If you were building a supercomputer operating system from the start you'd make very different design decisions than Linus did.

Re:Won't somebody please think of the licensing co (5, Insightful)

bmo (77928) | more than 3 years ago | (#34224714)

Wait, what?

Have you ever paid attention to the OS trends in the Top500? All the proprietary OSes are disappearing. It used to be nearly all proprietary Unix and BSD. Now it's 91 percent Linux.

Here's a graph showing the demise of Unix in the Top500
http://www.top500.org/overtime/list/36/osfam [top500.org]

Linux doesn't scale? It fits in toasters and supercomputers. I think that's pretty good scaling if you ask me.

You could probably make the argument in 1991 when Linus smote the ground and came up with the kernel, but not anymore. You could probably even make that argument before kernel 2.0. But since then? Claiming that Linux doesn't scale well just makes you look like a Microsoft fanboy whistling while walking past the graveyard at best.

--
BMO

Re:Won't somebody please think of the licensing co (-1, Troll)

ToasterMonkey (467067) | more than 3 years ago | (#34225478)

Linux isn't exactly an ultrascalable high-performance OS either. If you were building a supercomputer operating system from the start you'd make very different design decisions than Linus did.

Have you ever paid attention to the OS trends in the Top500? All the proprietary OSes are disappearing. It used to be nearly all proprietary Unix and BSD. Now it's 91 percent Linux.

Here's a graph showing the demise of Unix in the Top500
http://www.top500.org/overtime/list/36/osfam [top500.org]

Linux doesn't scale? It fits in toasters and supercomputers. I think that's pretty good scaling if you ask me.

So do other proprietary and open OSes, as you yourself mentioned. Oh, but I forgot this is a popularity contest, and the immutable, doubleplusgood Linux is and always has been designed with extreme outward AND upward scaling in mind and excels above all else at it.

Silly me.

Re:Won't somebody please think of the licensing co (1)

bmo (77928) | more than 3 years ago | (#34225934)

Linux is and always has been designed with extreme outward AND upward scaling in mind

>always

You conveniently snipped "you could have made that argument..."

Your argument is not only existentially fallacious, but you put words in my mouth.

Cunt.

--
BMO

Re:Won't somebody please think of the licensing co (2, Funny)

Kell Bengal (711123) | more than 3 years ago | (#34225506)

What the hell kind of toaster runs Linux? There's hardly any justification for a mass-produced toaster to have any logic more complex than a relay. If there's an actual consumer toaster out there on the market that has Linux controlling it, I'd like to see it (and buy it)!

Re:Won't somebody please think of the licensing co (1)

Guy Harris (3803) | more than 3 years ago | (#34225696)

What the hell kind of toaster runs Linux? There's hardly any justification for a mass-produced toaster to have any logic more complex than a relay. If there's an actual consumer toaster out there on the market that has linux controlling it, I'd like to see it (and buy it)!

No, you need NetBSD for that [embeddedarm.com].

Re:Won't somebody please think of the licensing co (3, Insightful)

zach_the_lizard (1317619) | more than 3 years ago | (#34224734)

Yet it powers most of the top 500 supercomputers and can run on embedded platforms. If that's poor scalability, I want to know what's good scalability.

Re:Won't somebody please think of the licensing co (0)

Anonymous Coward | more than 3 years ago | (#34225132)

Give me the money to design an OS from the start.

I suspect a few hundred billion will do it.

Re:Won't somebody please think of the licensing co (1, Informative)

Anonymous Coward | more than 3 years ago | (#34224822)

> Linux isn't exactly an ultrascalable high-performance OS either.

Actually it is: it has a whole mess of features that don't become relevant until high-end stuff that the typical "haha lunix sux0rs" sad windoze weenie couldn't comprehend. It's retrogressed slightly lately out of the box under never-ending low-end "schedule better for the desktop" pressure (hint: if you have 32768 cores, O(1) scheduling is *really handy*. Fortunately you can switch schedulers).

Coupled with vendor-specific add-ons like SGI ProPack, it really is the least-worst OS out there for HPC today.

Re:Won't somebody please think of the licensing co (1)

eugene2k (1213062) | more than 3 years ago | (#34225262)

You can only switch I/O schedulers, and O(1) is a task scheduler.
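For readers wondering what switching an I/O scheduler actually looks like: on Linux kernels of this era, the block-layer elevator is selectable per device at runtime through sysfs. A minimal sketch (assumes a device named sda, root privileges, and a kernel built with the deadline elevator; none of that is guaranteed on any given box):

<ecode>
#include <stdio.h>

int main(void) {
    const char *path = "/sys/block/sda/queue/scheduler";
    char line[256];

    /* Read the available elevators; the active one is bracketed,
     * e.g. "noop anticipatory deadline [cfq]".                    */
    FILE *f = fopen(path, "r");
    if (f) {
        if (fgets(line, sizeof line, f))
            printf("schedulers: %s", line);
        fclose(f);
    }

    /* Writing a name switches the elevator for this device only. */
    f = fopen(path, "w");
    if (f) {
        fputs("deadline\n", f);
        fclose(f);
    }
    return 0;
}
</ecode>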

Re:Won't somebody please think of the licensing co (1)

Sarten-X (1102295) | more than 3 years ago | (#34225538)

You can get the full source easily. You can switch everything, if you want.

Re:Won't somebody please think of the licensing co (1)

Kjella (173770) | more than 3 years ago | (#34225984)

If you were building a supercomputer operating system from the start you'd make very different design decisions than Linus did.

Heh, one of those "start clean and it'll be so much better" arguments. There have been a kazillion patches to Linux to make it scale better; if there were anything essential holding it back, they'd fork and run their own supercomputer Linux. Yes, you would make very different design decisions today, but the decisions Linus made in the early '90s are no longer in effect either.

Re:Won't somebody please think of the licensing co (0)

Anonymous Coward | more than 3 years ago | (#34224704)

because the same machine was able to achieve higher speeds using Linux

Well, duh.

Frankly I don't see how you can "supercompute" on Windows at all, with UAC and Clippy popping up every other hour and whatnot.

Heh.

Looks like you're trying to simulate the Higgs boson. Would you like a tutorial?

Nevertheless I am impressed (1, Interesting)

Anonymous Coward | more than 3 years ago | (#34224438)

I am impressed that Windows can actually scale to that type of hardware. However, my questions are:

  - What kind of performance can an actual program achieve on Windows on that hardware?
  - Are context switches from godawful slow memory allocation calls as painfully slow on this supercomputer as they are on the typical desktop?
  - How badly does the ever-essential anti-malware suite drag down the supercomputer?

Re:Nevertheless I am impressed (2, Interesting)

bmo (77928) | more than 3 years ago | (#34224820)

- What kind of performance can an actual program achieve on Windows on that hardware?

Less than on Linux, and that's what counts in the end, isn't it?

Coupled with the fact that licenses eat into the budget a significant amount, Windows TCO is not the bargain that Microsoft would like you to believe.

--
BMO

Re:Nevertheless I am impressed (2, Informative)

gmack (197796) | more than 3 years ago | (#34225156)

I am impressed that Windows can actually scale to that type of hardware. However, my questions are:

  - What kind of performance can an actual program achieve on Windows on that hardware?

Fairly good, if the programmer is skilled at writing supercomputer applications.

- Are context switches from godawful slow memory allocation calls as painfully slow on this supercomputer as they are on the typical desktop?

It shouldn't matter too much, since they would (mostly) avoid context switches by running only one copy of the software per core, and half of Windows is disabled in the Super Computing edition.

- How badly does the ever-essential anti-malware suite drag down the supercomputer?

Shouldn't be needed since it should be extremely hard for malware to get into such a controlled environment to begin with.

There are other reasons Microsoft's idea is a bad one, such as the higher licensing costs, the lack of any compelling reason to write custom, non-GUI software from scratch on Windows, etc.

Re:Nevertheless I am impressed (2, Funny)

Noughmad (1044096) | more than 3 years ago | (#34225346)

- How badly does the ever-essential anti-malware suite drag down the supercomputer?

Shouldn't be needed since it should be extremely hard for malware to get into such a controlled environment to begin with.

Digital Fortress?

US becoming less superpowery (1)

Oxford_Comma_Lover (1679530) | more than 3 years ago | (#34224444)

> Although the US has long held a dominant position in the list things now seem to be shifting, with two of the top spots held by China, one by Japan, and one by the US.

Damn, I feel like Britain after WW2.

Hey, does this mean US accents are going to seem sexy soon?

Re:US becoming less superpowery (2, Informative)

bsDaemon (87307) | more than 3 years ago | (#34224536)

Yeah, but on the downside, it means that asian chicks are going to start gaining weight and wanting to be "liberated" and stuff, so your sexy accent isn't really going to pay off.

Re:US becoming less superpowery (1)

hedwards (940851) | more than 3 years ago | (#34224634)

This was bound to happen eventually. But the process was accelerated by the right wing and the American exceptionalists who are completely unable to acknowledge when we need a course correction.

Re:US becoming less superpowery (1)

zach_the_lizard (1317619) | more than 3 years ago | (#34224776)

Way to bring politics into a thread about supercomputers!

Now, to politics.... Being a fan of neither large party, I can smugly sit here and point out both of the sides' failings while you and the inevitable others argue which of the two sides is unfit to rule. Sadly, both sides have me convinced the other side sucks.

Re:US becoming less superpowery (1)

jellomizer (103300) | more than 3 years ago | (#34224836)

Really? I thought it was accelerated by the left wing and those environmentalists who say that all new technology is bad and evil, and who are unwilling to see that life is about balance.

Or...

It could be that China is the country with the highest population, and its new free(er) market economy allows it to better utilize its human capital and brain power. As well, theirs is a culture where, when competing, they will try to win at all costs, even in spite of themselves, while in the US we try hard to win but over the longer term.

The US record wasn't really US. (-1, Troll)

AnonymousClown (1788472) | more than 3 years ago | (#34224674)

The petaflop barrier was broken in 2008 by a US office of Indian Business Machines (IBM), so it's technically NOT a US record.

Re:US becoming less superpowery (-1, Troll)

Anonymous Coward | more than 3 years ago | (#34224702)

Damn, I feel like Britain after WW2.

Considering their diets and eating habits, I'm not sure they know the war is over.

Re:US becoming less superpowery (0)

Anonymous Coward | more than 3 years ago | (#34224770)

Still the superpower in Lardarse-ery and Parochialism. No contest there.

About hardware, not operating systems (1)

noidentity (188756) | more than 3 years ago | (#34224446)

Isn't this about hardware, not operating systems (other than the OS being able to support the hardware)? And isn't the hardware simply about how much money you have to throw at it?

Re:About hardware, not operating systems (1)

rrossman2 (844318) | more than 3 years ago | (#34224528)

Well, it says the hardware ran Linux at X speed, and Windows at less than X speed...

Re:About hardware, not operating systems (3, Insightful)

hey! (33014) | more than 3 years ago | (#34224662)

Well, it says the hardware ran linux at X speed, and windows at less than X speed...

Actually the article doesn't say that. The hardware was different: the Linux configuration had more nodes than the Windows configuration. This *might* have been for some technical reason, or it might have been for some extraneous reason (e.g., they have better things to do with this beast than run benchmarks on it).

In any case, the difference between the Windows and Linux scores was for practical purposes insignificant. It was a *benchmark*, not a real computation. Even if the benchmark is pretty good, the mix of resources used by a real program won't match it exactly (e.g. an app that uses fewer floating-point calculations but more memory allocations might see a very different result).

Microsoft's aim is not to run on research clusters, but to make inroads into businesses that have in-house Windows system administration and programming capabilities and might have use for high-performance computing. If so, the Linpack benchmark is probably close to irrelevant for many applications.

Re:About hardware, not operating systems (2, Insightful)

jedidiah (1196) | more than 3 years ago | (#34224764)

Well... "being able to run more nodes" is also a function of software. It's called scalability.

Being able to throw more nodes at a problem would certainly be a "feature" for HPC.

Re:About hardware, not operating systems (2, Insightful)

LurkerXXX (667952) | more than 3 years ago | (#34224960)

From TFA: "I'm not sure why the tests were run on a different number of nodes".

No one anywhere, except in your imagination, said Windows wasn't *able* to run on the extra nodes.

Re:About hardware, not operating systems (2, Funny)

spisska (796395) | more than 3 years ago | (#34225182)

No one anywhere, except in your imagination, said Windows wasn't *able* to run on the extra nodes.

I figure it was because the testers couldn't afford the licensing fees.

Re:About hardware, not operating systems (-1, Flamebait)

jedidiah (1196) | more than 3 years ago | (#34226010)

I have personally been involved with server deployments that had to be migrated to Unix because Windows was unable to scale to meet demand. The idea that Windows can't scale is hardly outrageous.

Linux had better numbers. That is a fact.
Linux ran on more nodes. That is a fact.

It's your excuses that are a matter of pure "imagination".

They could do better but they chose not to?

That is just silly.

Re:About hardware, not operating systems (1)

gmack (197796) | more than 3 years ago | (#34225282)

Microsoft's aim is not to run on research clusters, but to make inroads into businesses that have in-house Windows system administration and programming capabilities and might have use for high performance computing. If so, the linpack benchmark is probably close to irrelevant for many applications.

It's not a smart aim. Programmers who write clustered apps have a different skill set from the programmers who write most of the software out there; if your average programmer attempts it, the result will not scale no matter what resources you throw at the project. And this is on top of the fact that only a subset of software problems are suited to clustering to begin with.

I'm going to go out on a limb and suggest that even in the Linux case the benchmark scales better than most software that will run on a cluster.

Once you have the rare sort of programmer who can do this and the rare sort of software problem that needs it, there is nothing at all that will make Windows feel familiar, since the rest of the supercomputing market runs on Unix/Linux.

Re:About hardware, not operating systems (1)

gmuslera (3436) | more than 3 years ago | (#34224576)

That Linux was faster than Windows on the same hardware is about operating systems.

Re:About hardware, not operating systems (1)

betterunixthanunix (980855) | more than 3 years ago | (#34224588)

Isn't this about hardware, not operating systems (other than the OS being able to support the hardware)?

No, it is about two operating systems on the same hardware, one of which (GNU/Linux) outperformed the other (Windows).

And isn't the hardware simply about how much money you have to throw at it?

No, it is also about the architectural choices.

Re:About hardware, not operating systems (2, Insightful)

KarmaMB84 (743001) | more than 3 years ago | (#34225064)

They didn't run the Windows benchmark with all available nodes. I'd assume they didn't have licenses for every node, but the researchers made it sound like they had some sort of nerdgasm because the machine could benchmark running Windows on par with Linux (only a 5% margin on a smaller cluster).

Re:About hardware, not operating systems (2, Interesting)

f3rret (1776822) | more than 3 years ago | (#34225218)

Isn't this about hardware, not operating systems (other than the OS being able to support the hardware)?

No, it is about two operating systems on the same hardware, one of which (GNU/Linux) outperformed the other (Windows).

And isn't the hardware simply about how much money you have to throw at it?

No, it is also about the architectural choices.

Architectural choices are irrelevant if you don't have the funding to realize them.

If you can't afford the hardware for your fancy supercomputer, you can make the best possible choices in the world, but you're still not getting a supercomputer.

Re:About hardware, not operating systems (1)

betterunixthanunix (980855) | more than 3 years ago | (#34225928)

However, if you have the money, architectural choices make a big difference. There is a reason why you do not see many shared-memory machines these days, why Ethernet is usually not the network of choice for clusters, etc.

Dual boots? (3, Funny)

backslashdot (95548) | more than 3 years ago | (#34224470)

So it dual boots? Press the Option key or something to get into Windows and play Crysis?

Re:Dual boots? (1)

zlogic (892404) | more than 3 years ago | (#34224788)

This machine is probably capable of playing Crysis at a framerate higher than 20 fps.

Re:Dual boots? (1)

Nidi62 (1525137) | more than 3 years ago | (#34224792)

You assume even this computer is powerful enough to play Crysis on anything other than low settings.

Interesting (2, Insightful)

quo_vadis (889902) | more than 3 years ago | (#34224584)

It is interesting that there are 6 new entrants in the top 10. Even more interesting is the fact that GPGPU accelerated supercomputers are clearly outclassing classical supercomputers such as Cray. I suspect we might be seeing something like a paradigm shift, such as when people moved from custom interconnects to GbE and InfiniBand, or when custom processors began to be replaced by Commercial Off The Shelf processors.

Re:Interesting (3, Informative)

alexhs (877055) | more than 3 years ago | (#34224746)

Even more interesting is the fact that GPGPU accelerated supercomputers are clearly outclassing classical supercomputers such as Cray

Funny that you mention Cray, as the Cray-1 [wikipedia.org] was the first supercomputer with vector processors [wikipedia.org], which is what GPGPUs actually are.
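To make that connection concrete: the canonical vector workload is a loop like SAXPY, one floating-point operation applied uniformly across whole arrays, with no branches and no cross-iteration dependences. That shape is what the Cray-1's vector registers accelerated and what a GPU's thread groups accelerate today (a scalar C sketch; vector hardware runs many iterations in lockstep):

<ecode>
/* y = a*x + y over n elements: uniform, branch-free, independent
 * iterations, so the hardware can stream it through vector lanes. */
void saxpy(int n, float a, const float *x, float *y) {
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
</ecode>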

Re:Interesting (1)

vlm (69642) | more than 3 years ago | (#34224862)

Funny that you mention Cray, as the Cray-1 [wikipedia.org] was the first supercomputer with vector processors [wikipedia.org], what GPGPUs actually are.

Cray-1 date of birth 1976

CDC Star-100 date of birth 1974 (not a stellar business/economic/PR success, but it technically worked)

ILLIAC IV design was completed in 1966. Implementation, however, had some problems. Debatable, but sort of true to say it was first booted up in 1972 but wasn't completely debugged for a couple years. As if there has ever been a completely debugged system.

That's the problem with "first": there are so many of them.

Re:Interesting (1)

alexhs (877055) | more than 3 years ago | (#34225832)

I went by "The vector technique was first fully exploited in the famous Cray-1" from the wikipedia :)

Apparently the difference between the CDC Star-100 and the Cray-1 is the addressing mode: the Star-100 fetched and stored data in main memory, while the Cray-1 had 64 64-bit registers.

On the account of ILLIAC IV, Wikipedia says it "was finally ready for operation in 1976". It booted in 1972, but wasn't reliable enough to run applications at that time. It was usable in 1975, operating only Monday to Friday and having up to 40 hours of planned maintenance a week.

Re:Interesting (1)

antifoidulus (807088) | more than 3 years ago | (#34225128)

Vector processors in supercomputing are like bellbottoms: they constantly go in and out of style. Like you said, the first supercomputers were vector machines, but then with the rise of COTS, vector processors fell out of style for a while, came back briefly with the Earth Simulator, and are now back with the GPU. Unlike previous vector processors, GPUs have a lot more restrictions (especially when it comes to memory bandwidth and latency), but also unlike previous vector processors, GPUs are dirt cheap and development can essentially be outsourced to the GPU manufacturers.

Re:Interesting (1)

Sycraft-fu (314770) | more than 3 years ago | (#34224956)

Well, part of the problem is that the definition of supercomputer has become a little blurred, in particular with regards to the Top500. Many things people are calling supercomputers really aren't; they are clusters. Now, big clusters are fine; there are plenty of uses for them. However, there are problems that they are not good at solving. In clusters, processors don't have access to memory on other nodes; they have to send over the data. So long as things are pretty independent and you can break down the problem and work on small sets without a ton of communication, they work great. However, for some things, like particle simulations, that doesn't work. There is so much interdependency that processors really have to be able to access memory not on their node to be able to keep things working fast.

Then there's just the problem of what you choose to test with. GPUs are good at certain kinds of problems: things that are 32-bit floating point (64-bit with the very latest GPUs), that can be divided down into very small slices, that don't branch a lot and, when a branch does happen, everything branches the same way, and where you can fit the problem set (or at least the part you are working on) into the GPU's memory, which is generally 1-4GB. That's fine; lots of problems are like that, and they fly on GPUs. However, others aren't, and they are much slower than CPUs. So GPGPU systems will appear powerful given some benchmarks, not as powerful given others.

At any rate, we just should be more careful with what we call a supercomputer, because we are kind of diluting the term. 2000 servers connected with GigE is a hell of a lot of processing power and a great cluster, but isn't really a supercomputer. There are reasons why some people need to pay more for special systems with high-speed NUMA interconnects. Just because a cluster performs well on Linpack (what the Top500 likes) doesn't mean it is good at everything.
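The remote-memory point deserves a concrete illustration. On a message-passing cluster, a node that needs a neighbour's boundary data must receive it as an explicit message every step; there is no remote load. A minimal MPI sketch in C (a hypothetical 1-D domain split with periodic neighbours, not any particular simulation):

<ecode>
#include <mpi.h>
#include <stdio.h>

#define N 1024  /* cells owned by each rank (hypothetical) */

int main(int argc, char **argv) {
    int rank, nranks;
    double slab[N], left_ghost, right_ghost;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    for (int i = 0; i < N; i++) slab[i] = rank;  /* dummy data */

    int left  = (rank - 1 + nranks) % nranks;    /* periodic neighbours */
    int right = (rank + 1) % nranks;

    /* The halo exchange: edge cells cross the interconnect because
     * the neighbouring rank cannot simply read this rank's memory. */
    MPI_Sendrecv(&slab[N - 1], 1, MPI_DOUBLE, right, 0,
                 &left_ghost,  1, MPI_DOUBLE, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&slab[0],     1, MPI_DOUBLE, left,  1,
                 &right_ghost, 1, MPI_DOUBLE, right, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d received ghosts %g and %g\n", rank, left_ghost, right_ghost);
    MPI_Finalize();
    return 0;
}
</ecode>

The more tightly coupled the problem, the more such exchanges dominate, which is why interconnect latency separates a true supercomputer from a GigE cluster.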

Re:Interesting (1)

symbolset (646467) | more than 3 years ago | (#34225190)

It doesn't explicitly say, because the interconnect is proprietary. But with InfiniBand the nodes can address the RAM on the other nodes. I would assume this to be the case here also.

Rate of processing power increase (1)

zyzko (6739) | more than 3 years ago | (#34224688)

The last two sentences of the summary are the most interesting ones. If you think the rate of growth of memory and processing power in standard home/office computers is out of hand, just look at the supercomputers. These things are basically old when delivered, and their useful life is at most 3-5 years; after that, nobody cares. And that is a pity considering how much these beasts cost, and that they are mostly funded with public (tax) money, because running a business selling processor time from these things, with their short lifetime, would never be profitable.

Re:Rate of processing power increase (1)

Teun (17872) | more than 3 years ago | (#34224762)

But being ahead of the herd has never been cheap and the rewards (or losses for being late) have made it a necessity.

Re:Rate of processing power increase (1)

gmack (197796) | more than 3 years ago | (#34225010)

Pretty sure Render Farms have the same obsolescence problem and there are businesses that depend on them.

Re:Rate of processing power increase (2, Insightful)

Junta (36770) | more than 3 years ago | (#34225102)

Not really. Yes, there is something of an arms race for the top500, but even after the top500 no longer lists a system it will almost certainly still be in use by someone for practical purposes other than benchmarking.

Re:Rate of processing power increase (1)

bbn (172659) | more than 3 years ago | (#34225782)

The weather institute had the most powerful supercomputer in this (small) country. It was not that old, but it was time to upgrade. The new supercomputer was now the fastest supercomputer and the old one became the second fastest.

What happened to the second fastest supercomputer in the country? It was scrapped. They could not afford to keep running it due to the cost of powering it. They could not sell or give it away, because the economics of it did not add up. Anyone needing such computing power was better off getting new equipment that would do more for the amount of electricity needed.

Time To Hump It! (1)

b4upoo (166390) | more than 3 years ago | (#34224700)

The US had best take the processor speed race very seriously. Who knows what kind of military or economic domination might get a leg up from supercomputers? And once someone is on top, it can take a century or two to dislodge a leader in technology.

With or without? (0, Troll)

Teun (17872) | more than 3 years ago | (#34224736)

Was that achieved with or without a reliable virus scanner and firewall?

Long Term Trends (1, Insightful)

Anonymous Coward | more than 3 years ago | (#34224780)

Over the next 10 years I think we will see China, India, and other growing economies surpass the US in technology and science research, just as the US surged ahead of the "old world" countries in Europe post-World War 2. More PhDs, more papers, more discoveries.

As life gets more comfortable and stable for a country, in terms of things like health, energy, and infrastructure, the focus always moves away from this kind of technology advancement and onto the arts, literature, and social concerns.

Try asking someone in Europe how important they think it is for Germany, or France or the UK to have a leading position in terms of supercomputing capability. It just doesn't feature on the radar for them.

The same will become true for the US over the next decade.

Re:Long Term Trends (1, Insightful)

Anonymous Coward | more than 3 years ago | (#34225764)

> surged ahead of the "old world" countries in Europe post World War 2
Sure. Europe was a pile of rubble.

> More Phds, more papers
Doesn't mean anything

> more discoveries.
by European immigrants escaping from the war

> As life gets more comfortable and stable for a country
You don't travel much. The most unstable countries are the worst in science. Also, CINC.

US technology has been leading in only one area: IT. As it was the area with the biggest growth during the last decades, the economic benefits were great.
The earth's past decades have been shaped by the US not because of technological superiority, but because of its marketing and entertainment.

Yeah, But From What I Hear... (-1, Troll)

Greyfox (87712) | more than 3 years ago | (#34224828)

If you stick an SD card into your petaflop Windows cluster, it permanently modifies it...

I need one (2, Interesting)

florescent_beige (608235) | more than 3 years ago | (#34225014)

I need one so I can recalculate my budget spreadsheet in a femtosecond. These nanosecond pauses are getting old.

On a lighter note: why isn't this stuff changing our lives? I remember in the late '90s I read a story about how gigaflop computing would revolutionize aeronautics, allowing the full simulation of weird new configurations of aircraft that would be quantum leaps over what we had. Er, have.

Can I answer my own question? I mean, can I answer two of my questions? No, make that three now. Anyway, my perspective is that the kinds of engineers who have the knowledge required to write this kind of software aren't software engineers. In fact, aeronautics is rife with some of the most horrifying software imaginable. Much of it being Excel macros. Seriously. I wrote some of it.

Re:I need one (0)

Anonymous Coward | more than 3 years ago | (#34225358)

After all the great advances in modelling and simulation, we go from being information-limited to physics-limited again. You can design an amazing wing structure, but it needs materials with 3x the tensile strength of existing alloys. Research indicates you should switch to fiber composites, but the manufacturing lead-in, tooling, and maintenance work are a completely different game.

You may have seen giant low-noise flying-wing designs out of engineering colleges; it looks like progress. Fabricating a working design within the constraints of current infrastructure becomes more time-consuming, because you can't simulate that so well.

So why would anyone want to do this? (5, Informative)

Entropius (188861) | more than 3 years ago | (#34225060)

I do scientific high-performance computing, and there is simply no reason anyone would want to run Windows on a supercomputer.

Linux has native, simple support for compiling the most common HPC languages (C and Fortran). It is open source and extensively customizable, so it's easy to make whatever changes need to be made to optimize the OS on the compute nodes, or optimize the communication latency between nodes. Adding support for exotic filesystems (like Lustre) is simple, especially since these file systems are usually developed *for* Linux. It has a simple, robust, scriptable mechanism for transferring large amounts of data around (scp/rsync) and a simple, unified mechanism for working remotely (ssh). Linux (the whole OS) can be compiled separately from source to optimize for a particular architecture (think Gentoo).

What advantage does Windows bring to an HPC project?
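As a taste of the workflow the parent describes, everything below ships with a stock Linux distribution plus an MPI implementation (a sketch; the host name, path, and rank count are made up):

<ecode>
/* build: mpicc -O3 pi.c -o pi        (mpicc wraps the system C compiler)
 * stage: scp pi cluster:/scratch/me/ (hypothetical host and path)
 * run:   ssh cluster mpirun -np 512 /scratch/me/pi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    long n = 100000000;
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank integrates its own strided slice of 4/(1+x^2) on [0,1]. */
    double h = 1.0 / n, sum = 0.0;
    for (long i = rank; i < n; i += size) {
        double x = (i + 0.5) * h;
        sum += 4.0 / (1.0 + x * x);
    }

    double pi;
    MPI_Reduce(&sum, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("pi ~= %.12f\n", pi * h);

    MPI_Finalize();
    return 0;
}
</ecode>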

Re:So why would anyone want to do this? (1)

BigFootApe (264256) | more than 3 years ago | (#34225618)

The only "advantage" is when you're defaulted to Windows because an ISV has a required shrink wrap application available only for Windows.

Anti-American Bias? (1)

l0ungeb0y (442022) | more than 3 years ago | (#34225100)

The summary clearly states "Linux retains a spot in the top-5", then goes on to say that China has 2 "top spots", with America and Japan only having one spot apiece. And while that may be true if you limit it to a "top-4", America is tied with China if you count the number 5 position. So why does the OP pull this sleight of hand, only counting the top 4 as the "top spots" after making reference to the "top-5" as the measure of top positions? Looks like bias to me.
