
Cray Unveils Its First GPU Supercomputer

Unknown Lamer posted more than 3 years ago | from the runs-quake-at-ten-billion-fps dept.

Supercomputing 76

An anonymous reader writes "Supercomputer giant Cray has lifted the lid on its first GPU offering, bringing it into the realm of top supers like the Chinese Tianhe-1A." The machine consists of racks of blades, each with eight GPU/CPU pairs (the blades can even be installed into older machines). It looks like Cray delayed releasing GPU hardware in order to work on a higher-level programming environment than is available from other vendors.


76 comments


Which following the pattern of other articles... (5, Funny)

Anonymous Coward | more than 3 years ago | (#36233452)

...will promptly be used for mining BitCoins.

Re:Which following the pattern of other articles.. (0)

Anonymous Coward | more than 3 years ago | (#36233538)

At least until the value of bitcoin is such that it produces zero economic profit. The return on your supercomputer-powered bitcoin venture will approximate the return on T-bills in... about... hold on... OK. Now!

Re:Which following the pattern of other articles.. (0)

Anonymous Coward | more than 3 years ago | (#36237570)

"Private homosexuality was a criminal offence in Britain up until - astonishingly - 1967. In 1954 the British mathematician, Alan Turing, a candidate along with John von Neumann for the title of father of the computer, committed suicide after being convicted of the criminal offence of homosexual behaviour in private. He was offered a choice between two years in prison (you can imagine how the other prisoners would have treated him) and a course of hormonal injections which could be said to amount to chemical castration, and would have caused him to grow breasts. His final, private choice was an apple that he had injected with cyanide. As the pivotal intellect in the breaking of the German Enigma codes, Turing arguably made a greater contribution to defeating the Nazis than Eisenhower or Churchill. Thanks to Turing and his 'Ultra' colleagues at Bletchley Park, Allied generals in the field were consistently, over long periods of the war, privy to detailed German plans before the German generals had time to implement them. After the war, when Turing's role was no longer top secret, he should have been knighted and feted as a saviour of his nation. Instead, this gentle, stammering eccentric genius was destroyed, for a 'crime', committed in private, which harmed nobody. Once again, the unmistakable trademark of the faith-based moralizer is to care passionately about what other people do (or even think) in private."

Re:Which following the pattern of other articles.. (1)

TWX (665546) | more than 3 years ago | (#36238496)

But is it compatible with Duke Nukem Forever?

Sadly, the machine I casemodded in full Duke regalia in anticipation of DNF back in 1997 is wholly incapable of running the game, and since it's AT form factor it ain't gettin' upgraded...

Sweet (0)

Linsaran (728833) | more than 3 years ago | (#36233460)

Now to put this to work mining bitcoins!
Wait, you mean there are legitimate reasons to have racks of supercomputing GPUs other than to solve arbitrary math problems in the name of pseudocurrency?

Re:Sweet (2)

The Master Control P (655590) | more than 3 years ago | (#36236174)

Physics simulations involving discretized partial differential equations can make any machine less powerful than a Matrioshka Brain cry uncle. If some of my optimizations work out, I'll be able to get near-slideshow framerates on a 512x256 2D simulation of a single, ideal, purely hydrodynamic fluid using nVidia's top-of-the-line C2050 GPU.

Now consider that I'd prefer to have at least a thousand cells per side in all 3 dimensions, which makes the problem ten thousand times larger, preferably several thousand which would make it nearly a million times larger. Then add magnetism, resistance, viscosity and ExB drift, and move to at least a two or three fluid model, which would make for about 30 times the work per cell over what I've currently got running.

So before even beginning to consider atomic physics, radiation and non-ideal equations of state, I've made the problem about 20-30 million times more difficult than one which utilizes a GPU to the limit. Even if everything scaled perfectly across a thousand GPUs, major problems would take days or weeks to run :(

Imagine (2)

taktoa (1995544) | more than 3 years ago | (#36233462)

A Beowulf cluster of these!

Re:Imagine (2)

VortexCortex (1117377) | more than 3 years ago | (#36233680)

Meh, it's got "blades" -- it might as well be a Beowulf cluster.

Re:Imagine (2)

taktoa (1995544) | more than 3 years ago | (#36233732)

A Beowulf cluster of Beowulf clusters is not a Beowulf cluster, it's a multidimensional Beowulf cluster.
Likewise, a BOINC of Beowulf clusters, or a "jagged Beowulf cluster", is not just a Beowulf cluster.

Re:Imagine (1)

jd (1658) | more than 3 years ago | (#36233992)

You'd want to make a MOSIX cluster of Beowulf clusters, so as to allow for each cluster to appear as a node without any conflicts. To make it 3D, you'd use a Kerrighed cluster of MOSIX clusters of Beowulf clusters.

Re:Imagine (0)

Anonymous Coward | more than 3 years ago | (#36237410)

A Beowulf cluster of Beowulf clusters is not a Beowulf cluster

A Beowulf cluster is always, by definition, a Beowulf cluster...

Re:Imagine (1)

bryan1945 (301828) | more than 3 years ago | (#36235066)

All your cluster grits are belong to us

Re:Imagine (1)

djdanlib (732853) | more than 3 years ago | (#36238444)

All your cluster grits are belong to Cowboy Neal, you insensitive clod!

Re:Imagine (1)

bryan1945 (301828) | more than 3 years ago | (#36239308)

Damn!

First (-1)

Anonymous Coward | more than 3 years ago | (#36233474)

First

Will It Run +4, Interesting (-1)

Anonymous Coward | more than 3 years ago | (#36233480)

Glorious Revolution [dailymail.co.uk] ?

Yours In Beijing,
Kilgore Trout

P.S.: Newt Gingrich For .............. Town Fool !

But... (0)

Arador Aristata (1973216) | more than 3 years ago | (#36233578)

...can it run Metro 2033 on High?

Re:But... (2)

Colonel Korn (1258968) | more than 3 years ago | (#36233676)

...can it run Metro 2033 on High?

My single GTX580 can.

Re:But... (0)

Arador Aristata (1973216) | more than 3 years ago | (#36233758)

But on six screens?

I wonder... (0)

Anonymous Coward | more than 3 years ago | (#36233706)

Does it use bus-parity?

So now... (-1, Troll)

bmo (77928) | more than 3 years ago | (#36233734)

...I can mine all the remaining bitcoins and corner the market!

Oh wait, it's just a pump-and-dump(mtgox) money laundering(silkroad) scam.

Nevermind then.

--
BMO

Re:So now... (1)

Yvanhoe (564877) | more than 3 years ago | (#36234090)

It is if you just mint and sell, or buy to keep the money. But if you use it to trade dematerialized goods, it can become a really efficient currency.

Kraken Cray XT5 (1, Interesting)

Dremth (1440207) | more than 3 years ago | (#36233738)

I did some rough calculations regarding NICS's Kraken Cray XT5 and bitcoin mining. FYI, the Kraken was the 8th fastest supercomputer in November of 2010. I determined that if the supercomputer put all of its resources toward mining bitcoins, it could generate 1,511.61 BTC per day (or about $8,450.53/day). Granted, the Kraken has just regular CPUs doing the calculations. I can only imagine what a Cray supercomputer with GPUs in it would be capable of...

Re:Kraken Cray XT5 (4, Informative)

icebraining (1313345) | more than 3 years ago | (#36233998)

Uh, no you couldn't. The rate of bitcoin creation is fixed (it's about 50 BTC / 10 mins, for now). If you add more computational power, the system will adjust and it'd become proportionally harder to generate them, so the global rate would stay stable.

So despite the 100 thousand-fold increase in mining difficulty in the past 15 months, the network continuously self-adjusts itself to issue one block of Bitcoins about every 10 minutes. The difficulty increase is entirely caused by users competing between themselves to acquire these blocks.

http://blog.zorinaq.com/?e=49 [zorinaq.com]

Re:Kraken Cray XT5 (0)

Anonymous Coward | more than 3 years ago | (#36234224)

So uh, one could mine 7,200 bitcoins per day, minus what others mine. What was your point again?

Re:Kraken Cray XT5 (1)

icebraining (1313345) | more than 3 years ago | (#36234404)

1511.61 is ~21% of the 7200 daily BTC, so you'd need around that share of the total computing power.

According to the NICS page [tennessee.edu] , the Kraken has a peak performance of 1.17 PetaFLOP.
According to bitcoin watch [bitcoinwatch.com] , the network has now a performance of 46.4 PetaFLOP.

Now, I'm no mathematician, but it seems to me that 1.17 is far from 21% of 46.4; according to my calculator, it's in fact little over 2.5%.

Re:Kraken Cray XT5 (1)

Dremth (1440207) | more than 3 years ago | (#36234432)

Ok, yes, but the difficulty would increase for everyone mining as well. Last I checked, the entire bitcoin network had a mining strength of 1,747 Ghash/s. The Kraken alone has about 367 Ghash/s. That's 21% of the entire network. With all that power coming into the network at once, you're still bound to make a TON of bitcoins, because you're essentially taking a substantial portion of bitcoins from other miners. I did neglect to factor in the scaling of difficulty (and that's why I said it was a rough calculation; maybe I should've emphasized "rough" more), so you may not make as much as 1,511.61 BTC/day, but you're still going to make quite a bit (no pun intended).

Re:Kraken Cray XT5 (1)

icebraining (1313345) | more than 3 years ago | (#36234452)

Last I checked, the entire bitcoin network had a mining strength of 1,747 Ghash/s

Then you're out of date, it's 3653 Ghash/s [bitcoinwatch.com] now.

Re:Kraken Cray XT5 (1)

Dremth (1440207) | more than 3 years ago | (#36234490)

My mistake then. I was basing my information off of this: http://www.bitcoinminer.com/post/5622597370/hashing-difficulty-244139 [bitcoinminer.com]

Total network hashing: 1,747 Ghash/sec

Either the network strength has significantly increased in the past week, or one of those two sites shouldn't be trusted. Your source looks more reliable.

Re:Kraken Cray XT5 (0)

Anonymous Coward | more than 3 years ago | (#36234750)

It has increased significantly; http://bitcoin.sipa.be/

Re:Kraken Cray XT5 (1)

qubezz (520511) | more than 3 years ago | (#36236850)

That means with ten of these, you could have the majority of the compute power, enough to earn majority trust and poison the bitcoin system with your own fake transaction records and wipe everybody's funny-money into oblivion. I'm sure the NSA has enough compute power they could do that now if they wanted to crypto bitcoins instead of your emails for a day. The FBI would probably pay as much attention to a bad actor crashing a fake currency as they would someone hacking your WoW and selling your Traveler's Tundra Mammoth.

Re:Kraken Cray XT5 (1)

Intron (870560) | more than 3 years ago | (#36237622)

You are ignoring that it costs more to run a supercomputer than it could generate in bitcoins.

Re:Kraken Cray XT5 (1)

ObsessiveMathsFreak (773371) | more than 3 years ago | (#36235786)

Wow. So that's why I left Bitcoin on for four days straight and didn't mine a single coin.

Explain to me again why anyone is going to be running background Bitcoin processes in 2015?

Re:Kraken Cray XT5 (1)

Dremth (1440207) | more than 3 years ago | (#36236306)

I would imagine that by 2015, the mining for bitcoin will have slowed quite a bit. But, by then it should have hopefully gained enough popularity that it can function as just a p2p economy.

Re:Kraken Cray XT5 (1)

qubezz (520511) | more than 3 years ago | (#36236862)

You need to join a pool. Acting alone, and at the current rate processing power is being added with all the pump-and-dump slashspam, you likely won't win a 50-coin fabulous prize even if you leave your computer running for four years.

Re:Kraken Cray XT5 (1)

maxume (22995) | more than 3 years ago | (#36237876)

If the apparent value of bitcoins is far higher than the cost of electricity needed to generate them, people will still run clients.

Re:Kraken Cray XT5 (1)

luminate (318382) | more than 3 years ago | (#36236952)

The Kraken reportedly consumes about 2.8 megawatts of power, so assuming your figures are accurate, the power alone would cost about $6,720/day (at $0.10/kWh) for a "profit" of $1730/day. Factor in the fact that it's a $30 million machine with a very short usable lifespan (i.e. massive depreciation), and they'd be losing a ridiculous amount of money.

"High level" programming environment? Sigh. (2)

nxmehta (784271) | more than 3 years ago | (#36233796)

The fact that writing C and Fortran code using a message passing library constitutes a high level programming environment is a complete indictment of the sad state of parallel programming today. Seriously, do you want to be programming complex parallel algorithms on HPC machines using Soviet Era technology? I've tried that and it made me want to jump out a window. It's about as easy to program in this type of an environment as it is to program an FPGA (hint: it's a pain in the ass).

Re:"High level" programming environment? Sigh. (1)

jd (1658) | more than 3 years ago | (#36234006)

Occam is higher-level than C or Fortran, and it should be possible to adapt Erlang to parallelize across a cluster.

Re:"High level" programming environment? Sigh. (1)

David Greene (463) | more than 3 years ago | (#36234034)

You're forgetting things like PGAS and other higher-level parallel programming models. MPI is the dominant technology in use so these machines have to support it well. But they also support more future-looking tools.

Re:"High level" programming environment? Sigh. (3, Interesting)

Anonymous Coward | more than 3 years ago | (#36234116)

Really, you've tried it and it made you want to jump out of a window? OpenMP is an extremely simple, easy to use add-on to the C language. It is one of the two current standards used for parallelized scientific computing, and although it will eventually be succeeded by a language with more features, it will be difficult for its successor to match its ease and workmanlike grace.

I honestly have trouble believing someone could have much difficulty with it. If you want to have the work in a "for" loop parallelized the extremely mentally challenging thing to do is write

#pragma omp for

just before your for loop. Look at all that difficult message passing you have to contend with! And since you're writing scientific calculations, C generally lends itself to good clarity in the code. I worked with a math major who had only done the smallest bits of C and Java beforehand, and he picked up OpenMP immediately. If you can't handle it, you really should reconsider whether your talents lie in software development. If you want to see what a truly awful parallelizing language or API is, look at UPC - unified parallel C.

Re:"High level" programming environment? Sigh. (0)

Anonymous Coward | more than 3 years ago | (#36234234)

MPI != OpenMP

Re:"High level" programming environment? Sigh. (1)

KainX (13349) | more than 3 years ago | (#36234244)

MPI != OpenMP

HTH.

Re:"High level" programming environment? Sigh. (0)

Anonymous Coward | more than 3 years ago | (#36236782)

The article said they're adding OpenMP support, hence my comments were about OpenMP. If they also support MPI (I admit I skimmed quickly), all the better -- it's not particularly difficult to use, either.

Re:"High level" programming environment? Sigh. (1)

KainX (13349) | more than 3 years ago | (#36240934)

Maybe so, but the comment to which you replied, and with which you disagreed, was specifically about "using a message passing library." That's MPI, not OpenMP. It's like responding to someone saying, "I don't like spam!" with "But grilled cheese sandwiches are so much tastier when you put ham on them, so clearly you're wrong!" Your statement may be technically correct, but as a response to the topic at hand, it is in error. :-)

Re:"High level" programming environment? Sigh. (1)

Coryoth (254751) | more than 3 years ago | (#36235152)

What did you find awful about UPC? I've found it very pleasant to work with.

Re:"High level" programming environment? Sigh. (1)

Bill Barth (49178) | more than 3 years ago | (#36235222)

Have you tried it off-node?

Re:"High level" programming environment? Sigh. (1)

David Greene (463) | more than 3 years ago | (#36238644)

Yep. Works great!

Re:"High level" programming environment? Sigh. (1)

Coryoth (254751) | more than 3 years ago | (#36259272)

Have you tried it off-node?

Yes, and it works just fine; the only issues would be if I got my placement wrong via poor layout/blocking or I neglected to upc_memget something that I needed intense access to but for some reason couldn't make local in the initial layout. Neither of those require much forethought at all to avoid.

Re:"High level" programming environment? Sigh. (3, Insightful)

MaskedSlacker (911878) | more than 3 years ago | (#36234268)

The point is not for the job to be easy for your lazy ass, the point is for the code to execute as quickly as possible.

Re:"High level" programming environment? Sigh. (0)

Anonymous Coward | more than 3 years ago | (#36237656)

Then again, if "high-end" means "we've implemented a lot of busywork in an effective way, so you can focus on your actual problem", I'm all for it.
Oh, and there's nothing fundamentally wrong about writing HPC code in e.g. python ... as long as the modules you use for the heavy lifting are written in a lower-level language. ;)

Re:"High level" programming environment? Sigh. (1)

dbIII (701233) | more than 3 years ago | (#36237840)

It depends on the task. Some of them are not complex. Some things are so embarrassingly parallel that you just tell the first node or whatever to apply a function to the first lot of data, feed the next lot to the next node and so on - then just concatenate the results together at the end. There's a lot of stuff in geophysics like that, for example - apply filter X to twenty million traces (where a trace is just like an audio track). You could do that with twenty million processor cores if you had them (but other bottlenecks would render that insane long before you got near that number).

Re:"High level" programming environment? Sigh. (1)

David Greene (463) | more than 3 years ago | (#36238746)

There's more information on the GPU programming model in the HPCWire article. It is OpenMP directive-based, making it quite a bit easier to use than low-level CUDA and other such things.

Chinese computer dick waving (-1, Troll)

The O Rly Factor (1977536) | more than 3 years ago | (#36233836)

Great to see that Cray is joining in on getting a slice of the money IBM is raking in through providing computing products to the USA vs. China dick waving competition over who has the faster supercomputers.

Nothing but a complete moneypit if we have no actual experiments to run on them that require that kind of scale. It's no different than watching 15 year olds argue about which of their desktops can play Call of Duty at a better framerate.

Re:Chinese computer dick waving (4, Insightful)

LWATCDR (28044) | more than 3 years ago | (#36233974)

You would be right except that there are applications that do require the performance.
You can never have too much computing power for some applications, like climate modeling. So what is your point?

Re:Chinese computer dick waving (0)

Anonymous Coward | more than 3 years ago | (#36234798)

You can never have too much computing power for some applications like climate modeling.

...or calculating more Pi digits, as we're talking about useful applications.

Re:Chinese computer dick waving (1)

MaskedSlacker (911878) | more than 3 years ago | (#36234290)

Nothing but a complete moneypit if we have no actual experiments to run on them that require that kind of scale

And we have plenty, the big ones off the top of my head being nuclear weapons work (as we've replaced live tests entirely with computer simulations), protein folding, climate modelling, and signals intelligence processing. I'm sure other /.ers without your childishly narrow experience of the world can think of others.

Now you can crash linux faster. (-1)

Anonymous Coward | more than 3 years ago | (#36233874)

And get modded flamebait faster too.

Will it support Fortran? (3, Interesting)

LWATCDR (28044) | more than 3 years ago | (#36233950)

There are still a lot of HPC applications written in Fortran; will this run them?
Also, how hard of a porting effort, if any, will be needed to get good results from this?

Re:Will it support Fortran? (0)

Anonymous Coward | more than 3 years ago | (#36234222)

It's Linux. It'll even run COBOL if you like ;)

Re:Will it support Fortran? (0)

Anonymous Coward | more than 3 years ago | (#36234226)

There are still a lot of HPC applications written in Fortran; will this run them?
Also, how hard of a porting effort, if any, will be needed to get good results from this?

Yes and no. You can write your CPU-executed code in Fortran and interface with CUDA code that will execute on the GPU. There are also some packages for using compiler directives to generate CUDA code (we all note how wonderfully that will work). The former is fairly efficient if there is enough parallel work to do on the GPU that you can neglect the time of shipping data from memory to the GPU.

Re:Will it support Fortran? (1)

qubezz (520511) | more than 3 years ago | (#36236910)

Yes and No. You can write your code, but good luck finding a punch card reader [wikimedia.org] for the Cray....

Re:Will it support Fortran? (0)

Anonymous Coward | more than 3 years ago | (#36236122)

Hopefully, it won't, so it forces all you Fortran slackers to finally pick it up, so you won't ask thrice the price for the same amount of work just because it takes three times as long to write something in shitty Fortran.

(The same thing should be done with C and C++. And soon Java. The triad of programming inefficiency and inelegance FAILs. [If you haven't seen something better, don't bother to comment. Go see the world first. Next stop: Haskell])

Re:Will it support Fortran? (0)

Anonymous Coward | more than 3 years ago | (#36236890)

And soon Java.

Ok I mostly agreed with you up to here. Two different levels of runtime:

For example java requires the "environment" to run. (Windows --- java.exe and related linked libraries)

C++, for example, is compiled specifically to be run on the respective operating system. (Windows you end with an executable)

(Windows is an easy example...chosen arbitrarily)

Java != supercomputing, I mean really? When is the last time someone programmed flight control software in an airplane in Java or any other interpreted language? For something as delicate as a supercomputer, control matters. Additionally, having to specify "heap size" with Java on a supercomputer is just laughable!

Re:Will it support Fortran? (1)

s.d. (33767) | more than 3 years ago | (#36236226)

PGI makes a CUDA Fortran compiler [pgroup.com], but with GPUs it's not as simple as just recompiling; the code has to be rewritten to take advantage of the accelerator and its unique architecture.

Re:Will it support Fortran? (1)

LWATCDR (28044) | more than 3 years ago | (#36238104)

So I guess the second part of the question is: have the HPC libraries been ported yet? I have heard one of the big reasons Fortran is still so popular is its large collection of highly optimized HPC libraries. The other reason is that Fortran is supposed to be really easy to optimize, which I can believe.

Into the Realm? (4, Informative)

David Greene (463) | more than 3 years ago | (#36234010)

bringing it into the realm of top supers like the Chinese Tianhe-1A

Uh, Cray already has machines in service that blow Tianhe-1A out of the water on real science. Tianhe-1A doesn't even exist anymore. It was a publicity stunt. Cray is already making the top supers. It's others that have to catch up.

Re:Into the Realm? (0)

Anonymous Coward | more than 3 years ago | (#36234346)

Next year IBM is coming out with one dubbed "Mira" under the Blue Gene/Q arch. $50 Million price tag, it's for the DoE. It should also blow that T-1A out of the water.

Re:Into the Realm? (1)

fintler (140604) | more than 3 years ago | (#36235444)

ANL's Mira is going to be roughly half as fast as LLNL's Sequoia.

Re:Into the Realm? (0)

Anonymous Coward | more than 3 years ago | (#36236956)

Well, can you name them? For some reason, you didn't. My straw computer is better than your straw computer.

Re:Into the Realm? (1)

David Greene (463) | more than 3 years ago | (#36237838)

Jaguar.

As others have noted, IBM also makes machines that far surpass Tianhe-1A on real work. So does SGI.

Tianhe-1A was interesting for its intended purpose (Top-500) but it's a long way from being a productive tool.

My God... (1)

Gerzel (240421) | more than 3 years ago | (#36234320)

It's made of cores!


Password cracking (0)

Anonymous Coward | more than 3 years ago | (#36235392)

Using applications that can take advantage of GPU processing power and multi-core CPU's to crack passwords means that anyone that can afford one of these can do a whole lot more in that regard...
Imagine generating rainbow tables!

Impressive hardware.

nitpick time (-1)

Anonymous Coward | more than 3 years ago | (#36236936)

Sigh. If you're not using it for processing graphics, it's technically not a "GPU". After reading the article, they are simply using these as auxiliary processors slaved under a main CPU. They are not actually using video cards; they're only using the GPU on a custom board, so this really is nothing newsworthy to me, since it's no different than using any other dedicated co-processor. Yes, it's cheaper because they're using a commodity already on the market, but this is a far cry from the article's implication that they just made a supercomputer out of spare video cards.

