FASTRA II Puts 13 GPUs In a Desktop Supercomputer

timothy posted more than 4 years ago | from the lucky-number dept.

Supercomputing | 127 comments

An anonymous reader writes "Last year, tomography researchers from the ASTRA group at the University of Antwerp developed a desktop supercomputer with four NVIDIA GeForce 9800 GX2 graphics cards. The performance of the FASTRA GPGPU system was amazing; it was slightly faster than the university's 512-core supercomputer and cost less than 4,000 EUR. Today the researchers announce FASTRA II, a new 6,000 EUR GPGPU computing beast with six dual-GPU NVIDIA GeForce GTX 295 graphics cards and one GeForce GTX 275. Development of the new system was more complicated and there are still some stability issues, but tests reveal that the 13 GPUs deliver 3.75x more performance than the old system. For the tomography reconstruction calculations these researchers need to do, the compact FASTRA II is four times faster than the university's supercomputer cluster, while being roughly 300 times more energy efficient."

Easy money to be made? (0, Redundant)

Darkness404 (1287218) | more than 4 years ago | (#30466160)

It sounds like there might be easy money to be made buying these components, putting them in a computer case and then reselling them for profit at various universities. Just wait for the "Dell" of supercomputers.

Re:Easy money to be made? (1, Flamebait)

Hatta (162192) | more than 4 years ago | (#30466240)

Where do you get a motherboard that can accept 5 graphics cards?

Re:Easy money to be made? (0, Redundant)

LordKaT (619540) | more than 4 years ago | (#30466278)

7 graphics cards. Plus 4 power supplies.

Methinks "easy" in the GP's context means "easier than building a supercomputer from the ground up, like IBM currently does."

Re:Easy money to be made? (1)

Korin43 (881732) | more than 4 years ago | (#30467978)

What's so hard about 6 graphics cards and 4 power supplies? It's not like you have to hook them up differently. The only hard part would be finding a case they fit in.

Re:Easy money to be made? (1)

CityZen (464761) | more than 4 years ago | (#30468822)

There were several difficulties. The most obvious is that they fit 7 double-wide cards into 7 single-wide slots. The next was that the motherboard BIOS crashes when more than 5(?) boards are installed. The next was that in order to allocate enough I/O space, all unnecessary devices had to be disabled, and even then the Linux kernel needed to be hacked to reduce the space allocated to various resources. After all that, it was a piece of cake.

Re:Easy money to be made? (1, Interesting)

Hatta (162192) | more than 4 years ago | (#30466302)

Oh, I read that wrong, it's 7 graphics cards. Who makes such a motherboard?

Re:Easy money to be made? (4, Informative)

Chirs (87576) | more than 4 years ago | (#30466356)

Um...read the article?

The motherboard is an ASUS P6T7 WS SuperComputer.

Re:Easy money to be made? (1)

petermgreen (876956) | more than 4 years ago | (#30466656)

umm where in TFA does it say that?!

Re:Easy money to be made? (1)

jo_ham (604554) | more than 4 years ago | (#30467424)

In the huge bullet point list, in bold, by product code, with a further text explanation for each piece.

It's halfway down the page underneath the photograph of the machine and the bold face, all caps title "FASTRA II".

Do you need a screenshot also?

Re:Easy money to be made? (1)

hairyfeet (841228) | more than 4 years ago | (#30467428)

Yeah and it is surprisingly cheap [google.com] for a board that crazy powerful at $400. I bet we'll see more colleges cooking up their own supercomputers for specialized tasks with the price THAT low.

Re:Easy money to be made? (1)

skirtsteak_asshat (1622625) | more than 4 years ago | (#30466496)

> Furthermore, the researchers believe the performance benefit will be even greater once they solve the remaining stability problems...

Hah. Hope they can write BIOS code from scratch... can you imagine trying to get mobo vendor support?

Re:Easy money to be made? (1)

Nutria (679911) | more than 4 years ago | (#30467294)

Hah. Hope they can write BIOS code from scratch... can you imagine trying to get mobo vendor support?

Yet another RTFA (or, in this case, WTFV).

Re:Easy money to be made? (1)

citizenr (871508) | more than 4 years ago | (#30466634)

Where do you get a motherboard that can accept 5 graphics cards?

msi 890FX-GD70 6x PCIE 2.0 x16

GPU accuracy (1)

tbischel (862773) | more than 4 years ago | (#30466494)

It used to be that GPUs would sacrifice accuracy for speed in floating point calculations, making them unsuitable for scientific computing. Is this still the case?

Re:GPU accuracy (5, Informative)

kpesler (982707) | more than 4 years ago | (#30466606)

Presently the G200 GPUs in this machine support double-precision, but at about 1/8 the peak rate of single-precision. In practice, since most codes tend to be bandwidth limited, and pointer arithmetic is the same for single and double precision, double-precision performance is usually closer to 1/2 that of single-precision performance, but not always. With the Fermi GPUs to be released early next year, double-precision peak FLOPS will be 1/2 of single-precision peak, just like on present X86 processors. Also note that many scientific research groups, such as my own, have found that contrary to dogma, single-precision is good enough for most of the computation, and that a judicious mix of single and double-precision arithmetic gives high-performance with sufficient accuracy. This is true for some, but not all, computational methods.
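
To make that "judicious mix" concrete, here is a minimal CUDA sketch (my own illustration, not code from the FASTRA project or the poster's group): the bulk multiplies stay in single precision while the accumulation runs in double.

    #include <cstdio>
    #include <vector>
    #include <cuda_runtime.h>

    // Bulk arithmetic in single precision, accumulation in double precision.
    __global__ void dot_mixed(const float* x, const float* y, double* partial, int n) {
        int tid    = blockIdx.x * blockDim.x + threadIdx.x;
        int stride = gridDim.x * blockDim.x;
        double acc = 0.0;                          // double-precision accumulator
        for (int i = tid; i < n; i += stride)
            acc += (double)(x[i] * y[i]);          // single-precision multiply, double-precision add
        partial[tid] = acc;
    }

    int main() {
        const int n = 1 << 20, threads = 256, blocks = 64;
        std::vector<float> hx(n, 1.0f), hy(n, 2.0f);
        float *dx, *dy; double *dp;
        cudaMalloc(&dx, n * sizeof(float));
        cudaMalloc(&dy, n * sizeof(float));
        cudaMalloc(&dp, threads * blocks * sizeof(double));
        cudaMemcpy(dx, hx.data(), n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy.data(), n * sizeof(float), cudaMemcpyHostToDevice);

        dot_mixed<<<blocks, threads>>>(dx, dy, dp, n);

        std::vector<double> hp(threads * blocks);
        cudaMemcpy(hp.data(), dp, hp.size() * sizeof(double), cudaMemcpyDeviceToHost);
        double sum = 0.0;
        for (double p : hp) sum += p;              // final reduction on the host
        printf("dot = %.1f (expected %.1f)\n", sum, 2.0 * n);

        cudaFree(dx); cudaFree(dy); cudaFree(dp);
        return 0;
    }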

Re:GPU accuracy (2, Interesting)

hairyfeet (841228) | more than 4 years ago | (#30469142)

Question: Since you seem to be pretty knowledgeable on the subject, have you or any of your colleagues tried the AMD Stream SDK [gpgpu.org]? Those ATi 5870s look to be pretty scary as far as raw power goes, and since the AMD SDK supports OpenCL on both the CPU and GPU, and AMD has opened up their code as well as supporting both Windows and Linux 32/64-bit, I was just curious whether you or anyone else here has tried it?

Re:GPU accuracy (3, Interesting)

Beardo the Bearded (321478) | more than 4 years ago | (#30466752)

First, a gaming card is going to get fast firmware. A workstation card is going to get accurate firmware. I imagine that supercomputer cards would get specialized firmware. (I only skimmed the summary.)

GPUs are excellent at solving certain types of problems, and they particularly excel at matrix operations. (That's what your video card is doing while it's rendering.) The best part is that most, if not all, mathematical problems can be expressed in matrix form, meaning that your super-fast GPU can solve most math problems super-fast.

Next, GPUs love working together since they don't care about what the OS is doing. All they do is take raw data and respond with an answer. Usually we're putting that answer onto the display, since otherwise wtf are we doing with a GPU? In this case, the results are returned instead of using the flashy display. So what you end up with is a set of really fast, specialized, parallel engines solving broken down matrices.

They're also not subject to the marketing whims of Moore's Law, so you can often get faster cards sooner than faster CPUs. To break down a supercomputer so that you get this kind of performance for 4000 EURO is a fantastic achievement. It's almost, but not quite, hobby range. (I'd still put money on someone trying to evolve this into a gaming rig...)
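
As a concrete (hypothetical) example of the kind of data-parallel linear algebra the parent describes, a dense matrix-vector product in CUDA maps one thread to each output row:

    // One thread per output row of y = A*x (illustrative sketch only).
    __global__ void matvec(const float* A, const float* x, float* y, int rows, int cols) {
        int r = blockIdx.x * blockDim.x + threadIdx.x;
        if (r < rows) {
            float acc = 0.0f;
            for (int c = 0; c < cols; ++c)
                acc += A[r * cols + c] * x[c];     // dot row r of A with x
            y[r] = acc;
        }
    }
    // Launch with e.g.: matvec<<<(rows + 255) / 256, 256>>>(dA, dx, dy, rows, cols);

Every row is independent, which is exactly why the card can chew through the whole matrix in parallel.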

double precision needed for matrices (1)

peter303 (12292) | more than 4 years ago | (#30466974)

Careful about equating tessellation processing with matrices. Many matrix operations require N^3 or more operations, and the matrices may be close to singular (ill-conditioned). Single precision is poor for both.

Awesome (5, Funny)

enderjsv (1128541) | more than 4 years ago | (#30466190)

Almost meets the minimum requirements for Crysis 2

More Awesome (3, Funny)

copponex (13876) | more than 4 years ago | (#30466260)

This was post #2 and already modded -1, Redundant.

Re:More Awesome (1)

sadness203 (1539377) | more than 4 years ago | (#30466332)

Must be redundant in the long run.
This is sad, since this one was clever.

Re:More Awesome (3, Funny)

joocemann (1273720) | more than 4 years ago | (#30466454)

Slashdot mods are often, as I observe, sour and pissy skeptics. Even if it is humorous to them, they will knock it for lack of something else to bash.

Re:More Awesome (2, Funny)

joocemann (1273720) | more than 4 years ago | (#30466888)

Slashdot mods are often, as I observe, sour and pissy skeptics. Even if it is humorous to them, they will knock it for lack of something else to bash.

-1 troll

lol. exactly

humor on /. (1)

snooo53 (663796) | more than 4 years ago | (#30468646)

Lately on Slashdot, I've found I agree with them: highly moderated humorous posts seem to far outnumber the interesting ones. I've actually ratcheted all funny comments down to -4 or -5, and I browse at 2, to catch the more interesting discussions which get passed over. But I've never seen any reason to moderate them down now that we have that control when logged in... I dunno, maybe others think that people who come here looking for facetious comments should have to browse at +5 Funny instead of us sourpusses :)

Re:More Awesome (1)

mgblst (80109) | more than 4 years ago | (#30468876)

Oh yeah, the fact that we get exactly the same comment every time a fast computer + GPU is mentioned shouldn't stop the next moron from posting it.

Too right it's redundant (1)

BertieBaggio (944287) | more than 4 years ago | (#30469198)

It's redundant because some smartass mentions Crysis in response to *every fucking article* about someone doing something using powerful GPUs*.

Of course, if it was about CPUs, the post would be about what will be needed to run Windows 8, or 'finally meeting the minimum system requirements for Vista'.

Mostly, you can predict these posts from the title of the article. Doesn't stop crotchety people like me coming to complain about it though...

* Footnote: When someone equally crotchety complained about this before, a poster made the good point that Crysis draws this derision as it *still* taxes high-end systems. Maybe it's because CryEngine2 is bloated and inefficient, maybe it's because it tries to do too much. All I know is we keep getting these inane posts.

Re:Too right it's redundant (1)

BertieBaggio (944287) | more than 4 years ago | (#30469228)

Sorry to reply to myself, but I've just noticed two comments:

mgblst's [slashdot.org], which says the same thing as mine more succinctly and bluntly. Embarrassingly, it was in the same god-damn thread.

Further down, we have this comment [slashdot.org] by RandomUsr, who actually does mention Vista. Woo! In fact, he (and the person that responded to him) also mentions antivirus software. Never mind that this is a GPGPU system, just post crap about *something* bloated and wait for the '+1 Funny' mods to roll in.

Gods, reading these two posts made me realise that I need to stop reading and posting to Slashdot when it's late and I'm in a bad mood and feeling misanthropic. *grumbles*

Re:Awesome (4, Funny)

sadness203 (1539377) | more than 4 years ago | (#30466390)

Only if you imagine a Beowulf cluster of these.
Here goes the redundant and offtopic mod.

+1 (1)

toby (759) | more than 4 years ago | (#30469374)

Au contraire, I clicked the article link JUST to find this comment. Thank you for maintaining a cherished /. tradition!

That's nothing (0, Offtopic)

For a Free Internet (1594621) | more than 4 years ago | (#30466326)

I can put 52 jellybeans in my mouth and sing "La Marseillaise."

News Flash (2, Funny)

RandomUsr (985972) | more than 4 years ago | (#30466392)

Blazing Fast Pron Machine running Windows Vista. Don't forget to pick up a copy of the latest memory-intensive anti-virus, as this machine will handle it just fine.

It really runs Linux (0)

Anonymous Coward | more than 4 years ago | (#30469458)

Nice try at humor. But as almost always with these types of multi-processor machines, it runs Linux [ua.ac.be].

How fast is this really? (3, Insightful)

Ziekheid (1427027) | more than 4 years ago | (#30466422)

"the compact FASTRA II is four times faster than the university's supercomputer cluster, while consuming 300 times less power" And the original supercomputer was how fast? 512 cores doesn't say THAT much. I could compare my computer to supercomputers from the past and they'd say the performance of my system was amazing too.

Re:How fast is this really? (5, Informative)

jandrese (485) | more than 4 years ago | (#30466500)

If you read the article it tells you that the supercomputer has 256 Opteron 250s (2.4GHz) and was built 3 years ago. If you have a parallelizable problem that can be solved with CUDA, you can get absolutely incredible performance out of off-the-shelf GPUs these days.
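
For illustration, the canonical "parallelizable problem" in CUDA is something like SAXPY, where every element is independent (a generic sketch, unrelated to the tomography code):

    // y = a*x + y, one element per thread; every element is independent,
    // which is exactly what "parallelizable" means here.
    __global__ void saxpy(float a, const float* x, float* y, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }
    // Launch with e.g.: saxpy<<<(n + 255) / 256, 256>>>(2.0f, dx, dy, n);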

Re:How fast is this really? (2, Interesting)

Ziekheid (1427027) | more than 4 years ago | (#30466560)

I'll admit that; thanks for the info. You'd think this was crucial information for the summary, though. Putting everything in perspective, it will only outperform the cluster on specific calculations, so overall it's not faster, right?

Re:How fast is this really? (2, Interesting)

raftpeople (844215) | more than 4 years ago | (#30466688)

It's all a continuum and depends on the problem. For problems with enough parallelism that the GPUs are a good choice, they are faster. For a completely serial problem, the current fastest single core is faster than both the supercomputer and the GPUs.

Re:How fast is this really? (2, Informative)

jstults (1406161) | more than 4 years ago | (#30466846)

you can get absolutely incredible performance out of off-of-the-shelf GPUs these days.

I had heard this from folks, but didn't really buy it until I read this paper [nasa.gov] today. They get a speed-up (wall clock) using the GPU even though they have to go to a worse algorithm (Jacobi instead of SSOR). Pretty amazing.
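
A rough sketch of why Jacobi suits a GPU while Gauss-Seidel/SSOR does not (an illustrative 1-D stencil only, not the scheme from the paper): each new value reads only the previous iterate, so every grid point can update in parallel.

    // One Jacobi sweep for u'' = f on a 1-D grid. Each update reads only u_old,
    // so all interior points are independent (data parallel). Gauss-Seidel/SSOR
    // would need the just-updated u_new[i-1], serializing the sweep.
    __global__ void jacobi_step(const float* u_old, float* u_new, const float* f,
                                float h2, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i > 0 && i < n - 1)
            u_new[i] = 0.5f * (u_old[i - 1] + u_old[i + 1] - h2 * f[i]);
    }
    // Host loop: launch, swap the u_old/u_new pointers, repeat until the residual is small.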

Re:How fast is this really? (1)

cheesybagel (670288) | more than 4 years ago | (#30467202)

At least a CPU program, when it crashes, does not bring down the whole OS. Memory protection? Pah, who needs such things... After all you never make coding mistakes. Right?

It is like MS-DOS programming all over again. Except the computer takes longer to reboot.

They use an algorithm with worse algorithmic complexity in the paper because it actually performs better on the GPU than the other one. This happens on CPUs in several cases as well. When was the last time you saw someone using a Fibonacci heap? Memory footprint matters, and taking advantage of the CPU caches matters. The paper also says nothing about CPU SIMD optimizations, which can make a program 3x faster if applied. That would make the performance the same as for the GPU system. Note that I am being generous here and actually ignoring the program setup time when they need to copy the data to the GPU. Because if I did not, the pure CPU version would probably actually be faster.

Re:How fast is this really? (1)

jstults (1406161) | more than 4 years ago | (#30467354)

Well, I'm not sure about most of your criticisms, but they use Jacobi instead of Gauss-Seidel because SSOR is not data parallel, but Jacobi is.

That would make the performance the same as for the GPU system.

Really? Care to share any results that support that? I'm quite sure the peak flops you can achieve on the GPU are much higher than the limited SIMD capability of the CPU.

Note that I am being generous here and actually ignoring the program setup time when they need to copy the data to the GPU.

Sure, there's communications overhead, but that's true of any parallel processing problem; the trick is to find problems that have a big computation-to-communication ratio (which happens to be most of computational physics, and these tomographic reconstruction problems that TFA mentions as well).

Re:How fast is this really? (2, Informative)

cheesybagel (670288) | more than 4 years ago | (#30467514)

Really? Care to share any results that support that? I'm quite sure the peak flops you can achieve on the GPU are much higher than the limited SIMD capability of the CPU.

IIRC they claim 2.5-3x more performance using a Tesla than using the CPUs in their workstation, ignoring load time.

SSE enables a theoretical peak performance improvement of 4x for SIMD-amenable code (e.g. you can do 4 parallel adds using vector SSE in the time it takes to do 1 add using scalar SSE). In practice, however, you usually get something like 3x more performance.
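
A small host-side illustration of that 4-wide claim (plain C++ intrinsics, assuming SSE is available, and also valid as CUDA host code; not from the paper being discussed): one _mm_add_ps performs four float additions per instruction.

    #include <immintrin.h>

    // Add two float arrays: the vector body does 4 adds per instruction,
    // the scalar tail handles any leftover elements.
    void add_arrays(const float* a, const float* b, float* c, int n) {
        int i = 0;
        for (; i + 4 <= n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);           // load 4 floats
            __m128 vb = _mm_loadu_ps(b + i);
            _mm_storeu_ps(c + i, _mm_add_ps(va, vb));  // 4 adds in one instruction
        }
        for (; i < n; ++i) c[i] = a[i] + b[i];         // scalar tail
    }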

Theoretical SIMD performance for the GPU is very fine and nice, but in practice the small caches in current GPUs limit performance. CPUs also often have out-of-order execution support and other hardware which is too expensive in terms of transistors to implement in a GPU.

IMO the main problem here is that the programming model for the CPU is too complex since you need to use several different ways to express parallelism (SIMD/Multicore/Cluster) to get top performance.

That's why I have a problem with the comparisons (3, Informative)

Sycraft-fu (314770) | more than 4 years ago | (#30467434)

Because it only applies to the kind of problems that CUDA is good at solving. Now while there are plenty of those, there are plenty that it isn't good for. Take a problem that is all 64-bit integer math and has a branch every couple hundred instructions and GPUs will do for crap on it. However a supercomputer with general purpose CPUs will do as well on it as basically anything else.

That's why I find these comparisons stupid. "Oh this is so much faster than our supercomputer!" No it isn't. It is so much faster for some things. Now if you are doing those things, wonderful, please use GPUs. However, don't then try to pretend you have a "supercomputer in a desktop." You don't. You have a specialized computer with a bunch of single-precision stream processors. That's great so long as your problem is 32-bit fp, highly parallel, doesn't branch much, and fits within the memory on a GPU. However, not all problems are, hence they are NOT a general replacement for a supercomputer.

Re:That's why I have a problem with the comparison (1)

jstults (1406161) | more than 4 years ago | (#30467882)

Take a problem that is all 64-bit integer math and has a branch every couple hundred instructions and GPUs will do for crap on it.

So would a Cray; supercomputers and GPUs are made for the same sorts of problems (exploiting data parallelism). Now if by 'supercomputer' you mean 'a cluster of commodity hardware', then ok, you've got a point, that heap of cpus will handle branches plenty fast.

Re:That's why I have a problem with the comparison (1)

wagnerrp (1305589) | more than 4 years ago | (#30468408)

Except that 'supercomputer' and 'cluster of commodity hardware' are effectively synonymous these days. They all use the same Power/Xeon/Opteron/Itanium chips, with several cores and several GB of memory per compute node. The only real difference left is the interconnect. Commercially built systems tend to have far beefier and more complex interconnects. Homebrew systems more often than not just use gigabit Ethernet, with the larger ones rarely using anything better than a 'fat tree' with channel bonding or 10 Gbps Ethernet.

Re:That's why I have a problem with the comparison (1)

Retric (704075) | more than 4 years ago | (#30468658)

There are also a fair number of Cell-based supercomputers, and even one hybrid, out there. And even some pure custom solutions used by the NSA. (There is a reason they have their own chip fab.) And if you include Folding@home-type applications, then GPUs represent a reasonable percentage of the world's supercomputing infrastructure.

Re:That's why I have a problem with the comparison (4, Insightful)

timeOday (582209) | more than 4 years ago | (#30469260)

Take a problem that is all 64-bit integer math and has a branch every couple hundred instructions and GPUs will do for crap on it. However a supercomputer with general purpose CPUs will do as well on it as basically anything else.

That was always true of supercomputers. In fact the stuff that runs well on CUDA now is almost precisely the same stuff that ran well on Cray vector machines - the classic stereotype of "Supercomputer"! Thus I do not see your point. The best computer for any particular task will always be one specialized for that task, and thus compromised for other tasks.

BTW, newer GPUs support double precision [herikstad.net].

Re:That's why I have a problem with the comparison (1, Insightful)

Anonymous Coward | more than 4 years ago | (#30469584)

E X A C T L Y ! ! ! I always read about how fast the Cell Broadband Processor(tm) is and how anyone is a FOOL for not using it. No. They suck hard when it comes to branch prediction. Their memory access is limited to fast, but very small, memory. Out-of-branch execution performance is awful. You have to rewrite code massively to avoid it. For embarrassingly parallel problems, they are a dream. For problems that are not parallel, they are quite slow. An old supercomputer isn't as fast as a new one.

If ordinary processors, especially multi-core ones, had two or four stream processors for every core, parallel operations would be much faster too; the processors themselves would be faster, and it's likely one of the improvements being looked at by Intel and AMD (and others). Something like this would make general purpose processors much more like the Cell Broadband Engine(tm), and would make them somewhat obsolete.

Certainly the Cell processor suffers from only being able to deal with problems that fit in 256 MB of memory (the Cell BE uses proprietary memory, very fast, but only available up to 256 MB; no one else makes this kind of memory, and they don't make chip sizes bigger than what winds up being 256 MB). GPUs are limited by memory size too (although 1 GB is bigger than 256 MB), but they still suffer all the problems of a specialty processor. If you can use it, great. I can't get any performance boost out of them, because my programs have out-of-order branches, and I get better performance from a general purpose CPU.

Re:That's why I have a problem with the comparison (1)

mcrbids (148650) | more than 4 years ago | (#30469718)

That's why I find these comparisons stupid. "Oh this is so much faster than our supercomputer!" No it isn't. It is so much faster for some things. Now if you are doing those things, wonderful, please use GPUs. However, don't then try to pretend you have a "supercomputer in a desktop." You don't. You have a specialized computer with a bunch of single-precision stream processors. That's great so long as your problem is 32-bit fp, highly parallel, doesn't branch much, and fits within the memory on a GPU. However, not all problems are, hence they are NOT a general replacement for a supercomputer.

For that matter, which is faster: a two-ton flatbed truck, or a Maserati? Kinda depends on what you are trying to do, doesn't it? Want to move 3,000 pounds of hay? You probably DON'T want the Maserati!

And all machines are like this. Some machines are better at some tasks than others. And presumably, the comparison to the university supercomputer was made because of a task that they *needed* to perform, and the pittance cost of the GPGPU-based supercomputer compared very well against the cost of leasing university supercomputer time.

Even different people are better at some things than others.... Some people are better at maths than others. Some people can take a bit of vinegar and coffee grounds and make an artistic masterpiece.

Because I'm a jogger, I can run long distances faster than most people. But I suck at sprints, and I take long showers. I type over 100 WPM.

See?

times less (4, Funny)

Tubal-Cain (1289912) | more than 4 years ago | (#30466510)

...consuming 300 times less power.

*sigh*

Re:times less (1)

RandomUsr (985972) | more than 4 years ago | (#30466628)

Cost of seeing your Boss' face when he realizes how much you save the company on the new spam platform? Priceless. Oops, that's not a happy face!

Re:times less (4, Insightful)

timeOday (582209) | more than 4 years ago | (#30469290)

Can we please just officially define "n times less" as "1/n" and not feel bad about it anymore?

Not sure how fast it is, but I know it is hot... (2, Interesting)

(H)elix1 (231155) | more than 4 years ago | (#30466624)

I've got a pair of 9800gx2 in my rig. The cards turn room temperature air into ~46C air. Without proper ventilation, these things will turn a chassis into an easy bake oven.

For those not familiar with the 9800gx2 cards, it is essentially two 8800gts video cards linked together to act as a single card - something called SLI on the NVidia side of marketing. SLI typically required a mainboard/chipset that would allow you to plug in two cards and link them together. This model allowed any mainboard to have two 'internal' cards linked together, with the option of linking in another 9800gx2 if your board actually supported SLI.

The pictures did not show any SLI bridge, so it looks like they are just taking advantage of multiple GPUs per card.

Re:Not sure how fast it is, but I know it is hot.. (1)

The Archon V2.0 (782634) | more than 4 years ago | (#30467144)

The pictures did not show any SLI bridge, so it looks like they are just taking advantage of multiple GPUs per card.

There's no seven-way SLI anyway. Since the GPUs are being used for processing and not graphics, there's no need for them to work together via SLI or Crossfire or what have you as long as the OS and programs treat 'em like any other multiprocessor setup.

Re:Not sure how fast it is, but I know it is hot.. (0)

Anonymous Coward | more than 4 years ago | (#30468746)

Not only is there no seven-way SLI, it tends to work poorly with CUDA applications no matter what sort of SLI you're using. Before running any BOINC CUDA apps, SLI needs to be disabled or the app only sees "1" GPU.
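
For what it's worth, a CUDA program just enumerates whatever devices the driver exposes and targets each one explicitly, no SLI involved. A quick sanity check (my own sketch) looks like this; with SLI enabled the reported count can drop:

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaGetDeviceCount(&count);            // with SLI enabled, fewer devices may be reported
        printf("CUDA sees %d device(s)\n", count);
        for (int d = 0; d < count; ++d) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, d);
            printf("  device %d: %s, %zu MB\n", d, prop.name, prop.totalGlobalMem >> 20);
            // cudaSetDevice(d) here, then launch this GPU's share of the work
        }
        return 0;
    }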

Re:Not sure how fast it is, but I know it is hot.. (2, Funny)

thedarknite (1031380) | more than 4 years ago | (#30467572)

I've got a pair of 9800gx2 in my rig. The cards turn room temperature air into ~46C air. Without proper ventilation, these things will turn a chassis into an easy bake oven.

That's a brilliant idea, now people can make snacks without ever leaving the computer.

Re:Not sure how fast it is, but I know it is hot.. (0)

Anonymous Coward | more than 4 years ago | (#30468990)

I don't think anyone on /. actually "leaves" their computer...

Re:Not sure how fast it is, but I know it is hot.. (0)

Anonymous Coward | more than 4 years ago | (#30470106)

I also have two GX2s in a box for use with CUDA programming. (You don't use SLI with CUDA, in fact it's a disadvantage in that if you do you'll only be able to actually use one of the GPUs, so you won't see SLI bridges in a CUDA box.) The power consumption of the 9800gx2s is indeed fearsome even at idle. I measured it, but don't have the numbers on hand. BUT: Newer nvidia cards apparently use *much* less power at idle, and probably less at full blast as well (like a 45nm CPU vs. a 90nm CPU at the same GHz will use less power for the exact same work).

Since I only need one GX2 to test most programs, I keep the power unplugged to the second one most of the time to keep from wasting so much energy (and producing so much heat).

Yeah but... (0, Redundant)

definate (876684) | more than 4 years ago | (#30466644)

Can it play Crysis with a high frame rate on maximum?

Re:Yeah but... (0)

Anonymous Coward | more than 4 years ago | (#30466786)

No. At least until Crysis runs on BeOS they get no gaming. And they are not using SLI at all.

Re:Yeah but... (0)

Anonymous Coward | more than 4 years ago | (#30466816)

whoosh

Re:Yeah but... (0)

Anonymous Coward | more than 4 years ago | (#30466822)

Does it run Linux?

Silly (1)

jpmorgan (517966) | more than 4 years ago | (#30466902)

This isn't a huge achievement. Nobody else has done it because it's silly.

There are two major reasons... the first is they use GeForce cards. That's not a good idea, since GeForces are held to much lower quality standards than Teslas and Quadros. They're intended for gaming graphics, where a minor error here or there isn't the end of the world. "Sorry we missed your cancer, since our supercomputer miscalculated that region of the reconstruction." The second problem is, that's one bandwidth starved machine. It's based on a pretty nice motherboard, but with 13 GPUs that's not a lot of bandwidth to go around.

The more popular layout for a GPU supercomputer of that size is a small cluster of 2-GPU blades, with a hypertransport interconnect. It's a little bit trickier to work with, but there are fewer bottlenecks.

Re:Silly (2, Informative)

modemboy (233342) | more than 4 years ago | (#30466998)

The difference between GeForce and Quadro cards is almost always completely driver-based; it is the exact same hardware, different software.
This is basically a roll-your-own Tesla, and considering the Teslas connect to the host system via an 8x or 16x PCI-e add-in card, I'm gonna say you are wrong when it comes to the bandwidth issue as well...

Re:Silly (2, Informative)

jpmorgan (517966) | more than 4 years ago | (#30467536)

The hardware is the same, but the quality control is different. Teslas and Quadros are held to rigorous standards. GeForces have an acceptable error rate. That's fine for gaming, but falls flat in scientific computing.

Re:Silly (1)

DeKO (671377) | more than 4 years ago | (#30468490)

Uh... no, you are wrong. Quadros and GeForces have a lot of differences in the internal hardware. Just because they "do the same thing" (they draw triangles really, really fast) doesn't mean they are the same. GeForces, for example, don't have optimizations for drawing points and lines, nor do they assume you are abusing obsolete APIs like immediate-mode drawing; both are common in CAD applications and almost useless in games.

Re:Silly (1)

jpmorgan (517966) | more than 4 years ago | (#30468626)

No, the chips are almost exactly the same (except Quadros have 100% unbroken chips). You're thinking of driver differences.

Re:Silly (1)

Khyber (864651) | more than 4 years ago | (#30469178)

There is NO difference between Quadro and GeForce besides the GeForce basically being a laser-locked defective Quadro with different firmware.

In fact, you can flash most GeForce cards with the equivalent Quadro firmware and in some applications (not gaming) get better performance.

Been tooling around with nVidia cards since NV4. They've pretty much used this same strategy for the past decade+.

Re:Silly (2, Insightful)

CityZen (464761) | more than 4 years ago | (#30467706)

It's not silly: (1) this is a research project, not production medical equipment, meaning that the funds to buy Tesla cards were probably not available, and they aren't particularly worried about occasional bit errors. (2) Their particular application doesn't need much inter-GPU communication, if any, so that bandwidth is not an issue. They just need for each GPU to load datasets, chew on them, and spit out the results.

How much does your proposed GPU supercomputer cost for 13 GPUs?

Can't be too impressed: Folding@home guys did more. (1)

blind biker (1066130) | more than 4 years ago | (#30466972)

Folding@home enthusiasts and academic contributors did more than that, and a long time ago, too. Just check this thread at foldingforums [foldingforum.org] for one example.

Re:Can't be to impressed: Folding@home guys did mo (1)

CityZen (464761) | more than 4 years ago | (#30467656)

Did more what, exactly? None of the Folding setups listed have more than 4 GPU cards per motherboard.

Naming Scheme (1)

lymond01 (314120) | more than 4 years ago | (#30467322)

Wouldn't it be nice if the FASTRA II, which is 3.75 times faster than the FASTRA I, were actually called the FASTRA 375? Then I wouldn't have to ask.

Re:Naming Scheme (1)

slew (2918) | more than 4 years ago | (#30469768)

If it's really 3.75 times faster maybe they could call it the FASTRA System 360 Model 96 (or the Fastra 360/96) for short ;^)

but does it run.. (0)

Anonymous Coward | more than 4 years ago | (#30467690)

..hyper linux

Generic statements FAIL! (1)

Hurricane78 (562437) | more than 4 years ago | (#30467786)

it was slightly faster than the university's 512-core supercomputer and cost less than 4000EUR.

but tests reveal the 13 GPUs deliver 3.75x more performance than the old system.

It is impossible to make such general statements about performance for something that is still very much specialized for long pipelines and streams of repetitive data (vector processing).

They may be much faster for tasks that fit that scheme. But slower for those that don’t.

Re:Generic statements FAIL! (1)

ceoyoyo (59147) | more than 4 years ago | (#30469320)

The performance of a standard cluster, or even a SIMD machine, will vary tremendously depending on your application as well. The only reasonable way is to pick a problem and compare performance on that problem.

They just forgot a phrase at the end of that statement: "it was slightly faster than the university's 512-core supercomputer... in this application."

Why it's 13, not 14 GPUs (2, Interesting)

CityZen (464761) | more than 4 years ago | (#30467826)

Apparently, the regular BIOS can't boot with more than 5? graphics cards installed due to the amount of resources (memory & I/O space) that each one requires. So the researchers asked ASUS to make a special BIOS for them which doesn't set up the graphics card resources. However, the BIOS still needs to initialize at least one video card, so they agreed that the boot video card would be the one with only a single GPU. Presumably, they could have also chosen a dual GPU card that happened to be different from the others in some way.

Cramped cases... (1)

Dusthead Jr. (937949) | more than 4 years ago | (#30468968)

Maybe there's a really good reason for it that I'm not fully aware of, but why are PC cases, motherboards, add-on cards, etc. all seemingly designed around such limited amounts of space? Is there such a thing as a PC case the size of a mini-fridge or bigger? A motherboard with freaking 10 or 12 slots with enough space between them? A video card the size of a motherboard? Anything but a cramped little box with limited expansion? Is that such a bizarre thing to want?

Re:Cramped cases... (1)

CityZen (464761) | more than 4 years ago | (#30469262)

It's known as "market forces". In case you haven't noticed, the computing needs of most people can be crammed into something the size of a paperback book or so. Larger computing devices are available, but the bigger you go, the smaller the market, and thus the larger the price. If you want something big, you might take look at a computer named "Jaguar". It has a big price, too.

As far as personal computers go, they tend to be designed around CPU strengths & limitations. Intel and AMD have figured out that the most efficient way to increase computing power is to put more and more processing power into a single chip, and have systems designed around a single CPU chip, as opposed to systems that put multiple CPU chips on the motherboard. Because of this approach, it became unnecessary to build systems larger than your typical ATX desktop.

If you needed more computing power than that, your best bet was to get multiple machines. Indeed, you can fill refrigerator-sized racks full of ATX (or other form factor) motherboards. For instance, check out: http://www.cse.illinois.edu/turing/Images/FrontView.html [illinois.edu]

Only recently have GPUs become recognized as an efficient way of adding lots of computing power to a desktop machine. As evidenced by the motherboard that made Fastra II possible, hardware is slowly becoming available to embrace this new computing paradigm. Perhaps in a few years, you'll get your 12-double-wide-slot motherboard and you'll be able to populate it with GeForce 28000 boards. But more than likely, it still won't be cheap, since few people seem to need this kind of performance.

Re:Cramped cases... (1)

CityZen (464761) | more than 4 years ago | (#30469304)

Oh, and by the way, I'm wondering quite the opposite: why do we still see so many over-sized full ATX size cases being offered, when microATX motherboards have everything we (most of us) need? Indeed, even mini-ITX motherboards are often adequate for so many needs, and yet mini-ITX cases still seem to command a premium because they are relatively rare. It's easy (and boring) to design a big rectangular ATX box. It's an engineering challenge to make a good-looking small box that does everything you need and is still practical to work with.
