Supercomputer Built With 8 GPUs

kdawson posted more than 6 years ago | from the let-the-games-begin dept.

Supercomputing 232

FnH writes "Researchers at the University of Antwerp in Belgium have created a new supercomputer with standard gaming hardware. The system uses four NVIDIA GeForce 9800 GX2 graphics cards, costs less than €4,000 to build, and delivers roughly the same performance as a supercomputer cluster consisting of hundreds of PCs. The new system is used by the ASTRA research group, part of the Vision Lab of the University of Antwerp, to develop new computational methods for tomography. The guys explain that the eight NVIDIA GPUs deliver the same performance for their work as more than 300 Intel Core 2 Duo 2.4GHz processors. On a normal desktop PC their tomography tasks would take several weeks, but on this NVIDIA-based supercomputer they take only a couple of hours. The NVIDIA graphics cards do the job very efficiently and consume a lot less power than a supercomputer cluster."

By what benchmark? (1, Redundant)

morgan_greywolf (835522) | more than 6 years ago | (#23610769)

By what benchmark is eight of the NVIDIA GPUs in the 9800 GX2 more powerful than 300 2.4 GHz C2Ds?

Re:By what benchmark? (5, Informative)

Anonymous Coward | more than 6 years ago | (#23610793)

By what benchmark is eight of the NVIDIA GPUs in the 9800 GX2 more powerful than 300 2.4 GHz C2Ds?
Looking at TFS, the benchmark is their own tomography code taking a couple of hours instead of weeks.

Re:By what benchmark? (0)

Anonymous Coward | more than 6 years ago | (#23611209)

TFS? I don't even read the fine title!

Re:By what benchmark? (5, Interesting)

cheier (790875) | more than 6 years ago | (#23611545)

Too bad this isn't really news. I guess it is news if you consider that someone else has had their application accelerated by NVIDIA GPUs. I guess the only other reason this could be news is by virtue of having 8 GPU cores.

Unfortunately, this setup won't work ideally for a lot of other CUDA-based applications. For the past 6 months, I've had a system with 6 GPUs (actual physical GPUs). This is the system that I showed at CES [ocia.net]. We are easily able to do 8 physical GPUs, and now I've been solely focused on utilizing Tesla.

Given that NVIDIA released the GX2 series, I was not surprised that someone would announce an 8-GPU system. I'm surprised it took this long for someone to do it, and almost equally surprised that Slashdot took this long to publish any decent news in the realm of GPU supercomputing. I've been cranking out close to 228 billion atom evals. per second in VMD [uiuc.edu] for months now, versus about 4 billion on dual quad-core 3.0GHz Xeons.

Re:By what benchmark? (0)

Anonymous Coward | more than 6 years ago | (#23610799)

At least read the summary: "The guys explain the eight NVIDIA GPUs deliver the same performance for their work as more than 300 Intel Core 2 Duo 2.4GHz processors. "

Re:By what benchmark? (0)

Anonymous Coward | more than 6 years ago | (#23610835)

Tomographic reconstruction.

Re:By what benchmark? (4, Insightful)

cromar (1103585) | more than 6 years ago | (#23610839)

I am guessing it has something to do with floating point calculations vs. integer calculations, but if I read the article, this wouldn't be Slashdot, would it? Think about it. We have GPUs to perform vector maths, flops, etc. because the CPU is not all that great at that sort of thing typically. A general purpose CPU is not necessarily going to be the fastest if your problem domain is more suited to an "inferior" chip; general purpose CPUs are not designed to be the fastest chip in every situation.

Re:By what benchmark? (4, Informative)

77Punker (673758) | more than 6 years ago | (#23610985)

The GPUs are better at floating point than integer; if I remember correctly it takes 4 cycles on current GPUs to do a float operation, but 16 to do an int. No, I don't understand why.

Also, the "multiply" and "add" instructions exist as a combined "madd" opcode, which essentially doubles the theoretical floating point performance, even if you don't use "madd" very often.

Re:By what benchmark? (4, Informative)

Calinous (985536) | more than 6 years ago | (#23611471)

Because floating point operations go down a dedicated path, while integer operations do not have a dedicated integer-only path.
Also, it's possible that loading floating-point operands and storing results can be pipelined in actual code, while integer operations are not pipelined.
(and yes, I don't know what I'm talking about)

Limited Application (1)

MOBE2001 (263700) | more than 6 years ago | (#23611031)

It is obvious that, if a computer is using GPUs exclusively, it is limited to vector or data-parallel processing. And it is no surprise that it is being used by an outfit that specializes in visual processing, which is ideally suited to data-parallel processing. Change the benchmark program to code that has a lot of data dependencies and this "supercomputer" will slow to a crawl.
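
To make the data-dependency point concrete (a hedged illustration, not from TFA; function names are invented), compare an embarrassingly parallel loop with a serial recurrence. The second cannot simply be split across thousands of GPU threads, because each iteration needs the previous result:

```cpp
#include <vector>

// Data-parallel: every element is independent -> maps cleanly onto GPU threads.
void scale_all(std::vector<float> &v, float a) {
    for (std::size_t i = 0; i < v.size(); ++i)
        v[i] *= a;                       // no iteration depends on another
}

// Data-dependent: a first-order recurrence x[i] = a*x[i-1] + b[i].
// Each step needs the previous one, so a naive GPU port gains nothing from
// thousands of threads (a scan/prefix-style reformulation would be needed).
void recurrence(std::vector<float> &x, const std::vector<float> &b, float a) {
    for (std::size_t i = 1; i < x.size(); ++i)
        x[i] = a * x[i - 1] + b[i];
}
```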

Re:Limited Application (4, Informative)

Calinous (985536) | more than 6 years ago | (#23611543)

Even more: if you don't optimize the code specifically for the GPU-based supercomputer, your performance goes down the drain. I wouldn't be surprised if they obtained a speedup of an order of magnitude or more from aggressive code optimisation.
The idea is: the original code would run faster on an 8-Core2Duo machine than on the 8 GPUs. Further optimising the code will do little for the Core2Duos, due to limited memory bandwidth, FSB bandwidth, and so on.
Meanwhile, a pipelined system (load, compute, store) on the GPU benefits greatly from the huge memory bandwidth (50GB/s on current cards), the huge number of computation units (128 or more), and so on.
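
A rough sketch of that load/compute/store pipelining in modern CUDA terms (hypothetical kernel, names, and chunk sizes, not the Antwerp code): each chunk is copied in, processed, and copied back on its own stream, so transfers can overlap with computation:

```cuda
#include <cuda_runtime.h>

__global__ void process(float *d, int n) {          // placeholder "compute" stage
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] = d[i] * d[i] + 1.0f;
}

// Pipeline: for each chunk, H2D copy -> kernel -> D2H copy on its own stream,
// so chunk k+1 can be loading while chunk k is computing.
// Assumes n divides evenly into 'chunks'.
void run_pipelined(float *host, int n, int chunks) {
    int chunk = n / chunks;
    float *dev;
    cudaMalloc(&dev, n * sizeof(float));
    cudaStream_t *s = new cudaStream_t[chunks];
    for (int c = 0; c < chunks; ++c) cudaStreamCreate(&s[c]);

    for (int c = 0; c < chunks; ++c) {
        float *h = host + c * chunk;
        float *d = dev  + c * chunk;
        cudaMemcpyAsync(d, h, chunk * sizeof(float), cudaMemcpyHostToDevice, s[c]);
        process<<<(chunk + 255) / 256, 256, 0, s[c]>>>(d, chunk);
        cudaMemcpyAsync(h, d, chunk * sizeof(float), cudaMemcpyDeviceToHost, s[c]);
    }
    cudaDeviceSynchronize();

    for (int c = 0; c < chunks; ++c) cudaStreamDestroy(s[c]);
    delete[] s;
    cudaFree(dev);
}
```

Note that for the copies to actually overlap with the kernels, the host buffer would need to be pinned (cudaMallocHost); with plain pageable memory the transfers effectively serialize.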

Re:By what benchmark? (1)

Guillaume Castel (1002740) | more than 6 years ago | (#23611179)

Intel begs to differ. Not that I'm buying into their marketing.

Re:By what benchmark? (2, Insightful)

gumbi west (610122) | more than 6 years ago | (#23611215)

When you get into inverting matrices or doing matrix-vector multiplication, the algorithm parallelizes very easily, but I always wonder where the full matrices live. I.e. they could easily be tens of GB of matrix, so the CPU would seem to have to be heavily involved as well.
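
For what it's worth, the per-row independence is easy to see in a naive CUDA matrix-vector kernel (a hedged sketch with made-up names; real code would tile A and stream it in blocks when it doesn't fit in the card's memory, which is exactly the concern above):

```cuda
#include <cuda_runtime.h>

// y = A * x for a row-major m x n matrix A.
// Each thread owns one output row, so all m dot products run independently.
__global__ void matvec(const float *A, const float *x, float *y, int m, int n) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < m) {
        float acc = 0.0f;
        for (int j = 0; j < n; ++j)
            acc += A[row * n + j] * x[j];
        y[row] = acc;
    }
}
// If A is tens of GB it cannot live in a 2008-era card's fraction-of-a-GB of VRAM,
// so the host CPU has to stage row blocks in and out between kernel launches.
```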

Re:By what benchmark? (1)

kumar999 (1299337) | more than 6 years ago | (#23611271)

It is obvious that a GPU is going to do floating point calculations much faster than a CPU. But the question becomes: is this "supercomputer" only fast for the tests they designed it for, or is it fast for any application?

GPUs are built to do lots of floating point calculations, but is there such a huge demand for that on supercomputers running financial calculations, for instance?

I can understand it might be of some interest to the scientific community, especially in astronomy, where people wait for months to get access to computers fast enough to run their models.

Re:By what benchmark? (4, Insightful)

pablomme (1270790) | more than 6 years ago | (#23611319)

As far as I know, GPUs are amazingly fast at matrix operations and other things that allow vectorized evaluation. I guess these tomography applications must make massive use of these. After all, tomography is in essence image processing.

Re:By what benchmark? (0)

Anonymous Coward | more than 6 years ago | (#23610849)

Because it executes in 1/300th the time of a single 2.4GHz C2D?

It's entirely believable. GPUs have many more parallel execution paths than a Pentium chip. GPUs are designed to crunch numbers in a very particular way, and if you can fit your solution to the way a GPU works, it will far exceed what can be done on a CPU.

A single FPGA can replace hundreds of 2GHz processors in particular applications.

Re:By what benchmark? (2, Informative)

kipman725 (1248126) | more than 6 years ago | (#23611561)

Or hundreds of FPGAs can do a task that, even with supercomputing resources, was previously considered so time-consuming as to be worthwhile only for groups like the NSA: http://www.copacobana.org/index.html [copacobana.org] (the EFF had a similar custom-chip device several years earlier, but it cost >$250K).

Re:By what benchmark? (4, Insightful)

symbolset (646467) | more than 6 years ago | (#23610859)

By what benchmark is eight of the NVIDIA GPUs in the 9800 GX2 more powerful than 300 2.4 GHz C2Ds?

By the benchmark that they solve the particular problem of this specific application in 1/300th of the time?

Re:By what benchmark? (4, Insightful)

Jaime2 (824950) | more than 6 years ago | (#23611199)

I think the GP (and I) were objecting to the use of the fairly general word "power" and the use of this one problem as a "power benchmark". While it is obviously true that 8 GPUs are as fast as 300 C2Ds for this problem, this system isn't as fast as a supercomputer for most problems. All this does is point out that the recent trend of building supercomputers out of inexpensive general-purpose CPUs may not be a good idea for all applications.

Re:By what benchmark? (1)

foobsr (693224) | more than 6 years ago | (#23611265)

building supercomputers out of inexpensive general purpose CPUs may not be a good idea for all applications

You may generalize that, like, e.g., in - 'for running VISTA', but (ymmv) of course you can come up with a more serious example.

CC.

Re:By what benchmark? (5, Insightful)

symbolset (646467) | more than 6 years ago | (#23611379)

All this does is point out that the recent trend of building supercomputers out of inexpensive general purpose CPUs may not be a good idea for all applications.

And... a screwdriver is not always a prybar. A tool's a tool - they have preferred usage but if your requirement is specific and you're creative enough, you can do some fine work outside of the tool's intended purpose. Like this guy. Kudos to him.

Perhaps some more creative people finding this information will now discover if their specific requirements can be met by this interesting configuration. That will save them large quantities of cash or possibly enable some facility that was not previously available because supercomputers cost a grip-o-cash.

Of course for general purpose supercomputing you would want to use modified PS3s [wired.com] .

Re:By what benchmark? (0)

Anonymous Coward | more than 6 years ago | (#23611489)

That'd be great if they were interested in solving "most problems."

Re:By what benchmark? (4, Informative)

77Punker (673758) | more than 6 years ago | (#23610879)

By what benchmark is eight of the NVIDIA GPUs in the 9800 GX2 more powerful than 300 2.4 GHz C2Ds?
By any SIMD problem. For reference, fire up a game that's capable of using a software renderer and do some sort of benchmark, then use the 3D hardware on the same benchmark. That's the difference between SIMD on hardware that is designed to do SIMD and SIMD on hardware that's designed to do everything (or in the case of the Duo, multitasking).

Re:By what benchmark? (2, Interesting)

raftpeople (844215) | more than 6 years ago | (#23611729)

Just to expand on this stuff: Different tools are (obviously) designed for different workloads. I have a project I was contemplating porting to the Cell. Unfortunately only 40% of my performance bottleneck could take advantage of SIMD, but that 40% could have taken advantage of an enormous number of SIMD instructions just like the workload from TFA.

The other critical 40% of my project would have gained absolutely nothing from SIMD, and on the Cell would have lost time due to branches. In that case 300 C2Ds would far exceed the throughput of 8 GPUs.
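
Amdahl's law makes this quantitative (a hedged illustration; the 50x figure for the SIMD-friendly share is an assumption for the example, not from the comment): if only 40% of the runtime is accelerated, the overall speedup stays far below the accelerator's raw advantage.

```cpp
#include <cstdio>

// Amdahl's law: speedup = 1 / ((1 - p) + p / s)
// p = fraction of runtime that is accelerated, s = speedup of that fraction.
int main() {
    double p = 0.40;   // SIMD-friendly share of the workload (from the comment)
    double s = 50.0;   // assumed GPU/SIMD speedup on that share (illustrative)
    double overall = 1.0 / ((1.0 - p) + p / s);
    std::printf("overall speedup: %.2fx\n", overall);   // ~1.64x, not 50x
    return 0;
}
```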

Re:By what benchmark? (5, Informative)

hansraj (458504) | more than 6 years ago | (#23610889)

As far as my understanding goes, comparing a GPU's performance to a CPU's is very, very task dependent, and the comparison with 300 CPUs should not be taken to mean that an 8-GPU system is more powerful than 300 Core 2 Duos in general.

If the application requires solving a small task many times over, and all of these tasks can be done in parallel, then a GPU works great, because it has many cores, each of which can handle a simple routine. Also, the GPU is designed to spend very little time on how code is handled (load, switch, etc.) and more time actually running the code (hence the requirement of only very simple functions).

Such problems frequently arise in tomography, physics, astronomy, etc., and I hear GPUs are a great success in these areas. But don't hold your breath for running your favorite distro blazingly fast on GPUs.

Re:By what benchmark? (3, Informative)

TheThiefMaster (992038) | more than 6 years ago | (#23611701)

The 9800 GX2's GPUs have 128 1.5GHz "shader processors" each. Eight of these GPUs is like having 1024 vector-processing-specialised processor cores at your command.

I could easily believe that it performs comparably to 300 2.4GHz Core 2 Duos (aka 600 "over 1.5x faster but not vector-specialised" cores).

Theoretical performance is 576 GFLOPS per 9800 GX2 GPU (4.608 TFLOPS total) vs 19.2 GFLOPS per Core 2 CPU (5.760 TFLOPS total). However, in tests the Core 2 gets as low as 6 GFLOPS instead of its theoretical 19, and the 9800 GPU gets a lot closer to its full power.
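
Those peak figures reconstruct as follows (a back-of-the-envelope sketch; the per-clock FLOP counts are my reading of how such peaks are usually quoted, not stated in the comment):

```cpp
#include <cstdio>

int main() {
    // Per GPU on a 9800 GX2: 128 shader processors at 1.5 GHz,
    // counted at 3 single-precision FLOPs per clock (MADD = 2, plus a MUL).
    double gpu_gflops = 128 * 1.5 * 3;           // = 576 GFLOPS
    double gpu_total  = 8 * gpu_gflops / 1000;   // 8 GPUs = 4.608 TFLOPS

    // Per 2.4 GHz Core 2 Duo: 2 cores, counted at 4 FLOPs per clock per core.
    double cpu_gflops = 2 * 2.4 * 4;             // = 19.2 GFLOPS
    double cpu_total  = 300 * cpu_gflops / 1000; // 300 CPUs = 5.76 TFLOPS

    std::printf("GPU: %.0f GFLOPS each, %.3f TFLOPS for 8\n", gpu_gflops, gpu_total);
    std::printf("CPU: %.1f GFLOPS each, %.2f TFLOPS for 300\n", cpu_gflops, cpu_total);
    return 0;
}
```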

I guess... (3, Funny)

LordVader717 (888547) | more than 6 years ago | (#23610779)

They didn't have enough dough for 9.

Re:I guess... (0, Troll)

Anonymous Coward | more than 6 years ago | (#23610807)

Show me a board with more than 4 PCIe slots.

Re:I guess... (2, Informative)

lukas84 (912874) | more than 6 years ago | (#23611503)

Look at the IBM x3850 M2.

Re:I guess... (0)

Anonymous Coward | more than 6 years ago | (#23611407)

What? Can they actually buy half a GeForce 9800 GX2?

Re-birth of Amiga? (4, Interesting)

Yvan256 (722131) | more than 6 years ago | (#23610811)

Am I the only one seeing those alternative uses of GPUs as some kind of re-birth of the Amiga design?

Re:Re-birth of Amiga? (5, Informative)

Quarters (18322) | more than 6 years ago | (#23611113)

The Amiga design was, essentially, dedicated chips for dedicated tasks. The CPU was a Motorola 68XXX chip. Agnus handled RAM access requests from the CPU and the other custom chips. Denise handled video operations. Paula handled audio. This CPU + coprocessor setup is roughly analogous to a modern x86 PC with a CPU, northbridge chip, GPU, and dedicated audio chip. At the time the Amiga's design was revolutionary because PCs and Macs were using a single CPU to handle all operations. Both Macs and PCs have come a long way since then. 'Modern' PCs have had the "Amiga design" since about the time the AGP bus became prevalent.

nVidia's CUDA framework for performing general purpose operations on a GPU is something totally different. I don't think the Amiga custom chips could be repurposed in such a fashion.

Re:Re-birth of Amiga? (0)

Anonymous Coward | more than 6 years ago | (#23611235)

Only if it's running AROS. :-)

Dammy
http://www.aros.org

Nvidia... (0)

Anonymous Coward | more than 6 years ago | (#23610817)

So basically the drivers screw up the computer and cause an invalid pointer reference of some sort every few hours :P

Why haven't they started releasing GPU CPUs yet? (3, Interesting)

arrenlex (994824) | more than 6 years ago | (#23610843)

This article makes it seem like it is possible to use the GPUs as general-purpose CPUs. Is that the case? If so, why doesn't NVIDIA, or especially AMD/ATI, start putting its GPUs on motherboards? At a ratio of 8:300, a single high-end GPU seems to be able to do the work of dozens of high-end CPUs. They'd utterly wipe out the competition. Why haven't they put something like this out yet?

Re:Why haven't they started releasing GPU CPUs yet (4, Insightful)

kcbanner (929309) | more than 6 years ago | (#23610881)

They are useful for applications that can be massively parallelized. Your average program can't break off into 128 threads, that takes a little bit of extra skill on the coder's part. If, for example, someone could port gcc to run on the GPU, think of how happy those Gentoo folks would be :) (make -j128)!

Get the performance where it's most needed (3, Insightful)

mangu (126918) | more than 6 years ago | (#23611397)

They are useful for applications that can be massively parallelized

Precisely. But that happens to be one of the areas where more performance is still needed.

You don't need a super-duper CPU for text editing, that's for sure. For most of the tasks people do on computers, we have had enough CPU for the last 15 years or more. But where we still need more CPU happens to be mostly in tasks that ARE massively parallel, for instance physics simulations, of which you will find several examples on the nVidia site [nvidia.com].

I'm following this technology with much interest, and I think I will have a major upgrade in my home computer soon. My old FX-5200 card has been more than enough for my gaming needs, but now I have a new reason for upgrading.

Re:Get the performance where it's most needed (5, Funny)

kipman725 (1248126) | more than 6 years ago | (#23611595)

You don't need a super-duper CPU for text editing

clearly you have never used EMACS ;)

Re:Why haven't they started releasing GPU CPUs yet (1)

Dachannien (617929) | more than 6 years ago | (#23611723)

Your average program can't break off into 128 threads, that takes a little bit of extra skill on the coder's part.
Or, in some cases, a significant lack of skill, although in those cases, it usually doesn't stop at 128.

Re:Why haven't they started releasing GPU CPUs yet (1)

MynockGuano (164259) | more than 6 years ago | (#23610897)

No; if you read all the way to the end, you can see where they discuss the limited specific "general" programs that currently support this kind of thing. Namely, folding@home (on ATI cards) and maybe Photoshop in the future. The tomography software they use is likely their own code, is graphics-heavy, and is tailored for this set-up.

Re:Why haven't they started releasing GPU CPUs yet (1, Interesting)

Anonymous Coward | more than 6 years ago | (#23610899)

For information on their current HPC platform, check out http://www.nvidia.com/object/tesla_computing_solutions.html [nvidia.com]. FWIW, I don't think there would be that big of a performance advantage from putting the GPUs on the motherboard; in fact you'd probably get a performance decrease if you UMA'd the memory. With discrete boards each GPU has its own framebuffer, resulting in higher memory bandwidth.

Re:The idea is to use the CPU as the CPU (0, Flamebait)

Anonymous Coward | more than 6 years ago | (#23610943)

The idea is to replace the CPU with a GPU and use code morphing to convert x86 code to run on the GPU. Think about it: if one GPU is as powerful as 37 Core 2 Duo CPUs, why bother having a CPU on the motherboard at all?

Re:The idea is to use the CPU as the CPU (4, Insightful)

dreamchaser (49529) | more than 6 years ago | (#23611239)

Because for 95%+ of the problems a general-purpose computer tackles, GPUs would suck. It's only in very special cases that GPUs outperform CPUs. Thus, your idea is a poor one.

Re:Why haven't they started releasing GPU CPUs yet (0)

Anonymous Coward | more than 6 years ago | (#23610909)

GPGPU only works with highly parallelizable problems.

Re:Why haven't they started releasing GPU CPUs yet (1)

devman (1163205) | more than 6 years ago | (#23610913)

In this instance for this particular problem domain GPUs outperform CPUs because of the calculations they are designed to handle. This is not an indicator that GPUs outperform CPUs in ALL problem domains though.

Re:Why haven't they started releasing GPU CPUs yet (4, Informative)

77Punker (673758) | more than 6 years ago | (#23610939)

It is possible to solve non-graphics problems on graphics cards nowadays, but the hardware is still very specialized. You don't want the GPU to run your OS or your web browser or any of that; when a SIMD (single instruction, multiple data) problem arises, a decent computer scientist should recognize it and use the tools he has available.
Also, this stuff isn't as mature as normal C programming, so issues that don't always exist in software that's distributed to the general public will crop up because not everyone's video card will support everything that's going on in the program.

Re:Why haven't they started releasing GPU CPUs yet (0)

Anonymous Coward | more than 6 years ago | (#23610965)

Probably a combination of the heat/power budget for a CPU and the fact that programs have to be written specifically for GPUs.

Re:Why haven't they started releasing GPU CPUs yet (1)

yanyan (302849) | more than 6 years ago | (#23610971)

Check out the GPGPU (General Purpose GPU) project:

http://www.gpgpu.org/ [gpgpu.org]

Re:Why haven't they started releasing GPU CPUs yet (0)

Anonymous Coward | more than 6 years ago | (#23610977)

GPUs are good at doing what they are programmed for: calculations with floating point numbers.

They aren't very good at doing multiple different things like a CPU. Every time you want to do something different you have to reprogram them, which would suck for normal use on a desktop.

Re:Why haven't they started releasing GPU CPUs yet (1)

gweihir (88907) | more than 6 years ago | (#23610993)

This article makes it seem like it is possible to use the GPUs as general purpose CPUs. Is that the case? If so, why doesn't NVIDIA or especially AMD\ATI start putting its GPUs on motherboards? At a ratio of 8:300, a single high-end GPU seems to be able to do the work of dozens of high-end CPUs. They'd utterly wipe out the competition. Why haven't they put something like this out yet?

Simple: This is not a supercomputer at all, just special-purpose hardware running a very special problem. For general computations, GPUs are pretty inferior.

Re:Why haven't they started releasing GPU CPUs yet (1)

krilli (303497) | more than 6 years ago | (#23611015)

I don't understand completely what you mean, but:

* Only some people need the extra speed.

* The tasks that can be accelerated this much belong to a specific domain of computation.

* NVIDIA're just starting.

Re:Why haven't they started releasing GPU CPUs yet (1)

ChrisMaple (607946) | more than 6 years ago | (#23611055)

GPUs are massively parallel and very poor at processing branches. This doesn't make for good speed with most programs. For a company to put out a product that they claimed is much faster than the competition, they would have to be very selective with their examples and create new benchmarks that took advantage of their product. They might even have to create new programs to take advantage of the power, like a modified gimp.

In all likelihood, if they tried too hard to advertise their speed advantage, they would be called dishonest because most programs would not exhibit a speedup.

Re:Why haven't they started releasing GPU CPUs yet (1)

Dogtanian (588974) | more than 6 years ago | (#23611527)

This article makes it seem like it is possible to use the GPUs as general purpose CPUs. Is that the case?
As well as the issues that others have mentioned, there's also the problem of accuracy with GPUs.

AFAIK, in many (all?) ordinary consumer graphics cards, minor mistakes by the GPU are tolerated because they'll typically result in (at worst) minor or unnoticeable glitches in the display. I assume that this is because, to get the best performance, designers push the hardware beyond levels that would be acceptable otherwise.

Clearly if you're using them for other mathematical operations, or to partly replace a standard CPU, such mistakes might *not* be acceptable.

Re:Why haven't they started releasing GPU CPUs yet (1)

**loki969** (880141) | more than 6 years ago | (#23611613)

This article makes it seem like it is possible to use the GPUs as general purpose CPUs.
The summary might, but the article doesn't.

This is awesome! (4, Funny)

sticks_us (150624) | more than 6 years ago | (#23610863)

Ok, probably a paid NVIDIA ad placement, but check TFA anyway (and even if you don't read, you gotta love the case). It looks like heat generation is one of the biggest problems--sweet.

I like this too:

The medical researchers ran some benchmarks and found that in some cases their 4000EUR desktop superPC outperforms CalcUA, a 256-node supercomputer with dual AMD Opteron 250 2.4GHz chips that cost the University of Antwerp 3.5 million euro in March 2005...

...and at 4000EUR, that comes to what (rolls dice, consults sundial) about $20000 American?

Re:This is awesome! (2, Informative)

livingboy (444688) | more than 6 years ago | (#23610925)

Instead of dice you could use KCalc. 1 EUR is about 1.55 USD, so instead of 20,000 it cost only about 6,200 USD.

Re:This is awesome! (0)

Anonymous Coward | more than 6 years ago | (#23610961)

More like $6,000 American.

Re:This is awesome! (2, Informative)

krilli (303497) | more than 6 years ago | (#23610991)

Re:This is awesome! (1)

osu-neko (2604) | more than 6 years ago | (#23611635)

$6.218.

Six dollars and twenty two (rounding up) cents?! That's quite a bargain!

(One should probably refrain from using a "." as a separator when posting a figure in US dollars, where, by US convention [and this is US currency we're talking about], a "." indicates the end of the dollar part and the start of the cents part.)

Re:This is awesome! (1)

TubeSteak (669689) | more than 6 years ago | (#23611059)

The FASTRA uses aircooling and with the sidepanel removed the GPUs run at 55 degrees Celsius in idle, 86 degrees Celsius under full load and 100 degrees Celsius under full load with the shaders 20% overclocked. They have to run the system with the left side panel removed as the graphics cards would otherwise overheat but they're looking for a solution for their heat problem.
Looking for a solution?
Geeks everywhere have used the old "box fan aimed at the case" solution since time immemorial.

If you wanna get real fancy, you can pull/push air through a water cooled radiator.
Example: http://www.gmilburn.ca/ac/geoff_ac.html [gmilburn.ca]

Re:This is awesome! (1)

maxume (22995) | more than 6 years ago | (#23611343)

Designed in America, manufactured in Asia, purchased in Europe.

Re:This is awesome! (2, Insightful)

osu-neko (2604) | more than 6 years ago | (#23611667)

Designed in America, manufactured in Asia, purchased in Europe.

20th century thinking. Welcome to globalization. The product was designed, manufactured, and purchased on Earth.

Re:This is awesome! (0)

Anonymous Coward | more than 6 years ago | (#23611355)

Who cares? If they're not going to tell me how to make one, what's the point in telling us? We already know NVIDIA is the best.

Re:This is awesome! (1)

pablomme (1270790) | more than 6 years ago | (#23611525)

and at 4000EUR, that comes to what (rolls dice, consults sundial) about $20000 American?
That made me try to extrapolate the 2002-2008 trend of the exchange rate to see when that would become true (provided the trend continues). I get 2014 and 2045 with linear extrapolations, which are gross approximations, and 2023 with an exponential extrapolation. Does anyone know how exchange rates should be expected to behave with respect to time?

Re:This is awesome! (0)

Anonymous Coward | more than 6 years ago | (#23611593)

Note, for anyone who is European like me and initially read it as American-piggism, the parent wrote 20,000 euros, not 2000.

Tomography (4, Informative)

ProfessionalCookie (673314) | more than 6 years ago | (#23610865)

noun a technique for displaying a representation of a cross section through a human body or other solid object using X-rays or ultrasound.


In other news Graphics cards are good at . . . graphics.

Re:Tomography (1)

krilli (303497) | more than 6 years ago | (#23611039)

It's more complicated than that, trust me. The troubles inherent in graphics processing translate to a whole lot more stuff than just rendering the latest Doom.

Re:Tomography (5, Insightful)

jergh (230325) | more than 6 years ago | (#23611093)

What they are doing is reconstruction: basically analyzing the raw data from a tomographic scanner and generating a representation which can then be visualized. So it's more numerical methods than graphics.

And BTW, even rendering the reconstructed results is not that simple, as current graphics cards are optimized for geometry, not volumetric data.
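
For a feel of why reconstruction maps so well onto a GPU (a toy sketch of unfiltered 2D back-projection, not the Antwerp group's code; the names and nearest-neighbour sampling are simplifications): every output pixel accumulates its contributions independently, so one thread per pixel works.

```cuda
#include <cuda_runtime.h>

// Toy 2D back-projection: sino is [nAngles x nDet] of projection data,
// img is the nPix x nPix output. One thread per output pixel.
__global__ void backproject(const float *sino, float *img,
                            int nPix, int nDet, int nAngles) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= nPix || y >= nPix) return;

    float cx = x - nPix / 2.0f;          // pixel position relative to centre
    float cy = y - nPix / 2.0f;
    float acc = 0.0f;
    for (int a = 0; a < nAngles; ++a) {
        float theta = a * 3.14159265f / nAngles;
        // Detector coordinate this pixel projects onto at angle theta
        float t = cx * cosf(theta) + cy * sinf(theta) + nDet / 2.0f;
        int det = (int)(t + 0.5f);       // nearest-neighbour sample
        if (det >= 0 && det < nDet)
            acc += sino[a * nDet + det];
    }
    img[y * nPix + x] = acc;             // pixels never read each other: data parallel
}
```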

coincidence (2, Insightful)

DaveGod (703167) | more than 6 years ago | (#23610871)

I can't imagine that it is a coincidence that this comes along just as Nvidia are crowing about CUDA, or that the resulting machine looks like a gamer's dream rig.

While there is ample crossover between hardware enthusiasts and academia, anyone solely with the computational interest in mind probably wouldn't be selecting neon fans and aftermarket coolers, or spending that much time on presentable wiring.

In other news... (4, Funny)

bobdotorg (598873) | more than 6 years ago | (#23610891)

... 3D Realms announced this as the minimum platform requirements to run Duke Nuke'em Forever.

Re:In other news... (1)

MindlessAutomata (1282944) | more than 6 years ago | (#23611287)

Sounds extreme, but you have to keep in mind that when DNF comes out, all those cards and such will be stored in our closets or used as spare equipment for our silly home Linux servers.

No it does not (1)

symbolset (646467) | more than 6 years ago | (#23610893)

It does not run Linux.

Can it? Anybody?

Re:No it does not (1)

77Punker (673758) | more than 6 years ago | (#23611003)

I guess this is probably a joke, but the sections of code that run on the GPU should work on any platform supported by CUDA (which is probably what they're using, didn't read TFA) with little or no modification. Unless they're modifying structures created by D3D, that is....

Re:No it does not (1)

Tweenk (1274968) | more than 6 years ago | (#23611119)

Can it? Anybody?
You can begin working on it, they also have SDKs for Linux:
CUDA SDK download [nvidia.com]

Re:No it does not (1)

You ain't seen me! (1237346) | more than 6 years ago | (#23611417)

It does not run Linux.
But Microsoft will start porting Vista to it, as soon as they get their x86 version sorted out.

Finally... (5, Funny)

ferrellcat (691126) | more than 6 years ago | (#23610901)

Something that can play Crysis!

Re:Finally... (4, Funny)

Yvan256 (722131) | more than 6 years ago | (#23610945)

If you call 8 FPS "playing".

nVidia Tesla (1)

beef3k (551086) | more than 6 years ago | (#23610911)

Why not just buy a premade Tesla system from nVidia and avoid the heating problems?

Re:nVidia Tesla (1, Insightful)

Anonymous Coward | more than 6 years ago | (#23610963)

A Tesla system would cost a lot more.

This is not a supercomputer (3, Insightful)

poeidon1 (767457) | more than 6 years ago | (#23610917)

This is an example of an acceleration architecture. Anyone who has used FPGAs knows that. Of course, making sensational news is all too common on /.

BUT! (1)

Dr.D.IS.GREAT (1249946) | more than 6 years ago | (#23610931)

Does it run linux or Crysis??

Killer Slant (1, Insightful)

FurtiveGlancer (1274746) | more than 6 years ago | (#23610941)

The guys explain the eight NVIDIA GPUs deliver the same performance for their work as more than 300 Intel Core 2 Duo 2.4GHz processors.

Pardon the italics, but I was impacted by the killer slant of this posting.

For specific kinds of calculations, sure, GPGPU supercomputing is superior. I would question what software optimization they had applied to the 300 CPU system. Apparently, none. Let's not sensationalize quite so much, shall we?

Not a Supercomputer -- Special purpose hardware (2, Informative)

gweihir (88907) | more than 6 years ago | (#23610957)

It is also not difficult to find other tasks where, e.g., FPGAs perform vastly better than general-purpose CPUs. That does not make an FPGA a "supercomputer". Stop the BS, please.

Re:Not a Supercomputer -- Special purpose hardware (2, Interesting)

emilper (826945) | more than 6 years ago | (#23611651)

Aren't most supercomputers designed to perform some very specific tasks? You don't buy a supercomputer to run the Super edition of Excel.

Brick of GPUs (4, Interesting)

Rufus211 (221883) | more than 6 years ago | (#23610995)

I love this picture: http://fastra.ua.ac.be/en/images.html [ua.ac.be]

Between the massive brick of GPUs and the massive CPU heatsink/fan, you can't see the mobo at all.

Re:Brick of GPUs (3, Funny)

Fumus (1258966) | more than 6 years ago | (#23611393)

They spent 4000 EUR for the computer, but use two boxes in order to situate the monitor higher. I guess they spent everything they had on the computer.

Re:Brick of GPUs (0, Flamebait)

Capitalist Piggy (1298699) | more than 6 years ago | (#23611675)

It's also funny that the excuse for having the side off the case is "to keep the temperature down", when they could have simply put a couple of exhaust fans on the side.

It's more likely they keep the case door off so people can gawk at all the nVidia cards.

In other news, I am going to attach 8 network cards to my PC and have an article published about how it is better than a high-end firewall solution because it can play Quake. Seems about as relevant as comparing 8 nV cores to 400 intels.

Re:Brick of GPUs (0)

Anonymous Coward | more than 6 years ago | (#23611737)

What I love about that picture:

It shows that even 8 GPUs were not enough to run Vista!

Wave of the Future? Yes (5, Informative)

bockelboy (824282) | more than 6 years ago | (#23611043)

Wave of the Future? Yes*. Revolution in computing? Not quite.

The GPGPU scheme is, after all, a re-invention of the vector processing of old. Vector processors died out, however, because there were too few users to support them. Now that there's a commercially viable reason to make these processors (the PS3 and video games), they are interesting again.

The researchers took a specialized piece of hardware, rewrote their code for it, and found it was faster than their original code on generic hardware. The problems here are that you have to rewrite your code (High Energy Physics codebases are about a GB, compiled... other sciences are similar) and you have to have a problem which will run well on this scheme. Have a discrete problem? Too bad. Have a gigantic, tightly coupled problem which requires lots of inter-GPU communication? Too bad.

Have a tomography problem which requires only 1GB of RAM? Here you go...

The standard supercomputer isn't going away for a long, long time. Now, as before, a one-size-fits-all approach is silly. You'll start to see sites complement their clusters and large-SMP machines with GPU power as scientists start to understand and take advantage of them. Just remember, there are 10-20 years of legacy code which will need to be ported... it's going to be a slow process.

Re:Wave of the Future? Yes (4, Informative)

77Punker (673758) | more than 6 years ago | (#23611109)

Fortunately, Nvidia provides a CUDA version of the basic linear algebra subprograms, so even if your software is hard to port, you can speed it up considerably if it does some big matrix operations, which can easily take a long time on a CPU.
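
A minimal sketch of what that looks like with the CUBLAS library (modern v2 API, made-up sizes; treat it as illustrative rather than drop-in code): the single GEMM call replaces the triple loop you'd otherwise grind through on the CPU.

```cpp
#include <vector>
#include <cstdio>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main() {
    const int n = 1024;                       // C = A * B, all n x n, column-major
    std::vector<float> A(n * n, 1.0f), B(n * n, 2.0f), C(n * n, 0.0f);

    float *dA, *dB, *dC;
    cudaMalloc(&dA, n * n * sizeof(float));
    cudaMalloc(&dB, n * n * sizeof(float));
    cudaMalloc(&dC, n * n * sizeof(float));
    cudaMemcpy(dA, A.data(), n * n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, B.data(), n * n * sizeof(float), cudaMemcpyHostToDevice);

    cublasHandle_t h;
    cublasCreate(&h);
    const float alpha = 1.0f, beta = 0.0f;
    // One library call runs the whole matrix multiply on the GPU.
    cublasSgemm(h, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, dA, n, dB, n, &beta, dC, n);

    cudaMemcpy(C.data(), dC, n * n * sizeof(float), cudaMemcpyDeviceToHost);
    std::printf("C[0] = %f\n", C[0]);         // expect 2048 (= n * 1 * 2)
    cublasDestroy(h);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```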

First post (0, Redundant)

Toll_Free (1295136) | more than 6 years ago | (#23611159)

But how many FPS does it do with Duke Nukem Forever? --Toll_Free

Re:First post (1)

gardyloo (512791) | more than 6 years ago | (#23611661)

On par with anyone else's system?

Vector Computing (2, Interesting)

alewar (784204) | more than 6 years ago | (#23611177)

They are comparing their system against normal computers; it'd be interesting to see a benchmark against a vector computer like, e.g., the NEC SX-9.

I'd build a beowulf cluster of these... (0)

Anonymous Coward | more than 6 years ago | (#23611337)

But the motherboard didn't have enough PCIe x16 slots!

The price ! (2, Funny)

this great guy (922511) | more than 6 years ago | (#23611427)

The system uses four NVIDIA GeForce 9800 GX2 graphics cards, costs less than 4,000 EUR to build

What's more crazy: calling something this inexpensive a supercomputer, or 4 video cards costing a freaking 4,000 EUR.

Re:The price ! (1)

TheUnknownOne (810624) | more than 6 years ago | (#23611703)

I'm even more surprised by the fact that they gave links to Newegg for all their parts. If I build a similar system, swapping out their PSU choice for a 1300W one from Newegg (the one Newegg had with the most PCIe connectors, as they didn't have the PSU they used), it costs only $3,400 US, or as per Google, about 2,200 EUR. Granted, prices may have dropped since they did this, but I doubt they've been cut in half.

I wonder how that compares to the D870 (1)

Lazy Jones (8403) | more than 6 years ago | (#23611495)

Nvidia offers an external GPU solution specifically for "deskside supercomputing", the Tesla D870 [nvidia.com]. It has only 2 cores at 1.35GHz each; apart from it being a bit more expensive, I wonder how it compares (you can connect several to a PC).

Windows XP? (1)

marind (12895) | more than 6 years ago | (#23611629)

What a pity that they are running Windows XP. This computer could be the first seen running Vista at an acceptable speed.

Sooo... using GPU for graphics? (1)

flyingfsck (986395) | more than 6 years ago | (#23611685)

They are using graphical processors to process graphics. Truly revolutionary. Who woulda thunkit?