
GPUs To Power Supercomputing's Next Revolution

Zonk posted more than 7 years ago | from the you-can-use-them-to-play-quake-too dept.

evanwired writes "Revolution is a word that's often thrown around with little thought in high tech circles, but this one looks real. Wired News has a comprehensive report on computer scientists' efforts to adapt graphics processors for high performance computing. The goal for these NVidia and ATI chips is to tackle non-graphics related number crunching for complex scientific calculations. NVIDIA announced this week along with its new wicked fast GeForce 8800 release the first C-compiler environment for the GPU; Wired reports that ATI is planning to release at least some of its proprietary code to the public domain to spur non-graphics related development of its technology. Meanwhile lab results are showing some amazing comparisons between CPU and GPU performance. Stanford's distributed computing project Folding@Home launched a GPU beta last month that is now publishing data putting donated GPU performance at 20-40 times the efficiency of donated CPU performance."


What makes GPUs so great? (3, Funny)

Anonymous Coward | more than 7 years ago | (#16789059)

I was thinking about the question of what makes GPUs so great..

I thought .. What is it that a CPU does that a GPU doesn't?

Oh yeah .. I know .. run windows.

*I'm kidding I'm kidding*

Re:What makes GPUs so great? (1)

tumbleweedsi (904869) | more than 7 years ago | (#16792728)

To be fair, with Vista there is going to be a whole lot of Windows running on the graphics card.

Re:What makes GPUs so great? (1)

Xymor (943922) | more than 7 years ago | (#16832568)

Talking about running Windows: is it possible (or legal) for Nvidia to sell proprietary software for system-demanding activities like video and audio encoding, running only on their hardware, taking advantage of its supercomputer-like properties?

Sweet (4, Interesting)

AKAImBatman (238306) | more than 7 years ago | (#16789069)

GeForce 8800 release the first C-compiler environment for the GPU;

One more step toward GPU Raytracing. We're already pushing rediculous numbers of polygons, with less and less return for our efforts. The future lies in projects like OpenRT [openrt.de] . With any luck, we'll start being able to blow holes through levels rather than having to run the rat-maze. ;)

Re:Sweet (-1, Troll)

Anonymous Coward | more than 7 years ago | (#16793628)

rIdiculous. not fucking rEdiculous. what cant any of you twats on slashdot spell this common fucking word?

Re:Sweet (1)

syukton (256348) | more than 7 years ago | (#16799986)

It may be the same reason that you can't be bothered to capitalize or use apostrophes.

cant [kant] -noun
1. insincere, esp. conventional expressions of enthusiasm for high ideals, goodness, or piety.
2. the private language of the underworld.
3. the phraseology peculiar to a particular class, party, profession, etc.: the cant of the fashion industry.
4. whining or singsong speech, esp. of beggars.
-verb (used without object)
5. to talk hypocritically.
6. to speak in the whining or singsong tone of a beggar; beg.

can't [kant, kahnt]
1. contraction of cannot.

linpack numbers please (0)

Anonymous Coward | more than 7 years ago | (#16789085)

I'll believe it when I see Linpack numbers

So... (4, Informative)

Odin_Tiger (585113) | more than 7 years ago | (#16789087)

Let me see if I have this down right: With the progress of multi-core CPU's, especially looking at the AMD / ATI deal, PC's are moving towards a single 'super chip' that will do everything while phasing out the use of a truly separate graphics system. Meanwhile, supercomputers are moving towards using GPU's as the main workhorse. Doesn't that strike anybody else as a little odd?

Re:So... (1)

maxwell demon (590494) | more than 7 years ago | (#16793332)

Maybe some time in the future we will have CPUs with integrated GPUs (which probably will not be called that, since they are also used for general parallel processing tasks).

Re:So... (2, Insightful)

mikael (484) | more than 7 years ago | (#16794350)

With the progress of multi-core CPU's, especially looking at the AMD / ATI deal, PC's are moving towards a single 'super chip' that will do everything while phasing out the use of a truly separate graphics system.

Not really...

PCs run multiple processes with unpredictable branching - network protocol stacks, device drivers, word processors, plug-and-play devices - so more CPU cores help spread the load. For the desktop windowing system, 3D functionality was simply a bolt-on through a separate API; now it is integral to the windowing system. However, the new multi-core CPUs will still have the graphics processing logic.

In the past, supercomputers were built either from custom ASICs or simply from a large number of CPUs networked together in a particular topology.

GPUs now support both floating-point textures and downloadable shading programs that are executed in parallel. Combining these two features gives the GPU all the functionality of a supercomputer - although until now, GPUs have only supported 16-bit floating-point precision rather than the 32-bit or 64-bit precision that traditional supercomputing applications such as FFT or computational fluid dynamics require.

And since these applications are purely mathematical, with no conditional branching within the innermost loops, they are well suited to being ported onto the GPU. The only limitation has been that GPUs couldn't form scalable architectures - at least until SLI came along. So you've basically got supercomputing performance on a board, and it fits into the scalable architecture of a supercomputer [ibm.com].
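
To make the "no conditional branching in the innermost loops" point concrete, here is a minimal sketch (not from the thread itself, and assuming NVIDIA's newly announced C-for-CUDA toolchain) of a fluid-dynamics-style update written as a GPU kernel, where every thread performs the same pure arithmetic on its own array element:

    // Hypothetical sketch: one explicit time step of the 1D heat equation,
    // u_new[i] = u[i] + r * (u[i-1] - 2*u[i] + u[i+1]).
    // Pure arithmetic per element; no data-dependent branching in the body.
    __global__ void heat_step(const float *u, float *u_new, float r, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i > 0 && i < n - 1)   // guard the two boundary cells
            u_new[i] = u[i] + r * (u[i-1] - 2.0f*u[i] + u[i+1]);
    }

    // Host side: launch enough 256-thread blocks to cover all n points, e.g.
    // heat_step<<<(n + 255) / 256, 256>>>(d_u, d_u_new, r, n);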

What I'd like to see come from this (4, Interesting)

Marxist Hacker 42 (638312) | more than 7 years ago | (#16789089)

Simple video games that run ENTIRELY on the GPU - mainly for developers. Got 3 hours (or I guess it's now going on 7 hours) to wait for an ALTER statement on a table to complete, and you're bored stiff? Fire up this video game, and while your CPU cranks away, you can be playing with virtually NO performance hit to the background CPU task.

Re:What I'd like to see come from this (1)

Profane MuthaFucka (574406) | more than 7 years ago | (#16797922)

Do people who work with databases large enough to make an alter table run for 3 hours commonly put them on their workstations? Why not run the alter table command on the database server, and play your game on your workstation?

Re:What I'd like to see come from this (1)

Marxist Hacker 42 (638312) | more than 7 years ago | (#16832252)

Do people who work with databases large enough to make an alter table run for 3 hours commonly put them on their workstations? Why not run the alter table command on the database server, and play your game on your workstation?

I would hope the latter - but the 2^24th bug rebuild I was referring to sure took a long time.

Play Nethack or other low-CPU game :-) (1)

billstewart (78916) | more than 7 years ago | (#16817170)

I would suggest reading the net while you're waiting for the computation to finish, but I'm sitting here with Mozilla using 150MB of RAM and burning 98% of CPU because it's gotten itself into some kind of loop.... But Nethack is a nice low-CPU low-RAM game that shouldn't bother your CPU much.

Yay (0)

Anonymous Coward | more than 7 years ago | (#16789095)

enough power to run Windows Vista at the same time as DNF, plus every computer game on Earth

Here we go again... (3, Funny)

Chayak (925733) | more than 7 years ago | (#16789101)

Great, now Homeland Defence is going to buy up all the graphics cards to prevent their dangerous computing power from falling into the hands of evil script kiddies trying to crack your Hotmail account...

wider view... (1)

headkase (533448) | more than 7 years ago | (#16789109)

Will it still be relevant if Intel delivers 80 cores [com.com] in five years as they promise? Or will history repeat itself and we'll have our 80 cores plus specialized "math coprocessors" again?

Acronym (2, Informative)

benhocking (724439) | more than 7 years ago | (#16789113)

For those who are curious, CUDA stands for "compute unified device architecture".

Mod article -1 redundant (0)

Anonymous Coward | more than 7 years ago | (#16789119)

Nvidia out and Intel graphics chipsets in. As long as Nvidia won't even release specs for their cards, I don't foresee their GPUs powering anything I'm involved with.

The next thing you know... (3, Funny)

NerveGas (168686) | more than 7 years ago | (#16789127)


    "Serious" computers won't come with fewer than 4 16x PCI-E slots for hooking in "scientific processing units"...

    We used to tell our boss that we were going to do stress-testing when we stayed late to play Q3; this takes that joke to a whole new level.

SIMD for the masses (1)

J.R. Random (801334) | more than 7 years ago | (#16789137)

This may result in people buying high end video cards for headless servers doing weather simulations and the like.

Self aware in 2007 (1, Funny)

JCOTTON (775912) | more than 7 years ago | (#16789143)

Can a game computer that is self-aware, play with itself? Are Slashdotters that play with themselves, self-aware?

Step back, step back from that sig....

Re: What makes a GPU so great (4, Informative)

NerveGas (168686) | more than 7 years ago | (#16789145)

"I thought .. What is it that a CPU does that a GPU doesn't?"

GPUs have dedicated circuitry to do math, math, and more math - and to do it *fast*. In a single cycle, they can perform mathematical computations that take general-purpose CPUs an eternity, in comparison.
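
As an illustration of that data parallelism (a sketch of my own, not anything from NVIDIA's documentation), the same multiply-add that a CPU walks through one element at a time can be issued across thousands of GPU threads at once with the new C-for-CUDA toolchain:

    // CPU version: one multiply-add per iteration, executed serially.
    void saxpy_cpu(int n, float a, const float *x, float *y)
    {
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

    // GPU version: the same multiply-add, but each element gets its own thread.
    __global__ void saxpy_gpu(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    // Launch with one thread per element, 256 threads per block:
    // saxpy_gpu<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);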

Sighted in Massachusetts... (4, Funny)

sczimme (603413) | more than 7 years ago | (#16789153)



NVIDIA announced this week along with its new wicked fast GeForce 8800 release the first C-compiler environment for the GPU

"Wicked fast" GPU? And a compiler?

Sounds like a Boston C Party.

Practical results (2, Informative)

Anonymous Coward | more than 7 years ago | (#16789165)

Nice to see the mention of Acceleware in the press release. While a lot of the article is about lab results, Acceleware has been delivering actual GPU powered products for a couple of years now.

8800 and Seymour's machines (3, Interesting)

Baldrson (78598) | more than 7 years ago | (#16789167)

The 8800 looks like the first GPU that really enters the realm of the old fashioned supercomputing architectures pioneered by Seymour Cray that I cut my teeth on in the mid 1970s. I can't wait to get my hands on their "C" compiler.

Another overview article NYTIMES and literature? (5, Informative)

JavaManJim (946878) | more than 7 years ago | (#16789179)

Excellent news! Below is the link, registration required, for the New York Times. I will try to paste the article.

Second. Anyone out there working on books that have examples? Please reply with any good 'how to' sources.

Source: http://www.nytimes.com/2006/11/09/technology/09chip.html?ref=technology [nytimes.com]

SAN JOSE, Calif., Nov. 8 -- A $90 million supercomputer made for nuclear weapons simulation cannot yet be rivaled by a single PC chip for a serious video gamer. But the gap is closing quickly.

Indeed, a new breed of consumer-oriented graphics chips have roughly the brute computing processing power of the world's fastest computing system of just seven years ago. And the latest advance came Wednesday when the Nvidia Corporation introduced its next-generation processor, capable of more than three trillion mathematical operations per second.

Nvidia and its rival, ATI Technologies, which was recently acquired by the microprocessor maker Advanced Micro Devices, are engaged in a technology race that is rapidly changing the face of computing as the chips -- known as graphical processing units, or G.P.U.'s -- take on more general capabilities.

In recent years, the lead has switched quickly with each new family of chips, and for the moment the new chip, the GeForce 8800, appears to give the performance advantage to Nvidia.

On Wednesday, the company said its processors would be priced at $599 and $449, sold as add-ins for use by video game enthusiasts and for computer users with advanced graphics applications.

Yet both companies have said that the line between such chips and conventional microprocessors is beginning to blur. For example, the new Nvidia chip will handle physics computations that are performed by Sony's Cell microprocessor in the company's forthcoming PlayStation 3 console.

The new Nvidia chip will have 128 processors intended for specific functions, including displaying high-resolution video.

And the next generation of the 8800, scheduled to arrive in about a year, will have "double precision" mathematical capabilities that will make it a more direct competitor to today's supercomputers for many applications.

"I am eagerly looking forward to our next generation," said Andy Keane, general manager of Nvidia's professional products division, a business the company set up recently to aim at commercial high-performance computing applications like geosciences and gene splicing.

The chips made by Nvidia and ATI are shaking up the computing industry and causing a level of excitement among computer designers, who in recent years have complained that the industry seemed to have run out of new ideas for gaining computing speed. ATI and Advanced Micro Devices have said they are working on a chip, likely to emerge in 2008, that would combine the functions of conventional microprocessors and graphics processors.

That convergence was emphasized earlier this year when an annual competition sponsored by Microsoft's research labs to determine the fastest sorting algorithm was won this year by a team that used a G.P.U. instead of a traditional microprocessor. The result is significant, according to Microsoft researchers, because sorting is a basic element of many modern computing operations.

Moreover, while innovation in the world of conventional microprocessors has become more muted and largely confined to adding multiple processors, or "cores," to single chips, G.P.U. technology is continuing to advance rapidly.

"The G.P.U. has this incredible memory bandwidth, and it will continue to double for the foreseeable future," said Jim Gray, manager of Microsoft's eScience group.

Although the comparison has many caveats, both computer scientists and game designers said that Nvidia GeForce 8800 had in some ways moved near the realm for the computing power of the supercomputing world of the last decade.

The fastest of these machines were specialized systems designed to simulate the tremendous forces that come into play in the first microseconds of a nuclear explosion. In contrast, Nvidia's new chip is a graphics processor that takes a significant step toward general-purpose computation. It can simulate the reflections of light scattering off objects in a video game as well as the physical interactions of the moving objects themselves.

To underscore its new ability to mimic reality, Nvidia on Wednesday showed off the 8800 chip with a virtual reality simulation of the model and actress Adrianne Curry.

The company staged the demonstration in a 200-yard-long tent it had set up here for a large gathering of PC gamers who were invited to preview the company's technology.

A second demonstration showed an interactive and photorealistic frog that could be stretched and slapped by an interactive video hand.

"We're just entering an era where it is possible to capture photo-real human faces in motion," said Steve Perlman, a Silicon Valley technologist whose San Francisco-based company, Mova, recently introduced a system for capturing human faces and figures for animation studios. "The 8800 enables us to display that level of realism in real time."

Separately, Nvidia also announced a new generation of PC systems that it is building based on Intel and A.M.D. processors. It will sell its nForce 600 Series line of media communications processors to systems makers and directly to retailers.

Thanks,
Jim Burke

Re: So... (1)

AKAImBatman (238306) | more than 7 years ago | (#16789185)

Let me see if I have this down right: With the progress of multi-core CPU's, especially looking at the AMD / ATI deal, PC's are moving towards a single 'super chip' that will do everything while phasing out the use of a truly separate graphics system. Meanwhile, supercomputers are moving towards using GPU's as the main workhorse. Doesn't that strike anybody else as a little odd?

Odd? Not really. The "PC super chip" design is practically the same thing as the "GPU supercomputer" design. The big difference is that the PC super chip adds in general processing capabilities, while the GPU supercomputer merely shrinks existing supercomputer designs onto high-performance graphics chips.

GPUs actually share a lot in common with traditional supercomputer designs. So it's not surprising at all that the massive amount of R&D done by the GPU market would have real-world applications for the supercomputer market. Similarly, the PC market has already played out the clock-speed trick. It still needs general-purpose processing, but getting a 3.5 GHz chip just doesn't improve things much. Instead, the PC market is starting to look at using that processing power more effectively by dividing it up between truly parallel tasks - especially those that require massive number crunching.

Configurable microchips. (1)

purpledinoz (573045) | more than 7 years ago | (#16789187)

I think computers will eventually contain an FPGA, which can be re-programmed to perform any task. For example, a physics processor can be programmed into the FPGA when a game launches, folding@home can program the FPGA to do specific vector calculations very quickly, encryption algorithms can be programmed in to perform encryption/decryption very quickly, etc.

FPGAs are getting quite powerful and are getting a lot cheaper. It definitely won't be as fast as a dedicated ASIC, but if programmed properly, it should be able to accelerate certain tasks significantly.

8800GTX and HPC (5, Interesting)

BigMFC (592465) | more than 7 years ago | (#16789195)

The specs on this board are pretty crazy: 128 single-precision FP units, each capable of doing an FP multiply-add or a multiply, operating at 1.35 GHz and no longer closely coupled to the traditional graphics pipeline. The memory hierarchy also looks interesting... this design is going to be seeing a lot of comparisons to the Cell processor. Memory is attached via a 384-bit bus (320-bit on the GTS) and operates at 900 MHz.

The addition of a C compiler and drivers specific to GPGPU applications, available for Linux (!) as well as XP/Vista, means that this is going to see widespread adoption amongst the HPC crowd. There probably won't be any papers on it published at SC06 in Florida next week, but over the next year there will probably be a veritable torrent of publications (there is already a LOT being done with GPUs). The new architecture really promotes GPGPU apps, and the potential performance/$ is compelling, especially factoring in development time, which should be significantly lower with this toolchain. A couple of 8800GTXes in SLI and I could be giving traditional clusters a run for their money on apps like FFTs. I can't wait till someone benchmarks FFT performance using CUDA. If anyone finds such numbers, post and let me know!
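
For anyone wanting to produce exactly those numbers, here is a rough sketch (mine, not the poster's) of how such a timing run might look, assuming the cuFFT library and event timers that ship with the CUDA toolkit:

    #include <cuda_runtime.h>
    #include <cufft.h>
    #include <stdio.h>

    int main(void)
    {
        const int N = 1 << 20;                  // 1M-point complex transform
        cufftComplex *d_data;
        cudaMalloc((void **)&d_data, N * sizeof(cufftComplex));
        // Contents are left uninitialized; that is fine for a pure timing run.

        cufftHandle plan;
        cufftPlan1d(&plan, N, CUFFT_C2C, 1);    // one 1D complex-to-complex FFT

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start, 0);
        cufftExecC2C(plan, d_data, d_data, CUFFT_FORWARD);   // in-place forward FFT
        cudaEventRecord(stop, 0);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("1M-point single-precision FFT: %.3f ms\n", ms);

        cufftDestroy(plan);
        cudaFree(d_data);
        return 0;
    }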

RE: So.... (1)

tinkerghost (944862) | more than 7 years ago | (#16789223)

Original [slashdot.org]
It's not unusual at all. CPUs are very general and do certain things very quickly & efficiently. GPUs on the other hand do other things very quickly and efficiently. The type of number crunching that GPUs do is actually well suited to the massively repetitive number crunching done by most of the big super computers [think climatology studies]. Shifting from CPU to GPU architectures just makes sense there.

Currently viable solutions (2, Informative)

Conradaroma (1025261) | more than 7 years ago | (#16789241)

It's nice to see the name Acceleware mentioned in the NVIDIA press release, although they are missing from the 'comprehensive' report on Wired. It should be noted that they have been delivering high-performance computing solutions for a couple of years or so already. I guess now it's out of the bag that NVIDIA's little graphics cards had something to do with that.

Anyone know of any other companies that have already been commercializing GPGPU technology?

Reply to #16789087 (3, Insightful)

Dr. Eggman (932300) | more than 7 years ago | (#16789255)

"Let me see if I have this down right: With the progress of multi-core CPU's, especially looking at the AMD / ATI deal, PC's are moving towards a single 'super chip' that will do everything while phasing out the use of a truly separate graphics system. Meanwhile, supercomputers are moving towards using GPU's as the main workhorse. Doesn't that strike anybody else as a little odd?"
16789087 [slashdot.org]

I picture this:

Before:
CPU makers: "Hardware's expensive, keep it simple."
GPU makers: "We can specialize the expensive hardware separately!"


Now:
CPU makers: "Hardware's cheaper and cheaper; let's keep up our profits by making ours more inclusive."
GPU makers: "We can specialize the cheap hardware for really, really big number-crunching projects!"


btw, why isn't the reply button showing up? I'm too lazy to hand type the address.

GPGPU companies (2, Informative)

BigMFC (592465) | more than 7 years ago | (#16789259)

Check out Peakstream (http://www.peakstreaminc.com/). They're a Silicon Valley startup doing a lot of tool development for multicore chips, GPUs and Cell.

Re:wider view... (0)

Anonymous Coward | more than 7 years ago | (#16789263)

The newest Nvidia GPUs apparently have 128 cores. From their press release: "...which allows 128, 1.35GHz processor cores in newest generation NVIDIA GPUs to cooperate with each...".

It gets even better (2, Funny)

Donut2099 (153459) | more than 7 years ago | (#16789265)

They found they could get even more performance by turning off vsync!

In other news (1)

stud9920 (236753) | more than 7 years ago | (#16789275)

In other news, the Von Neumann design was discovered to be pretty much Turing complete, but not the best tool for every job. Film at 11.

Familiar Idea (1)

AetherGoth (707621) | more than 7 years ago | (#16789279)

My honors thesis in college back in 2004 was a framework that would let you load pixel shaders (written in Cg) as 'threads' and run them in parallel on one GPU. As far as I can tell nVidia has done the same thing, but taken it a step further by translating from C (and more efficiently, I'm sure).

I guess I should have published that paper back then...oh well.

I run climate simulations (1)

mofag (709856) | more than 7 years ago | (#16789337)

and they take a lot of memory. Does anyone know if the Nvidia cards will be able to access main system memory, or does that defeat the purpose? (E.g., I am currently running 268 climate states which each take a couple of hours and about 1 GB of physical memory, so on my cluster of 4 X2 5000s they should be done in a couple of days.) 40 times the processing power would be awesome (these GPUs are probably ATIs with 24-48 pipelines, hence the 20-40 times figure; the GTX has 128 processors), but where is all the memory going to come from? Will we see video cards with 64 GB of DDR? If it means recoding everything from scratch then this benefit won't trickle down to people like me :( *wipes tears from eyes and sniffs* So, anyone know if this will run on my 6800GT? :)

sigh.. (4, Funny)

springbox (853816) | more than 7 years ago | (#16789351)

They "CUDA" come up with a better acronym.

power management (2, Interesting)

chipace (671930) | more than 7 years ago | (#16789465)

I think that implementing the GPU as a collection of configurable ALUs is an awesome idea. I have two gripes:

(1) Power management: I want at least 3 settings (lowest power, mid-range, and max performance)

(2) Where's the killer app? I value my electricity more than contributing to folding and SETI.

If they address these, I'm a customer... (I'm a cheap bastard who is fine with integrated 6150 graphics)

Re:power management (1)

maxwell demon (590494) | more than 7 years ago | (#16793516)

I could imagine video editing as one application which by its very nature could profit quite a bit from this.

But anyway, there's always scientific computing. Guess what all those supercomputers are used for :-)

And now imagine a GPU-driven Beowulf cluster ... :-)

double precision (-1, Redundant)

Anonymous Coward | more than 7 years ago | (#16789471)

It is well understood that GPUs are fast. What scientists want to know is what kind of precision issues are involved in using the GPU:

1) When will the GPU fully support the IEEE 754 standard for single precision?
2) When will the GPU include at least emulated support for double precision?

Many algorithms do not work well in single precision because error propagation really hurts.
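
A tiny host-side illustration (added here for context, not part of the original comment) of the error propagation being described: naively accumulating a long sum in single precision drifts visibly, while double precision stays close to the exact answer.

    #include <stdio.h>

    int main(void)
    {
        float  sum_f = 0.0f;
        double sum_d = 0.0;

        // Add 0.1 ten million times; the exact answer is 1,000,000.
        for (int i = 0; i < 10000000; ++i) {
            sum_f += 0.1f;   // single precision: rounding error accumulates
            sum_d += 0.1;    // double precision: error stays negligible here
        }

        printf("single: %f\n", sum_f);   // noticeably off from 1000000
        printf("double: %f\n", sum_d);   // very close to 1000000
        return 0;
    }

Real codes use compensated summation and better-conditioned algorithms, but the same effect is why double precision (or at least emulation of it) matters for GPU number crunching.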

why is GPU so great (FOR MATH) --Parallel process (1)

Infoport (935541) | more than 7 years ago | (#16789473)

Why is a GPU so great FOR MATH? Parallel processing (it's on page 2 of the Wired article linked at the top of the Slashdot summary). If you need lots of branching and decision making, it is not as good. The better bandwidth etc. sure helps, but parallel processing is the main part of it. That is why they are so great for tasks such as the number crunching involved in graphics (3D is done not by "moving the points" but by changing the base axis around the points - this is a way of visualizing the math done to transform those point locations to the new point locations when a 3D figure "moves").

So *some* parts of computer transactions can be done in parallel, but if much needs to be serial, it will ALL be slowed down by the (serial) process that decides which parts go where. If you can make your problems purely non-serial - like math that can be done in chunks and reassembled, without conditions that affect processing between the chunks - THOSE problems can benefit from a parallel processor. Parallel processors are NOT new, but in the home-computing market they just happen to be represented by GPUs, math co-processors, and not a lot else (probably dedicated cryptography chips too). If there is more demand at home, there will be more manufactured for the home. Currently, games and video are the main home demands, although home audio studios could probably benefit if those people were to demand a lot more COMPLEX digital signal processing on the fly (maybe more likely in audio soundboards?).

Compilers can also compile out of order, which is why a C compiler can benefit - there is a static end result for a given compiler input, with no interaction and no choices not defined by the input.

Infoport
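
As a concrete, purely illustrative version of the transform described above, here is a CUDA-style kernel (my own sketch, with assumed buffer names) that applies a single 4x4 matrix to a whole array of points in parallel, one thread per point:

    // Transform homogeneous points (x, y, z, w) by a 4x4 row-major matrix m.
    // float4 is CUDA's built-in 4-component vector type.
    __global__ void transform_points(const float *m, const float4 *in,
                                     float4 *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        float4 p = in[i];
        out[i].x = m[0]*p.x  + m[1]*p.y  + m[2]*p.z  + m[3]*p.w;
        out[i].y = m[4]*p.x  + m[5]*p.y  + m[6]*p.z  + m[7]*p.w;
        out[i].z = m[8]*p.x  + m[9]*p.y  + m[10]*p.z + m[11]*p.w;
        out[i].w = m[12]*p.x + m[13]*p.y + m[14]*p.z + m[15]*p.w;
    }

Every point gets exactly the same arithmetic with no data-dependent branching, which is why this kind of work maps so well onto the hardware.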

goakt (-1, Flamebait)

Anonymous Coward | more than 7 years ago | (#16789481)

Better computers? (1)

SpaceAdmiral (869318) | more than 7 years ago | (#16789483)

Folding@Home launched a GPU beta last month that is now publishing data putting donated GPU performance at 20-40 times the efficiency of donated CPU performance.

Obviously some of that is due to GPUs being better than general-purpose CPUs at this sort of math, but how much is also due to the fact that the people who are willing to run a Beta version of Folding@Home on their GPU tend to be the sort of people who would have much better computers overall than those who are merely running the project on their CPUs?

GPGPUs (0)

Anonymous Coward | more than 7 years ago | (#16789487)

To the guy who utilized pixel shaders as threads for his 'thesis': when you do research, one of the most important steps is background research (related work). E.g., anything on http://www.gpgpu.org/ [gpgpu.org] should have provided a cue for original work. Von Neumann machines are out; they clearly don't scale. Scheme taught us that local mutation may be OK, but global not so much. GPU coding exemplifies this assumption and, more interestingly, makes HPC apps accessible :) I hope to work on this stuff next year, assuming funding :)

But can it *run* Linux? (1)

thule (9041) | more than 7 years ago | (#16789493)

Intel wants more GPU power in their CPU's. NVidia is using features of their GPU to do problem solving. Which one will win out?

Re:But can it *run* Linux? (1)

joto (134244) | more than 7 years ago | (#16795820)

Intel wants more GPU power in their CPU's. NVidia is using features of their GPU to do problem solving. Which one will win out?
The answer is obvious. Until GPUs run Windows - and motherboards have slots for GPUs, with the CPU added on a special "processing board" connected over a fast PCI Express slot - Intel wins.

How will this stack up against the IBM Cell? (0)

Anonymous Coward | more than 7 years ago | (#16789549)

From my minimal examination, it seems that the IBM Cell processor is another version of this idea: a CPU that is good at pipelined computation, without virtual memory, that does not support an OS. http://en.wikipedia.org/wiki/IBM_Cell [wikipedia.org]

These CPUs are not that easy to program, but they run screamingly fast when you make them work right. I think that IBM may have the edge in putting lots of these together for a cluster computing environment (insert Beowulf joke here). IBM is planning a very large Cell-based system, an additional member of the Blue Gene family. http://en.wikipedia.org/wiki/IBM_Roadrunner [wikipedia.org]

On the other hand, GPUs will have a huge advantage because of the size of the graphics card market. This will drive prices down and make GPU based computation available to the masses.

hybrids (2, Interesting)

Bill Dog (726542) | more than 7 years ago | (#16789607)

The following idea from TFA is what caught my eye:
"In a sign of the growing importance of graphics processors, chipmaker Advanced Micro Devices inked a deal in July to acquire ATI for $5.4 billion, and then unveiled plans to develop a new "fusion" chip that combines CPU and GPU functions."

I can see the coming age of multi-core CPUs not necessarily lasting very long now. We don't tend to need a large number of general-purpose CPUs. But a CPU+GPU chip, where the GPU has, for example, 128 1.35GHz cores (from the Nvidia press release), with a new generation of compilers written to funnel sections of code marked parallelizable to the GPU portion and the rest to the CPU, would be tremendous.

Does Intel have any plans to try to acquire Nvidia?

Where are the GPU-assisted encoders? (1)

PingXao (153057) | more than 7 years ago | (#16789609)

nVidia has PureVideo, ATi has whatever. Why are there still no GPU-assisted MPEG-2 (or any other format) video encoders? Modern GPUs will do hardware-assisted MPEG decoding, but software-only encoding is still too slow. TMPGEnc could be much faster. Same for the others. It seems as though the headlong rush to HD formats has left SD in the dust.

Re:So... (1)

evilviper (135110) | more than 7 years ago | (#16789669)

PC's are moving towards a single 'super chip' that will do everything while phasing out the use of a truly separate graphics system.

Umm... No. There's no evidence of that at all.

Doesn't that strike anybody else as a little odd?

Nope. Making the GPU just another core on the CPU chip would make PCs better able to utilize the GPU quickly for these types of tasks.

accuracy problems (1)

simscitizen (696184) | more than 7 years ago | (#16789925)

Great if you want fast answers, but the RAM used in GPUs isn't as robust accuracy-wise as normal RAM.

Re:accuracy problems (1)

joto (134244) | more than 7 years ago | (#16795876)

You seriously need to back up this statement with some hard data, otherwise most people will think you are bullshitting us (and so do I). Link please?

Forget rootkits (0)

Anonymous Coward | more than 7 years ago | (#16789937)

Soon trojans will be able to put entire mini-OSs on the video card...

Hmm, is anyone porting NetBSD to it yet?

(I'm not serious, well, maybe a little.)

Workshop and Tutorials at SC'06 (1)

pingbak (33924) | more than 7 years ago | (#16789951)

<shameless plug>
While it's probably too late to sign up for the general-purpose GPU tutorial at Supercomputing '06, there may still be time to get to the "General-Purpose GPU Computing: Practice and Experience" workshop (assuming you're going to Supercomputing to begin with.) Workshop's web page is http://www.gpgpu.org/sc2006/workshop/ [gpgpu.org]

The workshop itself has turned into a kind of "GPU and multi-core" forum, with lots of great speakers. NVIDIA's Ian Buck and ATI's Mark Segal will both be speaking to the Wired article's material. And IBM and Los Alamos will be talking about Cell and Roadrunner, among other things.
</shameless plug>

So, I wonder what Dinesh Manocha will be talking about at the workshop... Hmmm....

Re:Another overview article NYTIMES and literature (3, Informative)

pingbak (33924) | more than 7 years ago | (#16789977)

Google "Dominik Goeddeke" and read his GPGPU tutorial. It's excellent, as far as tutorials go, and helped me bootstrap.

Ok, ok, here's the link [uni-dortmund.de] ...

AVIVO (1)

Namarrgon (105036) | more than 7 years ago | (#16790017)

ATi has AVIVO, and they've been doing hardware-assisted encoding in a variety of formats for some time now. Google it up.

What about FPGAs (1)

Colin Smith (2679) | more than 7 years ago | (#16790091)

GPUs, OK... but supercomputers mean specific applications and custom code. I'd have thought FPGAs would be an ideal fit for that.

Not symmetric (1)

Namarrgon (105036) | more than 7 years ago | (#16790099)

Intel's 80 core chip wasn't symmetric; most of those cores were stripped-down processors, not x86 standard. Like the Cell, only more so.

nVidia's G80, while not on the same chip, takes this to 128 cores. G90 will support full double-precision math. And although it's separate from the CPU, graphics cards are such a standard part of most systems that by the time five years have elapsed, you'll likely be able to get a quad-core x86 + 256-core DP gfx/HPC system for somewhat less than Intel's fancy new 80-core release alone.

Re: What makes a GPU so great (1, Insightful)

tayhimself (791184) | more than 7 years ago | (#16790125)

Unfortunately, the new NV80 is still not IEEE 754 compliant for single-precision (32-bit) floating-point math. It is mostly compliant, however, so it may be usable by some people. Forget it if you want to do 64-bit double-precision floats, though.

CPUs and GPUs. (1)

karthikkumar (814172) | more than 7 years ago | (#16790635)

CPUs are inherently good at doing serial jobs, and GPUs are good at doing parallel jobs. GPUs can be thought of as the extreme enhanced graphical equivalent of DSP chips. So basically, any combination of a controlling and parallel execution processor can give you the supercomputing environment you need. Which again brings us back to our traditional supercomputing model; except for one change, that the mathematical units have grown faster and massively parallel in nature! We haven't done much past anything turing computable anyway: chips growing faster, doing the same thing. So, you first had the CPU. Then you wanted faster graphics. So you separate it and have the GPU (which is more than a video adapter). Then you want faster computation. So you put it together ('cept the video adapter). It's crazy, but they keep shuffling things over the years, people aren't bored of it anyway, everybody buys stuff and everybody wins. Now again: what does a GPU do that a CPU can't?

Re:CPUs and GPUs. (1)

joto (134244) | more than 7 years ago | (#16796162)

the mathematical units have grown faster and massively parallel in nature!

This is not English. Even if I try, I can't guess what you are trying to say. It makes no sense whatsoever...

We haven't done much past anything turing computable anyway

Please understand what "turing computable" means. All computers are devices capable of emulating a Turing machine (at least if the problem fits within RAM, etc.). And a computer is something you can emulate on a Turing machine. Your criticism is like someone complaining that "in the field of cars, we haven't done much past transportation anyway".

It's crazy, but they keep shuffling things over the years, people aren't bored of it anyway, everybody buys stuff and everybody wins.

It's not crazy. It's common sense. If someone comes up with a better part for a computer, someone is going to buy it. If someone comes up with a better part for a lawn mower, someone is going to buy it. Despite what you may think when you see today's youth, I believe common sense is here to stay.

Now again: what does a GPU do that a CPU can't?

It can run certain computational tasks faster.

Re:power management (1)

et764 (837202) | more than 7 years ago | (#16790736)

I value my electricity more than contributing to folding and SETI.

I've read this isn't quite as much a waste of electricity as it seems, at least during the winter if you have electric heating. The majority of the energy consumed by your CPU goes into thermal energy, which your heatsink dissipates into the air. Thus every watt your CPU burns is one watt your furnace doesn't have to burn to keep your house warm enough. I'm sure it doesn't work out perfectly, but one way you're running a whole bunch of electricity through a big resistor that doesn't do anything but get hot, and the other way you're running a whole bunch of electricity through a big resistor that happens to solve interesting and useful problems while getting hot.

.NET or Java VM should run on one (0)

Anonymous Coward | more than 7 years ago | (#16790994)

I'm all for turning my 1-CPU machine + graphics card into a 2-CPU machine when a .NET or Java VM runs on the GPU. Following that, I'd like to see a $25 micro-PC consisting of a boot ROM and I/O interface ports (USB, video, etc.) that lets an off-the-shelf graphics card run as the main CPU - a machine without a motherboard CPU.

- Graphics card
- Microcontroller for I/O board with 1 slot for graphics card + USB ports
- Boot ROM
- Tiny wall plug-in transformer
- Basic tiny router-sized case
----
for $25 or less

Re: accuracy problems (3, Informative)

AKAImBatman (238306) | more than 7 years ago | (#16791278)

Parent Post [slashdot.org]

Great if you want fast answers, but the RAM used in GPUs isn't as robust accuracy-wise as normal RAM.

You're confusing your technologies. The RAM used on video cards these days is effectively the same RAM you use with your CPU. The memory cannot lose data or very bad things will happen to the rendering pipeline.

What you're thinking of is the intentional inaccuracy of the floating point calculations done by the GPU. In order to obtain the highest absolute graphical performance, most 3D drivers optimized for gaming attempt to drop the precision of the calculations to a degree that's unacceptable for engineering uses, but perfectly acceptable for gaming. NVidia and ATI make a lot of money by selling "professional" cards like the Quadro and the FireGL to engineering companies that need the greater precision. A lot of the difference is in the drivers (especially for the low-end models), but the cards do often have hardware technologies better suited to CAD-type work.

GPUs good for a FEW supercomputing tasks. (1)

flaming-opus (8186) | more than 7 years ago | (#16791360)

First of all, the GF8800 has the same deficiency the Cell has: both are really only good at performing single-precision floating-point math. This is great for video processing and the like, but real science has been using 64-bit floats since the mid-70s. It might be hard to convince users that they can get the wrong answer, but it'll be really cheap and really fast.

Secondly, the bandwidth to memory is very high, but the amount of addressable memory is very, very low. 768 MB of memory divided by 128 processing units means that the entire problem set for each PE needs to fit in 6 MB, otherwise you're bottlenecked going to main memory. Game rendering conveniently tends to reuse a lot of data, and that data compresses very well in memory. Not so with real science data. This is quite analogous to the problems a lot of scientists are having with Blue Gene, which has 256 MB of memory available to each PE.

This is not to say that HPC computing on the GPU won't happen; it will just be fairly limited in the number of problems that port well to that environment. For those that do, however, you can't beat the bang for the buck. I suspect that this is mostly for game physics and video transcoding, as those are things that Nvidia/AMD can sell as added value. Anything else just doesn't seem to provide much additional revenue, so I can't imagine them putting a lot of effort into supporting it.

Re: What makes a GPU so great (1)

bug1 (96678) | more than 7 years ago | (#16791624)

GPUs are slower than a CPU for serialised operations.

They're great for highly parallel, processor-bound applications, but for anything close to user-level apps they're just a waste of silicon.

8087 (4, Funny)

Bo'Bob'O (95398) | more than 7 years ago | (#16791768)

"GPUs have dedicated circuitry to do math, math, and more math - and to do it *fast*. In a single cycle, they can perform mathematical computations that take general-purpose CPUs an eternity, in comparison."

Sounds like there is a lot of untapped potential. I propose we move GPUs off the external cards and give them their own dedicated spot on the motherboard. Though, since we will be allowing it to be used for more general applications, we could just call it a Math Processor. Then again, it's not really a full processor like a dual core, so we'll just call it a Co-Processor. This new "Math Co-Processor" will revolutionize PCs like nothing we have ever seen before. Think of it: who would have thought 20 years ago we could have a whole chip just for floating-point math!

serial & parallel (1)

jbloggs (535329) | more than 7 years ago | (#16791826)

It's quite obvious that computing is going in a direction where we won't say GPUs or CPUs, but rather serial processors and parallel processors, with the assumption of having both. The Cell processor is a good example of this thinking, although it's too heavy on the parallel side. Many tasks do not parallelize well, and will still need a solid serial processor.

So the Transputer was a good idea after all. (1)

master_p (608214) | more than 7 years ago | (#16792036)

Remember this [wikipedia.org]? Although it was a commercial failure, it was the right idea after all: lots of small processing units able to process big chunks of data in parallel; that's what modern GPUs do.

So what we need now is for this kind of architecture to pass into CPUs (maybe already scheduled, from what I've read lately), and then a programming language where operations are parallel except when data dependencies exist (functional languages may be good for this task).

Yes, but can it do DP? (1)

Phatmanotoo (719777) | more than 7 years ago | (#16792404)

Until these things are able to do double precision, their applicability to general HPC problems remains very limited. Make them do DP arithmetic, benchmark with SPEC and McCalpin's STREAM benchmark, and then we'll see. Oh, and BTW, make a Fortran 90/95 compiler available.

Come on MPEG4 and MP3 acelleration... (1)

Brit_in_the_USA (936704) | more than 7 years ago | (#16792408)

Perhaps finally, we will see the popular commercial/shareware/freeware programs taking advantage of GPU acceleration.

There are two main areas that I would love to see accelerated by the GPU: DivX (or other MPEG-4) codecs, and MP3 codecs.

Due to the asymmetry in CPU usage, it is the ENCODING that would be revolutionized by GPU acceleration. I am sure I am not alone in thinking of these two areas as the most time-consuming tasks my home PC is set upon. Yes, ATI may have a solution, but I want to see support for both Nvidia and ATI in a more generally available encoder.

Shared library acceleration. (1)

ChunderDownunder (709234) | more than 7 years ago | (#16794848)

brainstorming here, forgive me if it isn't at all feasible...

My rationale is something along the lines of how Apple may have implemented hardware-assisted vector operations: falling back to scalar equivalents when AltiVec wasn't available.

On kernel startup (or dynamically, assuming hot-swapping GPUs!) the system could load a configuration for a shared library to take advantage of GPU acceleration. Whether this happens when coding to a specific API, or could somehow be trapped in the platform C library or at the kernel level, I'll leave as an exercise for the reader. [I wouldn't know; I just program for a well-known virtual machine. But as that VM might soon be GPLed [slashdot.org], hopefully some well-meaning soul will transparently integrate such a technology, at least for its math libraries. E.g., offloading work to the GPU has already yielded massive improvements in Swing's performance without touching a line of application-level code, or recompiling for that matter. Effectively, that virtual machine saw a HW upgrade!]
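
A rough sketch of that fallback idea (entirely hypothetical; apart from the cudaGetDeviceCount() call from the CUDA runtime, every name here is made up): the shared library probes for a usable GPU once at startup and routes its math routines accordingly, so application code never changes.

    #include <cuda_runtime.h>

    typedef void (*saxpy_fn)(int n, float a, const float *x, float *y);

    // Placeholder backends: a host wrapper that launches a GPU kernel,
    // and a plain scalar loop for machines without a suitable GPU.
    extern void saxpy_on_gpu(int n, float a, const float *x, float *y);
    extern void saxpy_scalar(int n, float a, const float *x, float *y);

    static saxpy_fn saxpy_impl;

    void mathlib_init(void)
    {
        int devices = 0;
        // If the query fails or finds no device, fall back to the scalar path.
        if (cudaGetDeviceCount(&devices) == cudaSuccess && devices > 0)
            saxpy_impl = saxpy_on_gpu;
        else
            saxpy_impl = saxpy_scalar;
    }

    // Applications always call this; the backend is chosen once at init time.
    void mathlib_saxpy(int n, float a, const float *x, float *y)
    {
        saxpy_impl(n, a, x, y);
    }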

what about a Fortran compiler? (1)

dario_moreno (263767) | more than 7 years ago | (#16793728)

Everyone knows that the language of supercomputing is Fortran, for historical reasons (legacy code) as well as truly practical ones: it's a braindead-simple language (very good for compiler optimizations and automatic rewriting) with efficient and predictable handling of linear algebra (loop unrolling, peephole optimization, optimal memory access without pointer indirection or heavy objects to pass between functions), which is the core of heavy numerical computing. What are they waiting for to release a Fortran compiler for the GPU? Many chemical (Gaussian, GAMESS), physical (WIEN, VASP...), biological, and engineering packages (STAR-CD), as well as math libraries (ScaLAPACK), are written in Fortran.

Sounds like Nvidia want to be the new Intel (0)

Anonymous Coward | more than 7 years ago | (#16795814)

As a 3D graphics engine programmer, I like programming 3D graphics engines. I like GPUs. I like the fact they are tailored to graphics, and the clever things the artistic graphics programmers can do with them.

So, when Nvidia announce CUDA I start looking for the April Fools joke.

Alas, it's not :(

They are genuinely creating a generalised multi-cell processor on their GPUs. The only purpose of this is to move the processing of generalised problems from one chip (the x86) to another (an Nvidia GPU). It's easier to get people to replace their graphics card than their CPU, I guess.

Not one of the examples in the announcement highlighted how graphics are to be improved with this mechanism. Guess what the G in GPU stands for...