
Transcoding in 1/5 the Time with Help from the GPU

CmdrTaco posted more than 8 years ago | from the cycles-are-cycles dept.

Graphics 221

mikemuch writes "ExtremeTech's Jason Cross got a lead about a technology ATI is developing called Avivo Transcode that will use ATI graphics cards to cut the time it takes to transcode video by a factor of five. It's part of the general-purpose computation on GPUs movement. The Avivo Transcode software can only work with ATI's latest 1000-series GPUs, and the company is working on profiles that will allow, for example, transcoding DVDs for Sony's PSP."




Waaaaah! (-1, Troll)

Tomchu (789799) | more than 8 years ago | (#13933433)

Cue: But does it support Linux? Waaah!

Baby need a diaper?

Re:Waaaaah! (-1, Offtopic)

Anonymous Coward | more than 8 years ago | (#13933486)

For your unfunny comments, I subscribed you to a bunch of pr0n lists, a few spam lists and other assorted stuff.

Perhaps you will find teh funnay from there.

Re:Waaaaah! (-1, Troll)

Anonymous Coward | more than 8 years ago | (#13933740)

Hooray for proprietarytechnology! Hooray for Windows-only SDKs!

No, wait! Do we like it or do we loathe it? If it were Nvidia, then we would of course be all fuzzily smiling and being happy, but do we like ATI?

P.S.: M$ sux, i swear im going to fucking kill ballmer

Transcode? (-1, Troll)

LilGuy (150110) | more than 8 years ago | (#13933439)

Who cares about transcoding, I want transgendering!

Re:Transcode? (0)

Anonymous Coward | more than 8 years ago | (#13933469)

I want transgendering!

You don't have to fly to Sweden anymore for that.

Re:Transcode? (0)

Anonymous Coward | more than 8 years ago | (#13933546)

Why? Do you want to date yourself?

Re:Transcode? (0, Offtopic)

LilGuy (150110) | more than 8 years ago | (#13933678)

I wouldn't date myself if I were the last person on Earth.

first post! (-1, Offtopic)

capninsano (913991) | more than 8 years ago | (#13933440)


Re:first post! (-1, Offtopic)

Anonymous Coward | more than 8 years ago | (#13933471)

Too slow

Will this let... (-1, Redundant)

pvt_medic (715692) | more than 8 years ago | (#13933454)

Us have smaller computers??

great... (1, Insightful)

know1 (854868) | more than 8 years ago | (#13933464)

Added power for the user, just as those wacky Halloween shenanigans arrive to cripple hardware such as this so it only pipes a tune when it's been paid for

This would be great for MythTV.. Linux support?? (4, Insightful)

tji (74570) | more than 8 years ago | (#13933466)

My educated guess is, No, there won't be Linux support..

ATI was the leader in MPEG2 acceleration, enabling iDCT+MC offload to their video processor almost 10 years ago. How'd that go in terms of Linux support, you ask? Well, we're still waiting for that to be enabled in Linux.

Nvidia and S3/VIA/Unichrome have drivers that support XvMC, but ATI is notably absent from the game they created. So, I won't hold my breath on Linux support for this very cool feature.

Re:This would be great for MythTV.. Linux support? (1)

ZephyrXero (750822) | more than 8 years ago | (#13933517)

Hasn't nVidia been talking about using the GPU for video acceleration since the GeForce 5 came out? I don't understand why this isn't already available...

Re:This would be great for MythTV.. Linux support? (1)

Surye (580125) | more than 8 years ago | (#13933730)

It's had video acceleration since the GF3. I don't know what you're talking about. Maybe you're talking about hardware encoding(VIVO since GF3 AFAIK)? Or video encoding on GPU(Never heard this, would like to see a link)?

Re:This would be great for MythTV.. Linux support? (2, Informative)

EpsCylonB (307640) | more than 8 years ago | (#13934080)

When I got my 6600GT, the box it came in said it could do hardware MPEG2 encoding; obviously this is not the case. I remember reading somewhere that Nvidia originally wanted the 6xxx series to be able to do loads of on-board video stuff, but they couldn't get it working in time. It's a real shame.

Re:This would be great for MythTV.. Linux support? (5, Interesting)

ceoyoyo (59147) | more than 8 years ago | (#13933533)

This should be written in shader language (or whatever it's called these days), which is portable between cards. There's no reason NOT to release this on any platform. Since it only runs on the latest ATI cards, it probably uses some feature that nVidia will have in its next batch of cards as well. If ATI doesn't release it for Linux and the Mac, hopefully it won't be that difficult to duplicate their efforts. After all, shader programs are uploaded to the video driver as plain text.... ;)

Re:This would be great for MythTV.. Linux support? (1)

PsychicX (866028) | more than 8 years ago | (#13934357)

Bleh, speaking of shader languages. It'd be nice if they spent a little less time on obscure video processing features and a little more time on implementing Shader Model 3.0 properly. Their lack of texture lookups in the vertex shader is weak.

Re:This would be great for MythTV.. Linux support? (2, Informative)

ratboy666 (104074) | more than 8 years ago | (#13933651)

GPU stream programming can be done with Brook []. Brook supports the nVidia series, so that is what you purchase.

Pick up a 5200FX card (for SVIDEO/DVI output) and then use the GPU to do audio and video transcode. I have been thinking about audio (MP3) transcode as a first "trial" application.

"Heftier" GPUs may be used to assist in video transcode -- but it strikes me that the choice of stream programming system is most important (to allow code to move to other GPUs, driver permitting). I think that nVidia also supports developers using the GPU (there are comments and test results generated by nVidia available on the 'web). So far, not much from ATI, so I think nVidia gets the nod...


Re:This would be great for MythTV.. Linux support? (1)

Shinobi (19308) | more than 8 years ago | (#13933783)

There's also the fact that up until the R5x0 GPU's, only Nvidia and 3D Labs supported 32-bit floats, making it easier to implement features like this.

Re:This would be great for MythTV.. Linux support? (1)

Surye (580125) | more than 8 years ago | (#13933798)

Thanks for the link, this looks fun. I'll have to see about playing around with it, and maybe some simple benchmarking.

Re:This would be great for MythTV.. Linux support? (5, Interesting)

thatshortkid (808634) | more than 8 years ago | (#13934390)

wow, for once there's a slashdot article i have insight on! (whether it's modded that way remains to be seen.... ;) )

i would actually be shocked if there weren't linux support. the ability to do what they want only needs to be in the drivers. i've been doing a gpgpu feasibility study as an internship and did an mpi video compressor (based on ffmpeg) in school. using a gpu for compression/transcoding is a project i was thinking of starting once i finally had some free time, since it seems built for it. something like 24 instances running at once at a ridiculous number of flops (puts a lot of cpus to shame, actually). if you have a simd project with 4D or under vectors, this is the way to go.

like i said, it really depends on the drivers. as long as they support some of the latest opengl extensions, you're good to go. languages like Cg [] and BrookGPU [], as well as other shader languages, are cross-platform. they can also be used with directx, but fuck that. i prefer Cg, but ymmv. actually, the project might not be that hard, it just needs enough people porting the algorithms to something like Cg.

that said, don't expect this to be great unless your video card is pci-express. the agp bus is heavily asymmetric towards data going out to the gpu. as more people start getting the fatter, more symmetric pipes of pci-e, look for more gpgpu projects to take off.
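A rough illustration of the 4D-vector point above. This is a pure-Python stand-in invented for this comment (a real version would be a Cg or Brook kernel): every RGBA pixel gets the same branch-free arithmetic, which is exactly the shape of work a GPU's parallel pipes want.

```python
# Pure-Python stand-in for a per-pixel kernel; all names are invented
# for illustration. A shader would run scale_pixel once per pixel,
# across many pipelines in parallel.

def scale_pixel(pixel, factor):
    """Apply one uniform, branch-free operation to all four RGBA components."""
    return tuple(min(255, int(c * factor)) for c in pixel)

def scale_frame(frame, factor):
    # Every pixel is independent, so this loop is exactly the kind of
    # work that could be split across the GPU's parallel units.
    return [scale_pixel(p, factor) for p in frame]
```

Since each output pixel depends only on its input pixel, the whole frame is embarrassingly parallel, which is why it maps so cleanly to GPU hardware.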

Of course (-1, Flamebait)

Anonymous Coward | more than 8 years ago | (#13933488)

They would announce this just two days after I finished transcoding all my DVDs...

Slashdotted! (4, Funny)

Anonymous Coward | more than 8 years ago | (#13933494)

I wonder if [] could offload some of the Slashdot effect to their GPU?

Re:Slashdotted! (1)

mikael (484) | more than 8 years ago | (#13933685)

I heard there's a startup that has just announced a slashdot coprocessor board - it automatically searches for and downloads slashdot articles you might be interested in reading - unfortunately, it never stops and completely hogs your connection, even a 1 Terabit connection.

404 file not found (-1, Offtopic)

Anonymous Coward | more than 8 years ago | (#13933498)

About 2 min ago, when I clicked the Read More link from the home page (before there were any posts), I got a 404 error for this article... went to the bathroom, came back, tried again and it worked... could anyone confirm this bug? And maybe some possible clues as to what is causing it so we can make a detailed bug report?

Oh yeah, where is the offtopic karma :-)

Re:404 file not found (0)

Anonymous Coward | more than 8 years ago | (#13933530)

We need more info for the bug report. Did you go for a #1 or a #2?

Re:404 file not found (0, Offtopic)

Kasracer (865931) | more than 8 years ago | (#13934001)

Had this happen to me once or twice but not on this article

What I want to see. (5, Interesting)

Anonymous Coward | more than 8 years ago | (#13933514)

Maybe others have had this idea. Maybe it's too expensive or just not practical. Imagine using PCI cards with a handful of FPGAs on board to provide reconfigurable heavy number crunching abilities to specific applications. Processes designed to use them will use one or more FPGAs if they are available, else they'll fall back to using the main CPU in "software mode."
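A minimal sketch of the fallback pattern described above, with all names invented for illustration (no real FPGA board or driver API is implied): prefer the accelerator when a driver reports one, otherwise drop to "software mode".

```python
# FakeFPGA and its 'available' flag are hypothetical; a real flag
# would come from a driver probe at startup.

def crunch_software(data):
    """Pure-CPU fallback ("software mode"): sum of squares."""
    return sum(x * x for x in data)

class FakeFPGA:
    """Stand-in for a PCI FPGA board."""
    available = False

    def crunch(self, data):
        # A real board would stream `data` over the bus into gates;
        # here we just compute the same answer on the CPU.
        return sum(x * x for x in data)

def crunch(data, board=None):
    # Use the reconfigurable hardware when present, else the main CPU.
    if board is not None and board.available:
        return board.crunch(data)
    return crunch_software(data)
```

Code written this way keeps working unchanged whether or not the card is installed, which is the appeal of the idea.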

Re:What I want to see. (1)

Enigma_Man (756516) | more than 8 years ago | (#13933639)

That's a really cool idea. I've had ideas that were along that line, but never quite made it through the thought process to what you are suggesting. It's like having an external floating-point processor, but extremely general-purpose and reconfigurable. That'd be a great component to have on one of the new PCI-Express boards, those have tons of available bandwidth that you could use up if what you were processing required lots of I/O, even on the 1x slots.


Re:What I want to see. (1)

networkBoy (774728) | more than 8 years ago | (#13934427)

This already exists.
One such company is Cyclone Microsystems. They offer i960 coprocessor based systems.
I don't remember the other vendor I looked at, but they offered a Xilinx FPGA solution or a TI DSP solution.

Already available.. (2, Insightful)

LWATCDR (28044) | more than 8 years ago | (#13933688)

I have seen a combo FPGA/PPC chip for embedded applications. The issue I see with this is how long it would be useful: FPGAs are slower than ASICs, and something like the Cell or a GPU will probably be faster than an FPGA. There are a few companies looking at "reconfigurable" computers, but so far I haven't heard of any products from them.

Re:Already available.. (1, Interesting)

Anonymous Coward | more than 8 years ago | (#13933814)

Yeah, FPGAs are indeed slower than ASICs. How long would it be useful? I was imagining if it ever got popular (like in every gamer's computer) they'd be upgradeable like video cards and CPUs where every year the technology gets better and frequencies go up.

I got the idea when I saw the work done on the saarcor [] hardware realtime raytracing architecture. They tested their work using FPGAs.

Re:Already available.. (3, Interesting)

tomstdenis (446163) | more than 8 years ago | (#13933932)

FPGAs aren't always slower than what you can do in silicon. AES [sorry, I have a crypto background] takes 1 cycle per round in most designs. You can probably clock it around 30-40 MHz if your interface isn't too stupid. AES on a PPC probably takes about the same time as on a MIPS, which is about 1000-1200 cycles.

Your clock advantage is about 10x [say]: a typical 400 MHz PPC vs. a 40 MHz FPGA... so that 1000 cycles is 100 FPGA cycles. But an AES block takes 11 FPGA cycles [plus load/unload time], so say about 16 cycles. Discounting bus activity [which would affect your software AES anyway], you're still ahead by ~80 FPGA cycles [800 PPC cycles].

Though the more common use for an FPGA aside from co-processing is just to make a flexible interface to hardware. E.g. want something to drive your USB, LCD and other periphs without paying to go to ASIC? Drop an FPGA in the thing. I assure you controlling a USB or LCD device is much more efficient in an FPGA than in software on a PPC.
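The back-of-envelope comparison above as a worked example. All figures are the post's own rough estimates, not measurements:

```python
# Rough numbers from the post: software AES at ~1000 cycles/block,
# a 10x clock advantage for the CPU, 11 pipelined FPGA rounds plus
# ~5 cycles of load/unload.
ppc_mhz, fpga_mhz = 400, 40
clock_ratio = ppc_mhz // fpga_mhz                           # 10x for the CPU

ppc_cycles_per_block = 1000                                 # software AES
ppc_in_fpga_cycles = ppc_cycles_per_block // clock_ratio    # = 100 FPGA cycles

fpga_cycles_per_block = 11 + 5                              # rounds + load/unload
advantage = ppc_in_fpga_cycles - fpga_cycles_per_block      # ~80-cycle win
```

So even after normalizing for the clock difference, the FPGA finishes a block in roughly a sixth of the (clock-adjusted) time.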


Re:Already available.. (1)

anonymous22 (826938) | more than 8 years ago | (#13934079)

Somebody mod the parent informative please.

Re:Already available.. (1)

ZorinLynx (31751) | more than 8 years ago | (#13934313)

>Florida Power and lights SUCKS 8 days without power and counting!

No, they rule. I saw them working in heavy rain to get my feeder back on. It came back later that night. They could have easily postponed the job until the next day, but they did it.

They have a lot of work on their plate; relax, they'll get to you.


Re:Already available.. (1)

Breakfast Pants (323698) | more than 8 years ago | (#13934492)

>They suck, they didn't fix my situation
No, no, no, they rock they fixed my situation.
Charles L. Stevenson would be proud.

Re:What I want to see. (1)

roguewraith (833497) | more than 8 years ago | (#13933695)

The Ohio Supercomputer Center is working on promoting FPGA applications. []

Re:What I want to see. (0)

Anonymous Coward | more than 8 years ago | (#13933701)

Imagine using PCI cards with a handful of FPGAs on board to provide reconfigurable heavy number crunching abilities to specific applications

So, what you're effectively saying is: "imagine a beowulf cluster of these!" ?

Re:What I want to see. (1)

non0score (890022) | more than 8 years ago | (#13933736)

IIRC, I think this "reconfigurable processing element" idea was pointed out on AMD's roadmap for something "in the future." Check Anand's recent AMD roadmap article for more information.

Re:What I want to see. (0)

Anonymous Coward | more than 8 years ago | (#13933782)

pro audio shops do this to offload certain effects and such to dsp cards. check out or been doing it for awhile. i'm sure it just takes software support.

Re:What I want to see. (1)

NixLuver (693391) | more than 8 years ago | (#13933801)

I would just point out that the GPU maths are usually limited precision, so they would lend little assistance for many high precision functions...

Hitachi SH4 [PS2] has 128-bit doubles. (1)

mosel-saar-ruwer (732341) | more than 8 years ago | (#13934474)

I would just point out that the GPU maths are usually limited precision, so they would lend little assistance for many high precision functions...

The Hitachi SH4 that powers the PlayStation2 can perform 128-bit double calculations in hardware [or so I'm told].

By contrast, Sun's SPARC has a "quad" precision [128-bit] double, but it's a software implementation.

I believe the chipset that powers IBM's Z390 mainframe can also do 128-bit doubles in hardware.

Re:What I want to see. (0)

Anonymous Coward | more than 8 years ago | (#13933861)

These things already exist. You can buy PCI plug-in modules from Nallatech ( [] ). Complete systems are also available from SGI ( leases/2005/september/rasc.html [] ).

From what I understand, it is still very hard to program these things, but software technology is starting to catch up. SGI has a version of GDB which is FPGA-aware.

Re:What I want to see. (1)

hrieke (126185) | more than 8 years ago | (#13933904)

Well, there was that joke about the SETI processing card [ 1 [] ] [fn1], and now there is a company building a general-purpose physics card for games (I wonder what else it would work on?), so taking this to the next step with a card full of FPGAs or the like isn't all that new an idea.
Seeing someone make some money off of it would be.
[fn1] - Bug in the HTML format posting ability: /. doesn't like two http:// [http] in the href URL. Oh well...

Re: probably slow. (1)

cryptor3 (572787) | more than 8 years ago | (#13933929)

This might work, but the question to ask is whether it would really be faster. FPGAs are usually a lot slower than ASICs, as another replier pointed out. One FPGA emulation that I saw didn't even run half as fast (in terms of compute time for a task) as the actual ASIC. And if the FPGA becomes the critical path in your processing, it had better be fast (or at least faster than your CPU).

So I think that this would only work if a general purpose CPU (or GPU, for that matter) has a serious architectural weakness for your particular computing application. And I think that would be rare, given that it is the job of CPU manufacturers to keep track of what kind of computation people are interested in, and architect the chips accordingly.

If there isn't a serious architectural weakness, it would probably be more cost-effective to make a system that bolts on another general purpose CPU (or GPU) into the PCI slot. But that could be fun.

Re:What I want to see. (1)

Moses_Gunn (778857) | more than 8 years ago | (#13933991)

Some Cray supercomputers have this ability today...and I think they are running Linux, too. :)

I'm rarely impressed... (2, Insightful)

HotNeedleOfInquiry (598897) | more than 8 years ago | (#13933529)

With tech stuff these days, but this is awesome. A very clever use of technology just sitting in your computer and a huge timesaver. Anyone that does any transcoding will have immediate justification for laying out bucks for a premium video card.

Re:I'm rarely impressed... (4, Interesting)

drinkypoo (153816) | more than 8 years ago | (#13933544)

I'd like to see it but I wonder what the quality is going to be like as compared to the best current encoders. I mean you can already see a big difference between cinema craft and j. random mpeg2 encoder...

Re:I'm rarely impressed... (3, Informative)

Dr. Spork (142693) | more than 8 years ago | (#13933747)

You don't get it. ATI is not releasing a new encoder. The test used standard codecs, which do the very same work when assisted by the GPU, only 5X faster.

Using their own codecs (4, Insightful)

no_such_user (196771) | more than 8 years ago | (#13933946)

It looks like they're using their own codec to produce MPEG-2 and MPEG-4 material. How would you get an existing, x86-only application to utilize the GPU, which is not x86 instruction compatible? It's a good bet that codecs will be rewritten to utilize the GPU once code becomes available from ATI, nVidia, etc.

I'd actually be willing to spend more than $50 on a video card if more multimedia apps took advantage of the GPU's capabilities.

Re:I'm rarely impressed... (0)

Anonymous Coward | more than 8 years ago | (#13933830)

That kinda misses the point. This is about going from MPEG2 -> MPEG4, etc., not about ultimate MPEG2 quality. People still go DVD9 to DVD-R, but more and more it's a DVD9/DV-cam to MPEG4/H.264 world. It's quite possible that the MPEG2 profile it uses isn't as good as CCE, but A) it can be improved and B) for 1/5 the time, does it matter?

Now of course there is the little matter of audio being ignored. *cough*. But still if these features are part of every gpu being sold in 1 year or so then it will be a real boon for transcoding.

Re:I'm rarely impressed... (1)

drinkypoo (153816) | more than 8 years ago | (#13934123)

A) it can be improved and B) for 1/5 the time does it matter?

A) Then improve it! Quality is everything. B) Of course it matters! Unless we're talking about sending someone a video email or something, and then we're probably not talking MPEG[24] but more like H.263. Granted, you do make that point, but since the target is much lower-resolution this is actually less important in that area. This is more significant when people are transcoding their MiniDV-source video to MPEG2 so they can put their home movies on DVDs and snail-mail them to their family members. Doing that with any decent quality requires multiple-pass encoding and still takes ages.

Re:I'm rarely impressed... (1)

poj (51794) | more than 8 years ago | (#13933634)

A very clever use indeed.
If you have hardware that does certain kinds of calculations much more efficiently than a general-purpose processor, why not use it?
From an engineering standpoint it's the right thing to do. A GPU is a specialized processor when it comes to certain mathematical operations. If you specialize you can be very good at what you do.

Re:I'm rarely impressed... (1)

BarryNorton (778694) | more than 8 years ago | (#13933704)

Anyone that does any transcoding will have immediate justification for laying out bucks for a premium video card
Hardly - I do a lot, but I wouldn't pay three hundred quid for this even though it is impressive...

But is it worth it? (3, Interesting)

Anonymous Coward | more than 8 years ago | (#13933545)

The X1800XT ties almost exactly with the 7800GTX at its stock 430 core in most gaming benchmarks.

With nVIDIA's 512MB implementation of the G70 core touted to run at a 550MHz core, it should theoretically thrash the living daylights out of the X1800XT. []

So the decision is between Avivo's encode and transcode abilities for H.264, or superior performance from nVIDIA's offering?

Re:But is it worth it? (2, Insightful)

Dr. Spork (142693) | more than 8 years ago | (#13933890)

Well, if you can see the difference between 150fps and 200fps, and you don't mind waiting and don't care about spending an extra $200, you really should wait for the G70.

I don't play the sort of games that need a graphics card over $200 to look good. I never even considered looking at the high end. However, this video encoding improvement will certainly make me do a double take. I was proud of my little CPU overclock that improves my encoding rate by 20%. But the article talks about improvements of over 500%! That's worth a couple of extra bucks.

Of course, by the time the software to do this actually becomes full-featured and useful, the price of the 1800 ATIs will hopefully drop a bit. Still, I have a feeling this will be my next GPU.

Unless nVidia can produce something equally impressive, of course!

Keep in mind (2, Insightful)

Solr_Flare (844465) | more than 8 years ago | (#13934327)

That while few people will notice the difference between 150fps and 200fps, those numbers are more or less there to help you determine the lifespan of the card itself. While, for current games, both cards will perform extremely well, a 50fps difference means that on future games, the Nvidia card will be able to last longer and run with more graphics options enabled without bottoming out on fps.

While a select few individuals still always buy the latest and greatest, the majority of buyers look at video cards as long-term investments, mainly because of the ridiculously inflated prices in the GPU market. All that said, I think you have to look at the card's feature set and make a decision based on that. While, gaming-wise, the Nvidia GPU may be superior, the dramatically reduced transcoding times definitely make the ATI card a potentially attractive purchase for people who work a lot with video. Given the amazing rise in popularity of the video iPod and the existing PSP market, the number of people with an interest in transcoding video is definitely on the rise, and ATI was smart to tap that market now.

Re:But is it worth it? (2, Insightful)

nine-times (778537) | more than 8 years ago | (#13934081)

Well, I'm assuming that the hope is that support for encoding/decoding h264 will be put into hardware going forward (meaning it will find its way into low-end cards as well). I know encoding h264 is the longest, most processor intensive task I do with a computer these days, and a hardware solution that would drop any time off that task would be appreciated.

Re:But is it worth it? (1)

centipetalforce (793178) | more than 8 years ago | (#13934420)

I think if you are a video professional (like me) and you've seen how obscenely slow rendering H.264 can be (it's an amazing codec), and you spend half your time waiting for rendering, then I think the answer is a profound YES, it is worth it (if it works).

Crippled? (4, Funny)

bigberk (547360) | more than 8 years ago | (#13933557)

But will the outputs have to be certified by Hollywood or the media industry? You know, because the only reason for processing audio or video is to steal profits from Sony, BMG, Warner, ... and renegade hacker tactics like A/D conversion should be legislated back to the hell they came from

Re:Crippled? (1)

ZachPruckowski (918562) | more than 8 years ago | (#13933907)

Why bother? If we force ATI and the other card creators to simply give themselves over to the MPAA companies, we're guaranteed that they'll never make something that can break the rules. For that matter, why don't we just let the MPAA run anything related to video, and the RIAA run anything related to audio? It'd be the perfect solution. We wouldn't have to worry about this kind of stuff, because we know they have our best interests at heart, and aren't remotely corrupt or greedy...

GPU or CPU? (3, Interesting)

The Bubble (827153) | more than 8 years ago | (#13933712)

Video cards with GPUs used to be a "cheap" way to increase the graphics processing power of your computer by adding a chip whose sole purpose was to process graphics (and geometry, with the advent of 3D accelerators).

Now that GPUs are becoming more and more programmable, and more and more general-purpose, what, really, is the difference between a GPU and a standard CPU? What is the benefit of having a 3D accelerator over having a dual-CPU system with one CPU dedicated to graphics processing?

Re:GPU or CPU? (1)

13bPower (869223) | more than 8 years ago | (#13933744)

Yeah really. I say go mini blade farm or whatever you people call them. maybe in a generation or 2.

Re:GPU or CPU? (1)

epaton (884617) | more than 8 years ago | (#13933777)

The GPU is faster for this specific task.

Re:GPU or CPU? (3, Insightful)

gr8_phk (621180) | more than 8 years ago | (#13933849)

"what, really, is the difference between a GPU and a standard CPU? What is the benefit of having a 3D accelerator over having a dual-CPU system with one CPU dedicated to graphics processing?"

In a few years, there will be no real benefit to the GPU. Not too many people write optimized assembly-level graphics code anymore, but it can be quite fast. Recall that Quake ran on a 90MHz Pentium with software rendering. It's only gotten better since then. A second core that most apps don't know how to take advantage of will make this all the more obvious.

On another note, as polygon counts skyrocket they approach single pixel size. When that happens, the hardware pixel shaders - that GPUs have so many of - become irrelevant as the majority of the work moves up to the vertex unit. Actually at that point it makes a lot of sense to move to raytracing (something I have fast code for) which is also going to be quite possible in a few more years on the main CPU(s). Ray Tracing is one application that really shows why the GPU is NOT general purpose. You need data structures and pointers mixed with fast math - preferably double precision. You need recursive algorithms. You'll end up wanting a MMU. By the time you're done, the GPU really would need to be general purpose. The problem doesn't map to a GPU at all, and multicore CPUs are nearing the point where full screen, real time ray tracing will be possible. GPUs will not stand a chance.
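For reference, the inner loop of a ray tracer is exactly the math-plus-control-flow mix the post describes. A minimal ray/sphere intersection sketch (assuming a normalized direction vector; invented for illustration, not anyone's production code):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Return the nearest positive hit distance along the ray, or None.

    `direction` is assumed normalized, so the quadratic's 'a' term is 1.
    The early-out branch on the discriminant is the kind of data-dependent
    control flow that maps poorly onto lockstep GPU pipelines.
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                      # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0     # nearer of the two roots
    return t if t > 0 else None
```

The math itself is GPU-friendly; it's the recursion and pointer-heavy scene structures wrapped around calls like this that the post argues favor the CPU.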

Re:GPU or CPU? (1)

oliverthered (187439) | more than 8 years ago | (#13934118)

On another note, as polygon counts skyrocket they approach single pixel size.

Or, as Microsoft may be doing, you use NURBS, splines and 3D solids and then get the GPU to generate the interpolated edges and vertices, so that circles look circular no matter how close you get. Polygons will start to be replaced by CSG and solid primitives.

Re:GPU or CPU? (1)

bersl2 (689221) | more than 8 years ago | (#13933860)

GPUs are highly parallel, far more so than a CPU. This makes them even better suited to vector operations than CPUs with SIMD.

What I want to know is whether, given the new-found programmability of the GPU, more pressure will be applied for ATI and nVidia to open up the ISAs to their graphics chipsets.

Re:GPU or CPU? (1)

eclectic2k (718299) | more than 8 years ago | (#13933873)

GPU != CPU. GPUs are VERY good at graphics - more like an "SSE chip" vector processor than a CPU with SSE/3DNow/Altivec tacked on. Here's a good read on GPUs as general-purpose photo & video processors: []

Re:GPU or CPU? (1)

JawnV6 (785762) | more than 8 years ago | (#13933895)

GPUs have a lot of units that do exactly the same thing in parallel. They can crunch a lot of data, but only if it's very parallel and easy to split up. They don't work well with a single instruction stream operating on one set of data.

CPUs are much more general-purpose. They handle branches and conditional code much better, for example, but are limited to working on a few points of data at a time. Extensions like MMX and SSE enhance the parallel ability of the CPU, but not to the same extent as a GPU.
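A toy contrast of the two styles (pure Python, invented just to make the shape of the work visible): the first function applies one operation uniformly to every element, the second branches per element.

```python
def gpu_style(data):
    # One instruction applied uniformly to every element: trivially
    # parallel, the shape of work GPU pipes are built for.
    return [x * 2 for x in data]

def cpu_style(data):
    # Data-dependent branching per element: fine on a CPU with branch
    # prediction, costly for lockstep parallel pipelines.
    out = []
    for x in data:
        out.append(x * 3 if x % 2 else x // 2)
    return out
```

On a GPU, every pipe in a group executes in lockstep, so the branchy version forces pipes to idle through the path they didn't take; the uniform version never does.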

Different architecture (0)

Anonymous Coward | more than 8 years ago | (#13933917)

CPU and GPU are designed for very different types of processing.

CPUs are general-purpose processors. They are superscalar, have medium-length pipelines, excellent dynamic instruction path selection support (i.e. branch prediction), have limited number of general-purpose registers, have large expressive instruction sets, can do very fast integer calculations, have access to system RAM, etc.

GPUs are specialized for parallel tasks, in particular rendering tasks. They have many parallel pipelines, lousy dynamic instruction path selection support (and very deep pipelines--200 cycles or more), large numbers of orthogonal registers, limited instruction sets with support for a variety of very fast floating point calculations, limited support for integer calculations, limited access to system RAM.

GPU pipelines are becoming more general, and ultimately you will be able to do almost any computational task on either type of processor, but GPUs will still be much better for certain types of task (highly parallelizable tasks) than CPUs.

Apples and pears (1)

oliverthered (187439) | more than 8 years ago | (#13934042)

GPUs are designed to do parallel bulk vector processing (which is why they can transcode faster than a CPU), but this also limits what kinds of applications or tasks you can reasonably offload to the GPU.

This means that "general purpose GPU" code isn't really going to be general-purpose; it's going to be heavily vector-oriented. On the other side, the CPU is more general-purpose, good at running many tasks and handling interrupts &c. For this reason the CPU won't replace the GPU and the GPU won't replace the CPU, no matter how many mflops you can squeeze out of either.

It would be nice to have protein folding [] done on the GPU, since it's a task a GPU should be good at.

Re:Apples and pears (0)

Anonymous Coward | more than 8 years ago | (#13934274)

If you really want to know, Stanford has been working on F@H on GPUs for a while now. []

Re:GPU or CPU? (2)

Jerry Coffin (824726) | more than 8 years ago | (#13934321)

What is the benefit to having a 3D accelerator over having a dual-CPU system with one CPU dedicated to graphics processing?

That depends on what you mean by the "one CPU dedicated to graphic processing." If you mean something on the order of a second Pentium or Athlon that's dedicated to graphics processing, the advantage is tremendous: a typical current CPU can only do a few floating point operations in parallel, where a GPU has lots of pipes to handle multiple pixels at a time (or multiple vertexes at a time, depending on which part of the pipeline you're looking at), and each pipe (at least potentially) does vector processing to work on all four pixel components at once.

The result of all that is that the GPU has substantially higher overall floating point throughput than the CPU does.

If, OTOH, what you're suggesting is that the second CPU that's dedicated to graphics processing be optimized for that by having lots of floating point hardware, a much larger number of parallel pipelines to process multiple pixels at once, etc., then what you're suggesting really comes down to pretty close to what we have right now, but re-naming the "GPU" as "secondary CPU".

In fairness, there are some differences even now. First of all, you program the GPU using a slightly different programming language that includes primitives for working on things like 3- and 4-element vectors, and for doing the kinds of things you typically have to do with them (e.g. compute normals) that require a series of instructions on a normal CPU.
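As a rough illustration of that difference, here is a sketch in NumPy (not a real shading language; the triangle vertices are made up for the example): computing a face normal is a pair of whole-vector primitives in GPU-style code, but a series of scalar instructions written out by hand on a conventional CPU.

```python
import numpy as np

# Hypothetical triangle for illustration only.
a = np.array([0.0, 0.0, 0.0])
b = np.array([1.0, 0.0, 0.0])
c = np.array([0.0, 1.0, 0.0])

# "GPU-style": cross product and normalize as whole-vector primitives.
n = np.cross(b - a, c - a)
n = n / np.linalg.norm(n)

# "CPU-style": the same normal written out component by component,
# a series of scalar multiplies, subtracts, and a square root.
e1, e2 = b - a, c - a
nx = e1[1] * e2[2] - e1[2] * e2[1]
ny = e1[2] * e2[0] - e1[0] * e2[2]
nz = e1[0] * e2[1] - e1[1] * e2[0]
length = (nx * nx + ny * ny + nz * nz) ** 0.5
normal = [nx / length, ny / length, nz / length]
```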

The other obvious difference is that the GPU normally has its own memory, mostly for the sake of improved bandwidth. You could more or less homogenize the memory, using (for example) half a dozen or so DDR channels to your main memory, and have all the processors share them symmetrically -- but that imposes some extra difficulties on design and would probably drive the price up considerably (or limit the overall design).

In particular, the main memory bus normally allows you to plug in varying numbers of varying sizes of memory modules, where the GPU typically has a specific number of modules of known sizes. This makes it much easier to design bus drivers in the GPU because the bus loading is known at design time. That's a large part of the reason motherboards are still transitioning to DDR 2 memory while high-end graphics cards are now universally using GDDR 3.

The other problem with that would be that it would then require essentially everybody to pay (most of) the price of a high-end graphics system whether they wanted it or not. Given the number of machines sold with (for example) Intel Integrated Graphics, it's pretty clear that most people are willing to sacrifice performance for lower price.

The universe is a figment of its own imagination.

GPU advantages over CPU? (1)

StarkRG (888216) | more than 8 years ago | (#13933733)

I'm sure there must be some, otherwise they wouldn't have them... How are GPUs specifically optimised for graphics work? Do they have built-in de/compression algorithms? Wouldn't it just be easier to have multiple CPUs?

Re:GPU advantages over CPU? (0)

Anonymous Coward | more than 8 years ago | (#13933825)

GPUs aren't so much optimised as built from the ground up to efficiently render data. GPUs are very far removed from the idea of central processing, an idea that Jen-Hsun Huang agrees with vigorously.

I'm a little sketchy on the details, as I haven't refreshed my memory for a while, but GPUs don't possess branch-prediction logic and similar designs that CPUs have. GPUs have dedicated transistors for most operations, such as rasterization, transform and of course shading.

Processing in GPUs can be considered a white-water sport for electrons: they get pushed through the set course as fast as possible and on a massive scale, requiring the CPU to sort the code before the GPU uses it.

Re:GPU advantages over CPU? (4, Informative)

tomstdenis (446163) | more than 8 years ago | (#13933838)

GPUs are massively parallel DSP engines. That makes them ideally suited for the task. They can do things like "let's multiply 8 different floats in parallel at once," which is useful when doing transforms like the iDCT or DCT, which are capable of taking advantage of the parallelism.

But don't take that out of context. Ask a GPU to compile the linux kernel [which is possible] and an AMD64 will spank it something nasty. *GENERAL* purpose processors are slower at these very dedicated tasks but at the same time capable of doing quite a bit with reasonable performance.

By the same token, a custom circuit can compute AES in 11 cycles [1 if pipelined] at 300MHz, which when you scale to 2.2GHz [for your typical AMD64] amounts to ~80 cycles. AES on the AMD64 takes 260 cycles. But ask that circuit to compute SHA-1 and it can't. Or ask it to render a dialog box, etc...
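The cycle comparison above is just the clock ratio scaled out; a quick sketch of that arithmetic (the figures come from the comment itself, not from any measurement):

```python
# Figures from the comment above: an 11-cycle AES circuit at 300 MHz,
# versus a 2.2 GHz AMD64 that needs ~260 cycles in software.
circuit_cycles = 11
circuit_clock_mhz = 300
cpu_clock_mhz = 2200
cpu_software_cycles = 260

# Express the circuit's latency in CPU clock cycles: 11 * (2200 / 300) ~ 80.
equivalent_cpu_cycles = circuit_cycles * cpu_clock_mhz / circuit_clock_mhz
speedup = cpu_software_cycles / equivalent_cpu_cycles
print(equivalent_cpu_cycles, speedup)
```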


Re:GPU advantages over CPU? (1)

StarkRG (888216) | more than 8 years ago | (#13934270)

Ahh, I see. So while you could get your GPU to handle compiling something, it's much better at rendering things. It'd be like using your hands to walk... How would something like that do with processing (rendering?) audio effects and such?

Re:GPU advantages over CPU? (1)

tomstdenis (446163) | more than 8 years ago | (#13934364)

Well, audio is also DSP, so I imagine it would do just fine. It's just a matter of covering the expense of going into and out of the GPU [e.g. data conversion, program upload, data upload, etc.].


Re:GPU advantages over CPU? (1)

Pope (17780) | more than 8 years ago | (#13933852)

You clearly don't get it. This is not about replacing CPUs with GPUs to transcode video, it's about using those GPUs which are most likely sitting idle to help the CPU do its job.

Re:GPU advantages over CPU? (1)

StarkRG (888216) | more than 8 years ago | (#13934227)

Yeah, actually I do get it. I was simply asking what the differences between GPUs and CPUs are, not saying there was none. What I don't get is this need for people on the internet to insult people or treat them in a gruff manner.

Yawn... (2, Interesting)

benjamindees (441808) | more than 8 years ago | (#13933751)

nVidia has been doing this for a while now. In fact, there are finally interesting implementations like GNU software radio [] on GPUs:

An Implementation of a FIR Filter on a GPU []

Re:Yawn... (2, Interesting)

ehovland (2915) | more than 8 years ago | (#13933902)

To see the latest generation of this work, check out their sourceforge page: []

Will all x1000 cards do this? (1)

wheaton (759947) | more than 8 years ago | (#13933767)

The article isn't specific on this; it just says that cards from this series should work. The reason I ask is that I would hate to see a repeat of the Nvidia PureVideo technology. I never could get that to work, and I don't think I am the only one. If it's only the X1800, they should say that. I wish they would have compared performance as well between an X1300, if available, and the X1800 using this technology.

Re:Will all x1000 cards do this? (2, Informative)

freakyfreak2 (613574) | more than 8 years ago | (#13933992)

It is very specific about this
From the article (second page):
"The application only works with X1000 series graphics cards, and it only ever will. That's the only architecture with the necessary features to do GPU-accelerated video transcoding well."

Re:Will all x1000 cards do this? (1)

wheaton (759947) | more than 8 years ago | (#13934075)

Your point is well understood. I am not hoping that it works with anything other than X1000 series just that it works with all the X1000 series. From my very cursory look into this it appears that there are many different Avivo features and not all cards are going to support all of them.

But I'd rather have it the other way around! (2, Interesting)

Macguyvok (716408) | more than 8 years ago | (#13933850)

I'd rather see GPUs offloading their work to the system CPU. There's no *good* way to do this. So, why not run this in reverse? If it's possible to speed up general processing, why can't they speed up graphics processing? Especially since my CPU hardly does anything when I'm playing a game; it has to wait on the graphics card.

So, what about it, ATI? Or will this be an NVIDIA innovation?

Re:But I'd rather have it the other way around! (0)

Anonymous Coward | more than 8 years ago | (#13933947)

Try halving the multiplier on your brand-new FX-57 and tell me how well F.E.A.R. runs.

Re:But I'd rather have it the other way around! (1)

Macguyvok (716408) | more than 8 years ago | (#13934061)

That's a little overkill. Still, if you pay any attention, normally the GPU is running flat out while the CPU is waiting for the GPU to finish rendering. Sure, drop the CPU speed by half and it won't run well, but half is a HUGE drop. How about using the free CPU cycles for the good of the GPU? That's my point. Even if it's only 100 free cycles per frame, you would SEE the difference.

Re:But I'd rather have it the other way around! (0)

Anonymous Coward | more than 8 years ago | (#13934173)

The CPU acts as a bottleneck in almost all rendering applications bar 3DMark, so really, the GPU is the one waiting. The above comment was a sarcastic remark at your peculiar logic.

lessons of "array processors" from 1980s (3, Informative)

peter303 (12292) | more than 8 years ago | (#13933975)

In the scientific computing world there have been several episodes where someone comes up with an attached processor an order of magnitude faster than a general-purpose CPU and tries to get the market to use it. Each generation improved the programming interface, eventually using some subset of C (now Cg) combined with a preprogrammed routine library.

All these companies died, mainly because the commodity computer makers could pump out new generations about three times faster and eventually catch up. And the general-purpose software was always easier to maintain than the special-purpose software. Perhaps graphics card software will buck this trend, because it's a much larger market than specialty scientific computing. The NVIDIAs and ATIs can ship new hardware generations as fast as the Intels and AMDs.

Done in Roxio Easy Media Creator 8 (2, Informative)

Anonymous Coward | more than 8 years ago | (#13933977)

FYI, this is already done by Roxio in Easy Media Creator 8. They offload a lot of the rendering or transcoding to GPUs that support it; for older cards they have a software fallback. Probably not an increase by such a large factor, but still a significant boost on newer PCI-E cards.

get this acceleration into the mainstream (0)

Anonymous Coward | more than 8 years ago | (#13933987)

Right now it seems like this sort of thing is being done on specialized projects. It would be really awesome if someone would write an acceleration add-on to a library like GNU GSL or FFTW. That would allow a large number of scientists and engineers to easily accelerate their code without having to get involved in the intricacies of doing it. Maybe you wouldn't get the fastest code this way, but there would be great benefits, and people would actually use it.
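A minimal sketch of that drop-in idea, assuming a hypothetical GPU backend (the `gpu_backend` object and its `fft` method are invented for illustration; only the NumPy fallback is real):

```python
import numpy as np

# Sketch of an "acceleration add-on": dispatch to a hypothetical GPU
# backend when one is available, otherwise fall back to the stock
# library call. Callers never see the difference.
def fft(x, gpu_backend=None):
    if gpu_backend is not None:
        return gpu_backend.fft(x)  # hypothetical accelerated path
    return np.fft.fft(x)           # stock CPU implementation

signal = np.array([1.0, 0.0, -1.0, 0.0])
spectrum = fft(signal)
```

The point is exactly the one made above: scientists call `fft()` as usual and get acceleration for free when the hardware and driver support it.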

Apple's core image (3, Informative)

acomj (20611) | more than 8 years ago | (#13934022)

Some of Apple's APIs (Core Video/Core Image/Core Audio) use the GPU when they detect a supported card; otherwise they just use the CPU, seamlessly and without fuss. So this isn't new. []

There's a CPU in my keyboard too... (2, Funny)

Anonymous Coward | more than 8 years ago | (#13934288)

As I remember from my hardware class... there's an Intel 8051 or similar in most PC keyboards... wouldn't it be cool to somehow be able to use that CPU for something useful (aside from polling the keyboard)?

Linux Support (3, Informative)

Yerase (636636) | more than 8 years ago | (#13934377)

There's no reason there couldn't be Linux support. At the IEEE Vis05 conference there was a nice talk about cross-platform support, and there are a couple of new languages coming out that act as wrappers for Cg/HLSL/OpenGL on both ATI & NVidia, and Windows & Linux. Check out Sh ( [] and Brook ( [] Once their algorithm is discovered (yippee for reverse engineering), it won't be long.

5X faster than what (1, Insightful)

Anonymous Coward | more than 8 years ago | (#13934415)

5X faster than what? Because an Athlon X2 4800+ can transcode pretty damn fast.
I would rather have some reality-based claims, such as "real-time encoding of 3 VGA streams into XviD." Give me a real reason to include an X1800 in my entertainment box.

The wheel of reincarnation turns again (0)

Anonymous Coward | more than 8 years ago | (#13934423)

Man, if Ivan Sutherland were dead he'd be rolling over in his grave.

Hope they get the standard right this time (1)

just fiddling around (636818) | more than 8 years ago | (#13934425)

This is cool, but if the feeds that process generates are as nonstandard as the MPEG-2 their Multimedia Center puts out, it's worthless.

I can't use the files I recorded on anything but ATI's software and Pinnacle VideoStudio (go figure, it understands the codec).

Render farms (1, Interesting)

Anonymous Coward | more than 8 years ago | (#13934495)

If GPUs are more optimized for graphics, why can't renderfarms use more GPU's rather than more CPU's?

Pixar is using Intel boxes. Since Pixar writes its own code, wouldn't it be better to write code into RenderMan to shift the workload to multiple GPUs in each box in the render farm?

Just a thought...
