
The Wretched State of GPU Transcoding

Soulskill posted more than 2 years ago | from the things-that-should-work-better-in-2012 dept.


MrSeb writes "This story began as an investigation into why Cyberlink's Media Espresso software produced video files of wildly varying quality and size depending on which GPU was used for the task. It then expanded into a comparison of several alternate solutions. Our goal was to find a program that would encode at a reasonably high quality level (~1GB per hour was the target) and require a minimal level of expertise from the user. The conclusion, after weeks of work and going blind staring at enlarged images, is that the state of 'consumer' GPU transcoding is still a long, long way from prime time use. In short, it's simply not worth using the GPU to accelerate your video transcodes; it's much better to simply use Handbrake, which uses your CPU."


The real question is... (-1, Offtopic)

Anonymous Coward | more than 2 years ago | (#39935453)

Are they using Gamemaker? They fuckin' should be!

Re:The real question is... (-1)

Anonymous Coward | more than 2 years ago | (#39937355)

Garry Kitchen, is that you?

Lack of standards, quality. (1)

Anonymous Coward | more than 2 years ago | (#39935489)

I've heard from a lot of sources that the quality of output from various GPU-accelerated video encoding schemes almost invariably falls short when compared to an established, known-good CPU-based video encoding scheme. When the GPU encoders can match quality, will they still be fast? Are they just cheating now? What gives?

Re:Lack of standards, quality. (2, Informative)

Anonymous Coward | more than 2 years ago | (#39935521)

Yes, they are cheating. That is exactly how they are getting it to be so fast.

Re:Lack of standards, quality. (2)

DragonTHC (208439) | more than 2 years ago | (#39935931)

Has anyone tried Badaboom?

Re:Lack of standards, quality. (0)

Anonymous Coward | more than 2 years ago | (#39935993)

Yep, it's shit.

Re:Lack of standards, quality. (4, Informative)

PCM2 (4486) | more than 2 years ago | (#39936203)

Has anyone tried Badaboom?

Not much point. It's been discontinued.

Re:Lack of standards, quality. (4, Insightful)

Hatta (162192) | more than 2 years ago | (#39936005)

What I don't understand is how this happens. Why would the same calculation get different results on different GPUs? Are they doing the math incorrectly?

Re:Lack of standards, quality. (5, Informative)

cheesybagel (670288) | more than 2 years ago | (#39936119)

Hint: Not all GPUs have IEEE FP compliant math. Often they break the standard, or do something else altogether just to improve performance.

Re:Lack of standards, quality. (2)

SplashMyBandit (1543257) | more than 2 years ago | (#39936217)

CPUs used to be like that too.

Re:Lack of standards, quality. (3, Informative)

Darinbob (1142669) | more than 2 years ago | (#39936915)

And remember that this is not necessarily lower quality! There are valid reasons for not following the complexities of IEEE floating point if you have no need for portability.

Re:Lack of standards, quality. (0)

Anonymous Coward | more than 2 years ago | (#39937345)

Even with IEEE FP compliant math you get poor-quality transcodes, so this can't be the issue.

Re:Lack of standards, quality. (5, Informative)

parlancex (1322105) | more than 2 years ago | (#39937371)

Hint: Not all GPUs have IEEE FP compliant math. Often they break the standard, or do something else altogether just to improve performance.

I can't speak for ATI, but actually all FP32 math on Nvidia architectures has been IEEE compliant for many generations now - excluding NaN, -inf/+inf and exception-handling cases; except for their hardware sin, cos, and log implementations; and except when using the fused multiply-add instruction (though that last one you can actually get around by using special compiler intrinsics to avoid the fusing).

Re:Lack of standards, quality. (2)

The Master Control P (655590) | more than 2 years ago | (#39938063)

The math units on every nVidia card made since at least late 2009, both single and double precision, are IEEE 754 compliant. The only excuse for it being wrong is that someone deliberately used the __fast non-primitive operations (sqrt/log/exp & friends), which compromise the algorithms used to compute transcendental operations. The exact extent of the compromise is detailed in the back of the nVidia CUDA guide.

I agree it would be pathetic if this were because someone passed -ffast-math or whatever it is to nvcc.

Re:Lack of standards, quality. (3, Interesting)

TD-Linux (1295697) | more than 2 years ago | (#39936187)

Because behind the scenes your "encoder" program is actually using several different encoders. Generally the encoder has to be custom written specifically for the specialized GPU hardware it is targeting.

Re:Lack of standards, quality. (4, Informative)

pla (258480) | more than 2 years ago | (#39936587)

Because behind the scenes your "encoder" program is actually using several different encoders. Generally the encoder has to be custom written specifically for the specialized GPU hardware it is targeting.

This has largely ceased to present a problem, thanks to OpenCL.

GPU code no longer needs to run as custom-written shaders targeting 20 different platforms. One program, written in fairly straightforward C, will run on just about any modern platform (see the sketch at the end of this comment). And it will do so at speeds that absolutely dwarf a CPU - the Radeon x9yy cards (for x>=5) easily crush a modern CPU at OpenCL code by a factor of a thousand. The x8yy cards still perform admirably, over three hundred to one. For NVidia, the Tesla series do well, while the GTX... well, ten to fifty times faster doesn't exactly suck...


The real problem here? Most people have really crappy GPUs. Even compared to the $100 card range, your GPU sucks ass, and hard. And you can't really blame people, because honestly, even modern IGPs will run just about anything fairly well, so why would you pay for more?


But don't blame the GPUs, or the concept in general. If you target OpenCL and the user has a halfway decent modern GPU, it will give consistent, reliable results, and will blow away your CPU many times over.
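A minimal sketch of that point, using PyOpenCL as the host binding (my choice for illustration; nothing here is from a real encoder). The string in the middle is the actual OpenCL C: one kernel source that compiles and runs unchanged on any OpenCL-capable AMD, NVIDIA, or CPU device.

```python
# Minimal PyOpenCL sketch (illustrative, not from a real encoder): one
# OpenCL C kernel source that runs unchanged on any OpenCL-capable device.
import numpy as np
import pyopencl as cl

KERNEL_SRC = """
__kernel void scale_pixels(__global const float *src,
                           __global float *dst,
                           const float gain)
{
    int i = get_global_id(0);   /* one work-item per pixel */
    dst[i] = src[i] * gain;     /* trivially parallel per-pixel math */
}
"""

ctx = cl.create_some_context()        # picks whatever OpenCL device exists
queue = cl.CommandQueue(ctx)
prog = cl.Program(ctx, KERNEL_SRC).build()

pixels = np.random.rand(1 << 20).astype(np.float32)
result = np.empty_like(pixels)

mf = cl.mem_flags
src_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=pixels)
dst_buf = cl.Buffer(ctx, mf.WRITE_ONLY, result.nbytes)

prog.scale_pixels(queue, pixels.shape, None, src_buf, dst_buf, np.float32(1.5))
cl.enqueue_copy(queue, result, dst_buf)
print(result[:4])
```

Whether that single source actually delivers the speedups claimed above is, of course, exactly what the rest of this thread disputes.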

Re:Lack of standards, quality. (5, Insightful)

Skarecrow77 (1714214) | more than 2 years ago | (#39936723)

But, at least in this context, speed is nearly irrelevant, because it fails at the task at hand: producing high-quality video.

Who cares how fast it completes a task if it's failing? Nobody gives little Jimmy props when he finishes the hour-long test in 5 minutes but scores 37% on it.

Re:Lack of standards, quality. (3, Informative)

pla (258480) | more than 2 years ago | (#39936845)

Who cares how fast it completes a task if it's failing? Nobody gives little Jimmy props when he finishes the hour-long test in 5 minutes but scores 37% on it.

I agree that presents something of a problem for current implementations; the concept of GPU transcoding doesn't fail, however. Only the fact that those currently pushing it have tried to show at least modest gains for everybody - meaning those with massively inappropriate hardware - has made it such an abysmal failure to date.

To repeat my earlier post, if you target an OpenCL-capable GPU, you will get consistent results; and if you target a card with a reasonable number of compute units (58xx/59xx/68xx/69xx/Tesla), you'll see performance far beyond what a modern CPU can give.

Does that make GPU transcoding the best choice for the general public at present? No! But for those with the hardware, the comparison is literally laughable.

Re:Lack of standards, quality. (2)

Skarecrow77 (1714214) | more than 2 years ago | (#39936983)

I've got a first-generation Fermi-based GTX 470. Considering that, at the time, parallel compute power was the big halo selling point of the new Fermi GPU, I was very underwhelmed when I finally found some software that would actually use it. I saw speedups of only about 3x over my Core 2 Duo (only 2 cores!) E8400, and the quality was abysmal in comparison.

I'm not saying that GPU transcoding -shouldn't- be a better option than CPU transcoding - it completely should be - but current implementations seem to have completely ignored why we're transcoding in the first place and what our goals are. Having a faster transcode is nice, yes; faster is always nice. But it's simply not worth the tradeoff in quality, which is the point.

I transcode Blu-ray rips to MKV at two-pass 8000kbps with x264's "slower" setting on an Athlon II 245. The average encode takes anywhere from 36 to 48 hours, but I'm cool with that. Do it right that once, and you're set for life. They're beautiful encodes.
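For reference, a two-pass encode like that can be scripted; here is a sketch driving ffmpeg's libx264 from Python (file names and bitrate are placeholders, and it assumes an ffmpeg build with libx264):

```python
# Sketch of a two-pass 8000kbps encode with the "slower" preset, as described
# above, via ffmpeg's libx264. File names are placeholders.
import subprocess

src, dst, rate = "rip.mkv", "movie.mkv", "8000k"

# Pass 1: analyze the video and write the stats file; discard the output.
subprocess.run(["ffmpeg", "-y", "-i", src, "-c:v", "libx264",
                "-preset", "slower", "-b:v", rate, "-pass", "1",
                "-an", "-f", "null", "/dev/null"], check=True)

# Pass 2: use the pass-1 stats to distribute bits where they're needed.
subprocess.run(["ffmpeg", "-y", "-i", src, "-c:v", "libx264",
                "-preset", "slower", "-b:v", rate, "-pass", "2",
                "-c:a", "copy", dst], check=True)
```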

Re:Lack of standards, quality. (2)

Surt (22457) | more than 2 years ago | (#39937501)

Props to Jimmy: he got 37% right in 8.3% of the time. And give him even more credit, since I assume not everyone could get 100% in an hour - or what would be the point of the test?

Re:Lack of standards, quality. (1)

ldobehardcore (1738858) | more than 2 years ago | (#39937653)

I think of it with a different analogy. Instead of little Jimmy and the test, I prefer the BK Lounge metaphor:

Burger King manages to hand you "food" in 11 seconds, compared to 20 minutes at Shari's (or wherever - insert a middle-of-the-road place here). The "food" is consistently inedible at BK, whereas the food at Shari's won't make you sick after bite 2.

The GPGPU is shitty at video transcoding, but boy howdy it's fast. And that's completely beside the point. What good is a "burger" in 11 seconds if you can't keep it down? What good is a 24fps video transcoded at 450fps when the end result is nearly universally unwatchable?

Re:Lack of standards, quality. (1)

tyrione (134248) | more than 2 years ago | (#39937979)

Thank you. I'm personally getting sick and tired of badly written articles on parallel programming that discuss CUDA, and of having to wade through crap before one sharp post discusses OpenCL. OpenCL 1.2 is very robust, and we'll be seeing OpenCL 2.0 this August.

Re:Lack of standards, quality. (2)

grouchomarxist (127479) | more than 2 years ago | (#39936521)

There are probably two issues, but the kind of calculations we're talking about here are floating-point calculations. And as every programmer should know, floating-point calculations done by different CPUs or GPUs don't give you consistent results: http://developers.slashdot.org/story/10/05/02/1427214/what-every-programmer-should-know-about-floating-point-arithmetic [slashdot.org]

Also, we're talking about GPUs here. GPUs aren't even designed to give you IEEE-standard results. Instead, they're designed to give approximate results intended for real-time graphics display.
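That inconsistency is easy to demonstrate without a GPU at all: floating-point addition is not associative, so merely summing in a different order - exactly what parallel hardware does - changes the low-order bits. A small float32 illustration (illustrative values, not from the article):

```python
# Float addition is not associative, so hardware (or a thread schedule)
# that reorders operations can legitimately produce different bits.
import numpy as np

x = np.random.default_rng(0).random(100_000).astype(np.float32)

serial = np.float32(0.0)
for v in x:              # strict left-to-right summation, like naive CPU code
    serial += v

pairwise = x.sum()       # NumPy reduces pairwise, a different ordering

print(serial, pairwise, serial == pairwise)
# Typically prints two sums that differ in the last digits: both are valid
# float32 results, but they are not bit-identical.
```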

Re:Lack of standards, quality. (1)

The Master Control P (655590) | more than 2 years ago | (#39938093)

Every GPU from nVidia for 3 full hardware generations (since compute architecture 1.3 - 2009 at least, possibly earlier) has had IEEE 754 compliant fp32 and fp64 math. I imagine ATI has been compliant for as long also.

It is true that code can be compiled using libraries that deliberately compromise the algorithms for transcendental functions to make them faster, but that's 100% the programmer's fault.

Re:Lack of standards, quality. (4, Informative)

rsmith-mac (639075) | more than 2 years ago | (#39936569)

Because they're not using the same encode paths.

All 3 hardware encode paths - Intel QuickSync, AMD AVIVO, and NVIDIA's CUDA video encoder - are largely black boxes. Programs such as MediaEspresso are basically passing off a video stream to the device along with some parameters and hoping the encoder doesn't screw things up too badly. Each one is going to be optimized differently, be it for speed, more aggressive deblocking, etc. These are decisions built into the encoder and cannot be completely controlled by the application calling the hardware. And you have further complexities such as the fact that only Intel's QuickSync has a hardware CABAC encoder, while AMD and NV do it in software (and poorly since it doesn't parallelize well).

Or to put this another way, everyone has their own idea on what the best way is to encode H.264 video and what kind of speed/quality tradeoff is appropriate, and that's why everything looks different.

Re:Lack of standards, quality. (0)

Anonymous Coward | more than 2 years ago | (#39936135)

You mean to tell me that a device with no more than a 5x advantage in memory bandwidth or peak FLOPS is *not* actually capable of a 100x or 1000x or even 10x speed advantage?? Say it ain't so.

Yes, they cheat. What they do is take a poorly optimized, single-threaded program, run it on the biggest, baddest CPU there is, and get a pretty sad performance number. Then they optimize the shit out of it, vectorize it, parallelize it, and run it on their GPU to compare with. Sometimes they go so far as to completely change the algorithm used to a faster or less accurate one.

http://www.cs.utexas.edu/users/ckkim/papers/isca10_ckkim.pdf

That's not to say there are no advantages. GPUs do typically have significantly better peak FLOPS and bandwidth numbers, which can often translate into a good advantage in like-for-like comparisons. But it's more like 2x, or 5x in favorable cases.

Re:Lack of standards, quality. (2, Insightful)

Anonymous Coward | more than 2 years ago | (#39936801)

Nice shill paper you got there... Of course a paper from the Throughput Computing Lab and the Intel Architecture Group (both Intel Corp) will advocate that there's not much speedup from a GPU when compared to a CPU.

The big thing to note is that with a GPU, you have to do what you did when working with the original SSE (Intel...) instruction set on regular CPUs: FP16 numbers will not have a significant amount of precision, so you must take that into account when programming with that instruction set in mind. It's not as if people haven't been performing calculations with numbers bigger than the bit width of the CPU's instructions. Modern GPUs are getting much beefier with double-precision math as well.

The 5000-series Radeons (not examined in the paper) have much better DP performance than the GeForce GTX 280 it compared. The Radeon 5970, for example, has 18x the DP GFLOPS of the i7-960, and they both went to market at the same time. For SP data, the 5970 is 46x better than the i7-960.

Re:Lack of standards, quality. (2, Interesting)

Anonymous Coward | more than 2 years ago | (#39937727)

Shill paper? I guess you prefer your papers sponsored by Nvidia, showing a device with no more than 5x the memory bandwidth and no more than 5x the FLOPS getting 1000x performance increases? *Those are shill papers*. Today the situation is slightly changed, but not hugely.

i7-3770K makes 112 DP GFLOPS and 25.6GB/s memory bandwidth.
5970 makes 928 DP GFLOPS and 256GB/s memory bandwidth.

So they're both within 10x. The 5970 makes 4.64 SP TFLOPS, which is about 20x the i7's SP FLOPS (the arithmetic is checked below).

Still not going to get 100x performance increases, are you? 1000x? Pfft
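A quick sanity check of those ratios (a sketch; the i7's SP figure is my assumption that its SP throughput is double its DP throughput):

```python
# Back-of-the-envelope check of the ratios quoted above.
i7_dp, i7_bw = 112.0, 25.6          # i7-3770K: DP GFLOPS, GB/s (as quoted)
gpu_dp, gpu_bw = 928.0, 256.0       # Radeon 5970: DP GFLOPS, GB/s (as quoted)
gpu_sp = 4640.0                     # 4.64 SP TFLOPS (as quoted)
i7_sp = 2 * i7_dp                   # assumption: SP is 2x DP on this CPU

print(f"DP FLOPS:  {gpu_dp / i7_dp:.1f}x")   # ~8.3x
print(f"Bandwidth: {gpu_bw / i7_bw:.1f}x")   # 10.0x
print(f"SP FLOPS:  {gpu_sp / i7_sp:.1f}x")   # ~20.7x, i.e. "about 20x"
```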

Re:Lack of standards, quality. (1)

Anonymous Coward | more than 2 years ago | (#39937761)

Also, the Radeon requires about 3x the power to do this as the Intel CPU alone, not including the CPU that will be required to drive the GPU.

Total cost will also be significantly higher for the GPU.

They're simply way over-hyped. Anybody who fell for the rash of 100x, 200x, 500x, 1000x claims was simply deluding themselves.

They struggle to give even a single order of magnitude increase, most of the time.

Re:Lack of standards, quality. (1)

The Master Control P (655590) | more than 2 years ago | (#39938123)

It's not just about theoretical FLOPS and main memory bandwidth.

A properly written GPU program ideally never relies on main memory except to keep a buffer filled - it feeds all its FPUs from shared memory, which can deliver an aggregate bandwidth of TB/s to about 750KB (total) of space on a good card.

And the moral of the Story is... (3, Informative)

CajunArson (465943) | more than 2 years ago | (#39935535)

The GPU isn't meant to do everything. If it were, there wouldn't be a CPU. Considering the hatred that was poured on QuickSync here, and that QuickSync still produces better-quality transcodes than GPUs while being substantially faster, I don't think we'll be seeing the end of CPU transcoding anytime soon.

Re:And the moral of the Story is... (1)

Verunks (1000826) | more than 2 years ago | (#39935575)

Correct me if I'm wrong, but doesn't QuickSync use the integrated GPU of Sandy/Ivy Bridge CPUs?

Re:And the moral of the Story is... (5, Informative)

CajunArson (465943) | more than 2 years ago | (#39935593)

The Quick Sync hardware is part of the IGP block, but it is specialized hardware specifically geared towards transcoding. For example, it is not using the main GPU pipeline and shader hardware to do the transcoding.

Re:And the moral of the Story is... (2)

BLKMGK (34057) | more than 2 years ago | (#39936115)

Yeah, now go look at the scorn that was heaped on the Intel rep when he approached the x264 guys way late in development. Had they been smart enough to come forward sooner, we might have gotten accelerated instructions the x264 guys would have used - not so now, it seems. :-(

Re:And the moral of the Story is... (4, Informative)

rsmith-mac (639075) | more than 2 years ago | (#39936437)

Let's be clear here: the x264 guys will never be happy. QuickSync, AMD's Video Codec Engine, and NVIDIA's NVENC all use fixed-function blocks. They trade flexibility for speed; it's how you get a hardware H.264 encoder down to 2 mm². There are no buttons to press or knobs to tweak and there never will be, because most of the stuff the x264 guys want to adjust is permanently laid down in hardware. The kind of flexibility they demand can only be done in software on a CPU.

Re:And the moral of the Story is... (2)

Ranguvar (1924024) | more than 2 years ago | (#39936553)

Except that even when you compare the fixed-function H.264 encoders to x264 at those exact settings, x264 still dominates.

Re:And the moral of the Story is... (2)

rsmith-mac (639075) | more than 2 years ago | (#39936593)

That's my point, though. Fixed-function encoders won't be able to match x264 because of their lack of flexibility. They can't be optimized for specific niches; they need to be generalists in order to be decent at everything, since the hardware can't be changed.

Also let's be clear (4, Informative)

Sycraft-fu (314770) | more than 2 years ago | (#39937565)

That while the x264 guys aren't wrong to want to keep working on a tweakable software encoder, there is nothing wrong with a fixed-function hardware encoder for some tasks. Sometimes, speed is what you want, and "good enough" is, well, good enough.

Like at work, I edit instructional videos for our website (I work at a university) using Vegas. I use its internal H.264 encoders, which can be accelerated using the GPU. They are quite zippy; I can generally get a realtime-or-better encode, even when there is a decent amount of shit going on in the video that needs to be processed (remember that Vegas isn't for video conversion - I'm doing editing, effects, that kind of thing).

Now, the result is not up to x264 quality, per bit. I could get better quality by mucking around setting up an AviSynth frameserver and having x264 do the encoding with some tweaked settings for high quality. However, it would be much slower.

Not worth it. I'll just encode a reasonably high-bitrate video. It's getting fed to YouTube anyhow, so there's a limit to how good it's going to look. The faster hardware-assisted encode speeds are worth it.

If I were mastering a Blu-ray? Yeah, I might do the final encode that goes off to fabrication with x264 (actually, more likely an expensive commercial solution that can generate mastering-compliant bitstreams). Spend the extra time to get it as high-quality as possible, because of all the other work and because it could actually be noticeable.

There is room for both approaches.

Re:And the moral of the Story is... (1)

rsmith-mac (639075) | more than 2 years ago | (#39936469)

For example, it is not using the main GPU pipeline and shader hardware to do the transcoding

No, but it is using it for post-processing such as deinterlacing, noise reduction, etc. The shader pipeline is still involved whenever you need to decode something, be it for QuickSync or just for playing back a video on a PC.*

*Consequently, this is why Intel can't quite match AMD or NV in video playback quality; they lack the shader performance to do as much resource-intensive processing.

Re:And the moral of the Story is... (3, Interesting)

nabsltd (1313397) | more than 2 years ago | (#39936681)

No, but it is using it for post-processing such as deinterlacing, noise reduction, etc.

I use the GPU to do FFT noise reduction before some encodes, and it's essentially "free" as it's faster than the 8 threads used by x264 for encoding.

Re:And the moral of the Story is... (4, Informative)

Dahamma (304068) | more than 2 years ago | (#39935647)

Quick Sync uses dedicated HW on the die. Intel's solution that uses their GPU is called Clear Video.

Re:And the moral of the Story is... (1)

Bengie (1121981) | more than 2 years ago | (#39936303)

It does, and it shows that Intel's GPU may not be the fastest in all areas, but it's quite well-rounded, being several times faster than $300+ GPUs at this task.

Re:And the moral of the Story is... (4, Insightful)

Dahamma (304068) | more than 2 years ago | (#39935695)

Actually, recent GPUs *were* meant to do exactly this type of thing, and they have been marketed heavily by Nvidia and ATI for this purpose. Of course there needs to be a CPU as well; the CPU runs the operating system and application code, and offloads very specific, parallelizable work to the GPU. This sort of architecture has existed almost as long as modern CPUs have.

And Quick Sync is even less of a general-purpose CPU solution than using a GPU. Quick Sync uses dedicated, application-specific hardware on the die to do its encoding.

Re:And the moral of the Story is... (0)

Anonymous Coward | more than 2 years ago | (#39935725)

What's the point if it still sucks? It seems that, while better, it is still inadequate.
Sucking faster is still sucking.

Re:And the moral of the Story is... (0)

Anonymous Coward | more than 2 years ago | (#39936295)

Your mom sucks faster and it seems to work for her.

Re:And the moral of the Story is... (3, Interesting)

PopeRatzo (965947) | more than 2 years ago | (#39935763)

The GPU isn't meant to do everything.

But since "Graphics Processing" is part of their name, wouldn't you expect them to at least do that?

Especially considering the price of high-end GPUs is getting up there compared to high-end CPUs.

Re:And the moral of the Story is... (1)

Anonymous Coward | more than 2 years ago | (#39935863)

Today's "graphics processing units" are essentially designed to render a large number of triangles on a screen in a highly efficient way. If any other graphics operation is thrown at them, they may simply not be designed to execute it well. Just because it has "graphics" in the name it doesn't mean that it can handle every graphics technology perfectly well.

Re:And the moral of the Story is... (3, Interesting)

Nemyst (1383049) | more than 2 years ago | (#39936215)

You mean yesterday's, surely. Rasterizers are still required, obviously, but GPUs nowadays are very much shader-based and not so polygon-centric (we're far from T&L). They're built to efficiently process short but otherwise arbitrary floating-point operation sequences in extremely parallel scenarios.

Re:And the moral of the Story is... (5, Insightful)

billcopc (196330) | more than 2 years ago | (#39935953)

Well, see, that's the thing. A GPU is better suited to some kinds of massively parallel tasks, like video encoding. After all, you're applying various matrix transforms to an image, with a bunch of funky floating-point math to whittle all that transformed data down to its most significant/perceptible bits. GPUs are supposed to be really, really good at this sort of thing.

My hunch is that the problems we're seeing are caused by two big issues:

1. lack of standardization across GPU processing technologies. CUDA vs OpenCL vs Quicksync, and a bunch of tag-alongs too. Each one was designed around a particular GPU architecture, so porting programs between them is non-trivial.

2. lack of expertise in GPU programming. Let's be fair here: GPUs are a drastically different architecture from any PC or embedded platform we're used to programming. While I could follow specs and write an MPEG or H.264 encoder in any high-level language in a fairly straightforward manner, I can't even begin to envision how I would convert that linear code into a massively parallel algorithm running on hundreds of dumbed-down shader processors. It's not at all like a conventional cluster, because shaders have very limited instruction sets and little memory, but extremely fast interconnects. We have a hard enough time making CPU encoders scale to 4 or 8 cores; this requires some serious out-of-the-box thinking to pull off.

Moving to a GPU virtually requires starting over from scratch. This is a set of constraints that are very foreign to the transcoding world, where the accepted trend was to use ever-increasing amounts of cheaply available CPU and memory, with extensively configurable code paths. The potential is there, but it will take time for the hardware, APIs and developer skills to converge. GPU transcoding should be seen as a novelty for now, just like CPU encoding was 15 years ago when ripping a DVD was extremely error-prone and time-consuming. If you want a quick, low quality transcode, the GPU is your friend. If you're expecting broadcast-quality encodes, you're gonna have to wait a few years for this niche to grow and mature.

Re:And the moral of the Story is... (4, Insightful)

fuzzyfuzzyfungus (1223518) | more than 2 years ago | (#39936667)

What strikes me as a bad sign is not so much that GPU transcoding doesn't necessarily produce massive speed improvements, but that the products tested produce overtly broken output in a fair number of not-particularly-esoteric scenarios.

Expecting super-zippy, magic-optimized speedups on all the architectures tested would be expecting serious maturity. Expecting a commercially released, GPU-vendor-recommended encoding package to manage things like "don't produce an H.264 lossy-compressed file substantially larger than the DVD-rip source file" and "please don't convert a 24FPS video to 30FPS for no reason on half the architectures tested" seems much more reasonable.

I can imagine that the subtle horrors of the probably-makes-the-IEEE-cry differences in floating-point implementations, or their ilk, might make producing identical encoded outputs across architectures impossible; but these packages appear to be flunking basic sanity checks, even in the parts of the program that are presumably handled on the CPU (when a substantial portion of iPhone 4S handsets are 16GB devices, letting the 'iPhone 4S' preset return a 22GB file while whistling innocently seems like a bad plan...).

Re:And the moral of the Story is... (4, Interesting)

Dputiger (561114) | more than 2 years ago | (#39936767)

Fuzzy,

You pretty much nailed my problem with the output. :P That's the reason why Arcsoft, with its compatibility problems, ultimately ranked above Cyberlink. Arcsoft doesn't do very good work on the Radeon 7950 and it can't handle CUDA, but it at least gets something right: its Quick Sync video is very good.

Cyberlink got nothing right anywhere. And it's the program most-often recommended to reviewers as a benchmark when we want to review GPU encoding.

Re:And the moral of the Story is... (0)

Anonymous Coward | more than 2 years ago | (#39937623)

Armchair guessing at its best. I would like to subscribe to your newsletter.

Agreed- it's not meant to do everything (0)

Anonymous Coward | more than 2 years ago | (#39936491)

After all, it is a GRAPHICS processing unit, and it's designed for a very specific subsection of computing... known as GRAPHICS PROCESSING.

If I wanted some jack-of-all-trades type computation, I'd use something a little more common, like a CPU...

Transcoding - hey! That's GRAPHICS PROCESSING, isn't it? Gosh, I hope my GPU can do me some of that!

Re:And the moral of the Story is... (1)

CityZen (464761) | more than 2 years ago | (#39937629)

No, the moral is that having new hardware is worthless without good software. Just because someone writes some new code that uses the new hardware doesn't make that code any better than the polished code that runs on the old hardware. This applies to much more than just transcoding on PCs.

Welp (-1, Troll)

Anonymous Coward | more than 2 years ago | (#39935539)

Pity the Handbrake devs are dickwads.

Re:Welp (3, Interesting)

gnasher719 (869701) | more than 2 years ago | (#39936615)

Pity the Handbrake devs are dickwads.

1. It's not funny.

2. They make an excellent bit of software that I have been using for free for years. Unless you helped them out you can't complain.

3. The guys creating Handbrake and the guys making video encoders are not the same people, so your rant is misdirected.

4. I mailed them two suggestions for improvements, and both got implemented. Now, this may be because my suggestions were the kind of things that were (a) genuine improvements and (b) interesting for the developer, and therefore would have been implemented anyway; but in my experience, they are responsive to the right kind of suggestions.

Or just use an OpenCL-powered encoder... (4, Interesting)

carlhaagen (1021273) | more than 2 years ago | (#39935581)

...since the results of OpenCL code are consistent across GPUs rather than being an arbitrary output.

Re:Or just use an OpenCL-powered encoder... (4, Informative)

Mia'cova (691309) | more than 2 years ago | (#39936355)

Only the more modern GPUs support it. And of those, there are still different levels of support. Even if it's supported, you would probably get much better perf on an Nvidia card by using CUDA, for example. So in today's world, you can't just use an OpenCL-powered encoder; it depends on what hardware you have.

The summary of the summary is (2)

BitZtream (692029) | more than 2 years ago | (#39935623)

that Cyberlink's software is pretty damn shitty.

I've done a little bit of playing around with GPU encoding myself, and it's not real hard to turn out something faster on the GPU than your CPU with identical quality. Getting varied quality from different cards means you're doing something VERY wrong.

Re:The summary of the summary is (1)

PCM2 (4486) | more than 2 years ago | (#39936171)

Getting varied quality from different cards means you're doing something VERY wrong.

Maybe it means you're good at programming one GPU but not as good at programming the other. Or if another person did the code for the other GPU, maybe that person doesn't code as well as you do.

But if all these chips have different instruction sets and APIs, it sounds kinda like saying, "If your program runs slower on iOS than it does on Android, you're doing something very wrong." Maybe. The point is that things were supposed to be getting easier, but apparently they're not.

Re:The summary of the summary is (0)

Anonymous Coward | more than 2 years ago | (#39936555)

You're right about MediaEspresso. It's absolute CRAP. I got it free with a PowerDVD purchase and used it to try a few encodes for my Transformer Prime. It's slow and the output is terrible. This was on an 8-core Xeon system with a Radeon HD card. I switched to Pavtube and got much better performance and results.

Does anyone have editors anymore? (2)

wbr1 (2538558) | more than 2 years ago | (#39935625)

There is a screwed-up graph on page two, where they use the same graphic twice and the caption describes aspects of the one that is missing. I really wanted to see the comparison, too. You would think that in an article of that size and scope, someone would be responsible for checking layout as well as copy. It is no wonder we are losing to China. Their English may be worse, but their work ethic and attention to detail is possibly better.

Re:Does anyone have editors anymore? (5, Informative)

Dputiger (561114) | more than 2 years ago | (#39935655)

As the author of the story, that's an error that slipped past in formatting. I'm uploading the proper graph right after I hit "Reply" on this.

Re:Does anyone have editors anymore? (1)

wbr1 (2538558) | more than 2 years ago | (#39935687)

Much obliged then, sir. You can also delete my comment on your story at your site! I posted there as well, not knowing you were watching /. Maybe we do have a chance after all (as long as crusty old cynics like me don't depress everyone too much).

GPUs will be great once we ... (1)

kbrafford (2634775) | more than 2 years ago | (#39935631)

I think that the real benefit of GPUs for transcoding will be seen once people start making new as-yet unimagined encoding schemes that are designed to do data parallel tasks that wouldn't even be considered on a traditional CPU.

Re:GPUs will be great once we ... (1)

PopeRatzo (965947) | more than 2 years ago | (#39935777)

I think that the real benefit of GPUs for transcoding will be seen once people start making new as-yet unimagined encoding schemes that are designed to do data parallel tasks that wouldn't even be considered on a traditional CPU.

Maybe by then, "traditional" CPUs will be different from the ones we have right now.

Re:GPUs will be great once we ... (1)

Hentes (2461350) | more than 2 years ago | (#39936071)

Encoding should be trivial to parallelize: you just cut up a movie into a sequence of n clips and encode them independently.

Re:GPUs will be great once we ... (1)

nabsltd (1313397) | more than 2 years ago | (#39936455)

Encoding should be trivial to parallelize: you just cut up a movie into a sequence of n clips and encode them independently.

Because the structure of modern codecs is based on Groups of Pictures (GOPs), you'd have to run two passes on the video, with the first pass determining where the keyframes go. Although this is commonly done by people who don't have a good understanding of video encoding, the more efficient way is to just run a single pass using constant quality (which is not the same as a constant quantizer - see the example below). Then, on that single pass, you parallelize the operations on each frame. This also results in less disk thrashing and more hits on cached data.

From what I have found, though, most of the computation time in an encode is used up either by filters before the encode (grain removal, etc.) or by increasing the range and quality of the motion search. Those do parallelize well, but I don't think a GPU would help much.
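To make the constant quality vs. constant quantizer distinction concrete: with ffmpeg's libx264, constant quality is the CRF mode, which moves the quantizer around from frame to frame to hold perceived quality steady, while -qp pins the quantizer itself. A sketch (file names and values are placeholders):

```python
# Constant quality (CRF) vs. constant quantizer (QP) with ffmpeg's libx264.
# Both are single-pass; file names and values are placeholders.
import subprocess

# Constant quality: the encoder varies the quantizer per frame/scene,
# spending bits where the eye needs them. Lower CRF = higher quality.
subprocess.run(["ffmpeg", "-i", "in.mkv", "-c:v", "libx264",
                "-preset", "slow", "-crf", "20", "crf_out.mkv"], check=True)

# Constant quantizer: every frame gets the same quantizer regardless of
# content - simpler, but wastes bits on easy scenes and starves hard ones.
subprocess.run(["ffmpeg", "-i", "in.mkv", "-c:v", "libx264",
                "-preset", "slow", "-qp", "20", "qp_out.mkv"], check=True)
```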

Re:GPUs will be great once we ... (0)

Anonymous Coward | more than 2 years ago | (#39937627)

Although this is commonly done by people who don't have a good understanding of video encoding, the more efficient way is to just run a single pass using constant quality (which is not the same as a constant quantizer).

Single pass has always produced garbage results for me unless you knock the bitrate WAY up - but why even bother recoding then? 2-pass lets you determine your final file size (IMO, the reason one recodes) and herd the bandwidth to where it's needed.

Re:GPUs will be great once we ... (1)

pthisis (27352) | more than 2 years ago | (#39937805)

2-pass lets you determine your final file size (IMO, the reason one recodes)

That's a weird IMO. Certainly 2-pass is the best way to go if you care about exact file size, but that's not what most people care about. They care about having video that will play back so they can watch and hear it, and the primary reason anyone I know recodes is to convert to a format that they can actually play (generally for iPad/smartphone or PS3 playback).

Re:GPUs will be great once we ... (0)

Anonymous Coward | more than 2 years ago | (#39937931)

Well, yes, that is important also, but if your portable media player is capable of playing it, you're still not going to make a 50GB Blu-ray rip for it. You're going to make it fit on the storage media you're using - hence controlling file size, using parameters that conform to the device's limits.

Re:GPUs will be great once we ... (1)

ldobehardcore (1738858) | more than 2 years ago | (#39937763)

What about slice-based parallel processing?
Correct me if I'm wrong (I wouldn't be surprised if it turns out I am...), but doesn't x264 have an option to do slice-based parallel processing? As I understand it, if there are 4 running threads, each frame is chopped into four quadrants with a little edge-room buffer in each slice, then independently encoded, then glued back together at the other end. That's how I remember that option being described in some forum or other - not the standard multi-threading, but the slice-based option.
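Close: x264 does offer slice-based threading (--sliced-threads on the x264 command line), though the slices are horizontal strips of each frame rather than quadrants. A hedged sketch of enabling it through ffmpeg's libx264 wrapper - the file names and settings are placeholders:

```python
# Slice-based threading in x264: each frame is split into horizontal slices
# that are encoded in parallel, instead of the default frame-based threading.
# Sketch via ffmpeg's libx264; file names and settings are placeholders.
import subprocess

subprocess.run(["ffmpeg", "-i", "in.mkv", "-c:v", "libx264",
                "-preset", "slow", "-crf", "20",
                "-x264-params", "sliced-threads=1:threads=4",
                "-c:a", "copy", "sliced.mkv"], check=True)

# Trade-off: lower latency and within-frame parallelism, but slightly worse
# compression, since prediction is constrained at slice boundaries.
```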

9 Pages??? (0)

Anonymous Coward | more than 2 years ago | (#39935755)

It sounded like an interesting read. However, I didn't get past the summary. Why would you split it into 9 pages?

Re:9 Pages??? (3, Informative)

Dputiger (561114) | more than 2 years ago | (#39936447)

As the author:

Because 3,000-word articles with PNGs at ~300K per large image and 100K per preview image aren't fun reading in a single go. There's ~1.5MB of imagery just on the third page. Pages 3-8 have about the same, and that's with the images only loaded as thumbnails.

If you've got a fast net connection, you won't care. If you don't have a fast net connection, loading 16MB of images at once isn't a lot of fun.

Visual quality comparisons are one area where you can't use low-quality JPGs. A 9-page article at ET is a real rarity; it's not something we do because we want to spam ads.

Single Page Version of Article (5, Informative)

Anonymous Coward | more than 2 years ago | (#39935781)

Here's a link [extremetech.com] to the article on one page.

Wretched State of Reviews (0)

Anonymous Coward | more than 2 years ago | (#39935849)

I'm confused as to how a review of user-friendly transcoding applications that utilize GPUs doesn't include DVDFab. DVDFab is user-friendly and supports CUDA, DXVA, Intel Quick Sync, and software (CPU) encoding with the CoreAVC codec. DVDFab is available for Windows and Mac OS X. Perhaps it wasn't selected because there isn't a Linux version...

Using CUDA with DVDFab and 2-pass encoding, I get consistently excellent results, and my high-quality encoding time for a Blu-ray (for backup purposes) is between 90 and 120 minutes. 1-pass encoding is faster. These results have been consistent.

Re:Wretched State of Reviews (1)

Lunix Nutcase (1092239) | more than 2 years ago | (#39936011)

Because it was a review of the actual GPU encoders themselves, not of various frontends to those GPU encoders.

Re:Wretched State of Reviews (1)

FullCircle (643323) | more than 2 years ago | (#39937067)

The review and summary are giving mixed signals, then, as I had the same reaction to the article.

If this is a review of the encoders and not the front ends, then why is Handbrake specifically pointed out for ease of use?

Handbrake is only a front end to an encoder, and it can easily give similar or vastly worse results if you don't know how to use it.

Re:Wretched State of Reviews (1)

Dputiger (561114) | more than 2 years ago | (#39937309)

That's a distinction that the average user doesn't make. At the end of the day, I don't care if the front-end secretly passes the video to a collection of manatees who perform FFT calculations using colored balls they pick out of a pit. The criterion was a piece of software with easy-to-use presets that produces decent-quality video after I push "OK."

If Program X does that, and Program Y doesn't, then Program X wins. The reason *why* is interesting and pertinent, but the question wasn't "Why do two different front-ends give different results using the same encoder?"

Re:Wretched State of Reviews (1)

FullCircle (643323) | more than 2 years ago | (#39937453)

Mine was a reply to "Because it was a review of the actual GPU encoders themselves, not of various frontends to those GPU encoders," which you and I both seem to disagree with.

That said, I do believe that there are better GPU assisted applications than those tested, such as DVDFab mentioned above.

I'd be very interested to see how it compares using this methodology, but testing every available application could become a full time job.

I have no affiliation with DVDFab, but it comes to mind as a decent encoder well before any of the ones tested.

Re:Wretched State of Reviews (1)

swalve (1980968) | more than 2 years ago | (#39936789)

The problem is that the processor (the GPU in this case) shouldn't make a difference to the results of the calculations. Sure, a shittier GPU is going to produce a shittier picture when forced to render at a framerate beyond its capabilities, but when used as a processor for a task that isn't time-constrained, it should just take longer. Instead, feeding the same input to different GPUs is giving different results.

Re:Wretched State of Reviews (1)

Dputiger (561114) | more than 2 years ago | (#39936807)

Simple reason: because DVDFab never came up. I Googled several variations on the term and asked Nvidia, Intel, and AMD for their own recommendations as far as products were concerned. Cyberlink and Arcsoft were recommended by multiple sources. Badaboom I knew about and was familiar with. Xilisoft and MediaCoder were added as a result of additional research.

I never came across DVDFab. That's not a judgment on its quality or output.

The real problem... (2)

Sulik (1849922) | more than 2 years ago | (#39936095)

The real problem is the lack of a common API for encoding regardless of GPU/CPU, which leads to vendor-specific implementations with varying degrees of quality. The most efficient way to do pretty much anything is a dedicated HW block (from both a perf and a power point of view), so there is no question that there is value in encoding with dedicated hardware, but the software has to catch up.

Explain to me how it's the GPU's fault (1)

catmistake (814204) | more than 2 years ago | (#39936161)

that encoders inexplicably insist on codecs and wrappers that predate the millennium? The problem with transcoding is that it exists at all. Strongarm the holdout encoders into using H.264 or MP4V with MP4 wrappers, and transcoding will be like... well, like anything no one does anymore.

Re:Explain to me how it's the GPU's fault (1)

nabsltd (1313397) | more than 2 years ago | (#39936633)

The problem with transcoding is that it exists at all. Strongarm the holdout encoders into using h264 or mp4v with mp4 wrappers, and transcoding will be like... well, like anything no one does anymore.

There will always be transcoding, since you can't fit the 20GB H.264 stream from a Blu-ray on a phone. And why would you want to? Resize the 1920x1080 to 800x480 or so, and it will look great on every phone.

For tablets or other devices with more resolution, you still don't need all the bits that most Blu-ray encodes use. Most are essentially constant bitrate at around 25-30Mbps. For movies that are essentially "talking heads" (courtroom dramas like A Few Good Men and Presumed Innocent are the best examples), most of those bits aren't actually adding anything to the picture quality. Even action movies can easily get by with a 10Mbps average on a full-resolution transcode, as long as the action scenes get enough bits.

Re:Explain to me how it's the GPU's fault (1)

catmistake (814204) | more than 2 years ago | (#39936837)

There will always be transcoding, since you can't fit the 20GB H.264 stream from a Blu-Ray on a phone.

You are thinking about this all wrong. You think you own that movie... but you don't; you own a license. That license entitles you to transcode... if you want to go ahead and do work that, chances are, has already been done, and is constantly being done for you by others who create far better quality transcodes. The obtuse talk about how great their hardware is and how fast they can rip their movies... but the astute keep all their movies backed up on the Internet in every format and resolution imaginable.

Re:Explain to me how it's the GPU's fault (1)

jedidiah (1196) | more than 2 years ago | (#39937441)

> You are thinking about this all wrong. You think you own that movie... but you don't, you own a license.

Your attempt to spread that pro-corporate propaganda simply won't work here. We know better.

Incompetent Author? (-1)

Anonymous Coward | more than 2 years ago | (#39936275)

I have no experience with any of these software titles, but it is a bold statement to say that software is wretched without attaining enough mastery of it to get it to perform correctly and to point out the design flaws that justify the description. Illiterate young children may have a hard time using the Internet; high school dropouts might find tax software difficult to use; that does not make the state of the Internet or of tax software wretched.

I fail to see how the author established himself as competent in this domain. What led me to question his competency is the fact that he tries to compare video quality without holding bitrate or resolution steady.

Re:Incompetent Author? (5, Informative)

Dputiger (561114) | more than 2 years ago | (#39936859)

I set out to test presets. Specifically, I set out to test the presets of software packages which are sold on the purported *strength* of those presets. I say so in the first paragraph:

" Our goal was to find a program that would encode at a reasonably high quality level (~1GB per hour was the target) and require a minimal level of expertise from the user."

That's why MediaCoder results weren't included.

The entire article came about because Cyberlink's iPhone 4S preset yielded files that were 1.4GB if I used CPU encoding or a GTX 580, and 188MB if I used Quick Sync. That disparity is what I noticed when I went to check encode quality for the initial IVB review.

Can you build custom profiles in CME and create outputs that avoid these problems? You can -- though some options aren't available. That, however, is not the point. If I'm going to build my own custom profiles, I can download a copy of MediaCoder for free and do it with a more powerful piece of software that offers a huge number of options.

I did a review of software that claims to automate the GPU encode process. I did not do a review of "Can Cyberlink MediaEspresso EVER create a decent image?" Given what I set out to evaluate, my ability to tweak profiles to achieve a satisfactory result is not a valid criterion for my conclusions.

Intel QuickSync is the true winner (1)

TPoise (799382) | more than 2 years ago | (#39936335)

So basically the article says GPU rendering is bad, but QuickSync is good enough for prime time.

Duh. QS is made to do a very specific task (encoding/decoding video), and it can do it super fast at decent quality. There's always a tradeoff of quality vs. encoding time. With QS, I can rip an entire 50GB Blu-ray in 12 minutes to a 1080p MKV @ 8000kbps. It takes about 16 hours doing the same task with a normal x264 encoder such as Handbrake, even though the quality is a little bit better. Is it worth waiting around 16 hours for me? Nope.

With enough bitrate, anything looks good. The key is just to bump the bitrate up to something very high in MediaCoder when using QuickSync for encoding.

Re:Intel QuickSync is the true winner (1)

nabsltd (1313397) | more than 2 years ago | (#39936823)

With QS, I can rip an entire 50GB Blu-ray in 12 minutes to a 1080p MKV @ 8000kbps. It takes about 16 hours doing the same task with a normal x264 encoder such as Handbrake, even though the quality is a little bit better.

Even using the "slower" preset on x264, 1080p encodes take about 3 times as long as the movie, so no more than 8 hours. This is on a slower CPU than yours (since you have QuickSync), and I end up at about 4Mbps.

If I used a less-intensive preset, I would get encodes at about the same bitrate as yours, but taking just a little more than the running time of the movie to do it. QuickSync may be even faster, but 3 hours to encode most movies is good enough.

With enough bitrate, anything looks good.

In general, this is true, but very poor encoders can still screw up a high-bitrate encode. That's why I use x264... it's going to give me the absolute best quality and lowest bitrate for the amount of encoding time. Since I only encode my movies one time, taking 6-8 hours to do it and getting bitrates as low as 2Mbps for 1080p with high picture quality is worth the extra time. It's also nice to be able to use the full power of AviSynth during the encode. My Blu-ray rip of "A New Hope" doesn't have the useless "Jabba in the hangar" scene, and I'm working on getting Han shooting first.

Re:Intel QuickSync is the true winner (2, Informative)

Dputiger (561114) | more than 2 years ago | (#39936879)

No, the article says that GPU encoding software runs the gamut from outright awful to simply broken and limited. Quick Sync video is great in Arcsoft, terrible in Cyberlink, unsupported in Xilisoft, and looks decent in MediaCoder. Check the GTX 580's output in Xilisoft for plenty of proof that no, you don't need insane bitrates to create decent-looking output.

Re:Intel QuickSync is the true winner (2)

spire3661 (1038968) | more than 2 years ago | (#39936949)

OK, so I have a Sandy Bridge K processor. What else do I need to make QS work?

Hardware transcoding, not GPU (1)

mapuche (41699) | more than 2 years ago | (#39936409)

That's why video professionals and TV stations rely on hardware-based transcoding, and these solutions tend to be expensive. There should be many systems that encode H.264 video really fast, something like this: http://www.blackmagic-design.com/products/teranex/

Re:Hardware transcoding, not GPU (3, Interesting)

nabsltd (1313397) | more than 2 years ago | (#39936857)

That's why video professionals and TV stations rely on hardware-based transcoding, and these solutions tend to be expensive.

x264 can encode 1080p in realtime on a modern Intel CPU (Sandy Bridge, etc.) with pretty much as good quality for the same bitrate as most hardware solutions. For non-HD, x264 just smokes hardware, as it can do better-than-realtime encodes at very high quality on those same CPUs.

Re:Hardware transcoding, not GPU (1)

jedidiah (1196) | more than 2 years ago | (#39937455)

Fascinating. The fact that hardware based transcoding is a disaster is why "professionals" use hardware based transcoding?

That simply makes no sense.

How about DVDFab? (1)

artor3 (1344997) | more than 2 years ago | (#39936755)

I use DVDFab to rip DVDs using my GPU, and it positively flies. Most 2-hour movies take around 10 minutes to convert to H.264. It doesn't support VBR, but outside of that I've never had trouble with it. The resulting video quality is quite good as well (except with files that need deinterlacing, but that's always a problem). I think the person who wrote the articles just didn't try the right programs.

Re:How about DVDFab? (1)

PhrostyMcByte (589271) | more than 2 years ago | (#39937537)

Have you tried x264 with --preset veryfast? My experience is that x264 is able to match a GPU encoder's speed while still giving significantly better quality. I'd only bother with a GPU encoder if I had a terrible CPU (netbook, phone?).

Please see real transcoders (2)

TheSync (5291) | more than 2 years ago | (#39937729)

Please see Elemental Technologies' [elementalt...logies.com] GPU-accelerated H.264 transcodes.
