
Five Nvidia CUDA-Enabled Apps Tested

ScuttleMonkey posted more than 5 years ago | from the need-for-speed dept.

Graphics 134

crazipper writes "Much fuss has been made about Nvidia's CUDA technology and its general-purpose computing potential. Now, in 2009, a steady stream of launches from third-party software developers sees CUDA gaining traction in the mainstream. Tom's Hardware takes five of the most interesting desktop apps with CUDA support and compares the speed-up yielded by a pair of mainstream GPUs versus a CPU alone. Not surprisingly, depending on the workload you throw at your GPU, you'll see results ranging from average to downright impressive."



Nice, but... (2, Funny)

mikiN (75494) | more than 5 years ago | (#28004435)

post.push("First!");

All fine and dandy, but...does it run Linux?

Re:Nice, but... (4, Informative)

slummy (887268) | more than 5 years ago | (#28004455)

CUDA is a framework that will work on Windows and Linux.

Re:Nice, but... (2, Informative)

mikiN (75494) | more than 5 years ago | (#28004669)

Queue mip-mapped, 8xAA, subpixel rendered, fogged, PhysX enhanced flyby of a 'Whoosh' passing over your head.

The question was not whether CUDA runs _on_ Linux, but whether the GPU itself can run Linux.

I can imagine that, if we had ever been given all the specs, a multi-function DSP card like IBM's Mwave could. It would probably even be able to read aloud console messages (besides being a graphics card and modem, it's also a sound card).

Re:Nice, but... (1)

Dragonslicer (991472) | more than 5 years ago | (#28006247)

Queue mip-mapped, 8xAA, subpixel rendered, fogged, PhysX enhanced flyby of a 'Whoosh' passing over your head.

What, this thing runs on AA batteries? Sweet.

And as a side note, unless you were talking about a long line of whooshes, the word you were looking for is "cue".

Re:Nice, but... (0)

Anonymous Coward | more than 5 years ago | (#28006317)

Queue mip-mapped, 8xAA, subpixel rendered, fogged, PhysX enhanced flyby of a 'Whoosh' passing over your head.

What, this thing runs on AA batteries? Sweet.

And as a side note, unless you were talking about a long line of whooshes, the word you were looking for is "cue".

AA = Antialiasing.

Wait, I sense that thing is flying towards me now...

Re:Nice, but... (4, Informative)

gustgr (695173) | more than 5 years ago | (#28004463)

I know you are trolling, but actually CUDA applications work better on Linux than on Windows. If you run a CUDA kernel on Windows that lasts longer than 5-6 seconds, your system will hang. The same happens on Linux, but there you can simply disable the X server, or have one card drive your graphical display and another act as your parallel co-processor.
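
The usual workaround for that display watchdog is to never issue one long-running kernel launch at all, but to split the work into many short launches. Here is a rough, CPU-side Python sketch of that idea; `launch_kernel` is a hypothetical stand-in for one kernel launch plus synchronization, and the batch-size adaptation is illustrative, not any particular driver's behavior.

```python
import time

def run_in_chunks(work_items, launch_kernel, budget_s=1.0):
    """Split one long computation into many short 'kernel launches' so no
    single launch exceeds the display driver's watchdog window.

    launch_kernel(batch) stands in for one launch + sync on the device.
    """
    results = []
    chunk = 1024  # initial batch size; tuned so a launch stays under budget
    i = 0
    while i < len(work_items):
        start = time.perf_counter()
        results.extend(launch_kernel(work_items[i:i + chunk]))
        elapsed = time.perf_counter() - start
        i += chunk
        if elapsed > 0:
            # Adapt the batch size toward half the time budget.
            chunk = max(1, int(chunk * budget_s / elapsed / 2))
    return results

# Example: a trivial "kernel" that squares its inputs.
out = run_in_chunks(list(range(10000)), lambda batch: [x * x for x in batch])
```

The result is identical to one big launch; the only change is that the driver sees control returned frequently, which keeps the watchdog quiet.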

Re:Nice, but... (0)

Anpheus (908711) | more than 5 years ago | (#28004779)

Are you certain this is the case?

I'm curious because ATI/AMD appear to have solved that problem, in that I can run the Folding@Home GPU client and my displays still run. I'm running Windows 7 with Aero, so it's hitting the GPU not the CPU for my displays.

Re:Nice, but... (4, Informative)

3.1415926535 (243140) | more than 5 years ago | (#28004831)

Folding@Home runs its computations in short bursts. gustgr is talking about a single computation kernel that takes more than 5-6 seconds.

Re:Nice, but... (1)

Anpheus (908711) | more than 5 years ago | (#28005415)

Thanks for the clarification.

Re:Nice, but... (3, Informative)

Jah-Wren Ryel (80510) | more than 5 years ago | (#28004837)

He's not talking about how long the app itself runs, but how long each subroutine that runs on the GPU takes before returning something back to the app on the CPU side. If that subroutine takes too long to complete, Windows gets unhappy. I don't remember if it was a watchdog timer thing or a bus-locking thing or something else. I don't even know if it's been fixed or not.

Re:Nice, but... (1)

Anpheus (908711) | more than 5 years ago | (#28005673)

Thanks for the clarification, as well.

Re:Nice, but... (2, Informative)

bigstrat2003 (1058574) | more than 5 years ago | (#28005265)

I know you are trolling...

No, he's joking. Stop crying troll when there's not even a hint of troll, for God's sake.

...but actually CUDA applications work better on Linux than on Windows.

Read carefully. He said "does it run Linux?", not "does it run on Linux?". Overused slashdot meme it might be, but the joke still went miles above your head.

Re:Nice, but... (1)

jgtg32a (1173373) | more than 5 years ago | (#28005757)

I'll give him the benefit of the doubt on "does it run Linux" vs. "does it run on Linux". I read it the same way and didn't notice until I saw your comment.

Re:Nice, but... (4, Funny)

mikiN (75494) | more than 5 years ago | (#28004479)

Well, everywhere else in the world, Linux runs the CUDA Toolkit [nvidia.com] , so I can imagine that in Soviet Russia, a Beowulf cluster of Nvidia cards run Linux.

Re:Nice, but... (-1, Troll)

Anonymous Coward | more than 5 years ago | (#28004481)

but...does it run Linux?

Does it matter? Linux is not anywhere close to the target market, here--not when it's still gaining traction, as the article states. It would be foolish to focus on the niche *nix platform. That'll come when the technology matures, I'm sure.

It's safe to say that a vast majority of owners of higher-end GPUs are running Windows.

Re:Nice, but... (5, Insightful)

Jah-Wren Ryel (80510) | more than 5 years ago | (#28004889)

Does it matter? Linux is not anywhere close to the target market,

Linux support for CUDA matters hugely; Linux boxes are head and shoulders above any other market for CUDA-based software. That's because Linux is the OS for supercomputing nowadays, and CUDA's biggest niche is exactly the kind of number crunching typically associated with supercomputer workloads.

In fact, these GPUs are yet another example of how there is nothing new under the sun. A GPU is very much like the vector processor of Cray-style supercomputing (when Cray was still alive that is) aka SIMD (single instruction, multiple data). [wikipedia.org]

Re:Nice, but... (1)

David Greene (463) | more than 5 years ago | (#28004983)

Uhh...Cray [cray.com] is still very much alive. And doing vectors. And threads. And multicore. All long before Intel/AMD.

Re:Nice, but... (4, Informative)

Jah-Wren Ryel (80510) | more than 5 years ago | (#28005655)

Uhh...Cray is still very much alive. And doing vectors. And threads. And multicore. All long before Intel/AMD.

Seymour Cray was killed by a speeding redneck in a trans-am in 1996.

The company currently known as Cray was formerly known as Tera, which bought the assets of Cray Research from SGI; SGI had acquired Cray Research after Seymour left to form Cray Computer, which is also defunct.

Seymour was never significantly involved in multi-core or multi-threaded processors or NUMA. In fact, he specifically avoided designs even hinting of that sort of complexity because he felt that simplicity in design made it easier to fully utilize the maximum performance of the hardware.

Re:Nice, but... (1, Funny)

umeboshi (196301) | more than 5 years ago | (#28005873)

Seymour Cray was killed by a speeding redneck in a trans-am in 1996.

Well, at least it wasn't a speeding redneck in a 'cuda. ;)

Re:Nice, but... (0)

David Greene (463) | more than 5 years ago | (#28007431)

Seymour Cray was killed by a speeding redneck in a trans-am in 1996.

So? Cray != Seymour. In fact the most successful Cray machines were not designed by Seymour.

The company currently known as Cray as formerly known as TERA, which bought the assets of Cray Research from SGI who acquired Cray Research after Seymour had left to form Cray Computer which is also defunct.

So? Many of the engineers there have been there for a long time. Even if they've been bounced around between companies, it's a good number of the same people. And who's to say that SGI and Tera didn't provide some good brainpower to the current Cray, Inc.? No one has a monopoly on good design.

Seymour was never significantly involved in multi-core or multi-threaded processors or NUMA. In fact, he specifically avoided designs even hinting of that sort of complexity because he felt that simplicity in design made it easier to fully utilize the maximum performance of the hardware.

So? Seymour was wrong. It worked in the early days of CDC and Cray Research but it doesn't work any more. The microprocessor vendors made sure of that. Honestly, the man wasn't a god.

MIMD (1)

Gary W. Longsine (124661) | more than 5 years ago | (#28005807)

Apple and other OpenCL partners are undoubtedly looking forward, beyond SIMD, to the coming generation of MIMD [wikipedia.org] capable GPU such as the nVIDIA GT300 [brightsideofnews.com] .

Re:Nice, but... (5, Interesting)

parlancex (1322105) | more than 5 years ago | (#28006845)

In fact, these GPUs are yet another example of how there is nothing new under the sun. A GPU is very much like the vector processor of Cray-style supercomputing (when Cray was still alive that is) aka SIMD (single instruction, multiple data). [wikipedia.org]

Actually, not quite. The execution architecture in Nvidia's G80-series GPUs and onwards is actually SIMT: single instruction, multiple threads. The not-so-subtle difference is that in a SIMD vector architecture the application explicitly manages instruction-level divergence, which will generally narrow the SIMD width of divergent paths down to 1, whereas in a SIMT architecture, when threads diverge within a warp, all divergent threads executing the same branch within that warp can be issued an instruction simultaneously, with the threads not on that branch inactive for that cycle. This is transparent to the application.

Currently, in Nvidia's latest architecture, the warp size is still statically set at 32 threads, so you'll see performance penalties when threads within any warp diverge, proportional to the number of unique paths taken. Interestingly, the next iteration of the hardware is rumored to feature a thread scheduler capable of variable warp sizes, probably still with some lower bound. This would bring the GPU much closer to the ideal "array of independently executing processing cores" that we have in modern CPUs, but with obviously far more cores.
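
That divergence penalty can be sketched with a toy cost model: per warp, the scheduler serializes the distinct paths, so issue cycles grow with the number of unique branches taken within the warp. This is a simplification for illustration (real hardware reconverges at synchronization points and has other costs), not a cycle-accurate model.

```python
def simt_issue_cycles(branch_taken, warp_size=32):
    """Toy SIMT divergence model: each warp spends one issue pass per
    distinct execution path taken by its threads (1 if the warp is uniform).

    branch_taken: one path label per thread, warps are consecutive slices.
    """
    cycles = 0
    for w in range(0, len(branch_taken), warp_size):
        warp = branch_taken[w:w + warp_size]
        cycles += len(set(warp))  # one pass per unique path in this warp
    return cycles

uniform   = [0] * 64                     # two warps, no divergence
divergent = [i % 2 for i in range(64)]   # every warp splits two ways
```

Under this model the uniform case costs 2 issue passes (one per warp) and the divergent case 4, which is the "proportional to the number of unique paths taken" behavior described above.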

Re:Nice, but... (3, Insightful)

fuzzyfuzzyfungus (1223518) | more than 5 years ago | (#28005349)

If anything, NVIDIA is likely far more interested in CUDA working on Linux than in OpenGL working on Linux (something they obviously do have some interest in).

Gamers, certainly, most likely have Windows systems. Workstation applications are likely a good chunk of Windows, with a slice of Mac, and some Linux.

Bulk crunching, though, which is where CUDA might make NVIDIA some real money, is overwhelmingly Linux based. Linux is, by a substantial margin, the obvious choice for big commodity clusters.

The war begins. (2, Interesting)

XPeter (1429763) | more than 5 years ago | (#28004523)

With NVIDIA slowly pushing its way into the CPU market (CUDA is the first step; in a few years I wouldn't be surprised if Nvidia started developing processors) and Intel trying to cut into Nvidia's GPU market share with Larrabee http://en.wikipedia.org/wiki/Larrabee_(GPU) [wikipedia.org] , we'll see who can develop outside of their box faster. This is good news for AMD, since Intel will be more focused on Nvidia instead of being neck and neck with them in the processor market. Hey, maybe AMD will regain its power in the server and netbook realms.

There's also going to be a battle of patents pretty soon too. Wish I was a tech lawyer.

Re:The war begins. (2, Interesting)

David Greene (463) | more than 5 years ago | (#28005023)

It's going to be interesting to see how Larrabee and AMD's Fusion battle it out. With Larrabee, Intel is taking a tightly integrated approach. One can easily imagine that LRBni will be integrated into mainstream CPUs in the not-so-distant future, at which point Intel will argue that no one needs a GPU.

AMD, on the other hand, is taking the approach of (relatively) loosely coupled specialized processors: one, the CPU, for general-purpose/integer/branchy code, and the GPU for graphics (and HPC?).

Currently my bet is on Intel because of the much simpler Larrabee programming model. But if the performance isn't there, things could get heated.

Re:The war begins. (1)

Bat Country (829565) | more than 5 years ago | (#28006347)

I'd honestly like to see the two work together to produce some sort of sickeningly powerful rendering setup.

A processor which was good at preprocessing a scene for maximum performance on the GPU hardware and built-in support for multiple display adapters, plus an on-board chip which handles outputting the resulting images via the digital-link-du-jour.

This sort of setup would mean that rather than having to update your GPUs every two years (you could just buy another one to run in parallel) - the graphics card manufacturers could get better at producing the hardware with a larger profit margin due to longer product lifetimes, the CPU manufacturers could get in on the action like they so clearly want to, and the motherboard chipset manufacturers could get in endless bidding wars to produce the best output signal pipeline and video decoders.

Nobody would come out a loser, and the whole thing would be more friendly to consumers in a depressed economy, which I've no doubt customers would respond to.

Tied to a card (5, Insightful)

ComputerDruid (1499317) | more than 5 years ago | (#28004531)

What I don't understand is why people hype a technology that is tied to a specific manufacturer's cards. If Nvidia died tomorrow, we'd have a fair amount of code that's no longer relevant, unless there were some way to design cards that are CUDA-capable but not Nvidia.

Also worth noting that I'd completely forgotten CUDA even ran on windows, as I've only heard it in the context of linux recently.

Re:Tied to a card (5, Insightful)

gustgr (695173) | more than 5 years ago | (#28004585)

OpenCL will hopefully help to set a solid ground for GPU and CPU parallel computing, and since it is not technically very different from CUDA, porting existing applications to OpenCL will not be a challenge. Nowadays with current massively parallel technology the hardest part is making the algorithms parallel, not programming any specific device.

Re:Tied to a card (1)

egr (932620) | more than 5 years ago | (#28004613)

I think there was an open source alternative which is not tied to any card, but I forgot its name. And I never programmed for it, so I don't know how well it performs.

Re:Tied to a card (2, Informative)

Caelius (1282378) | more than 5 years ago | (#28004649)

OpenCL is the open source CUDA alternative. http://en.wikipedia.org/wiki/OpenCL [wikipedia.org]

Re:Tied to a card (0)

Anonymous Coward | more than 5 years ago | (#28004761)

How is OpenCL "open source"??

Re:Tied to a card (1)

Darkness404 (1287218) | more than 5 years ago | (#28004851)

Cross platform, royalty free, support from all major vendors... etc.

Re:Tied to a card (4, Informative)

jared9900 (231352) | more than 5 years ago | (#28004929)

But OpenCL is a specification, not an implementation. The only three implementations I'm currently aware of are Apple's (with Snow Leopard), the implementation AMD demoed back in March, and Nvidia's beta implementation. So far none of those are open source. If you're aware of an open source implementation, please let me know; I'm actually very interested in it, but have yet to locate one.

OpenCL is an Open Standard Compute Language (5, Informative)

Gary W. Longsine (124661) | more than 5 years ago | (#28005623)

It's not really clear what you're looking for, possibly because you're looking for the wrong thing. It might help if you first spend an hour or three learning a little more about OpenCL, and reading up at various sites to see who's doing what.

OpenCL is an Open Standard compute language which comprises:
  • a language extended from C99,
  • a platform (hardware + OpenCL-aware device driver), and
  • a compiler and runtime (which may decide where to send a compute task at run time).

If you're writing an OpenCL-aware device driver for a GPU, you'll probably need to wait a bit for some open source examples. It's reasonably likely that there will be some included in Darwin [apple.com] (once updated for Snow Leopard).

Look to the LLVM [llvm.org] project (sponsored heavily by Apple and others) for an open source compiler which will (if it doesn't already) know about OpenCL.

It sounds like you might be looking for a higher-level API which lets you use OpenCL more easily, or possibly for language bindings to Java or Python. I suspect you'll see those coming along once Apple ships Snow Leopard and people have a chance to kick the tires, then integrate LLVM into their tool chains, extend various higher-level APIs, bridge to Java, and whatnot.

The earliest high level API to take easy and broad advantage of OpenCL will probably be from Apple, of course. They'll likely provide some nicely automatic ways to take advantage of OpenCL without programming the OpenCL C API directly. As a Cocoa programmer, you'll be using various high level objects, maybe an indexer for example, which have been taught new OpenCL tricks. You'll just recompile your program and it will tap the GPU as appropriate and if available. The Cocoa implementation is closed source, but people will see what's possible and emulate it in various open source libraries, on other platforms, for Java and other languages.

Here's a good place to start: OpenCL - Parallel Computing on the GPU and CPU [ucdavis.edu] . Follow up with a google search.

Re:Tied to a card (1)

jared9900 (231352) | more than 5 years ago | (#28004853)

OpenCL is not open source; OpenCL is a specification for a CUDA-equivalent language and API. Drivers are still necessary, and will likely be produced by the makers of the graphics hardware (ATI, Nvidia, Intel). Open source drivers and compilers are certainly possible, but I wouldn't expect them to be equivalent to the closed source stuff for some time yet.

Re:Tied to a card (3, Informative)

TheRaven64 (641858) | more than 5 years ago | (#28004879)

OpenCL is an open standard, but there is not yet an open source implementation. That said, OpenCL is very similar to GLSL, and there is already a GLSL front end for LLVM being worked on by Mesa and Tungsten Graphics, so extending it to support OpenCL should be relatively easy.

Re:Tied to a card (1)

Caelius (1282378) | more than 5 years ago | (#28006537)

OpenCL is an open standard, but there is not yet an open source implementation.

Thanks for clarifying to everyone for me. I was in a hurry and misspoke. I was trying to imply that it wasn't tied to a single company/entity like CUDA, but rather a consortium of industry players, and "open-source" is what my fingers typed, instead of "open standard." Gah.

Re:Tied to a card (4, Informative)

Anonymous Coward | more than 5 years ago | (#28004923)

I hear this a lot in CUDA/GPGPU-related threads on slashdot, primarily from people who simply have zero experience with GPU programming. The bottom line is that in the present and for the foreseeable future, if you are going to try to accelerate a program by offloading some of the computation to a GPU, you are going to be tying yourself to one vendor (or writing different versions for multiple vendors) anyways. You simply cannot get anything approaching worthwhile performance from a GPU kernel without having a good understanding of the hardware you are writing for. nVidia has a paper [nvidia.com] that illustrates this excellently, in which they start off with a seemingly good "generic" parallel reduction code and go through a series of 7 or 8 optimizations -- most of them based on knowledge of the hardware -- and improve its performance by more than a factor of 30 versus the generic implementation.

Another thing to keep in mind is that CUDA is very simple to learn as an API -- if you're familiar with C you can pick up CUDA in an afternoon easily. The difficulty, as I said in the previous paragraph, is optimization; and optimizations that work well for a particular GPU in CUDA will (or at least should) work well for the same GPU in OpenCL.
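
For readers who haven't seen the reduction example the parent mentions: the kernel's core is a tree-shaped combine, where at each step element i absorbs element i + stride until one value remains. Below is a sequential Python sketch of just that access pattern (the Nvidia paper's actual speedups come from hardware-specific refinements on top of it: coalesced addressing, avoiding divergent branches, warp unrolling, and so on).

```python
def tree_reduce(values, op=lambda a, b: a + b):
    """Sequential sketch of a GPU-style tree reduction: at each step,
    element i combines with element i + stride; the stride doubles each
    round, so the whole array collapses in about log2(n) steps.
    """
    vals = list(values)
    n = len(vals)
    stride = 1
    while stride < n:
        for i in range(0, n, 2 * stride):
            if i + stride < n:
                vals[i] = op(vals[i], vals[i + stride])
        stride *= 2
    return vals[0]
```

On a GPU the inner loop runs as one parallel step per stride; this sketch only shows which elements get combined, which is the part the successive optimizations in the paper leave unchanged.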

OpenCL - UnTied to a card (1)

Gary W. Longsine (124661) | more than 5 years ago | (#28005695)

That's the whole point of the OpenCL architecture: to let the compiler figure out the hardware-specific optimizations. If you want a cross-platform, GPU-independent mechanism to:

[ _Booming_ _Monster_ _Truck_ _Voice_]
Tap the hidden potential of your GPU! then you want OpenCL.

Re:OpenCL - UnTied to a card (0)

Anonymous Coward | more than 5 years ago | (#28005957)

nVidia could not manage to make the magical optimizing compiler for their own API and their own hardware, nor could ATI/AMD make such a compiler for their API and their hardware. Why on earth are people expecting that the OpenCL implementations are going to manage to do any better? Furthermore, the OpenCL code that I've looked at so far in the beta OpenCL SDK from nVidia is very similar (in design and optimization) to the equivalent code from the CUDA SDK.

Re:Tied to a card (2, Interesting)

mathimus1863 (1120437) | more than 5 years ago | (#28005275)

In general, it's not tied to a card. CUDA itself might be NVIDIA-dependent, but general-purpose GPU programming is not, and other manufacturers will have similar interfaces to GP-GPU programming, eventually.

As for my own experience with it... everyone at work is going crazy over them. One of our major simulations implements a high-fidelity IR scene modeler. It used to take 2 seconds per frame on CPU-only. They re-wrote it with GPU and got it down to 12 ms.

Anything that is highly parallelizable with low memory transfer reqts will get a pretty impressive speedup. My co-worker who has been doing this for a year now was explaining that computation is essentially free, it's the memory operations which are the bottleneck.
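
The "computation is essentially free, memory is the bottleneck" observation is exactly what a roofline-style back-of-envelope check captures: a kernel's attainable throughput is capped either by compute or by memory traffic times arithmetic intensity, whichever is lower. The sketch below uses made-up peak numbers purely for illustration.

```python
def attainable_gflops(flops, bytes_moved, peak_gflops, peak_gbps):
    """Roofline-style estimate: a kernel is limited either by raw compute
    (peak_gflops) or by memory traffic (intensity * peak_gbps), whichever
    is lower. intensity = FLOPs per byte moved to/from device memory.
    """
    intensity = flops / bytes_moved  # FLOPs per byte
    return min(peak_gflops, intensity * peak_gbps)

# Hypothetical GPU: 1000 GFLOP/s compute, 100 GB/s memory bandwidth.
# SAXPY (y = a*x + y, single precision): 2 FLOPs per 12 bytes moved.
saxpy = attainable_gflops(flops=2, bytes_moved=12,
                          peak_gflops=1000, peak_gbps=100)
```

With those assumed peaks, SAXPY lands around 17 GFLOP/s, far below the compute ceiling, i.e. heavily memory-bound; only kernels with high FLOPs-per-byte see the "essentially free" compute.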

Re:Tied to a card (1)

jasprov (1521977) | more than 5 years ago | (#28005563)

That's where abstraction and specialization come into play. After defining your algorithm for independent use, specialize and optimize it to exploit current or future hardware. This gives you a fallback for the calculation, and extremely enhanced performance for the life and support of said hardware. And, as others have pointed out, it's a stepping stone to an OpenCL implementation, eventually giving you multiple vendors to rely on.

If NVIDIA goes out of business or drops support in two years, how much more work will you have gotten done over that time? If it's any less than the cost of implementing the specialized solution, it's worth it.

Is there risk? Yes. And, it's highly mitigated with the abstracted solution and migration paths.

Re:Tied to a card (0)

Anonymous Coward | more than 5 years ago | (#28006383)

Just as all x86 code will no longer be relevant if Intel died tomorrow.

For folders (3, Informative)

esocid (946821) | more than 5 years ago | (#28004539)

Folding@home [stanford.edu] can use CUDA on Linux, but you have to compile the CUDA driver first.

Tom's Hardware (0, Troll)

sexconker (1179573) | more than 5 years ago | (#28004553)

Totally not a biased, money-hatted site. Totally. Trust us.

(Not saying they're biased in this case, but because of the bullshit they've pulled in the past I'll never visit their site again.)

Re:Tom's Hardware (2, Interesting)

crazipper (1250580) | more than 5 years ago | (#28004701)

I'd welcome the opportunity to prove otherwise. I've been managing editor for the last year, and much has changed. Best, Chris

Re:Tom's Hardware (3, Insightful)

ChunderDownunder (709234) | more than 5 years ago | (#28004945)

To be honest, it's all about advertising.

C'mon, 15 pages? You wonder why few of us ever RTFA...

Make Slashdot linked articles direct to a single page version, with maybe a handful of ads, and we may stick around and look at the rest of your site. Otherwise, it's potentially 1 million readers who may not bother clicking the URL, or just skip to the conclusion and miss the point of the article - perhaps hurting sales of advertised nvidia cards, the crux of the article's technology.

Re:Tom's Hardware (1, Informative)

crazipper (1250580) | more than 5 years ago | (#28005079)

I'll pass this feedback along to the design guys, but do you *really* want to scroll through 4,000 words and 50-some charts, rather than looking at just the pages you're interested in reading? Surely the length would be a bigger problem if there wasn't an index, right? TBH, I'm most focused on the editorial side of things.

Re:Tom's Hardware (1)

XPeter (1429763) | more than 5 years ago | (#28005161)

Chris, as long as you keep the drop-down menu I'll keep reading Tom's.

Re:Tom's Hardware (1)

crazipper (1250580) | more than 5 years ago | (#28005215)

Cheers X. The devs got rid of it for a few days there and they definitely got an earful ;-)

Re:Tom's Hardware (1)

XPeter (1429763) | more than 5 years ago | (#28005261)

Oh and one thing...when's the next SBM? I'm looking to build a new rig in the 2-2.5k range and I want to use the SBM's as a guide. We need some Q1 charts soon too :) Anyway enough with my demands, keep up the good work. Love the site. -Peter

Re:Tom's Hardware (1)

crazipper (1250580) | more than 5 years ago | (#28005283)

Next SBM starts next Monday and includes $600, $1,350, and $2,250 price points. Oh--and hold off on the purchase. All three systems are actually going to be given away this time around, so you never know. Might win one :)

Re:Tom's Hardware (1)

XPeter (1429763) | more than 5 years ago | (#28005315)

If I won one of the PC's, I wouldn't use it. It would be placed on a glass shelf in my room and if someone goes near it, I release the hounds :)

Re:Tom's Hardware (3, Insightful)

ChunderDownunder (709234) | more than 5 years ago | (#28005335)

Definitely YES, if it's an article worth viewing. I mightn't think I'm interested in a topic, only to find I am. :) Clicking a link after a screen only disrupts one's concentration, while the next page loads, when most of us just use a scroll wheel. And as far as revenue goes, you can fill an entire sidebar with ads, if lost advertising is a concern...

And to whoever moderated his post a troll, get a life. He's trying to improve the experience for us readers and we should encourage dialog...

Re:Tom's Hardware (2, Insightful)

Jah-Wren Ryel (80510) | more than 5 years ago | (#28006037)

I'll pass this feedback along to the design guys, but do you *really* want to scroll through 4,000 words and 50-some charts, rather than looking at just the pages you're interested in reading?

Yes, I do. I can scroll just fine thank you and I can also use the browser's built in word search to find specific words anywhere in the current page, but I can't do that and stay sane at the same time if I have to click 15 times and search 15 times for each word I might want find.

Surely the length would be a bigger problem if there wasn't an index, right?

Put the index in a sidebar or at the top of the single page. HTML has had document internal anchor points since pretty much day 1.

Re:Tom's Hardware (2, Interesting)

perryizgr8 (1370173) | more than 5 years ago | (#28007043)

yeah, make it like wikipedia articles. they are long but easily navigable.

Re:Tom's Hardware (3, Insightful)

Boba001 (458898) | more than 5 years ago | (#28007503)

What's with only allowing registered users access to the print version? I pretty much gave up on being able to read the article after seeing that.

Re:Tom's Hardware (1)

rasherbuyer (225625) | more than 5 years ago | (#28005377)

There were ads?

Re:Tom's Hardware (1)

ChunderDownunder (709234) | more than 5 years ago | (#28005469)

Yeah, try turning your ad-blocker off once in a while, for the full internet experience! :)

Re:Tom's Hardware (2, Informative)

linhares (1241614) | more than 5 years ago | (#28005399)

and seriously, are you talking gpgpu performance or the magical wonders of seti@home, h264, science funding, and so on? So many pages wasted... and of course, much worse: my time wasted on the poetry.

If you absolutely need this type of wandering off to have more pages and more clicks to survive on the web, then I'm concerned your site may not last for very long. I personally love the site, but these 15-page wonderings off the subject drive me fucking nuts.

Re:Tom's Hardware (1)

Kagura (843695) | more than 5 years ago | (#28005073)

I'd welcome the opportunity to prove otherwise. I've been managing editor for the last year, and much has changed. Best, Chris

Tom's Hardware has been the best consistent site that I've gone to for the past four video cards I've bought (spanning many years). I'm happy with their benchmarks, more or less. I can deal with the 15 pages per article, but I am not impressed with that aspect.

Re:Tom's Hardware (3, Insightful)

Khyber (864651) | more than 5 years ago | (#28005441)

Here's why you're proven to be a money-hatted site.

Advertising bandwidth versus actual article content bandwidth. Your advertising uses up about 2500% more bandwidth than the actual article content.

You care more about advertising than you do about content. That's why you split everything up into so many pages, when it could have been done in less than two, single-spaced, in 20-point font.

Re:Tom's Hardware (0)

Anonymous Coward | more than 5 years ago | (#28004729)

Used to go there all the time, but stopped going because they weren't posting the kind of reviews that I wanted to read (up-to-date roundups).

What bullshit are you referring to?

Re:Tom's Hardware (1)

feepness (543479) | more than 5 years ago | (#28004803)

Without being specific about the bullshit you are referring to, you just make yourself look like a fanboi whose favorite card was slammed.

Re:Tom's Hardware (5, Funny)

XPeter (1429763) | more than 5 years ago | (#28004817)

Totally not a biased, money-hatted site. Totally. Trust us.

Hi! You must be new to the internet as well as Slashdot, let me give you some tips.

        1. Always use the word "lunix" in place of "linux" in slashdot's discussion forums.
        2. You can steal mod points by copying someone else's insightful comment and pasting it as a reply to an earlier one.
        3. Mac users are a bunch of fucking queers.
        4. When there's something you need to do that can't be done with Windows but can be done with Lunix, keep in mind that you can do an even better job with Mac OS X. Some argue that BSD can do it better but no one makes software for BSD since no one gives a flying fuck.
        5. Adequacy.org was one of the best sites on the internet. Want to know if your son's a computer hacker? Click here! http://www.adequacy.org/stories/2001.12.2.42056.2147.html [adequacy.org]

Good luck, friend!

Ya... (1)

msimm (580077) | more than 5 years ago | (#28006121)

For once in my life I had to RTFA (all the way through) to see if he was really serious.

In extreme cases, over-exposure to computer radiation can cause schizophrenia

That explains so much about me. Classic. Great link. ;-)

SETI? (4, Informative)

NiteMair (309303) | more than 5 years ago | (#28004609)

Waste your GPU cycles on something more interesting than SETI...

http://www.gpugrid.net/
http://distributed.net/download/prerelease.php (ok, maybe that's less interesting...)

And why limit this discussion to CUDA? ATI/AMD's STREAM is usable as well...

http://folding.stanford.edu/English/FAQ-ATI

Re:SETI? (1)

ComputerDruid (1499317) | more than 5 years ago | (#28004685)

As of now, though, nvidia's CUDA has all of the hype, as well as a handful of applications developed for the platform.

Re:SETI? (0)

Anonymous Coward | more than 5 years ago | (#28006145)

Waste your GPU cycles on something more interesting than SETI...

http://www.gpugrid.net/
http://distributed.net/download/prerelease.php (ok, maybe that's less interesting...)

And why limit this discussion to CUDA? ATI/AMD's STREAM is usable as well...

http://folding.stanford.edu/English/FAQ-ATI

More interesting? Yeah, right. Are any of those ever going to get me the alien porn I need? NO. Therefore they are all a waste of time, QED.

will it run linux? (-1, Flamebait)

Anonymous Coward | more than 5 years ago | (#28004621)

who gives a fuck? linux is for dick sucking faggots who deserve to die.

Science is a parasite (1)

gustgr (695173) | more than 5 years ago | (#28004633)

The same way the DoD paid for the Cray supercomputers, gamers are paying for the GPUs. Science dropped by and said thanks.

Re:Science is a parasite (1)

Bigjeff5 (1143585) | more than 5 years ago | (#28005087)

Science prefers you use the term "symbiont".

Parasite has a negative connotation.

Hooray ... Fortran again (2, Informative)

thoughtspace (1444717) | more than 5 years ago | (#28004689)

For those out of work since the millennium bug, at long last FORTRAN is back: http://www.nvidia.com/object/cuda_what_is.html [nvidia.com]

Can't wait for the APL support. Reorganising my keyboard keys in anticipation.

Re:Hooray ... Fortran again (1, Funny)

Anonymous Coward | more than 5 years ago | (#28004721)

Back? You've never been in a Physics department, have you? Fortran was never gone.

h.264 encoding (5, Informative)

BikeHelmet (1437881) | more than 5 years ago | (#28004787)

h.264 encoding didn't improve with more shaders for some of the results (like PowerDirector 7) because of the law of diminishing returns.

I remember reading about x264 when quad-cores were becoming common. It mentioned that if quality is of the utmost importance, you should still encode on a single core. It splits squares of pixels between the cores; where those squares connect, there can be very minor artifacts. It smooths these artifacts out with a small amount of extra data and post-processing; the end result is a file barely 1-2% bigger than one encoded on a single core, but encoded roughly 4x faster.

Now, if we're talking about 32 cores, or 64, or 128, would the size difference be bigger than 1-2%? Probably. After a certain point, it would almost certainly not be worth it.

This is supported by Badaboom's results, where the higher resolution videos (with more encoded squares) seem to make use of more shaders when encoding, while most of the lower resolution vids do not. (indicating that some shaders may be lying idle)

What I'm curious about is: could the 9800GTX encode two videos at once, while the 9600GT could only manage one? ;)

I'm also curious why the 320x240 video encoded so quickly - that could come down to memory bandwidth, shader clock speed, or some other factor important in h.264 encoding.

Take it with a grain of salt; I'm not an encoder engineer; just regurgitating what I once read, hopefully accurately. ;)
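Quick sanity check on the shaders-lying-idle guess: with 16x16 macroblocks (the H.264 block size), even a 320x240 frame has hundreds of independent blocks, so a raw shortage of blocks probably isn't what starves the shaders at low resolutions. Rough arithmetic in Python (the macroblock size is an assumption about how the encoder slices the frame):

```python
# Rough model: an encoder that parallelizes per 16x16 macroblock can keep
# at most (macroblock count) shaders busy within one frame.
def macroblocks(width, height, block=16):
    """Number of 16x16 macroblocks in one frame (rounding up at the edges)."""
    return ((width + block - 1) // block) * ((height + block - 1) // block)

for w, h in [(320, 240), (720, 480), (1280, 720), (1920, 1080)]:
    print(f"{w}x{h}: {macroblocks(w, h)} macroblocks")

# A 9600GT has 64 shaders and a 9800GTX has 128; even 320x240 yields
# 300 macroblocks, so idle shaders at low resolutions are more likely
# a scheduling/batching effect than a raw shortage of blocks.
```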

Re:h.264 encoding (0)

Anonymous Coward | more than 5 years ago | (#28005201)

That makes no sense. Why don't they start the encode on each processor at 0%, 25%, 50%, and 75% of the movie?

Re:h.264 encoding (0)

Anonymous Coward | more than 5 years ago | (#28005595)

Disk read/write would likely become a bottleneck there. Plus you would have to recombine the files at the end, using more disk time.

Re:h.264 encoding (4, Informative)

SpazmodeusG (1334705) | more than 5 years ago | (#28005685)

Encoding from multiple different keyframes works when you can seek to any part of the input video, but it doesn't help with realtime encoding.

If I'm encoding a signal from TV in realtime, I have to start encoding at 0% and work onwards. The only way to parallelize it is to split the individual frames up into boxes (as Badaboom does).

Re:h.264 encoding (2, Informative)

SpazmodeusG (1334705) | more than 5 years ago | (#28005521)

Data compression is an inherently serial operation. Parts of it can be done in parallel, but in general the way you compress the next bit is based on the patterns observed earlier.

Say you wanted one core to start encoding at 0% and another at 50% of the way into the movie. The core starting at 50% has to begin compressing without any of the patterns learned in the 0-50% range. In the example you gave, one core encodes half the screen and the other core encodes the other half. If they are running in parallel, the second core can't use the patterns the first has learned unless it waits for the first core to finish its current frame (thereby making it non-parallel).

So you have a tradeoff: run everything serially, or accept that you'll miss a few observed patterns here and there and run more in parallel.
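You can see this "learned patterns" cost with any general-purpose compressor; here zlib stands in for a codec's context modeling (an analogy only - x264 doesn't actually slice streams this way):

```python
import zlib

# Repetitive input, so a shared dictionary helps a lot.
data = b"the quick brown fox jumps over the lazy dog. " * 400

# Serial: one compressor sees the whole stream.
whole = len(zlib.compress(data))

# Parallel-style: four independent chunks, each restarting with an
# empty dictionary (no shared "learned patterns").
n = len(data) // 4
chunks = [data[i * n:(i + 1) * n] for i in range(4)]
split = sum(len(zlib.compress(c)) for c in chunks)

print(f"single stream: {whole} bytes, 4 independent chunks: {split} bytes")
# The chunked total comes out larger, because every chunk pays the
# dictionary warm-up cost again -- the overhead described above.
```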

Re:h.264 encoding (1)

geekboy642 (799087) | more than 5 years ago | (#28005827)

I know almost nothing about data compression beyond the readme for pkzip. Are there really enough learned patterns in a video stream that compressing in independent chunks would make a >1% difference in file size? As far as I can reason it out, independent chunks would act as if you'd just inserted an extra keyframe at the split points.

Re:h.264 encoding (0)

Anonymous Coward | more than 5 years ago | (#28005991)

You are correct - it's basically key-frame boundaries that matter with conventional video compression.

Re:h.264 encoding (1)

midicase (902333) | more than 5 years ago | (#28006515)

You are thinking in terms of data from start to finish, but many types of video encoding/compression operate on the frame or relative to a frame.

One can store an entire frame in data, and the next bit of data would be the delta between the next and previous frame. Every so often the cycle restarts so that systems can cope with streaming data (you do need at least one full frame as a reference).

You can chop up the frame into many individual blocks. Do more of the same as above but on portions of the screen data.

There are many, many methods of handling video data. I'm working in the industry now, but still have yet to bend my mind around many of them, but we do have engineers whose sole job is to deal with this.

Re:h.264 encoding (2, Informative)

electrosoccertux (874415) | more than 5 years ago | (#28006905)

Data compression is an inherently serial operation. Parts of it can be done in parallel, but in general the way you compress the next bit is based on the patterns observed earlier.

Say you wanted one core to start encoding at 0% and another at 50% of the way into the movie. The core starting at 50% has to begin compressing without any of the patterns learned in the 0-50% range. In the example you gave, one core encodes half the screen and the other core encodes the other half. If they are running in parallel, the second core can't use the patterns the first has learned unless it waits for the first core to finish its current frame (thereby making it non-parallel).

So you have a tradeoff: run everything serially, or accept that you'll miss a few observed patterns here and there and run more in parallel.

For usability (seeking through a video), no codec works from patterns learned across the whole file. The memory requirements would be astronomical (you'd have to keep the entire file in RAM; good luck doing that with a BluRay).

IIRC, the furthest back any codec looks is something like 24 frames.

Re:h.264 encoding (2, Informative)

Anonymous Coward | more than 5 years ago | (#28006971)

For video encoding there is a ton of work that can be done in parallel. You can compute all of the DCTs for all of the macroblocks in parallel, and you can run the motion search for every block in parallel.

Re:h.264 encoding (2, Informative)

adolf (21054) | more than 5 years ago | (#28007391)

This is one of the most inane thought patterns I have witnessed this week.

The reason is simple: Fine, so you've split a process into chunks and distributed them across two or more cores. But it's not exactly like those cores are working in a vacuum; they all use the same RAM.

As another reply has stated, codecs don't work quite how you describe -- they don't use the entire media as a reference, but at most a couple of dozen frames. But even if such mythological technology were really in use: There's no qualitative reason why something learned by process A cannot be shared with process B, and vice-versa. Therefore, the two processes can encode totally different segments of a given video, share what they've learned, and make similar and consistent tradeoffs.

After that, you join the parts on an existing keyframe (which doesn't have to be exactly at 50% or whatever the ideal number happens to be), and call it a day.
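Snapping each worker's cut point to the nearest existing keyframe, as described above, is cheap once you have a keyframe index. A toy sketch (the keyframe positions below are made up for illustration):

```python
import bisect

def split_points(keyframes, total_frames, workers):
    """Snap each worker's ideal start frame to the nearest existing keyframe."""
    points = []
    for w in range(1, workers):
        ideal = total_frames * w // workers
        i = bisect.bisect_left(keyframes, ideal)
        # Pick whichever neighbouring keyframe is closer to the ideal cut.
        candidates = keyframes[max(i - 1, 0):i + 1]
        points.append(min(candidates, key=lambda k: abs(k - ideal)))
    return points

# Hypothetical keyframe positions in a 1000-frame clip.
keyframes = [0, 240, 470, 510, 760, 990]
print(split_points(keyframes, 1000, 2))  # -> [510]
print(split_points(keyframes, 1000, 4))  # -> [240, 510, 760]
```

Each worker then encodes its keyframe-aligned segment independently, and the outputs concatenate without re-encoding the seams.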

Well, it works awesome if your problem is parellel (5, Interesting)

Muerte23 (178626) | more than 5 years ago | (#28004925)

The Tesla C1060 is a video card with no video output (strictly for processing) that has something like 240 processor cores and 4 GB of GDDR3 RAM. Just doing math on large arrays (1k x 1k), I get a performance boost of about a factor of forty over a dual-core 3.0 GHz Xeon.

The CUDA toolkit has FFT functionality built in as well (cuFFT), so it's excellent for signal processing. The SDK and programming paradigm are super easy to learn. I only know C (and not C++) and I can't even make a proper GUI, but I can make my array functions run massively in parallel.

The trick is to minimize memory movement between the CPU and the GPU, because that kills performance. Only the newest cards support "simultaneous copy and execute," where one thread can be reading new data onto the card, another can be processing, and a third can be moving the results off the card.

One way the video people could maybe speed up their processing (disclaimer: I don't know anything about this) is to do a quick sweep for keyframes, then send the video segments between keyframes to individual processor cores. So instead of each core getting a piece of the frame, maybe each core gets a piece of the movie.

The days of the math coprocessor card have returned!
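The "simultaneous copy and execute" win is really just software pipelining. A toy timing model (assuming equal stage times, which real kernels won't have) shows why hiding the transfers matters:

```python
def serial_time(batches, copy_in, compute, copy_out):
    # No overlap: every batch pays all three stages back to back.
    return batches * (copy_in + compute + copy_out)

def pipelined_time(batches, copy_in, compute, copy_out):
    # Ideal 3-stage pipeline: throughput is limited by the slowest stage,
    # plus the fill/drain cost of the other two stages.
    bottleneck = max(copy_in, compute, copy_out)
    return (copy_in + compute + copy_out) + (batches - 1) * bottleneck

s = serial_time(100, 1.0, 1.0, 1.0)
p = pipelined_time(100, 1.0, 1.0, 1.0)
print(f"serial: {s:.0f}, pipelined: {p:.0f}  (~{s/p:.1f}x)")
# -> serial: 300, pipelined: 102  (~2.9x)
```

With three equal stages the ceiling is close to 3x; in practice the transfer stages are rarely as long as the compute stage, so the realistic win is smaller but still significant.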

Re:Well, it works awesome if your problem is parel (2, Interesting)

Anonymous Coward | more than 5 years ago | (#28005163)

We've run some signal processing on a Tesla card, and get roughly 500x improvement over (somewhat poorly written) code for a Core 2 Duo.
~8 hr on a Core 2 Duo
~1.5 hr on Core i7
seconds on Tesla

Re:Well, it works awesome if your problem is parel (2, Informative)

Muerte23 (178626) | more than 5 years ago | (#28005299)

Well, I didn't say my code was *well* written. Apparently there's a lot of trickery in copying global memory into shared memory to speed up operations. Shared memory takes (IIRC) one clock cycle to read or write, while global GPU memory takes six hundred cycles. And there's all this whatnot and nonsense about aligning your threads with memory locations (coalescing) that I don't even bother with.

Re:Well, it works awesome if your problem is parel (3, Informative)

parlancex (1322105) | more than 5 years ago | (#28006919)

Actually, what you are referring to is simultaneous DMA and kernel execution, and it is available in every card with compute capability 1.1, which is every card but the very first G80 series (8800 GTX and 8800 GTS). The GPU executes the DMA itself, pulling from memory that has been allocated as aligned and page-locked, and this can be overlapped with kernel execution; it doesn't have anything to do with GPU or CPU threads. Transfers from non-page-locked memory are always synchronous and as such can't be overlapped with kernel execution. But, generally, yes, host -> device memory bandwidth is usually the bottleneck for most CUDA applications. Applications that can perform a large amount of processing on the same data, if that data fits in device memory all at once, are able to mitigate this, but that doesn't usually include supercomputing or general coprocessor-esque applications (transcoding).

Re:Well, it works awesome if your problem is parel (2, Interesting)

Belisar (473474) | more than 5 years ago | (#28007707)

I assume that's what the parent meant.

As an addendum, the newest CUDA 2.2 (with chips of the newest generation, i.e. GT200) actually has support for reading directly from (page-locked) host memory inside GPU kernels... something I believe ATI cards have allowed for a while.

OpenCL? (1)

Midnight Thunder (17205) | more than 5 years ago | (#28005031)

I thought Nvidia indicated they were going to support OpenCL; or are they simply planning to support multiple technologies?

Re:OpenCL? (2)

ChunderDownunder (709234) | more than 5 years ago | (#28005627)

Both, I'd guess. If someone releases some killer software for OpenCL, they'd be mad not to support it; Apple is pushing it for OS X.

On the other hand, if they do a deal with someone to write CUDA stuff, it's lock-in that you must buy an nvidia card.

Either way they win...

Re:OpenCL? (1)

Trepidity (597) | more than 5 years ago | (#28006257)

They also have control over adding features to CUDA relatively rapidly as hardware gains new capabilities, which they can't easily do with OpenCL.

Re:OpenCL? (1)

3.1415926535 (243140) | more than 5 years ago | (#28006439)

CUDA and OpenCL are not exclusive; they're at different layers in the driver stack. If you look at the NVIDIA slides, you'll see that C, OpenCL, DX11 Compute, and Fortran are all just frontend languages that compile to/run on top of CUDA.

Re:OpenCL? (1)

cptnapalm (120276) | more than 5 years ago | (#28006371)

I remember reading the OpenCL announcement (I like to pretend that I know what I'm talking about in programming matters) and Nvidia did indeed say that they would be supporting it.

What About Multiple GPU Cards in 1 Host? (2, Insightful)

Doc Ruby (173196) | more than 5 years ago | (#28006457)

Those benchmarks show that even older ($120-140) Nvidia GPU cards can really speed up some processing tasks, especially transcoding video. But what I think is even more exciting than just the acceleration from offloading the CPU to the GPU is using multiple GPU cards in a single host PC. Stuff a $1000 PC with $1120 in GPUs (like eight $140 Nvidia cards) and that's 1024 parallel cores, anywhere from 16x to 56x the performance at just over double the price. PCIe should move the data fast enough to feed the cards in parallel. I bet that eight $1000 cards stuffed into a $1000 PC would be something like 200x to 4000x for only 9x the price.

So what I want to see is benchmarks for whole render farms. I want to see HD video transcoded into H.264 and other formats simultaneously on the fly, in realtime, with true fast-forward, in multiple independent streams from the same master source. This stuff is possible now on a reasonable budget.
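The arithmetic checks out, for what it's worth (whether drivers and PCIe contention would let eight cards scale linearly is a separate question):

```python
# Back-of-envelope check of the parent's numbers.
cards, card_price, cores_per_card = 8, 140, 128  # e.g. eight 9600GT-class cards
pc_price = 1000

gpu_cost = cards * card_price
total_cores = cards * cores_per_card
price_ratio = (pc_price + gpu_cost) / pc_price

print(f"GPU spend: ${gpu_cost}, cores: {total_cores}, "
      f"total cost: {price_ratio:.2f}x a bare ${pc_price} PC")
# -> GPU spend: $1120, cores: 1024, total cost: 2.12x a bare $1000 PC
```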

Re:What About Multiple GPU Cards in 1 Host? (1)

adolf (21054) | more than 5 years ago | (#28007415)

Cool. Sign me up.

Just one problem: Where can I find a $1000 PC with 8 available PCI Express x16 slots? The best machine I have at the moment only has three, and 8 won't even fit into a normal ATX case.
