
GPU Gems 3

samzenpus posted more than 6 years ago | from the read-all-about-it dept.

Book Reviews 63

Martin Ecker writes "Weighing in at fifty pages short of a thousand, NVIDIA has recently released the third installment of its GPU Gems series, aptly titled "GPU Gems 3" and published by Addison-Wesley. Like the two books before it, GPU Gems 3 is a collection of articles by numerous authors from the game development industry, the offline rendering industry, academia, and of course NVIDIA. The book's 41 chapters, grouped into six parts, discuss a wide range of topics, all dealing with recent advances in using graphics processing units (GPUs, for short) either to render highly realistic images in real time or to do high-performance parallel computation, an area known as GPGPU (short for general-purpose computation on GPUs). In this latest installment of the series, many of the chapters focus on using new hardware features of Direct3D 10-level hardware, such as NVIDIA's GeForce 8 series, to get more realistic-looking results or higher performance." Read on for the rest of Martin's review.

The book is aimed at intermediate and advanced graphics programmers who have a solid background in computer graphics algorithms. The reader is also expected to be familiar with commonly used real-time shading languages, in particular HLSL, which is used in most of the chapters. Familiarity with graphics APIs, such as Direct3D and OpenGL, is also required to get the most out of this book.

The first part of the book is about geometry, with the first chapter diving right into generating complex procedural terrains on the GPU. This interesting chapter explains the techniques behind a recent NVIDIA demo that shows very nice, 3-dimensional, procedurally generated terrain using layered octaves of 3-dimensional noise. An interesting contribution of this chapter is how the authors texture the terrain, avoiding the typical ugly texture stretching that previous techniques exhibit. This is followed by a chapter on rendering a large number of animated characters using new Direct3D 10 features, in particular the powerful geometry instancing that is now available. The author suggests doing palette skinning by storing bone matrices in animation textures instead of the traditional way, where they are stored in shader constant registers. The next chapter is in a similar vein, but uses blend shapes (aka morph targets) instead of skinning to animate characters. In particular, the main focus is again on how to use Direct3D 10 features to accelerate blend shapes on the GPU. Other chapters in this part of the book are on rendering and animating trees, visualizing metaballs (also useful for rendering fluids), and adaptive mesh refinement in a vertex shader.
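To give a flavor of the terrain chapter's core idea, here is a minimal CPU-side C++ sketch of a density function built from layered octaves of 3D noise. In the book this work happens on the GPU with proper Perlin-style noise; the hash-based noise3 below is just a self-contained stand-in, and the octave count and base plane are illustrative values, not the chapter's.

    #include <cmath>

    // Cheap stand-in for a real 3D noise function (e.g. Perlin noise);
    // it only exists to make this sketch self-contained.
    static float noise3(float x, float y, float z)
    {
        float n = std::sin(x * 12.9898f + y * 78.233f + z * 37.719f) * 43758.5453f;
        return 2.0f * (n - std::floor(n)) - 1.0f; // roughly uniform in [-1, 1]
    }

    // Terrain density: positive below the surface, negative above. The
    // isosurface density == 0 is then polygonized (the chapter does this
    // with marching cubes on the GPU).
    float terrainDensity(float x, float y, float z)
    {
        float density = -y; // base ground plane
        float frequency = 1.0f;
        float amplitude = 1.0f;
        for (int octave = 0; octave < 5; ++octave)
        {
            // Each octave doubles the frequency and halves the amplitude.
            density += amplitude * noise3(x * frequency, y * frequency, z * frequency);
            frequency *= 2.0f;
            amplitude *= 0.5f;
        }
        return density;
    }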

Part two of the book deals with light and shadows. For me personally, this is one of the most exciting parts of the book, with very practical techniques that we are going to see applied in video games fairly soon. The first chapter is on summed-area variance shadow maps, an extension to the popular variance shadow maps algorithm that provides nice soft shadows without aliasing artifacts. The next chapter is on GPU-based relighting, which is mostly useful for fast previewing in offline rendering. Then we move on to a nice chapter on parallel-split shadow maps, which are a way of doing dynamic, large-scale environment shadows by splitting the view frustum into different parts and using a separate shadow map for each of them. Other chapters in this part of the book are on improved shadow volumes, high-quality ambient occlusion (an improvement of a technique previously presented in GPU Gems 2), and volumetric light scattering.
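For readers curious what "splitting the view frustum" amounts to in code, here is a small C++ sketch of the commonly cited practical split scheme, which blends a logarithmic and a uniform distribution of split planes. This is the general approach rather than necessarily the chapter's exact formula.

    #include <cmath>
    #include <vector>

    // Split [nearZ, farZ] into numSplits slices; lambda blends a logarithmic
    // scheme (good shadow detail up close) with a uniform one.
    std::vector<float> computeSplitDistances(float nearZ, float farZ,
                                             int numSplits, float lambda)
    {
        std::vector<float> splits(numSplits + 1);
        for (int i = 0; i <= numSplits; ++i)
        {
            float t = static_cast<float>(i) / numSplits;
            float logSplit = nearZ * std::pow(farZ / nearZ, t);
            float uniSplit = nearZ + (farZ - nearZ) * t;
            splits[i] = lambda * logSplit + (1.0f - lambda) * uniSplit;
        }
        return splits; // one shadow map is then rendered per slice [i, i+1]
    }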

The third part of the book is on rendering techniques, and it starts with a very interesting chapter on rendering realistic skin in real time. This chapter, at more than fifty pages, is one of the longest in the book, but it definitely deserves the space. I have never seen such realistic-looking skin rendered in real time before. The result is really astonishing, and the authors go into detail on all the various techniques and tricks employed to achieve it. Simply put, they take a diffuse map and apply multiple Gaussian blurs of varying kernel sizes to it. These blurred images are then linearly combined using certain weights to approximate a so-called diffusion profile, which is used to simulate subsurface scattering. Of course, the devil is in the details, and the technique is a bit more complicated than what I've described here. Some other chapters in this part of the book are on capturing animated facial textures and storing them efficiently using principal component analysis (PCA), as used in recent EA Sports games, animating and shading vegetation in the upcoming game Crysis, and a way of doing relief mapping without the artifacts of previous methods.
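As a rough illustration of the sum-of-Gaussians idea, here is a C++ sketch that combines pre-blurred irradiance samples with per-channel weights. The weights below are made up for illustration; the chapter fits its own set (and uses more blur levels) to match a measured skin diffusion profile.

    struct Color { float r, g, b; };

    // blurred[i] is the texel from the i-th Gaussian-blurred irradiance
    // map, with blur width increasing with i.
    Color shadeSkin(const Color blurred[3])
    {
        // Wider blurs are weighted toward red, which is what gives skin
        // its soft reddish translucency. Illustrative weights only.
        static const Color weights[3] = {
            {0.55f, 0.70f, 0.85f}, // narrow blur: most of the blue/green response
            {0.30f, 0.20f, 0.10f}, // medium blur
            {0.15f, 0.10f, 0.05f}, // wide blur: mostly red
        };
        Color result = {0.0f, 0.0f, 0.0f};
        for (int i = 0; i < 3; ++i)
        {
            result.r += weights[i].r * blurred[i].r;
            result.g += weights[i].g * blurred[i].g;
            result.b += weights[i].b * blurred[i].b;
        }
        return result;
    }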

Part four starts out with a chapter on true impostors, i.e. billboards generated by raytracing through a volumetric object on the GPU. It's fairly interesting, but I doubt we'll see it in video games anytime soon because the cost of this technique seems fairly high. Another chapter is on rendering large particle systems to lower-resolution, off-screen buffers and then recombining them with the framebuffer as a post process. This technique allows rendering very fill-rate-intensive particle systems with good performance. Other chapters include an appeal to make sure you do your lighting calculations in linear space and are careful about when and where gamma correction needs to be applied, followed by some chapters on post-processing effects, in particular motion blur and depth of field, and a chapter co-authored by Jim Blinn himself on rendering vector fonts in high quality via pixel shaders.
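The linear-space appeal boils down to a few lines of math. Here is a minimal C++ sketch, using a plain 2.2 power curve as a stand-in for the exact sRGB transfer function:

    #include <cmath>

    float toLinear(float c) { return std::pow(c, 2.2f); }        // decode gamma
    float toGamma(float c)  { return std::pow(c, 1.0f / 2.2f); } // encode for display

    // All lighting math happens in linear space; encoding happens exactly
    // once, at the very end of the pipeline.
    float shade(float gammaEncodedTexel, float lightIntensity)
    {
        float linear = toLinear(gammaEncodedTexel) * lightIntensity;
        return toGamma(linear);
    }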

With part five, dealing with physics simulation on the GPU, we enter GPGPU territory. While a lot of the techniques in this and the following part of the book are highly interesting and innovative, I doubt we'll be seeing them applied much in video games in the next year or two, simply because they use up a lot of GPU processing power and GPU memory that we game developers would rather spend on fancy graphics. The first chapter is on doing rigid body simulation on the GPU. The author uses spherical particles to represent rigid bodies, which greatly simplifies collision detection even between the most complex shapes. The subsequent chapter is on simulating and rendering volumetric fluids entirely on the GPU. The authors apply fluid simulation to create realistic smoke, fire, and water effects. The presented technique is based on running a fluid simulator on a voxelized 3D volume stored in 3D textures. Solid objects that interact with the fluid are also voxelized on the fly on the GPU. To render the fluid, a ray-marching algorithm is used. The remaining chapters of this part of the book discuss N-body simulation, broad-phase collision detection, and convex collision detection with Lemke's algorithm for the linear complementarity problem. Many chapters in this part of the book use NVIDIA's new language for GPGPU called CUDA, and the reader is expected to be familiar with it. CUDA is both a runtime system and a language based on C that eliminates the need for in-depth knowledge of a graphics API in order to implement GPGPU algorithms.
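The appeal of the spherical-particle representation is easy to see in code: once every body is a set of spheres, collision detection between arbitrarily complex shapes reduces to the sphere-sphere test. A minimal C++ sketch of that test (the chapter, of course, runs this massively in parallel on the GPU):

    #include <cmath>

    struct Particle { float x, y, z, radius; };

    // Returns true if two particles overlap and reports the penetration
    // depth; a repulsive force is then applied along the center-to-center
    // axis.
    bool collide(const Particle& a, const Particle& b, float& depth)
    {
        float dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
        float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
        depth = (a.radius + b.radius) - dist;
        return depth > 0.0f;
    }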

The final part of the book is on GPU computing with chapters that show how to apply the incredible parallel computing power of modern GPUs to classic computation problems that are not directly related to either computer graphics or physics. One chapter demonstrates how to search for virus signatures on the GPU, effectively turning your graphics card into an antivirus scanner. Another chapter shows how to do AES encryption and decryption on the GPU, which is now possible thanks to the new generation of GPUs supporting integer operations in addition to floating-point operations. Other chapters deal with generating random numbers, computing the Gaussian, and using the geometry shader introduced with Direct3D 10 to implement computer vision algorithms on the GPU that previously were not possible with vertex and pixel shaders only, such as histogram building and corner detection.

One of the features that distinguishes the GPU Gems series from other graphics books has been kept for GPU Gems 3: the high quality and large number of images and diagrams. All figures in the book are in color, and there are plenty of them. The book also comes with a DVD that has the sample source code for most of the techniques discussed in the book. A lot of these programs require Direct3D 10 hardware (and, as a consequence, Windows Vista) to run. However, for most of them, demo videos are also available, so you can see what a technique looks like without having the latest hardware or operating system. Furthermore, the book's website offers a visual table of contents and three sample chapters to download in PDF format.

As with the previous two GPU Gems books, most of the chapters in this book are fairly advanced and ahead of their time. A lot of the presented techniques are not yet practical for video games on current-generation GPUs, simply because they use up all the computation power and/or memory those GPUs have to offer. However, a lot of techniques from the previous two books are now commonly used, and we can expect the same to be the case for many of the techniques discussed in this book. As such, it is required reading for any serious professional working in the real-time computer graphics industry.

Martin has been involved in real-time graphics programming for more than 10 years and works as a professional game developer for High Moon Studios in sunny California.

You can purchase GPU Gems 3 from amazon.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.

63 comments

Oh Man... (3, Funny)

jellomizer (103300) | more than 6 years ago | (#21014335)

I was hoping it was a new version of GEM Desktop. I haven't upgraded my Amstrad PC 512 C in a while.

Re:Oh Man... (0)

Anonymous Coward | more than 6 years ago | (#21015231)

I have an Atari ST, you insensitive clod!

Re:Oh Man... (0)

Anonymous Coward | more than 6 years ago | (#21021007)

I JUST finished cleaning up my Mega STE yesterday.

Re:Oh Man... (1)

StikyPad (445176) | more than 6 years ago | (#21020913)

I was hoping it was a new version of the game where you have to cluster together three or more gems to eliminate them and earn points.

Re:Oh Man... (0)

Anonymous Coward | more than 6 years ago | (#21025895)

I haven't upgraded my Amstrad PC 512 C in a while.

You can buy a MacBook; it offers exactly the same performance and you can easily blend in with the other 95% of spoilt hip high-schoolers.

disappointed (1)

User 956 (568564) | more than 6 years ago | (#21014383)

Weighing in at fifty pages short of a thousand, NVIDIA has recently released the third installment of its GPU Gems series

It's a book? I was hoping for a GPU that's optimized for playing Bejeweled.

JESUS LOVES COCK (-1)

Anonymous Coward | more than 6 years ago | (#21014421)

YES HE DOES!

Maybe nVidia's resources would be better spent (-1)

Anonymous Coward | more than 6 years ago | (#21014485)

you know, making Linux drivers that aren't completely instable?

Re:Maybe nVidia's resources would be better spent (1)

PitaBred (632671) | more than 6 years ago | (#21015185)

The word is "unstable", and there are lots of people who aren't programmers, but can serve to support programmers by, I don't know, making documentation and collating information into something like, say, a book?

Re:Maybe nVidia's resources would be better spent (0)

Anonymous Coward | more than 6 years ago | (#21015921)

Maybe you should invest in a dictionary [reference.com].


instable /ɪnˈsteɪbəl/
–adjective
not stable; unstable.

OpenGL please (3, Insightful)

2ms (232331) | more than 6 years ago | (#21014535)

Dammit, I hate to see all this DirectX 10 emphasis. It's games only. I am a scientist and CAD user. Right now there is no laptop, let alone "consumer" card, in the world that can handle even the kind of CAD work a lot of people have to do. OpenGL was created for science. DirectX was a copy of a subset of it applicable only to games. And now all the graphics cards are focusing on the DirectX and neglecting OpenGL. Arg! This copy of OpenGL is short-circuiting advancement in the very thing 3D graphics were originally, you could say, invented for -- the thing they actually are useful for beyond plain entertainment. These cards cost hundreds of dollars but they can't handle an assembly with 100 parts in a CAD model simply because they barely have any OpenGL hardware in them. A car, airplane, etc., has millions of parts.

Re:OpenGL please (3, Insightful)

Xonstantine (947614) | more than 6 years ago | (#21014751)

And now all the graphics cards are focusing on the DirectX and neglecting OpenGL
Because that's where the money is. NVidia isn't a philanthropic support organization (and, it goes without saying, neither is Microsoft).

These cards cost hundreds of dollars but they can't handle an assembly with 100 parts in a CAD model simply because they barely have any OpenGL hardware in them.
Because there's very little money there.

Re:OpenGL please (2, Insightful)

TemporalBeing (803363) | more than 6 years ago | (#21014839)

These cards cost hundreds of dollars but they can't handle an assembly with 100 parts in a CAD model simply because they barely have any OpenGL hardware in them.
Because there's very little money there.
There is?

Last I was aware, pretty much all non-Microsoft-specific functionality for graphics was using OpenGL now, and the Linux gaming market - which uses OpenGL - is a growing market too. Additionally, the CAD (AutoCAD, etc.) market is also a very ripe market for graphics, and a lucrative market too. (Not as big a market in some respects, but certainly as lucrative, if not more so - AutoCAD doesn't have to re-invent itself for every release and cost millions to do so like a game does.)

Re:OpenGL please (1)

Xonstantine (947614) | more than 6 years ago | (#21015109)

Sure. We're talking about scale here. Performance increases on graphics cards are being pushed by the high-end gaming market, not by guys like you. The high-end gaming market has folks that buy two SLI-enabled 8800 GTXs one year (at $1100), and turn around and do the same thing on next year's cards. And there are a whole lot more of them than there are of CAD guys buying Quadro FX 5500s at $1400 a pop every 3 years. The bleeding-edge technology is naturally going to seek the money. Eventually it will percolate down to less profitable niche markets. Five years ago, you guys were the pointy tip of the sword when it came to the market with respect to money and need. That's no longer the case.

Re:OpenGL please (1, Funny)

Anonymous Coward | more than 6 years ago | (#21015369)

But who's going to support it, kid? You? Writing graphics drivers ain't like dusting crops, boy. Without precise calculations you could overflow right through a buffer, or divide too close to a zero, and that'd end your app real quick, wouldn't it?

Re:OpenGL please (0)

Anonymous Coward | more than 6 years ago | (#21016633)

The boy/kid is right, old man, the money is made from the mass market consumer products these days. Deal with it; no need to get rude and pretend expertise on writing graphics drivers.

Re:OpenGL please (0)

Anonymous Coward | more than 6 years ago | (#21021529)

Whoosh is the sound a
Crop duster makes when it flies
Past your moody head

Re:OpenGL please (0)

Anonymous Coward | more than 6 years ago | (#21022371)

Whooosh is the sound of a wanna-be graphics driver not-developer trying to outsmart writing-graphics-driver-as-my-day-job.

Re:OpenGL please (0)

Anonymous Coward | more than 6 years ago | (#21022401)

Of course it isn't, go shoot somebody before they shoot you / dust some crops. Whoa! Divide too CLOSE to a zero, my my how exciting! Sounds like Microsoft Excel on a Pentium! Precise calculations! Overflows right through buffers! Oh my!

Errrr.. what the hell has this to do with the *fact* that money is made in the mass market these days?

Re:OpenGL please (0)

Anonymous Coward | more than 6 years ago | (#21014789)

Shoot me for being heretical here on Slashdot, but couldn't these apps use a dual-rendering pipeline and support both? I mean, I'm no fan of DX as a production technology, but it would seem to be spitting in the wind to continue supporting only OpenGL in CAD programs in the face of so many DirectX cards being on the market, no?

Re:OpenGL please (1, Insightful)

Anonymous Coward | more than 6 years ago | (#21015633)

"DirectX Cards"

There is no such thing. DirectX and OpenGL support the HARDWARE, not the other way around.

Re:OpenGL please (0)

Anonymous Coward | more than 6 years ago | (#21018557)

The hardware is identical - it's just the device drivers that are different. The reason we have two APIs is political.

10 years ago, the high-end workstation/graphics supercomputer vendor SGI had its own proprietary 3D API - IRIS GL. At this time (the late 1990s) other workstation vendors had their own APIs. Third-party graphics accelerator board manufacturers persuaded SGI to work on a vendor-neutral API - so OpenGL was formed, along with a committee to oversee the creation of new specifications, the ARB. At the same time Microsoft introduced Windows 95 and needed an API for consumer applications. So they bought out a graphics API company and created DirectX. DirectX would probably have become the exclusive API for consumer systems if id Software (makers of Quake) hadn't insisted on using OpenGL as their target API for graphics acceleration. SGI assisted this by providing a mini-GL driver that could be installed on PCs - it resorted to a software implementation that used MMX extensions for acceleration. Unfortunately, SGI ended up selling their patents to Microsoft.

Microsoft's argument for having DirectX is that having to go through a committee slows down their development process, so they prefer to have their own API for the games market (desktops, consoles). At the same time, chip vendors have to provide OpenGL device drivers to keep a few game developers happy. Also, a good few industries don't really want to be told where to go by Microsoft, and so continue to use OpenGL.
At the same time, Microsoft doesn't really want scientific/professional users using DirectX, as they too would slow down the evolution of the API. So we're stuck with the status quo: two different APIs for the same hardware.

Re:OpenGL please (1)

Glock27 (446276) | more than 6 years ago | (#21023471)

Microsoft's argument for having DirectX is that having to go through a committee slows down their development process, so they prefer to have their own API for the games market (desktops, consoles).

This was always a specious argument. OpenGL includes a vendor extension mechanism so vendors DON'T have to wait for a new 'official' version to roll out new functionality.

Microsoft simply wanted (as it always does) to control the APIs and promote Microsoft platform lock-in. It has been trying to push DirectX into 'serious' apps ever since, without much success, fortunately.

The good news is that a few game developers, Apple, all Unix vendors and the Open Source community are quite sufficient to keep OpenGL a vibrant, cross-platform standard. NVIDIA is good about exposing all the new hardware functionality through vendor extensions as mentioned above, and are active on the OpenGL ARB.

Re:OpenGL please (1)

imasu (1008081) | more than 6 years ago | (#21014949)

Well, like it or not, real-time high-framerate graphics are the primary use case for these cards. And immediate-mode APIs are turning out to be too bus-heavy for that use case. Whether you like Microsoft or not, the programming model used in DX is an attempt to mitigate this. The proper response is to lobby the OpenGL ARB to add API features more amenable to modern graphics processing. They are making large steps with OpenGL 2.0 in this regard. Given their history, I'm willing to bet that NVIDIA would like to support OpenGL as well as they can. But OpenGL needs to keep up with the times.

Re:OpenGL please (1)

mdwh2 (535323) | more than 6 years ago | (#21018703)

And immediate-mode APIs are turning out to be too bus-heavy for that use case. Whether you like Microsoft or not, the programming model used in DX is an attempt to mitigate this. The proper response is to lobby the OpenGL ARB to add API features more amenable to modern graphics processing.

What exactly are you referring to by "features more amenable to modern graphics processing"?

OpenGL is hardly restricted to immediate mode - no one uses that if they care about performance - and things like vertex buffer objects have been around for years. Are you referring to something new in DirectX that is yet to appear in OpenGL?

Re:OpenGL please (1)

Kjella (173770) | more than 6 years ago | (#21015023)

How many are "a lot"? A lot compared to the millions and millions playing MMOs or FPSs or RTSs or anything else that benefits from a flashy graphics card (yeah, yeah, gameplay is important too) and are often paying several hundred bucks each? Let me think back to the university - how many there would use heavy CAD? Maybe 200 people out of 30,000, and I'm probably being kind. How many of those would pay the tens of thousands of dollars needed from each to compete with the gamers? Sorry, but there's just no business case for you; you've been reduced to a piggybacker on DirectX, so enjoy the ride. Also, it's funny that you think realistic graphics have no usefulness beyond plain entertainment; I think you may suffer from some slight hubris...

Re:OpenGL please (2, Insightful)

mdwh2 (535323) | more than 6 years ago | (#21018731)

Let me think back to the university - how many there would use heavy CAD? Maybe 200 people out of 30,000, and I'm probably being kind. How many of those would pay the tens of thousands of dollars needed from each to compete with the gamers?

Given that each copy of high-end CAD software costs tens of thousands of dollars (with mid-range costing thousands), I'd say a lot of them.

Judging from your comment about your university, you appear to be thinking that the CAD market is just students tinkering on a budget copy of AutoCAD. It's not - I've no idea how the market compares to the games industry in size, but computer-aided design and manufacturing is a billion-dollar market.

Re:OpenGL please (5, Informative)

n dot l (1099033) | more than 6 years ago | (#21015099)

Dammit I hate to see all this DirectX10 emphasis. It's games only.
The book is written for game developers, and none of the topics are exclusive to DX10 - NVIDIA has already released OpenGL extensions that offer the same functionality. The fact that the samples use DX10 is irrelevant because the API isn't the point. Anyone with a working knowledge of both DX and GL can translate code from one to the other fairly easily.

Right now there is no laptop let alone "consumer" card in the world that can handle even the kind of CAD work a lot of people have to do.

These cards cost hundreds of dollars but they can't handle an assembly with 100 parts in a CAD model simply because they barely have any OpenGL hardware in them. A car, airplane, etc has millions of parts.
That's like comparing a pickup truck to a freight train. Consumer cards aren't designed to do CAD, they're designed to do games because (surprise!) they're sold to gamers. Workstation cards are made to do CAD. If you want to play the latest games, you get an 8800GTX. If you want to do CAD, or ultra-high-poly modeling, or movie-quality animation, you get a Quadro FX. Or a FireGL if you prefer AMD/ATI.

And now all the graphics cards are focusing on the DirectX and neglecting OpenGL.
Graphics cards don't focus on either. Graphics cards focus on accelerating the sort of math that's common to all 3D rendering - transforming vertices, rasterizing triangles, and shading fragments (which are roughly analogous to pixels, for those of you that don't speak GL). Graphics drivers focus on DX or GL, and even in the consumer space you'd be stretching if you said that OpenGL is being neglected (see all the OpenGL extensions [opengl.org] that start with NV_ or ATI_ for proof).

Re:OpenGL please (1)

2ms (232331) | more than 6 years ago | (#21019633)

Well I'm pleased to hear that NV and ATI are still working on OpenGL as much as they are DirectX if that's really the case.

Are you really telling me that the only difference between a $1500 Quadro and a gamer card is the drivers, though? The bad-ass gamer card in my friend's computer chokes and can barely run even the most basic animation of an assembly of maybe 30 parts in CAD.

Re:OpenGL please (4, Informative)

n dot l (1099033) | more than 6 years ago | (#21020509)

Well I'm pleased to hear that NV and ATI are still working on OpenGL as much as they are DirectX if that's really the case.
It certainly is. NVIDIA had OpenGL equivalents to the new DX10 features out in the very first release of its DX10 driver. So did ATI (though their first DX10 card came much later than NV's so they had more time to begin with). I don't think either will ever be ignored - and that's a good thing. Competition between the two APIs has yielded a lot of good innovation that's now been adopted into both.

Are you really telling me that the only difference between a $1500 Quadro and a gamer card is the drivers, though? The bad-ass gamer card in my friend's computer chokes and can barely run even the most basic animation of an assembly of maybe 30 parts in CAD.
No. There's much more to it than that, of course. It all comes down to usage. If you profile a video game and a CAD program you'll see that they stress completely different parts of the card. Workstation cards will have more silicon dedicated to things like the memory controller (CAD sends a lot more data across the bus each frame than a game does), whereas consumer cards put most of their power behind the shader processor (games use long and complex shaders to implement animation, lighting, shadowing, etc - CAD typically just shades everything with simple Phong lighting). There's a lot of other differences as well, though I'd rather not write a 10 page essay on the topic right now :)

Re:OpenGL please (1)

Creepy (93888) | more than 6 years ago | (#21025457)

CAD passes more data because it still processes tessellation-type features (converting geometry to a polygon mesh) on the CPU, then passes the results to the GPU (say, for CSG [wikipedia.org]). Geometry shaders don't handle tessellation well, but rumor (or informed speculation) has it that the next type of shader will be specifically for tessellation (so you'll have vertex, fragment/pixel, geometry, and tessellation shaders), and you will be able to pass a set of primitives into the shader and perform CSG on them.

Anything that doesn't define a fragment shader uses the fixed-function Phong illumination model in OpenGL (or technically Blinn-Phong or something like that). CAD uses that for design, but uses software ray tracing for a final render. Ray tracing parallelizes well, but doesn't scale well to memory, since in its true form it requires every object in the scene to be in memory. Fast ray tracing techniques usually order objects in KD-trees [wikipedia.org] to find reflections in the scene faster and are attaining decent speeds on CPUs (Intel estimates that within a few years, GPUs will even be obsolete). Yes, people have done ray tracing in shaders, but this is still approximated and often without full scene data, or on a limited subset of data.

OpenGL is still slow compared to DX for most rendering due to legacy support, however. This was supposed to be addressed in OpenGL 3.0 (i.e. the "Lean and Mean" profile, which streamlines OpenGL but makes fundamental non-backward-compatible changes, similar to DX10, while retaining backwards compatibility through a compatibility profile [which will be "frozen," which may be the hang-up]), due in September, but ratification hasn't happened yet and nobody seems to know what the status is or when we'll see it or cards with it. OpenGL 3.1 is due about 3-5 months after 3.0 is ratified and is meant to bring new features such as geometry shaders into the core (rather than extensions).

Re:OpenGL please (1)

trigeek (662294) | more than 6 years ago | (#21020395)

Actually, as a graphics chip developer, I can tell you that Graphics chip development focuses almost exclusively on Direct3D. What Microsoft wants, Microsoft gets. The needs of OpenGL are entirely secondary when it comes to the hardware design.

Re:OpenGL please (3, Informative)

n dot l (1099033) | more than 6 years ago | (#21020787)

Actually, as a graphics chip developer, I can tell you that Graphics chip development focuses almost exclusively on Direct3D. What Microsoft wants, Microsoft gets. The needs of OpenGL are entirely secondary when it comes to the hardware design.
Whose chips do you develop? And if the answer's NV or ATI then maybe you should talk to whoever gets sent out to GDC, because that sure as hell isn't what they're telling us game developers.

In fact, I can actually think of a few cases where GL had something before DX did: NV_primitive_restart [opengl.org] has been spec'd since 2002 and MS just brought it into DX with DX10 (could have been a caps bit long before then). Same thing with EXT_depth_bounds_test [opengl.org] (is this even in DX10? - I haven't seen it in the docs yet). I'm pretty sure NV also had a bunch of their depth shadowing stuff available through OpenGL before DX had anything of the sort in the spec as well. The only case I can think of where DX could do something way before GL could is MRT rendering - and that's just 'cuz pbuffers were allowed to exist for far too long before the introduction of EXT_framebuffer_object [opengl.org].

And I've always gotten better framerates with large numbers of small draw calls, or when rendering a lot of dynamic content, or even just uploading static data, from OpenGL than I've ever seen in DX, across both ATI and NV's drivers...so it's not like the core paths are being neglected that I can see.

Dunno. I'd be interested to hear some of the cases where you feel GL's being left in the dust... And do your comments also apply to the workstation hardware?

Re:OpenGL please (5, Informative)

s_p_oneil (795792) | more than 6 years ago | (#21015415)

Actually, neither nVidia nor the editor chose DirectX.

Each chapter is contributed by a different author, and each author decides which API to use. I wrote one of the chapters of GPU Gems 2 (see http://sponeil.net/ [sponeil.net]), and my chapter/demo used OpenGL. When I asked the guys at nVidia if they had a preference, they didn't care. They didn't even care whether I used nVidia's Cg or the standard GLSL. (I started with GLSL but switched to Cg because the GLSL compiler didn't optimize it well enough.)

Re:OpenGL please (3, Insightful)

everphilski (877346) | more than 6 years ago | (#21016045)

I am a scientist and CAD user.

Me too. I do CFD and 6DOF modeling and simulation. Guess what I wrote my last piece of CFD visualization software in? C# and DirectX 9.0c.
I use OpenGL as well, for some things, but unless you can enlighten me, what technical reason is there that you cannot use DirectX for scientific visualization? I can't think of one off the top of my head. (For one, it's object-oriented...) In fact, one of the reasons I like DirectX for CFD is the mesh class. If you are visualizing a flow, you are often looking at a mesh of an object, or a cut through the flow. The mesh class in DirectX fully encapsulates the creation of a mesh, vertices, etc. With OpenGL you'd have to manage your own struct of data. Which is fine, but one more thing you have to debug.
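For the curious, a minimal C++ sketch of what "managing your own struct of data" can look like on the OpenGL side, assuming an extension loader such as GLEW provides the buffer-object entry points:

    #include <GL/glew.h>
    #include <vector>

    struct Vertex { float position[3]; float normal[3]; };

    // Upload a vertex array into a buffer object; index buffers, attribute
    // pointers, and draw calls remain your responsibility.
    GLuint uploadMesh(const std::vector<Vertex>& vertices)
    {
        GLuint vbo = 0;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER,
                     vertices.size() * sizeof(Vertex),
                     vertices.data(), GL_STATIC_DRAW);
        return vbo;
    }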

And now all the graphics cards are focusing on the DirectX and neglecting OpenGL. Arg!
No, ARB... the OpenGL Architecture Review Board. Design by committee is slow. When was the last time you heard someone say design by committee was a good thing? :) The spec hasn't been updated meaningfully in ages (though the 3.0 spec is due soon ... I think ... dates keep getting pushed back). So there is nothing for the manufacturers to update!
These cards cost hundreds of dollars but they can't handle an assembly with 100 parts in a CAD model simply because they barely have any OpenGL hardware in them.
Sorry, they utilize the same hardware. OpenGL and DirectX are both APIs to the same hardware on any given video card. Not being able to load 100 parts sounds like a problem ... you sure you aren't using a software renderer?

Re:OpenGL please (1)

Creepy (93888) | more than 6 years ago | (#21026081)

Last date I heard was "end of September, 2007" for OpenGL 3.0. I've heard no new dates, and it still isn't ratified.

As was said in a previous post, CAD is more bandwidth-dependent than games, which is why the OpenGL cards on the market are optimized for bandwidth. There is no good way to do CSG in hardware at the moment (thus the rumored tessellation shaders in next-gen cards), so it's done in software.

I don't see why loading 100 parts on a consumer-level card would be a problem, either, but it depends on part size and available memory. Remember, CAD is more dependent on CPU, main memory, and bandwidth than GPU. I also don't know of any CAD software that runs in fullscreen mode by default, so if the poster is running Vista Aero he/she will get a 10% or more speed hit due to compositing (actually Aero on or off doesn't seem to matter on my laptop, suggesting the custom drivers are always compositing over a DX9 context - Linux on my laptop does not have a speed hit in either mode and performs roughly the same as fullscreen OpenGL on Windows).

Re:OpenGL please (1)

IntergalacticWalrus (720648) | more than 6 years ago | (#21095025)

I use OpenGL as well, for some things, but unless you can enlighten me what technical reason is there that you cannot use DirectX for scientific visualization?

How about not being tied up to Microsoft platforms only?

Re:OpenGL please (1)

AndOne (815855) | more than 6 years ago | (#21016111)

Google OpenGL 3.0. The specification should be coming out shortly. No guarantee when support will come out in driver form, but one of the major thrusts was to modernize OpenGL for the newer generation of graphics cards. A lot of OpenGL was legacy support that isn't such an issue these days. Further, the big thing that DX10 offers is geometry shaders.

so that would be

GL_NV_geometry_shader4

there's also

GL_NV_gpu_program4
GL_NV_half_float
GL_EXT_gpu_shader4
GL_EXT_framebuffer_object

which should cover most of the other things you were complaining are in DX10 and not OpenGL. I'm by no means an OpenGL guru, but five seconds with glGetString(GL_EXTENSIONS) will tell you what your card can do. All this was for my 8800GTS and the latest Linux NVIDIA drivers. Besides, HLSL/Cg/GLSL are all pretty much the same barring some API call names.

Further, if you really want some computational horsepower beyond graphics, use CUDA, which is available for Windows or Linux and has both OpenGL and DX hooks as well as implementations of FFT and BLAS. Of course, for CAD that's probably not an issue, but if you want modeling and particle motion, CUDA could help.

This all presumes you're writing it all yourself. For CAD-specific things I'd guess you'd need a card marketed towards CAD. I don't do CAD so I can't help you there.

Re:OpenGL please (2, Interesting)

pandaman9000 (520981) | more than 6 years ago | (#21017113)

While what you say is likely true, may I ask if you have looked into the FireGL line of cards, or perhaps a fast CPU / large-memory card and emulation? I do not know about FireGL emulation, but some NV cards have historically been able to load Quadro drivers using RivaTuner to emulate the Quadro's identifier bits, or some such. I would expect it to be slower, but CPUs are at quad core and counting, so emulation may be viable, depending on whether it still works.

Re:OpenGL please (1)

Bobby Mahoney (1005759) | more than 6 years ago | (#21018533)

Yeah, I dunno about that... I just this year finished a project designing the mechanical systems (HVAC, plumbing, medical gas, fire, etc.) for a rather large hospital. It was done almost entirely in 3D, on very basic workstations with off-the-shelf consumer hardware (not to mention a few mid-range laptops). Not the most spectacular performance, but the consumer stuff got the job done. To give an idea of assembly/sub-assembly resolution, a single file might have 1,000 assemblies, approx. 100 of which are unique. Some of these might have 2 levels of sub-assemblies with 10 components each, so around 100 components. For the most part, and for conservative figuring, figure an assembly might have 10 components... So here I'm working in CAD with a file which has orders of magnitude greater numbers of assemblies/components than in your example, and the DX10 hardware seems to work out fine... Could you explain what might be different about the components in my building system drawings vs. the 100-part assembly you mentioned in your example?

Re:OpenGL please (1)

mdwh2 (535323) | more than 6 years ago | (#21018831)

There was no emphasis on DirectX in GPU Gems 2 (which I only recently bought, dammit!), and I suspect there won't be in this book either - the books are basically a collection of essays, and the API is the choice of the individual author.

Also, this is a GPU programming book, so the shading language is more relevant than the API (if you don't even know the required OpenGL or DirectX commands to set up shaders, these books aren't for you). And again, GPU Gems 2 at least varies between Cg, HLSL, and GLSL. It's not like there's much difference - the point of books like these is to teach algorithms and best practices. It's not a book to teach you a shading language.

Re:OpenGL please (1)

2ms (232331) | more than 6 years ago | (#21019575)

Not that I made it extremely clear, but I think most of you are missing my point. The point was that if the same resources went into the development of OpenGL and hardware optimized for it as go into DirectX, then maybe both what 3D graphics was originally invented for and gaming could benefit at the same time.

Instead, any CAD use requires $1500 graphics cards because of poor economies of scale. This is a stupid waste.

A lot of you seem to think that I don't understand why DirectX is receiving preferential treatment over the original (non-platform-specific) OpenGL. However, I do -- MS wanted to create something of its own that was deliberately incompatible with existing standards and that game development would end up moving to, whether due to the preponderance of DirectX MS hardware (i.e. Xbox), MS buying a ton of gaming companies, or the simple advantage that DirectX is not designed for CAD et al. and thus can be optimized for gaming.

I realize that video games are huge business and that that's why, in combination with MS's position in the industry, DirectX has to be the one consumer (gamer) cards are oriented for. So no need to explain this to me.

What many of you are obviously very naive about (particularly with the little examples of how you knew like 30 people out of 30,000 or something at school who used CAD, blah blah) is how practically everything is made now. Look at your computer case -- it was designed in CAD. Look at your keyboard, your mouse, the outlet you plug the computer into, the lamps in your room, your furniture, every appliance in your house, etc. -- everything is designed in CAD! Half of your gadgets could never be made without CAD. Half of everything you use would break easily or weigh way too much if it weren't for FE analysis in CAD.

Anyway, right now every person who wants to actually build, design, test, etc. something and manufacture it needs to spend $1500 on a video card, because all the kids are using DirectX when games could just as well have been made with OpenGL (or an OpenGL with the gaming emphasis added in, rather than directed to DirectX instead). I'm using a lot of hyperbole, but maybe you see what my point was a little better.

Re:OpenGL please (0)

Anonymous Coward | more than 6 years ago | (#21020741)

Umm, bullshit.

First, Microsoft created DirectX because:
  A) OpenGL wasn't a great gaming platform and had little interest, at the time, in ever being one.
  B) Non-standard APIs were becoming quite popular, especially 3dfx's Glide.
  C) Gaming on Windows was awful, leading DOS-based games to stay the norm.

The consumer video cards support equivalent OpenGL extensions, which means you aren't locked in. In fact, you'll notice that, unsurprisingly, the exact same chip is used for consumer and industry-grade cards by Nvidia, with minor modifications to optimize for the specific market. There is nothing blocking you from buying a consumer card.

The main advantage of OpenGL versus DirectX is cross-platform capabilities, which used to be quite important for CAD tools which ran on big UNIX boxes. Since you fail to bring that up, all of your complaints are moot.

Re:OpenGL please (1)

everphilski (877346) | more than 6 years ago | (#21024365)

Instead, any CAD use requires $1500 graphics cards because of poor economies of scale. This is a stupid waste.

You are talking bullshit. I work in aerospace. My buddy a few floors down does CAD work on an HP desktop workstation (same build as mine) with a beefy but standard nVidia video card (Quadro FX). It costs half the price you cite. I know people who have that card in their **home gaming** computer. He visualizes entire rocket stages, thousands of unique components, smoothly.

A lot of you seem to think that I don't understand why DirectX is receiving preferential treatment over the original (non-platform-specific) OpenGL. However, I do -- MS wanted to create something of its own that was deliberately incompatible with existing standards and that game development would end up moving to, whether due to the preponderance of DirectX MS hardware (i.e. Xbox), MS buying a ton of gaming companies, or the simple advantage that DirectX is not designed for CAD et al. and thus can be optimized for gaming.

Has nothing to do with preferential treatment; read the AC comment to this post, he sums it up nicely. Microsoft wanted, nay needed, control of the API to make gaming a success on Windows. And I would argue they were highly successful. Microsoft can turn around API updates much quicker than the OpenGL ARB does. Proof's in the pudding. How many times has the release of the 3.0 spec been delayed?

Anyway, right now every person who wants to actually build, design, test, etc. something and manufacture it needs to spend $1500 on a video card, because all the kids are using DirectX when games could just as well have been made with OpenGL (or an OpenGL with the gaming emphasis added in, rather than directed to DirectX instead). I'm using a lot of hyperbole, but maybe you see what my point was a little better.

I'm trying to decide if you are a troll or if you are just that naive. Remember, DirectX and OpenGL are just APIs. They target the same hardware. The hardware dictates how many triangles per second can be rendered, how shading operations can be performed, etc. DirectX did not create a schism in video cards. The highest of high-end video cards have DirectX drivers just like the lowest of low-end video cards have OpenGL drivers. If anything, these 'kids' dropped the price of high-end video cards via supply and demand (raise demand, put more money in AMD/nVidia's pockets, get more supply and better cards sooner) by purchasing more cards and in many cases purchasing many high-end cards. When you graduate from college, have a good job, a house, and aren't married yet, as a gamer, where does your money go? A beefy computer. Thank them.

And sure, you could program a game in OpenGL. I'm not sure I'd want to. OpenGL is a non-object-oriented state machine. When you start to get into more complicated architectures, this gets messy: implementing non-object-oriented code in object-oriented game code / engineering code. You basically wind up writing a wrapper for OpenGL to store states and stuff. Or you could program in DirectX, which is object-oriented and handles all that for you. And if you are a game programmer, you have the added advantage of being two steps away from porting to Xbox. (And yes, I've programmed in both OpenGL and DirectX.)

Re:OpenGL please (1)

2ms (232331) | more than 6 years ago | (#21030311)

Why do NVidia cards that can run CAD/OpenGL cost several times as much as NVidia cards that can't?

Re:OpenGL please (1)

everphilski (877346) | more than 6 years ago | (#21030743)

Why do NVidia cards that can run CAD/OpenGL cost several times as much as NVidia cards that can't?

I have no idea what you mean. Take the Quadro FX 3500 - nVidia classes it as 'Ultra high-end' [nvidia.com] among their workstation cards - and compare it to a GeForce 8800 Ultra, which is marketed typically to gamers [nvidia.com]:

                  Quadro FX 3500 / GeForce 8800 Ultra
memory:           512 MB / 768 MB
memory interface: 256-bit / 384-bit
memory bandwidth: 42.2 GB/sec / 103.7 GB/sec

Both cards support OpenGL 2.0 and DirectX 9.0c.

So there you have it: a high-end gamer card that performs on par with or slightly better than an 'ultra high-end' workstation card. Priced about the same at reputable retailers. I cannot find an instance where a comparable "CAD/OpenGL card" costs "several times" as much. Mind sharing? Or are you complaining that CAD is more graphics-intensive than video games? If so, it wouldn't matter what API you use... DirectX, OpenGL. Just a programming API to the same damn video card.

Re:OpenGL please (1)

2ms (232331) | more than 6 years ago | (#21037633)

You need to talk to a CAD draftsman some time. You'll quickly learn that there is not a gaming card in existence that can hold a candle to even the cheapest CAD-oriented card on the market. The gaming cards are useless as soon as you try to do anything with any movement.

Re:OpenGL please (1)

everphilski (877346) | more than 6 years ago | (#21047163)

You didn't read my post. My buddy downstairs with the NVidia Quadro **is** a CAD draftsman. We both work for the same company.

And if what you are saying is true, please, point out for me a "CAD-oriented card". Any vendor.

You are spewing bullshit.

Re:OpenGL please (0)

Anonymous Coward | more than 6 years ago | (#21022227)

This is nonsense. There is nothing that OGL can do that DX cannot. In fact, DX can do everything a modern GPU can do without the developer having to screw around with vendor-specific extensions to an antiquated API.

The hardware is not the issue - the OGL drivers are the issue - and they are the issue at least partly because OGL is not as well designed for GPU performance as DX (and partly because the demand for quality OGL drivers simply isn't there).

In a nutshell, it's as simple as this: get with the times and put pressure on your CAD vendor to offer DirectX support.

Re:OpenGL please (1)

Mendy (468439) | more than 6 years ago | (#21038349)

These cards cost hundreds of dollars but they can't handle an assembly with 100 parts in a CAD model simply because they barely have any OpenGL hardware in them. A car, airplane, etc has millions of parts.
If you're using an Autodesk 07 product, it will likely have looked to see what graphics hardware and driver version you have, not found it in its hardware database, and disabled all the 3D acceleration.

You could try having a look on the Autodesk website and downloading the latest XML compatibility file to see if your card is now supported, but if it's a consumer card this is unlikely, and you're probably just going to have to go into the options and enable it manually. Before you do this it's also worth doing some research on the Autodesk website and elsewhere about your card - while most of them are disabled unnecessarily to be on the safe side, there are some which do have rendering problems.

Re:OpenGL please (1)

dasir (1054076) | more than 6 years ago | (#21082001)

Yes, I agree with you. DirectX programming is only available on Windows systems, but OpenGL offers more compatibility.

Open Source Gems? (4, Interesting)

tji (74570) | more than 6 years ago | (#21014663)

Are any OSS projects using these capabilities?

As a non-gamer, I'm primarily interested in video acceleration. XvMC is fairly limited, and I have seen talk of doing GPU acceleration, like GLSL, on the following OSS projects:

MythTV
MPlayer + ffmpeg
VLC
XBMC Linux

Anyone know what the "state of the art" is for GPU accelerated video in open source?

Re:Open Source Gems? (1, Funny)

Anonymous Coward | more than 6 years ago | (#21014879)

I believe the state of the art is complaining to NVIDIA that you still don't have hardware acceleration.

Re:Open Source Gems? (0)

Anonymous Coward | more than 6 years ago | (#21015037)

There's an OpenGL based library and a collection of tools for GPU-based image and data processing on http://cvtool.sourceforge.net/ [sourceforge.net] .

Open Source Porn? (0)

Anonymous Coward | more than 6 years ago | (#21016127)

"Anyone know what the "state of the art" is for GPU accelerated video in open source?"

Getting porn to appear faster.

Re:Open Source Gems? (3, Informative)

lavid (1020121) | more than 6 years ago | (#21016883)

So, I'm cited in this book for my work on the parallel prefix sum implementation they used. I later went on to rework an MPEG4 encoder for CUDA acceleration. So, to answer your question about using CUDA in these projects: it does offer a speedup, specifically of motion estimation, where most of encoding time is spent. Also, a lot of that speedup comes from exploiting the G80's memory architecture, which I do not believe you can do using GLSL. The problem ends up being that you need a G80, you need NVIDIA's drivers, and you need NVIDIA's compiler.
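For those unfamiliar with the primitive being discussed, here is a sequential C++ sketch of the exclusive prefix sum (scan). The CUDA implementation in the book computes the same result, but in parallel across a block of threads using an up-sweep/down-sweep pass over shared memory.

    #include <cstddef>
    #include <vector>

    // out[i] = sum of in[0..i-1]; out[0] = 0.
    std::vector<int> exclusiveScan(const std::vector<int>& in)
    {
        std::vector<int> out(in.size());
        int running = 0;
        for (std::size_t i = 0; i < in.size(); ++i)
        {
            out[i] = running; // everything strictly before element i
            running += in[i];
        }
        return out;
    }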

Longs Peak Release? (1)

Chlorus (1146335) | more than 6 years ago | (#21015519)

This is a bit off-topic, but does anyone know what happened exactly with OpenGL 3.0? I thought it was supposed to have been released in September, but I've been unable to dig up any information on the delay.

Re:Longs Peak Release? (1)

n dot l (1099033) | more than 6 years ago | (#21015709)

There hasn't been any info on OpenGL 3.0 in a while now. Even the "sneak peek" type articles seem to have dried up. My guess is that NVIDIA and AMD/ATI are either doing some last-minute bickering over features, or that someone realized that the spec contains something that would be impossible to implement (my guess would be "mixed OpenGL 2.x/3.0" rendering).

What is speed of gpu card evolution? (1)

mattr (78516) | more than 6 years ago | (#21021147)

I saw a bunch of NVIDIA's nice desk-side GPUs in a glassed-in projection room run by SGI at an industrial VR show. Being able to throw two or more at the data flow lets them drive a 4K image (of a sportscar), though even then it looked underpowered.

But those little boxes (I guess this is the NVIDIA Quadro Plex series) go for over $20K each. At the current rate of progress, when will that drop in price and size to something that can fit inside a desktop or laptop, presumably with a giant ASIC one day? It will be sheer envy to buy this gorgeous book and have to deal with Vista, and still be underpowered. So I'd rather dream about a Linux machine instead. What investment in a Linux/NVIDIA Quadro Plex setup will allow the book's high-end techniques to work today?

Accelerated Image Processing (0)

Anonymous Coward | more than 6 years ago | (#21021375)

Looking forward to the day GIMP or Photoshop use graphics cards to accelerate image processing, or does Photoshop do that already?
Seems pretty silly to use the CPU when you could push the work off to the GPU and get it done faster.