
Carmack On 'Infinite Detail,' Integrated GPUs, and Future Gaming Tech

Soulskill posted more than 3 years ago | from the building-a-better-virtual-rocket-launcher dept.

Graphics 149

Vigile writes "John Carmack sat down for an interview during Quakecon 2011 to talk about the future of technology for gaming. He shared his thoughts on the GPU hardware race (hardware doesn't matter but drivers are really important), integrated graphics solutions on Sandy Bridge and Llano (with a future of shared address spaces they may outperform discrete GPUs) and of course some thoughts on 'infinite detail' engines (uninspired content viewed at the molecular level is still uninspired content). Carmack does mention a new-found interest in ray tracing, and how it will 'eventually win' the battle for rendering in the long run."


149 comments


Does carmack still check slashdot? (2)

walshy007 (906710) | more than 3 years ago | (#37072524)

Years ago he posted here on occasion and I even remember seeing small active discussions on rendering technique technical points etc.

It has been ages since I've last seen this, which makes me ponder if he even reads slashdot these days when topics related to him are posted.

Re:Does carmack still check slashdot? (-1)

Anonymous Coward | more than 3 years ago | (#37072600)

I'd be seriously surprised if he wasted his time on this dump. Time has long since passed Slashdot by.

Re:Does carmack still check slashdot? (4, Funny)

Stradenko (160417) | more than 3 years ago | (#37072634)

Yeah, anyone still around here is obviously a real loser.

Hey! (4, Funny)

toby (759) | more than 3 years ago | (#37072808)

Speak for yourself! ;-)

Re:Does carmack still check slashdot? (1)

elsurexiste (1758620) | more than 3 years ago | (#37072812)

It's hard sometimes to wade through hordes of trolls and misinformed people, I guess...

Re:Does carmack still check slashdot? (1)

Baloroth (2370816) | more than 3 years ago | (#37073122)

It's hard sometimes to wade through hordes of trolls and misinformed people, I guess...

Well then, it's a good thing he works in the video game industry, which is troll and misinformation free!

Why can't we have both. (0)

Anonymous Coward | more than 3 years ago | (#37072560)

Ray tracing has been around for a long time. The problem is that it just takes so long to render well. It's much better than it used to be, because we have more processing power to throw at it, but it's still slow, and polygons, or now "infinite detail," seem like a reasonable shortcut until real-time ray tracing arrives, as they offer graphics fast enough and good enough for our needs. One day I hope for real-time ray tracing... however, that is still down the line. Once Pixar doesn't need server farms for its final-quality output, I will be happy.

Re:Why can't we have both. (1)

MichaelKristopeit420 (2018880) | more than 3 years ago | (#37072658)

I hope for real time Ray Tracing...

i'm not sure if you don't understand what "real time" means, or what "ray tracing" means... but i do know that you're an idiot.

Re:Why can't we have both. (1)

Skarecrow77 (1714214) | more than 3 years ago | (#37072952)

I know, right!

Next, he'll be asking for real time mpeg encoding.

You can have both rasterization and ray-casting. (2)

Suiggy (1544213) | more than 3 years ago | (#37072742)

The Euclideon infinite detail technology was exposed in a recent interview to be efficient sparse voxel octree ray-casting with contours (instead of axis-aligned cubes), which was previously investigated by nVidia researchers and others.

http://www.tml.tkk.fi/~samuli/publications/laine2010tr1_paper.pdf [tml.tkk.fi]

It's still not quite competitive with conventional rasterization techniques if you also try to do decent lighting with it along with all of the other stuff that goes into developing a game. Maybe in a couple more GPU generations. And in the Euclideon demos, you can see they're only getting 15-25 frames per second, and yet they're using a simplistic lighting model with a single global light.

Also, you can merge polygon rasterization and voxel ray-casting under the same rendering architecture if you employ deferred shading and composition. That way you can have your static models as a voxel octree that you ray-cast into your geometry buffer, rasterize your dynamic polygonal models in a second pass into the same geometry buffer, then perform your lighting passes with the geometry buffer as input, and finally your scene composition and post-processing passes. In fact, I bet you that in 5-6 years' time, this will become standard practice among PC game developers. Unfortunately, I think the upcoming console generation, with hardware from Sony and MS due out next year, will miss out on being able to handle this.
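As a toy sketch of the hybrid pipeline described above (all names and numbers here are illustrative, not any real engine's API): two passes, one standing in for the voxel ray-cast and one for the polygon rasterizer, write depth-tested fragments into a shared geometry buffer, and a deferred pass lights the result afterwards.

```python
# Illustrative sketch only: merging a voxel ray-cast pass and a polygon
# rasterization pass into one shared geometry buffer (G-buffer), then
# lighting it in a deferred pass. No real engine API is being modeled.

W, H = 4, 4  # tiny frame buffer

# G-buffer stores (depth, albedo) per pixel; depth starts at infinity
gbuffer = [[(float("inf"), 0.0) for _ in range(W)] for _ in range(H)]

def write_fragment(x, y, depth, albedo):
    """Depth-tested write shared by both geometry passes."""
    if depth < gbuffer[y][x][0]:
        gbuffer[y][x] = (depth, albedo)

def raycast_static_pass():
    # Stand-in for ray-casting a static voxel octree: every pixel
    # hits the static scene at depth 10.
    for y in range(H):
        for x in range(W):
            write_fragment(x, y, 10.0, 0.5)

def rasterize_dynamic_pass():
    # Stand-in for rasterizing a dynamic polygonal model covering the
    # top-left quadrant, closer to the camera than the static scene.
    for y in range(H // 2):
        for x in range(W // 2):
            write_fragment(x, y, 5.0, 0.9)

def deferred_lighting_pass(light_intensity=1.0):
    # The lighting pass reads only the G-buffer; it never revisits
    # the scene geometry, which is the point of deferring.
    return [[albedo * light_intensity / depth
             for depth, albedo in row] for row in gbuffer]

raycast_static_pass()
rasterize_dynamic_pass()
image = deferred_lighting_pass()
```

Because both passes go through the same depth test, their order doesn't matter; that is what lets static voxel content and dynamic polygonal content coexist in one frame.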

Ray Tracing != Ray Casting (5, Informative)

Suiggy (1544213) | more than 3 years ago | (#37072582)

It should be noted that John Carmack believes that Ray Casting, not Ray Tracing, will win out in the long term.

Unfortunately, many people outside of the graphics field confuse the two. Ray casting is a subset of ray tracing in which only a single sample is taken per pixel: a single ray is cast per pixel into the scene and a single intersection is taken with the geometry data set (or, in the case of a translucent surface, the ray may be propagated in the same direction a finite number of times until an opaque surface is found). No recursive bouncing of rays is done. Lighting is handled through other means, such as traditional forward shading against a data set of light sources, or a separate deferred shading pass to decouple the combinatorial explosion of overhead caused by scaling up the number of lights in a scene.
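A minimal sketch of ray casting in this sense: exactly one primary ray per pixel, one nearest intersection, and no secondary rays ever spawned. The scene and camera here are invented purely for illustration.

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return distance t to the nearest hit, or None. Direction must be unit."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def render(width, height):
    sphere = ((0.0, 0.0, 5.0), 1.0)  # center, radius
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # One primary ray per pixel through a simple pinhole camera
            px = 2.0 * (x + 0.5) / width - 1.0
            py = 1.0 - 2.0 * (y + 0.5) / height
            length = math.sqrt(px * px + py * py + 1.0)
            ray = (px / length, py / length, 1.0 / length)
            t = intersect_sphere((0.0, 0.0, 0.0), ray, *sphere)
            # Shading would happen here (forward or deferred); we just
            # record hit vs miss -- no recursive bounces are taken.
            row.append(1 if t is not None else 0)
        image.append(row)
    return image

img = render(8, 8)
```

Turning this into a ray *tracer* would mean spawning new reflection and refraction rays at each hit point, which is exactly the recursion ray casting omits.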

John Carmack has been quoted as saying that full-blown ray tracing just isn't feasible for real-time graphics due to the poor memory access patterns involved: casting multiple rays per pixel, with multiple recursive steps, ends up touching a lot of memory in your geometry data set, which thrashes the cache on CPU and modern/future GPU hardware alike.

When people talk about real-time ray tracing, they almost invariably are referring to real-time ray casting.

Re:Ray Tracing != Ray Casting (1)

Vigile (99919) | more than 3 years ago | (#37072684)

I think he has come full circle on that though and thinks ray TRACING will win. Did you listen to the full interview?

Re:Ray Tracing != Ray Casting (2)

Suiggy (1544213) | more than 3 years ago | (#37073298)

I admit, I made my post before watching the video. Now that I've watched it, I think my position still holds. He talked a lot about using ray tracing in content preproduction and preprocessing tools during development, and he talked about building ray-tracing engines in the past to experiment with the performance characteristics. This is the same research from which he concluded that ray casting will be better than ray tracing. Where his position has changed is that he used to think it would only be feasible using voxels as your input data set, but he's starting to come around on ray casting against analytical surfaces... polygonal meshes, curved surfaces, etc. in real time.

Re:Ray Tracing != Ray Casting (2)

Creepy (93888) | more than 3 years ago | (#37074686)

I'd hope he is talking about ray tracing - I'm basically doing ray tracing inside of polygons in shaders today, and I'm sure he has, too. I really don't think ray casting has enough advantages over polygons to make it worth it, and it has significant disadvantages that would need to be worked around (no reflections, shadows, etc.). Back in the 1980s/1990s, programmers used the painter's algorithm instead of a z-buffer because the z-buffer was significantly slower up to a certain number of polygons, and right as the z-buffer was becoming practical, consumer hardware became available that used it, killing the painter's algorithm. Ray tracing falls into a similar category - as the number of objects in a scene grows, it scales linearly, so at some point it will be as cheap or cheaper to use ray tracing as it is to use polygons. If monitors had stuck with 640x480, we'd easily be there today.

Not that ray tracing doesn't have issues - it needs access to the entire scene in memory (GPU memory optimally; GPU parallel processing is ideal), creates hard shadows only (without photon mapping or similar - I'd say radiosity, but I doubt we'll see that full-scene in realtime anytime soon; O(n^5) I believe), requires a lot more processing power for larger monitors (640x480 is 307,200 rays, 1920x1200 is 2,304,000 rays, or 7.5x more expensive - in contrast, polygon cost doesn't increase with monitor size), and renders reflective lighting far better than diffuse.

All of the problems above are solvable by lots of GPU horsepower and memory - we're not there yet, but in 10 years, who knows?
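A quick arithmetic check of the ray counts in the comment above, assuming one primary ray per pixel:

```python
# One primary ray per pixel, no anti-aliasing:
rays_vga = 640 * 480           # 307,200 rays per frame
rays_hd  = 1920 * 1200         # 2,304,000 rays per frame
factor   = rays_hd / rays_vga  # 7.5x more expensive

# 4x supersampled anti-aliasing multiplies the ray count again:
rays_hd_4xaa = rays_hd * 4     # 9,216,000 rays per frame
```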

Re:Ray Tracing != Ray Casting (1)

AvitarX (172628) | more than 3 years ago | (#37072702)

Does that mean you miss out on the reflectivity though?

Because I thought that was one of the main benefits to ray tracing.

Re:Ray Tracing != Ray Casting (2)

Suiggy (1544213) | more than 3 years ago | (#37072820)

Yes, you do lose out on reflectivity, so you have to handle reflections through other means. If you use deferred shading, you can handle it in a separate reflective lighting pass in combination with something like cube-mapping. Even though you do a second pass to handle it, it'll still end up being faster than had you tried to implement fully recursive ray-tracing.

Ray casting would still have the benefit of providing better scalability than polygonal rasterization, as there is a fixed cost per pixel, whereas with polygon rasterization, scaling up the complexity of the geometry increases the computational cost. And that's why Carmack thinks it will eventually win out against rasterization: ray casting becomes a better use of the available computational resources at your disposal once you get past a certain level of detail in your scenes.

Re:Ray Tracing != Ray Casting (2)

QuasiSteve (2042606) | more than 3 years ago | (#37072738)

When people talk about real-time ray-tracing, they almost always invariably are referring to real-time ray-casting.

Developers of dozens of realtime raytracers are lining up to disagree with you.
e.g. VRay RT, Octane, etc.
http://www.youtube.com/watch?v=6QNyI_ZMZjI [youtube.com]

Raytracing is extremely straightforward and parallel. The only thing you have to do to make it feasible for use in games is either A. throw more power at it, B. cheat where you can, or C. a combination of A and B.

Not to mention companies that make dedicated hardware, almost invariably treating raytracing less as a raycasting problem and more as a data access problem.

Re:Ray Tracing != Ray Casting (1)

AvitarX (172628) | more than 3 years ago | (#37072810)

It looks like even basic movement causes trouble.

Re:Ray Tracing != Ray Casting (3, Informative)

Suiggy (1544213) | more than 3 years ago | (#37072898)

The video you posted is not real-time frame rates, it's interactive frame rates. It takes a few seconds to fully recompute the scene once you stop moving the camera. And note how there's only a single car model. Imagine scaling up the amount of geometry to a full world. With ray tracing, as you scale up the complexity of the geometry, you scale up the required computational complexity as well due to radiosity computations. Full ray tracing on huge worlds in real time is a pipe dream. What you will be able to do with ray casting, or rasterization with deferred shading composition to simulate things like reflections or radiosity, will always be more than what you can do with ray tracing, and so games developers will always choose the former.

Re:Ray Tracing != Ray Casting (3, Insightful)

Sycraft-fu (314770) | more than 3 years ago | (#37073260)

I think part of the problem is that you get a bunch of CS types who learned how to make a ray tracer in a class (because it is pretty easy and pretty cool) and also learn it is "O(log n)" but don't really understand what that means or what it applies to.

Yes, in theory a ray tracing engine scales logarithmically with the number of polygons. That means that past a point you can do more and more complex geometry without a lot of additional cost. However, that forgets a big problem in the real world: memory access. Memory access isn't free, and it turns out to be a big issue for complex scenes in a ray tracer. You have to understand that algorithm speeds can't be taken out of the context of a real system. You have to account for system overhead. Sometimes it can mean a theoretically less optimal algorithm is better.

Then there's the problem of shadowing/shading you pointed out. In a pure ray tracer, everything has that unnatural shiny/bright look. This is because you trace rays from the screen back to the light source. Works fine for direct illumination but the real world has lots of indirect illumination that gives the richness of shadows we see. For that you need something else like radiosity or photon mapping, and that has different costs.

Finally there's the big issue of resolution that ray tracing types like to ignore. Ray tracing doesn't scale well with resolution. It scales O(n) in the number of pixels, and of course the pixel count grows quadratically since you increase both horizontal and vertical resolution when you go to a higher PPI. Then if you want anti-aliasing, you have to do multiple rays per pixel. This is why when you see ray tracing demos they love to have all kinds of smooth spheres, yet run at a low resolution. They can handle the polygons, but ask them to do 1920x1080 with 4xAA and they are fucked.

Now none of this is to say that ray tracing will be something we never want to use. But it has some real issues that people seem to like to gloss over, issues that are the reason it isn't being used for realtime engines.

Re:Ray Tracing != Ray Casting (2)

Suiggy (1544213) | more than 3 years ago | (#37073544)

Yeah, not to mention that with full ray tracing, as you add more lights to a scene, it increases the overall complexity per pixel per ray bounce linearly as well. That's why deferred shading is so nice: it decouples the lighting from the rasterization or ray-casting/tracing step and lets you scale up the number of lights independently, with a fixed amount of overhead and a linear cost per light for the entire scene, instead of per pixel or per ray or per polygon.

Going with pure ray-tracing doesn't let you take advantage of that.

Re:Ray Tracing != Ray Casting (1)

Rockoon (1252108) | more than 3 years ago | (#37074164)

Yeah, not to mention that with full ray tracing, as you add more lights to a scene, it increases the overall complexity per pixel per ray bounce linearly as well. That's why deferred shading is so nice: it decouples the lighting from the rasterization or ray-casting/tracing step and lets you scale up the number of lights independently, with a fixed amount of overhead and a linear cost per light for the entire scene, instead of per pixel or per ray or per polygon.

I highlighted the key thing that you overlooked. They are both linear to the number of lights.

Re:Ray Tracing != Ray Casting (1)

Suiggy (1544213) | more than 3 years ago | (#37074288)

Again, your ability to comprehend fails you. Scaling linearly per ray cast (and you're doing multiple ray casts per pixel with ray tracing) and scaling linearly with respect to the entire frame buffer are entirely different.

Ray-tracing: O(2^b * L * log N)
Ray-casting + deferred shading: O(log N) + O(L)
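A toy per-pixel cost model of those two expressions (the constants and scene numbers are made up; only the growth behavior matters):

```python
import math

# Toy per-pixel cost model for the two big-O expressions above.
# Constants and inputs are invented; only relative growth is meaningful.

def raytrace_cost(bounces, lights, surfaces):
    # O(2^b * L * log N): each bounce forks rays, and every ray is
    # shaded against all L lights
    return (2 ** bounces) * lights * math.log2(surfaces)

def raycast_deferred_cost(lights, surfaces):
    # O(log N) + O(L): one cast into the spatial structure, then the
    # lights are applied once in a deferred pass
    return math.log2(surfaces) + lights

# With a million surfaces, 64 lights, and 3 bounces:
rt = raytrace_cost(3, 64, 2 ** 20)       # 8 * 64 * 20 = 10240 units
rc = raycast_deferred_cost(64, 2 ** 20)  # 20 + 64 = 84 units
```

The multiplicative coupling of lights and bounces in the first expression, versus the additive split in the second, is the whole argument in miniature.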

Re:Ray Tracing != Ray Casting (1)

Rockoon (1252108) | more than 3 years ago | (#37075066)

Amazingly, you think that lighting can be handled without respect to scene geometry.

You do realize that the deferred lighting passes must deal with the actual scene geometry, and not just the frame buffer, right?

Re:Ray Tracing != Ray Casting (1)

ShakaUVM (157947) | more than 3 years ago | (#37074172)

It gets even worse when you realize you have to do subsurface scattering to get realistic looks for a lot of surfaces (like, oh, skin). Then you no longer can terminate a photon when you reach most surfaces, but then have to further reflect and refract photons from that point.

It does make for nice looking materials, though... without it, you get those iconic hard and shiny surfaces in ray traced images, like the famous metal balls.

(http://en.wikipedia.org/wiki/Subsurface_scattering)

Re:Ray Tracing != Ray Casting (1)

QuasiSteve (2042606) | more than 3 years ago | (#37074562)

I won't go into what constitutes realtime vs. interactive, as I can make even the fastest game engine out there 'interactive' as long as I run it on low-enough hardware with all the features enabled, across multiple HD screens.

But the converse is also exactly my point - raytracing is something that scales very well, you just throw more computational power at it.

I did also mention 'cheats'; in that a game engine doesn't necessarily rely on any one single technology to begin with. We've now got on-hardware tessellation and mesh displacement - but that doesn't stop most games from still using bump maps, normal maps, parallax maps, etc. for the vast majority of surfaces (having to support older hardware factors in to it as well, of course).

Re:Ray Tracing != Ray Casting (1)

MichaelKristopeit420 (2018880) | more than 3 years ago | (#37072956)

so your argument against someone claiming that a large number of people are making an obvious error, is to point out that a large number of people are willing to defend making the same obvious error?

if there is a period of time between when a "ray" exists and when the ray is "traced", then the system is not a real-time system relative to rays.

apparently you, along with the ignorant masses, do not understand the implications of the variable latency required to trace reflections.

real-time ray-tracing is an oxymoron.

you're just a moron.

Re:Ray Tracing != Ray Casting (0)

Anonymous Coward | more than 3 years ago | (#37073208)

Real-time != instantaneous. That's not possible.

You'd also get more brownie points by using big boy words like 'recursion' instead of 'latency', but barely.
Learn some shit before you shit-talk.

Re:Ray Tracing != Ray Casting (1)

Pinky's Brain (1158667) | more than 3 years ago | (#37073272)

Real time in its common technical meaning is mostly nonsensical when talking about rendering; it would mean you could guarantee a maximum delivery time. Real-time rendering is most often used as a synonym for interactive rendering (with interactive being a fuzzy concept, but let's say >10 fps in honor of Carmack's famous turtle in Quake).

Re:Ray Tracing != Ray Casting (2)

Suiggy (1544213) | more than 3 years ago | (#37072964)

I also forgot to mention that conventional rasterization and ray casting are just as parallel in nature as ray tracing. In fact, more so, because they have much better memory access patterns, as I mentioned in a previous post. Memory access is the biggest limiting factor in building scalable, parallel systems. If you don't have good memory access patterns, you might as well be doing sequential work, because it's getting serialized by the hardware memory controller anyway.

And this situation isn't improving; it's actually getting worse with each subsequent hardware generation: memory access is becoming a bigger and bigger bottleneck. I suggest you educate yourself on how memory works on modern cache-coherent hardware.

https://lwn.net/Articles/250967/ [lwn.net]

Re:Ray Tracing != Ray Casting (1)

Pinky's Brain (1158667) | more than 3 years ago | (#37073146)

The data access problem of backward rendering is unsolvable ... it will always access data without regard to object coherency. For primary and shadow rays forward renderers will always be able to be more efficient when efficient occlusion culling is possible and subsampling isn't needed.

The video you linked has a realtime preview ... in 1/60th second it probably doesn't get that much further than the raycasting solution (primary rays).

Re:Ray Tracing != Ray Casting (1)

billcopc (196330) | more than 3 years ago | (#37073398)

The only thing that makes it "realtime" is that it has a relatively high redraw rate (for a raytracer). That's why there is a lot of fuzz when the camera pans around. It might render only a few thousand rays between redraws, which does give fast feedback but also slows down the rendering process overall. Most raytracers will churn 100k rays before updating the preview.

This is analogous to progressive jpeg decoding, where you start with a very chunky low-res preview and gradually work your way up to the full detail image. I don't see how this could be usable in games, unless you're specifically going for that TV-static effect :P

Re:Ray Tracing != Ray Casting (1)

Rockoon (1252108) | more than 3 years ago | (#37072996)

Unfortunately, many non-programmers such as yourself don't understand algorithmic complexity, and as such fail to realize that O(P log N) will eventually beat O(PN) once N is large enough, even though the constants in the first are much larger than the constants in the second.

Carmack knows that raytracing will eventually be superior in performance to rasterization because it is inevitable.

The thing is that when the critical N is reached, O(P log N) isn't just going to be slightly better, it's going to be enormously better from then on out. It is similar to how for small N Bubble Sort beats Quick Sort, but once the critical N is reached, Bubble Sort is left in the dust with absolutely no hope of beating the better-scaling algorithms.
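A toy illustration of that crossover, with invented constants: an O(P log N)-style cost with a large constant eventually undercuts an O(PN)-style cost with a small one.

```python
import math

# Invented constants: the tracer pays a huge per-ray cost but scales
# as P log N; the rasterizer pays a tiny per-primitive cost but scales
# as P*N. Past some N, the log term must win.

P = 2_000_000                     # pixels per frame
C_TRACE, C_RASTER = 500.0, 0.01   # made-up per-unit costs

def trace_cost(n):   # ~ C * P * log N
    return C_TRACE * P * math.log2(n)

def raster_cost(n):  # ~ C * P * N
    return C_RASTER * P * n

# Double the primitive count until tracing becomes cheaper
n = 2
while trace_cost(n) >= raster_cost(n):
    n *= 2
```

With these particular constants the crossover lands around a million primitives; change the constants and the crossover moves, but it always exists, which is the "inevitable" part of the argument.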

Re:Ray Tracing != Ray Casting (1)

Suiggy (1544213) | more than 3 years ago | (#37073142)

I can't tell if you're trolling or you just wrote up a reply without fully reading my post. I never said rasterization is better. I alluded to the fact that ray-casting will win out over rasterization, and in the very near future. What I said was the ray-casting will win out over ray-tracing. The algorithmic complexity of ray-casting + deferred-shading is better than recursive ray-tracing.

Re:Ray Tracing != Ray Casting (0)

Anonymous Coward | more than 3 years ago | (#37073572)

Scumbag Rockoon. Says parent doesn't understand the difference between O(P log N) and O(PN), and then advocates an O(c * 2^backtraces * N) algorithm over the O(c * N) algorithm described by the parent.

Re:Ray Tracing != Ray Casting (1)

Suiggy (1544213) | more than 3 years ago | (#37073834)

And if you include lighting in the equation, with N = L lights + S geometric surfaces, he's advocating for O(c * 2^backtraces * N) over the O([c1 * S] + [c2 * L]) that you would get with ray casting + deferred shading and lighting, where it could be proven that c2 < c1 < c, seeing as how you also have to cast multiple rays per recursive back-trace to get a decent approximation of lighting in your scene, whereas you only need to perform a single sample per pixel with what I'm advocating.

Re:Ray Tracing != Ray Casting (1)

Suiggy (1544213) | more than 3 years ago | (#37073860)

That should read c2 < c1 < c.

Re:Ray Tracing != Ray Casting (0)

Anonymous Coward | more than 3 years ago | (#37074000)

Neither of you know what N is! Lol!

Re:Ray Tracing != Ray Casting (1)

Suiggy (1544213) | more than 3 years ago | (#37073918)

Actually, now that I think of it, that should be O(c * 2^backtraces * log N) vs O([c1 * log S] + [c2 * L]), given that you would normally use a spatial data structure such as an octree with logarithmic look-up times.

Re:Ray Tracing != Ray Casting (0)

Anonymous Coward | more than 3 years ago | (#37074110)

Actually I meant log(polygons) <= c (because you budget for a fixed # of polygons), and N pixels (so N grows as O(display-resolution^2)).

Re:Ray Tracing != Ray Casting (1)

Suiggy (1544213) | more than 3 years ago | (#37074422)

Ahh, okay, gotcha, now what you originally said makes sense. There's more than one way to scale it. Generally, on a given project you have fixed budgets for pixels and polygons, but over the years, as you target each subsequent generation, you get to scale up both categories. However, scaling up the number of pixels beyond 1920x1080 or 1920x1200 doesn't really make as much sense as continuing to scale up the complexity of your geometry, so I guess my brain assumed that's what you'd want to scale up rather than the output resolution.

Re:Ray Tracing != Ray Casting (2)

NoSig (1919688) | more than 3 years ago | (#37073346)

Everything you wrote is mathematically accurate, yet the actually interesting thing is exactly how big N has to get. The mathematics of big-O notation tells you nothing about that, yet that is the crux of the matter. For example, if N has to be bigger than 10^123478234897298, the apparently better asymptotic complexity has no impact on the real world. The point is that if I tell you that one algorithm is O(1) and the other is O(2^n), you haven't actually learned anything useful about which algorithm you should use for your program; you also need to know the point at which one becomes better than the other. But if you know that, you don't need the complexity to choose which algorithm will work better for you! So big-O notation can be useful for understanding how your program behaves, but it is never by itself (!) useful for deciding between two algorithms at any size of input: it doesn't give you enough information. So don't just tell me the complexities; tell me that one method is better than another at X multiple of current performance.
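To put numbers on that point (constants invented for illustration): both comparisons below are "O(log n) vs O(n)", yet the crossover input size moves by nine orders of magnitude depending on the hidden constant.

```python
import math

# Both comparisons are "O(log n) vs O(n)"; only the hidden constant on
# the log-n side changes. The asymptotic winner is the same, but the
# input size where it starts winning is wildly different.

def crossover(c_log, c_linear=1.0):
    """Smallest power-of-two n where c_log * log2(n) < c_linear * n."""
    n = 2
    while c_log * math.log2(n) >= c_linear * n:
        n *= 2
    return n

small = crossover(c_log=10.0)  # modest constant: crossover almost at once
large = crossover(c_log=1e9)   # huge constant: crossover only near 7e10
```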

Re:Ray Tracing != Ray Casting (0)

Anonymous Coward | more than 3 years ago | (#37073832)

Thank you. I am so damn tired of people who take a list that will never hold more than 10 items and try to optimize it for >1000.
"Yes, because of the extra memory allocation it has a setup cost equivalent to sorting 100 elements, but after that it is faster!!!1!11"

Re:Ray Tracing != Ray Casting (1)

Rockoon (1252108) | more than 3 years ago | (#37073898)

Everything you wrote is mathematically accurate, yet the actually interesting thing is exactly how big N has to get.

You know that this very question has been researched, right? I am amazed that you are intent on discussing this issue without having actually done any research into the matter.

You might want to start with this 2005 paper from Intel [intel.com] where they do some performance comparison for both hardware rasterization and software raytracing for various scene complexities.

That paper in particular illustrates how close we are. We are approaching the crossover point with the number of on-screen primitives right now. GPUs have done a lot to decrease the need for more primitives since then (hacks like parallax mapping and so forth), but they are doing those things precisely because the demand for more scene detail is outpacing the ability to deliver higher primitive counts on GPUs. Piling on shaders (stream processors these days) only goes so far in delivering higher primitive counts, because their real choice is latency or bandwidth... pick only one, even though you need both.

Re:Ray Tracing != Ray Casting (1)

NoSig (1919688) | more than 3 years ago | (#37074120)

Everything you wrote is mathematically accurate, yet the actually interesting thing is exactly how big N has to get.

You know that this very question has been researched, right? I am amazed that you are intent to discuss this issue without having actually done any research in this matter.

What I'm discussing is what can and cannot be concluded from big-O asymptotic complexity. You were drawing a mathematically correct conclusion that "for big enough N, raytracing is better." You then made an incorrect further conclusion that "eventually, raytracing is better." Big-O notation never guarantees that you'll ever be able to solve an input so big that the complexity estimate becomes accurate as to which algorithm is better. You then chose to heed my advice and present data on what actually matters: which algorithm is better in practice for which inputs. Good on you, even if the quote above is a very dickish way to accept advice.

Re:Ray Tracing != Ray Casting (0)

Rockoon (1252108) | more than 3 years ago | (#37074248)

What I'm discussing is what can and cannot be concluded from big-O asymptotic complexity.

...and what I'm discussing is the reality of rendering engines as they are today and in the near future... you know, like the fucking article and interview.

Re:Ray Tracing != Ray Casting (1)

NoSig (1919688) | more than 3 years ago | (#37074734)

You don't like it when you make mistakes, do you?

Re:Ray Tracing != Ray Casting (1)

loufoque (1400831) | more than 3 years ago | (#37073124)

John Carmack has been quoted on saying that full-blown ray-tracing just isn't feasible for real-time graphics due to the poor memory access patterns involved, as casting multiple rays per pixel, with multiple recursive steps ends up touching a lot of memory in your geometry data set, which just thrashes the cache on CPU and modern/future GPU hardware alike.

It's not good for a vector processor, but it's still pretty good for a many-core processor.

Re:Ray Tracing != Ray Casting (1)

Suiggy (1544213) | more than 3 years ago | (#37073450)

You still run into the same problems whether you're using a vector processor with no branch prediction and no cache, a bunch of in-order cores with cache coherency, or full-blown out-of-order cores with cache coherency. You end up pulling in a combinatorial explosion (i.e. an exponential number) of cache lines or memory accesses per recursive ray, tied to the complexity of your scene.

People like to talk about how you can just throw more cores and distributed computing architectures at the problem to increase the performance of ray-tracing graphics. Yes, it works, but only up to a point. You can cut days, weeks, or months off of fully rendering a CG movie by building out a distributed render farm, but that doesn't mean a single shared-memory machine with enough cores will be able to do it all in real time.

It'll still be a much better use of those hardware resources if you go with algorithms which are much more orthogonal in nature when it comes to memory access patterns. Ray-casting + deferred shading should scale better than fully recursive ray-tracing, even on many-core shared-memory CPU architectures.

Re:Ray Tracing != Ray Casting (1)

loufoque (1400831) | more than 3 years ago | (#37074260)

You still run into the same problems whether you're using a vector processor with no branch prediction and no cache, a bunch of in-order cores with cache coherency, or full-blown out-of-order cores with cache coherency. You end up pulling in a combinatorial explosion (i.e. an exponential number) of cache lines or memory accesses per recursive ray, tied to the complexity of your scene.

GPUs have no cache.
(Fermi has one, but it doesn't really work, so we might as well not count it)

Re:Ray Tracing != Ray Casting (1)

Suiggy (1544213) | more than 3 years ago | (#37074336)

GPUs eventually will get working caches. AMD's next-generation GPUs are shipping with caches comparable to or better than Fermi's, and nVidia's Kepler is getting something better than Fermi's as well.

Furthermore, Intel's Larrabee, Knights Ferry, and Knights Corner HPC compute products have cache, yet feature just a bunch of simple in-order, non-speculative x86 cores.

There's a wide spectrum of what's out there, and yes, you can have vector processors with cache, there's nothing saying that you can't do that.

Re:Ray Tracing != Ray Casting (1)

loufoque (1400831) | more than 3 years ago | (#37074440)

Anyway, the point is that of course ray casting is better suited to that hardware, but a lot of ray-tracing applications, like in medical or semiconductor imaging, already benefit greatly from GPUs.
And those things actually run in real (or interactive) time.

Re:Ray Tracing != Ray Casting (0)

Anonymous Coward | more than 3 years ago | (#37074540)

Sounds to me like it's a storage problem if you're *really* thinking about the future.

If detail is complex, rendering must be complex. That's a TRUTH. Anything less is a fancy example of (quite lossy) compression.

You can pre-render a scene with a farm right? Why the hell do we need to do it real time?

A long time ago sending a rendered image was not possible due to storage. So they sent small programs on floppies or whatever that rendered to the screen. It's really compression if you think about it hard enough.

So render *every single possible rotation of these complex ass 3d worlds*, IN ADVANCE and ship Yottabytes worth of data. http://en.wikipedia.org/wiki/Yottabyte
Then render the most dynamic of things on top of that in the proper place and make minor adjustments with all that cpu power.

Thank me later!

Does anyone actually listen to him (0)

Osgeld (1900440) | more than 3 years ago | (#37072728)

outside of the Quake supernuts? Yes, he revolutionized video games by taking a then decade-old trick and using a faster PC to redraw it in real time; his company ended up making a few OK but nothing-spectacular games, and made a few engines that the competition blew away a month later.

Why after all this time do people still hang on to his every word? It's just some dude, not Jesus.

Re:Does anyone actually listen to him (0)

Anonymous Coward | more than 3 years ago | (#37072824)

Id rater listen to him than jesus, what does that tell you?

Re:Does anyone actually listen to him (0)

MichaelKristopeit418 (2018864) | more than 3 years ago | (#37073000)

Id like what you did there.

Re:Does anyone actually listen to him (1)

Grizzley9 (1407005) | more than 3 years ago | (#37073408)

Id rater listen to him than jesus, what does that tell you?

That idiots post as anonymous cowards on Slashdot.

Re:Does anyone actually listen to him (0)

Anonymous Coward | more than 3 years ago | (#37073820)

You clearly don't know my pal, Jesus. Funniest little Mexican dude you'll ever meet. Anyone who wouldn't want to listen to him is just nuts.

Re:Does anyone actually listen to him (1)

discord5 (798235) | more than 3 years ago | (#37072828)

Why after all this time do people still hang on to his every word? It's just some dude, not Jesus.

By all means, feel free to do something more noteworthy and become the new authority on the subject at hand. Until then, I think I'll continue feeding my Savior Reincarnate shrine its daily snackrifices.

Re:Does anyone actually listen to him (0)

Anonymous Coward | more than 3 years ago | (#37072840)

It's just some dude, not Jesus.

Duh, we know the difference between Carmack and Romero.

Re:Does anyone actually listen to him (1)

TehNoobTrumpet (1836716) | more than 3 years ago | (#37072852)

To be fair, Jesus was also just some dude. Possibly.

Re:Does anyone actually listen to him (0)

Anonymous Coward | more than 3 years ago | (#37073094)

To be fair, Jesus was also just some dude. Possibly.

Dude I know Jesus, he does my carpentry work.

Re:Does anyone actually listen to him (0)

Anonymous Coward | more than 3 years ago | (#37073956)

He also does my lawn work! And I heard he does my friend's pool work! That Jesus sure does get around!

Re:Does anyone actually listen to him (1)

darkwing_bmf (178021) | more than 3 years ago | (#37073186)

To be fair, Jesus was also just some dude. Possibly.

No, Jesus wasn't the dude. He was the dude's bowling rival.

Re:Does anyone actually listen to him (1)

Anonymous Coward | more than 3 years ago | (#37073966)

Shut the fuck up Donny, you're out of your element.

Re:Does anyone actually listen to him (2)

ddt (14627) | more than 3 years ago | (#37073336)

There are a lot of reasons people still follow him closely.

He's really smart, hard-working, and open with his opinions. If something blows, he doesn't sugar coat it out of fear for his relationship with the company whose product he's bagging on.

He's also fiercely empirical. More so than a lot of his competitors, he doesn't buy into his own bullshit. If he has an idea, he makes an experiment to test it, and if it doesn't cut the mustard, he adjusts his view on the subject. He reverses his opinions over time and points out when he's been wrong. You'd think that trait would be common amongst coders, what with being lovers of science and engineering in general, and you'd be wrong.

He's also open with his source code, and a lot of coders have actually learned how to write games by reading his source for Doom & Quake. In addition to being a fast coder, he writes very clean code with a very crisp style, and that code has been a really positive influence on the next generation of game programmers.

But if you're looking for Jesus, you need look no further. I was one of the first coders John hired, and if you recall, Jesus is the *son* of God, and like Jesus, I was cast out from Heaven to muck about with mortals and spread the good word. Also, unlike John but more like Jesus, I believe my own bullshit without the need to test it. Crucially, if you look up "gamer" under Google images, I'm on the first page, and you'll see conclusively that I can rock the long Jesus hair. I am also skinny, emaciated, and have been repeatedly crucified. If you choose to become a disciple, I will welcome you with open arms. I just won't respect you.

Re:Does anyone actually listen to him (2)

XanC (644172) | more than 3 years ago | (#37073570)

So how many games will the 'Horns win this year?

Re:Does anyone actually listen to him (1)

h4rr4r (612664) | more than 3 years ago | (#37073650)

Yeah, he is a real person, not some quite-likely-fictional historic figure. What exactly is your point?

Re:Does anyone actually listen to him (1)

MattSausage (940218) | more than 3 years ago | (#37073816)

Have you not read this [codemaestro.com]?

That's a pretty good reason to take him seriously.
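For anyone who hasn't clicked through: the linked article is about the fast inverse square root from the Quake III source. A lightly modernized sketch of it (int32_t and memcpy instead of the original *(long*)& pointer cast, which is undefined behaviour and breaks outright where long is 64-bit; the famous constant is unchanged):

```c
#include <stdint.h>
#include <string.h>

/* Fast approximate 1/sqrt(x) as popularized by Quake III Arena.
   Reinterpret the float's bits as an integer, shift and subtract from
   the magic constant to get a first guess, then refine with one
   Newton-Raphson step. */
float q_rsqrt(float number) {
    int32_t i;
    float y = number;
    memcpy(&i, &y, sizeof i);          /* bit-level reinterpretation  */
    i = 0x5f3759df - (i >> 1);         /* the magic constant          */
    memcpy(&y, &i, sizeof y);
    y = y * (1.5f - (number * 0.5f * y * y));  /* one Newton step     */
    return y;
}
```

One iteration gets within roughly 0.2% of the true value, which was plenty for lighting normals.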

Re:Does anyone actually listen to him (1)

Desler (1608317) | more than 3 years ago | (#37074072)

Great, but John Carmack admitted he didn't write the code and the derivation of the magic constant goes back many years prior to stuff SGI did.

Re:Does anyone actually listen to him (0)

Anonymous Coward | more than 3 years ago | (#37074208)

Oh look, it's someone who thinks it's super trendy to hate on popular figures.

People don't "hang on his every word" but he does tend to be insightful and in general worth listening to. A lot more than some random guy on Slashdot trying to act cool.

Best Yet (1)

Kid Zero (4866) | more than 3 years ago | (#37072904)

(uninspired content viewed at the molecular level is still uninspired content)

Best quote I've seen yet about this.

Too much dependence on drivers (1)

loufoque (1400831) | more than 3 years ago | (#37073138)

They rely too much on drivers; they should just attack the hardware and make their own renderer rather than relying on crappy OpenGL and DirectX...
But of course, they don't really want to invest in R&D. Game development is more of a "do something dirty quickly and then throw it away" kind of thing.

Re:Too much dependence on drivers (1)

hedwards (940851) | more than 3 years ago | (#37073394)

That would be a tremendous step backwards. You can get away with doing that if you're programming for a console, in fact that's how it used to be done. The problem is that as soon as you've got any variation at all in the hardware you very quickly start to have to code for every individual unit that you're going to support.

Need multiple resolutions? Well, you're going to have to make sure you code for them rather than handing them off to a third-party library. Does a unit have extra RAM? Well, you're going to have to adjust the code to deal with that as well.

Sure, it is much faster to run code like that, but it's also really not portable at all, and you don't get any of the benefits that come from using a library.

Re:Too much dependence on drivers (1)

SuiteSisterMary (123932) | more than 3 years ago | (#37073446)

Even a few years ago, hell, probably still, for all I know, there was the DX8 path, the DX9 path, the OpenGL-nVidia path, the OpenGL-ATI path, and so on.

Or fifteen years ago, when part of the setup was picking exactly the correct video mode (hope your monitor and card support VESA 2.0 modes) and sound card, down to IRQ and DMA settings....

Re:Too much dependence on drivers (1)

loufoque (1400831) | more than 3 years ago | (#37073568)

The problem is that as soon as you've got any variation at all in the hardware you very quickly start to have to code for every individual unit that you're going to support.

If your software is of good quality, it is generic and easily retargetable.

Also, if you just consider GPUs, the interfaces to program them (CUDA in particular) have been there for quite a few generations and will probably stay for a long time still.
And all NVIDIA (and probably also AMD) drivers use the same code, with relatively few low-level details changing depending on model.

Re:Too much dependence on drivers (1)

NoSig (1919688) | more than 3 years ago | (#37073414)

That describes the bad old days of computer graphics and sound: if a given game wasn't written for your particular hardware, too bad. It's hard to write to the hardware when there is a proliferation of distinct graphics cards out in the world and many more are added every year. On top of that, the way to talk to a given graphics card is often secret. There's a reason that people use OpenGL and DirectX.

Re:Too much dependence on drivers (1)

loufoque (1400831) | more than 3 years ago | (#37073766)

You say this as if the market wasn't essentially restricted to two vendors, with one clearly preferred by gamers.

Re:Too much dependence on drivers (1)

0123456 (636235) | more than 3 years ago | (#37073900)

You say this as if the market wasn't essentially restricted to two vendors, with one clearly preferred by gamers.

But a game written for OpenGL or Direct3D in 2001 still runs on modern hardware. A game written to write directly to 2001 hardware does not.

Writing directly to hardware without a standardised API is retarded and pretty much guarantees that in ten years time the software won't work or performance will be lousy if it does.

Re:Too much dependence on drivers (1)

loufoque (1400831) | more than 3 years ago | (#37074390)

But a game written for OpenGL or Direct3D in 2001 still runs on modern hardware. A game written to write directly to 2001 hardware does not.

This is irrelevant, since the industry does not try to make money out of old games. What they want is to make money at launch, then milk the cash cow for some time with a couple of people working on DLCs, then move on.

Also notice how your old Playstation games don't work without a Playstation (short of using an emulator). This is not a serious issue. Games are not meant to yield profits for life, so if they're limited to the lifetime of the platform it's fine.

Writing directly to hardware without a standardised API is retarded and pretty much guarantees that in ten years time the software won't work or performance will be lousy if it does.

What matters is that the game is groundbreaking and people want to buy it when it goes out.

Back when the Cell was released, it was very powerful (it still is, but to a lesser extent). Game developers could have used it instead of the graphics card to do very innovative things (and Sony actually supported this idea). They didn't, even though it was available in many homes.
Game developers just can't handle doing serious R&D in graphics. They need hardware manufacturers or middleware people to do it for them. I know for a fact there are only a couple of Cell experts in the whole of Ubisoft.

Re:Too much dependence on drivers (1)

NoSig (1919688) | more than 3 years ago | (#37073914)

They make more than one graphics card each. By the way, Intel sells more graphics hardware than anyone else; you've probably got one integrated in your computer without knowing about it. There are also many flavors of each graphics card that the big companies come out with.

Re:Too much dependence on drivers (1)

loufoque (1400831) | more than 3 years ago | (#37074510)

All flavours of NVIDIA cards run the same base OpenGL implementation, likewise for ATI/AMD.
Intel GMA cannot even start most recent games.

Re:Too much dependence on drivers (0)

Anonymous Coward | more than 3 years ago | (#37074692)

All flavours of NVIDIA cards run the same base OpenGL implementation, likewise for ATI/AMD.

Bullshit they do. Maybe within one generation, but that is not true across generations.

Re:Too much dependence on drivers (1)

NoSig (1919688) | more than 3 years ago | (#37074800)

If you listen to the interview, you'll hear John Carmack saying that built-in cards might out-perform dedicated cards for some things in the future if they grant better memory access by virtue of using system memory directly. How do you know that the hardware interface of graphics cards never changes? That doesn't sound right to me, but if you have an inside source, feel free to enlighten us.

Re:Too much dependence on drivers (1)

LWATCDR (28044) | more than 3 years ago | (#37073536)

Let me make a guess. You don't program, do you?
1. AAA games already cost a lot to make. You want to spend $200 for a game?
2. Hardware is changing fast. If you write for the hardware, what hardware do you write for? Which card? All of them? What about the cards that come out while you are spending the three years developing the game?

Now you may be confusing drivers with game engines, but even then you would be wrong, just not insane.

Re:Too much dependence on drivers (2)

loufoque (1400831) | more than 3 years ago | (#37073748)

Let me make a guess. You don't program, do you?

I write software tools for high-performance computing that work on a variety of hardware, including all variations of x86, POWER, PowerPC and Cell, ARM, GPUs, multi-core, clusters... and other more confidential architectures (many-core, VLIW, FPGA, ASICs...)
Supporting a lot of different hardware is not an insurmountable problem if you have a good design (and a good test farm).

Now you may be confusing drivers with game engines but even then you would be wrong just not insane.

What we call graphics drivers nowadays is not just the code that lets you interact with the hardware; it's an OpenGL and DirectX implementation.
With the advent of programmable GPUs, this abstraction has become much too high-level. A C compiler would be a much better one.

Re:Too much dependence on drivers (1)

RyuuzakiTetsuya (195424) | more than 3 years ago | (#37074130)

At first I thought you were insane, because writing for individual vendors was kind of crazy. However, between Intel, nVidia and ATI, you've cast a pretty wide net.

Then I realized it's even crazier to want nVidia, ATI and Intel to all make their hardware communicate the same way throughout the generations.

Re:Too much dependence on drivers (1)

loufoque (1400831) | more than 3 years ago | (#37074708)

CUDA and OpenCL should be all you need.
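For reference, the kind of code CUDA and OpenCL standardize is a flat data-parallel kernel; saxpy (y = a*x + y) is the canonical example. A plain-C reference version, with the CUDA form (as commonly published by nVidia, quoted from memory) shown in a comment for comparison:

```c
#include <stddef.h>

/* saxpy: the canonical data-parallel kernel that CUDA and OpenCL both
   express almost verbatim.  The CUDA version is usually written as:
       __global__ void saxpy(int n, float a, float *x, float *y) {
           int i = blockIdx.x * blockDim.x + threadIdx.x;
           if (i < n) y[i] = a * x[i] + y[i];
       }
   i.e. the loop disappears and each GPU thread handles one index.
   Plain-C reference implementation for comparison and testing: */
void saxpy(size_t n, float a, const float *x, float *y) {
    for (size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}
```

The kernel is identical in spirit across vendors; what differs is the launch and memory-management boilerplate around it, which is exactly what the portability argument is about.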

Re:Too much dependence on drivers (0)

Anonymous Coward | more than 3 years ago | (#37074218)

I think you have an unrealistic impression of what exactly is in a modern GPU. It's not the same as a CPU, there may in fact be multiple different processor cores, and several other blocks that have to be configured correctly, state that has to be managed between the blocks, control streams programmed, interrupt handling and so on.

Yes, it's not an insurmountable problem, but a GPU driver is a significantly complex piece of software in its own right and expecting that work to be duplicated multiple times for each product from each vendor for a single game engine is a pretty hefty ask. It'll be out of date long before it's finished.

Of course, what you could ask for is a much lower level API and therefore a leaner driver. This isn't an entirely unreasonable position, but who's going to drive this? There are also enough technical differences between the four major vendors that it's unlikely to happen any time soon.

id doesn't want R&D? (1)

SanityInAnarchy (655584) | more than 3 years ago | (#37073844)

Really?

I mean, the main criticism for id games has been that they are less games and more tech demos. Practically every game engine id ever sold has been used for much more interesting games by other people, but id still gets license fees.

If they don't invest in R&D, why are they doing their own engine design at all, and more importantly, just what are they investing in?

And even if you're right about OpenGL and DirectX being "crappy", which I highly doubt, the fact is that they are at least somewhat portable across hardware. Suppose he's right, and Intel integrated graphics suddenly win. If they went directly to the hardware, as you suggest, they'd need to port their game to Intel cards, or that game wouldn't work with those cards. We'd be back to the bad old days when you actually had to check the system requirements for your sound card and video card manufacturer.

Re:id doesn't want R&D? (1)

loufoque (1400831) | more than 3 years ago | (#37074298)

It wasn't a comment directed at id in particular, but at the game development industry in general.

We'd be back to the bad old days when you actually had to check the system requirements for your sound card and video card manufacturer.

Don't you have to do this anyway?
It's true consoles have stopped PC games from requiring new hardware, but a few years ago, you needed to replace your graphics card every other year to play new games.

Re:id doesn't want R&D? (1)

Desler (1608317) | more than 3 years ago | (#37074396)

Don't you have to do this anyway?

No. Gone are the days where you had to make sure you had a Sound Blaster 16 rather than some other card because the game was written only for that specific sound card. Games usually state which generation of video card you need from the various manufacturers, but with probably almost no exceptions, games are no longer written to a specific manufacturer's video or sound card.

Re:id doesn't want R&D? (1)

loufoque (1400831) | more than 3 years ago | (#37074488)

As I said in another subthread, it doesn't matter if you have choice since there are only two manufacturers anyway (with Nvidia being better supported by games in general).
Games usually can't even run on Intel GMA even on the lowest settings.

And if you need to replace your graphics card every two years to run new games, it's no different than buying a Sound Blaster 16 or whatever other fancy hardware a game at the time required.

Re:id doesn't want R&D? (1)

Lunix Nutcase (1092239) | more than 3 years ago | (#37074724)

Except that, unlike what you are talking about, I don't need to keep the Sound Blaster 16 around to play that old game. Because it was written to a portable API, I can play that same game on an old Sound Blaster or my new integrated sound card. This is what you seem to be missing.

Re:id doesn't want R&D? (1)

loufoque (1400831) | more than 3 years ago | (#37074772)

This is irrelevant, game developers don't care about whether you can play old games on new hardware. That's not their business model.
Again, I've already addressed this in another subthread. Go read it for more details.

Re:id doesn't want R&D? (1)

Lunix Nutcase (1092239) | more than 3 years ago | (#37075012)

Sorry, it's not irrelevant, despite what you continue to claim. Your idea means more code needs to be written, far more testing is needed, and programs are far more likely to break and have issues than with what is written now. Basically your idea is fucking stupid on pretty much all counts.

Re:id doesn't want R&D? (1)

Truekaiser (724672) | more than 3 years ago | (#37074452)

DirectX is only 'cross platform' if you count different video cards and the Xbox 360 as different platforms.
OpenGL 'is' cross-platform: you can run it on your PC, it's used on Macs, it's used in Linux, it's used in most cellphones, and it's used in most consoles (sans Xbox and Xbox 360).

Re:id doesn't want R&D? (1)

Lunix Nutcase (1092239) | more than 3 years ago | (#37074752)

DirectX is only 'cross platform' if you count different video cards and the Xbox 360 as different platforms.

I'm pretty sure that an Xbox 360 and a PC are different platforms. "Cross-platform" only means "runs on different platforms" it does not imply some minimum number of platforms. If I write something that works on both OS X and Windows it is "cross platform" even if it won't run on Linux or a cellphone.

ARM cores? (1)

SanityInAnarchy (655584) | more than 3 years ago | (#37073896)

Did anyone else catch that?

That's probably the most interesting thing he said all day. Throw an ARM core on the GPU, provide some sort of standard API for using it, and you eliminate all those pesky latency issues. Modern GPUs have enough RAM that one could potentially push the entire renderer onto the GPU, with the CPU and main memory only being responsible for game logic.

Of course, he also seems to be implying that this might go the other way, with integrated graphics winning out...

Re:ARM cores? (1)

Rockoon (1252108) | more than 3 years ago | (#37074220)

I thought it was interesting that he said he's spent the last 1.5 years working on raytracers. Carmack is very much a research guy now, not a developer. He pays others to develop.