Ray Tracing for Gaming Explored

Soulskill posted more than 6 years ago | from the pretty-pictures dept.

Vigile brings us a follow-up to a discussion we had recently about efforts to make ray tracing a reality for video games. Daniel Pohl, a research scientist at Intel, takes us through the nuts and bolts of how ray tracing works, and he talks about how games such as Portal can benefit from this technology. Pohl also touches on the difficulty in mixing ray tracing with current methods of rendering. Quoting: "How will ray tracing for games hit the market? Many people expect it to be a smooth transition - raster only to raster plus ray tracing combined, transitioning to completely ray traced eventually. They think that in the early stages, most of the image would be still rasterized and ray tracing would be used sparingly, only in some small areas such as on a reflecting sphere. It is a nice thought and reflects what has happened so far in the development of graphics cards. The only problem is: Technically it makes no sense."

Adaptive techniques: make or break (4, Interesting)

MessyBlob (1191033) | more than 6 years ago | (#22091858)

Adaptive rendering would seem to be the way forward. Ray tracing has the advantage that you can bail out when it gets complicated, or render areas to the desired resolution. This means a developer can prioritise certain regions of the scene and ignore others: useful during scenes of fast motion, or to bring detail to stillness. The result is similar to a decoded video stream, with detail in the areas that are usefully perceived as detailed. Combining this with eye position sensing (for a single user) would improve the experience.
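
A minimal sketch of the per-region ray-budget idea described above, in Python. The trace() placeholder, the priority-region list and the sample counts are all made up for illustration and not taken from any engine:

    # Sketch: spend extra rays only in regions flagged as high priority.
    def trace(x, y):
        # Placeholder "ray tracer": a checkerboard, so the sketch runs on its own.
        return (255, 255, 255) if (int(x) + int(y)) % 2 == 0 else (0, 0, 0)

    def render(width, height, priority_regions, hi_samples=4):
        # priority_regions: list of (x0, y0, x1, y1) rectangles that get extra rays.
        image = []
        for y in range(height):
            row = []
            for x in range(width):
                important = any(x0 <= x < x1 and y0 <= y < y1
                                for (x0, y0, x1, y1) in priority_regions)
                samples = hi_samples if important else 1
                acc = (0, 0, 0)
                for s in range(samples):
                    # Offset sub-pixel samples only where the budget allows it.
                    c = trace(x + (s + 0.5) / samples, y + (s + 0.5) / samples)
                    acc = tuple(a + v for a, v in zip(acc, c))
                row.append(tuple(a // samples for a in acc))
            image.append(row)
        return image

    img = render(64, 64, priority_regions=[(20, 20, 44, 44)])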

Re:Adaptive techniques: make or break (0)

Anonymous Coward | more than 6 years ago | (#22091962)

This means a developer can prioritise certain regions of the scene and ignore others

Sure, after calculating all of the rays and then figuring out which will reach the "prioritized" part of the scene, assuming that there is a shiny sphere or whatever visible. Now do you see why that makes no sense? It might work to "bring detail to stillness", though: if things aren't moving, you won't notice the framerate hit.

Re:Adaptive techniques: make or break (2)

MessyBlob (1191033) | more than 6 years ago | (#22091988)

> ...after calculating all the rays

Not necessary :o) Presumably, a game will have knowledge of its moving objects, and a quick calculation would create a region in the 2D viewing plane.
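
A rough sketch of that "quick calculation": project the corners of a moving object's 3D bounding box through a pinhole camera and take the 2D extents. The perspective_project helper and the focal length are hypothetical, purely for illustration:

    # Sketch: project a 3D axis-aligned bounding box into a 2D screen rectangle,
    # which can then be used to prioritise rays.
    from itertools import product

    def perspective_project(point, focal=1.0):
        x, y, z = point
        return (focal * x / z, focal * y / z)   # assumes z > 0 (in front of camera)

    def screen_region(bbox_min, bbox_max):
        corners = product(*zip(bbox_min, bbox_max))   # the 8 corners of the box
        pts = [perspective_project(c) for c in corners]
        xs, ys = zip(*pts)
        return (min(xs), min(ys), max(xs), max(ys))   # 2D rect covering the object

    print(screen_region((-1.0, -1.0, 4.0), (1.0, 1.0, 6.0)))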

Re:Adaptive techniques: make or break (1, Insightful)

Anonymous Coward | more than 6 years ago | (#22092856)

Why not use the old "ray-casting techniques" to take the light backwards from the camera? This would allow you to easily limit which areas of the screen receive a full ray-traced treatment. You would need to include reflective surfaces as potential light sources for your back trace, but I don't see why it wouldn't be effective.

Re:Adaptive techniques: make or break (-1, Troll)

Anonymous Coward | more than 6 years ago | (#22091966)

Who cares? Most of today's games are written in proprietary development environments like Visual Studio, so they are poor by definition. Once open-source gaming catches on, then you'll have thousands of people contributing to the code. Linux hackers are far better coders than most people who use Visual Studio (most of them can only use Visual Basic and C# anyway) so it's likely they will add ray-tracing renderers that are far faster, more efficient and which produce better results.

Re:Adaptive techniques: make or break (5, Insightful)

morgan_greywolf (835522) | more than 6 years ago | (#22092062)

Linux hackers are far better coders than most people who use Visual Studio
Um, those two groups aren't mutually exclusive. Many of us *nix hackers also have day jobs that require us to use tools like Visual Studio. You make assumptions that aren't true.

Re:Adaptive techniques: make or break (-1, Flamebait)

GNUALMAFUERTE (697061) | more than 6 years ago | (#22092310)

Yes, they are. If you get to use Visual Studio at work, it's because you are not good enough to be hired as a Unix coder.

Re:Adaptive techniques: make or break (3, Insightful)

DeeQ (1194763) | more than 6 years ago | (#22092340)

This is by far the dumbest statement I've ever read. Somebody has never tried to get a job in the field before, eh?

Re:Adaptive techniques: make or break (3, Insightful)

morgan_greywolf (835522) | more than 6 years ago | (#22092398)

Yes, they are. If you get to use Visual Studio at work, it's because you are not good enough to be hired as a Unix coder.
Um, yeah, right. There are far and away more Windows development positions than there are Unix development positions. And most enterprise apps these days aren't exclusively one or the other -- they are cross-platform and mixed-language. So even a Unix developer might spend at least some time with Visual Studio.

Re:Adaptive techniques: make or break (2, Insightful)

Firehed (942385) | more than 6 years ago | (#22093322)

Don't remind me. Exactly how a text editor can load slower than Photoshop will perplex me until the end of time.

Re:Adaptive techniques: make or break (1, Insightful)

Anonymous Coward | more than 6 years ago | (#22092076)

HAHAHAHA OPEN SOURCE GAMING? Get out of your dream world. Clearly you are not too knowledgeable about games, my good sir.

Re:Adaptive techniques: make or break (0)

Anonymous Coward | more than 6 years ago | (#22092384)

Linux hackers are far better coders than most people who use Visual Studio (most of them can only use Visual Basic and C# anyway) so it's likely they will add ray-tracing renderers that are far faster, more efficient and which produce better results.
Wtf?! Are you some kind of linux hacker groupie? That's just...eww....

"How will ray tracing for games hit the market?" (5, Funny)

bobdotorg (598873) | more than 6 years ago | (#22091860)

That completely depends on your point of view.

Now hear this (4, Insightful)

suso (153703) | more than 6 years ago | (#22092030)

I get tired of hearing this talk about real-time ray tracing. They might be able to get basic ray tracing at 15 frames per second or more. But it won't matter: the quality won't be as good as the high-quality images you see that take hours, sometimes days, to render.

See, the two are incompatible because the purpose is different. With games, the idea is "How realistic can we make something look at a generated rate of 30 frames per second". But with photorealistic rendering the idea is "How realistic can we make something look, regardless of the time it takes to render one frame."

And as time goes on and processors become faster and faster, the bar for what people want keeps rising. Things like radiosity, fluid simulations and more become more expected and less possible to do in real time. So don't ever count on games looking like those still images that take hours to make. Maybe they could make it look like the pictures from 15-20 years ago. But who cares about that? Real-time game texturing already looks better than that.

already done with Quake (2, Interesting)

RMH101 (636144) | more than 6 years ago | (#22092088)

Re:already done with Quake (3, Interesting)

mdwh2 (535323) | more than 6 years ago | (#22092642)

It says that a quad core processor gets 16.9 frames at 256x256 resolution.

Wow.

(Like most ray tracing advocates, he points out that ray tracing is "perfect for parallelization", but this ignores the fact that so is standard 3D rendering - graphics cards have been taking advantage of this parallelisation for years.)

Re:already done with Quake (3, Informative)

quantumRage (1122013) | more than 6 years ago | (#22092928)

Well, if you look more closely, you'll notice that the articles have the same author!

Re:Now hear this (5, Insightful)

IceCreamGuy (904648) | more than 6 years ago | (#22092888)

from TFA:

At HD resolution we were able to achieve a frame rate of about 90 frames per second on a Dual-X5365 machine, utilizing all 8 cores of that system for rendering.
The quote is referring to Quake 4. So they can already raytrace a semi-modern game at 90 FPS, and they have a graph that very clearly shows raytracing at a performance advantage as complexity increases. Just look at the damn graph (page three): the point where raster performance and raytracing performance intersect can't be more than a couple of years off, and it's apparent that we may even have crossed that point already. Keep on being tired of hearing about raytracing; the rest of us will sit patiently as the technology comes of age. Personally, I'm tired of hearing about this HD stuff. I mean, it's not like HD TVs will ever be mainstream, with their huge price tags and short lifespans. Oh wait...

Re:Now hear this (1)

jibster (223164) | more than 6 years ago | (#22093132)

I wish I had mod points to give you; this is the post I wanted to make. Forget the arguments about why you think RTRT will never happen; it's a mathematical certainty that it will.

Re:Now hear this (4, Insightful)

suso (153703) | more than 6 years ago | (#22093166)

The quote is referring to Quake 4. So they already can raytrace a semi-modern game at 90 FPS, and they have a graph that very clearly shows raytracing at a performance advantage as complexity increases. Just look at the damn graph (page three),

I don't have to look at the damn graph to tell you that what people are going to want is this [blenderartists.org]

And what they are going to get is this [pcper.com]

And they should just be happy with this [computergames.ro] (which is pretty awesome)

My point is that real time photorealistic rendering will never catch up with what people expect from their games. It will always be behind. If all you want is mirrors, then find a faster way to implement them at the expense of a bit of quality.

Re:Now hear this (1)

cheater512 (783349) | more than 6 years ago | (#22093546)

Where are my mod points when I need them?

If you're going to do it, you might as well do it properly.
It can't be done, so don't worry about it.

Pun Intended (1, Funny)

Wanado (908085) | more than 6 years ago | (#22092464)

It is a nice thought and reflects what has happened so far in the development of graphics cards.
I've been reflecting on this subject as well.

This isn't what we need in games (5, Insightful)

Lurks (526137) | more than 6 years ago | (#22091888)

I guess one has to state the obvious: by moving to a process which is not implemented in silicon, as with current graphics cards, the work must necessarily be done in software. That means it runs on CPUs, and that's something Intel is involved in; whereas when you look at the computational share of bringing a game to your senses right now, NVIDIA and ATI/AMD are far more likely to be providing the horsepower than Intel.

But really, even if this wasn't a vested-interest case (and it may not be; no harm exploring it, after all), the fact remains that we don't actually need this for games. Graphics hardware has gone down an entirely different route whereby you write little shader programs which create surface visual effects on top of the bread-and-butter polygons and textures. This is a well-established system by now and has a naturally compressive effect. It's like making all your visual effects procedural in nature rather than giving objects simple real-world textures and then doing a load of crazy maths to simulate reality. It works very well. Remember, a lot of the time you want things to look fantastical and not ultra-realistic, so lighting is only part of the challenge.

Games aren't having a problem looking great. They're having a problem looking great and doing it fast enough, and game developers are having a problem creating the content to fill these luscious, realistic-looking worlds. That's actually what would be more useful, really: ways to help game developers create content in parallel, rather than throwing out the current rendering strategy adopted worldwide by the games industry.

Re:This isn't what we need in games (2, Interesting)

BlueMonk (101716) | more than 6 years ago | (#22091992)

I think the problem with the current system, however, is that you have to be a professional 3D game developer with years of study and experience to understand how it all works. If you could define scenes in the same terms that ray tracers accept scene definitions, the complexity might be taken down a notch, making quality 3D game development a little more accessible and easier to deal with, even if it doesn't provide technical advantages.

Re:This isn't what we need in games (3, Interesting)

CannedTurkey (920516) | more than 6 years ago | (#22092096)

I think the problem with the current system is that it scales horribly. Right now we're barely pushing 2 Megapixel displays with all those shader effects turned on. If the Japanese have their way we'll have 33MP displays in only 7 years - because this is the 'broadcast standard' they're shooting for. Can they double the performance of the current tech every 2 years to eventually meet that? I have my doubts.
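
A quick back-of-the-envelope check of that doubt, using the poster's own numbers (roughly 2 MP today, 33 MP in 7 years, throughput doubling every 2 years; these are assumptions, not measurements):

    # Can doubling raster throughput every 2 years keep up with a jump
    # from ~2 MP to ~33 MP displays in 7 years? (Poster's assumptions.)
    needed = 33 / 2                  # ~16.5x more pixels to fill
    available = 2 ** (7 / 2)         # doubling every 2 years for 7 years ~= 11.3x
    print(f"needed {needed:.1f}x, projected {available:.1f}x")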

Re:This isn't what we need in games (-1)

babbling (952366) | more than 6 years ago | (#22092280)

Ray-tracing scales like n-squared.

Re:This isn't what we need in games (1)

cnettel (836611) | more than 6 years ago | (#22092326)

n being what? The beauty of it is that it scales quite nicely with respect to the number of pixels. The problem is the horrible non-locality when rays are literally bouncing around the scene.

Re:This isn't what we need in games (1)

Lurks (526137) | more than 6 years ago | (#22092434)

"I think the problem with the current system is that it scales horribly."

On the contrary, it scales very well, actually. You can simply use more pixel pipelines to do things in parallel and do fewer passes (which is what you see in most graphics hardware, including the current consoles), or you can render alternate lines, or chunks of the display, etc., across multiple entire chunks of hardware such as SLI/Crossfire on the PC.

The problem you describe is essentially that any complicated visual processing at very high resolutions requires a massive amount of processing power, and that's expensive to implement. It's possible, it's just expensive. There's just no way in hell the massively, massively more computationally expensive technique of raytracing is going to improve upon things. I game on a PC at 1920x1200 predominantly, with a very high-end graphics part with large amounts of dedicated memory. That costs a lot more than the hardware that's in a console, but it's actually the same type of graphics part that's in the 360/PS3, just a higher-spec part with more parallel bits, etc.

There's also a bit of light at the end of the tunnel for the first time with this stuff. A really good PC graphics card (the 8800GT) is now available for money which people would actually consider spending. Sadly this is a bit too little too late for the PC as a gaming platform but it does show the march of technology quite well.

By the next console cycle, they'll most likely be pitched at whatever NVIDIA releases in fall 2009. With current performance increases, I'd say a ballpark is being able to render at 1920x1200 comfortably with AA and with fewer performance concerns than the current generation in terms of on-screen content. No, it won't be 33MP displays, but honestly, early adopters aside, it's still going to be a long time before 1920x1200 displays can be considered the norm for the living room.

And I think you're into diminishing returns after that sort of resolution anyway. Better to remove some of the on-screen limitations due to console architectures (of which there are quite a lot, mostly due to lack of memory and bandwidth) rather than arms-race the resolution issue alone.

Re:This isn't what we need in games (1)

Squalish (542159) | more than 6 years ago | (#22092912)

The Japanese standard is, to put it bluntly, complete BS.

We MIGHT have the technical capability to encode 4K video in 4:4:4 by 2015 in realtime with upper-pro-level gear - it's a stretch. We won't see cameras like the Red One standardize in movie studios until well after this decade, much less television studios. 33 megapixels for a broadcast standard is ludicrous - and will be impossible even for the highest end cinema to implement in 7 years.

I'd settle for a solid, end-to-end 1080p60 in VC-1 as a broadcast standard - it's at the upper end of the picture quality an average person can distinguish, it's barely doable with today's hardware and bandwidth, and it's a big step above the current incredibly messy process (which usually includes reencoding in lower resolutions or horrible bitrates, interlacing, and viewing in non-native resolutions).

Re:This isn't what we need in games (1)

CannedTurkey (920516) | more than 6 years ago | (#22093620)

Impossible is a figment of your imagination.

Re:This isn't what we need in games (1)

Josef Meixner (1020161) | more than 6 years ago | (#22092638)

And why do you think that is any different with raytracing? Model geometry is very similar, and I doubt that you will understand the math behind the BRDF (Bidirectional Reflectance Distribution Function, a way to describe surface characteristics) or scattering in participating media without some time to study it. In fact, they are so similar that OpenRT [openrt.de] is a raytracer with an interface quite similar to OpenGL. The shader system is completely different, though, as it wouldn't make sense to limit it to GPU shaders without hardware support.

Raytracers handle big geometry quite well, but if you think you can just throw everything at them in a naive way, you will get a bad surprise in the form of terrible speed. I don't see that modeling would be much easier: you don't have to model low-poly (which is indeed hard), but because of the way raytracers work you often can't get away with a texture and need actual geometry instead. Also, the textures are just as much work, writing efficient shaders is not a lot different, and the complete game logic is the same.

Re:This isn't what we need in games (3, Insightful)

darthflo (1095225) | more than 6 years ago | (#22092144)

Keep in mind recent parallelization advances. According to TFA, raytracing performance scales almost linearly with the number of processors (a factor of 15.2 for four quad-core machines connected via GigE, versus a single core); neither Crossfire nor SLI scales remotely that well.
If the parallelization trend continues the way it's progressing now, many-core CPUs are likely to arrive before 2010. Also, both AMD and Intel appear to be taking steps in the direction of enthusiast-grade multi-socket systems, increasing the average number of cores once again. Assuming raytracing can be parallelized as well as TFA makes it sound, rendering could just return to the CPUs. I'm no expert, but it does sound kinda nice.
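
A toy illustration of why ray tracing splits so cleanly across cores: each worker gets its own scanlines and never needs another worker's results. The per-row "trace" here is a stand-in for a real renderer, not taken from any engine:

    # Sketch: embarrassingly parallel rendering by splitting scanlines
    # across processes.
    from multiprocessing import Pool

    WIDTH, HEIGHT = 320, 240

    def trace_row(y):
        # Each row is independent: no shared mutable state between workers.
        return [((x * y) % 256, 0, 0) for x in range(WIDTH)]

    if __name__ == "__main__":
        with Pool() as pool:                      # one worker per core by default
            image = pool.map(trace_row, range(HEIGHT))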

Re:This isn't what we need in games (1)

ddoctor (977173) | more than 6 years ago | (#22092290)

Assuming raytracing can be parallelized as well as TFA makes it sound, rendering could just return to the CPUs

GPUs are far more parallelized than CPUs - in that sense, it makes more sense to offload it to the GPU. However, CPU parallelization is increasing, so you never know. With things like GPGPU, the line between what's done on CPU and GPU is blurring.

Not the GPU. (1)

DrYak (748999) | more than 6 years ago | (#22092826)

GPUs are far more parallelized than CPUs - in that sense, it makes more sense to offload it to the GPU.


Not that much, because GPUs achieve ultra-high parallelism using SIMD (single instruction, multiple data) mechanisms, and those aren't very efficient with highly divergent code paths.

i.e.:
- In traditional triangle rasterisation, a lot of pixels are calculated at the same time for the same triangle. Thus a lot of threads will be running the exact same shader code on the GPU, and SIMD gives a nice increase in performance.
- In ray tracing, all pixels (all rays) start at the same point. But as rendering advances (as the rays travel away from the camera), they start to take different paths. In the end, not all rays have taken the same path and executed the same shaders at the same time.

Thus you won't get as much improvement from a GPU as you would expect compared to other algorithms.

Cell's SPUs, or other chips with several units that can take diverging paths (such as the latest generation of SPARC Niagara chips, which don't share a single math core across all 32 threads), would probably perform well.
On the other hand, modern GPUs are starting to have a large number of separate SIMD processors, so even if they couldn't take full advantage of their SIMD capability (as explained above), they could still achieve good performance by running one thread per multiprocessor.

Re:This isn't what we need in games (2, Informative)

mdwh2 (535323) | more than 6 years ago | (#22092606)

Keep in mind recent parallelization advances. According to TFA, raytracing performance scales almost linearly with the number of processors

Yes, and standard rasterisation methods are embarrassingly parallel, too. As the other reply points out, we already have parallel processors in the form of GPUs. So I don't see that either multicore CPUs or the fact that raytracing is easily parallelised is going to make raytracing suddenly catch up.

What we might see, perhaps, is that one day processors are fast enough that people are willing to take the performance hit to get better effects from ray tracing (though even then, I hear that even non-real-time realistic 3D rendering often doesn't use ray tracing these days?)

Re:This isn't what we need in games (1)

SharpFang (651121) | more than 6 years ago | (#22092842)

The little problem is that power usage scales almost linearly with the number of cores, and so does the price to a degree.

2010 sounds realistic for top-shelf equipment for the chosen few. 2020 looks more realistic for consumer-grade electronics.

Re:This isn't what we need in games (1)

crhylove (205956) | more than 6 years ago | (#22093216)

Remember also, that right now a Core 2 duo has 291 transistors. Eventually, with the advent of nanotechnology and other heat efficient technologies that make our current stuff like tinker toys, we may have 291 cores on a cpu. At that point, if there is an advantage to ray tracing (which I believe there are many), we'll be doing it. I'm interested to see physics advancing more in games for the next few years. I think there is a lot more improvement to be done in physics than in graphics, also hugely benefiting by multicore.

In 1946 Eniac had approximately 175,000 transistors. In 2007 a core 2 duo had 291 million. So that's a ratio of:

175,000:291,000,000 or

7:12,000
(I'm ball parking in my head, leave me alone).

That happened in 60 years. So let's extrapolate 60 years.... I'm sure that there will be some derivative chips being sold THIS YEAR that have 8 cores, which is even more than 7 (if there aren't already, I'm not a chip fab engineer, obviously), so in 60 years I predict we will have 12,000 core chips.

Now, imagine a beowulf cluster of those!!!

I'll call it O'Drinnan's law:
The increase of cores popularly used in silicon chips will be similar to the expansion of transistors in the previous age. Probably even 10 to 20 times faster. Let's wait and see about nanotech....

Idiot! (1)

crhylove (205956) | more than 6 years ago | (#22093336)

I left out the word million several times in my initial statement. DAMN YOU NO EDITING!

rhY

Re:This isn't what we need in games (1, Insightful)

Anonymous Coward | more than 6 years ago | (#22092148)

Ray-tracing uses shaders too but in a more natural way. They define how an object is shaded, how light reflects on an object. That's what shaders in rasterizers are used for too, but they also use them to emulate a lot of effects that come naturally in ray-tracing. Shadows, reflections, lighting, ambient lighting...

Ray-tracing versus rasterizing has little to do with fantastical versus ultra-realistic. All CGI in movies is rendered with ray-tracing (Pixar's RenderMan is technically a rasterizer, though not like the one used in games). Cars, Nemo, ... don't look realistic; they look fantastical.

Ray-tracing gives the designers much more freedom. It's better in every way, except for speed.

Re:This isn't what we need in games (1)

should_be_linear (779431) | more than 6 years ago | (#22092174)

It is hard to predict what creative artists' heads are capable of, given the possibilities of ray tracing. In the beginning it will not bring much improvement over current state-of-the-art graphics, just like graphics in late Amiga 2D games looked much better than the first 3D textured games (few polygons, low-res textures). With ray tracing technology, the complexity of a scene (number of polygons) can be increased by several orders of magnitude. This can especially improve games like flight simulators, where you can see millions of polygons "at once": think of looking at multiple cities at once from 5000 meters up. Ray tracing is an (almost) infinitely parallel process. If a 100-core CPU existed, it would be easy to utilize all those cores to render a ray-traced scene 100 times faster than 1 core is able to do.

Re:This isn't what we need in games (1)

mdwh2 (535323) | more than 6 years ago | (#22092824)

If a 100-core CPU existed, it would be easy to utilize all those cores to render a ray-traced scene 100 times faster than 1 core is able to do.

And if a 100-core processor existed, we could use those to render rasterised graphics 100 times faster. Will 128 [wikipedia.org] do?

(I don't disagree with the rest of your post, but there is no special advantage with multicore processors and ray tracing.)

Re:This isn't what we need in games (1)

kestasjk (933987) | more than 6 years ago | (#22092344)

As I understand it from the article as the number of polygons goes up ray tracing becomes more and more attractive, and that's the major draw. Apparently there'll come a point where scenes are complex enough that it is more efficient to use ray tracing.

He also hinted that ray tracing could make collision detection much better, so that you don't get hands/guns sticking through thin walls/doors, which would also be good.

But hey, I'm not rooting for one or the other; game devs will use whatever is best and I'll sit back and enjoy..

Now to click submit before I open that 85MB PNG from the article..

Re:This isn't what we need in games (1)

Josef Meixner (1020161) | more than 6 years ago | (#22092518)

Graphics hardware has gone down an entirely different route whereby you write little shader programs which create surface visual effects on top of the bread and butter polygons and textures. This is a well established system by now and has a naturally compressive effect. It's like making all your visual effects procedural in nature rather than giving objects simple real-world textures and then doing a load of crazy maths to simulate reality.

... a way which was pioneered by the Reyes renderer (if I am not mistaken) and is standard in any contemporary rendering system. Actually, at that level there isn't a lot of difference between raytracing and rasterization. Raytracers often have more freedom in the shaders because, for example, shadows interact very elegantly with this system. A rasterizer needs to integrate shadow calculations (e.g. shadow volumes) into the shader code; a raytracer just shoots a shadow ray and then "knows" which light source has an effect and which one doesn't (a minimal sketch of that test follows this comment).

Ways to aid game developers create content in parallel rather than throwing out the current rendering strategy adopted world wide by the games industry.

There is one of the advantages of raytracers: you can use incredibly fine detail, as the performance doesn't degrade as fast with many objects and with detailed objects, and there are ways to handle even many light sources (one approach, called "Lightcuts", handles on the order of 100k lights without degrading too badly). It is even possible to delay the loading of objects, represent them only as a bounding box, and load them only when that bounding volume is hit. A rasterizer can't take that nearly as far.
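
A minimal sketch of the shadow-ray test mentioned above: from a shading point, ask whether anything blocks the path to a light. The hard-coded sphere occluder and the helper names are purely illustrative:

    # Sketch of a shadow ray: if nothing blocks the path to a light,
    # that light contributes; otherwise it doesn't.
    import math

    def ray_hits_sphere(origin, direction, center, radius, max_t):
        # Solve |origin + t*direction - center|^2 = radius^2 for t in (0, max_t).
        oc = [o - c for o, c in zip(origin, center)]
        b = 2.0 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4.0 * c                  # direction assumed unit length
        if disc < 0:
            return False
        t = (-b - math.sqrt(disc)) / 2.0
        return 1e-4 < t < max_t                 # epsilon avoids self-shadowing

    def light_is_visible(point, light_pos, occluders):
        to_light = [l - p for l, p in zip(light_pos, point)]
        dist = math.sqrt(sum(v * v for v in to_light))
        direction = [v / dist for v in to_light]
        return not any(ray_hits_sphere(point, direction, c, r, dist)
                       for (c, r) in occluders)

    occluders = [((0.0, 1.0, 0.0), 0.5)]
    print(light_is_visible((0.0, 0.0, 0.0), (0.0, 3.0, 0.0), occluders))  # False: blocked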

Hardware product dependence not good (2, Insightful)

dazedNconfuzed (154242) | more than 6 years ago | (#22092654)

the fact remains that we don't actually need this for games.

Your post is heavily dependent on the availability of suitable hardware. Software can be ported and recompiled to new platforms, but hardware-dependent software has a short lifespan precisely because the hardware it needs doesn't stay obtainable for particularly long. There are a lot of otherwise good, enjoyable games which are unplayable now because they depended on the early Voodoo cards or other unrepeated graphics hardware. Now, with CPU power ramping back up (relatively speaking), we can pull away from lifespan-limiting dependence on dedicated graphics hardware and move the full rendering process back to generic CPU hardware.

Torques me off when I want to play something but can't - not because I don't have the horsepower, but because it requires obsolete cards.

BTW: The article notes that RT vastly outperforms raster on very high poly count scenes - that alone is reason to switch.

Raytracing scales up far better... (1)

argent (18001) | more than 6 years ago | (#22092846)

Raytracing scales up far better than rasterization. Adding triangles to a raytraced scene has far less effect on it than adding them to a rasterized scene, because you don't have to render anything that's not actually part of the visible scene... you don't have to run what is effectively a separate rendering pass to eliminate hidden parts of the scene, and the processing for each intersection is much simpler, so you can fit thousands of dedicated raytracing processors in a hardware raytracer where the same transistor budget might support 8 or 12 rendering pipelines.

Look at what they were doing in 2005, with 1% of the gates, 1% of the memory bandwidth, and 15% of the core clock speed of today's GPUs: http://graphics.cs.uni-sb.de/SaarCOR/ [uni-sb.de]

Re:This isn't what we need in games (1)

MobyDisk (75490) | more than 6 years ago | (#22093136)

But it is much harder to do certain effects this way. Ray tracing gives per-pixel lighting, shadowing, reflections, and radiosity essentially for free, with minimal or no added work on the part of the developer. To do the same on today's cards, the programmer must master a whole series of mathematical and psychological tricks and shortcuts to fake the appearance of those same things, then implement them in some funky assembly language for the video card.
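
A hedged toy sketch of the "reflections come almost for free" point: in a ray tracer a reflection is just a recursive call with a mirrored direction. The intersect() placeholder, the mirror plane and the shading constants are all made up for illustration:

    # Sketch: reflections in a ray tracer are recursion on the trace function.
    def intersect(origin, direction):
        # Placeholder scene: a mirror plane at y = 0 facing up.
        if direction[1] < 0:
            t = -origin[1] / direction[1]
            hit = [o + t * d for o, d in zip(origin, direction)]
            return hit, (0.0, 1.0, 0.0), 0.8      # hit point, normal, reflectivity
        return None

    def reflect(d, n):
        k = 2.0 * sum(a * b for a, b in zip(d, n))
        return [a - k * b for a, b in zip(d, n)]

    def trace(origin, direction, depth=0, max_depth=3):
        hit = intersect(origin, direction)
        if hit is None or depth >= max_depth:
            return 0.2                              # "sky" brightness
        point, normal, reflectivity = hit
        local = 0.5                                 # stand-in for local shading
        bounced = trace(point, reflect(direction, normal), depth + 1)
        return (1 - reflectivity) * local + reflectivity * bounced

    print(trace((0.0, 1.0, 0.0), (0.0, -0.707, 0.707)))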

Creation requirements may be less (1)

PIPBoy3000 (619296) | more than 6 years ago | (#22093592)

One of the problems with the current techniques is that programmers have to write shaders and artists have to make normal maps, high- and low-poly versions of models, and so on. It may be that raytracing reduces some of the need to do this work, thus making games cheaper and faster to create.

Also of interest ... (1)

foobsr (693224) | more than 6 years ago | (#22091928)

There is a project out there, 'The OpenRT Real-Time Ray-Tracing Project' [openrt.de] (not so open despite the name, but noncommercial code is available), and presumably Blender [blender.org] should be there soon [slashdot.org].

CC.

Re:Also of interest ... (1)

Yetihehe (971185) | more than 6 years ago | (#22092238)

Its name is OpenRT just because it's modeled after OpenGL. It has nothing open in it, only the name.

Wow. (2, Insightful)

SCHecklerX (229973) | more than 6 years ago | (#22091958)

I remember some scenes I created with PoV sometimes taking several hours for a single frame to complete. Now we're looking at doing it in real time. Amazing.

Further Reading (5, Interesting)

moongha (179616) | more than 6 years ago | (#22091970)

... on the subject, from someone who doesn't have a vested interest in seeing real-time ray tracing in games become a reality.

http://realtimecollisiondetection.net/blog/?p=38 [realtimeco...ection.net]

Ray tracing is so wrong... (0, Troll)

Oscaro (153645) | more than 6 years ago | (#22092004)

I keep finding people who think that ray tracing is some kind of "perfect" rendering algorithm.

Actually, I think of ray tracing as the bubble sort of computer graphics. It is absurdly naive, completely inefficient, and totally unnecessary, especially for real-time graphics. There are lots of different approaches to rendering that are much more flexible, efficient, and feasible.

Re:Ray tracing is so wrong... (0)

Anonymous Coward | more than 6 years ago | (#22092044)

Such as? Please...bestow your knowledge upon us, oh Great One.

Re:Ray tracing is so wrong... (1)

SharpFang (651121) | more than 6 years ago | (#22092948)

There was this project on /. with 'photon tracking' image generation. It emulated the behavior of individual photons: their route and properties from the light source to the camera. The images got nice effects: a rainbow in a prism, realistic reflection in a mirror-like sphere, light filtering through colorful glass, etc. The problem was that creating the image took thousands of computers and months of work, and the result was still rather 'grainy', like high-sensitivity film. Meaning that given enough computational power you'd get 100% accurate, perfectly made images, but we're still far from having 'enough' computational power.

Re:Ray tracing is so wrong... (1)

darthflo (1095225) | more than 6 years ago | (#22092190)

Like?
(I'm genuinely interested. Got some links for further reading?)

Re:Ray tracing is so wrong... (1)

tomandlu (977230) | more than 6 years ago | (#22092274)

Do you actually know what you are talking about?

If so, feel free to expand, otherwise I can't help but feel that you're missing the point.

Yes, plain vanilla raytracing has many inefficiencies, but so would a car without gears. All current raytracers implement various strategies for speeding up rendering (bounding boxes being the most obvious).

As others have said (and as partially covered in TFA), the advantage of raytracing is that it simulates a real-world process (albeit ass-backwards - i.e. light is traced from the viewpoint to the light source(s), rather than vice versa), and therefore many effects that are buggy, difficult or impossible for traditional graphics engines are simplified.
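
A minimal sketch of that "backwards" direction: primary rays start at the eye, one per pixel, and head into the scene; lights only enter the picture once a ray hits something. The field of view and camera placement below are arbitrary, purely for illustration:

    # Sketch: generate one eye ray per pixel (the "backwards" direction).
    import math

    def primary_ray(px, py, width, height, fov_degrees=60.0):
        # Map the pixel to a point on an image plane one unit in front of the eye.
        aspect = width / height
        scale = math.tan(math.radians(fov_degrees) / 2.0)
        x = (2.0 * (px + 0.5) / width - 1.0) * aspect * scale
        y = (1.0 - 2.0 * (py + 0.5) / height) * scale
        length = math.sqrt(x * x + y * y + 1.0)
        origin = (0.0, 0.0, 0.0)                     # eye at the origin
        direction = (x / length, y / length, 1.0 / length)
        return origin, direction

    print(primary_ray(320, 240, 640, 480))           # centre pixel looks straight ahead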

AND...it doesn't produce realistic images! (2, Interesting)

Joce640k (829181) | more than 6 years ago | (#22092304)

Ray tracing looks hyper real on scenes full of plastic and mirrors but it's useless for rendering real-world scenes where radiosity effects dominate.

Re:AND...it doesn't produce realistic images! (1)

tomandlu (977230) | more than 6 years ago | (#22092468)

Actually, I was wondering about this. Radiosity isn't a huge problem for raytracing - correction, it is a huge problem for raytracing, so raytracers that implement radiosity generally use a separate process.

Does rasterisation get radiosity "for free" or is it yet another process that has to be implemented?

If the latter, then this can't really count against raytracing.

I guess you could use it for shadows... (1)

Joce640k (829181) | more than 6 years ago | (#22092568)

Ray tracing solves two problems well - shadows and reflections.

A hybrid renderer might produce slightly better shadows than we have today but we still need orders of magnitude more power before it happens. Right now we're pushing the limit of graphics cards without ray tracing. Adding ray tracing at each pixel will make your pixel shaders hundreds of times slower.

Re:I guess you could use it for shadows... (2, Interesting)

42forty-two42 (532340) | more than 6 years ago | (#22093462)

Read TFA. Ray tracing does NOT happen on the graphics card; it happens on your CPU. And they've got Quake 4 at 1280x running at 90 FPS raytraced already. Since raytracing scales almost linearly, as you add more cores to your CPU (which is likely the future direction of CPU technology improvement), you improve raytracing performance by about the same factor.

Re:AND...it doesn't produce realistic images! (1)

flewp (458359) | more than 6 years ago | (#22092580)

You can still bake the shadow information into the textures. Sure, it wouldn't necessarily work for dynamic, moving objects, but it's just one partial solution.

Re:AND...it doesn't produce realistic images! (1)

dave420 (699308) | more than 6 years ago | (#22092730)

Really? [irtc.org] And that's from 2000. Check out their more recent galleries for some realistic (and some not so realistic) images.

How far we've come in just 15 years (5, Interesting)

dada21 (163177) | more than 6 years ago | (#22092026)

I was a founder of one of the Midwest's first rendering farms back in 1993, a company that has now moved on to product design. Back then we had Pentium 60s (IIRC) with 64MB of RAM. A single frame of non-ray traced 3D Studio animation took an hour or more. We had probably 40 PCs that handled the rendering, and they'd chug along 20 hours a day spitting out literally seconds of video. I remember our first ray trace sample (can't recall the platform for the PC, though) and it took DAYS to render a single frame.

I do remember that someone found some shortcuts for raytracing, and I wonder if that shortcut is applicable to realtime rendering today. From what I recall, the shortcut was to do the raytracing backwards, from the surface to the light sources. The shortcut didn't take into account ALL reflections, but I remember that it worked wonders for transparent surfaces and simple light sources. I know we investigated this for our business, but at the time we also were considering leaving the industry since the competition was starting to ignite. We did leave a few months early, but it was a smart move on our part rather than continue to invest in ever-faster hardware.

Now, 15 years later, it's finally becoming a reality of sorts, or at least considered.

Many will say that raytracing is NOT important for real time gaming, but I disagree completely. I wrote up a theory on it back in the day on how real time raytracing WOULD add a new layer of intrigue, drama and playability to the gaming world.

First of all, real time raytracing means amazingly complex shadows and reflections. Imagine a gay where you could watch for enemies stealthily by monitoring shadows or reflections -- even shadows and reflections through glass, off of water, or other reflective/transparent materials. It definitely adds some playability and excitement, especially if you find locations that provide a target for those reflections and shadows.

In my opinion, raytracing is not just about visual quality but about adding something that is definitely missing. My biggest problem with gaming has been the lack of peripheral vision (even with wide aspect ratios and funky fisheye effects). If you hunt, you know how important peripheral vision is, combined with truly 3D sound and even atmospheric conditions. Raytracing can definitely aid in rendering atmospheric conditions better (imagine which player would be aided by the sun in the soft fog and who would be harmed by it). It can't overcome the peripheral loss, but by producing truer shadows and reflections, you can overcome some of the gaming negatives by watching for the details.

Of course, I also wrote that we'd likely never see true and complete raytracing in our lives. Maybe I'll be wrong, but "true and complete" raytracing is VERY VERY complicated. Even current non-real time raytracing engines don't account for every reflection, every shadow, every atmospheric condition and every change in movement. Sure, a truly infinite raytracer IS impossible, but I know that with more hardware assistance, it will get better.

My experience over the years was ALWAYS with static images that were raytraced. They looked great, but it wasn't until I experienced raytraced animations (high res, many reflective and transparent layers with multiple light sources and a sun-source) that I really saw the benefit and how it would aid in gaming.

The next step: a truly 3D immersive peripheral video system, maybe a curved paper-thin monitor?

Re:How far we've come in just 15 years (5, Funny)

Twisted64 (837490) | more than 6 years ago | (#22092154)

Imagine a gay where you could watch for enemies stealthily...
How's that voice recognition software working out for you?

Re:How far we've come in just 15 years (-1, Redundant)

Anonymous Coward | more than 6 years ago | (#22092186)

Imagine a gay where you could...
mmm....

Re:How far we've come in just 15 years (1, Funny)

Anonymous Coward | more than 6 years ago | (#22092192)

Imagine a gay

Paging Dr. Freud ...

Re:How far we've come in just 15 years (2, Interesting)

192939495969798999 (58312) | more than 6 years ago | (#22092202)

You can do global lighting (the next step from radiosity) with no side effects using interval rendering/raytracing. And since interval math lends itself to parallelization, your only limit would be hardware cost, which should eventually be low enough to have a globally-lit and raytraced real-time game. At first maybe just 3D Pacman, but how awesome would that be!

Re:How far we've come in just 15 years (2, Funny)

4D6963 (933028) | more than 6 years ago | (#22092302)

Imagine a gay who could watch for enemies stealthily by monitoring shadows or reflections

There, fixed it for you, it makes a bit more sense now, I guess..

Re:How far we've come in just 15 years (1)

hackstraw (262471) | more than 6 years ago | (#22093054)

My experience over the years was ALWAYS with static images that were raytraced. They looked great, but it wasn't until I experienced raytraced animations (high res, many reflective and transparent layers with multiple light sources and a sun-source) that I really saw the benefit and how it would aid in gaming.

All this is fine, but I think we will have to wait another 20+ years for computers to be fast and cheap enough before this becomes a reality.

The next step: a truly 3D immersive peripheral video system, maybe a curved paper-thin monitor?

The quality/cost is variable, but these things are available today: shutter glasses, linear and circular polarized glasses, and there is a TV on the market right now that offers a degree of 3D-ness without glasses (I can't find the link to it now).

It's definitely more fun dreaming about what will come in the computer world than dealing with the "Why the fsck doesn't this work like it should?" of today.

fscking pays better than dreaming though.

Re:How far we've come in just 15 years (1)

Marvin01 (909379) | more than 6 years ago | (#22093420)

The next step: a truly 3D immersive peripheral video system, maybe a curved paper-thin monitor?
I don't know about curved monitors or true 3D immersion, but I would expect in the near future to see PC games with support for 3 monitors: the main display screen and two others which you would arrange to give peripheral vision.

Re:How far we've come in just 15 years (1)

framauro13 (1148721) | more than 6 years ago | (#22093600)

Now, 15 years later, it's finally becoming a reality of sorts, or at least considered.
So your scenes finally completed rendering? :)

A very interesting perspective.

Holy Grail? Maybe not. (3, Informative)

Dr. Eggman (932300) | more than 6 years ago | (#22092046)

Although I have a hard time arguing in the realm of 3D lighting, I will direct attention to the Beyond3D article, Real-Time Ray Tracing: Holy Grail or Fool's Errand? [beyond3d.com] Far be it from me to claim that this article applies to all situations of 3D lighting (it may be that ray tracing is the best choice for games), but I for one am glad to see an article that at least looks into the possibility that ray tracing is not the best solution; I hate to just assume such things. Indeed, the article concludes that ray tracing has its own limitations and that a hybrid with rasterisation techniques would be superior to either one alone.

shaders vs ray tracing .... (1, Interesting)

Anonymous Coward | more than 6 years ago | (#22092070)

As already mentioned, OpenRT is not open source. A good open source RTRT I've looked at is Manta. http://code.sci.utah.edu/Manta/index.php/Main_Page [utah.edu]

And to the (+5 Insightful) naysayer who says that the future of games will be in shaders, not in RT... what are you comparing there? You're almost comparing a technology with an implementation.

You can implement a ray tracer on the GPU, i.e. through the use of shaders.

Well DUH!! (1, Funny)

Critical Facilities (850111) | more than 6 years ago | (#22092132)

From TFA

If you use a 16-core machine instead of a single-core machine then the frame rate increases by a factor of 15.2!


No kidding?? Well, if you drive a car with a 16-cylinder, 1500 HP engine, it's a LOT faster than a 4-cylinder compact. More on this story as it develops.

He's talking about scaling (2, Informative)

Xocet_00 (635069) | more than 6 years ago | (#22092278)

In a lot of cases in computing, doubling the number of pipelines (read: adding a second core, for example) does not, in fact, double performance unless the problem being worked on is highly parallelizable. For example, this is why one can not accurately describe a machine with two 2.5GHz processors as a "5GHz machine". Most computation that personal computers do on a day to day basis does not scale well, and the average user will reach a point of diminishing returns very quickly if they add many cores to increase performance for these tasks.

So all he's demonstrating here with his "16-core" experiment is that ray-tracing is a highly parallel process, and that throwing lots of cores at it will work effectively to increase performance without reaching that point of diminishing returns (at least, not reaching it very quickly.) Yes, we expect 16 cores to be faster than 4 cores or 1 core, but he's saying that when we're ray-tracing we can expect 16 cores to be almost four times faster than four cores and almost sixteen times faster than one.
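
Putting numbers on that: a 15.2x speedup on 16 cores is 95% parallel efficiency, and Amdahl's law lets you back out the serial fraction such a speedup implies. A quick check (the 15.2 figure is from TFA; the Amdahl model is an assumption about where the losses come from):

    # Parallel efficiency of a 15.2x speedup on 16 cores, plus the serial
    # fraction Amdahl's law would imply for that speedup.
    cores, speedup = 16, 15.2
    efficiency = speedup / cores                          # ~0.95
    # Amdahl: speedup = 1 / (s + (1 - s)/cores)  ->  solve for serial fraction s
    serial = (cores / speedup - 1) / (cores - 1)          # ~0.35%
    print(f"efficiency {efficiency:.0%}, implied serial fraction {serial:.2%}")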

Re:Well DUH!! (1)

tmagic (1133263) | more than 6 years ago | (#22092306)

4 times as fast?

Re:Well DUH!! (1)

Yetihehe (971185) | more than 6 years ago | (#22092342)

It's more like trucking sand with 16 cars instead of one. Now try to truck one big rock with 16 cars. Some things aren't easily parallelizable (tough word). Also, some processes don't translate well into car metaphors.

Re:Well DUH!! (0)

Anonymous Coward | more than 6 years ago | (#22092378)

If you use a 16-core machine instead of a single-core machine then the frame rate increases by a factor of 15.2!
No kidding?? Well if you drive a car with a 16 cylinder, 1500 HP engine, it's a LOT faster than a 4 cylinder compact. More on this story as it develops.
WHOOSH! That's the sound of his claim going right over your head.

Re:Well DUH!! (2, Insightful)

prefect42 (141309) | more than 6 years ago | (#22092956)

I'm sure you thought you were being clever, but you weren't.

If you knew anything about parallel algorithms you'd know that it's fairly common to have things that scale with more like 75% efficiency, and you're still happy. It's all down to how much communication is required, and how often it's required. With raytracing normally (as in, a decade ago when I knew anything about this) you'd parallelise over multiple frames. With real-time rendering you'd need to parallelise within a frame. Depending on your scene, there will be a static cost (bounding box calculations) in addition to the per-pixel costs.

jh

in the player's best interests, natch (4, Insightful)

Speare (84249) | more than 6 years ago | (#22092170)

Daniel Pohl, a marketer at Intel

There, fixed that for you.

Raytracing the shiny first-intersection makes a lot of sense, even if it doesn't sell more CPUs. Sure, some day we will all have stunning holistic scene graphs that fit entirely within the pipeline cache of the processor, but it's not yet time for that.

Every change in developing a game engine requires changes in the entire toolset to deal with how to produce assets, how to fit within render time limit budgets, and how to model the scene graph and the logic graphs so that both are easily traversed and managed.

In the meantime, we have a pretty nice raster system right now, with a development strategy that provides for all those needs. You might not think that fullscale raytracing would upset this curve, but I'm not convinced. What do you do when a frame suddenly is taking more than 1/30sec to render, because the player is near a crystalline object and the ray depth is too high? How do you degrade the scene gracefully if your whole engine is built on raytracing? We've all played games where things like this were not handled well.

I contend that game AI is sometimes more advanced than academic AI because game developers are results-oriented and cut corners ruthlessly to achieve something that works well enough for a niche application. The same goes for game graphics: 33 milliseconds isn't enough to render complex scene graphs in an academically perfect and general way; it will require the same results-oriented corner-cutting to nudge the graphics beyond what anyone thought possible in 33ms. If that means using raytracing for a few key elements and ray-casting/z-buffering/fragment-shading the rest of the frame, game developers will do it.

Re:in the player's best interests, natch (0)

Anonymous Coward | more than 6 years ago | (#22092346)

Oh yeah, you're right ...

"In the meantime, we have a pretty nice raster system right now ..."

"In the meantime, we have a pretty nice Windows Operating system right now ..."

You see some kind of parallel there? Just because the market is already oriented in some way doesn't mean that's the way to keep going... We have a generic CPU competing against specialized hardware... Guess who wins? The specialized hardware, of course.

I agree with you that until the raytracing "technology" is mature enough, it would need to cut corners to even be able to compete with raster. But doesn't raster cheat already?

Re:in the player's best interests, natch (0)

Anonymous Coward | more than 6 years ago | (#22093062)

Every change in developing a game engine requires changes in the entire toolset to deal with how to produce assets
It's not THAT bad, as most of the toolset is set up for polygons now. Guess what raytracing uses? I think the real bottleneck will be memory bandwidth with so many CPUs chugging away.

Now it's a good time for a new Amiga. (2)

master_p (608214) | more than 6 years ago | (#22092258)

What does the Amiga have to do with raytracing? Well, let me explain:

When the Amiga was released, it was a quantum leap in graphics, sound, user interface and operating system design. It could run full screen dual-playfield displays in 60 frames per second with a multitude of sprites, it had 4 hardware channels of sound (and some of the best filters ever put on a sound board), its user interface was intuitive and allowed even different video modes, and its operating system supported preemptive multithreading, registries per executable (.info files), making installation of programs a non-issue, and a scripting language that all programs could use to talk to each other.

20 years later, PCs have adopted quite a few trends from the Amiga (the average multimedia PC is now filled with custom chips), and added lots more in terms of hardware (hardware rendering, hardware transformation and lighting). It seems that the problems we had 20 years ago (how to render 2d and 3d graphics quickly) are solved.

But today's computing has some more challenges for us: concurrency (how to increase the performance of a program through parallelism) and, when it comes to 3D graphics, raytracing! Incidentally, raytracing is a computational problem that is naturally parallelizable.

So, what type of computer shall we have that solves the new challenges?

It's simple: a computer with many many cores!

That's right...the era of custom chips has to be ended here. Amiga started it for personal computers, and a new "Amiga" (be it a new console or new type of computer) should end it.

A machine with many cores (let's say, a few thousand cores), will open the door for many things not possible today, including raytracing, better A.I., natural language processing and many other difficult to solve things.

I just wish there were some new RJ Micals out there thinking about how to bring concurrency to the masses...

No, raytracing is BETTER adapted to custom chips. (1)

argent (18001) | more than 6 years ago | (#22092442)

So, what type of computer shall we have that solves the new challenges?

A custom chip that has hundreds or thousands of dedicated raytracing processors that run in parallel. Raytracing is embarrassingly parallelizable, so it's far better suited to specialized processors than vectorizing.

Saarland University, the people who designed OpenRT in the first place, were getting 60+ frames a second on a hardware raytracing engine in 2005... and their raytracing engine only had 6 million gates and ran at 75 MHz. Today's GPUs have 100 times as many gates and run at 8 times the clock. A dedicated raytracer built with nVidia's or ATI's or Intel's resources instead of Saarland University's could give you that thousand core processor right now.

Don't get it, (1)

fozzmeister (160968) | more than 6 years ago | (#22092264)

OK, when models get further away in games, they get less detailed, both in terms of textures and polygons. Now, if you have a light, with a simplified character, casting a shadow onto a wall quite a long way away, this simplification might become very much more obvious.

No you don't (1)

SmallFurryCreature (593017) | more than 6 years ago | (#22093272)

That is an optimization that is used today. It is NOT a law. Think of the real world: just because that huge billboard is miles away doesn't mean some guy runs up to it, tears down the paper and puts a low-res version up on it. The entire world, in RL and in 3D, has the same detail no matter where it is.

As you shoot the ray, it finds the entire world at the same size and detail. This is actually one of the problems: for proper raytracing you can't use a lower-res model for faraway objects, because then the scene might indeed end up as you say, reflecting low-res stuff up close.

However, using lower-res models is a crappy shortcut anyway: while it currently does lessen the rendering burden, it in turn demands considerable effort to supply the various versions of models and textures.

There is another problem with it: zooming/magnification. Say you are looking at a faraway character; now you switch to your sniper scope and all of a sudden you see a stick figure. Doesn't work, does it? Same with cameras that might show that model up close while you are far away. Notice how many outdoor shooters have you wade through grass obscuring your close-range vision, while a sniper sees you standing on clear, open ground.

I remember Intel making a lot of noise years ago about this optimization of cutting detail on faraway objects; it worked, but only for games with a fixed view of the world where you couldn't all of a sudden get a close-up. In regular rendering you can pull the same stunt: if a building is ALWAYS going to be background, you only need to build the part you see. Many a 3D scene is like an old Hollywood set: nice facades and nothing in back.

As we get faster hardware, hopefully we can do away with the old hacks used to come up with a scene that can be rendered. Quad-core PCs don't even cost that much right now; it will be interesting to see what a few more years will bring. Mind you, if it all goes to the CPU, I expect AI will take a hit.

Honk! Honk! (1)

tripwirecc (1045528) | more than 6 years ago | (#22092296)

That guy should ask the movie industry whether mixing rasterizing and raytracing makes sense or not. The de facto rendering engine, Pixar's PhotoRealistic RenderMan, treats frankenrendering as a worthwhile thing.

Re:Honk! Honk! (1)

91degrees (207121) | more than 6 years ago | (#22092422)

The movie industry could conceivably be wrong.

Re:Honk! Honk! (1)

tripwirecc (1045528) | more than 6 years ago | (#22092898)

But it works for them. And it's still faster than pure raytracing.

Re:Honk! Honk! (0)

Anonymous Coward | more than 6 years ago | (#22093068)

Do they do realtime rendering?

Vista ? (1)

mynickwastaken (690966) | more than 6 years ago | (#22092628)

So, the next version of Windows will have raytraced windows on screen?
What would they call it then?!

General purpose CPUs: a REALLY bad way to do this. (5, Interesting)

argent (18001) | more than 6 years ago | (#22092716)

Professor Philipp Slusallek of Saarland University demonstrated a dedicated raytracer in 2005, using a 66 MHz Xilinx FPGA with about 6 million gates. The latest ATI and nVidia GPUs have 100 times as many transistors and run at 6-8 times the clock with hundreds of times the memory bandwidth. Raytracing is completely parallelizable and scales up almost linearly with processors, so it's not at all unlikely that, if those kinds of resources were applied to raytracing instead of vectorizing, you'd be able to add a raytracer capable of rendering 60+ FPS at the level of detail of the very latest games into the transistor budget of the chips they're designing now without even noticing.

Here's a debate between Professor Slusallek and chief scientist David Kirk of nVidia: http://scarydevil.com/~peter/io/raytracing-vs-rasterization.html [scarydevil.com]

Here's the SIGGRAPH 2005 paper, on a prototype running at 66 MHz: http://www.cs.utah.edu/classes/cs7940-010-rajeev/sum06/papers/siggraph05.pdf [utah.edu]

Here's their hardware page: http://graphics.cs.uni-sb.de/SaarCOR/ [uni-sb.de]

Re:General purpose CPUs: a REALLY bad way to do th (0)

Anonymous Coward | more than 6 years ago | (#22093242)

Mod parent up. The debate and the SIGGRAPH paper are more on topic than the rest of the usual junk posted in the comments. Additionally, thanks for sharing the debate, I hadn't read it before.

headtracking (1)

ANCOVA (1175953) | more than 6 years ago | (#22093002)

If I understand the concept correctly, this combined with headtracking (the Wii guy?) will revolutionize the gaming world. Of course there's a fat chance I didn't get it at all, 'coz I don't have the time to read TFA at work...

Nvidia rejoice! There's money to make! (1)

Cathoderoytube (1088737) | more than 6 years ago | (#22093232)

Uugh. I'd like to say depth-map shadows work perfectly well in games, and it'd be nice to stay with them.
Raytracing is a lot more intensive on the hardware, which to me says that the video card companies will be pushing for game developers to implement it just so they can sell more powerful cards.
I guess the next logical step after raytracing would be global illumination and real-time displacement maps, none of which the gaming industry actually needs.

Once again what the gaming industry needs is creativity not video cards with 2gb of on board memory.

There is some raytracing already... sort of (1)

TomorrowPlusX (571956) | more than 6 years ago | (#22093276)

I'm not disagreeing with the author (I did RTFA), but I want to say there is some ray tracing (in a sense) already in some modern games. Specifically, some types of parallax mapping can be considered bastard red-headed stepchildren of raytracing.

What you have is ray tracing in texture space, but that texture is brought to the screen via conventional scanline rasterization methods. Sort of. My GLSL parallax shader code sucks, though (looks all gelatinous close up), so I'm no expert...
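
For what it's worth, steep parallax mapping really is a short ray march through a height field in texture space. A toy 1D version in plain Python, with a made-up height field and step count, just to show the marching loop (a real implementation would live in a shader):

    # Sketch: march a view ray through a heightmap in texture space until it
    # dips below the surface, then use the shifted texture coordinate.
    heights = [0.0, 0.1, 0.3, 0.8, 0.8, 0.3, 0.1, 0.0]   # made-up height field

    def parallax_offset(u, view_slope, steps=32, depth_scale=0.1):
        # Walk along the view ray in texture space; return the shifted u coordinate.
        du = view_slope * depth_scale / steps
        layer_step = 1.0 / steps
        ray_height = 1.0
        for _ in range(steps):
            i = min(int(u * (len(heights) - 1)), len(heights) - 2)
            surface = heights[i]
            if ray_height <= surface:       # the ray has gone "under" the bumps: stop
                return u
            u += du                         # advance across the texture ...
            ray_height -= layer_step        # ... while descending toward it
        return u

    print(parallax_offset(0.2, view_slope=1.5))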

You fa1l 1t.. (-1, Offtopic)

Anonymous Coward | more than 6 years ago | (#22093314)

Awesome. (2, Funny)

randomaxe (673239) | more than 6 years ago | (#22093378)

I can't wait to play a game where I'm a shiny silver ball floating above a checkered marble floor.