Graphics Software

Intel Researchers Consider Ray-Tracing for Mobile Devices

An anonymous reader points out an Intel blog discussing the feasibility of Ray-Tracing on mobile hardware. The lower resolution of these devices reduces the required processing power enough that they could realistically run Ray-Traced games. We've discussed the basics of Ray-Tracing in the past. Quoting: "Moore's Law works in favor of Ray-Tracing, because it assures us that computers will get faster - much faster - while monitor resolutions will grow at a much slower pace. As computational capabilities outgrow computational requirements, the quality of rendering Ray-Tracing in real time will improve, and developers will have an opportunity to do more than ever before. We believe that with Ray-Tracing, developers will have an opportunity to deliver more content in less time, because when you render things in a physically correct environment, you can achieve high levels of quality very quickly, and with an engine that is scalable from the Ultra-Mobile to the Ultra-Powerful, Ray-Tracing may become a very popular technology in the upcoming years."
  • by click2005 ( 921437 ) on Sunday March 02, 2008 @09:42AM (#22615114)
    Moore's Law works in favor of Ray-Tracing, because it assures us that computers will get faster - much faster - while monitor resolutions will grow at a much slower pace.

    Inverse Moore's Law states that the more time that developers spend on making games look 'pretty', the less time they spend on playability.
    • by koh ( 124962 ) on Sunday March 02, 2008 @09:44AM (#22615122) Journal

      Inverse Moore's Law states that the more time that developers spend on making games look 'pretty', the less time they spend on playability.
      My psychic powers tell me you've played one of the recent Final Fantasy titles.

      • Re: (Score:2, Informative)

        by hvm2hvm ( 1208954 )
Nah, most games these days tend to focus on graphical and sound effects rather than playability. The trend is similar to the movies Hollywood makes en masse: pretty good effects, lousy plots. Most games I've played on a mobile have low-quality graphics, but the playability makes them worthwhile. What good is ray tracing going to do if the game is hard to control or understand? Many mobile devices don't even have good support for multiple simultaneous keypresses.
        • by irae ( 1152885 )
What games did you play on a mobile? I find them all pretty boring...
          • Re: (Score:3, Interesting)

            by hvm2hvm ( 1208954 )
Well, I only play games on my mobile when I'm waiting for the bus or something. My point was that I tried some 3D racing games and some kind of 2D Splinter Cell clone, but the only ones I actually feel like playing when I'm bored are a Zuma clone and two other simple games. Maybe it's because I don't need to pay much attention, or because I don't need time to understand how to play them. But I can't see why anyone would want to play a complex game on such a small screen with those really bad controls.
      • Re: (Score:3, Insightful)

        by mpeskett ( 1221084 )
Sooner or later, graphics that are completely indistinguishable from real life will be available on low-end hardware; then they'll have to start competing by making good games instead of just pretty games.
        • by Instine ( 963303 )
          Why not make the phone a VR headset too...

          Strap it on your noggin and immerse... or pick it up and dial.
          • Considering how much of a driving hazard they can be already, do you really think that's such a good idea?
          • Strap it on your noggin and immerse...
            If you really want to make money on VR, then your noggin isn't where you'd have people strapping the device.
Unfortunately no... when that happens we'll probably focus on how "real" the characters feel based on things like AI instead of creating games with good gameplay.
You think so? I bet they'll go after the physics then. After that, they'll go after the senses - hearing, touch, smell. Improving technology is much easier than writing good games. Let's just hope that by the time they're done with all that, there are still people around who know how to write good games.
Agreed. I've been hearing a lot from Intel on RTRT lately and I think they are barking up the wrong kd-tree (ha ha). It's far more likely that people will start caring less and less about visual fidelity and more and more about the content and gaming experience. It's the same thing we saw with offline rendered movies a few years back. Visual quality increased steadily (Final Fantasy: The Spirits Within?) and then dropped relatively with more emphasis on entertainment (most of the new animated movies). On the o
    • by jcnnghm ( 538570 ) on Sunday March 02, 2008 @10:01AM (#22615176)
      You could probably argue that is why the Wii is selling so well.
      • Re: (Score:3, Insightful)

        by farkus888 ( 1103903 )
I would agree with that argument. The Wii got me back into gaming after a break of a few years. I had quit because I was annoyed with games being all about graphics and not being fun enough to actually draw me in.
You could, but you could counter-argue that there are only a few really great games on it, and its success must have other important contributing factors (price, novelty, branding and its competition).
Strange. I thought it said the lower the screen resolution, the lesser the point in even bothering with ray-traced graphics. Oh wait, that's the "Common Sense Law."
In this case developers will spend less time with ray tracing. You don't need to waste time on shadows (very hard to emulate and never perfect with the current z-buffer rasterization method), depth-of-field tricks/shaders, or emulating the infamous water refraction or a reflection in a mirror. What baffles me is ray tracing being introduced first on mobile devices and not on desktop computers.
      • by Slarty ( 11126 ) on Sunday March 02, 2008 @11:17AM (#22615442) Homepage
For games, at least, shadows don't need to be perfect. Neither do reflection and (especially) refraction. The goal is all about rendering something that looks plausible, not perfect (although it's a bonus if you can get it). Most people (and especially gamers) just aren't going to notice if the shadows or caustics or what-not are a tiny bit "off".

Current rasterization approaches use a lot of approximations, it's true, but they can get away with that because in interactive graphics, most things don't need to look perfect. It's true that there's been a lot of cool work done lately with interactive ray tracing, but for anything other than very simple renderings (mostly-static scenes with no global illumination and hard shadows), ray tracers *also* rely on a bunch of approximations. They have to: getting a "perfect", physically correct result is just not a process that scales well. (Check out The Rendering Equation on Wikipedia or somewhere else if you're interested; there's an integral over the hemisphere in there that has to be evaluated, which can recursively turn into a multi-dimensional integral over many hemispheres. Without cheating, the evaluation of that thing is going to kick Moore's law's ass for a long, long time.)
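For reference, the hemispherical integral being alluded to is the rendering equation; ignoring time and wavelength it looks roughly like

    L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (\omega_i \cdot n) \, d\omega_i

The recursion comes from the fact that the incoming radiance L_i at x from direction \omega_i is itself the outgoing radiance L_o of whatever surface is visible in that direction, so an exact evaluation nests these integrals without bound unless you approximate.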

        By the way, the claim that with a "physically correct environment, you can achieve high levels of quality very quickly" doesn't really make much sense. What's a "physically correct environment" and what is it about rasterization that can't render one? How are we defining "high levels of quality" here? And "very quickly" is just not something that applies much to ray tracers at the moment, especially in the company of "physically correct". :-)
You don't need realistic graphics to have a fun game; the Wii is the proof. But that doesn't mean graphics don't help; without graphics improvements we would still be playing Pong. The rendering equation applies to rasterization just as it does to ray tracing; the question is how well each solution scales from here. Rasterization needs to handle the overhead of drawing useless invisible polygons, calculating shadows, and emulating reflections, sometimes by doubling the rendering effort and resorting to shader tricks. Ray traci
        • Re: (Score:2, Informative)

          by a_claudiu ( 814111 )
          About the scalability of ray tracing vs. rasterization. ftp://download.intel.com/technology/itj/2005/volume09issue02/art01_ray_tracing/vol09_art01.pdf [intel.com]
          • by Slarty ( 11126 ) on Sunday March 02, 2008 @02:58PM (#22616630) Homepage
Sure, the rendering equation isn't ray-tracing specific (it's a core graphics equation, independent of any one image-generation method), but it's much easier to directly apply in ray tracing. There aren't many rasterization techniques that even attempt to solve it... the goal usually is just to add some ambient light effects which look like a plausible attempt at global illumination. AFAIK, even the latest, greatest game engines still stop short at something like baked-in ambient occlusion or screen-space darkening using the depth buffer. It looks cool, but physically accurate it ain't. It's much more natural to get "perfect" results in ray tracing, but that was kinda my point: getting those accurate results is pretty costly. If people don't notice the difference, why bother? Stick with the cheap approximation.

And about scalability, you're right, of course; ray tracing does scale better with scene complexity than rasterization does, and as computing power increases it will make more and more sense to use ray tracing. However, the ray tracing vs. rasterization argument has been going on for decades now, and while ray tracing researchers always seem convinced that ray tracing is going to suddenly explode and pwn the world, it hasn't happened yet and probably won't for the foreseeable future. Part of it is just market entrenchment: there are ray tracing hardware accelerators, sure, but who has them? And although I've never worked with one, I'd imagine they'd have to be a bit limited, just because ray tracing is a much more global algorithm than rasterization... I can't see how it'd be easy to cram it into a stream processor with anywhere near as much efficiency as you could with a rasterizer. On the other hand, billions are invested into GPU design every year, and even the crappiest computers have one nowadays. With GPUs getting more and more powerful and flexible by the year, and ray tracing basically having to rely on CPU power alone, the balance isn't going to radically shift anytime soon.

For the record, although I do research with both, I prefer ray tracing. It's conceptually simple, it's elegant, and you don't have to do a ton of rendering passes to get simple effects like refraction (which are a real PITA for rasterization). But when these articles come around (as they periodically do on Slashdot) claiming that rasterization is dead and ray tracing is the future of everything, I have to laugh. That may happen, but not for a good long while.
I believe that switching to ray tracing ceased to be a real technical problem some time ago. The only things that kept rasterization going were the possibility of still increasing GPU clock speeds and the economic risk of implementing a new platform and standard. Now, with CPUs/GPUs hitting the ceiling of the brute-force speed-increase approach, the switch to parallel processing will help ray tracing (did you notice the latest power hogs?).

For the moment rasterization still has a small breathing space as lon
        • photon mapping (Score:3, Interesting)

          by j1m+5n0w ( 749199 )

...getting a "perfect", physically correct result is just not a process that scales well. (Check out The Rendering Equation on Wikipedia or somewhere else if you're interested; there's an integral over the hemisphere in there that has to be evaluated, which can recursively turn into a multi-dimensional integral over many hemispheres. Without cheating, the evaluation of that thing is going to kick Moore's law's ass for a long, long time.)

          Photon mapping is a pretty good way of getting an unbiased approximation

Ray tracers don't rely on approximations only for "anything other than very simple renderings"; they rely on approximations for all rendering. For one thing, they completely ignore the wave nature of the light interacting with the objects being modelled. There was an attempt (published at SIGGRAPH in the 70s, IIRC) to render by directly modelling the wave interactions of light with the modelled objects, but even the supercomputers of the time were inadequate... could be time to revisit that.

Funny though, the argument about processing power in

      • by mdwh2 ( 535323 )
What baffles me is ray tracing being introduced first on mobile devices and not on desktop computers.

        My guess would be lower resolutions (and the article says this too). Also mobile devices tend not to have powerful GPUs dedicated for doing 3D graphics the traditional polygon way - it's much easier to compete with software rendering than with hardware.

        I'm confused by the "Moore's Law works in favor of Ray-Tracing" bit though - Moore's Law works in favour of any computation. And traditional polygon rasterisation
Sure, if what you want to do is play a "game" on the PC, then yeah, forgo the "pretty" look and just concentrate on the "how many points can I score" aspect.

However, since Doom, I've never played a game just to pass the time scoring on my friends or getting good at Tetris. I actually love the experience of exploring, and going somewhere else that I can't go in real life. In these terms, the bigger the world a game company can create, the more visually stunning and accurate the photorealism is, and
    • Because Intel's just gotta make those quarterly numbers.

      Just ask Microsoft and the Vista team in particular.
    • Inverse Moore's Law states that the more time that developers spend on making games look 'pretty', the less time they spend on playability.

      The computer graphics inverse of Moore's Law is known as Blinn's Law, and it essentially says that audience expectation rises at the same rate as Moore's Law.

      Originally posed for the animation/vfx industries, the actual statement of Blinn's Law is that the amount of time it takes to compute one frame of film is constant over time. The corollary is that it doesn't matt

    • by Ilgaz ( 86384 ) *

      Moore's Law works in favor of Ray-Tracing, because it assures us that computers will get faster - much faster - while monitor resolutions will grow at a much slower pace.

As you know, Moore is one of the founders of Intel, and their research works in favor of Intel too: instead of pushing the boundaries of OpenGL ES or the J2ME 3D extensions (which are also based on OpenGL), offload the work that a phone's mini GPU could do as a trivial task onto the phone's CPU, where Intel has significant market share.
I loved USB 1/2 technology until I purchased a FireWire hard drive and wandered around the web for an explanation of why 400Mbit FireWire is almost 2x real-world performan

  • Sweet!! (Score:2, Funny)

    by glavenoid ( 636808 )
    It's about time for S.P.I.S.P.O.P.D. for mobile devices! I've only been waiting about 15 years!!!
That phones may be able to ray trace is news? Sounds more to me like Intel has been reading in the news all week how inferior their graphics stuff is because of the Microsoft Vista debacle, part eight - and suddenly we have an anonymous tip to a blog at Intel saying that ray tracing on phones is "an opportunity to deliver more content in less time" and "Ray-Tracing may become a very popular technology in the upcoming years".

A popular technology? Like a working filesystem? They're really popular, I hear. Or an o
  • Brilliant! (Score:5, Funny)

    by neonmonk ( 467567 ) on Sunday March 02, 2008 @10:23AM (#22615232)
I can just see two moustached elderly gents discussing research, possibly even drinking Guinness out of a bottle. They go silent for a few minutes and then one of them, whilst stroking his long droopy moustache, suddenly jumps up and proclaims:

    "Holy Crap! Mobile gaming devices have tiny screens, imagine how easy it'd be to use advanced raytracing graphics!"
    "Brilliant!"
    • "Unfortunately we're not as clever as those Intel chaps, how will we make it work?"
      "Hmmm....."
      (long pause)
      "What about rendering really small scenes on a big stonking server and then using some sort of 'Network' to make the images appear?"
      "That sounds like some kind of magic!"

      Fantastic research [acm.org].
  • by nurb432 ( 527695 ) on Sunday March 02, 2008 @10:45AM (#22615288) Homepage Journal
    "As computational capabilities outgrow computational requirements, the quality of rendering Ray-Tracing in real time will improve, and developers will have an opportunity to do more than ever before."

This attitude is why, even though our computers are 1000x faster than the ones we had 20 years ago, they actually perform worse overall.
    • Re: (Score:3, Insightful)

      by DarkOx ( 621550 )

This attitude is why, even though our computers are 1000x faster than the ones we had 20 years ago, they actually perform worse overall.

I would say yes and no. It's one thing to have the computer do something simply because it can; I agree that is very wasteful. Raytracing is not needed on a 300x200 screen, especially while playing a game where things are moving.

On the other hand, 20 years ago, like today, we compromised and dispensed with things or found ways to "fake it" in cases where the computers could not deliver. It's really not critical that shadows are rendered perfectly on my mobile phone while I am playing Doom57 Mobile Edition. An arch

    • Re: (Score:2, Insightful)

      I hate "the old days were so much better!" comments, especially when it comes to computing.

20 years ago, no one was connected to a 3 Mbps line, listening to music, with a mail and an IM client constantly pinging back, watching a video on YouTube in one of twenty tabs in Firefox, with vim/emacs/eclipse open and Azureus plugging away at some torrents as fast as it could, on two 1280x1024 screens in real colour, all simultaneously, on a single core. That's what I do today on a CPU I bought years ago, and I still don't notice significant slowdowns.
      • by nurb432 ( 527695 )
        Get off my lawn ya whippersnapper.

If it has to be explained why the "older days" of computing were better than today, you are too young and would never understand.
    • LOADING.......LOADING.......LOADING.......
      INSERT DISK 4 AND PRESS <RETURN>
      I for one, do not welcome 20 year old hardware.
    • by jcnnghm ( 538570 )
I wouldn't say that. I remember back when my computer could only run a single application at a time, when WordPerfect had white text on a blue background (Reveal Codes and dot-matrix printers, anyone?), was slightly more usable than vim is today, and there was a noticeable lag between my typing and the text appearing on the screen. That wasn't even 20 years ago; that was the early to mid nineties.

      I'm looking at my desktop right now, and I'm running no less than 6 different applications including a web browse
  • prog10 (Score:5, Funny)

    by k2enemy ( 555744 ) on Sunday March 02, 2008 @10:49AM (#22615300)
    Too bad the source code for the highly optimized prog10 raytracer was lost in the great hard drive crash of '98.
  • by DigitAl56K ( 805623 ) on Sunday March 02, 2008 @10:50AM (#22615308)

    Moore's Law works in favor of Ray-Tracing, because it assures us that computers will get faster - much faster - while monitor resolutions will grow at a much slower pace.
    Where did this "assurance" come from? Display resolutions grow as quickly as the latest games can run smoothly at the leading-edge dimensions. Since Moore's law is about doubling processing power, but doubling the display resolution means quadrupling the number of pixels, you may find the relationship is in fact much closer than you'd think.
    • by vulgrin ( 70725 )
      Just trying some random math that may or may not be based upon bad assumptions:

      1993 to 2008 = 15 years = 180 months = 10 Moore's Law Cycles
Monitor resolution over that period went from 640x480 to 1920x1200 (pulling these from my butt; we could argue EGA vs. VGA and what percentage of users actually have 1920x1200, but it's a place to start)

Pixels: 640x480 = 307,200 to 1920x1200 = 2,304,000, which is a factor of 7.5.

      So, in summary, over the past 15 years Moore's Law has eclipsed monitor growth. 10 times vs. 7.5 times.

      I
      • by vulgrin ( 70725 )
        Oh, and Moore's law is exponential. If resolution was exponential and we doubled as often as Moore's law, then in the past 15 years our monitor resolution would have gone from 640x480 to 327,680x245,760

        So, I think it's safe to say the summary ISN'T misleading.
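(Spelling the comparison out, using the parent's own assumptions of an 18-month doubling period and the 640x480 to 1920x1200 jump:

    15 \text{ years} \approx 10 \text{ doublings} \Rightarrow 2^{10} = 1024\times \text{ in transistors}, \qquad \frac{1920 \times 1200}{640 \times 480} = \frac{2{,}304{,}000}{307{,}200} = 7.5\times \text{ in pixels}

so under these back-of-the-envelope numbers, the compute available per pixel still grew by roughly 1024 / 7.5, i.e. about 137x, over the period.)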
    • Where did this "assurance" come from? Display resolutions grow as quickly as the latest games can run smoothly at the leading-edge dimensions.
      Moore's Law has achieved meme status. We now have Moore's Law of Business, of Display Resolution, of Hair Length, of a Geek's chance to have Sex, etc. I wonder just how many people have actually READ Moore's actual prediction and aren't just quoting it second-hand.
    • Re: (Score:3, Informative)

      by node 3 ( 115640 )
      Moore's Law says the number of transistors in a certain area at a certain cost will double about every 18 months. This effectively seems to double computer speed every 18 months.

      Doubling the number of transistors on an LCD does not double the resolution (as you pointed out), it only multiplies each dimension by the square root of 2. Doubling the number of transistors on a CRT does nothing (well, maybe it gives you a more impressive OSD). But even limiting it to LCDs, it does not hold up. Display resolution
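(For concreteness, the square-root-of-two point is just that pixel count is the product of the two dimensions:

    2\,(w \times h) = (\sqrt{2}\,w) \times (\sqrt{2}\,h)

so doubling the number of pixels, or of the transistors driving them, raises each linear dimension by only a factor of \sqrt{2} \approx 1.41.)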
    • Re: (Score:2, Interesting)

      by Anonymous Coward
      Display resolutions have been getting higher, but the eye is not getting better, so there is a limit to the useful resolution of any display, and we are getting close. For a 24" widescreen at normal viewing distance, you're not going to ever want a resolution much higher than 1920x1200. Instead, you'd like the display to be bigger to take up a larger part of your field of view. But there's a problem with this; in fact your eyes can only take in a small part of the display at once. The eye has high resol
    • Re: (Score:2, Informative)

      by maxume ( 22995 )
Sort of. You only need as many pixels as the eye can see at the distance the display is used at (and maybe some extra for leaning in). If you jump through some hoops, you can come up with a resolution for a given distance:

      http://en.wikipedia.org/wiki/Eye#Acuity [wikipedia.org]
      http://www.dansdata.com/gz029.htm [dansdata.com]

Piggy-backing on Dan's hand-waving, 300 dpi at 1 foot is a decent rule of thumb, and waving my own hands, 1 foot is a reasonable minimum distance for a handheld device (I don't imagine most people holding something any c
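(As a worked example of that rule of thumb, assuming a hypothetical 2.4 in x 1.6 in handheld screen viewed at about one foot:

    300 \text{ dpi} \times 2.4 \text{ in} = 720 \text{ px wide}, \qquad 300 \text{ dpi} \times 1.6 \text{ in} = 480 \text{ px tall}

so something in the neighbourhood of 720x480 is already close to the most the eye can resolve on a screen of that size at that distance.)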
  • by should_be_linear ( 779431 ) on Sunday March 02, 2008 @10:51AM (#22615318)
As Intel couldn't compete with ATI/nVidia on 3D rendering performance, they simply redefined the rules of the game. Now they seem ahead of everyone else in real-time ray tracing, at least based on publicly presented papers. Next they need to integrate this into some bigger picture of a "new gaming platform". If they manage to integrate this kind of graphics with the Java JVM in a coherent way, developers could more easily utilize multiple cores in games and write games once, then run them on all platforms and future consoles as a bonus. That would be a big step towards letting developers focus on gameplay and not on DirectX/OpenGL/PS3/... API generations, extension nuances, tricks for simulating shadows, optimizing polygon counts in big scenes, and so on; ray tracing makes all this simple without requiring effort on the developer's side. Yes, I know Java is some percent slower than C++, but in Java it is so much easier to utilize multiple cores (especially when it comes to debugging) that I am sure performance will be gained, not lost, on modern CPUs.
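As a rough illustration of the "easier to utilize multiple cores" point, here is a minimal sketch (hypothetical class and method names, one hard-coded sphere, a single primary ray per pixel and no real shading) of how a ray tracer's per-pixel independence maps onto the JVM's parallel streams. This is not Intel's engine or any shipping API, just the embarrassingly parallel structure being described:

    import java.util.stream.IntStream;

    public class TinyTracer {
        static final int W = 320, H = 240;           // small, phone-like resolution
        static final int[] pixels = new int[W * H];  // ARGB framebuffer

        public static void main(String[] args) {
            // Every pixel is independent, so scanlines can be traced on any core
            // with no locking: no two threads ever write the same array slot.
            IntStream.range(0, H).parallel().forEach(y -> {
                for (int x = 0; x < W; x++) {
                    pixels[y * W + x] = trace(x, y);
                }
            });
            System.out.println("Rendered " + W + "x" + H + " on "
                    + Runtime.getRuntime().availableProcessors() + " cores.");
        }

        // Trace one primary ray from a camera at the origin (looking down -z)
        // against a single unit sphere centred at (0, 0, -3); shade by distance.
        static int trace(int px, int py) {
            double x = (px + 0.5 - W / 2.0) / (W / 2.0);
            double y = (H / 2.0 - (py + 0.5)) / (H / 2.0);
            double len = Math.sqrt(x * x + y * y + 1.0);
            double dz = -1.0 / len;                   // z of the normalised ray direction
            double b = 2.0 * dz * 3.0;                // 2 * dot(direction, origin - centre)
            double c = 9.0 - 1.0;                     // |origin - centre|^2 - radius^2
            double disc = b * b - 4.0 * c;
            if (disc < 0) return 0xFF101020;          // miss: dark background
            double t = (-b - Math.sqrt(disc)) / 2.0;  // nearest hit distance
            int shade = (int) Math.max(0, Math.min(255, (3.0 - t) * 255));
            return 0xFF000000 | (shade << 16) | (shade << 8) | shade;
        }
    }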
If they manage to integrate this kind of graphics with the Java JVM in a coherent way, developers could more easily utilize multiple cores in games and write games once, then run them on all platforms and future consoles as a bonus.
      But the console makers don't want that. If somebody inserts a disc from an older or competitor's console into the new console, then nobody is buying a new disc to pay back the console's R&D subsidy.
    • Ray casting and Java (Score:3, Interesting)

      by KalvinB ( 205500 )
      Bunnies, http://www.dawnofthegeeks.com/ [dawnofthegeeks.com] (a Wolf3D clone) was originally written in Java. I then started translating it to C# and got about a 50% speed boost. I'm now able to do bump mapping, higher resolutions and still have playable framerates.

      And this is just for Ray Casting which is much simpler than Ray Tracing.

      During my development with Java I discovered that setting a pixel color to 0xFF000000 caused a slowdown. That's right, a black pixel would slow the framerate down. I had to set all pure black
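A minimal sketch of the workaround being described, assuming (since the comment is cut off) that the fix was simply to nudge pure-black pixels to a visually indistinguishable near-black before writing them into the int[] behind the image:

    // Hypothetical helper: the parent found that writing pure black (0xFF000000)
    // hurt the frame rate, so substitute a near-black value that looks the same.
    static int avoidPureBlack(int argb) {
        return (argb == 0xFF000000) ? 0xFF010101 : argb;
    }

    // Typical use when filling the int[] backing a TYPE_INT_ARGB BufferedImage:
    // pixels[y * width + x] = avoidPureBlack(color);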
  • by binaryspiral ( 784263 ) on Sunday March 02, 2008 @10:58AM (#22615348)
This is kind of stupid, actually. Why would I want a game on my mobile to be thrashing the CPU when it could be using basic sprites and other not-so-CPU-intensive methods to produce my game?

Ray-tracing may be possible on my 500 MHz smartphone's processor - but damn, I don't want to have to be plugged in to play.
  • by ducomputergeek ( 595742 ) on Sunday March 02, 2008 @11:11AM (#22615412)
Rendering my latest Blender project....
  • I was reading this thread hoping to find links to existing real-time ray-tracers, but found none. Does anyone know of any real-time ray-tracers? Open source, please...
  • by Grard Menfin ( 1178135 ) on Sunday March 02, 2008 @11:19AM (#22615452)
For those interested in real-time raytracing, the latest beta version of POV-Ray [povray.org] has a neat (but experimental) RTR feature. The source is now available for Windows and Unix/Linux. There are also demo scenes available (and another demo scene with pre-baked textures can be found here [oyonale.com]).
  • My understanding was that current techniques in game graphics were developed because they require less computing power to achieve a similar level of quality; or to put it another way, they produce better quality for the same amount of computation.

    If this is the case, why not just use the increasing processing power to produce better quality graphics using the current optimized techniques?

    Am I missing something? Intel's argument seems a bit like saying we should get rid of QuickSort and go back to Bubble so
    • Actually, there already are dedicated raytracing accelerators (like SaarCOR) and NVidia has presented simple realtime raytracing on the G80 at last year's CeBIT. It's not just Intel who is interested in this.
    • Re: (Score:3, Insightful)

      by smallfries ( 601545 )
You are assuming that there is only one variable (resolution) that can be adjusted. Actually, the quality of the scene is a function of two variables: resolution and scene complexity. When the complexity of scenes was low, rasterization produced much better results than raytracing for the same effort. Now that scene geometry has increased so much, we are reaching the point where raytracing will produce the same (or better) quality for less effort. The main issue is that rasterization is O(n) in scene complexi
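(Roughly, the asymptotic argument being made, ignoring constants and the various culling tricks available on the rasterization side, is

    \text{rasterization} \approx O(n) \text{ for } n \text{ primitives}, \qquad \text{ray tracing} \approx O(p \log n) \text{ for } p \text{ pixels with a kd-tree or BVH}

so at a fixed resolution, pushing scene complexity high enough eventually favours the ray tracer.)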
The problem is that the scanline rendering techniques we use for real-time graphics these days aren't an accurate solution to the rendering equation [wikipedia.org]. If you add more processing power you can render more triangles at higher framerates, but there isn't a physically-correct way to deal with the interreflections between objects (i.e. global illumination), and so the output of scanline rendering will never look quite real (it might look pretty close, but only with a lot of manual tweaking).

      Plain Whitted Ray tracin

  • by igomaniac ( 409731 ) on Sunday March 02, 2008 @11:26AM (#22615504)
If you want to know the future of real-time graphics, look at what Pixar and other animation and special effects houses are doing. None of them are using ray-tracing except to achieve specific effects in specific circumstances. The fact is that global illumination combined with scanline renderers simply produces better pictures with lower computational requirements.
    • by Anonymous Coward
This is a common meme, but it is mistaken. I'm sure you've noticed that Pixar's movies aren't yet photorealistic. Raytracing *is* the holy grail of graphics; in its most sophisticated form it basically amounts to a simulation of the actual physics of light propagation, and with Monte Carlo methods it can be solved, producing images that can truly be said to be indistinguishable from reality. The reason Pixar doesn't use it is that, believe it or not, Pixar has constraints on their rendering time. They c
Everything else is just a coarse approximation which doesn't correspond to our best knowledge of light propagation. Forward raytracing? Pshaw, just a complete and utter hack. Backward raytracing can handle caustics and GI... but at the same time, still atrocious hacks really which can't handle a whole host of optical effects.

        Quantum wave tracing baby, that's where it's at.
    • What does Pixar have to do with realtime graphics? Pixar's not DOING realtime graphics.

Pixar has the luxury of controlling every take, and of going back after the fact to re-render shots with different settings, or even with different algorithms (including ray-tracing), to fix any rendering flaws caused by whatever approximations they're using at that point. Realtime graphics do not have that luxury... if there's a problem in a scene, you can't go back and fix it.

      So whether raytracing is more or less appropri
    • by Aidtopia ( 667351 ) on Sunday March 02, 2008 @01:22PM (#22616086) Homepage Journal

Actually, Pixar has switched to ray tracing. Cars was ray traced [pixar.com] [PDF]. Skimming through the whitepapers on the Pixar site [pixar.com], it's clear ray tracing was also used extensively in Ratatouille.

      Even so, what Pixar is doing in feature films isn't particularly relevant to real-time ray tracing on mobile devices.

      • by j1m+5n0w ( 749199 ) on Sunday March 02, 2008 @02:48PM (#22616584) Homepage Journal

        It's worth pointing out (and it's mentioned in the paper you cite) that the main reason Pixar hasn't been doing much ray tracing until now is not performance or realism, but memory requirements. They need to render scenes that are too complex to fit in a single computer's memory. Scanline rendering is a memory-parallel algorithm, ray tracing is not. So, they're forced to split the scene up into manageable chunks and render them separately with scanline algorithms.

        This isn't an issue for games, which are going to be run on a single machine (perhaps with multiple cores, but they share memory).

    • Re: (Score:3, Interesting)

      Except Pixar has an army of shader developers working for 2 years on tweaking the rendering of practically every scene to ensure its photorealism. Scanline renderers may be faster but the human effort required to achieve photorealism is huge.
Ray tracing alone is not a silver bullet, but if it produces better results with less human effort, it's a net win.

      I found this on Pixar's RenderMan page (https://renderman.pixar.com/products/tools/renderman.html):

      "Ray Tracing and Global Illumination
      The ray tracing and
    • Re: (Score:2, Interesting)

      by big4ared ( 1029122 )

      Definitely.

      Pixar used some raytracing for Cars and later described it as a huge mistake. Certain shots took over 200 hours per frame. In terms of performance vs. quality, even in movies, they prefer to go scanline. You won't see games going to raytracing any time soon.

      In Transformers, they used cube-maps because raytracing was too slow. Is anyone here seriously going to make the case that Transformers looked bad because the reflections weren't perfect?

Raytracing does not make things easier. If anything it makes things a bit harder, or at the least it's a comparable workload.

Is raytracing really needed on a tiny mobile device at, say, 300x400?

    • Raytracing is more computationally expensive, but what about human expense? To get high performance while achieving comparable results with scanline rendering you need to prebake shadows, create reflection maps, pick which objects are going to be self-shadowed, and so on... many of these techniques involve selectively applying ray-tracing algorithms where you notice them, along with a myriad of other algorithms that are individually cheaper than raytracing for specific cases. At some point it makes sense to
At some point it might, but are we at the point where raytracing everything doesn't set us back performance-wise and visually?

Won't it always be true that raytracing is slower, uses more RAM, etc.? It's more physically correct, but the alternatives will always be faster. Which is better and more performance-oriented: baking ambient occlusion, or rendering it in real time with raytracing?

Several factors come into play, such as sample count vs. prebaked map resolution. Lots of samples in ambient occlusion can have a dramatic ef
        • by argent ( 18001 )
Won't it always be true that raytracing is slower, uses more RAM, etc.?

That's not a given. A few years back a university in Germany demonstrated a dedicated raytracing engine that was rendering video-game-quality realtime raytraced scenes. The thing is that this processor was only running at 66 MHz and only had 352 Mb/s of memory bandwidth. Today's GPUs run 8 times as fast, have 100 times as many transistors (which translates to more parallel ray computations), and many times the memory bandwidth.

          By conce
  • I came to browse Slashdot while waiting for some ray tracing of my own. I do atomistic modeling of nanomechanics and I'm rendering movies of how atoms wiggle and move during deformation. Here is a test shot of a 4 nm tall aluminum cylinder rendered at 150 femtoseconds per second of animation:

    Aluminum nanocolumn vibration (Quicktime, 14 MB) [umich.edu]

    It's amazing how nice ray tracing can look compared to other visualization methods. It took three hours to generate this 1000 frame movie. But as processors add

    • Here is a test shot of a 4 nm tall aluminum cylinder rendered at 150 femtoseconds per second of animation: Aluminum nanocolumn vibration (Quicktime, 14 MB)

      Ahhhhh spheres, every ray tracer's favorite primitive! :D

  • by sunderland56 ( 621843 ) on Sunday March 02, 2008 @01:23PM (#22616096)
The normal way things work in computing is that things trickle down from high-performance platforms to lower ones. So, where are the desktop games using raytracing?

If they want a phone to do 256x192 raytracing in real time, then a desktop with 1000x the compute power should easily be able to do 720x480 (full-res television) in real time. But, oddly enough, there are no such titles out there....
    • by batkiwi ( 137781 )
      Raytracing scales EXACTLY linearly with resolution. 720x480 is not an acceptable resolution for PC games.

They have a version of Quake 4 that can hit 100 FPS at 1080p resolution (1920x1080) with 8 cores. That means a top-of-the-line dual-core machine should be able to do 720p with no real problems.

The games will come once someone makes an engine for them, to be honest. It's not QUITE possible right now, but it would be demo-able at the resolution you stated.
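(Taking the parent's linear-in-pixels, linear-in-cores assumption at face value, the scaling works out to roughly

    100 \text{ fps} \times \frac{2 \text{ cores}}{8 \text{ cores}} \times \frac{1920 \times 1080 \text{ px}}{1280 \times 720 \text{ px}} = 100 \times 0.25 \times 2.25 \approx 56 \text{ fps}

at 720p on a dual-core machine, which is consistent with the "no real problems" claim.)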
Do they have raytracing fast enough to be useful in real-time animation? If not, then stick to pre-rendered pixmaps. Just because something can be done doesn't mean it should be.
In 1993 my DX2/66 laptop could ray trace pretty damned fast with POV-Ray, and much faster if I optimised the code (stupid specularity!) and put a heat sink on the processor (which, in my laptop, was accessible). That was 15 years ago, at a resolution of 800x600. Current cell phone processors run at least 10x that speed and have to deal with a resolution around 1/8 to 1/16 of that. And for what? A ray-traced game? As everyone else has pointed out, the games are neat-o looking but playing them sucks ass.

    I m
  • Just what we need: more people walking around playing Bejeweled on their cell phone, and _not_ paying attention to where they're going.

    I still don't get the whole cell phone craze. Get a Game Boy for cryin' out loud!
  • I'd prefer companies focus on decent vector graphics for applications before trying to move directly to ray tracing for games.

    Really, nothing pushes hardware, er... harder, than games. Application GUI implementation is still in the stone age, even on mobile devices.

Here is the reality of the situation: with a mobile device you're probably dealing with a program written in an interpreted language (Java). If your application is a game, then you're dividing your scarce processing resources across all elements of said game: AI, graphics, networking, whatever. It's all being done by one chip that is designed to CONSERVE BATTERY POWER. These are not the dual-core racehorses you see in desktops. Who wrote this article? They need failure stamped on their forehead. Maybe in
At the NHTV University of Applied Sciences (IGAD department), we are researching the use of ray tracing in games. Two student teams are working on proof-of-concept games using the technology. You can read about the findings so far in my paper on real-time ray tracing in the context of games, at http://igad.nhtv.nl/~bikker [igad.nhtv.nl] and http://igad.nhtv.nl/~bikker/files/rtgames.pdf [igad.nhtv.nl] . There's also a real-time demo that shows what a modern PC can do.
  • Obviously an idea from a company who wants to sell more expensive chips.

    What do we really want from our mobile devices? I want "infinite battery life" and no recharging.

    Moore's Law (which is really an increase in the number of transistors per given area) could give us much much better battery life for the same performance, IF we don't go the way of the desktop and squander it on bloated software and eye-candy.

    There are already displays which take almost no power (less than 1mW): http://www.qualcomm [qualcomm.com]
I completely agree. For a few years now, I have been saying there must be a decent market for an ultra-basic mobile phone with its main selling point being incredibly high battery life. A colour screen, games and a camera all seem a bit pointless if you run out of power and you're away from a charger.
Does this mean that we are supposed to favour QVGA over some of the WVGA (800x480) phones coming out over the last year (especially in Japan)?

    Give me res > (raytracing >) size any day
