
Intel Researchers Consider Ray-Tracing for Mobile Devices

Soulskill posted more than 6 years ago | from the smaller-pretty-pictures dept.

Graphics 120

An anonymous reader points out an Intel blog discussing the feasibility of Ray-Tracing on mobile hardware. The required processing power is reduced enough by the lower resolution on these devices that they could realistically run Ray-Traced games. We've discussed the basics of Ray-Tracing in the past. Quoting: "Moore's Law works in favor of Ray-Tracing, because it assures us that computers will get faster - much faster - while monitor resolutions will grow at a much slower pace. As computational capabilities outgrow computational requirements, the quality of rendering Ray-Tracing in real time will improve, and developers will have an opportunity to do more than ever before. We believe that with Ray-Tracing, developers will have an opportunity to deliver more content in less time, because when you render things in a physically correct environment, you can achieve high levels of quality very quickly, and with an engine that is scalable from the Ultra-Mobile to the Ultra-Powerful, Ray-Tracing may become a very popular technology in the upcoming years."


Fr1st Psot (-1, Troll)

Anonymous Coward | more than 6 years ago | (#22615104)

Yeah!

Inverse Moore's Law (5, Insightful)

click2005 (921437) | more than 6 years ago | (#22615114)

Moore's Law works in favor of Ray-Tracing, because it assures us that computers will get faster - much faster - while monitor resolutions will grow at a much slower pace.

Inverse Moore's Law states that the more time that developers spend on making games look 'pretty', the less time they spend on playability.

Re:Inverse Moore's Law (4, Funny)

koh (124962) | more than 6 years ago | (#22615122)

Inverse Moore's Law states that the more time that developers spend on making games look 'pretty', the less time they spend on playability.
My psychic powers tell me you've played one of the recent Final Fantasy titles.

Re:Inverse Moore's Law (2, Informative)

hvm2hvm (1208954) | more than 6 years ago | (#22615170)

nah, most games these days tend to focus on graphical and sound effects rather than playability. this trend is similar to the movies made en masse in Hollywood that have pretty good effects but lousy plots. most games i played on a mobile have low-quality graphics, but playability makes them worthwhile. what good is raytracing going to do if the game is hard to control or understand? many mobile devices don't have good support for multiple keypresses at once.

Re:Inverse Moore's Law (1)

irae (1152885) | more than 6 years ago | (#22615972)

What games did you play on a mobile? I find them all pretty boring...

Re:Inverse Moore's Law (3, Interesting)

hvm2hvm (1208954) | more than 6 years ago | (#22616270)

well i only play games on my mobile when i'm waiting for the bus or something. my point was that i tried some 3d racing games and some kind of 2d splinter cell clone, but the only ones i actually feel like playing when i'm bored are a Zuma clone and 2 other simple games. maybe it's because i don't need to pay much attention or because i don't need time to understand how to play them. but i can't see why anyone would want to play a complex game on such a small screen and with those really bad controls.

Re:Inverse Moore's Law (3, Insightful)

mpeskett (1221084) | more than 6 years ago | (#22615918)

Sooner or later graphics that are completely indistinguishable from real life will be available on low-end hardware, then they'll have to start competing by making good games instead of just pretty games.

Re:Inverse Moore's Law (1)

Instine (963303) | more than 6 years ago | (#22615998)

Why not make the phone a VR headset too...

Strap it on your noggin and immerse... or pick it up and dial.

Re:Inverse Moore's Law (1)

Takumi2501 (728347) | more than 6 years ago | (#22619828)

Considering how much of a driving hazard they can be already, do you really think that's such a good idea?

Re:Inverse Moore's Law (1)

trytoguess (875793) | more than 6 years ago | (#22616444)

Unfortunately no... when that happens then we'll probably focus on how "real" the characters feel based on things like AI instead of creating games with good gameplay

Re:Inverse Moore's Law (1)

carlmenezes (204187) | more than 6 years ago | (#22616610)

you think so? I bet they'll go after the physics then. After that, they'll go after the senses - hearing, touch, smell. Improving technology is much easier than writing good games. Let's just hope that by the time they're done with all that, there are still people around who know how to write good games.

Re:Inverse Moore's Law (5, Insightful)

jcnnghm (538570) | more than 6 years ago | (#22615176)

You could probably argue that is why the Wii is selling so well.

Re:Inverse Moore's Law (2, Insightful)

farkus888 (1103903) | more than 6 years ago | (#22615200)

I would agree with that argument. The Wii got me back into gaming after a break of a few years. I had quit because I was annoyed with games being all about graphics and not being fun enough to actually draw me in.

Re:Inverse Moore's Law (1)

JonTurner (178845) | more than 6 years ago | (#22615216)

Strange. I thought it said the lower the screen resolution, the less point there is in even bothering with ray-traced graphics. Oh wait, that's the "Common Sense Law."

Re:Inverse Moore's Law (1)

calebt3 (1098475) | more than 6 years ago | (#22615560)

Resolutions are increasing, just not as fast as computational power. We need something to do with all those extra cycles.

Re:Inverse Moore's Law (1)

bob.appleyard (1030756) | more than 6 years ago | (#22616056)

Physics simulations? AI? Actual game logic?

Re:Inverse Moore's Law (0)

Anonymous Coward | more than 6 years ago | (#22615344)

Well then you should love raytracing, as it is all about simplifying the process of generating pretty graphics. Today, game developers have to implement all sorts of code to produce special effects like shadows and reflections. For example, some game engines implement four or more completely different ways of generating shadows, depending on the situation (precomputed light maps, stencil shadows, shadow mapping, ambient occlusion), and the game art has to be designed with these shadowing methods in mind for best results. Raytracing trades off efficiency for generality and ease of implementation. Shadows, reflections, and other effects can be implemented quite simply by just tracing more rays, requiring comparatively little developer and artist effort, but *lots* of extra processing time.
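A toy sketch of what "just tracing more rays" means in practice - one sphere, one point light, and a single extra shadow ray per hit point (hypothetical scene values, plain Lambert shading, nothing more):

```python
# Minimal hard-shadow example: one extra ray per hit decides lit vs. shadowed.
import math

SPHERE_CENTER, SPHERE_RADIUS = (0.0, 0.0, -3.0), 1.0
LIGHT_POS = (5.0, 5.0, 0.0)

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a):
    length = math.sqrt(dot(a, a))
    return tuple(x / length for x in a)

def hit_sphere(origin, direction):
    # Nearest positive hit distance along a normalized direction, or None.
    oc = sub(origin, SPHERE_CENTER)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - SPHERE_RADIUS ** 2
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

def shade(origin, direction):
    t = hit_sphere(origin, direction)
    if t is None:
        return 0.0                                    # background
    point = tuple(o + t * d for o, d in zip(origin, direction))
    normal = norm(sub(point, SPHERE_CENTER))
    to_light = norm(sub(LIGHT_POS, point))
    if hit_sphere(point, to_light) is not None:       # the extra shadow ray
        return 0.0                                    # light is blocked
    return max(0.0, dot(normal, to_light))            # simple Lambert term

print(shade((0.0, 0.0, 0.0), (0.0, 0.0, -1.0)))
```

A rasterizer gets the same hard shadow only indirectly, e.g. by rendering the scene again from the light's point of view (shadow mapping) or by extruding shadow volumes.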

Re:Inverse Moore's Law (1)

a_claudiu (814111) | more than 6 years ago | (#22615358)

In this case the developers will spend less time with ray tracing. You don't need to lose time with shadows (very hard to emulate and never perfect with the current z-buffer rasterization method), depth-of-field tricks/shaders, emulating the infamous water refraction or a reflection in a mirror. What baffles me is ray tracing being introduced first on mobile and not on desktop computers.

Re:Inverse Moore's Law (4, Informative)

Slarty (11126) | more than 6 years ago | (#22615442)

For games, at least, shadows don't need to be perfect. Neither do reflection and (especially) refraction. The goal is all about rendering something that looks plausible, not perfect (although it's a bonus if you can get it). Most people (and especially gamers) just aren't going to notice if the shadows or caustics or what-not are a tiny bit "off".

Current rasterization approaches use a lot of approximations, it's true, but they can get away with that because in interactive graphics, most things don't need to look perfect. It's true that there's been a lot of cool work done lately with interactive ray tracing, but for anything other than very simple renderings (mostly-static scenes with no global illumination and hard shadows), ray tracers *also* rely on a bunch of approximations. They have to: getting a "perfect", physically correct result is just not a process that scales well. (Check out The Rendering Equation on wikipedia or somewhere else if you're interested; there's an integral over the hemisphere in there that has to be evaluated, which can recursively turn into a multi-dimensional integral over many hemispheres. Without cheating, the evaluation of that thing is going to kick Moore's law's ass for a long, long time.)
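For reference, the hemisphere integral in question is the one in Kajiya's rendering equation, roughly:

```latex
L_o(x,\omega_o) = L_e(x,\omega_o)
  + \int_{\Omega} f_r(x,\omega_i,\omega_o)\, L_i(x,\omega_i)\, (\omega_i \cdot n)\, d\omega_i
```

The recursion comes from the incoming radiance L_i at x being itself the outgoing radiance L_o of whatever surface lies in direction omega_i.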

By the way, the claim that with a "physically correct environment, you can achieve high levels of quality very quickly" doesn't really make much sense. What's a "physically correct environment" and what is it about rasterization that can't render one? How are we defining "high levels of quality" here? And "very quickly" is just not something that applies much to ray tracers at the moment, especially in the company of "physically correct". :-)

Re:Inverse Moore's Law (1)

a_claudiu (814111) | more than 6 years ago | (#22615732)

For having a fun game you don't need realistic graphics. The Wii is the proof. But this doesn't mean that it doesn't help; without graphics improvements we would still be playing Pong. The rendering equation is the same for rasterization as for ray tracing. The problem is how well both solutions scale going forward. Rasterization needs to handle the overhead of drawing useless invisible polygons, calculate shadows, and emulate reflections, sometimes by doubling the rendering effort and with shader tricks. Ray tracing, if implemented in hardware, can be more scalable and is more suitable for parallel processing.

Re:Inverse Moore's Law (2, Informative)

a_claudiu (814111) | more than 6 years ago | (#22615780)

About the scalability of ray tracing vs. rasterization. ftp://download.intel.com/technology/itj/2005/volume09issue02/art01_ray_tracing/vol09_art01.pdf [intel.com]

Re:Inverse Moore's Law (3, Informative)

Slarty (11126) | more than 6 years ago | (#22616630)

Sure, the rendering equation isn't ray tracing specific (it's a core graphics equation, independent of any one image generation method) but it's much easier to directly apply in ray tracing. There aren't many rasterization techniques that even attempt to solve it... the goal usually is just to add some ambient light effects which look like a plausible attempt at global illumination. AFAIK, even the latest, greatest game engines still stop short at something like baked-in ambient occlusion or screen-space darkening using the depth buffer. It looks cool, but physically accurate it ain't. It's much more natural to get "perfect" results in ray tracing, but that was kinda my point: getting those accurate results is pretty costly. If people don't notice the difference, why bother? Stick with the cheap approximation.

And about scalability, you're right, of course; ray tracing does scale better with scene complexity than rasterization does, and as computing power increases it will make more and more sense to use ray tracing. However, the ray tracing vs. rasterization argument has been going on for decades now, and while ray tracing researchers always seem convinced that ray tracing is going to suddenly explode and pwn the world, it hasn't happened yet and probably won't for the foreseeable future. Part of it is just market entrenchment: there are ray tracing hardware accelerators, sure, but who has them? And although I've never worked with one, I'd imagine they'd have to be a bit limited, just because ray tracing is a much more global algorithm than rasterization... I can't see how it'd be easy to cram it into a stream processor with anywhere near as much efficiency as you could with a rasterizer. On the other hand, billions are invested into GPU design every year, and even the crappiest computers have one nowadays. With GPUs getting more and more powerful and flexible by the year, and ray tracing basically having to rely on CPU power alone, the balance isn't going to radically shift anytime soon.

For the record, although I do research with both, I prefer ray tracing. It's conceptually simple, it's elegant, and you don't have to do a ton of rendering passes to get simple effects like refraction (which are a real PITA for rasterization). But when these articles come around (as they periodically do on Slashdot) claiming that rasterization is dead and ray tracing is the future of everything, I have to laugh. That may happen but not for a good long while.

Re:Inverse Moore's Law (1)

a_claudiu (814111) | more than 6 years ago | (#22617382)

I believe that switching to ray tracing ceased to be a real technical problem some time ago. The only things that kept rasterization going were the possibility of increasing GPU processor speed and the economic risk of implementing a new platform and standard. Now, with CPUs/GPUs hitting the ceiling of the brute-force speed-increase approach, the switch to parallel processing will help ray tracing (did you notice the latest power hogs?).

For the moment rasterization still has a small breathing space as long as NVidia/AMD are still struggling with SLI/CrossFire. Soon enough the technical problems will reach the directors. The main problem now is the amount of money needed for implementing a new concept and the risks involved (CEOs hate long-term plans). The only guys in the game are AMD (not so much money), NVidia (I hope they have the balls and will not stop after buying Ageia) and Intel (they are the guys in the article). The next war will be over the API specification, as it was with OpenGL vs. DirectX (for the moment the only one is OpenRT). I only hope that the API will not be developed by Microsoft (last one was for Karma only)

Re:Inverse Moore's Law (0)

Anonymous Coward | more than 6 years ago | (#22616186)

(Check out The Rendering Equation on wikipedia or somewhere else if you're interested; there's an integral over the hemisphere in there that has to be evaluated, which can recursively turn into a multi-dimensional integral over many hemispheres. Without cheating, the evaluation of that thing is going to kick Moore's law's ass for a long, long time.)
Using monte carlo methods, the integral can already be evaluated in full on today's computers. Not in real time, of course, but in a matter of a day or two per image. The problem is solved from a software point of view; we really are just waiting for Moore's law. If we had a computer from 2030 today, we could run something like Indigo [indigorenderer.com] and be generating photoreal images in real time right now.
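The Monte Carlo evaluation being referred to is, in its simplest form, an average over N random hemisphere directions drawn from some density p:

```latex
\int_{\Omega} f_r\, L_i\, (\omega_i \cdot n)\, d\omega_i
  \approx \frac{1}{N} \sum_{k=1}^{N}
    \frac{f_r(x,\omega_k,\omega_o)\, L_i(x,\omega_k)\, (\omega_k \cdot n)}{p(\omega_k)}
```

The estimate converges as N grows, which is why a path tracer can get arbitrarily close to the physical answer given enough time; the day-or-two render times come from needing a very large N to beat down the noise.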

photon mapping (2, Interesting)

j1m+5n0w (749199) | more than 6 years ago | (#22616694)

...getting a "perfect", physically correct result is just not a process that scales well. (Check out The Rendering Equation on wikipedia or somewhere else if you're interested; there's a integral over the hemisphere in there that has to be evaluated, which can recursively turn into a multi-dimension integral over many hemispheres. Without cheating, the evaluation of that thing is going to kick Moore's law's ass for a long, long time.)

Photon mapping is a pretty good way of approximating the rendering equation. It's slower than plain ray tracing, but much faster than path tracing. Real-time interactive global illumination isn't as computationally intractable as you are implying; it is likely to follow real-time ray tracing in not too many years.
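The speed comes from the gather pass: instead of recursing forever, the indirect light at a shading point x is estimated from the k nearest photons stored in the map within a disc of radius r, roughly:

```latex
L_r(x,\omega_o) \approx \sum_{p=1}^{k}
  f_r(x,\omega_p,\omega_o)\, \frac{\Delta\Phi_p}{\pi r^2}
```

(Strictly speaking this density estimate is biased - the bias shrinks as the photon count grows - but for games "consistent and fast" beats "exact and slow".)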

Re:Inverse Moore's Law (1)

Have Brain Will Rent (1031664) | more than 6 years ago | (#22617234)

Ray tracers don't rely on approximations for "anything other than very simple renderings"; they rely on approximations for all rendering. For one thing, they completely ignore the wave nature of the light interacting with the objects being modelled. There was an attempt (published at SIGGRAPH in the '70s, IIRC) to render by directly modelling the wave interactions of light with the modelled objects, but even the supercomputers of the time were inadequate... could be time to revisit that.

Funny, though: the argument about processing power increasing faster than image resolution requirements, and thereby gradually increasing the practicality of more computationally expensive approaches to image synthesis over time, was one I made in my thesis proposal seminar. There was one guy on my committee who just couldn't seem to get that... he was thick as a brick and unfortunately also a dick.

Gamers and their "games"... (1)

rmdyer (267137) | more than 6 years ago | (#22617602)

Sure, if what you want to do is play a "game" on the PC, then yea, forgo the "pretty" look and just concentrate on the "how many points can I score" aspect.

However, since Doom, I've never totally played a game just to pass the time scoring on my friends, or getting good at Tetris. I actually love the experience of exploring, and going somewhere else that I can't go in real life. In these terms, the bigger the world a game company can create, the more visually stunning and accurate the photorealism, and the more complex the scene that can be rendered at a reasonable rate, the better. I've enjoyed the Myst series of games for that reason as well, and I don't even care about the puzzles; I just like the wondrous "places" I get to visit after I get home from a hard day's work. In some ways it is just like a vacation. It's like a way of being a kid again, and the adventure of going on a long journey to some end that I know not. Environmentals are a "big" part of the game experience.

The most recent game in my arsenal of photorealism is Crysis. When I purchased a computer powerful enough to run that game with all the bells and whistles turned on, I was left slack-jawed and amazed at the possibilities. Crysis' only flaw is that it isn't big enough, and ends up just being a "leading you" game, just like Half-Life. There are implied "barriers" for the sake of disk space, and for the sake of the story.

Make no mistake, I will pay just about any kind of money to satisfy my appetite for other worlds and adventure. Yeah, playing the occasional puzzle is fun, and certainly when you are in the mood to just wail on your internet buddies there's nothing more satisfying than a good Quake-style FPS. But in my view, all the games I've played have taken me "somewhere" that I can't go in real life, and offer me an escape from my rather mundane world.

The games I've played since Doom...

Doom (all variants)
Doom 1.666
Doom 2

System Shock

ROTT (Rise of the Triad)

Mech Warrior 2

Heretic
Hexen
Hexen II

Unreal
Unreal Tournament

The Wheel of Time

Myst
Riven
Exile
UrU Ages beyond Myst
UrU Path of the Shell
Real Myst
Myst IV Revelation
Myst V End of Ages

The Elder Scrolls III, Morrowind
The Elder Scrolls IV, Oblivion --- (Truly visually stunning, awesome, and adventure-inspiring, with the exception of just plain bad execution. Why does every freaking thing on the road to other towns try to kill you? There are almost too many people and not enough variation in the conversations. Too many minor dumb quests to solve. I don't use the speedup.)

Tribes
Tribes 2 (These were good no matter what people say!)
Tribes: Vengeance (Great game, poor marketing)

Quake (all mods, world, etc)
Quake II (all mods)
Quake III Arena (all mods)
Quake III Team Arena
Doom 3
Quake 4

Halo

Half Life 2

Battlefield 2 (Environmental sounds not loud enough, comm voice over TOO LOUD!)
Battlefield 2142 (If I could talk about how bad this game was... geeze. I mean I played quite a bit to pass the time, but the implementation of the game was just poor compared to BF2.)

BioShock (off the beaten path and very nice)
Crysis --- (Absolutely stunning and amazing graphics, but poor AI, and very narrow storyline. However I understand there are limits to what computers can do, so I will give Crytek some credit here. Oh, and the whole alien ship experience was not so exciting. The multi-player needs to improve.)

I'm down to only two (maybe three) games per year. I'm waiting on id's Rage now. As you can tell, I'm a PC gamer. I won't play console games because I feel that they were just created for companies to make quick short-term money. I feel that consoles suppress innovation and prevent PC hardware technologies from improving. I could talk all day about why I despise consoles, but in the end I just put my money where my mouth is and don't buy them. However, I also don't think others should be made to play PC games. Game appliances certainly are good for people who want to play a quick game without needing the expertise of a computing experience, so they have their reason to exist.

Last rant... I like online multiplayer CTF-type games. Why oh why are CTF games disappearing? Also, why isn't more effort put into players' physical capabilities instead of just point and shoot? Tribes (rocket packs), Quake III Arena (jump pads), grappling-hook mods, rocket jumps, all allowed the player to take more advantage of the vertical 3D space available. One recent game, "Assassin's Creed", allows climbing, which is cool too. There needs to be a way for a player to gain some physical "abilities" beyond the standard strafe-left/right and jump movements. Some kind of multiple "mouse/keyboard gesturing" that allows gymnastics of some kind to be developed.

Finally, I can't emphasize enough how much environmental sounds count in setting the mood of a 3D rendered environment. There are some areas in games which have background sounds going on - low, filtered, ambient - that are just soo cool.

Oh, and to put this thread back on track: sorry, I will never play a game on a mobile device unless it is a PC. But I will play a PC ray-trace-rendered game at 640x480 or 800x600. In fact I can't wait for such a game.

End game.

But Will It Run On Intel 915 Chipsets? (1)

markdowling (448297) | more than 6 years ago | (#22618318)

Because Intel's just gotta make those quarterly numbers.

Just ask Microsoft and the Vista team in particular.

Re:Inverse Moore's Law (1)

Pseudonym (62607) | more than 6 years ago | (#22618502)

Inverse Moore's Law states that the more time that developers spend on making games look 'pretty', the less time they spend on playability.

The computer graphics inverse of Moore's Law is known as Blinn's Law, and it essentially says that audience expectation rises at the same rate as Moore's Law.

Originally posed for the animation/vfx industries, the actual statement of Blinn's Law is that the amount of time it takes to compute one frame of film is constant over time. The corollary is that it doesn't matter if you can render Toy Story in real time using your fancy hardware, because today people expect movies that look many times better than that.

Sweet!! (2, Funny)

glavenoid (636808) | more than 6 years ago | (#22615192)

It's about time for S.P.I.S.P.O.P.D. for mobile devices! I've only been waiting about 15 years!!!

Can't wait for my contacts list at sunset! (2, Interesting)

rs79 (71822) | more than 6 years ago | (#22615922)

That phones may be able to ray trace is news? Sounds more to me like Intel was tired of reading in the news all week how inferior their graphics stuff was because of the Microsoft Vista debacle, part eight - and suddenly we have an anonymous tip to a blog at Intel saying ray tracing on phones is "an opportunity to deliver more content in less time" and "Ray-Tracing may become a very popular technology in the upcoming years".

A popular technology? Like a working filesystem? They're real popular I hear. Or an on off button that actually works.

Slow news day + intel graphics dept astroturfing = ray tracing on phones is news.

Re:Sweet!! (0)

Anonymous Coward | more than 6 years ago | (#22618918)

I wonder if Google would consider a phone like that with Erlang for the networking bits - thus you may as well put Wings3D in there. And since it can ray-trace, I want it to have a portable version of Kerkythea. Got nothing else better to do when waiting in line, for the train, or at a restaurant table - may as well model and render something on a tiny little screen.

Too late Intel already been done! (0)

Anonymous Coward | more than 6 years ago | (#22615194)

Re:Too late Intel already been done! (1)

slashbob22 (918040) | more than 6 years ago | (#22615244)

It makes me wonder if we are the computational power to create random images for the creature on the other side of the screen.

Brilliant! (5, Funny)

neonmonk (467567) | more than 6 years ago | (#22615232)

I can just see two moustached elderly gents discussing research, possibly even drinking Guinness out of a bottle. They go silent for a few minutes and then one of them, whilst stroking his long drooping moustache, suddenly jumps up and proclaims:

"Holy Crap! Mobile gaming devices have tiny screens, imagine how easy it'd be to use advanced raytracing graphics!"
"Brilliant!"

Re:Brilliant! (1)

smallfries (601545) | more than 6 years ago | (#22615652)

"Unfortunately we're not as clever as those Intel chaps, how will we make it work?"
"Hmmm....."
(long pause)
"What about rendering really small scenes on a big stonking server and then using some sort of 'Network' to make the images appear?"
"That sounds like some kind of magic!"

Fantastic research [acm.org] .

"computational requirements" (2, Insightful)

nurb432 (527695) | more than 6 years ago | (#22615288)

"As computational capabilities outgrow computational requirements, the quality of rendering Ray-Tracing in real time will improve, and developers will have an opportunity to do more than ever before."

This attitude is why, even though our computers are 1000x faster than the ones we had 20 years ago, they actually perform worse overall.

Re:"computational requirements" (2, Insightful)

DarkOx (621550) | more than 6 years ago | (#22615428)

This attitude is why, even though our computers are 1000x faster than the ones we had 20 years ago, they actually perform worse overall.


I would say yes and no. It's one thing to have the computer do something simply because it can; I agree that is very wasteful. Raytracing is not needed on a 300x200 screen, especially while playing a game and things are moving.

On the other hand, 20 years ago, like today, we compromised and dispensed with things, or found ways to "fake it", in cases where the computer couldn't deliver. It's really not critical that shadows are rendered perfectly on my mobile phone while I'm playing Doom57 Mobile Edition. In an architecture program on my desktop, though, it would be nice to see how objects will truly look when lit.

It's silly to continue living with the compromises of the past when we no longer need to; it's equally silly and wasteful to do many of the things being done on production computers today just because we can (research is always good).

Re:"computational requirements" (2, Insightful)

typicallyterrific (934202) | more than 6 years ago | (#22616374)

I hate "the old days were so much better!" comments, especially when it comes to computing.

20 years ago, no one was connected to a 3mbps line, listening to music, with a mail and an IM client constantly pinging back, watching a video on youtube in one of twenty tabs in my firefox, with vim/emacs/eclipse open, azureus plugging away at some torrents as fast as it could, on two 1280x1024 screens in real colour, all simultaneously, on a single core I bought years ago. I still don't notice significant slowdowns.

Remember when emacs used to be slow? I don't, I wasn't computer literate back when 8 megs of swap was a huge deal.
Does anyone seriously miss the days when 512 × 384 pixels were an improvement and you couldn't run more than one app at once?

Re:"computational requirements" (1)

nurb432 (527695) | more than 6 years ago | (#22616790)

Get off my lawn ya whippersnapper.

if it has to be explained why the 'older days' of computing were better than today, you are too young and would never understand.

LOADING....... (1)

Locklin (1074657) | more than 6 years ago | (#22617010)

LOADING.......LOADING.......LOADING.......
INSERT DISK 4 AND PRESS <RETURN>
I, for one, do not welcome 20-year-old hardware.

Re:LOADING....... (1)

nurb432 (527695) | more than 6 years ago | (#22617280)

Sounds like you were using the wrong hardware and OS back then.

Re:"computational requirements" (1)

jcnnghm (538570) | more than 6 years ago | (#22617176)

I wouldn't say that. I remember back when my computer could only run a single application at a time, WordPerfect had white text on a blue background (Reveal Codes and dot-matrix printers, anyone?) and was slightly more usable than vim is today, and there was a noticeable lag between my typing and the text appearing on the screen. That wasn't even 20 years ago; that was the early to mid nineties.

I'm looking at my desktop right now, and I'm running no less than 6 different applications, including a web browser, an e-mail client, and an IDE. These applications are hundreds, if not thousands, of times more complex than their early counterparts. We can all fondly look back on the good old days, but let's be realistic: I'll take the vastly increased functionality and real-time responsiveness of multiple modern applications over the old stuff any day.

prog10 (4, Funny)

k2enemy (555744) | more than 6 years ago | (#22615300)

Too bad the source code for the highly optimized prog10 raytracer was lost in the great hard drive crash of '98.

Re:prog10 (4, Informative)

ByteSlicer (735276) | more than 6 years ago | (#22615382)

Man, I had to google that before I got it [cmdrtaco.net] .

Summary is misleading (2, Informative)

DigitAl56K (805623) | more than 6 years ago | (#22615308)

Moore's Law works in favor of Ray-Tracing, because it assures us that computers will get faster - much faster - while monitor resolutions will grow at a much slower pace.
Where did this "assurance" come from? Display resolutions grow as quickly as the latest games can run smoothly at the leading-edge dimensions. Since Moore's law is about doubling processing power, but doubling the display resolution means quadrupling the number of pixels, you may find the relationship is in fact much closer than you'd think.

Re:Summary is misleading (1)

vulgrin (70725) | more than 6 years ago | (#22615520)

Just trying some random math that may or may not be based upon bad assumptions:

1993 to 2008 = 15 years = 180 months = 10 Moore's Law Cycles
Monitor size over that period went from 640x480 to 1920x1200 (pulling from my butt, we could argue EGA vs VGA and what percentage of users actually have 1920x1200, but it's a place to start)

Pixels: 640x480 = 307,200 to 1920x1200 = 2,304,000, which is a factor of 7.5.

So, in summary, over the past 15 years Moore's Law has eclipsed monitor growth: 10 doublings (roughly a 1000x increase) vs. a 7.5x increase.

I'm sure loads of people will point out flaws in my logic. BUT, the purpose of the article too is to point out that Ray Tracing on mobile devices may be possible. So in effect, you're fixing the resolution at 640x480 or lower, in which case Moore's law definitely wins out.
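For what it's worth, the same back-of-the-envelope numbers in a few lines of Python (same assumptions as above: one doubling per 18 months, 640x480 then, 1920x1200 now):

```python
# Transistor growth vs. pixel growth, 1993-2008.
months = 15 * 12
doublings = months / 18                      # ~10 Moore's-law cycles
moore_factor = 2 ** doublings                # ~1024x more transistors

pixel_factor = (1920 * 1200) / (640 * 480)   # 2,304,000 / 307,200 = 7.5x more pixels

print(f"{moore_factor:.0f}x transistors vs. {pixel_factor:.1f}x pixels")
```

So per pixel there is roughly two orders of magnitude more silicon to play with than in 1993, and fixing the resolution at 640x480 or lower only widens the gap.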

Re:Summary is misleading (1)

vulgrin (70725) | more than 6 years ago | (#22615536)

Oh, and Moore's law is exponential. If resolution were exponential and each dimension doubled as often as Moore's law, then in the past 15 years our monitor resolution would have gone from 640x480 to 655,360x491,520.

So, I think it's safe to say the summary ISN'T misleading.

Re:Summary is misleading (1)

Ngarrang (1023425) | more than 6 years ago | (#22615644)

Where did this "assurance" come from? Display resolutions grow as quickly as the latest games can run smoothly at the leading-edge dimensions.
Moore's Law has achieved meme status. We now have Moore's Law of Business, of Display Resolution, of Hair Length, of a Geek's chance to have Sex, etc. I wonder just how many people have actually READ Moore's actual prediction and aren't just quoting it second-hand.

Re:Summary is misleading (2, Informative)

node 3 (115640) | more than 6 years ago | (#22615654)

Moore's Law says the number of transistors in a certain area at a certain cost will double about every 18 months. This effectively seems to double computer speed every 18 months.

Doubling the number of transistors on an LCD does not double the resolution (as you pointed out), it only multiplies each dimension by the square root of 2. Doubling the number of transistors on a CRT does nothing (well, maybe it gives you a more impressive OSD). But even limiting it to LCDs, it does not hold up. Display resolution does not follow Moore's Law. If it did, then just three years ago, a 30" LCD would be 1280x800, or that the current MacBook would be around 1900x1200.

The reason for this is not that Moore's Law doesn't apply to LCDs, it probably does. What's happening is that instead of using that technology increase solely to make ever higher resolution displays, it's used to make ever cheaper and higher quality displays at the same, or marginally improved, resolutions.

The thing you can directly measure with LCDs with regards to Moore's Law is dot pitch. Every 18 months or so (let's say 2 years as that's the outside figure), dot pitch would shrink by a factor of the square root of 2. That means that the display elements in your OS would shrink over time, and something that was 1" square in 2000 would now be 0.25" square. That's just since 2000. Go 8 years back again, and displays would have to be such that those 1" square icons would have to be 4" across and 4" tall!

Display resolutions grow as quickly as the latest games can run smoothly at the leading-edge dimensions.
That is outright false, as you are implying that graphic quality is not increasing beyond pixel resolution (since that's the point you are trying to disprove). In other words, if display resolution was keeping up with CPU power, pretty much in-step, then there would be no increase in polygon count, texture quality, etc, as all that would be happening is we'd be playing the original Doom with the same Doom quality, just at a higher resolution (or if you want to start with a 3d card rendered game, UT or take your pick of game from that era). But the fact is, game quality is increasing beyond just increasing the pixel count.

What you're noticing is that high-end games seem to match high-end displays at similar frame rates. This is not because display technology is keeping up with the silicon that drives your games. It's because game companies make use of every available cpu and gpu cycle until a certain approximate frame rate is reached.

Re:Summary is misleading (2, Interesting)

Anonymous Coward | more than 6 years ago | (#22615756)

Display resolutions have been getting higher, but the eye is not getting better, so there is a limit to the useful resolution of any display, and we are getting close. For a 24" widescreen at normal viewing distance, you're not going to ever want a resolution much higher than 1920x1200. Instead, you'd like the display to be bigger to take up a larger part of your field of view. But there's a problem with this; in fact your eyes can only take in a small part of the display at once. The eye has high resolution at the center of your field of vision but it quickly drops off, and your peripheral vision is very low resolution. If you render a high resolution image for your entire field of view, you are basically wasting almost all of that effort; only the part your eyes are focused on matters. What we really need is eye tracking to figure out which part of the image to render at high resolution and the rest can be rendered in low res. I think ultra-high-res monitors of the future should have built-in cameras running face recognition and eye tracking software. Incidentally, this would also enable a really cool user interface where you could control your computer by just looking and blinking.

Re:Summary is misleading (0)

Anonymous Coward | more than 6 years ago | (#22617428)

"What we really need is eye tracking to figure out which part of the image to render at high resolution and the rest can be rendered in low res. I think ultra-high-res monitors of the future should have built-in cameras running face recognition and eye tracking software. Incidentally, this would also enable a really cool user interface where you could control your computer by just looking and blinking."

I think what you are suggesting is for niche applications only; for entertainment applications there is plenty of power to go around. Not only that, it would have to be non-invasive for people to use it (working from a distance, not something you wear).

Re:Summary is misleading (2, Informative)

maxume (22995) | more than 6 years ago | (#22615872)

Sort of. You only need as many pixels as the eye can see at the distance the display is used at (and maybe some extra for leaning in). If you jump through some hoops, you can come up with a resolution for a given distance:

http://en.wikipedia.org/wiki/Eye#Acuity [wikipedia.org]
http://www.dansdata.com/gz029.htm [dansdata.com]

Piggy-backing on Dan's hand waving, 300 dpi at 1 foot is a decent rule of thumb, and waving my own hands, 1 foot is a reasonable minimum distance for a handheld device (I don't imagine most people holding something any closer than this for long periods of time; opinions may vary). So for a screen that is 5 x 10 inches, the benefits of going past 1500 x 3000 pixels rapidly diminish, especially for video/animation. For smaller screens, the pixel count is (obviously) even lower. So if you aren't in need of extraordinary resolution on a large screen, current pixel counts are pretty close to 'enough', especially for screens that don't occupy huge portions of your field of view, so you don't need to factor increases (especially large, continuous increases) in resolution into the comparison.

So we are at least on the threshold where increases in resolution are done 'because we can' rather than 'because there are obvious benefits', for lots of devices. Plenty of people already don't see a whole lot of benefit in the move to HDTV; Ultra-HDTV or whatever is going to be an even harder sell, as the difference will only show up at very close distances or on very large screens (and plenty of people already have the largest screen that they want as furniture).

High resolution text is probably orthogonal to a discussion about ray tracing, and it seems to be the biggest current motivation for increasing display resolution.
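Plugging the rule of thumb into numbers (a rough sketch; the 300 dpi at 1 foot figure and the 5 x 10 inch screen are the assumptions from above):

```python
# Pixel count beyond which extra handheld resolution stops paying off,
# using the ~300 dpi at 1 foot rule of thumb.
dpi = 300
width_in, height_in = 10, 5
print(f"{width_in * dpi} x {height_in * dpi} pixels")   # 3000 x 1500
```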

Re:Summary is misleading (0)

Anonymous Coward | more than 6 years ago | (#22616888)

but doubling the display resolution means quadrupling the number of pixels
In which universe is this true?

Re:Summary is misleading (1)

glitch23 (557124) | more than 6 years ago | (#22618650)

Since Moore's law is about doubling processing power, but doubling the display resolution means quadrupling the number of pixels, you may find the relationship is in fact much closer than you'd think.

No it isn't. It is about the # of transistors doubling which isn't always indicative of CPU calculation ability. From Wikipedia [wikipedia.org] : Moore's Law describes an important trend in the history of computer hardware: that the number of transistors that can be inexpensively placed on an integrated circuit is increasing exponentially, doubling approximately every two years.

Re:Summary is misleading (0)

Anonymous Coward | more than 6 years ago | (#22619280)

Since 1995 I've gone from 1024x768 (on a 15" CRT) to 1280x1024 on a 21" LCD. There's a definite limit to how small pixels have to be before we don't see them anymore, and there's a definite limit to how big monitors are convenient. I'm not quite there yet, but something like 2000*1600 on a 5:4 24" monitor would be more than enough.

In the same time, I've gone from a 66MHz 486 to a 2.4GHz Core2Duo.

Now, let's say I actually had that 2000*1600 monitor: That's a 4.07-fold increase - let's round to 4.
Now, as for the CPU. Looking at just one core, I won't hazard to guess how much it gets done per tick compared to a 486 when you start looking at things like SSE and a much larger cache, so let's assume "exactly the same". That gives us a 36-fold increase. Add SSE and the like, and we can probably round to 40 - an order of magnitude more. Factor in that raytracing is embarrassingly parallel, and it looks even better.

Re:Summary is misleading (0)

Anonymous Coward | more than 6 years ago | (#22620466)

For today's (and tomorrow's) games consoles, 1920x1080 is where it's at. This will be true for at least the next 20 years. Rasterising engines scale better with resolution; ray tracing engines scale better with geometry. Given the resolution for games consoles isn't going to change much, you're going to see greater benefits from a ray tracing engine on a console versus a rasterising engine once the crossover point is passed. The next Cell processor and Larrabee will probably be the first processors enabling practical real-time, animated ray tracing engines. Ray tracing with photon mapping at 1080p is going to give you high quality results.

Good for Intel, needs more work (4, Interesting)

should_be_linear (779431) | more than 6 years ago | (#22615318)

As Intel couldn't compete with ATI/nVidia on 3D rendering performance, they simply redefined the rules of the game. Now they seem ahead of everyone else in real-time ray tracing, at least based on publicly presented papers. Next, they need to integrate this into some bigger picture of a "new gaming platform". If they manage to integrate these graphics with the Java JVM in a coherent way, developers could more easily utilize multiple cores in games and be able to write games once and run them on all platforms/future consoles as a bonus. That would be a big step towards letting developers focus on gameplay and not on DirectX/OpenGL/PS3/... API generations, extension nuances, tricks for simulating shadows, optimizing polygon count in big scenes, ... ray tracing makes all this simple without requiring effort on the developer's side. Yes, I know Java is some percent slower than C++, but in Java it is so much easier to utilize multiple cores (especially when it comes to debugging) that I am sure performance will be gained, not lost, on modern CPUs.

Lockout chip business model (1)

tepples (727027) | more than 6 years ago | (#22615342)

If they manage to integrate these graphics with the Java JVM in a coherent way, developers could more easily utilize multiple cores in games and be able to write games once and run them on all platforms/future consoles as a bonus.
But the console makers don't want that. If somebody inserts a disc from an older or competitor's console into the new console, then nobody is buying a new disc to pay back the console's R&D subsidy.

Re:Lockout chip business model (0)

Anonymous Coward | more than 6 years ago | (#22616890)

That logic falls apart from some angles. The one that first comes to mind is: and what happens when someone releases a console that plays old games AND is profitable by itself? (Like, say, every mainline Nintendo system and portable? Profitable, I mean, not necessarily playing the competition's old games) Then that one company is denying you some of those old-gamer sales.

But then, if your own company's box also plays the other guy's old games too, then some number of people will be buying your box to play the other guy's old games, and it balances out. And of course, playing your own previous generation system's old games works out well (see the playstation 2's success) because either way, people are buying your games.

The second thing that comes to mind is that the current generation of consoles is already networked and selling downloadable versions of older games for a few bucks each, and this model is profitable right now, so why wouldn't the same model (minus the cost of having to write emulators!) remain profitable for the next generation?

Re:Lockout chip business model (1)

tepples (727027) | more than 6 years ago | (#22617002)

The one that first comes to mind is: and what happens when someone releases a console that plays old games AND is profitable by itself?
Nintendo consoles typically play only from one generation back. GBA plays GBC games. DS plays GBA games but not GBC games. GameCube with Game Boy Player plays GBA games. Wii plays GameCube games but not GBA games.

The second thing that comes to mind is that the current generation of consoles is already networked and selling downloadable versions of older games for a few bucks each, and this model is profitable right now, so why wouldn't the same model (minus the cost of having to write emulators!) remain profitable for the next generation?
For one thing, the low-dollar games compete with the full retail games. That's part of why Nintendo is taking so long to release its NES back catalog on Wii Shop Channel, despite that other games with the same mapper are available. For another, how long would it take to download one of these games over a dial-up Internet connection?

Ray casting and Java (2, Interesting)

KalvinB (205500) | more than 6 years ago | (#22618444)

Bunnies, http://www.dawnofthegeeks.com/ [dawnofthegeeks.com] (a Wolf3D clone) was originally written in Java. I then started translating it to C# and got about a 50% speed boost. I'm now able to do bump mapping, higher resolutions and still have playable framerates.

And this is just for Ray Casting which is much simpler than Ray Tracing.

During my development with Java I discovered that setting a pixel color to 0xFF000000 caused a slowdown. That's right, a black pixel would slow the framerate down. I had to set all pure black pixels to not quite black pixels.

http://www.dawnofthegeeks.com/index.php?page=blog&offset=58 [dawnofthegeeks.com]

I also found that Java is much slower at doing a "v++" than C.

Those quirks aren't a big deal when you're not trying to do a lot of math. But they will cripple a Ray Tracer. If Sun could optimize Java better it might be viable but for now Ray Tracing based games would have to be written at a lower level even with a small resolution.

Maybe people don't expect enough out of handhelds to notice that the graphics are "poor" and that they could be better. In that case you could probably get away with Java. People don't expect much out of a console until someone starts really pushing the limit and then everyone has to.

Battery Life vs Graphics (2, Insightful)

binaryspiral (784263) | more than 6 years ago | (#22615348)

This is kind of stupid actually. Why would I want a game on my mobile to be thrashing the cpu when it could be doing some basic sprites and other not-so-cpu-intensive methods to produce my game?

Ray-tracing may be possible on my 500Mhz smartphone's processor - but damn, I don't want to have to be plugged in to play them.

Imagine a Beowolf cluster of those.... (2, Funny)

ducomputergeek (595742) | more than 6 years ago | (#22615412)

Rendering my latest blender project....

I can image that, but... (1)

Gazzonyx (982402) | more than 6 years ago | (#22615738)

But will it run on Linux?

Existing Real-Time Ray-Tracers? (1)

RAMMS+EIN (578166) | more than 6 years ago | (#22615432)

I was reading this thread hoping to find links to existing real-time ray-tracers, but found none. Does anyone know of any real-time ray-tracers? Open source, please...

Re:Existing Real-Time Ray-Tracers? (1)

nickthecook (960608) | more than 6 years ago | (#22615970)

IBM has one for the Cell processor. On my PS3, with the default scene, I get about 4fps at 720p, but you can plug a bunch of Cell-based machines into the same network and they will cluster and distribute the workload.

Re:Existing Real-Time Ray-Tracers? (1)

sowth (748135) | more than 6 years ago | (#22616340)

It looks like Povray [povray.org] is experimenting with one.

Re:Existing Real-Time Ray-Tracers? (1)

j1m+5n0w (749199) | more than 6 years ago | (#22616468)

There are a bunch of different ones discussed on this forum [ompf.org] . Arauna and Radius are two. There's also mine [pdx.edu] , but it's neither fast nor featureful.

Real time raytracing with POV-Ray (4, Informative)

Gérard Menfin (1178135) | more than 6 years ago | (#22615452)

For those interested in real-time raytracing, the latest beta version of POV-Ray [povray.org] has a neat (but experimental) RTR feature. The source is now available for Windows and Unix/Linux. There are also demo scenes available (and another demo scene with pre-baked textures can be found here [oyonale.com]).

Not sure I get their argument (1)

NoNeeeed (157503) | more than 6 years ago | (#22615502)

My understanding was that current techniques in game graphics were developed because they require less computing power to achieve a similar level of quality; or to put it another way, they produce better quality for the same amount of computation.

If this is the case, why not just use the increasing processing power to produce better quality graphics using the current optimized techniques?

Am I missing something? Intel's argument seems a bit like saying we should get rid of QuickSort and go back to Bubble sort, because we now have the processing power to do it quickly. Is there something about ray-tracing vs current polygon techniques that means that the difference is actually narrowing as processor power increases?

Is this just a processor company - unsurprisingly - suggesting that we use more processing power, or is there something concrete to their argument that I've missed?

Re:Not sure I get their argument (1)

Anonymous Coward | more than 6 years ago | (#22615598)

Raytracing scales almost linearly with the number of pixels. For low resolutions it can actually become faster than other rendering techniques, while giving a comparable, or even superior image quality.

But the biggest advantage lies in the fact that raytracing is "embarrassingly parallel." Nowadays it seems that the trend is not for processors to become faster, but to have more and more cores. It is much easier for a raytracer to take advantage of that, than for conventional methods.
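A minimal sketch of the "embarrassingly parallel" part: every pixel's primary ray is independent of every other, so image rows can simply be handed out to separate cores (trace_pixel below is a stand-in, not a real tracer):

```python
# Each row of the image is an independent work item.
from multiprocessing import Pool

WIDTH, HEIGHT = 320, 240

def trace_pixel(x, y):
    # Placeholder: build the primary ray for (x, y), intersect the scene,
    # shade the hit, return a colour.
    return (x ^ y) & 0xFF

def render_row(y):
    return [trace_pixel(x, y) for x in range(WIDTH)]

if __name__ == "__main__":
    with Pool() as pool:
        image = pool.map(render_row, range(HEIGHT))   # rows fan out across cores
    print(len(image), "rows,", len(image[0]), "pixels per row")
```

Rasterizers parallelize well too, of course, but they coordinate through a shared depth buffer; a ray tracer's pixels share nothing but read-only scene data.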

Re:Not sure I get their argument (0)

Anonymous Coward | more than 6 years ago | (#22615814)

Rasterization is also embarrassingly parallel.

I don't see the point of using ray tracing for eye rays, period. Then choose the right technique for the effect you want for shadows, reflections, etc. Sometimes that technique is ray tracing. Dynamic scenes are still problematic for the real-time ray tracing crowd, while rasterization handles that problem well.

I don't know of any current games that don't already cast rays for some purpose (usually not for visibility in graphics though...). Ray tracing is just another tool in the bag of tricks. Know when to use it.

Re:Not sure I get their argument (1)

Jesus_666 (702802) | more than 6 years ago | (#22616580)

Actually, there already are dedicated raytracing accelerators (like SaarCOR) and NVidia has presented simple realtime raytracing on the G80 at last year's CeBIT. It's not just Intel who is interested in this.

Re:Not sure I get their argument (2, Insightful)

smallfries (601545) | more than 6 years ago | (#22616716)

You are assuming that there is only one variable (resolution) that can be adjusted. Actually the quality of the scene is a function of two variables: resolution and scene complexity. When the complexity of scenes was low, rasterization produced much better results than raytracing for the same effort. Now that scene geometry has increased so much we are reaching the point where raytracing will produce the same (or better quality) for less effort. The main issue is that rasterization is O(n) in scene complexity while raytracing is O(log n). Of course there are lots of other issues and tradeoffs otherwise we would be using raytracing in games already.

If you're interested there is a detailed comparison available here [utexas.edu] .
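The O(log n) figure assumes an acceleration structure such as a bounding volume hierarchy: a ray only descends into child boxes it actually hits, so most of the scene is never touched at all. A rough sketch of that traversal (the Node layout and the hit_primitive callback are made up for illustration):

```python
# Why a BVH makes per-ray cost roughly logarithmic in scene size:
# subtrees whose bounding box the ray misses are skipped wholesale.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    bbox: tuple                          # ((xmin, ymin, zmin), (xmax, ymax, zmax))
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    primitives: Optional[List] = None    # only leaf nodes carry primitives

def hits_box(origin, direction, bbox):
    # Standard slab test: does the ray origin + t*direction (t >= 0) cross the box?
    tmin, tmax = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, *bbox):
        if abs(d) < 1e-12:
            if o < lo or o > hi:
                return False
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        tmin, tmax = max(tmin, min(t0, t1)), min(tmax, max(t0, t1))
    return tmin <= tmax

def traverse(node, origin, direction, hit_primitive):
    if node is None or not hits_box(origin, direction, node.bbox):
        return                                       # prune the whole subtree
    if node.primitives is not None:
        for prim in node.primitives:
            hit_primitive(prim, origin, direction)   # exact ray/primitive test
        return
    traverse(node.left, origin, direction, hit_primitive)
    traverse(node.right, origin, direction, hit_primitive)
```

A plain rasterizer, by contrast, has to at least look at every triangle to decide whether to cull it, hence the O(n).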

Raytracing is not the holy grail of graphics (4, Informative)

igomaniac (409731) | more than 6 years ago | (#22615504)

If you want to know the future of real-time graphics, look at what Pixar and other animation and special effects houses are doing. None of them are using ray-tracing except to achieve specific effects in specific circumstances. The fact is that global illumination combined with scanline renderers simply produces better pictures with lower computational requirements.

Raytracing IS the holy grail of graphics (0)

Anonymous Coward | more than 6 years ago | (#22615788)

You are plain wrong. Raytracing is more powerful and conceptually simpler than all those fake techniques. Raytracing + programmable pixel shaders = the future.

Pixar is not the holy grail of graphics (2, Informative)

Anonymous Coward | more than 6 years ago | (#22615852)

This is a common meme, but it is mistaken. I'm sure you've noticed that Pixar's movies aren't yet photorealistic. Raytracing *is* the holy grail of graphics; in its most sophisticated form it basically amounts to a simulation of the actual physics of light propagation, and with monte carlo methods it can be solved, producing images that can truly be said to be indistinguishable from reality. The reason Pixar doesn't use it is that, believe it or not, Pixar has constraints on their rendering time. They can't spend days to render a single frame; they need to get movies out the door. But given a couple dozen more years of Moore's law, raytracing would be everyone's rendering algorithm of choice. It's simply the most general, flexible, and simple rendering algorithm possible, so, absent computational constraints, it's what everyone would use. I'd say that qualifies it as the "holy grail" of graphics, wouldn't you?

Re:Pixar is not the holy grail of graphics (0)

Anonymous Coward | more than 6 years ago | (#22615914)

Raytracing is not the holy grail of 3D graphics. Global illumination [wikipedia.org] is closer to what you are thinking of. GP post is incorrect, Pixar doesn't do real GI, so you are correct in dismissing his analysis of their techniques.

Re:Pixar is not the holy grail of graphics (0)

Anonymous Coward | more than 6 years ago | (#22616044)

To be more specific, I suppose I meant "monte carlo path tracing" is the holy grail of 3D graphics (raytracing in the style of these real-time demos certainly is not). If you want to do true global illumination, taking into account *all* effects that light propagation has on an image, you basically have to use raytracing. The only non-raytracing algorithm for global illumination anyone uses is radiosity, but radiosity only simulates diffuse interreflection; monte carlo path tracing can simulate *everything*.

Quantum wave tracing is the holy grail (1)

Pinky's Brain (1158667) | more than 6 years ago | (#22617744)

Everything else is just a coarse approximation which doesn't correspond to our best knowledge of light propagation. Forward raytracing? Pshaw, just a complete and utter hack. Backward raytracing can handle caustics and GI... but at the same time, still atrocious hacks really which can't handle a whole host of optical effects.

Quantum wave tracing baby, that's where it's at.

What does Pixar have to do with realtime graphics? (2, Interesting)

argent (18001) | more than 6 years ago | (#22615868)

What does Pixar have to do with realtime graphics? Pixar's not DOING realtime graphics.

Pixar has the luxury of controlling every take, and going back after the fact to re-render shots with different settings, or even to use different algorithms (including ray-tracing) to fix any rendering flaws caused by whatever approximations they're using at that point. Realtime graphics do not have that luxury... if there's a problem in a scene, you can't go back and fix it.

So whether raytracing is more or less appropriate for realtime graphics, whether Pixar uses it or not is irrelevant.

Re:Raytracing is not the holy grail of graphics (3, Informative)

Aidtopia (667351) | more than 6 years ago | (#22616086)

Actually Pixar has switched to Ray Tracing. Cars was ray traced [pixar.com] [PDF]. Skimming through the whitepapers on the Pixar site [pixar.com] , it's clear ray tracing was also used extensively in Ratatouille.

Even so, what Pixar is doing in feature films isn't particularly relevant to real-time ray tracing on mobile devices.

it's about memory, not performance or realism (3, Informative)

j1m+5n0w (749199) | more than 6 years ago | (#22616584)

It's worth pointing out (and it's mentioned in the paper you cite) that the main reason Pixar hasn't been doing much ray tracing until now is not performance or realism, but memory requirements. They need to render scenes that are too complex to fit in a single computer's memory. Scanline rendering is a memory-parallel algorithm; ray tracing is not. So they're forced to split the scene up into manageable chunks and render them separately with scanline algorithms.

This isn't an issue for games, which are going to be run on a single machine (perhaps with multiple cores, but they share memory).

Re:Raytracing is not the holy grail of graphics (0)

Anonymous Coward | more than 6 years ago | (#22616480)

Ray Tracing has one property that is, in fact, the holy grail with regards to graphics.

All content can be described locally, in isolation. In today's polygon renderers, shadows, reflections, and tons of other effects are special-cased by the "game engine" or by Pixar's renderers and such. With ray tracing, those effects fall out of the renderer itself, with no special casing.

It reduces constraints on the content producers and eliminates programmer involvement in the look and appearance of the product. That seems pretty close to a holy-grail type of technology.
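To make the "no special cases" point concrete, here is what it looks like in a toy Whitted-style tracer (Python, with an invented scene interface): a shadow is just a ray toward the light and a reflection is just a recursive ray, with no shadow maps or reflection maps baked by the engine.

    def shade(scene, ray, depth=0, max_depth=3):
        hit = scene.intersect(ray)                  # assumed scene API
        if hit is None or depth >= max_depth:
            return scene.background_color
        color = scene.ambient * hit.material.diffuse
        for light in scene.lights:
            # Shadows: just ask whether anything blocks the path to the light.
            if not scene.occluded(hit.ray_toward(light), light):
                color += hit.material.diffuse * hit.lambert_term(light)
        if hit.material.reflectivity > 0:
            # Reflections: just trace another ray; no reflection map needed.
            color += hit.material.reflectivity * \
                     shade(scene, hit.reflect(ray), depth + 1, max_depth)
        return color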

That being said, it isn't clear that ray tracing is "it". Something else might be invented that has the desirable properties of ray tracing without the cost, or that has additional desirable properties ray tracing doesn't (handling diffuse interaction, say). I am of the opinion that production implementations trend toward the brute-force algorithm as computing power increases: think of the Z-buffer, which at one point was "too expensive" in terms of memory size and per-pixel access; programmable shaders are similar, and shadow maps are tending toward brute force. A notable exception is the FFT for Fourier transforms.

Computing power is a funny thing: it is *very* hard to visualize what you would do differently if you had 4x the computing power you have today (you can do what you already know, much faster, but what else?).

Re:Raytracing is not the holy grail of graphics (2, Interesting)

Coward Anonymous (110649) | more than 6 years ago | (#22616534)

Except Pixar has an army of shader developers working for two years on tweaking the rendering of practically every scene to ensure its photorealism. Scanline renderers may be faster, but the human effort required to achieve photorealism is huge.
Ray tracing alone is not a silver bullet, but if it produces better results with less human effort, it's a net win.

I found this on Pixar's RenderMan page (https://renderman.pixar.com/products/tools/renderman.html):

"Ray Tracing and Global Illumination
The ray tracing and global illumination features have been integrated with Pixar's highly evolved implementation of the REYES "scanline" rendering algorithm so that you only incur the overhead associated with these effects when and where you need them. RenderMan shader developers can selectively invoke RenderMan's new ray tracing subsystem to invent new solutions to difficult production problems or to achieve physically correct illumination effects."


My interpretation: If you can't figure out how to manually tweak the scene, throw CPU power at it.

Re:Raytracing is not the holy grail of graphics (2, Interesting)

big4ared (1029122) | more than 6 years ago | (#22619332)

Definitely.

Pixar used some raytracing for Cars and later described it as a huge mistake. Certain shots took over 200 hours per frame. In terms of performance vs. quality, even in movies, they prefer to go scanline. You won't see games going to raytracing any time soon.

In Transformers, they used cube-maps because raytracing was too slow. Is anyone here seriously going to make the case that Transformers looked bad because the reflections weren't perfect?

Intel is nuts. Raytracing is not easier. (1)

Jackie_Chan_Fan (730745) | more than 6 years ago | (#22615554)

Raytracing does not make things easier. If anything it makes things a bit harder, or at the least it's a comparable workload.

Is raytracing really needed on a tiny mobile device at, say, 300x400?

Easier for whom? (1)

argent (18001) | more than 6 years ago | (#22615906)

Raytracing is more computationally expensive, but what about the human expense? To get high performance while achieving comparable results with scanline rendering you need to prebake shadows, create reflection maps, pick which objects are going to be self-shadowed, and so on... many of those techniques amount to selectively applying ray-tracing algorithms where you'd actually notice the effects, alongside a myriad of other algorithms that are individually cheaper than raytracing for specific cases. At some point it makes sense to simplify the code and the artists' job and just raytrace everything.

Realtime raytracer (0)

Anonymous Coward | more than 6 years ago | (#22615562)

For anyone interested, I wrote a prototype realtime raytracer @ http://forre.st/tracer [forre.st].

Forrest

Scientific visualization (1)

AlpineR (32307) | more than 6 years ago | (#22615794)

I came to browse Slashdot while waiting for some ray tracing of my own. I do atomistic modeling of nanomechanics and I'm rendering movies of how atoms wiggle and move during deformation. Here is a test shot of a 4 nm tall aluminum cylinder rendered at 150 femtoseconds per second of animation:

Aluminum nanocolumn vibration (Quicktime, 14 MB) [umich.edu]

It's amazing how nice ray tracing can look compared to other visualization methods. It took three hours to generate this 1000-frame movie, but as processors add cores or ray tracing gets hardware acceleration, this can speed up dramatically.

Doing ray tracing on small screens makes a lot of sense since you're restricted to low resolution anyway. A Nintendo DS or Apple iPhone has a fraction of the resolution of a desktop, but the ratio of processor speed to screen pixels might be better (or become better in the near future).
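The back-of-the-envelope version of that pixels-versus-clock argument, using the 3 hours / 1000 frames figure from the movie above (the clock speeds below are rough guesses for current hardware, not measured numbers):

    # Offline cost from the movie above: 3 hours for 1000 frames.
    seconds_per_frame = 3 * 3600 / 1000.0        # ~10.8 s per frame

    # Rough cycles-per-pixel budget at 30 FPS (clock figures are assumptions).
    devices = {
        "desktop": (3.0e9, 1680 * 1050),
        "DS":      (67e6,  256 * 192),
        "iPhone":  (412e6, 480 * 320),
    }
    for name, (hz, pixels) in devices.items():
        print(name, "cycles per pixel per 30 FPS frame:", round(hz / (pixels * 30)))

Under those guessed numbers the handhelds have a cycles-per-pixel budget in the same ballpark as a desktop, which is the whole point of the article.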

Re:Scientific visualization (1)

Have Brain Will Rent (1031664) | more than 6 years ago | (#22617514)

Here is a test shot of a 4 nm tall aluminum cylinder rendered at 150 femtoseconds per second of animation: Aluminum nanocolumn vibration (Quicktime, 14 MB)

Ahhhhh spheres, every ray tracer's favorite primitive! :D

Where's the desktop version? (2, Insightful)

sunderland56 (621843) | more than 6 years ago | (#22616096)

The normal way things work in computing is that they trickle down from high-performance platforms to lower ones. So, where are the desktop games using raytracing?

If they want a phone to do 256x192 raytracing in real time, then a desktop with 1000x the compute power should easily be able to do 720x480 (full-resolution television) in real time. But, oddly enough, there are no such titles out there....

Re:Where's the desktop version? (1)

batkiwi (137781) | more than 6 years ago | (#22618144)

Raytracing cost scales EXACTLY linearly with the number of pixels. 720x480 is not an acceptable resolution for PC games.

They have a version of Quake 4 that can hit 100 FPS at 1080p (1920x1080) on 8 cores. That means a top-of-the-line dual-core machine should be able to do 720p with no real problems.
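Scaled naively from that 100 FPS / 1080p / 8-core figure, treating cost as proportional to pixel count and assuming it splits cleanly across cores (which is the assumption being made here, not a measurement):

    base_fps, base_pixels, base_cores = 100, 1920 * 1080, 8

    def estimated_fps(width, height, cores):
        # Cost assumed linear in pixel count and parallelized evenly across cores.
        return base_fps * (base_pixels / float(width * height)) * (cores / float(base_cores))

    print(estimated_fps(1280, 720, 2))   # ~56 FPS at 720p on a dual core
    print(estimated_fps(720, 480, 2))    # ~150 FPS at 480p on a dual core

So a dual core lands somewhere around playable at 720p under those assumptions, which is roughly the claim above.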

The games will come once someone makes an engine for them, to be honest. It's not QUITE possible right now, but it would be demo-able at the resolution you stated.

Huh? (1)

Brandybuck (704397) | more than 6 years ago | (#22616134)

Do they have raytracing down fast enough to be useful in real-time animation? If not, then stick to pre-rendered pixmaps. Just because something can be done doesn't mean it should be.

Is this 1997? (1)

ReallyEvilCanine (991886) | more than 6 years ago | (#22616614)

In 1993 my DX2/66 laptop could ray trace pretty damned fast with POVray, and much faster if I optimised the code (stupid specularity!) and put a heat sink on the processor (which, in my laptop, was accessible). That was 15 years ago, at a resolution of 800x600. Current cell phone processors run at least 10x that speed and have to deal with around 1/8 to 1/16 of that resolution. And for what? A ray-traced game? As everyone else has pointed out, the games are neat-o looking but playing them sucks ass.

I made the mistake of buying NFS Underground a few years ago. It looked pretty and the physics were OK, but then it quickly became a chore: you have to become ghetto trash and "pimp out" the car rather than actually fucking drive. And you have to do all sorts of non-driving shit to continue to the few drivy bits. Within 48 hours I fired up MAME and went back to playing Atari Sprint 1979, where I could actually, you know, drive the fucking car.

Screw phones (1)

billcopc (196330) | more than 6 years ago | (#22616736)

Just what we need: more people walking around playing Bejeweled on their cell phone, and _not_ paying attention to where they're going.

I still don't get the whole cell phone craze. Get a Game Boy for cryin' out loud!

Games get the best stuff? (2, Informative)

Waccoon (1186667) | more than 6 years ago | (#22616792)

I'd prefer companies focus on decent vector graphics for applications before trying to move directly to ray tracing for games.

Really, nothing pushes hardware, er... harder than games. Application GUI implementation is still in the stone age, even on mobile devices.

Idiots (1)

Bitter and Cynical (868116) | more than 6 years ago | (#22616940)

Here is the reality of the situation: with a mobile device you're probably dealing with a program written in an interpreted language (Java). If your application is a game, then you're dividing your scarce processing resources across all of that game's elements: AI, graphics, networking, whatever. It's all being done by one chip that is designed to CONSERVE BATTERY POWER. These are not the dual-core racehorses you see in desktops. Who wrote this article? They need "failure" stamped on their forehead. Maybe in 10 years... but not now. Disclaimer: I work for a large, prominent mobile device developer. You have heard of them.