
Revolution in Graphics?

CmdrTaco posted more than 14 years ago | from the isn't-that-interesting dept.

Graphics 164

wilton writes "A technology genius has Silicon Valley drooling - by doing things the natural way, writes Douglas Rushkoff in today's Gaurdian. This project has been going on for a couple of years now; they have demos for Windows and Be. The idea is not to use rendering and polygons to create scenes, but instead to build them up from a molecular level, with apparently amazing results."

164 comments

What a shitty article (3)

PG13 (3024) | more than 14 years ago | (#1630126)

First, it in no way simulates molecules, as both the article and the header imply... it merely uses iterated equations.

Secondly, and more annoyingly, it is still entirely digital... stupid reporters.

Try the real thing (2)

jw3 (99683) | more than 14 years ago | (#1630127)

The article in the Guardian is not very informative; try the original nervana site [nervana.com], where you can even get the software for Windows, Mac and Be. The screenshots don't look bad, but I have my doubts as to what this is all about - then again, I'm a simple biologist.

It looks to me like a hoax, or at least like someone trying to get into the news - you can't find any source code or details. Could some good soul, versed in computer science, explain to me what's so interesting about this project?

Demo of this (1)

cluke (30394) | more than 14 years ago | (#1630128)

There's a demo of this at http://www.nervana.com/psi/ (Winoze, Be and Mac).

It looks quite good, but very basic. Hard to tell really if it's going to be revolutionary or not.

This is new? (1)

Eric Sharkey (1717) | more than 14 years ago | (#1630129)

Exactly how is this new, and what is it about this that qualifies this man as a "genius"?

I've seen better looking graphics in 5 year old euro-demos running on a 386 DOS box.

My god! (1)

gregm (61553) | more than 14 years ago | (#1630130)

This is the coolest 74K ever. It's the smoothest VRML browser I've ever seen, and an animated scene, all in a 74K app! Granted it's not Q2 yet, but it's damned impressive. Download it.

No, I'm not affiliated with this guy/company - I'm just real impressed.



Hype? Or something more? (2)

Nerant (71826) | more than 14 years ago | (#1630131)

I've downloaded the demo: all it seems to be is some garish landscape generator you can run through; the sky/cloud moving thing ain't that cool. I've seen better water in the visualisations for my MP3 player (Sonique, for those interested). Until I see a demo of this "molecular" technology that is on par with or better than the 3D game engines of today, I'll remain a skeptic. Still, it might only be the beginning of something great. Must keep an open mind.

Typo (1)

cluke (30394) | more than 14 years ago | (#1630132)

"In today's Gaurdian?" Guardian, surely! ;) Though this may be intentional - the Guardian used to be notorious for its typographical errors (it's often referred to as "The Grauniad").

To get back on topic, the paper's knowledge of tech matters is less than stellar (though not as bad as, say, the Sunday Times), so they may have just fallen for some marketing hype. I can scarcely imagine any Nintendo execs being stunned at that little demo, impressive though it is for 74k. (Though I've seen better in 4K Amiga intros.)

What it really is: (1)

Skinka (15767) | more than 14 years ago | (#1630133)

It's just a landscape engine, nothing more. 3D games contain a lot more stuff than just hills and lakes. It's still digital (doh). Low CPU requirements? Please, the demo (400x240, I believe) drains 90% of my CPU resources - and it still looks like *hit. My CPU is not a Z80 (which the article claims would be sufficient), but a K6-2 running at 400MHz. I remember seeing cooler landscapes in demos, and with a lot lower CPU requirements.

I'm not saying that this technology isn't worth anything (how could I, when I first heard of it 15 minutes ago). It still looks like it needs a lot of development, though.

Interesting indeed- but too early to tell (2)

plunge (27239) | more than 14 years ago | (#1630134)

For what it did, it didn't drop my system performance much at all. As a future holy grail for the precise and realistic 3D engine, however, I'm rather skeptical. The horrible mouse interface in the demo (pretty random as far as what dragging in any given direction does) makes it very hard to tell, but this method of iterated equations seems to have a problem: no consistently accurate solid 3D shapes. Sizes and relative shapes seem to stretch around far too much with a change of perspective. And I imagine that the really beautiful, fractal, super-complex-at-any-viewing-distance images this tech could create wouldn't stay consistent as you move around; apparent textures would swim all over the place. Maybe this is just a limitation of the demo, or a flaw in my analysis of the tech. But the reason polygon engines (or voxel engines!) are so popular is that they have consistent world geometry that works intuitively from any angle of viewing, making them well suited for interactive virtual reality - even at the high speeds of Quake-style games. This engine might turn out to be better suited for acid trip-heads than Quake4...

Revolutionary like a fox (2)

mjg (21046) | more than 14 years ago | (#1630135)

Apparently the people who wrote this article aren't up with modern times, or something. If you check out some of the 40kB demos for the Amiga, you'll see more impressive stuff, and those don't require a 100MHz machine (the stated requirement for the Win32 demo on the nervana.com site). As far as using mathematical equations to approximate the real world "better" than polygons goes, this has been done before. An example would be the procedural textures used in Lightwave 3D, which use a bunch of algorithms to simulate different real-world textures using only a fraction of the memory of their bitmapped counterparts.

Of course, I could be completely wrong about these demos, but IMHO this is hardly revolutionary, or even the slightest bit impressive. I would expect that if this were truly something more than hype, there would be more substantial information at their website, or at least a demo which is actually impressive.

What would be nice to see is 3D accelerators with support for some procedural textures on the card, and then having those features actually used in games. You can achieve some very impressive effects with such things, although I suppose most people see it as easier to just add more memory to the card nowadays. I remember reading that one card due to be released a few months ago (Permedia 3?) was supposed to have such features, but I'm not sure.

Uses iterated functions on graphics card? (1)

Lars Arvestad (5049) | more than 14 years ago | (#1630136)

That was a really bad article, and I don't find the Nervana site that informative either, but my guess is that he wants to (or already has) put support for iterated function systems on the graphics card.

Use of IFS in computer graphics can hardly be regarded as news, but I guess putting it in hardware, making it possible to draw directly into video RAM, would make the technology usable for games too. That way you don't need that many polygons to draw an intricate structure.

Lars

--

What's so special about that? (1)

Hiro_Protagonist (79351) | more than 14 years ago | (#1630137)

I downloaded the program, and I've seen better computer graphics in the '80s.

If you still think it's good, you should check out this [eyeone.no], a 50-something-K voxel engine in Java (warning: Explorer 4 only).

"The future is already here,
it's just not evenly distributed yet"

Where are the creatures? (1)

Anonymous Coward | more than 14 years ago | (#1630138)

Although the article didn't mention it, it seems like he is simply using known fractal rendering techniques for the landscape and water. It's not hard to come up with the rules for the quality he has achieved in the demo program and screenshots. What caught my attention in the article was the comment about doing creatures with his simple rules. I didn't even see any vegetation even though research on fractal plants has been going on for over a decade in the computer graphics community. Unfortunately, it sounds like this is another case of an uninformed journalist overhyping simple technology.
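For the curious, the standard fractal-terrain trick being alluded to is midpoint displacement: start with two endpoints, repeatedly split each segment and jitter the midpoint by a random amount that shrinks at each level. A minimal 1-D sketch (names and parameters are illustrative, not from any Nervana code):

```python
import random

def midpoint_displace(left, right, roughness, depth):
    """Build a 1-D fractal height profile by recursively displacing midpoints."""
    heights = [left, right]
    for level in range(depth):
        scale = roughness / (2 ** level)   # halve the jitter each subdivision level
        new = []
        for a, b in zip(heights, heights[1:]):
            new.append(a)
            new.append((a + b) / 2 + random.uniform(-scale, scale))
        new.append(heights[-1])
        heights = new
    return heights

random.seed(1)
profile = midpoint_displace(0.0, 0.0, roughness=1.0, depth=6)
print(len(profile))  # 2**6 + 1 = 65 samples
```

The same idea in 2-D (diamond-square) gives the rolling island terrain seen in the demo's screenshots.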

Re:Interesting indeed- but too early to tell (1)

Zerth (26112) | more than 14 years ago | (#1630139)

> The horrible mouse interface (pretty random as far as what dragging in any given direction does)

It would have been nice if they had provided some documentation, but it seems that where on the screen you click controls your movement, no dragging involved(the closer to left/right edge, the faster you turn, closer to top/bottom, the faster you move).

> And I imagine that really beautiful, fractal and super-complex at any distance of viewing images that this > tech could create wouldn't stay consistent as you move around

Yeah, up close the boundaries between sections look really good, but if you move about 10 seconds away from the island, the water starts forming some really wacky moire and the island gets all blocky.

If it weren't for that and the way the water "laps", this would be fun to play with. That is, if it actually did something.

Try 4k intros (1)

Beta (31442) | more than 14 years ago | (#1630140)

> This is the coolest 74K ever.

I suggest you check out some of the 4k intros on
ftp.scene.org. They've got 3d AND sound too. In 4096 bytes.

Constructive Solid Geometry (5)

cd-w (78145) | more than 14 years ago | (#1630141)

\begin{rant}
Constructive Solid Geometry (as used in POV etc.) is also an alternative to polygon-based rendering.

For those that don't know about it, with CSG the scene is built up from primitive blocks (e.g. cones, spheres, cubes, rods, etc). More complex objects are made by using boolean operations (AND, OR and DIFF) on the primitives. For example, a ring can be made by subtracting (DIFF) a rod from the centre of a sphere. Solid textures can be applied to the resulting objects, and raytracing can be used to produce shadows, reflections, transparency, etc.
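The ring example can be sketched with implicit (signed-distance-style) functions: a point is inside a solid where f(p) <= 0, and the boolean operations become min/max. This is a toy illustration of the CSG idea, not how POV-Ray itself is implemented:

```python
import math

# Each primitive is a function of (x, y, z) that is <= 0 inside the solid.
def sphere(cx, cy, cz, r):
    return lambda x, y, z: math.sqrt((x-cx)**2 + (y-cy)**2 + (z-cz)**2) - r

def rod_z(cx, cy, r):   # infinite cylinder (rod) along the z axis
    return lambda x, y, z: math.hypot(x - cx, y - cy) - r

# Boolean operations: union = min, intersection = max, difference = max(a, -b).
def difference(a, b):
    return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

# A ring: subtract (DIFF) a thin rod from the centre of a sphere.
ring = difference(sphere(0, 0, 0, 1.0), rod_z(0, 0, 0.3))

print(ring(0.6, 0, 0) <= 0)  # True: on the solid part of the ring
print(ring(0.0, 0, 0) <= 0)  # False: inside the drilled-out core
```

A raytracer evaluates exactly this kind of inside/outside test along each ray, which is why CSG and raytracing fit together so naturally.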

Unfortunately, CSG and raytracing seem to have been overlooked by the graphics card manufacturers. The new effects proposed by 3dfx (motion blur, soft shadows, etc.) can be achieved very simply using stochastic raytracing. Raytracing has a reputation for being very processing-intensive, but I am convinced that it could be done efficiently in hardware, and the quality of the graphics would be far greater than polygon rendering.

In relation to the article - the Psi technique looks interesting, but seems to have very limited scope for application. IMHO, graphics card manufacturers should look at raytracing and CSG instead.
\end{rant}


Re:My god! (2)

Effugas (2378) | more than 14 years ago | (#1630142)

No. The 4K implementation of the first level of Descent -- replete with MPU-401 music and all textures -- was OH MY GOD. This isn't nearly as impressive as most of the iterated landscape jobs I've seen bandied around, although the Earth's Landscape dataset IS cool.

Yours Truly,

Dan Kaminsky
DoxPara Research
http://www.doxpara.com

Re:What it really is: (1)

tc (93768) | more than 14 years ago | (#1630143)

Of course it drains 90-100% of your CPU, it's just thrashing around rendering as fast as it can. It would drain that amount on a CPU twice as fast. What would be more interesting is if they put a framerate counter up there so you could get some idea of the fillrate it's delivering.

Looks and Sounds like the old 'fractal' engines.. (1)

tjansen (2845) | more than 14 years ago | (#1630144)

I'm having deja vu - all the pictures look like those 'fractal' graphics from 80s games like Rescue on Fractalus [netaxis.com] on the old Ataris, or Captain Blood [fatal-design.com] on the Atari ST/Amiga. Looks like they re-invented them and filled them in with some colours. But displaying a landscape with simple formulae is MUCH simpler than displaying complex graphics. If they showed a tree based on their 'formulae', at a high framerate, that would impress me. But not landscapes...

the journalist got it wrong (1)

nickos (91443) | more than 14 years ago | (#1630145)

The game is about green themes like eco-friendliness, but the 3D algorithm is quite standard and does not use "natural mathematics". Just looks like an old miggy 40k intro to me.

What? (0)

rash (83406) | more than 14 years ago | (#1630146)

He spent several years developing this? I could do a better demo in 20 minutes in QBasic, and I still don't know how to program. How can anybody be impressed by this?

Why this is cool (3)

Dreamweaver (36364) | more than 14 years ago | (#1630147)

A lot of you seem confused about why this is a cool thing. The point isn't that the graphics look amazing right now... it's that they're generated in a fundamentally different, possibly better way.

Of course polygons look prettier... look at the current difference between painted pictures and polygon graphics. With a painting, the artist is simply putting colours on a flat surface in such a way that it simulates reality. That's relatively easy to do, since you just need to put the colours there in a suggestive way (I can't do it myself for beans, but you get the point).

In a graphics program, you create a 3D object out of polygons, then place texture images over them. This is more difficult because you have to create the actual 3D object... like sculpting... you can't just suggest 3D with shadow, you have to Make 3D and let the light create the shadow naturally. The textures aren't really roughness or shininess, just images that Look rough or shiny and make any light sources react the way they probably would... this saves memory by making the shape Look more complex than it really is. A smooth cylinder might look just like a tree trunk because it has a rough-appearing texture. But it's not really a tree. If you get too close, you get flattening of the texture... especially in realtime engines for games, because modern computers can't raytrace fast enough, so they use simple rendering. It can look really, really good... but it can also look REALLY bad.

Now, I may have misunderstood the article and webpage for this technology, but what I got out of it is that it uses something like a fractal generation system - a formula and a number of iterations - to generate real objects. Not just a mesh of points, some of which have polygons drawn between them, but something closer to a physical reality. Like a fractal, it would look fine up close or far away, and because, like a fractal, it's based on iterating a simple algorithm over and over, it would just be a matter of doing math rather than crunching z-buffer coordinates into 2D images like we do in polygon rendering engines.

What's really important here is the opportunity for data transfer. All those cyberpunk novels make use of ubiquitous virtual worlds where people and environments are rendered seamlessly, usually on small computers, in realtime, over wireless modem links. So far this has been no more than a dream, because no personal computer could hope to handle that kind of load, no computer can raytrace a complex scene in realtime, and there'd be no way to send that much data with anything like current modems. This technology doesn't make it all come true in a flash, but it does improve the chances immensely. You can simply transfer location data and a formula rather than mesh coordinates and transforms... much, much less data. You don't need to do the kind of heavy number crunching required for raytracing, because of the way the objects are generated, and you don't have to worry about things like textures, because you can just make the actual object bumpy, smooth, jagged, whatever.

Now, the biggest complaint is obviously that it doesn't compare to modern polygon graphics. There's a simple reason for this... it's not a highly funded, industrially motivated, relatively old technology. It's fairly new and being developed by a few guys. You can't expect miracles overnight... but what he's got looks pretty good considering how new it is. You all talk about how wonderful demos look with current tech... sure they do... that's what they're for. This demo is to demonstrate that his technology Does work. If you had a time machine that could send a penny 5 minutes into the future, would you complain because it didn't look cool?

Anyway, it's obviously no sure thing, but it does have a good deal of promise, and polygons can't last forever. Personally I think realtime-rendered 3D games look like crap. Raytraced scenes can look very nice, but all too often suffer from virtual unreality (that plasticky look everything tends to take on... obvious fractalism in complex objects, etc.). This, or something like it that builds up from basic principles into a complex object, will eventually be needed... just think about human interaction in a virtual environment. You can't very well create polygon meshes for every possibility... what if you broke a chair - how does it generate the broken ends and interior wood grain? If you bite into a cookie, how would you go about creating realistic crumbs in realtime?
Dreamweaver

Re:My god! (1)

gregm (61553) | more than 14 years ago | (#1630148)

I don't have Descent installed anymore (I don't know why - getting old, I guess), but wasn't the executable pretty big? I've played with VRML off and on since the beginning, and am pretty proficient with Truespace and Blender. If a scene this simple were developed as a Truespace or VRML scene it would be pretty dinky indeed. However, truespace.exe (ver4) is a meg, plus all the various DLLs another 15 megs (granted, it edits, not just views). This is the "scene" and the viewer. I haven't played with any Linux VRML browsers yet, but I know that Windows VRML browsers are a couple of megs, easy.

Not only that - notice the waves lapping on the shore... they may not be mathematically perfect, but they look "right" (albeit very fast and crude) to me.

If I'm wrong (I have been before :) please educate me.

Best Regards,

Greg

or try 64k intros from 1995 (1)

smoke (771) | more than 14 years ago | (#1630149)

Never heard of the demoscene, have you? There are even 4k's for Linux now. This project is just plain shit and it should be removed from Slashdot NOW. Instead there should be a big link to ftp.scene.org (or perhaps www.error-404.com, which has some nice demos too).

Breakthrough, where ? (0)

Anonymous Coward | more than 14 years ago | (#1630150)

Give me a break - that looks like a demo from '92 that barely resembles some fractal rendering using non-standard voxel engines or something of the sort. Breakthrough? Just show me where. Constructing a world from easily raytraceable mathematical objects such as cones, spheres, etc. is very nice; of course, to achieve a great deal of detail you would need tons of these, and of course rendering them would necessitate a raytracing engine, as suggested before. I don't know what this guy has been working on or why the guys from Nintendo/Apple are 'falling from their seats', but I'd sure like to hear his explanation, because his site contains nooothing but a poor demo, a picture of a monkey and a home-made video that takes too long to load. I'm not saying the dude is lame or anything - I'm criticizing the person who brought this to the media without backup information, without a good presentation, without documents, sources, opinions, etc. blah :)

Re:Constructive Solid Geometry (0)

Anonymous Coward | more than 14 years ago | (#1630151)

Many (but not all) of the graphics in the motion picture Tron were made using constructive solid geometry. (objects such as the light cycles, bit, etc.)

As for the Psi technique, perhaps I just don't get it, but how is this any different than what many demo groups have done over the last ten years or so?

Re:Why this is not so cool (2)

smoke (771) | more than 14 years ago | (#1630152)

A lot of 4k intros have generated landscapes and objects too, and yes, we have seen fractal-based objects too. Why is z-buffering not 'math', by the way? This project just got too much attention because someone did not read more than the article.

Remember MARS.EXE? (1)

XNormal (8617) | more than 14 years ago | (#1630153)

A wonderful little program (5.5k) by Tim Clarke [mailto] called MARS.EXE [ping4.ping.be] let you move with your mouse through shaded voxel-based Martian terrain under a cloudy sky. It ran at fantastic speed even on a 386.

Read the original usenet posting here [ed.ac.uk].
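Heightfield terrain of the MARS.EXE sort is usually rendered by marching each screen column outward across a heightmap and drawing a rising wall of pixels. This is a guess at the general technique (not Tim Clarke's actual code), sketched for a single screen column with made-up camera parameters:

```python
def render_column(heightmap, cam_x, cam_height, horizon, scale, screen_h, max_dist):
    """One screen column of a voxel-heightfield renderer."""
    column = [None] * screen_h   # colour value per pixel, top of screen first
    highest = screen_h           # lowest screen row not yet filled
    for dist in range(1, max_dist):
        terrain = heightmap[(cam_x + dist) % len(heightmap)]
        # Perspective-project the terrain height to a screen row (smaller = higher).
        row = max(int((cam_height - terrain) * scale / dist + horizon), 0)
        if row < highest:
            # Terrain pokes above everything drawn so far: fill the exposed span.
            for y in range(row, highest):
                column[y] = terrain   # shade by height (stand-in for a texture)
            highest = row
        if highest == 0:
            break                     # column fully covered, stop marching
    return column

# A rolling ridge pattern as a toy heightmap.
hm = [0, 2, 5, 9, 14, 9, 5, 2, 0, 0] * 10
col = render_column(hm, cam_x=0, cam_height=20, horizon=10,
                    scale=60, screen_h=40, max_dist=50)
print(sum(c is not None for c in col))  # number of terrain pixels drawn
```

The occlusion test (`row < highest`) is what makes this fast: each pixel of the column is written at most once, no z-buffer needed.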

Digital, analogue and simulating the real world. (1)

MosesJones (55544) | more than 14 years ago | (#1630154)

I can understand how using pseudo-analogue or even real analogue techniques can lead to more realistic images: fractals to produce better-looking trees, sparks being blurred by the analogue signal (hmm, interesting - improve reality by decreasing quality). I'm probably off the mark, but isn't there an issue of constraining these to produce an exact rendition of, say, a racing track? We can generate a load of curves to create a random track, but how would a _real_ track be accurately constrained? An accurate representation of, say, Silverstone can be done in simple coordinates, but I'm personally unsure what rules would need to be in place to generate it with this approach.

This probably comes under the "nice idea that I don't understand" heading.

Re:Why this is cool (0)

Anonymous Coward | more than 14 years ago | (#1630155)

The point is that it's an OLD idea. Fractal graphics, and specifically IFS (iterated function systems), which is what this guy's system is, have been used in computer games since at least the early eighties.

Sure, IFS can be nice, and it has the advantage that you can choose whether to degrade detail or framerate. You can even adapt easily: if a scene gets too complex, you can just temporarily reduce the number of iterations you do on each object.
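For anyone who hasn't met IFS before: an image is defined by a handful of contractive affine maps, and one way to render it is to repeatedly apply a randomly chosen map to a point (the "chaos game"). A minimal sketch using the three maps whose attractor is the Sierpinski triangle - the iteration count is exactly the detail/framerate knob mentioned above:

```python
import random

# Three affine maps, each halving the point toward one triangle vertex.
MAPS = [
    lambda x, y: (x / 2,        y / 2),
    lambda x, y: (x / 2 + 0.5,  y / 2),
    lambda x, y: (x / 2 + 0.25, y / 2 + 0.5),
]

def chaos_game(n, seed=0):
    """Sample n points; the cloud converges onto the IFS attractor."""
    random.seed(seed)
    x, y = 0.0, 0.0
    pts = []
    for i in range(n):
        x, y = random.choice(MAPS)(x, y)
        if i > 20:            # skip the transient before the attractor
            pts.append((x, y))
    return pts

pts = chaos_game(5000)
print(len(pts))  # 4979 points, all inside the unit square
```

Fewer samples (or fewer iterations per object, for the deterministic variant) gives a coarser picture at a higher framerate.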

CDROM Game (1)

nito (1314) | more than 14 years ago | (#1630156)

I think that this is a hoax (there is nothing at the website related to all the bull in the article) to promote a lame CDROM game.

Re:Constructive Solid Geometry (3)

Anonymous Coward | more than 14 years ago | (#1630157)

You are using your terms incorrectly. CSG refers more to the models themselves -- it is independent of rendering. For example, I can use a B-rep for CSG objects and render them as polygons.

The most common alternative to polygon-based graphics is ray-tracing. This is a more pure form of sample-based graphics (i.e., I ask what the sample value is at a particular point rather than pushing pixels towards the end of the pipeline). My distinctions are slightly muddy here b/c I prefer to draw the demarcation lines based on illumination rather than model types.

It is trivial to do CSG in a ray-tracer (and many related techniques). It is extremely difficult in a polygon-based system, but not impossible.

Constructive Solid Geometry On Current Cards (1)

Flat Feet Pete (87786) | more than 14 years ago | (#1630159)

By clever use of z-buffer reads and writes, it may be possible to do some CSG effects in hardware already.

The problem I see with CSG is features like racetracks and landscapes - they don't really suit a CSG representation. Another thing to remember is all the other doohickeys that have to deal with your geometry representation (e.g. physics and AI).

The move to polygons has seen the death of many kewl little tricks that existed when people could just plot pixels. More stuff may disappear if on-card geometry acceleration takes off. CSG would admittedly be interesting for this sort of thing, but a card whose hardware acceleration was based on CSG would make many operations that are simple today a bit of a bitch.

Am I missing something here? (1)

Lerc (71477) | more than 14 years ago | (#1630160)

"He decided to use the Nintendo GameBoy as a standard for how much computing power a machine should have (in other words, very, very little)"

So why does their demo require a minimum of 100MHz?

It's not hard to see how underwhelming this project is.

The reporter who wrote the article about this appears to have swallowed someone's marketing hype hook, line and sinker. He hasn't even done the usual journalistic work of going to an analyst and getting some sound bites (text bites?).

The best-case scenario for Nervana is that they have been misrepresented by someone, maybe the writer of the article.

They might have a good model for terrain representation, but that hardly constitutes a revolution in graphics. You still have to do everything else.

Using this as a base for graphics would be like the old days on the Amiga, when someone would come up with a neat video trick and try to make a game based around it. Inevitably the result was a contrived, usually crap (and mostly never-to-be-released) game.

This stuff could possibly work as an OpenGL extension, but only if it can be implemented in hardware.




Re:My god! (2)

Knos (30446) | more than 14 years ago | (#1630161)

seems like the demoscene just invaded slashdot ;)

I will just give you links you should follow if you want to see impressive quick, small 3d rendering code:

  • (4k) mesha/picard (great 4k) [scene.org]
  • gcube [scene.org] (4k) intro with cool solids interaction effects.
  • Bring it Back [scene.org] (64k) intro with very good sound/graphics synchro.
  • discloned [scene.org] (64k) party version; if it doesn't work, try the final version [scene.org], which is 700k (bigger because of all the sound drivers, I guess).
  • ... many more on scene.org

I have more links on my page - see the DemosSelection link.

Signal/Noise=0 (5)

jonathanclark (29656) | more than 14 years ago | (#1630163)

Don't bother reading the article. It contains no real information and it's obvious the reporter is dancing around the subject.

I don't know exactly what is being referred to here, but many alternatives to polygon rendering have been around for ages. Simulation of light reflection/refraction at the molecular level has been an ongoing area of research in the graphics community. The problem is that as you get closer to real life, exponentially more processing power is required. We can only hope for better and better approximation methods. Further, the fundamental laws of physics governing light at the quantum level are not fully understood.

I'm highly skeptical that a 22-year-old is doing any work in this area. This work has very little application in the real-time graphics community - why should Nintendo be interested?

Perhaps they are referring to voxel rendering, which can be done in realtime and is a more likely project for a 22-year-old to undertake (who hasn't?). A large problem with voxels is the amount of memory required, so either the shapes must be generated on the fly procedurally, or they must be compressed using curves/wavelets or a combination of both. The article mentions "parabolas and ellipses," so this might be what is being talked about. Voxels are in no way a representation of something "on a molecular level."

I'm impressed the reporter managed to write such a long article without saying anything.

What about NURBS? (3)

oblisk (99139) | more than 14 years ago | (#1630164)

I'm unable to reach the link, yet there are a few more options than just positioning polygons for rendering. The solid modelling mentioned above is one that I know a little about. However, NURBS have been around for quite a while and are rather powerful and very useful.

Definition:
NURBS (Non-Uniform Rational B-Splines) are mathematical representations of 3D geometry that can accurately describe any shape, from a simple 2D line, circle, arc, or curve to the most complex 3D organic free-form surface or solid. Because of their flexibility and accuracy, NURBS models can be used in any process from illustration and animation to manufacturing.

NURBS are really easy and flexible to use, as they are simply splines which can be adjusted via control points and different weightings. They have largely replaced polygons for character modelling over the past 2 years.
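To make the definition concrete, here is a small sketch of NURBS curve evaluation: Cox-de Boor B-spline basis functions, blended with the control-point weights. This is the textbook formulation, not any particular modeller's API:

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the B-spline basis function N_{i,p}(u)."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    d = knots[i + p] - knots[i]
    if d > 0:
        left = (u - knots[i]) / d * bspline_basis(i, p - 1, u, knots)
    d = knots[i + p + 1] - knots[i + 1]
    if d > 0:
        right = (knots[i + p + 1] - u) / d * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_point(u, ctrl, weights, p, knots):
    """Evaluate a 2D NURBS curve: rational (weighted) blend of control points."""
    num_x = num_y = den = 0.0
    for i, ((x, y), w) in enumerate(zip(ctrl, weights)):
        b = bspline_basis(i, p, u, knots) * w
        num_x += b * x
        num_y += b * y
        den += b
    return num_x / den, num_y / den

# Quadratic curve, three control points, clamped knot vector.
ctrl = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
weights = [1.0, 1.0, 1.0]
knots = [0, 0, 0, 1, 1, 1]
print(nurbs_point(0.5, ctrl, weights, 2, knots))  # (1.0, 1.0)
```

With all weights equal the curve is an ordinary B-spline (here a quadratic Bezier); unequal weights are what let NURBS represent circles and conics exactly.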

I remember speculation about hardware which could render/raytrace NURBS and other spline-based models directly, without conversion to polys. However, I've yet to see it materialize.

Some of the better NURBS modellers available are:
Maya [sgi.com] - a Linux port of this is supposedly floating around SGI and some of the larger software houses.

Rhino3D [rhino3d.com] - shame it's Windows-only, yet there are some reports of it running successfully in Wine.

Enjoy, Oblisk


------------------------------------

If the article is correct... (1)

pointwood (14018) | more than 14 years ago | (#1630165)

...then I think it sounds pretty cool.

From the article:
"He just passed through Silicon Valley last week demonstrating his homemade graphics engine, and everyone from the designers at Nintendo to programmers at Apple has been left in shock."

They sound pretty impressed to me. If this is a real breakthrough - then NVIDIA, 3dfx etc. may be in big trouble...

The question is - if this is so great - when is it going to be available/useful? Later this year, or in ten years?

Insufficient Information (1)

madeye (99616) | more than 14 years ago | (#1630166)

It is clear that the really clever bit about this engine is how it works, not how it looks. As far as I can tell, there is no information available about this on any of the related sites (I've poked around in the Windows .exe as well). If the data for generating the island landscape consisted of, say, 256 bytes of polynomial orders and coefficients, it would be remarkable. But we don't know (yet)...

Re:Constructive Solid Geometry (2)

pointwood (14018) | more than 14 years ago | (#1630167)

"Raytracing has a reputation for being very processing-intensive, but I am convinced that it could be done efficiently in hardware, and the quality of the graphics would be far greater than polygon rendering."

It has already been done for professionals:
http://www.art-render.com/products/rdrive.html

Re:Voxel stuff (0)

Anonymous Coward | more than 14 years ago | (#1630169)

Oops - clicked Submit too early. Check out MARS.ZIP - MARS.EXE is faster, far more impressive, runs on a 386 and is 10KB.

Unimpressed. (2)

Hermetic (85784) | more than 14 years ago | (#1630170)

I really feel bad about being so phenomenally unimpressed with this guy's life's work. Granted, I am not publishing graphics software myself, but I don't post my grade-school artwork either.

I feel almost the same as when my three-year-old daughter brings me a picture she drew: "Oh, sweetheart, that's beautiful! I love you! Now go stick that to the refrigerator with the other ones."

Did anyone else find it suspicious that the most recent "interview" was from September of last year? About a remarkably stupid-sounding game? And that the "demo" was billed as a way to view the island of Nervana, even though it only vaguely resembled an island? The guy is 22 and designed the game, wrote the graphics engine, and composed the music for the game?

The best I can hope for here is that the reporter was a friend of his trying to help him out, or maybe just really, really hard up for a story.

Fractals.... (1)

richieb (3277) | more than 14 years ago | (#1630171)

Sounds like he is using "iterated functions" to render images. Take a look at this book: Fractals Everywhere [amazon.com] for a detailed description of the method.

Doesn't sound particularly revolutionary....

...richie

Re:Constructive Solid Geometry (1)

cd-w (78145) | more than 14 years ago | (#1630172)

You are of course correct. You can render a CSG model and you can raytrace a polygon model.
However, CSG is IMO the easiest model for raytracing, as polygons are for rendering.

Re:What it really is: (0)

Anonymous Coward | more than 14 years ago | (#1630173)

(actually an Anonymous Coward that didn't receive his emailed password nearly fast enough)
You wrote:
"Please, the demo (400x240 I believe) drains 90% of my CPU-resources"

Erm, please remember that the OS is probably taking 85% of that 90% to refresh the window at
such a high rate, which 'feels' like nearly 100fps.
100fps on a 400x240 demo at 32bpp requires about
38MBytes per second to refresh properly.
That means AGP kiddies, please no PCI cards :)
-Shea M.
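The arithmetic in that comment checks out; here's a quick sketch (resolution, depth and frame rate taken from the post, variable names my own):

```python
# Framebuffer bandwidth to refresh a 400x240 window at 32bpp, 100 times a second.
width, height = 400, 240
bytes_per_pixel = 4        # 32 bits per pixel
fps = 100

bytes_per_frame = width * height * bytes_per_pixel   # 384,000 bytes
bytes_per_second = bytes_per_frame * fps             # 38,400,000 bytes

print(bytes_per_second / 1e6)  # ~38.4 MB/s, matching the "about 38MBytes" figure
```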

Of course it's a hoax (1)

briancarnell (94247) | more than 14 years ago | (#1630174)

Of course it's a hoax, look at who wrote the article -- Douglas Rushkoff. Wasn't Rushkoff complaining the other day about the evils of technology? Well, today he's back to his day job as an uninformed hack constantly spreading the latest hype. Rushkoff's always been clueless.

Re:Constructive Solid Geometry (2)

cd-w (78145) | more than 14 years ago | (#1630175)

> http://www.art-render.com/products/rdrive.html
A snip at $20,000!

Incidentally, 3DNow! and SSE are entirely suitable for raytracing. Ever since they were released I've had the dream of writing a realtime raytracer.

\begin{shameless plug}
If anyone is interested, my first (suboptimal) attempt at producing a 3dnow raytracer is available at:
ftp://ftp.dcs.ed.ac.uk/pub/cdw/ray/3dray.tar.gz
Unfortunately I can't do any more until I get an
AMD or P3 box. Feel free to copy/modify this code. Just let me know of anything you do.
\end{shameless plug}

Silicon Valley Drooling? no, laughing? Maybe... (1)

mrbiggs (69086) | more than 14 years ago | (#1630176)

.."colouring which produces a surreal comic-book feeling to
the engine. This has been produced in part by the mathematics required to describe the landscape in real time,
meaning that the colouring remains relatively fundamental." So it HAS to look this crappy? Pretty sad. I have a name for it though, he could call it Retarded-PreSchooler-Vision... Pretty catchy isn't it?

In response to stuff like this... (1)

Caspian (99221) | more than 14 years ago | (#1630177)

I noticed that the software mentioned is available for Windoze, MacOS and BeOS, but not any free software OSes. Also, something tells me that this guy's not about to go free-software with his new stroke of "genius". Perhaps-- and this may be off the topic, but I feel it's important-- perhaps we could establish a corps of Slashdot geeks who monitor new postings and, whenever someone mentions a noteworthy new non-free-software technology, spawn an effort to clone or surpass it.

Another thing that bothers me is that slashdot articles generally seem to draw no great distinction between free and non-free software. Sure, by all means, articles like this inform and entertain, but it would be nice (just my opinion) if they also said something like "It would be nice if this was free software" or "Anyone out there wanna join me in cloning this and making the clone free software" or the like...

Just my 2c...

Re:What a shitty article (1)

StephanTual (65000) | more than 14 years ago | (#1630179)

"A technology genius has Silicon Valley drooling - by doing things the natural way," writes Douglas Rushkoff.

"Another idiot certainly claiming to be an 'IT professional' writes a pointless article full of cliches, maybe trying to reproduce the style of Wired magazine, father of all the hype in this world" writes Stephan Tual.

Oh boy I'm not in a good mood today :-)

3dnow Source Code (raytracer) (1)

cd-w (78145) | more than 14 years ago | (#1630180)

Apologies for replying to my own post (bad form), but I hit 'Submit' too soon!

Regarding the 3dnow source code: it was written for a K6-2 300MHz machine that I no longer have. I have only compiled it with gcc (requires a recent version of binutils for the 3dnow part) under Linux. It doesn't currently do CSG either, only a fixed scene containing spheres. Unfortunately the 3dnow part doesn't actually give any speed improvement at the moment! I suspect that the femms overhead is too big - suggestions welcomed. I'd be interested to see how this runs on an Athlon.

I agree (3)

aheitner (3273) | more than 14 years ago | (#1630181)

No signal to noise. And I've seen much better results for landscape generation: check out MetaCreations Bryce, based on the work of the grandfather of procedural texturing, Ken Musgrave. It's stunningly beautiful.

"But this could be promising!..."
True. I'll believe it when I see promising artwork. This reporter obviously got carried away; I'm in computer games and I'm just not impressed.

"But Bryce isn't realtime!"
True enough as well. Bryce is a raytracer; it takes a long time to render. Oooooooooooh.....I wish I could talk to you about this ... comments, Rosenthal or Scherer? I'm sorry, I just don't feel at liberty to disclose anything. Those of you that know us and our work, trust us...

I do disagree with you on one point: No reason a 22 year old can't do this. Everyone in the basement is 19 or younger.

Re:What about NURBS? (1)

UnknownSoldier (67820) | more than 14 years ago | (#1630182)

> I remember speculation on hardware which could render/raytrace NURBS and other spline-based modelling directly, w/o conversion to polys. However I've yet to see it materialize.

Remember the adage "Triangles are the pixels of 3d". We probably won't see bezier surfaces and NURBS as hardware primitives for another 5 to 10 years.

The problem with NURBS is that they are slow compared to tossing a few more textured tris at the hardware, since that's what the hardware is optimized for.

remove this article (1)

smoke (771) | more than 14 years ago | (#1630183)

Could someone PLEASE remove this item?
The idiots are getting way too much attention again.

Um. Critical thinking suggested. (4)

Daniel (1678) | more than 14 years ago | (#1630184)

This sounds like a hoax. A big hoax. Here's the giveaway:
He decided to use the Nintendo GameBoy as a standard for how much computing power a machine should have...and developed a series of simple equations that can be used to generate waves, textures, and shapes.
Does anyone here know why polys, especially triangles, are the basis of most modern graphics systems? No? I'll tell you: it's because they're *EASY TO DRAW*. The equations are as simple as you can get; almost everything becomes linear interpolation and therefore only needs a single addition per pixel line. Waves are likely to need some sort of transcendental function (such as sine or cosine) to function properly -- something that requires either a massive hardcoded table, or a LOT of CPU time. Not to mention the need to toss either fixed-point or floating-point numbers around. GameBoys are 8-bit, aren't they? That doesn't give you much precision.
Remember how you used to draw parabolas and ellipses in maths class?
Um. There are three possibilities for drawing these:
-> Use the equation directly. This involves a square root. Square roots are slow.
-> For the ellipse, you can generate it using sines and cosines with a parameterized equation. The resolution on the parameter will determine how choppy the outside looks; even a resolution of 1 degree took a while on my TI-85 back in high school :)
-> Iterate over the ENTIRE DISPLAY, applying the generic conic equation to each point; use this to find boundaries. Incredibly tricky, requires a square or two for each pixel, and is generally going to be a pain. (for the ellipses this is a little simpler, since you can bound it by the major and minor axes)
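The cost gap the list above describes can be sketched in code (a toy illustration, function names my own): a triangle edge steps down the screen with one addition per scanline, while a circle drawn straight from its equation pays a square root per scanline.

```python
import math

# One addition per scanline: step a triangle edge down the screen by
# forward differencing (the slope is computed once, then just added).
def edge_xs(x0, y0, x1, y1):
    """X intercepts of the edge on each scanline from y0 up to y1 (y0 < y1)."""
    dxdy = (x1 - x0) / (y1 - y0)    # constant slope
    xs, x = [], float(x0)
    for _ in range(y0, y1):
        xs.append(round(x))
        x += dxdy                   # the single addition per line
    return xs

# Versus a circle from its equation: a sqrt for every scanline.
def circle_xs(r):
    return [round(math.sqrt(r * r - y * y)) for y in range(-r, r + 1)]
```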
Each element of such a display will require much more computation than a polygon; you could save a few polys this way, but I don't see it being the sort of revolutionary jump they describe.
The article then goes on to state some fluff about plants and carbon atoms, claiming that quantum equations are 'simple' (I wish!) and suggesting that "Barbalet"'s stuff is built "from the ground up, just like nature does it." This isn't true, even if what they said is true, and has nothing to do with molecules and plants; he would be building his images up from shapes -- different shapes than are standard now, perhaps, but still just shapes. No image built "from the ground up, like nature does it," requiring the transmission of every molecule, is going to even be manageable by modern computers, let alone result in stuff that can be transmitted over modems and wires more easily than graphics images.


The more charitable explanation is that this is a highly confused journalist who has run into ellipsoid 3D graphics or something similar and thinks it's cool.

Daniel

Re:Constructive Solid Geometry On Current Cards (1)

cd-w (78145) | more than 14 years ago | (#1630185)

I don't see why CSG isn't suitable for landscapes and racetracks.
If you have a 'plane' primitive, you can use that as a base and just build on top of it using AND operations.

In my original post, I should have distinguished between CSG and raytracing. It is perfectly possible to raytrace a polygon model if CSG
is too cumbersome.

Re:Am I missing something here? (1)

Kintanon (65528) | more than 14 years ago | (#1630186)

"He decided to use the Nintendo GameBoy as a standard for how much computing power a machine should have (in other words, very, very little)"

So why does their demo require a minimum of 100MHz?



Hmm, I have a 200MHz, 64MB RAM machine here at work; I'm running Outlook, AIM, IE 5, Winzip, this guy's graphicy thing, the resource meter, McAfee Virus Shield, Getright, and I'm streaming a radio station from England, and I still have plenty of processor left over... I dunno why you guys are complaining, this thing takes up almost nothing.

Kintanon

Re:In response to stuff like this... (1)

BluBrick (1924) | more than 14 years ago | (#1630187)

I noticed that the software mentioned is available for Windoze, MacOS and BeOS, but not any free software OSes. Also, something tells me that this guy's not about to go free-software with his new stroke of "genius".

Yes, I have to agree with you there. But until I realised that, I was encouraged by the following bit in the Guardian piece...
Taking another cue from nature, Barbalet has organised his entire Nervanet project as a "public access development forum".

Silly me, I thought this sounded a little like collaborative development. Sort of like real free software. Geez, I can be naive sometimes!

Sometimes ray tracing is the fastest algorithm (3)

SurfsUp (11523) | more than 14 years ago | (#1630188)

Raytracing has a reputation for being very processing-intensive, but I am convinced that it could be done efficiently in hardware, and the quality of the graphics would be far greater than polygon rendering.

Not just in hardware. Ray tracing was used in John Carmack's Wolfenstein - a classic example of how ray tracing can outperform traditional polygon rendering. In Wolfenstein the simplifying assumption is that just one ray needs to be traced per column of pixels in the viewport. It obviously works, for the special-case scenes that Wolfenstein used. The ideas were generalized somewhat in Doom, to allow for ceilings and floors. Raytracing was abandoned in Quake, in favor of traditional polygon rendering, coupled with a kick-ass culling algorithm. But don't think that raytracing is out of the picture yet - hehe, pardon the pun.
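The "one ray per column" idea is simple enough to sketch. This is a toy illustration in the Wolfenstein style, not id's actual code: each screen column marches one ray through a 2D grid until it hits a wall, and the wall slice drawn for that column is inversely proportional to the hit distance.

```python
import math

GRID = ["#####",
        "#...#",
        "#.#.#",
        "#...#",
        "#####"]

def cast_column(px, py, angle, step=0.01, max_dist=10.0):
    """March a single ray from (px, py); return distance to the first wall."""
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    while dist < max_dist:
        x, y = px + dx * dist, py + dy * dist
        if GRID[int(y)][int(x)] == '#':
            return dist
        dist += step
    return max_dist

def render(px, py, facing, fov=math.pi / 3, columns=40, screen_h=20):
    """One ray per column; wall-slice height shrinks with distance."""
    heights = []
    for c in range(columns):
        a = facing - fov / 2 + fov * c / columns
        d = cast_column(px, py, a)
        heights.append(min(screen_h, int(screen_h / (d + 1e-6))))
    return heights
```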

Re:I agree (2)

jonathanclark (29656) | more than 14 years ago | (#1630189)

I was a video game person as well (crack.com), but that is a different camp than theoretical computer graphics. Most young people who are into real-time graphics (i.e. games) aren't interested in what happens to light waves as they travel through all points in space at the same time. Plus you need something close to a Ph.D in physics or computer graphics to be near the cutting edge. So... it's unlikely that someone under 22 could break ground in this field.

I'm still not there myself, but Glassner's 2 volume series "Principles of Digital Image Synthesis" has a good intro to the subject of light and energy transport.

I used to run the procedural texture mailing list a few years ago and we had Ken and Ebert on there. Ebert is still working on this stuff, he usually has something at siggraph, but Ken has moved on to other areas of interest. I just had a sweet offer to work on a real-time procedural texturing system for an upcoming game console, so I'm thinking about getting back into that.

Re:Sometimes ray tracing is the fastest algorithm (1)

.pentai. (37595) | more than 14 years ago | (#1630190)

Actually, Wolfenstein used Ray-Casting, which is different than raytracing...

What ray-tracing does is take a light source and see where the light reflects (giving color).
Ray-casting, on the other hand, takes your field of view and sees what it hits, i.e. what you would see. It limits itself by not allowing for reflections, etc.

Re:Signal/Noise=0 (1)

.pentai. (37595) | more than 14 years ago | (#1630191)

Looking at the example, it seems to be just that - voxels... why this is a huge concept, I don't know; I have an old 2k intro that ran on my 486sx25 that does voxels... (and looks better than this, I might add)...

Re:remove this article (1)

Knos (30446) | more than 14 years ago | (#1630192)

Hehe, no; at least it gave us an occasion to advocate the scene ;)

Re:Um. Critical thinking suggested. (1)

.pentai. (37595) | more than 14 years ago | (#1630193)

Being a gameboy programmer, I'll tell you this guy is on crack (or maybe some other drug... but anyway). I've made a tri-plotter for GB, and even though I limited it (triangles were limited to one 8x8 sprite), I was easily able to push out 100+ triangles per second. You're probably thinking "100+, bah", but when you're on a screen so small, it works.

Now, this guy is doing voxels, which while it can be faster (since it's so limited), it can't be done on a gameboy...hell, the SNES couldn't do a voxel routine well w/o the help of the trusty SuperFX chip.

Put them all together and what do you have? (1)

Reflex (34932) | more than 14 years ago | (#1630195)

Each of the current graphical techniques is adept at different things. If we used Psi, polygons and voxels together we could create some stunning environments etc.

Just a thought.

Anyone know a reason why it can't be done...?

No Hoax, sorry! (2)

Daniel (1678) | more than 14 years ago | (#1630196)

I just noticed that there's a link to a homepage at the bottom and other people can run the demo. So it's not a hoax, but I stand by my other comments -- the journalist is confused :)

Fractal graphics are, as many other people have said, interesting but not revolutionary. I personally doubt they work well in the general case; unless you can get a fractal that looks just like any given object (say, a car), you'll have to start building objects up from fractals, at which point you run into the same problem that we have with polys: if you look close enough the illusion of an object vanishes. The only difference is that instead of turning into flat polys close up, the objects will turn into fractals close up. (And no, reality is not made of fractals, so this won't really work that much better.)

OTOH, for things that this works well for (trees, waves, clouds) it might be interesting to have a fractal-rendering subsystem added to a 'traditional' graphics system -- only problem is, how do you do the Z-buffering? But I'm sure someone can work that out.

Or maybe he's talking about procedural textures -- again, neat but not particularly new. My university actually has a research group looking into the possibilities of these things to do 'non-photorealistic rendering'.

Daniel

This Kind of thing makes me mad... (1)

Darksky (58431) | more than 14 years ago | (#1630197)

remember this commercial?... : *Terminator2 music plays* "This Winter... General Motors will turn plastic into metal....."
i was SO excited! just think, super-light motors! think about the structural applications!
damn thing was a f@ckin' CREDIT CARD....

Re:Sometimes ray tracing is the fastest algorithm (3)

Anonymous Coward | more than 14 years ago | (#1630198)

I'm sorry, but no. When people talk about ray tracing they are almost always talking about an algorithm similar to the one presented in the classic Turner Whitted paper of 1980 entitled "An Improved Illumination Model for Shaded Display". This traces rays from the eye out into the scene just like ray casting. The difference between ray casting and ray tracing is that tracing is done recursively while casting is not. By that I mean that in ray tracing, after a primary ray from your eye hits a point in the scene you then spawn new reflected and/or refracted rays from that intersection point, and those rays spawn other rays and so on. In ray casting you just stop with the first hit and call it good enough. Consequently ray traced scenes can (and usually do) have lots of interreflections between objects with mirror-like surfaces, but in Wolfenstein and Doom everything is opaque and diffuse.
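The cast-vs-trace distinction in that paragraph fits in a short skeleton (a sketch only; the scene intersection is stubbed out and all names are mine, not Whitted's):

```python
MAX_DEPTH = 3

def background():
    return 0.0

def shade_local(hit):
    # Stand-in for diffuse shading + shadow rays at the hit point.
    point, normal, reflectivity = hit
    return 0.5

def reflect(d, n):
    """Mirror direction d about unit normal n."""
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2 * dot * b for a, b in zip(d, n))

def nearest_hit(origin, direction):
    # Stub: real code intersects the ray with the scene and returns
    # (point, normal, reflectivity) or None.
    return None

def ray_cast(origin, direction):
    """Ray casting: stop at the first hit and call it good enough."""
    hit = nearest_hit(origin, direction)
    return shade_local(hit) if hit else background()

def ray_trace(origin, direction, depth=0):
    """Ray tracing: recursively spawn secondary rays at each hit."""
    hit = nearest_hit(origin, direction)
    if hit is None or depth >= MAX_DEPTH:
        return background()
    point, normal, reflectivity = hit
    color = shade_local(hit)
    if reflectivity > 0:
        color += reflectivity * ray_trace(point, reflect(direction, normal), depth + 1)
    return color
```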

Now there is such a thing as tracing rays from the lights, but nowadays that is typically referred to as "backwards raytracing" which is confusing because physically speaking that's forwards. So some confusion is understandable.

But techniques that use this backwards raytracing typically just do a pass with backwards tracing to deposit light in the scene, and then actually do the rendering with a more conventional raytracing pass (from the eye). Arvo was the first to use this technique I believe, in his "shadow maps" [Arvo, James: "Backward Ray Tracing" ACM Siggraph Course Notes 1986]. Jensen's photon maps are a more refined version of similar technology [paper here [mit.edu]].

Didn't Mico$oft.... (1)

afniv (10789) | more than 14 years ago | (#1630199)

....invent this? I guess they will two years from now.

~afniv
"Man könnte froh sein, wenn die Luft so rein wäre wie das Bier"

Isn't all its cracked up to be... (0)

Anonymous Coward | more than 14 years ago | (#1630200)

Taking a look at the screenshots on the guy's webpage, it just looks like a simple terrain renderer: no moving objects, no complex objects, just rolling terrain. Neat, but not anywhere near "revolutionary"... I think maybe this guy is related to that other guy who said he could broadcast TV-quality video over 14.4 connections using a new "compression method"...

Re:Sometimes ray tracing is the fastest algorithm (2)

Sloppy (14984) | more than 14 years ago | (#1630201)

But techniques that use this backwards raytracing typically just do a pass with backwards tracing to deposit light in the scene, and then actually do the rendering with a more conventional raytracing pass (from the eye)

Not sure if this is the same as what you're talking about, but what I used to do back in my "graphics days" was: I'd trace rays outward from the eye. As they hit reflective or refractive surfaces, I'd recursively trace from there. But if they hit an opaque surface, I would trace rays from that point to each light source. If the ray was unobstructed, then the point was illuminated; if it was blocked, then it was in shadow.

Heh, I also faked the soft fuzzy edges of shadows (penumbrae) by implementing each light source as a cluster of small light sources that were very close together, but at distinctly different points. That worked great. :-) Geez, really getting off-topic here, sorry.
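That light-cluster trick reduces to averaging a handful of shadow tests; a sketch (all names mine, `occluded` stands in for whatever visibility test the tracer uses):

```python
import random

def lit_fraction(point, light_pos, occluded, samples=16, radius=0.1):
    """Fraction of a jittered light cluster visible from `point`.

    1.0 = fully lit, 0.0 = full shadow, values in between = soft edge.
    """
    visible = 0
    for _ in range(samples):
        # Each sample is a small light displaced slightly from the original.
        jittered = tuple(c + random.uniform(-radius, radius) for c in light_pos)
        if not occluded(point, jittered):
            visible += 1
    return visible / samples
```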


---

Re:Try the real thing (2)

mrogers (85392) | more than 14 years ago | (#1630202)

I think the comparison to analogue synthesis stemmed from the use of combinations of simple algorithms, rather than tables of coordinates, to produce a complex texture.

An analogue synth has oscillators which produce simple tones such as sine, triangle and square waves. By combining them using simple circuits such as ring modulators, envelope generators and resonant filters, it produces complex sounds. You only need a tiny amount of information to describe all the settings on an analogue synth (a "patch").

Early digital synths used wavetables (or samples) to produce complex sounds. This makes it easier to reproduce the sound of a real instrument (you just sample it and then play it back), but the patches are much larger.

(To complicate matters, many digital synths now emulate analogue synths using software models.)

Polygon-based graphics are similar to wavetable synthesis - you use a table of points to reconstruct a surface by drawing straight lines (or curves, if you have the processing power to spare) between each point and the next. 3D worlds created in this way require a lot of memory to store, or a lot of bandwidth to transmit.

Speculative part:

Psi seems to use combinations of simple waveforms to generate 3D worlds. I imagine this would generate random rolling terrain very nicely, but it would be hard to design a landscape "to order". I suppose you would design it using conventional 3D software, and then use Fourier analysis to extract the fundamental waveforms from the complex surfaces. Then you just send (or store) those waveforms, and the rendering engine has the much easier job of just recombining them.
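Purely as a toy illustration of that speculation (nothing here comes from Psi itself; the coefficients are invented): a terrain can be described by a handful of waveform parameters instead of a table of vertices, and the renderer just recombines them at each point.

```python
import math

# A "landscape" described by just a few (amplitude, freq_x, freq_y, phase)
# tuples -- a dozen numbers instead of thousands of stored heights.
PATCH = [(4.0, 0.05, 0.03, 0.0),
         (1.5, 0.20, 0.17, 1.3),
         (0.4, 0.80, 0.95, 2.1)]

def height(x, y, patch=PATCH):
    """Recombine the waveforms at (x, y); nothing is stored per vertex."""
    return sum(a * math.sin(fx * x + fy * y + ph) for a, fx, fy, ph in patch)
```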

I wonder if this technology could be used to create a new generation of samplers which would sample a sound, take it apart using Fourier analysis or whatever, and work out how to reconstruct it using simple waveforms? That would be very useful for increasing the capacity of samplers (and audio CDs, portable digital music players etc.).

He invented the Fractal!! (0)

Anonymous Coward | more than 14 years ago | (#1630203)

From what was written, it sounds like the "new" technology is "use fractals for graphics".
Someone explain the breakthrough please.

Mars Terrain Demo Explained (1)

Chris Pimlott (16212) | more than 14 years ago | (#1630204)

The author of the original Mars terrain demo has made an explanation of the algorithm used here [hornet.org].

I'd like to see a version of this under Linux.

Whatever... (1)

Korat (100097) | more than 14 years ago | (#1630206)

What a load of Buffalo Biscuits. That thing is an idiot test, plain and simple; if that "impressed" Nintendo, then it's just another sign of their detachment from reality. I think I'll dig my Amiga 500 out of the attic and run vastly superior demos on its 7MHz processor. I'll say this for it, though: it was a nice blast from the past.

this is new? (1)

csean (77908) | more than 14 years ago | (#1630207)

The article was so non-technical, I really don't know what this guy did. But haven't people been generating landscape images from iterative equations for a long time?

Hey, Sony... you listening? (0)

Anonymous Coward | more than 14 years ago | (#1630208)


Seeing as how PC's and workstations are being obsoleted by the god-like PSX2, I wonder what Sony could do with this Wonderful, Superb New Technology....

Re:Um. Critical thinking suggested. (1)

Panaflex (13191) | more than 14 years ago | (#1630210)

You know, not everything requires floating point. There is a lot of good work in real time that is accomplished in 8-bit integer format. It all depends on how much accuracy you want to lose.

Roger

Re:Laugh (0)

Anonymous Coward | more than 14 years ago | (#1630211)

That was my point, actually. Even when set to Plain Old Text, it didn't properly convert the "sarcasm" tags I enclosed the statement in. Seriously, as a graphics programmer myself I take every "revolutionary leap forward" in computer graphics with a shaker of salt. (That includes the PSX2.) Much of it is the same old stuff ramped up to take advantage of higher processor or video card speeds, etc. It appears to be just a fractal-based landscape, maybe tweaked by some sort of genetic algorithm. It would be interesting, however, to see what a game company could do with something like this. It would obsolete all of those Super Mario walkthru books in a hurry.... each time you play the level the scenery changes ever so slightly :)

ever heard of the demo scene? (3)

j1mmy (43634) | more than 14 years ago | (#1630212)

It looks like simple voxels to me -- a rendering trick that's been around for almost a decade. For those of you who are amazed at what can be done with 74k, take a look at this [hornet.org] little program from 1994.

Re:Try the real thing (1)

Panaflex (13191) | more than 14 years ago | (#1630213)

Actually, what I have been working on is a neural network to "decompose" a sound into "fundamentals" (basically matching waves from a table against a given sample wave), and then storing the wave type, length, and amplitude. Thus 128 bytes of wave could be stored in 6 bytes. The hard part is generating a table of wave types.
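A brute-force stand-in for that matching step (not the neural network itself; a sketch under my own assumptions): pick the table wave whose least-squares-scaled shape is closest to the input chunk, then store only the wave index and amplitude instead of the raw samples.

```python
def best_match(chunk, table):
    """Return (wave index, amplitude) of the table wave closest to `chunk`."""
    best = None
    for idx, wave in enumerate(table):
        # Least-squares amplitude for this candidate wave.
        denom = sum(w * w for w in wave) or 1.0
        amp = sum(c * w for c, w in zip(chunk, wave)) / denom
        err = sum((c - amp * w) ** 2 for c, w in zip(chunk, wave))
        if best is None or err < best[0]:
            best = (err, idx, amp)
    _, idx, amp = best
    return idx, amp
```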

Yawn... (1)

indigo@dimensional.c (55477) | more than 14 years ago | (#1630214)

Anyone else remember doing this kind of stuff with Fractint in, like, 1992?

And what is with this analog crap? Extremely uninformed mediot, methinks...

Re:Fractals.... (0)

Anonymous Coward | more than 14 years ago | (#1630215)

I thought Michael Barnsley already had a patent on image compression using Iterated Function Systems. IFS, Inc. He cracked the problem of compressing images of arbitrary scenes - not just those easy to render with fractals - mountains and trees. In the late 80's. Yawn.

Re:This is new? (0)

Anonymous Coward | more than 14 years ago | (#1630216)

That's a good point... didn't all those old ASSEMBLY competitions have 10k, 20k, 100k filesize categories for entries? And that was back on old 386's... 5-minute VR flybys of cities, crazy effects, 3D morphing, all way more complex than the crappy Nervana "landscape scroller" routine... smaller, too.

Has anybody checked out Terragen? (0)

Anonymous Coward | more than 14 years ago | (#1630217)

It uses fractals to create absolutely mindblowing landscapes. The Psi engine, or whatever it's called, looks like it's trying to accomplish the same thing, but it's nowhere in the 'genius' realm.

Re:Whats so special about that (1)

turbohavoc (79880) | more than 14 years ago | (#1630218)

ahh... so it isn't Java then...

It's evil Bill's personal Java that's supposed to work only in IE...

I won't obey though, and won't even try to run it, ignoring Bill's feeble attempts to make me run IE...

Re:Signal/Noise=0 (1)

Mister Attack (95347) | more than 14 years ago | (#1630219)

Ok, here are the facts I pulled from the article:

  1. This guy has a "new" way to draw 3D objects
  2. hypehypehype
  3. it's allegedly cheaper, processor-wise, than polygons
  4. hypehypehype
  5. The author doesn't know much about what's actually involved in this drawing process
  6. The results don't look all that great
Ok, so now here's my question: Why limit yourself to what a Game Boy can do??? The author of this article stresses that the guy with this technique wants only the processing power of a Game Boy. WHY??? It looks like much, much more could be done to make these pictures prettier. Why not use the sort of processing power that is available in the real world??

Re:Signal/Noise=0 (1)

astroboy (1125) | more than 14 years ago | (#1630220)

Further, the fundamental laws of physics governing lights at the quantum level are not fully understood.
This isn't true; if there's a better understood, more accurate theory -- for any physical phenomenon -- than QED is for electrodynamics, I'd like to know what it is.

You might be able to make it analog! (0)

Anonymous Coward | more than 14 years ago | (#1630221)

Be careful, guys. There is such a thing as an analog integrated circuit chip! A lot of those iterated equations are really only finite difference equations which allow you to approximate solutions to differential equations. These same differential equations can be modeled directly with analog circuitry as well -- haven't you ever seen the Rössler attractor on an oscilloscope? I'm not so sure how IFSs would be done, but there could very well be a way...
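The finite-difference point is easy to make concrete. This is a forward-Euler sketch of the Rössler system that comment mentions: the discrete analogue of what an analog circuit integrates continuously (step size and parameters are the usual textbook choices, not from the post):

```python
def rossler_step(x, y, z, dt=0.01, a=0.2, b=0.2, c=5.7):
    """One forward-Euler step of the Rossler equations."""
    dx = -y - z
    dy = x + a * y
    dz = b + z * (x - c)
    return x + dx * dt, y + dy * dt, z + dz * dt

def trajectory(n=1000, state=(0.1, 0.0, 0.0)):
    """Iterate the difference equation n times from an initial state."""
    points = [state]
    for _ in range(n):
        state = rossler_step(*state)
        points.append(state)
    return points
```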

Old: IFS and Procedural shaders (1)

Steeldrivin (32368) | more than 14 years ago | (#1630222)

As others have mentioned, he's talking about IFSes.

Also, Pixar's Renderman uses procedural shaders, where the appearance is calculated dynamically, instead of using pre-generated textures (though that is also an option).

Instead of scanning in a wooden surface and tiling that image, Renderman can use a shader which generates a wooden surface; for wood that looks different, adjust the shader's code or pass in different parameters.
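This isn't RenderMan SL, just a toy in the same spirit: the "wood" is computed from position by a small formula, and different-looking wood comes from changing parameters rather than swapping texture images (the ring/turbulence formula here is my own invention, not Pixar's).

```python
import math

def wood_shade(x, y, ring_freq=8.0, turbulence=0.3):
    """Return a 0..1 ring intensity at (x, y); tweak parameters for new wood."""
    # Distance from the "trunk axis", perturbed so rings aren't perfect circles.
    r = math.hypot(x, y) + turbulence * math.sin(5 * x) * math.sin(5 * y)
    # Concentric bands of light and dark grain.
    return 0.5 + 0.5 * math.sin(ring_freq * r * 2 * math.pi)
```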

Doesn't current 3d uses many simple eq? (1)

Courier (91998) | more than 14 years ago | (#1630223)

I am not an expert at this, but to me what this "new" tech does sounds a lot like most other graphics methods. Don't 3D polygons work by having lots of equations done by the CPU to make a scene? And aren't most of those equations quite simple? And why use the word "simple"? What is simple? For us, seeing is simple, but for a computer it isn't.

Think of it this way: we can see so easily because we allocate the largest portion of our brains to that single task. If we assume that visualising is similar to seeing, then matching it takes similar processing power, and I don't think any computer can match the brain at overall processing power yet. What I mean is, you can only go so far with clever methods. We can all see it in the CPU race between AMD and Intel: AMD may have the better chips, but Intel has the brute MHz (until recently), and Intel was winning.

Personally I am not against this, but it isn't very impressive. I wouldn't mind cheap "real world like" graphics, but I am not going to jump up and down at this. I just can't believe that someone working by himself with little help can hope to do what others working in large companies with big R&D can't. Just look at Hotmail addresses: how many "somehandle9999"s do we need before people get the point that if you thought of it, 9999 other people thought of it too?

URL for hidden Nervana docs (1)

RobotWisdom (25776) | more than 14 years ago | (#1630224)

I had to go thru Google to find this -- it may not even be linked via his homepage -- but here's [nervana.com] a big ol' page of docs.

I haven't looked at them yet but I also tracked down an older demo at Info-Mac that includes algorithm info for an earlier wireframe version. (Skip the useless 2Mb mov promo.)

The Mac demo of the new version downloads as 14k but unstuffs to 4Mb (!), and it includes an interactive terrain generator, which impresses me.

His writings explain that he started this as (what I call) a 'philosophy lab' where modelling 'mind' based on Bertrand Russell's ideas (!?????) was his main goal.

The Info-Mac demo is wireframe but includes a bunch of little monkey-dots whose eyes you can look thru, zipping around too fast and too tiny for me to have fun with (yet).
