Revolution in Graphics?

wilton writes "A technology genius has Silicon Valley drooling - by doing things the natural way, writes Douglas Rushkoff in today's Gaurdian. This project has been going on for a couple of years now; they have demos for Windows and Be. The idea is not to use rendering and polygons to create scenes, but instead to build them up from a molecular level, with apparently amazing results."
  • by PG13 ( 3024 ) on Thursday October 07, 1999 @11:54PM (#1630126)
    First, it in no way simulates molecules, as both the article and the header imply... it merely uses iterated equations.

    Secondly, and more annoyingly, it is still entirely digital... stupid reporters.
  • The article in the Guardian is not very informative;
    try the original nervana site [nervana.com],
    where you can even get the software, for Windows, Mac and Be. The screenshots
    don't look bad, but I have my doubts as to what this
    is all about - then again, I'm a simple biologist.

    It looks to me like a hoax, or at least like
    someone trying to get into the news - you can't find
    source code or any details. Could some good soul,
    versed in computer science, explain to me what's so interesting about this project?

  • There's a demo of this at
    http://www.nervana.com/psi/
    (Winoze, Be and Mac)

    It looks quite good, but very basic. Hard to tell really if it's going to be revolutionary or not.
  • Exactly how is this new, and what is it about this that qualifies this man as a "genius"?

    I've seen better looking graphics in 5 year old euro-demos running on a 386 DOS box.
  • This is the coolest 74K ever. It's the smoothest VRML browser I've ever seen and an animated scene all in a 74K app! Granted it's not Q2 yet but it's damned impressive. Download it.

    No, I'm not affiliated with this guy/company; I'm just real impressed.



  • I've downloaded the demo: all it seems to be is some garish landscape generator you can run through; the sky/cloud moving thing ain't that cool. I've seen better water in the visualisations for my MP3 player (Sonique, for those interested). Until I see a demo of this "molecular" technology that is on par with or better than the 3D game engines of today, I'll remain a skeptic. Still, it might only be the beginning of something great. Must keep an open mind.
  • by cluke ( 30394 )
    "In today's Gaurdian?" Guardian, surely! ;)
    Though this may be intentional - the Guardian used to be notorious for its typographical errors (it's often referred to as "The Grauniad")

    To get back on topic, their knowledge of tech matters is less than stellar (though not as bad as, say, the Sunday Times) so they may have just fallen for some marketing hype. I can scarcely imagine any Nintendo execs being stunned at that little demo, impressive though it is for 74k. (Though I've seen better on 4K Amiga intros)
  • It's just a landscape engine, nothing more. 3D games contain a lot more stuff than just hills and lakes. It's still digital (doh). Low CPU requirements? Please, the demo (400x240 I believe) drains 90% of my CPU resources - and it still looks like *hit. My CPU is not a Z80 (which the article claims would be sufficient), but a K6-2 running at 400MHz. I remember seeing cooler landscapes in demos, and with far lower CPU requirements.

    I'm not saying that this technology isn't worth anything (how could I, I first heard of it 15 mins ago). It still looks like it needs a lot of development, though.

  • For what it did, it didn't drop my system performance much at all. As a future holy grail for the precise and realistic 3D engine, however, I'm rather skeptical. The horrible mouse interface (pretty random as far as what dragging in any given direction does) in the demo makes it very hard to tell, but this method of iterated equations seems to have a problem: no consistently accurate solid 3D shapes. Sizes and relative shapes seem to stretch around far too much with a change of perspective. And I imagine that the really beautiful, fractal, super-complex-at-any-viewing-distance images this tech could create wouldn't stay consistent as you move around. Apparent textures would swim all over the place. Maybe this is just a limitation of the demo, or my flawed analysis of the tech. But the reason polygon engines (or voxel engines!) are so popular is that they have consistent world geometry that works intuitively from any viewing angle, making them well suited for virtual reality in which you can interact - even at the high speeds of Quake-style games. This engine might turn out to be better suited for acid trip-heads than Quake4...
  • Apparently the people who wrote this article aren't up with modern times, or something. If you check out some of the 40kB demos for the Amiga, you'll see more impressive stuff, and those don't require a 100MHz machine (the stated requirement for the Win32 demo on the nervana.com site). As far as using mathematical equations to approximate the real world "better" than polygons, this has been done before. An example would be the procedural textures used in Lightwave 3D, which use a bunch of algorithms to simulate different real-world textures, using only a fraction of the memory of their bitmapped counterparts.

    Of course, I could be completely wrong about these demos, but IMHO this is hardly revolutionary, or even the slightest bit impressive. I would expect that if this were truly something more than hype, there would be more substantial information at their website, or at least a demo which is actually impressive.

    What would be nice to see is 3D accelerators with support for some procedural textures on the card, and then having those features actually used in games. You can achieve some very impressive effects with such things, although I suppose most people see it as easier to just add more memory to the card nowadays. I remember reading that one card due to be released a few months ago (Permedia 3?) was supposed to have such features, but I'm not sure.

  • That was a really bad article, and I don't find the Nervana site that informative either, but my guess is that he wants to (or already has) put support for iterated function systems on the graphics card.

    Use of IFS in computer graphics can hardly be regarded as news, but I guess putting it in hardware, making it possible to draw directly into video RAM, would make the technology usable for games as well. This way you don't need that many polygons to draw an intricate structure.

    Lars

    --
  • I downloaded the program, and I've seen better computer graphics in the '80s.

    If you still think it's good, you should check out this [eyeone.no], a 50-something-K voxel engine in Java (warning: Explorer 4 only).

    "The future is already here,
    it's just not evenly distributed yet"

  • by Anonymous Coward
    Although the article didn't mention it, it seems like he is simply using known fractal rendering techniques for the landscape and water. It's not hard to come up with the rules for the quality he has achieved in the demo program and screenshots. What caught my attention in the article was the comment about doing creatures with his simple rules. I didn't even see any vegetation even though research on fractal plants has been going on for over a decade in the computer graphics community. Unfortunately, it sounds like this is another case of an uninformed journalist overhyping simple technology.
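
The fractal-plant work mentioned above usually means L-systems (Lindenmayer systems): rewrite a string by simple rules, then read the result as turtle-graphics commands. A minimal sketch in Python, using a textbook rule and branching angle, not anything from Nervana's code:

    import math

    # Classic bracketed L-system plant: F -> F[+F]F[-F]F (a standard
    # textbook rule), with a branching angle of 25.7 degrees.
    AXIOM, RULES, ANGLE = "F", {"F": "F[+F]F[-F]F"}, math.radians(25.7)

    def expand(s, depth):
        # Rewrite every symbol according to RULES, `depth` times.
        for _ in range(depth):
            s = "".join(RULES.get(c, c) for c in s)
        return s

    def turtle_segments(s, step=1.0):
        # Interpret the string: F = draw forward, +/- = turn, [ ] = branch.
        x, y, heading = 0.0, 0.0, math.pi / 2   # start pointing "up"
        stack, segments = [], []
        for c in s:
            if c == "F":
                nx = x + step * math.cos(heading)
                ny = y + step * math.sin(heading)
                segments.append(((x, y), (nx, ny)))
                x, y = nx, ny
            elif c == "+":
                heading += ANGLE
            elif c == "-":
                heading -= ANGLE
            elif c == "[":
                stack.append((x, y, heading))
            elif c == "]":
                x, y, heading = stack.pop()
        return segments

    # Four rewriting passes give a 625-segment bush from an 11-character rule.
    print(len(turtle_segments(expand(AXIOM, 4))))

The whole plant is one axiom, one rule and one angle - a handful of bytes - which is why this line of work keeps coming up in discussions of compact scene description.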
  • > The horrible mouse interface (pretty random as far as what dragging in any given direction does)

    It would have been nice if they had provided some documentation, but it seems that where on the screen you click controls your movement; no dragging involved (the closer to the left/right edge, the faster you turn; the closer to the top/bottom, the faster you move).

    > And I imagine that really beautiful, fractal and super-complex at any distance of viewing images that this > tech could create wouldn't stay consistent as you move around

    Yeah, up close the boundaries between sections look really good, but if you move about 10 seconds away from the island, the water starts forming some really wacky moiré and the island gets all blocky.

    If it weren't for that and the way the water "laps" this would be fun to play with. That is, if it actually did something.
  • > This is the coolest 74K ever.

    I suggest you check out some of the 4k intros on
    ftp.scene.org. They've got 3d AND sound too. In 4096 bytes.
  • by cd-w ( 78145 ) on Friday October 08, 1999 @12:34AM (#1630141) Homepage
    /begin{rant}
    Constructive Solid Geometry (as used in POV etc.) is also an alternative to polygon-based rendering.

    For those that don't know about it, with CSG the scene is built up from primitive blocks (e.g. cones, spheres, cubes, rods, etc). More complex objects are made by using boolean operations (AND, OR and DIFF) on the primitives. For example, a ring can be made by subtracting (DIFF) a rod from the centre of a sphere. Solid textures can be applied to the resulting objects, and raytracing can be used to produce shadows, reflections, transparency, etc.

    Unfortunately, CSG and raytracing seem to have been overlooked by the graphics card manufacturers. The new effects proposed by 3dfx (motion blur, soft shadows, etc.) can be achieved very simply using stochastic raytracing. Raytracing has a reputation for being very processing-intensive, but I am convinced that it could be done efficiently in hardware, and the quality of the graphics would be far greater than polygon rendering.

    In relation to the article - the Psi technique looks interesting, but seems to have very limited scope for application. IMHO, graphics card manufacturers should look at raytracing and CSG instead.
    /end{rant}
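
One way to make the boolean composition above concrete is with signed distance functions, where OR, AND and DIFF become min, max, and max against a negated operand. This is only a toy illustration of the semantics - POV-Ray itself intersects rays with primitives analytically rather than evaluating distances like this, and the names are my own:

    import math

    # Signed distance functions: negative inside the solid, positive outside.
    def sphere(p, r):
        return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - r

    def rod_z(p, r):
        # infinite cylinder (rod) along the z axis
        return math.sqrt(p[0]**2 + p[1]**2) - r

    # CSG on distances: OR = min, AND = max, DIFF(a, b) = max(a, -b).
    def ring(p):
        return max(sphere(p, 1.0), -rod_z(p, 0.5))   # sphere minus rod

    # On the axis we are inside the rod, hence carved out of the ring:
    print(ring((0.0, 0.0, 0.0)) > 0)    # True: outside the solid
    print(ring((0.9, 0.0, 0.0)) < 0)    # True: inside the remaining material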


  • No. The 4K implementation of the first level of Descent, replete with MPU-401 music and all textures, was OH MY GOD. This isn't nearly as impressive as most of the iterated landscape jobs I've seen bandied around, although the Earth's Landscape dataset IS cool.

    Yours Truly,

    Dan Kaminsky
    DoxPara Research
    http://www.doxpara.com

  • Of course it drains 90-100% of your CPU, it's just thrashing around rendering as fast as it can. It would drain that amount on a CPU twice as fast. What would be more interesting is if they put a framerate counter up there so you could get some idea of the fillrate it's delivering.
  • I have this deja vu: all the pictures look like those 'fractal' graphics from '80s games like Rescue on Fractalus [netaxis.com] on old Ataris or Captain Blood [fatal-design.com] on the Atari ST/Amiga. Looks like they re-invented them and filled them in with some colors. But displaying landscapes with simple formulas is MUCH simpler than displaying complex graphics. If they showed a tree based on their 'formulas', at a high framerate, that would impress me. But not landscapes...
  • The game is about green themes like eco-friendliness, but the 3d algorithm is quite standard, and does not use "natural mathematics". Just looks like an old miggy 40k intro to me.
  • by Dreamweaver ( 36364 ) on Friday October 08, 1999 @12:56AM (#1630147)
    A lot of you seem confused about why this is a cool thing. The point isn't that the graphics look amazing right now... it's that they're generated in a fundamentally different, possibly better way.

    Of course polygons look prettier... look at the current difference between painted pictures and polygon graphics. With a painting, the artist is simply putting colors on a flat surface in such a way that it simulates reality. Relatively easy to do, since you just need to put colors there in a suggestive way (I can't do it myself for beans, but you get the point).

    Now, in a graphics program, you create a 3D object out of polygons, then place texture images over them. This is more difficult because you have to create the actual 3D object... like sculpting... you can't just suggest 3D with shadow, you have to Make 3D and let the light create the shadow naturally. The textures aren't really roughness or shininess, just images that Look rough or shiny and make any light sources react the way they probably would... this saves memory by making the shape Look more complex than it really is. A smooth cylinder might look just like a tree trunk because it has a rough-appearing texture. But it's not really a tree. If you get too close you get flattening of the texture... especially in realtime engines for games, because they can't raytrace fast enough on modern computers, so they use simple rendering. It can look really, really good... but can also look REALLY bad.

    Now, I may have misunderstood the article and webpage for this technology, but what I got out of it is that this uses something like a fractal generation system, using a formula and a number of iterations, to generate real objects. Not just a mesh of points, some of which have polygons drawn between them, but something closer to a physical reality. Like a fractal, it would look fine up close or far away, and like a fractal, because it's based on iterating a simple algorithm over and over, it would just be a matter of doing math rather than crunching z-buffer coordinates into 2D images like we do in polygon rendering engines.

    What's really important here is the opportunity for data transfer. All those cyberpunk novels make use of ubiquitous virtual worlds where people and environments are rendered seamlessly, usually using small computers, in realtime, with wireless modem links. So far this has been no more than a dream, because no personal computer could hope to handle that kind of load, no computer can raytrace a complex scene in realtime, and there'd be no way to send that much data with anything like current modems. This technology doesn't make it all come true in a flash, but it does improve the chances immensely. You can simply transfer location data and a formula rather than mesh coordinates and transforms... much, much less data (see the sketch after this comment). You don't need the kind of heavy number crunching required for raytracing, because of the way the objects are generated, and you don't have to worry about things like textures, because you can just make the actual object bumpy, smooth, jagged, whatever.

    Now, the biggest complaint is obviously that it doesn't compare to modern polygon graphics. There's a simple reason for this: it's not a highly funded, industrially motivated, relatively old technology. It's fairly new and being developed by a few guys. You can't expect miracles overnight... but what he's got looks pretty good considering how new it is. You all talk about how wonderful demos look with current tech... sure they do... that's what they're for. This demo is to demonstrate that his technology Does work. If you had a time machine that could send a penny 5 minutes into the future, would you complain because it didn't look cool?

    Anyway, it's obviously no sure thing, but it does have a good deal of promise, and polygons can't last forever. Personally I think realtime-rendered 3D games look like crap. Raytraced scenes can look very nice, but all too often suffer from virtual unreality (that plasticky look everything tends to take on... obvious fractalism in complex objects, etc.). This, or something like it that builds up from basic principles into a complex object, will eventually be needed... just think about human interaction in a virtual environment: you can't very well create polygon meshes for every possibility... what if you broke a chair, how does it generate the broken ends and interior wood grain? If you bite into a cookie, how would you go about creating realistic crumbs in realtime?
    Dreamweaver
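
The data-transfer argument in the comment above is easy to make concrete. In a midpoint-displacement scheme, the entire terrain is determined by a seed, a roughness constant and a depth, so both ends of a modem link can regenerate an identical heightfield from a few bytes instead of shipping a mesh. A minimal 1-D sketch (my own illustration of the general idea, not Nervana's algorithm):

    import random

    def midpoint_terrain(seed, roughness=0.5, levels=8):
        # The whole "mesh" is reproduced from (seed, roughness, levels).
        rng = random.Random(seed)
        heights = [0.0, 0.0]            # endpoints of the profile
        amplitude = 1.0
        for _ in range(levels):
            out = []
            for a, b in zip(heights, heights[1:]):
                # displace each midpoint by a seeded random offset
                out += [a, (a + b) / 2 + rng.uniform(-amplitude, amplitude)]
            out.append(heights[-1])
            heights = out
            amplitude *= roughness      # finer detail gets smaller bumps
        return heights

    # Both sides of the link compute the same 257-point profile from 3 numbers.
    assert midpoint_terrain(42) == midpoint_terrain(42)
    print(len(midpoint_terrain(42)))    # 257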
  • I don't have Descent installed anymore (I don't know why, getting old I guess), but wasn't the executable pretty big? I've played with VRML off and on since the beginning, and am pretty proficient with Truespace and Blender. If a scene this simple were developed as a Truespace or VRML scene, it would be pretty dinky indeed. However, truespace.exe (ver 4) is a meg, plus all the various DLLs another 15 meg (granted, it edits, not just views). This is the "scene" and the viewer. I haven't played with any Linux VRML browsers yet, but I know that Windows VRML browsers are a couple of meg, easy.

    Not only that, notice the waves lapping on the shore... they may not be mathematically perfect, but they look "right" (albeit very fast and crude) to me.

    If I'm wrong (I have been before :) please educate me.

    Best Regards,

    Greg
  • never heard of the demoscene have you? there are even 4k's for linux now. this project is just plain shit and it should be removed from slashdot NOW. instead there should be a big link to ftp.scene.org (or perhaps www.error-404.com, which has some nice demos too)
  • A lot of 4k intros have generated landscapes and objects too, and yes, we have seen fractal-based objects too. Why is z-buffering not 'math', by the way? This project just got too much attention because someone did not read more than the article.
  • A wonderful little program (5.5k) by Tim Clarke [mailto] called MARS.EXE [ping4.ping.be] lets you move with your mouse through shaded voxel-based Martian terrain under a cloudy sky. It ran at fantastic speed even on a 386.

    Read the original usenet posting here [ed.ac.uk].

  • I can understand how using pseudo-analogue or even real analogue techniques can lead to more realistic images. Fractals to produce better-looking trees etc., sparks being blurred by the analogue signal (umm, interesting: improve reality by decreasing quality). I'm probably off the mark, but isn't there an issue of constraining these to produce an exact rendition of, say, a racing track? We can generate a load of curves to create a random track, but how would a _real_ track be accurately constrained? An accurate representation of, say, Silverstone can be done with simple co-ordinates, but I'm personally unsure what rules would need to be in place to generate it with this approach.

    Probably comes under the "nice idea that I don't understand" heading.

  • I think that this is a hoax (there is nothing at the website related to all the bull in the article) to promote a lame CDROM game.
  • by Anonymous Coward on Friday October 08, 1999 @01:35AM (#1630157)
    You are using your terms incorrectly. CSG refers more to the models themselves -- it is independent of rendering. For example, I can use a B-rep for CSG objects and render them as polygons.

    The most common alternative to polygon-based graphics is ray-tracing. This is a more pure form of sample-based graphics (i.e., I ask what the sample value is at a particular point rather than pushing pixels towards the end of the pipeline). My distinctions are slightly muddy here b/c I prefer to draw the demarcation lines based on illumination rather than model types.

    It is trivial to do CSGs in a ray-tracer (and many related techniques). It is extremely difficult in a polygon-based system; but it is not impossible.
  • By clever use of z-buffer reads and writes, it may be possible to do some CSG effects in hardware already.

    The problem I see with CSG is features like racetracks and landscapes. They don't really suit a CSG representation. Another thing to remember is all the other doohickeys that have to deal with your geometry representation (e.g. physics and AI).

    The move to polygons has seen the death of many kewl little tricks that existed when people could just plot pixels. More stuff may disappear if on-card geometry acceleration takes off. CSG would admittedly be interesting for this sort of thing, but a card whose hardware acceleration was based on CSG would make many operations that are simple today a bit of a bitch.

  • "He decided to use the Nintendo GameBoy as a standard for how much computing power a machine should have (in other words, very, very little)"

    So why does their demo require a minimum of 100MHz?

    It's not hard to see how underwhelming this project is.

    The reporter who wrote the article about this appears to have swallowed someone's marketing hype hook, line and sinker. He hasn't even done the usual journalistic work of going to an analyst and getting some sound bites (text bites?).

    The best-case scenario for Nervana is that they have been misrepresented by someone, maybe the writer of the article.

    They might have a good model for terrain representation but that hardly constitutes a revolution in graphics. You still have to do everything else.

    Using this as a base for graphics would be like the old days on the Amiga, when someone would come up with a neat video trick and try to make a game based around it. Inevitably the result was a contrived, usually crap (and mostly never-to-be-released) game.

    This stuff could possibly work as an OpenGL extension, but only if it can be implemented in hardware.




  • seems like the demoscene just invaded slashdot ;)

    I will just give you links you should follow if you want to see impressive quick, small 3d rendering code:

    • (4k) mesha/picard (great 4k) [scene.org]
    • gcube [scene.org] (4k) intro with cool solids interaction effects.
    • Bring it Back [scene.org] (64k) intro with very good sound/graphics synchro
    • discloned [scene.org] (64k) party version. if it doesn't work, try the final version [scene.org] which is 700k (bigger because of all the sound drivers I guess)
    • ... many more on scene.org

      I have more links on my page, see the DemosSelection link.

  • http://www.planetside.co.uk/terragen/ is much more impressive.
  • by jonathanclark ( 29656 ) on Friday October 08, 1999 @02:01AM (#1630163) Homepage
    Don't bother reading the article. It contains no real information and it's obvious the reporter is dancing around the subject.

    I don't know exactly what is being referred to here, but many alternatives to polygon rendering have been around for ages. Simulation of light reflection/refraction at the molecular level has been an ongoing area of research in the graphics community. The problem is that as you get closer to real life, exponentially more processing power is required. We can only hope for better and better approximation methods. Further, the fundamental laws of physics governing light at the quantum level are not fully understood.

    I'm highly skeptical that a 22-year-old is doing any work in this area. This work has very little application in the real-time graphics community, so why should Nintendo be interested?

    Perhaps they are referring to voxel rendering, which can be done in realtime and is a more likely project for a 22-year-old to undertake (who hasn't?). A large problem with voxels is the amount of memory required, so either the shapes must be generated on the fly procedurally, or they must be compressed using curves/wavelets, or a combination of both. The article mentions "parabolas and ellipses," so this might be what is being talked about? Voxels are in no way a representation of something "on a molecular level."

    I'm impressed the reporter managed to write such a long article without saying anything.

  • by oblisk ( 99139 ) on Friday October 08, 1999 @02:09AM (#1630164)
    I'm unable to reach the link, but there are a few more options than just positioning polygons for rendering. The solid modelling mentioned above is one that I know a little about. However, NURBS have been around for quite a while and are rather powerful and very useful.

    Definition:
    NURBS, Non-Uniform Rational B-Splines, are mathematical representations of 3-D geometry that can accurately describe any shape from a simple 2-D line, circle, arc, or curve to the most complex 3-D organic free-form surface or solid. Because of their flexibility and accuracy, NURBS models can be used in any process from illustration and animation to manufacturing.

    NURBS are really easy and flexible to use, as they are simply splines that can be adjusted via control points and different weightings. They have all but replaced polygons for character modelling in the past two years.

    I remember speculation about hardware that could render/raytrace NURBS and other spline-based models directly, without conversion to polys. However, I've yet to see it materialize.

    Some of the better NURBS modellers available are:
    Maya [sgi.com] A linux port of this is supposedly floating around SGI and some of the larger software houses.

    Rhino3D [rhino3d.com] Shame it's Windows-only, though there are reports of it running successfully under Wine.

    Enjoy, Oblisk


    ------------------------------------

  • ...then I think it sounds pretty cool.

    From the article:
    "He just passed through Silicon Valley last week demonstrating his homemade graphics engine, and everyone from the designers at Nintendo to programmers at Apple has been left in shock."

    They sound pretty impressed to me. If this is a real breakthrough - then NVIDIA, 3dfx etc. may be in big trouble...

    The question is - if this is so great - when is it going to be available/useful: later this year, or in ten years?
  • It is clear that the really clever bit about this engine is the way it works : i.e. not how it looks. As far as I can tell, there is no information available about this on any of the related sites (I've poked around in the Windows .exe as well). If the data for generating the island landscape consisted of, say, 256 bytes of polynomial orders & coefficients it would be remarkable. But we don't know (yet) . . .
  • "Raytracing has a reputation for being very processing-intensive, but I am convinced that it could be done efficiently in hardware, and the quality of the graphics would be far greater than polygon rendering."

    It has already been made for professionals:
    http://www.art-render.com/products/rdrive.html
  • I really feel bad about being so phenomenally unimpressed with this guy's life's work. Granted, I am not publishing graphics software myself, but I don't post my grade-school artwork either.

    I feel almost the same when my three year old daughter brings me a picture she drew. "oh, sweetheart, that's beautiful! I love you! Now go stick that to the refrigerator with the other ones."

    Did anyone else find it suspicious that the most recent "interview" was from September of last year? About a remarkably stupid sounding game? And the "demo" was billed as a way to view the island of nervana, even though it only vaguely resembled an island? The guy is 22 and designed the game, the graphics engine, and wrote the music for the game?

    The best I can hope for here is that the reporter was a friend of his trying to help him out, or maybe just really, really, hard up for a story.
  • Sounds like he is using "iterated functions" to render images. Take a look at this book: Fractals Everywhere [amazon.com] for a detailed description of the method.

    Doesn't sound particularly revolutionary....

    ...richie
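
Barnsley's book centres on the "chaos game" for iterated function systems: pick one of a few affine maps at random and keep iterating a single point. The famous fern is 28 published coefficients; a sketch using those standard values:

    import random

    # Barnsley's fern: four affine maps (a, b, c, d, e, f) plus probabilities.
    MAPS = [
        ( 0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),
        ( 0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),
        ( 0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),
        (-0.15,  0.28,  0.26, 0.24, 0.0, 0.44, 0.07),
    ]

    def chaos_game(n=50000, seed=1):
        rng = random.Random(seed)
        x, y, points = 0.0, 0.0, []
        for _ in range(n):
            r = rng.random()
            for a, b, c, d, e, f, p in MAPS:   # pick a map by its probability
                if r < p:
                    x, y = a * x + b * y + e, c * x + d * y + f
                    break
                r -= p
            points.append((x, y))
        return points

    pts = chaos_game()
    print(max(y for _, y in pts))   # the fern spans roughly 0..10 vertically

Twenty-eight numbers stand in for an arbitrarily detailed image, which is the appeal of the technique.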

  • You are of course correct. You can render a CSG model and you can raytrace a polygon model.
    However, CSG is IMO the easiest model for raytracing, as polygons are for rendering.

  • Of course it's a hoax; look at who wrote the article -- Douglas Rushkoff. Wasn't Rushkoff complaining the other day about the evils of technology? Well, today he's back to his day job as an uninformed hack constantly spreading the latest hype. Rushkoff's always been clueless.
  • > http://www.art-render.com/products/rdrive.html
    A snip at $20,000!

    Incidentally, 3dnow and SSE are entirely suitable for raytracing. Since they were released I've had the dream of writing a realtime raytracer.

    \begin{shameless plug}
    If anyone is interested, my first (suboptimal) attempt at producing a 3dnow raytracer is available at:
    ftp://ftp.dcs.ed.ac.uk/pub/cdw/ray/3dray.tar.gz
    Unfortunately I can't do any more until I get an
    AMD or P3 box. Feel free to copy/modify this code. Just let me know of anything you do.
    \end{shameless plug}
  • .."colouring which produces a surreal comic-book feeling to
    the engine. This has been produced in part by the mathematics required to describe the landscape in real time,
    meaning that the colouring remains relatively fundamental." So it HAS to look this crappy? Pretty sad. I have a name for it though, he could call it Retarded-PreSchooler-Vision... Pretty catchy isn't it?
  • Comment removed based on user account deletion
  • works on ie 5 also

    -----
  • "A technology genius has Silicon Valley drooling - by doing things the natural way," writes Douglas Rushkoff.

    "Another idiot certainly claiming to be an 'IT professional' writes a pointless article full of cliches, maybe trying to reproduce the style of Wired magazine, father of all the hype in this world" writes Stephan Tual.

    Oh boy I'm not in a good mood today :-)

  • Apologies for replying to my own post (bad form), but I hit 'Submit' too soon!

    Regarding the 3dnow source code. It was written for a K6-2 300Mhz machine that I no longer have.
    I have only compiled it with gcc (requires a recent version of binutils for the 3dnow part)
    under Linux. It doesn't currently do CSG either,
    only a fixed scene containing spheres. Unfortunately the 3dnow part doesn't actually give any speed improvement at the moment! I suspect that the femms overhead is too big - suggestions welcomed. I'd be interested to see how this runs on an Athlon.
  • by aheitner ( 3273 ) on Friday October 08, 1999 @03:41AM (#1630181)
    All noise, no signal. And I've seen much better results for landscape generation: check out MetaCreations' Bryce, based on the work of the grandfather of procedural texturing, Ken Musgrave. It's stunningly beautiful.

    "But this could be promising!..."
    True. I'll believe it when I see promising artwork. This reporter obviously got carried away; I'm in computer games and I'm just not impressed.

    "But Bryce isn't realtime!"
    True enough as well. Bryce is a raytracer; it takes a long time to render. Oooooooooooh.....I wish I could talk to you about this ... comments, Rosenthal or Scherer? I'm sorry, I just don't feel at liberty to disclose anything. Those of you that know us and our work, trust us...

    I do disagree with you on one point: No reason a 22 year old can't do this. Everyone in the basement is 19 or younger.
    > I remember speculation about hardware that could render/raytrace NURBS and other spline-based models directly, without conversion to polys. However, I've yet to see it materialize.

    Remember the adage "Triangles are the pixels of 3D". Until we see bezier surfaces and NURBS as primitives, we probably won't see such hardware for another 5 to 10 years.

    The problem with NURBS is that they are slow, in contrast to tossing a few more textured tris at the hardware, since that's what the hardware is optimized for.
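
For a sense of why spline surfaces cost more than triangles: even the cheapest exact evaluation of a Bézier curve point, de Casteljau's algorithm, is a pyramid of lerps per sample, against roughly one add per pixel for a scanline-interpolated triangle. A sketch:

    def de_casteljau(ctrl, t):
        # Repeatedly lerp adjacent control points until one point remains.
        pts = list(ctrl)
        while len(pts) > 1:
            pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
                   for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
        return pts[0]

    # A cubic does 3+2+1 = 6 lerps per evaluated point, per dimension;
    # a tensor-product patch repeats this in two directions per sample.
    print(de_casteljau([(0, 0), (0, 1), (1, 1), (1, 0)], 0.5))   # (0.5, 0.75)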
  • Could someone PLEASE remove this item?
    The idiots are getting way too much attention again.
  • by Daniel ( 1678 ) <dburrows@deb i a n.org> on Friday October 08, 1999 @03:47AM (#1630184)
    This sounds like a hoax. A big hoax. Here's the giveaway:
    He decided to use the Nintendo GameBoy as a standard for how much computing power a machine should have...and developed a series of simple equations that can be used to generate waves, textures, and shapes.
    Does anyone here know why polys, especially triangles, are the basis of most modern graphics systems? No? I'll tell you: it's because they're *EASY TO DRAW*. The equations are as simple as you can get; almost everything becomes linear interpolation and therefore only needs a single addition per pixel line. Waves are likely to need some sort of transcendental function (such as sine or cosine) to function properly -- something that requires either a massive hardcoded table, or a LOT of CPU time. Not to mention the need to toss either fixed-point or floating-point numbers around. GameBoys are 8-bit, aren't they? That doesn't give you much precision.
    Remember how you used to draw parabolas and ellipses in maths class?
    Um. There are three possibilities for drawing these:
    -> Use the equation directly. This involves a square root. Square roots are slow.
    -> For the ellipse, you can generate it using sines and cosines with a parameterized equation. The resolution on the parameter will determine how choppy the outside looks; even a resolution of 1 degree took a while on my TI-85 back in high school :)
    -> Iterate over the ENTIRE DISPLAY, applying the generic conic equation to each point; use this to find boundaries. Incredibly tricky, requires a square or two for each pixel, and is generally going to be a pain. (for the ellipses this is a little simpler, since you can bound it by the major and minor axes)
    Each element of such a display will require much more computation than a polygon; you could save a few polys this way, but I don't see it being the sort of revolutionary jump they describe.
    The article then goes on to state some fluff about plants and carbon atoms, claiming that quantum equations are 'simple' (I wish!) and suggesting that "Barbalet"'s stuff is built "from the ground up, just like nature does it." This isn't true, even if what they said is true, and has nothing to do with molecules and plants; he would be building his images up from shapes -- different shapes than are standard now, perhaps, but still just shapes. No image built "from the ground up, like nature does it," requiring the transmission of every molecule, is going to even be manageable by modern computers, let alone result in stuff that can be transmitted over modems and wires more easily than graphics images.


    The more charitable explanation is that this is a highly confused journalist who has run into ellipsoid 3D graphics or something similar and thinks it's cool.

    Daniel
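
For what it's worth, there is a fourth option the list above leaves out, and it's the one real rasterisers used: the midpoint (Bresenham-style) ellipse algorithm, whose inner loop is adds and sign tests only, with no square roots or trig. A sketch of the standard two-region form:

    def midpoint_ellipse(rx, ry):
        # One quadrant of an axis-aligned ellipse; mirror for the rest.
        pts, x, y = [], 0, ry
        rx2, ry2 = rx * rx, ry * ry
        dx, dy = 0, 2 * rx2 * y
        # Region 1: |slope| < 1
        d = ry2 - rx2 * ry + 0.25 * rx2
        while dx < dy:
            pts.append((x, y))
            if d < 0:
                x += 1; dx += 2 * ry2; d += dx + ry2
            else:
                x += 1; y -= 1; dx += 2 * ry2; dy -= 2 * rx2
                d += dx - dy + ry2
        # Region 2: |slope| >= 1
        d = ry2 * (x + 0.5) ** 2 + rx2 * (y - 1) ** 2 - rx2 * ry2
        while y >= 0:
            pts.append((x, y))
            if d > 0:
                y -= 1; dy -= 2 * rx2; d += rx2 - dy
            else:
                y -= 1; x += 1; dx += 2 * ry2; dy -= 2 * rx2
                d += dx - dy + rx2
        return pts

    print(midpoint_ellipse(8, 6))   # lattice points of one quadrant

The decision variable takes a few multiplies to initialise, but each pixel after that is additions and a sign test - the same property that makes lines and triangle edges cheap.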

  • I don't see why CSG isn't suitable for landscapes and racetracks.
    If you have a 'plane' primitive, you can use that as a base and just build on top of it using AND operations.

    In my original post, I should have distinguished between CSG and raytracing. It is perfectly possible to raytrace a polygon model if CSG
    is too cumbersome.
  • "He decided to use the Nintendo GameBoy as a standard for how much computing power a machine should have (in other words, very, very little)"

    So why does their demo require a minimum of 100MHz?



    Hmm, I have a 200MHz, 64MB RAM machine here at work. I'm running Outlook, AIM, IE 5, WinZip, this guy's graphicsy thing, the resource meter, McAfee Virus Shield, GetRight, and I'm streaming a radio station from England, and I still have plenty of processor left over... I dunno why you guys are complaining, this thing takes up almost nothing.

    Kintanon
  • I noticed that the software mentioned is available for Windoze, MacOS and BeOS, but not any free software OSes. Also, something tells me that this guy's not about to go free-software with his new stroke of "genius".

    Yes, I have to agree with you there. But until I realised that, I was encouraged by the following bit in the Guardian piece...
    Taking another cue from nature, Barbalet has organised his entire Nervanet project as a "public access development forum".

    Silly me, I thought this sounded a little like collaborative development. Sort of like real free software. Geez, I can be naive sometimes!

  • by SurfsUp ( 11523 ) on Friday October 08, 1999 @04:03AM (#1630188)
    Raytracing has a reputation for being very processing-intensive, but I am convinced that it could be done efficiently in hardware, and the quality of the graphics would be far greater than polygon rendering.

    Not just in hardware. Ray tracing was used in John Carmack's Wolfenstein - a classic example of how ray tracing can outperform traditional polygon rendering. In Wolfenstein the simplifying assumption is that just one ray needs to be traced per column of pixels in the viewport. It obviously works, for the special-case scenes that Wolfenstein used. The ideas were generalized somewhat in Doom, to allow for ceilings and floors. Raytracing was abandoned in Quake, in favor of traditional polygon rendering coupled with a kick-ass culling algorithm. But don't think that raytracing is out of the picture yet - hehe, pardon the pun.
  • I was a video game person as well (crack.com), but that is a different camp from theoretical computer graphics. Most young people who are into real-time graphics (i.e. games) aren't interested in what happens to light waves as they travel through all points in space at the same time. Plus you need something close to a Ph.D. in physics or computer graphics to be near the cutting edge. So... it's unlikely that someone under 22 could break ground in this field.

    I'm still not there myself, but Glassner's 2 volume series "Principles of Digital Image Synthesis" has a good intro to the subject of light and energy transport.

    I used to run the procedural texture mailing list a few years ago and we had Ken and Ebert on there. Ebert is still working on this stuff, he usually has something at siggraph, but Ken has moved on to other areas of interest. I just had a sweet offer to work on a real-time procedural texturing system for an upcoming game console, so I'm thinking about getting back into that.
  • Actually, Wolfenstein used ray-casting, which is different from raytracing...

    What ray-tracing does is take a light source and see where the light reflects (giving color).
    Ray-casting, on the other hand, takes your field of view and sees what it hits, i.e. what you would see. It limits itself by not allowing for reflections, etc.
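
The distinction is easiest to see in code. A Wolfenstein-style caster fires exactly one ray per screen column into a tile map and converts hit distance into wall height; no recursion, no reflection. A toy sketch (a fixed-step march stands in for the exact cell-stepping DDA id used):

    import math

    GRID = ["111111",
            "1....1",
            "1.11.1",
            "1....1",
            "111111"]          # '1' marks a wall tile

    def cast(px, py, angle, max_dist=20.0):
        # March along the ray until it enters a wall cell.
        dx, dy = math.cos(angle), math.sin(angle)
        dist = 0.0
        while dist < max_dist:
            dist += 0.01
            if GRID[int(py + dy * dist)][int(px + dx * dist)] == "1":
                break
        return dist

    def render(px, py, facing, fov=math.pi / 3, width=40):
        # One ray per screen column - the whole Wolfenstein trick.
        for col in range(width):
            d = cast(px, py, facing - fov / 2 + fov * col / width)
            height = min(int(10 / max(d, 0.1)), 10)   # nearer = taller sliver
            print("#" * height)

    render(1.5, 1.5, 0.0)   # one line of '#'s per screen column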
  • Looking at the example, it seems to be just that - voxels... why this is a huge concept, I don't know; I have an old intro that's around 2k, which ran on my 486sx25, that does voxels... (and looks better than this, I might add)...
  • héhé no, at least it gave us an occasion to advocate the scene ;)
  • Being a Gameboy programmer, I'll tell you this guy is on crack (or maybe some other drug... but anyway...). I've made a tri-plotter for GB, and even though I limited it (triangles were limited to one 8x8 sprite), I was easily able to push out 100+ triangles per second. You're probably thinking, 100+, bah - but when you're on a screen so small, it works.

    Now, this guy is doing voxels, which, while it can be faster (since it's so limited), can't be done on a Gameboy... hell, the SNES couldn't do a voxel routine well without the help of the trusty SuperFX chip.
  • Ah.... the classic Mars demo! that was cool sh*t back then. Hard to believe it was so small.
  • Each of the current graphical techniques is adept at different things. If we used Psi, polygons and voxels together, we could create some stunning environments etc.

    Just a thought.

    Anyone know a reason why it can't be done...?
  • I just noticed that there's a link to a homepage at the bottom and other people can run the demo. So it's not a hoax, but I stand by my other comments -- the journalist is confused :)

    Fractal graphics are, as many other people have said, interesting but not revolutionary. I personally doubt they work well in the general case; unless you can get a fractal that looks just like any given object (say, a car), you'll have to start building objects up from fractals, at which point you run into the same problem that we have with polys: if you look close enough, the illusion of an object vanishes. The only difference is that instead of turning into flat polys close up, the objects will turn into fractals close up. (And no, reality is not made of fractals, so this won't really work that much better.) OTOH, for the things this works well for (trees, waves, clouds), it might be interesting to have a fractal-rendering subsystem added to a 'traditional' graphics system -- only problem is, how do you do the Z-buffering? But I'm sure someone can work that out.

    Or maybe he's talking about procedural textures -- again, neat but not particularly new... my university actually has a research group looking into the possibilities of these things for 'non-photorealistic rendering'.

    Daniel
  • remember this commercial?... : *Terminator2 music plays* "This Winter... General Motors will turn plastic into metal....."
    i was SO excited! just think, super-light motors! think about the structural applications!
    damn thing was a f@ckin' CREDIT CARD....
  • by Anonymous Coward on Friday October 08, 1999 @04:51AM (#1630198)
    I'm sorry, but no. When people talk about ray tracing they are almost always talking about an algorithm similar to the one presented in the classic Turner Whitted paper of 1980, "An Improved Illumination Model for Shaded Display". This traces rays from the eye out into the scene, just like ray casting. The difference between ray casting and ray tracing is that tracing is done recursively while casting is not. By that I mean that in ray tracing, after a primary ray from your eye hits a point in the scene, you then spawn new reflected and/or refracted rays from that intersection point, and those rays spawn other rays, and so on. In ray casting you just stop with the first hit and call it good enough. Consequently, ray-traced scenes can (and usually do) have lots of interreflections between objects with mirror-like surfaces, but in Wolfenstein and Doom everything is opaque and diffuse.

    Now there is such a thing as tracing rays from the lights, but nowadays that is typically referred to as "backwards raytracing" which is confusing because physically speaking that's forwards. So some confusion is understandable.

    But techniques that use this backwards raytracing typically just do a pass with backwards tracing to deposit light in the scene, and then actually do the rendering with a more conventional raytracing pass (from the eye). Arvo was the first to use this technique I believe, in his "shadow maps" [Arvo, James: "Backward Ray Tracing" ACM Siggraph Course Notes 1986]. Jensen's photon maps are a more refined version of similar technology [paper here [mit.edu]].
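
The spawn-on-hit recursion described above is small enough to show whole. A minimal Whitted-style tracer - spheres only, one light, mirror reflection, shadow rays omitted for brevity; the scene and all names are my own toy, chosen just to expose the recursion:

    import math

    SPHERES = [  # (center, radius, diffuse level, reflectivity)
        ((0.0, 0.0, 5.0), 1.0, 0.8, 0.5),
        ((1.5, 0.5, 6.0), 1.0, 0.4, 0.0),
    ]
    LIGHT = (5.0, 5.0, 0.0)

    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def norm(v):
        l = math.sqrt(dot(v, v))
        return tuple(x / l for x in v)

    def hit_sphere(orig, d, center, r):
        # Nearest t > 0 with |orig + t*d - center| = r, else None.
        oc = sub(orig, center)
        b = 2 * dot(oc, d)
        disc = b * b - 4 * (dot(oc, oc) - r * r)
        if disc < 0:
            return None
        t = (-b - math.sqrt(disc)) / 2
        return t if t > 1e-4 else None

    def trace(orig, d, depth=0):
        best = None
        for c, r, col, refl in SPHERES:
            t = hit_sphere(orig, d, c, r)
            if t and (best is None or t < best[0]):
                best = (t, c, col, refl)
        if best is None or depth >= 3:
            return 0.1                      # background / recursion cutoff
        t, c, col, refl = best
        p = tuple(o + t * di for o, di in zip(orig, d))
        n = norm(sub(p, c))
        shade = col * max(0.0, dot(n, norm(sub(LIGHT, p))))   # diffuse term
        if refl > 0:
            # The recursion: spawn a reflected ray from the hit point.
            # A caster would have stopped at the first intersection.
            rd = sub(d, tuple(2 * dot(d, n) * x for x in n))
            shade += refl * trace(p, rd, depth + 1)
        return shade

    print(round(trace((0, 0, 0), (0, 0, 1)), 3))   # brightness along the view axis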

  • ....invent this? I guess they will two years from now.

    ~afniv
    "Man könnte froh sein, wenn die Luft so rein wäre wie das Bier"
  • But techniques that use this backwards raytracing typically just do a pass with backwards tracing to deposit light in the scene, and then actually do the rendering with a more conventional raytracing pass (from the eye)

    Not sure if this is the same as what you're talking about, but what I used to do back in my "graphics days" was: I'd trace rays outward from the eye. As they hit reflective or refractive surfaces, I'd recursively trace from there. But if they hit an opaque surface, I would trace rays from that point to each light source. If a ray was unobstructed, then the point was illuminated; if it was blocked, then it was in shadow.

    Heh, I also faked the fuzzy edges of shadows by implementing each light source as a cluster of small light sources that were very close together, but at distinctly different points. That worked great. :-) Geez, really getting off-topic here, sorry.


    ---
  • I think the comparison to analogue synthesis stemmed from the use of combinations of simple algorithms, rather than tables of coordinates, to produce a complex texture.

    An analogue synth has oscillators which produce simple tones such as sine, triangle and square waves. By combining them using simple circuits such as ring modulators, envelope generators and resonant filters, it produces complex sounds. You only need a tiny amount of information to describe all the settings on an analogue synth (a "patch").

    Early digital synths used wavetables (or samples) to produce complex sounds. This makes it easier to reproduce the sound of a real instrument (you just sample it and then play it back), but the patches are much larger.

    (To complicate matters, many digital synths now emulate analogue synths using software models.)

    Polygon-based graphics are similar to wavetable synthesis - you use a table of points to reconstruct a surface by drawing straight lines (or curves, if you have the processing power to spare) between each point and the next. 3D worlds created in this way require a lot of memory to store, or a lot of bandwidth to transmit.

    Speculative part:

    Psi seems to use combinations of simple waveforms to generate 3D worlds. I imagine this would generate random rolling terrain very nicely, but it would be hard to design a landscape "to order". I suppose you would design it using conventional 3D software, and then use Fourier analysis to extract the fundamental waveforms from the complex surfaces. Then you just send (or store) those waveforms, and the rendering engine has the much easier job of just recombining them.

    I wonder if this technology could be used to create a new generation of samplers which would sample a sound, take it apart using Fourier analysis or whatever, and work out how to reconstruct it using simple waveforms? That would be very useful for increasing the capacity of samplers (and audio CDs, portable digital music players etc.).
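
The size difference described above is dramatic even in a toy. Below, an "analogue-style" patch is four parameters (two detuned square oscillators, a decay envelope and a crude one-pole filter), while the wavetable equivalent of the same one-second sound is 44,100 stored samples. All names and parameter values are my own illustration:

    import math

    RATE = 44100

    def analog_patch(freq=110.0, detune=1.01, decay=4.0, seconds=1.0):
        # The entire "patch" is the four arguments above.
        def square(f, t):
            return 1.0 if math.sin(2 * math.pi * f * t) >= 0 else -1.0
        out, lp = [], 0.0
        for i in range(int(RATE * seconds)):
            t = i / RATE
            raw = 0.5 * (square(freq, t) + square(freq * detune, t))
            env = math.exp(-decay * t)          # amplitude envelope
            lp += 0.1 * (raw * env - lp)        # one-pole lowpass "filter"
            out.append(lp)
        return out

    samples = analog_patch()
    print(len(samples), "samples reconstructed from a 4-number patch")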
  • The author of the original Mars terrain demo has made an explanation of the algorithm used here [hornet.org].

    I'd like to see a version of this under Linux.
  • For linux demos, another site to check out is:

    http://linux.scene.org [scene.org]

    There's some 4K demos there, some quite nice.

  • What a load of Buffalo Biscuits. That thing is an idiot test, plain and simple; if that "impressed" Nintendo, then it's just another sign of their detachment from reality. I think I'll dig my Amiga 500 out of the attic and run vastly superior demos on its 7MHz processor. I'll say this for it, then: it was a nice blast from the past.
  • The article was so non-technical, I really don't know what this guy did. But haven't people been generating landscape images from iterative equations for a long time?
  • by Korat ( 100097 )
    Hysterically.
  • You know, not everything requires floating point. There is a lot of good work in real time that is accomplished in 8-bit integer format. It all depends on how much accuracy you want to lose.

    Roger
  • by j1mmy ( 43634 ) on Friday October 08, 1999 @05:43AM (#1630212) Journal
    It looks like simple voxels to me -- a rendering trick that's been around for almost a decade. For those of you who are amazed at what can be done with 74k, take a look at this [hornet.org] little program from 1994.
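
For reference, the 'voxel' landscape demos of that era are mostly heightfield column renderers: per screen column, march outward across a height map and emit a vertical sliver whenever the terrain rises above everything already drawn. A sketch of the core loop, with a procedural stand-in for the demos' stored height maps:

    import math

    def height(x, y):
        # stand-in terrain; the real demos indexed a stored byte array here
        return 20 * (math.sin(x * 0.1) * math.cos(y * 0.1) + 1)

    def render_column(cam_x, cam_y, cam_z, angle, horizon=50, screen_h=100):
        # March away from the camera; the shrinking `highest` bound is the
        # classic "y-buffer" occlusion trick.
        slivers, highest = [], screen_h
        dx, dy = math.cos(angle), math.sin(angle)
        for dist in range(1, 200):
            h = height(cam_x + dx * dist, cam_y + dy * dist)
            row = int(horizon + (cam_z - h) * 100 / dist)   # perspective divide
            if 0 <= row < highest:          # visible above what's drawn so far
                slivers.append((row, highest, dist))
                highest = row
        return slivers   # (top_row, bottom_row, depth) spans to fill

    print(len(render_column(0.0, 0.0, 40.0, 0.0)))   # slivers in one column

Per column it is one divide per depth step and no polygons at all, which is how programs like MARS.EXE stayed so tiny.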
  • Actually, what I have been working on is a neural network to "decompose" a sound into "fundamentals" (basically matching waves from a table against a given sample wave)... and then storing the wave type, length, and amplitude. Thus 128 bytes of wave could be stored in 6 bytes. The hard part is generating a table of wave types.
  • Anyone else remember doing this kind of stuff with Fractint in, like, 1992?

    And what is with this analog crap? Extremely uninformed mediot, methinks...
  • ahh.. so it isn't Java then..

    It's evil Bill's personal Java that's supposed to work only in IE..

    I won't obey though, and won't even try to run it, ignoring Bill's feeble attempts to make me run IE..
  • Ok, here are the facts I pulled from the article:

    1. This guy has a "new" way to draw 3D objects
    2. hypehypehype
    3. it's allegedly cheaper, processor-wise, than polygons
    4. hypehypehype
    5. The author doesn't know much about what's actually involved in this drawing process
    6. The results don't look all that great
    Ok, so now here's my question: Why limit yourself to what a Game Boy can do??? The author of this article stresses that the guy with this technique wants only the processing power of a Game Boy. WHY??? It looks like much, much more could be done to make these pictures prettier. Why not use the sort of processing power that is available in the real world??
    Further, the fundamental laws of physics governing light at the quantum level are not fully understood.
    This isn't true; if there's a better understood, more accurate theory -- for any physical phenomenon -- than QED is for electrodynamics, I'd like to know what it is.
  • As others have mentioned, he's talking about IFSes.

    Also, Pixar's Renderman uses procedural shaders, where the appearance is calculated dynamically, instead of using pre-generated textures (though that is also an option).

    Instead of scanning in a wooden surface and tiling that image, Renderman can use a shader which generates a wooden surface; for wood that looks different, adjust the shader's code or pass in different parameters.
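
In that spirit, here is what a procedural wood pattern looks like as code: concentric rings perturbed by lattice noise, so two parameters re-skin every plank in a scene with zero texture memory. (RenderMan shaders are written in the RenderMan Shading Language, not Python; this is just the idea, with made-up parameter names.)

    import math, random

    # Tiny value-noise lattice; production shaders use Perlin noise.
    _rng = random.Random(7)
    _lattice = [[_rng.random() for _ in range(64)] for _ in range(64)]

    def noise(x, y):
        xi, yi = int(x) % 63, int(y) % 63
        fx, fy = x - int(x), y - int(y)
        top = _lattice[yi][xi] * (1 - fx) + _lattice[yi][xi + 1] * fx
        bot = _lattice[yi + 1][xi] * (1 - fx) + _lattice[yi + 1][xi + 1] * fx
        return top * (1 - fy) + bot * fy     # bilinear blend of lattice values

    def wood(x, y, ring_freq=8.0, grain=2.0):
        # Distance from the "tree centre", wobbled by noise, taken mod 1:
        # the fractional part is the ring intensity at (x, y).
        r = math.hypot(x, y) * ring_freq + grain * noise(x * 4, y * 4)
        return r - int(r)

    print([round(wood(x / 32.0, 0.3), 2) for x in range(8)])   # one scanline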

  • I am not an expert at this, but to me what this "new" tech does sounds a lot like most other graphics methods. Don't 3D polygons work by having lots of equations done by the CPU to make a scene? And aren't most of those equations quite simple? And why use the word simple? What is simple? For us, seeing is simple, but for a computer it isn't.

    Think of it this way: we see so easily because we allocate the largest portion of our brains of any single task to it. If we assume that visualising is similar to seeing, then we must need similar processing power. I don't think there is any computer that can match the brain at overall processing power yet. What I mean is, you can only go so far in terms of clever methods. We can all see it in the CPU race between AMD and Intel: AMD may have the better chips, but Intel has the brute MHz (until recently), and Intel was winning.

    Personally I am not against this, but it isn't very impressive. I wouldn't mind cheap "real-world-like" graphics, but I am not going to jump up and down at this. I just can't believe that someone working by himself, with little help, can hope to do what others working in large companies with big R&D can't. Just look at Hotmail addresses: how many "somehandle9999"s do we need before people get the point that if you thought of it, 9999 other people will have thought of it too?
  • I had to go thru Google to find this-- it may not even be linked via his homepage-- but here's [nervana.com]a big ol' page of docs.

    I haven't looked at them yet but I also tracked down an older demo at Info-Mac that includes algorithm info for an earlier wireframe version. (Skip the useless 2Mb mov promo.)

    The Mac demo of the new version downloads as 14k but unstuffs to 4Mb (!), and it includes an interactive terrain generator, which impresses me.

    His writings explain that he started this as (what I call) a 'philosophy lab' where modelling 'mind' based on Bertrand Russell's ideas (!?????) was his main goal.

    The Info-Mac demo is wireframe but includes a bunch of little monkey-dots whose eyes you can look thru, zipping around too fast and too tiny-ly for me to have fun with (yet).

  • nice, 4k demo. taking me back a few sleepless nights.
  • by aheitner ( 3273 )
    I can't show you what's next. But I promise it's much more beautiful than this guy's lame stuff. And has some very excellent properties.

    And it was all written by 18-19 year olds.
  • Maybe what it is is a way of painting fractal graphics onto the components of the kind of CSG system described earlier in the thread. That could be sort of nice.

    (The rest of this is just spinning.)

    Or maybe the object could describe itself when you told it what angle/distance you were looking from, and what the ambient illumination was. (This would involve some kind of complex illumination object, but postulate that.) This might save a lot of work compared with ray-tracing, and though it frequently wouldn't be as good, it would also quite frequently be "good enough". Sometimes ray tracing requires more than can be done anyway, so you need to chop it off short. The trick with this one would be how to pass the "ambient illumination" object around.
  • Exactly what I was thinking. This has been around for a long time; it was used in that one Bladerunner game that flopped a couple of years back. Anyone remember that?
  • General-purpose fractal graphics compression actually works very well - Iterated Systems' FIF format gives great compression and very high quality, along with the nice fractal-based benefits of resolution independence, natural scaling and a variable resolution/compression tradeoff. As you say, fractal landscape compression (esp. for things like mountains, clouds, etc.) is a very old idea. AFAIK, if he had come up with a way of representing 3-D objects, that would be novel - not to mention necessary for anything other than backgrounds - but he doesn't seem to have...
  • A friend got me interested in the demo scene a few years back. Man, those little buggers pack in some serious eye candy. As for the POS nervana thing, not even close.

    Also, I believe www.hornet.org can hook you up with some demos.
  • Well, for the sake of argument, I'd say the theory of gravity has been better understood and more accurate over the years. Then again, no one has detected the presence of a graviton have they? Without a grand unified theory we are forced to admit the current theories governing the basic forces in the universe are approximations at best - which is what I meant by saying it is not fully understood. The fact that theoretical physics research continues is evidence of that.

    Does this matter to computer graphics? Not really... because computation power, not lack of an accurate model, is the limiting factor for all but the simplest simulations.
  • Sounds like fractal geometry to represent things. Not that exciting and requiring lots of power..
  • Constructive solid geometry, spline meshes, etc., have been used by Pixar's Renderman and other similar products for about 10 years, and Renderman definitely shades surfaces better if you use them rather than a polygonal model.

    If there is any innovation in this article at all, it's moving this technology down to the level of a video game from the non-real-time applications where it's been used for decades.

    Bruce

  • I must say, I'm not terribly impressed. I've seen better... much better. Fractal graphics have been around for years... and I've seen much more realistic-looking ones on my old Amiga. It's true that this demo is pretty small, but it's not that small considering the poor quality. Many demo competitions had a 4k division. The top-placing demos were usually about 2-3 minutes long, and very visually impressive.

    As for saying that the polygon-based stuff is always ugly... what about Final Fantasy 8? That runs on a mere PlayStation, and it's one long celebration of eye candy!

    I don't dispute that fractals can make some very pretty graphics (especially of various plants), but they just aren't as simple to process as polygons. I really don't see how you can come out ahead in terms of CPU use. Drawing a line requires only a compare and either one or two increments per pixel (using Bresenham's algorithm). There just isn't a way to generate fractals that cheaply. Maybe that's why, after all of the fractal hype in the early '90s, game designers went to texture-mapped polygons anyway.
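
For reference, the cheapness being described: in the standard integer-only form of Bresenham's line algorithm, the inner loop is a couple of compares and one or two adds per pixel, with no multiplies or divides at all:

    def bresenham(x0, y0, x1, y1):
        # Classic all-integer line rasteriser (works in any octant).
        dx, dy = abs(x1 - x0), -abs(y1 - y0)
        sx = 1 if x0 < x1 else -1
        sy = 1 if y0 < y1 else -1
        err, points = dx + dy, []
        while True:
            points.append((x0, y0))
            if x0 == x1 and y0 == y1:
                break
            e2 = 2 * err
            if e2 >= dy:
                err += dy; x0 += sx   # one add, one step
            if e2 <= dx:
                err += dx; y0 += sy   # sometimes a second add and step
        return points

    print(bresenham(0, 0, 6, 3))
    # [(0, 0), (1, 1), (2, 1), (3, 2), (4, 2), (5, 3), (6, 3)]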
  • by fperez ( 99430 ) on Friday October 08, 1999 @09:43AM (#1630252)
    Well, for the sake of argument, I'd say the theory of gravity has been better understood and more accurate over the years.

    You may mean the classical theory of gravity. No one has a clue yet as to how to get a decent (renormalizable) quantum gravity theory, short of using strings. And we're still a looong way from getting numbers out of strings, even though it looks very promising.

    OTOH, QED has been tested to insane experimental accuracy, is known to be renormalizable (to arbitrary order) and since perturbation theory converges fast (alpha=1/137), we actually can compute pretty much anything that can be experimentally tested.

    So I think it's safe to say that as far as the fundamental physics of light-matter interaction is concerned, we have a very good grasp of what's going on. Which doesn't mean we have a theory of everything: strong interactions are still poorly understood (perturbation theory doesn't work well in general and non-perturbative calculations are crazily hard), quantum gravity is still on the run, and strings as of yet contain many unresolved problems.

    Does this matter to computer graphics? Not really... because computation power, not lack of an accurate model, is the limiting factor for all but the simplest simulations.

    Actually, I can't think of any day-to-day computer graphics application where quantum effects matter anyway: classical electromagnetism gives you everything you need to compute light transmission/reflection/diffraction effects for any rendering you want. So good or bad understanding of light/matter interactions at the quantum level is just irrelevant as far as CG goes.

  • Ok, so now here's my question: Why limit yourself to what a Game Boy can do???

    Well, as a proof of concept, it is definitely worth showing that this can be done on even a relatively slow processor.

    I would argue that instead of demonstrating how efficient this is by how fast it can display simple scenes, it would be more convincing to show a scene that's more elaborate than current algorithms support on modern hardware.

    However, bear in mind that it's not exactly fair to compare Algorithm X on normal hardware to Polygons, since most people these days have EXTREMELY accelerated hardware for drawing polygons.

    However, for something as simple as the demo on this page, there's no reason to think he didn't cheat a whole lot, and just use a simplified voxel approach (we can't even change the camera's pitch or roll!)
  • Anything beyond the refresh rate of your video card is just wasted. I've seen smaller programs that could do what this one is doing, better and faster - they would easily draw 60-80 FPS with little CPU usage.
  • From the article:
    "He just passed through Silicon Valley last week demonstrating his homemade graphics engine, and everyone from the designers at Nintendo to programmers at Apple has been left in shock."

    Yeah, and after viewing the demo I could easily see it continue as:
    "...left in shock. They were heard to whisper, 'Is this guy serious?'"
  • I hope this has HRTs when it posts - it didn't in the preview window!

    /begin(find fault with others)

    Okay, Ray-Tracing. This is a pretty mediocre idea, because the number of intersections you have to do per scene is terrifically high. I had a debate with a friend a while back about whether or not the perspective-correction and linear-interpolation h/w currently on cards added up to the ability to build really fast line-tri intersection hardware. The short answer was "no, not really". The most obvious problem is the much higher number of divides that RT requires to figure out which rays are hitting (or missing) which tris. Furthermore, RT really sucks for many kinds of scenes; that's why they invented Radiosity.

    As for CSG: please. CSG is a great building technique, but a $h!tty run-time representation. Figuring out whether something intersected something else when there's an object ANDed into a NOTed region of another object is not a good way to spend your day.

    The reason everyone uses polygons is that they have a lot of nice mathematical properties, are useful for VSD and physics as well as rendering, do LoD well, can be represented in a modest amount of memory (all things considered) and are very well supported in h/w. To supplant them, a new technique would have to be insanely great, and the only technique I know of that holds such promise is Image-Space Rendering, which lets you walk through real-world photographs - but slowly, and with a lot of warping. Not mature yet, if it ever will be.

    The Psi technique is total crap, unless you really need cheapo terrain for your flight sim. Compare it to Black&White. Psi is slow, ugly, and difficult to control. And inaccurate at human-sized scale, where it really matters in game design. I don't know who was drooling over this, but they must be pretty dumb.

    /end(find fault with others)
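
On the per-ray divide count mentioned above: the standard Möller-Trumbore ray-triangle test is two cross products, four dot products and exactly one divide, and it is what would have to run millions of times per frame. A sketch:

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))

    def ray_tri(orig, d, v0, v1, v2, eps=1e-9):
        # Moller-Trumbore: barycentric inside test with a single divide.
        e1, e2 = sub(v1, v0), sub(v2, v0)
        p = cross(d, e2)
        det = dot(e1, p)
        if abs(det) < eps:               # ray parallel to the triangle
            return None
        inv = 1.0 / det                  # the one divide
        tv = sub(orig, v0)
        u = dot(tv, p) * inv
        if not 0.0 <= u <= 1.0:
            return None
        q = cross(tv, e1)
        v = dot(d, q) * inv
        if v < 0.0 or u + v > 1.0:
            return None
        t = dot(e2, q) * inv             # distance along the ray
        return t if t > eps else None

    # Unit triangle in the z=1 plane, ray straight down the z axis:
    print(ray_tri((0.1, 0.1, 0.0), (0, 0, 1), (0, 0, 1), (1, 0, 1), (0, 1, 1)))  # 1.0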
  • Ray tracing was used in John Carmack's wolfenstein - a classic example of how ray tracing can outperform traditional polygon rendering

    I believe you are referring to "ray-casting", not raytracing. They are very similar in that they both send out "rays" from the viewer's FOV to the object being viewed. But ray-casting does so in groups, resulting in blocky images. Raytracing casts rays precisely, on a per-ray basis, which requires a lot more calculation and results in a much more accurate representation of the object being viewed.

    Not a very good definition, but I hope it clarifies the differences, and why raytracing is much more math-intensive than raycasting, and why D00M and wolfy did NOT utilize this technique.

    One of the problems with raytracing is that it doesn't scale very well. Increasing the resolution of the scene being rendered requires more than just increasing the polygon count. Polygon manipulation can be compensated for mathematically; unfortunately for raytracing, increased size just results in increased complexity. There is no easy way around it. Of course I am no expert on the matter. Nonetheless, that is why it is important to develop methods of computing rays at the hardware level.
