Nvidia Releases Hardware-Accelerated Film Renderer

snowtigger writes "The day we'll be doing movie rendering in hardware has come: Nvidia today released Gelato, a hardware rendering solution for movie production with some advanced rendering features: displacement, motion blur, raytracing, flexible shading and lighting, a C++ interface for plugins and integration, plus lots of other goodies used in television and movie production. It will be nice to see how this will compete against the software rendering solutions used today. And it runs under Linux too, so we might be seeing more Linux rendering clusters in the future =)" Gelato is proprietary (and pricey), which makes me wonder: is there any Free software capable of exploiting the general computing power of modern video cards?
  • Spelling... (Score:4, Funny)

    by B4RSK ( 626870 ) on Tuesday April 20, 2004 @05:43AM (#8914523)
    Gelsto is proprietary (and pricey), which makes me wonder: is there any Free software capable of exploiting the general computing power of modern video cards?

    Gelato seems to be correct...
  • by rjw57 ( 532004 ) * <richwareham@nOSPaM.users.sourceforge.net> on Tuesday April 20, 2004 @05:43AM (#8914526) Homepage Journal
    This is a reversion of the norm :) [from the page linked to in the story]:

    Operating System

    * RedHat Linux 7.2 or higher
    * Windows XP (coming soon)
    • It does not happen often that a hardware manufacturer has Linux support before it has Windows support. At least I have never seen it before.
      • by PlatinumInitiate ( 768660 ) on Tuesday April 20, 2004 @06:24AM (#8914669)
        Not only with hardware manufacturers/drivers, but also general software. ISVs are getting annoyed by Microsoft's dominance of the desktop market, and through that, their (heavy) influence on desktop software. It's not inconceivable that in a decade, Microsoft could control every aspect of the standard desktop PC and desktop software market. At the moment some of the only really strong ISVs in their respective areas are Adobe, Corel, Intuit, Macromedia, Oracle, and a few specialized companies. Expect a big ISV push towards a "neutral" platform, like Linux or FreeBSD. Windows is too big to stop supporting, but ISVs would be smart to at least try to carve out a suitable alternative and avoid being completely dominated by Microsoft. All that most ISVs might be able to hope for in a decade is being bought out by Microsoft or making deals with Microsoft, if things don't go the way of creating a vendor-neutral platform.
        • Who are you?

          How do you know ISVs are getting annoyed? Do you go to lunch with ISVs every other day?

          Not only are you crudely generalizing, I think your point is actually not sound at all. You think Adobe cares about Microsoft dominating?

          The much more plausible explanation is that they (nVidia) already had the drivers/software for the architecture on Linux (read previous posts, they bought the card).

          Yet another plausible explanation is that drivers are more difficult to implement in Windows (becau

          • by PlatinumInitiate ( 768660 ) on Tuesday April 20, 2004 @10:03AM (#8916092)

            How do you know ISVs are getting annoyed? Do you go to lunch with ISVs every other day?

            No, but working for a medium-sized ISV that deals with Microsoft (we buy bulk embedded XP licenses for use in custom gaming machines), I can tell you a few things about how Microsoft deals with customers. They have actually tried to offer us better deals if we discontinued our Linux solutions and marginalized our dealings with our Russian partners, who produce hardware and software for use with Mandrake Linux 9.x in gaming solutions. (Sound impossible? Think again.) I can only imagine how much more underhanded Microsoft are when dealing with bigger ISVs.

            Not only are you crudely generalizing, I think your point is actually not sound at all. You think Adobe cares about Microsoft dominating?

            I'm sure Netscape and Sun didn't care either, until Microsoft took them out of the market. You are really insulting the intelligence of the Adobe executives if you think that they haven't considered this possibility or what they could do to avoid something similar happening.

      • Re:I like this... (Score:3, Informative)

        by Minna Kirai ( 624281 )
        At least I have never seen it before.

        Look at the AMD 64 [devx.com] ("Opteron", etc) CPU. Linux support is here, but native versions of Microsoft Windows have yet to be released.
    • by Anonymous Coward
      Windows XP's Achilles Heel Apparently Revealed
    • Naturally. If the product is intended to be cross-platform right from the beginning, then the developers would prefer to work on Linux and port it to Windows rather than the other way round, so you can expect the Linux version to be released slightly earlier. What is perhaps a little surprising is that they announced it before the Windows version was ready.
    • Nvidia couldn't be a little pissed they're out of the XBOX2, could they?
    • Re:I like this... (Score:3, Insightful)

      by ameoba ( 173803 )
      It makes sense, really... If you're building an app that's intended to be used by clusters, why would you write it for XP? Having to spend an extra $100 per node really starts adding up when you've got several hundred or thousands of machines...
  • by Anonymous Coward
    Sadly, the hardware accelerations that consumer 3D graphics cards do aren't useful for the high-quality renderings that are needed for film and television. The needs of games are just different, partially because of the need to render in realtime. So I doubt whether there's much scope for free software to make use of them for that purpose...
    • Perhaps you should read the Nvidia FAQ? This topic is covered. From what I can tell, they don't use the GPU in the traditional way, they just use it as a co-processor.
    • by mcbridematt ( 544099 ) on Tuesday April 20, 2004 @06:00AM (#8914591) Homepage Journal
      But NVIDIA's Quadro lineup *ARE* PCB-hacked consumer cards. Some PCI ID (or BIOS for the NV3x cards) hacking can get you a Quadro out of a GeForce easily, minus the extra video memory present on the Quadros. I've done this heaps of times with my GeForce4 Ti 4200 8x (to a Quadro 780 XGL and even a 980 XGL) and I believe people have done it with the NV3x/FX cards as well.

      This film renderer is different. It uses the GPU and CPU together as powerful floating point processors (not sure if gelato does anything more than that).
      • How do you know this? Did you perform benchmarks comparing it to a real Quadro?

        A couple of years ago I got a GeForce4 4800 and a Quadro4 900 XGL. I performed the required resistor mod and flashed the GeForce4 with the Quadro4's BIOS.

        Sure, the GeForce4 got recognised as a Quadro4 900 XGL in the Windows display control panel, but when you ran benchmarks like SPECViewPerf it was obvious the modded GeForce4 did NOT perform like a real Quadro4 900 XGL. Capabilities like the HW-accelerated clip planes did not see
    • by Oscaro ( 153645 ) on Tuesday April 20, 2004 @06:01AM (#8914596) Homepage
      This is not really correct. The graphics cards Gelato uses are consumer hardware. This doesn't mean that the image is generated directly by the card! The 3D hardware is used as a specialized, fast, parallel calculation unit, especially for geometric calculations (essentially matrix-by-vertex multiplication) and other stuff. This (of course) means that the rendering is NOT done in realtime.
    • by WARM3CH ( 662028 ) on Tuesday April 20, 2004 @06:04AM (#8914598)
      Actually, there have been reports of using such hardware to produce results similar to the high-end, software-based methods like those used in films. The trick is to break the job (typically the complex RenderMan shaders) into many passes, and feed them to the graphics card to process. By many passes, I mean 100~200 passes. The outcome will be like rendering a frame in a few seconds (we're not talking about real-time rendering here), which is MUCH faster than the software-based approaches. The limit in the past was that the color representation inside the GPUs used a small number of bits per channel, and with lots of passes on the data, round-off errors would degrade the quality of the results. But now nVidia supports 32-bit floating-point representation for each color channel (i.e. 128 bits per pixel for RGBA!) and this brings back the idea of using the GPU with many passes to complete the job. Please note that in the film and TV business, we're talking of large clusters of machines and weeks of rendering, and bringing it down to days with a smaller number of machines is very big progress. (A small numeric sketch of the precision point follows below.)
      • Exactly, floating-point colour channels provide a larger dynamic range, which prevents the banding and saturation you see when doing multiple passes with, say, 8-bits/channel colour. This has been the crucial development to enable this. Conditionals and looping for shaders weren't generally required, as Renderman-style languages can be decomposed into a series of standard OpenGL operations. (I remember researching all of this for an essay I wrote on this sort of stuff at uni a couple of years ago - interestin
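
As a rough illustration of the precision point made in the two comments above, here is a standalone C++ sketch. The pass count and per-pass increment are made-up numbers chosen only for illustration (this is not Gelato code): the same quantity is accumulated across ~150 passes, once through an 8-bit channel that gets re-quantized every pass, and once through a 32-bit float channel.

```cpp
// Standalone illustration: multi-pass accumulation with 8-bit vs. float channels.
// The pass count and increment are arbitrary; the point is the re-quantization.
#include <cmath>
#include <cstdio>

int main() {
    const int passes = 150;            // a shader decomposed into ~100-200 passes
    const double perPass = 0.004321;   // contribution added by each pass (made up)

    double exact = 0.0;                // ideal accumulation
    float  fp32  = 0.0f;               // 32-bit float channel
    double q8    = 0.0;                // 8-bit channel, re-quantized every pass

    for (int i = 0; i < passes; ++i) {
        exact += perPass;
        fp32  += static_cast<float>(perPass);
        q8     = std::round((q8 + perPass) * 255.0) / 255.0;  // snap to 8 bits
    }

    std::printf("exact: %.6f   fp32: %.6f   8-bit: %.6f\n", exact, fp32, q8);
    // The 8-bit result drifts noticeably (the banding/colour shifts described
    // above); the float channel stays within ordinary rounding error.
    return 0;
}
```

With these made-up numbers the 8-bit accumulation ends up several percent off after 150 passes, while the float result is essentially exact - which is the drift that per-channel FP32 storage removes.
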
    • by Anonymous Coward
      Pinnacle have been doing graphics card assisted effects for a long time under Adobe Premiere, see

      http://www.pinnaclesys.com/ProductPage_n.asp?Product_ID=19&Langue_ID=7

      for details
    • by Anonymous Coward
      That was true a couple of years ago, however Geforce 6800 and Quadro FX 4000 have support for:

      - 32 bit per component precision in the pixel shader
      - dynamic branches in the pixel and vertex shaders
      - unlimited instruction count for pixel and vertex shader programs

      These features are very useful; they make it possible to render frames on the GPU at full quality. Considering that GPUs have a HUGE amount of processing power, this will make the rendering much faster.

      For example, above mentioned new GPUs from NVid
  • by Anonymous Coward on Tuesday April 20, 2004 @05:45AM (#8914538)
    The rumor on the street is that a Soho-based SFX house tried this when they had a deadline that standard software rendering couldn't meet.

    So they wrote an engine to do renderman->OpenGL and ran it across many boxes.

    Problem was that they got random rendering artefacts by rendering on different cards - different colors etc, and couldn't figure out why.

    When working on one box they got controlled results, but only had the power of one renderer.
    • by XMunkki ( 533952 ) on Tuesday April 20, 2004 @06:13AM (#8914630) Homepage
      Problem was that they got random rendering artefacts by rendering on different cards - different colors etc, and couldn't figure out why.

      I have seen this problem in software renderers as well. The problem seemed to be that part of the rendering farm was running on different processors (some were Intel, some AMD, and many different speeds and revs) and one of them supposedly had a little difficulty with high-precision floating point and computed the images with a greenish tone. Took over a week to figure this one out.
      • by DrSkwid ( 118965 ) on Tuesday April 20, 2004 @07:01AM (#8914777) Journal
        I used to see that with 3ds4 as well when I was rendering this [couk.com]. One was a Pentium and one was a Pentium Pro.

        Ah those were the days. We were on a deadline and rendered it over Christmas. After four hours the disks would be full and it would be time to transfer it to the DPS-PVR. I spent six days where I couldn't leave the room for more than four hours, sleep included. Was pretty wild !

        VH1 viewers voted it second best CGI video of all time, behind Gabriel's Sledgehammer so I guess it was worth it!

    • It's certainly possible that different hardware or even different drivers on the different machines doing the rendering can create subtle (or not-so-subtle) differences in each resulting frame, but standardising the hardware and drivers across machines should solve that completely.
      • It's certainly possible that different hardware or even different drivers on the different machines doing the rendering can create subtle (or not-so-subtle) differences in each resulting frame, but standardising the hardware and drivers across machines should solve that completely.

        Yes, but it's not a good idea to tie one's business to a particular hardware company let alone one product, and one driver.

        Also, there are many people who make the money decisions that will balk at making particular changes
    • My question is, if a GPU on an AGP card can send the render results back to the system, what is the point of PCI-Express? That was supposed to be one of the "enhancements" of PCI-E. PCI-X was said to have the same limitations as PCI and AGP.
  • Fab for machinima (Score:5, Informative)

    by Paul Crowley ( 837 ) on Tuesday April 20, 2004 @05:46AM (#8914540) Homepage Journal
    For some possible applications, check out machinima.com [machinima.com] - film-making in real time using game engines.
    • by dnoyeb ( 547705 )
      I think you make the best point on the board today.

      The opening quote of the article poster is ignorant. Movie rendering has been done in hardware forever. He seems to be mixing up doing rendering in hardware with rendering on the fly in a video card.

      What we have here is a slight mix of the two, but by no means anything new on the market. It's only letting you use your Quadro, if you already have one, for movie rendering acceleration. I certainly would not buy one for this purpose. I imagine its still in
      • RTFA.

        You're the ignorant one. Movie rendering has been done on the CPU forever. This for the first time is doing final movie rendering on the GPU.

        This is definitely something new on the market. Point me to another product that does final movie rendering with hardware acceleration provided by the GPU, and I'll eat my hat.

        I imagine its still incredibly more profitable to use a CPU than GPU.

        Why? Because it's faster? Bzzzt. That's the whole point. Take a look at the transistor counts for the latest
  • Eat some gelato (Score:2, Informative)

    by SpikyTux ( 524666 )
    Gelato (Italian) == Ice cream
  • by Noryungi ( 70322 ) on Tuesday April 20, 2004 @05:48AM (#8914549) Homepage Journal
    Is there any Free software capable of exploiting the general computing power of modern video cards?

    Well, since they released "a C++ interface for plugins and integration" for Gelato (ice cream in Italian, btw), this probably means that free software can (and, eventually, will) support all these high-end functions... or am I completely wrong?

    For instance, just imagine Blender with a Gelato plug-in for rendering... hmmmm... Now I understand why they named it "Gelato"...
  • Nvidia releases Hardware-Accelerated video renderer?

  • by rexguo ( 555504 ) on Tuesday April 20, 2004 @05:54AM (#8914573) Homepage
    The AGP bus has asymmetrical bandwidth. Upstream to the video card is like 10x faster than downstream to the CPU. So you can dump tons of data to the GPU but you can't get the data back for further processing fast enough, which defeats the purpose.
    • Not if the purpose is to output to a recording or viewing device, but they're probably planning to use it with PCI-X anyway.
    • by snakeshands ( 602435 ) on Tuesday April 20, 2004 @06:09AM (#8914617) Homepage

      The purpose might mostly be to show people why they need to run out
      and get PCI Express hardware; it completely addresses the asymmetry
      issue.

      I'm guessing the main reason Gelato is spec'd to work on their
      current AGP kit is to encourage the appearance of really impressive
      benchmarks showing how much better performance is with PCI Express.

      They have a good idea, and they're rolling it out at a good time,
      I think.

      Some folks were trying to do stuff like this with PS2 game consoles,
      but I guess now they'll have more accessible toys to play with.
      • You are missing the point that nVidia will be using a bridge on its first PCI Express setup. The chips will basically talk to the bridge in AGP16x and will suffer from the same asymmetry problems today's AGP cards suffer from.
        Only the second generation PCI Express cards from nVidia will be native solutions and will use the bridge the other way around (to use a PCI Express chip inside an AGP system).
    • I believe the problem is not with the AGP bus, but rather with the GPUs that are NOT designed in the first place to transfer anything back to memory. In normal 3D applications, you just feed the graphics card with all sorts of data, like the textures, geometry, shaders... and the result goes out through the VGA connector! You don't need to give it back to the CPU or the memory. The GPU and the memory architecture of the graphics card are simply designed to receive the data with the highest speed from the CPU.
    • If it's I/O bound, yes.

      However if the GPU can be left to crunch for most of the time and return say a row of pixels at a time I doubt the 1/10th speed of the AGP bus downstream would be a big problem. For complex scenes the input (textures, geometry, shaders) may well exceed the output (pixels) in terms of data.

    • Hello? Digital out? You can plug it into something other than an LCD monitor.
    • by Namarrgon ( 105036 ) on Tuesday April 20, 2004 @06:54AM (#8914759) Homepage
      You're right that the AGP port is asymmetric, but this is unlikely to be a bottleneck if they can do enough of the processing on the card.

      For 3D rendering, especially non-realtime cinematic rendering, you have large source datasets - LOTS of geometry, huge textures, complex shaders - but a relatively small result. You also generally take long enough to render (seconds or even minutes, rather than fractions of a second) that the readback speed is not so much an issue.

      Upload to the card is plenty fast enough (theoretical 2 GB/s, but achieved bandwidth is usually a lot less) to feed it the source data, if you're doing something intensive like global illumination (which will take a lot more time to render than the upload time). Readback speed (around 150 MB/s) is indeed a lot slower, but when your result is only e.g. 2048x1536x64 (FP16 OpenEXR format, 24 MB per image), you can typically read that back in 1/6 of a second. Not to say PCIe won't help, of course, in both cases. (These numbers are worked through in the short sketch below.)

      Readback is more of an issue if you can't do a required processing stage on the GPU, and you have to retrieve the partially-complete image from the GPU, work on it, then send it back for more GPU processing etc, but with fairly generalised 32 bit float processing, you can usually get away with just using a different algorithm, even if it's less efficient, and keep it on the card.

      Another issue might be running out of onboard RAM, but in most cases you can just dump source data instead & upload it again later.
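
To put rough numbers on the readback argument in the comment above, here is a small standalone C++ back-of-the-envelope calculation. The figures (2048x1536 FP16 RGBA frames, ~150 MB/s readback, a theoretical 2 GB/s upload) are taken as assumptions from that comment, not measurements.

```cpp
// Back-of-the-envelope: AGP readback cost for one FP16 RGBA frame,
// using the rates assumed in the comment above.
#include <cstdio>

int main() {
    const double width = 2048, height = 1536;
    const double bytesPerPixel = 8.0;               // FP16 RGBA = 4 * 16 bits
    const double frameMB = width * height * bytesPerPixel / (1024.0 * 1024.0);
    const double readbackMBps = 150.0;              // rough AGP downstream rate
    const double uploadMBps = 2048.0;               // theoretical 2 GB/s upstream

    std::printf("frame size:    %.1f MB\n", frameMB);                  // ~24 MB
    std::printf("readback time: %.2f s\n", frameMB / readbackMBps);    // ~0.16 s
    std::printf("upload time:   %.3f s\n", frameMB / uploadMBps);
    // Against render times of seconds to minutes per frame, a ~1/6 s readback
    // is noise; the asymmetry only hurts if intermediate results must bounce
    // between CPU and GPU many times per frame.
    return 0;
}
```
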

    • So you can dump tons of data to the GPU but you can't get the data back for further processing fast enough, which defeats the purpose.

      Fast enough for what exactly? What purpose does that defeat?
      Compared to the cost of (eg) sending a frame across a LAN, the time taken to pull a frame across the bus would be utterly, utterly insignificant.
  • Teh horror !!! (Score:5, Insightful)

    by Anonymous Coward on Tuesday April 20, 2004 @05:54AM (#8914574)
    "Gelsto is proprietary (and pricey)"

    A company that wants to be paid for their work, weird!

    You will see more, a lot more, of this for the Linux platform in the near future.

    Software may be released with source code, but no way will it be released under the GPL; most ISVs can't make a living releasing their work under the GPL.

    And please, the "but you can provide consulting services" argument is not valid; it doesn't work that way in the real world.
    • Re:Teh horror !!! (Score:2, Insightful)

      by Cobron ( 712518 )
      Still I think it's a good idea to mention if software for Linux is proprietary.
      That just saved me the trouble of clicking the link and checking it out ;) .

      ps: I recently visited a project trying to "harness the power of GPUs". I think that project was something like seti/folding/ud/... but tried to have all the calculations be made by the GPU of your 3D card.
      If someone knows what I'm talking about: please post it, I can't find it anymore ;)
      • Re:Teh horror !!! (Score:3, Interesting)

        by Hast ( 24833 )
        Probably something you can find on the General Purpose GPU [gpgpu.org] site.

        I've toyed with shaders some and implemented a system for image processing on GPUs. Quite a lot of fun really, though we didn't do any comparisons with CPU to see how much faster it was. (That project isn't published anywhere though.)

    • If I had mod points I'd mod you up. I'm getting tired of the slashdot mentality that everything has to be free. If a company makes a professional grade product, and there is demand for that product, they should be able to be rewarded for their efforts.

      • I'm getting tired of the slashdot mentality that everything has to be free.

        If you haven't contributed financially to this blog then you're a serious hypocrite - aren't you? (-:<

        Free-as-in-beer isn't absolutely necessary, but it does solve an awful lot of "parking meter change" style problems, and it's a highly viable services leader.

    • So why should Nvidia benefit from Linux, without some reciprocal giving? Hardware programming specs would be enough of a gift.

  • I wonder what market segment nVidia is gunning for. Are they after some of Discreet's market share, or trying to offer a hardware solution that will beat the crap out of Adobe After Effects?

    It would be really cool to have a hardware solution with Combustion like features for the price of After Effects.

    • Re:BURN!!!!!! (Score:3, Interesting)

      by Hast ( 24833 )
      Play around some with pixel/vertex shaders, they are quite easy to get the hang of and plenty powerful. (Even if you don't have the latest and greatest gfx cards.)

      Could make a nice addition to GIMP (if there isn't one already).
  • 'pricey' (Score:5, Insightful)

    by neurosis101 ( 692250 ) on Tuesday April 20, 2004 @05:59AM (#8914589)
    Um... depends on what you're looking for/expect. This isn't intended for you to buy and use at home. This is more likely for smaller developers (big developers usually write their own... think Pixar). Professional grade equipment is all expensive. The first common digital nonlinear editor was the Casablanca, which with an 8 gig SCSI drive ran close to a grand when it was released. This was just a single unit.

    I bet the type of people who buy this are big-time architects who have a few machines set up to do renders for clients, perhaps want to do some additional effects for promo/confidence value, and likely already have people running that type of hardware.

    Then again all those Quadro users could be CAD people and they've got no audience. =)

    • I work for a smaller graphics house that is part of an architecture firm and does mostly (but not all) architecture work.

      This product is competing against other rendering engines like MentalRay, Vray, Renderman, etc. And at $2750 per license it's definitely not for smaller developers or architects. There are plenty of other rendering engines out there that are significantly cheaper and don't require a video card that costs as much as an entire x86 render node.
      • Certainly true - more than Entropy's $1500 (when they sold it), more than many others, but still cheaper than PRMan's $3500 + $700/yr.

        I think the point is not that it can render just like other engines, but that it can do so at a far greater speed (with a lot more flexibility and features than the PURE card). That would indeed be worth the money to all but the smallest studios - much faster feedback at full quality is an artist's dream, quite apart from the (more expensive) option of using it to accelerat

        • Perhaps you could even use a render box with a bunch of PCI-X cards in it (not sure if PCI-X allows that, I sure hope it does). Give it a few months and the current top of the line cards will have halved in price and you can actually put together such a render machine for a reasonable amount of money.

          Then you naturally have to build a Beowulf cluster of those. ;-)
          • There are no PCI-X-based GPUs I'm aware of.

            PCI Express is a possibility, if you can find a motherboard with more than one x16 slot (or a GPU that fits an ordinary x1 slot). Doubt you will for some time, though.

            And you wouldn't have to use top-of-the-line cards, either. Something mid-range, with more bang for the buck would do fine.

            But yeah, one day. Great to see how the demand for better games has resulted in cool hw/sw like this flowing on to my own industry :-) Now we just need to figure out how th

    • Re:'pricey' (Score:3, Interesting)

      by RupW ( 515653 ) *
      Professional grade equipment is all expensive.

      No, you can get raytracing hardware for less than the software and a Quadro FX would cost you.

      For example, there's the ART PURE P1800 card [artvps.com] which is a purpose-built raytracing accelerator. It's a mature product with an excellent featureset, speaks renderman and has good integration into all the usual 3d packages. It's generally acknowledged as a very fast piece of kit with excellent image quality, and plenty of quality/speed trade-off options. And if you've a
    • Then again all those Quadro users could be CAD people and they've got no audience. =)

      Not just CAD - I do server-side Java programming, and we've all recently been bought new PCs. The spec we went for included a Quadro FX 500; don't ask me why, it just did... (it was that, or a similar machine with a GeForce - I didn't make the choice)
  • by attaka ( 739833 ) on Tuesday April 20, 2004 @06:06AM (#8914606) Homepage
    I have been reading interesting stuff about this lately. Take a look at this Stanford project: BrookGPU [stanford.edu]

    This might also be interesting: GPGPU [gpgpu.org] /Arvid

  • Linux software (Score:5, Informative)

    by HenchmenResources ( 763890 ) on Tuesday April 20, 2004 @06:06AM (#8914607)
    Is there any Free software capable of exploiting the general computing power of modern video cards?

    Take a look at the Jahshaka [jashaka.com] project. It is a real-time video editing suite, and the designers have been working with, and have supposedly been getting support from, Nvidia, so they may already have had access, and certainly will have access, to these video cards. I can't imagine them not taking advantage of this technology.

    The other nice thing is if memory serves me correctly this program is being designed to work on Windows, Linux and OS X, so good news all around.

  • by PastaAnta ( 513349 ) on Tuesday April 20, 2004 @06:12AM (#8914628)
    is there any Free software capable of exploiting the general computing power of modern video cards?

    A quick Googling revealed the following:
    - BrookGPU [stanford.edu]
    - GPGPU [gpgpu.org]
  • ExLuna, take 2 :) (Score:2, Interesting)

    by Anonymous Coward
    Seems like the ex-Exluna staff (bought by NVidia) is going to kick PRMan's a$$ at the hardware level: they tried it at the software level with Entropy, but got sued into oblivion by Pixar, so now it's time for revenge?
  • M4 open GL VJtool. (Score:3, Interesting)

    by kop ( 122772 ) on Tuesday April 20, 2004 @06:21AM (#8914660)
    M4 [captainvideo.nl] is a free-as-in-beer movie player/VJ tool that uses the power of OpenGL to manipulate movies, images and text.
  • Software, like much technology, follows a classic cycle from rare/expensive to common/cheap as the knowledge and means required to build it get cheaper.

    "Moore's Law" is simply the application of this general law to hardware. But it applies also to software.

    Free software is an expression of this cycle: at the point where the individual price paid by a group of developers to collaborate on a project falls below some amount (which is some function of a commercial license cost), they will naturally tend to produce a free version.

    This is my theory, anyhow.

    We can use this theory to predict where and how free software will be developed: there must be a market (i.e. enough developers who need it to also make it) and the technology required to build it must be itself very cheap (what I'd call 'zero-price').

    History is full of examples of this: every large scale free software domain is backed by technologies and tools that themselves have fallen into the zero-price domain.

    Thus we can ask: what technology is needed to build products like Gelato, and how close is this coming to the zero-price domain?

    Incidentally, a corollary of this theory is that _all_ software domains will eventually fall into the zero-price domain.

    And a second corollary is that this process can be mapped and predicted to some extent.

  • by CdBee ( 742846 ) on Tuesday April 20, 2004 @06:38AM (#8914709)
    is there any Free software capable of exploiting the general computing power of modern video cards?

    I expect that once it suddenly becomes clear that the GPU in a modern video card has serious processing power, someone will release a version of the SETI@Home [berkeley.edu] client which can use the rendering engine as a processor. Bearing in mind that most computers use their GPUs for a very small percentage of their logged-in life, I suspect there is real potential for using them for analysis in distributed computing projects.
  • Absolutely (Score:2, Informative)

    by TheFr00n ( 643304 )

    Check out www.jahshaka.com. It's an open source video compositing / FX package that leverages the 3D accelerator chip on your graphics card to do incredible things. This is one to watch, it's definitely going places.

    You can download binaries for Linux and Windows (and Mac), and source tarballs are available for the savvy.

    I know, it's not strictly a "renderer", but it employs many of the functions of a renderer to create realtime effects and transitions.

  • Little value... (Score:3, Insightful)

    by winchester ( 265873 ) on Tuesday April 20, 2004 @06:57AM (#8914767)
    Almost every FX house worth its salt in the CG business uses Pixar's Renderman on UNIX or Linux machines. The reasons behind this choice are very simple.

    Renderman is proven technology and has been so since the early '90s. Renderman is well known, its results are predictable and it is a fast renderer. Also, current production pipelines are optimised for Renderman.
    UNIX and Linux are quite good when it comes to distributed environments (can anyone say Render Farm?) and handle large file sizes well (Think a 2k by 2k image file, large RIB files).
    And last but not least, Renderman is available with a source code license.

    Hardware accelerated film rendering is in essence nothing but processor operations, some memory to hold objects and some I/O stuff to get the source files and output the film images. Please explain to me why a dedicated rendering device from NVidia would be any better than your average UNIX or Linux machine? Correct, there aren't any advantages, only disadvantages. (More expensive, proprietary hardware, unproven etc.)
    • Please explain to me why a dedicated rendering device from NVidia would be any better than your average UNIX or Linux machine?

      Why do you think 3D hardware exists at all, when all it's doing is a load of integer maths? Surely a Linux machine is capable of adding numbers together, right? Obviously, dedicated hardware is faster, sh*tloads faster. Durr. The benefits to the artist's equivalent of the compile-edit-debug cycle are fairly obvious here, and worth rejigging the production pipeline to accommodate.
    • by Namarrgon ( 105036 ) on Tuesday April 20, 2004 @09:17AM (#8915540) Homepage
      Almost every FX house? I don't think so.

      PRMan is a fine product, but it has its limitations, as well as its price. There are numerous competitors, many of which use the same Renderman interface but offer more speed and/or more features at a lower price (BMRT and Entropy are[were] notable, and relevant, until Pixar squashed them with the threat of an expensive court case). Brazil, AIR, etc - these RIB-based renderers drop into the same place in the workflow.

      Please explain to me why a dedicated rendering device from NVidia would be any better than your average UNIX or Linux machine?

      Only if you explain why your average UNIX or Linux machine is better than a Commodore 64 or a PDA, which is also "in essence nothing but processor operations" etc :-) If you listed SPEED in there, you're on the right track.

      A modern GPU has far more floating-point hardware than any general-purpose CPU, and it's all geared towards the process of rendering pixels. For certain tasks, one of those expensive dedicated rendering devices from nVidia could be better than FIFTY of your "average" UNIX or Linux machines! Is that enough of an advantage to consider?

      Dang, I went and fed the troll, didn't I...

    • Re:Little value... (Score:4, Informative)

      by Zak3056 ( 69287 ) on Tuesday April 20, 2004 @09:18AM (#8915551) Journal
      Correct, there aren't any advantages, only disadvantages. (More expensive, proprietary hardware, unproven etc.)

      And, apparently, orders of magnitude faster.

      Personally, I'd put that rather firmly into the advantage column, and for a number of reasons. You could either render your movie with a smaller farm (always a plus) or you could render even more complex scenes in the same time period--which is probably what most people would use this technology for. On the commentary track of Monsters Inc, the guys from Pixar note that despite having MUCH faster hardware (and a lot more of it), the average time to render a single frame of Monsters Inc was just as long as a single frame of Toy Story. Why? Because the frames were FAR more complex.

      I think this is a Good Thing(tm) at least for the people who have the imagination to use it.

    • 1) Renderman is anything but fast.

      2) Renderman is by no means alone in the FX world. MentalRay and Brazil most immediately come to mind.

      3) As to UNIX and Linux being quite good in a distributed environment... well duh... so are Windows and Mac boxes. As to file sizes... with the limit of 4 GB on 32-bit boxes, that's a pretty damned big file, and that's the active-in-memory limit. As for files on disk, the limit is more in terabytes. As to renderfarms, almost every renderer under the sun handles this the same
  • by Goeland86 ( 741690 ) <`goeland86' `at' `gmail.com'> on Tuesday April 20, 2004 @07:17AM (#8914813) Homepage
    I think that indeed there is free software to do movies and rendered animations using raytracing. First, Cinelerra can use a Linux cluster for movie rendering. Second, there's a whole bunch of modellers/raytracers out there that perform very well: Povray is the oldest and most advanced, and can run on a PVM cluster; yafray is relatively recent and can use an openMosix cluster for networked rendering; Blender now integrates a raytracer AND exports to yafray. Those are the four programs I know of and use, but there are more, I just haven't looked for more. So, yes, there is free software for movie rendering already!
    • Re: (Score:3, Interesting)

      Comment removed based on user account deletion
        Repeat after me: studios don't use FULL-SCENE raytracing, but they use raytracing for certain things where raycasting can't do a good approximation. Hollow Man is one movie where raytracing was used. They used PRMan for ordinary rendering, and then BMRT was called upon for the raytracing.

        And the major studios want: speed, quality and a good clean API (so they can add their own stuff too)
      • So, you're saying that Blue Sky Studios [blueskystudios.com] isn't a professional studio?

        In general, you're right - nobody does ray tracing for final renders, but Blue Sky is the exception that proves the rule.
  • math coprocssor (Score:3, Insightful)

    by PineGreen ( 446635 ) on Tuesday April 20, 2004 @07:22AM (#8914827) Homepage
    Instead of just using the native 3D engine in the GPU, as done in games, Gelato also uses the hardware as a second floating point processor.
    Does this mean that I could eventually use my GeForce to do things like matrix inversion for me?
  • ART, OGL-assisted, now Gelato. Sure, there is a place, but how do I stick an FX card into my several hundred 1U racks, either physically or financially? Have you seen the size of these cards anyway? Sure, some vendors (mental images) are leveraging GPU power, and have done the same with OGL for some time, but unless the GPU calls are handled by calls to the renderer, so you hide behind a consistent API, it's a waste of your hard-earned time getting your pipeline into shape in the first place. Long live the GPU, but I do
  • by agby ( 303294 ) on Tuesday April 20, 2004 @07:27AM (#8914849)
    I was under the impression that it's hard to use a video card for general computing tasks because of the way that AGP is designed. It's really good at shunting massive amounts of data into the card (textures, geometry, lighting, etc) but terrible at getting a good data rate back into the computer. They're designed to take a load of data, process it and push the output back to the screen, not the processor. This is the major reason, IMHO.
  • If anyone's interested, the dino on the http://film.nvidia.com/page/gelato.html page was one of Entropy's flagship images. Entropy was a pay-to-play renderer made under the Renderman spec by the guy who wrote BMRT. Pixar sued the company that made both of the Renderman-compliant renderers, and basically forced them into business with Nvidia, who quickly snatched up the company and paid off Pixar. Nvidia had been trying to come up with a hardware shader language like that of Renderman, and thusly ca
  • Admittedly it's not exactly the same thing as NVIDIA's solution, but the main component of breaking big movie-quality shaders into multiple passes is in ATI's Ashli (http://www.ati.com/developer/ashli.html). The big plus is that instead of costing thousands of dollars it's free. Also, I noticed everyone is saying AGP readbacks make this sort of thing useless. The fact is that most of the scenes rendered will take seconds to hours on the graphics card (vs. minutes to days on a CPU). The slow AGP reads aren't
    • I have mod points, and I really want to do a little bit of smacking down, but I'll just go for correcting instead. I phear the metamods, yo!

      ASHLI is *not* a renderer. It isn't anywhere near doing what Gelato does. Gelato takes a scene file and gives you a picture. It does it very nicely, using motion blur, programmable shading, and all sorts of fun stuff like that. It is written by the ex-Exluna boys. (Larry Gritz, Matt Pharr, Craig Kolb -- three mofos who know their shizzle.)

      ASHLI takes a rende
  • Gelato is a $2,750 software package. It is intended to be used with an Nvidia Quadro FX 4000 workstation video card. The video card has not hit the market yet, but the Quadro FX 3000 goes for $1,300. Which brings the total cost for the package to between $4000 and $5000 per machine.
  • and someone has finally invented the Toaster.
  • What gets me excited is this line:

    Key to this doctrine of no compromises is the nature of how Gelato uses the NVIDIA Quadro FX GPU. Instead of just using the native 3D engine in the GPU, as done in games, Gelato also uses the hardware as a second floating point processor.

    WOW. That would be a fast FPU, I'm supposing. How fast can it sieve?

    Pan
  • by Ungrounded Lightning ( 62228 ) on Tuesday April 20, 2004 @06:29PM (#8922950) Journal
    [...] advanced rendering features: displacement, motion blur, raytracing, flexible shading and lighting, [...]

    That sounds like an old Siggraph presentation I saw a decade or two ago, back when I used to go to Siggraph. Lucasfilm, I think. (The fine sample picture in the article showed a motion-blurred image of a set of pool balls in motion.)

    When rendering an image using raytracing, there are several effects that are achieved by similar over-rendering processes, i.e. you ray-trace several times, varying a parameter:

    - Depth-of-field (use different points on the iris of the "camera", blurring things at different distances from the "focal plane".)

    - Diffuse shadows (use different points on the diffuse light source(s) when computing the illumination of a point.)

    - Motion blur (use different positions for the objects and "camera", evenly {or randomly} distributed along their paths during the "exposure" - ideally pick the positions of the whole set of objects by picking several intermediate times, rather than picking the position of each object separately, to avoid artifacts of improper position combinations.)

    - Anti-aliased edges. (Pick different points in the pixel when computing whether you hit or missed the object or which color patch of its texture you hit.)

    As I recall there were about five effects that worked similarly, but I don't recall the other(s?) just now.

    To do any one of them requires rendering the frame N times {for some N} with the parameter varied, then averaging the frames. (Eight times might be typical.) Naively, to do them all would require N**5 renderings - 32,768 raytracings of the frame to do all five.

    The insight was to realize that the effects could be computed SIMULTANEOUSLY. Pseudorandomly pick one of the N from each effect's set for each frame and only render N frames, rather than N**5. Eight is a LOT smaller than 32K. B-)

    Sounds like Nvidia ported this hack to the firmware for their accelerator.
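
For readers curious what "computing the effects simultaneously" looks like, here is a minimal standalone C++ sketch of the idea as described in the comment above. It is a generic illustration of combined (distributed/stochastic) sampling, not NVIDIA's or Lucasfilm's actual implementation: each of the N renders draws one pseudorandom value for every effect at once, instead of nesting the loops and paying N**5 renders.

```cpp
// Generic sketch of combined ("distributed") sampling: one random parameter per
// effect per render, so N renders stand in for N^5 nested ones.
#include <cstdio>
#include <random>

struct SampleParams {
    double lensU, lensV;    // depth of field: point on the camera iris
    double lightU, lightV;  // soft shadows: point on the area light source
    double time;            // motion blur: instant within the exposure
    double pixelU, pixelV;  // anti-aliasing: offset within the pixel
};

int main() {
    const int N = 8;                                  // samples per effect
    std::mt19937 rng(42);                             // fixed seed for repeatability
    std::uniform_real_distribution<double> u(0.0, 1.0);

    for (int i = 0; i < N; ++i) {
        SampleParams s{u(rng), u(rng), u(rng), u(rng), u(rng), u(rng), u(rng)};
        // A real renderer would trace the whole scene once with this parameter
        // set; averaging the N resulting images approximates all effects together.
        std::printf("render %d: t=%.3f lens=(%.2f,%.2f) pixel=(%.2f,%.2f)\n",
                    i, s.time, s.lensU, s.lensV, s.pixelU, s.pixelV);
    }
    std::printf("%d combined renders instead of %d nested ones\n",
                N, N * N * N * N * N);
    return 0;
}
```

Averaging enough pseudorandom (or stratified) samples converges on the same image as the nested loops would produce, which is why eight renders can stand in for 32,768.
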
