Ageia PhysX Tested

MojoKid writes "When Mountain View, California start-up Ageia announced a new co-processor architecture for desktop 3D graphics that off-loads the heavy burden physics places on the CPU-GPU rendering pipeline, the industry applauded what looked like the enabling of a new era of PC gaming realism. Of course, on paper and in PowerPoint, things always look impressive, so many waited with bated breath for hardware to ship. That day has come, and HotHardware has fully tested a new card shipped from BFG Tech, built on Ageia's new PPU. But is this technology evolutionary or revolutionary?"
  • Well, since there hasn't been anything like it before, it would be Revolutionary by definition. However, I think it will be a little while before we can really make any intelligent conclusion on the matter, as it is still way too early in the development cycle for any kind of "review" to be valid. What, with one game and one demo as all that is available? Too soon.
    • Evolutionary (Score:3, Interesting)

      by phorm ( 591458 )
      Not necessarily true. While dedicated cards for physics haven't existed, dedicated cards for other operations have, and much of the physics calculation is already being done in games, just in software with an extra load on the CPU rather than on a dedicated unit. As physics becomes a bigger focus in the realism of 3D games, perhaps it is in fact a foreseeable evolutionary step that specific devices would exist to process it.
      • While dedicated cards for physics haven't existed

        Dedicated cards? Probably. Dedicated computers? Definitely, especially if you consider that the very first computer [wikipedia.org] was built to essentially perform physics calculations (artillery trajectories).

        Hardly revolutionary.
        • by Slithe ( 894946 )
          >> Dedicated cards? Probably. Dedicated computers? Definitely,

          Either say "Dedicated cards? Probably not. Dedicated computers? Definitely" or "Dedicated cards? Probably. Dedicated computers? Definitely not".
    • Well, since there hasn't been anything like it before, it would be Revolutionary by definition.

      Actually I think that definition is a little closer to Evolutionary than revolutionary.

      To be revolutionary it has to cause change in the industry: something that makes it so that, from now on, all cards/computers will ship with one and all new games will require them.

      Revolutionary products aren't usually judged as such on release, but only upon reflection. 3dfx is a good example of a revolutionary change in the graphics industry.

  • by nathan s ( 719490 ) on Tuesday May 09, 2006 @06:29PM (#15297236) Homepage
    ...they could use a card dedicated to keeping their server up when Slashdot finds it. It's already down for me.
  • by JoeLinux ( 20366 ) <joelinux@gma[ ]com ['il.' in gap]> on Tuesday May 09, 2006 @06:30PM (#15297238)
    Since Mainframes, I've always thought it makes more sense to modularize hardware.

    While studying for my EE, I often wondered what the purpose of having a clock was, since so many of the individual chips had often finished their calculations before the next clock cycle came around.

    I think we are going to see the clock go away, replaced with "Data Ready" lines, which will also help heavily in determining the bottlenecks in a given system (Hint: it's the system that is taking the longest to put up the "Data Ready" flag).
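
    (For what it's worth, here is a toy Python sketch of the "Data Ready" idea as a software-level analogy: the consumer starts the moment the producer says its result is ready, instead of waiting out a fixed worst-case clock period. The stage functions and latencies are entirely made up; this is an illustration, not a hardware design.)

    import random

    # Two "stages" with variable, made-up latencies. With handshaking, the
    # downstream stage waits exactly as long as the upstream one needs;
    # with a clock, every step is charged the full worst-case period.

    def stage_a(value):
        return value * 2, random.uniform(0.2, 1.0)   # (result, simulated latency)

    def stage_b(value):
        return value + 1, random.uniform(0.2, 1.0)

    def run_handshaked(items):
        t = 0.0
        for x in items:
            y, lat_a = stage_a(x)
            t += lat_a                # advance as soon as "Data Ready" goes up
            _, lat_b = stage_b(y)
            t += lat_b
        return t

    def run_clocked(items, clock_period=1.0):
        t = 0.0
        for x in items:
            y, _ = stage_a(x)
            t += clock_period         # everyone waits out the worst case
            stage_b(y)
            t += clock_period
        return t

    if __name__ == "__main__":
        data = list(range(100))
        print("handshaked:", round(run_handshaked(data), 1), "time units")
        print("clocked:   ", round(run_clocked(data), 1), "time units")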

    I also think that optics will be the way of the future. Quantum will be like Mechanical Television: cute idea, but impractical for mass production.

    Optics. Think of it this way: Imagine a bus that can address individual I/O cards with full duplex, simply by using different colors for the lasers. Motherboards are going to get a lot smaller.

    That's my opinion, anyway.

    Joe

    ---
    Q:Why couldn't Helen Keller drive?

    A:Because she was a woman.
    • by daVinci1980 ( 73174 ) on Tuesday May 09, 2006 @06:42PM (#15297328) Homepage
      Having a Data Ready flag doesn't solve the problem that a clock solves. How do you know when you can read your 'Data Ready' flag? How do you know that your current reading of 'Data Ready' is really new data, and not the same data you haven't picked up yet?

      A clock is a synchronization scheme, and it solves a very low-level issue: how do I synchronize my reads and writes on a physical level?

      Many people have tried to create systems that don't have clocks. Without exception, they have all failed or have been unscalable.
    • by AuMatar ( 183847 ) on Tuesday May 09, 2006 @06:43PM (#15297342)
      The purpose of a clock- ease of development. With a clock, you can advance new input into the next pipe stage at known intervals, allowing each stage to finish completely. Without a clock, you need to make sure that no circuit feeds its data into the next part too soon. Doing so would end up causing glitches. For example, if the wire that says to write RAM comes in at time t=0, but the new address comes in at time t=1, you could corrupt whatever address was on the line previously. With a clock, all signals update at the same time.
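
      (A toy software analogy of that point, with made-up stages: compute every stage's next value from the registers as they were before the clock edge, then commit them all at once, so no stage ever reads a half-updated input.)

      # Toy clocked pipeline: input -> double -> add one.
      def clock_tick(regs, next_input):
          fetch, stage1, stage2 = regs
          # Next state is computed from the *current* register values...
          new_fetch = next_input
          new_stage1 = None if fetch is None else fetch * 2
          new_stage2 = None if stage1 is None else stage1 + 1
          # ...and committed all at once: the "clock edge".
          return (new_fetch, new_stage1, new_stage2)

      def run(inputs):
          regs = (None, None, None)
          outputs = []
          for x in inputs + [None, None]:      # extra ticks to drain the pipe
              regs = clock_tick(regs, x)
              if regs[2] is not None:
                  outputs.append(regs[2])
          return outputs

      print(run([1, 2, 3, 4]))                 # -> [3, 5, 7, 9]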

      It's possible to make simple circuits go the clockless route. Complex circuits are nearly impossible. There's no way a P4 could be made clockless; the complexity of such an undertaking is mind-boggling. Even testing it would be nearly impossible.

      The problem with data ready flags is the same as with the rest of the circuit- how do you prevent glitches without a latching mechanism?

      And this isn't about modularizing hardware. It's about adding extra processing power with specific hardware optimizations for physics computation. Whether it's a good idea or not depends on how much we need the extra power. I'm not about to run out and buy one, though.

      Actually, in desktops today the trend is to remove modularization. AMD got a nice speed boost by moving the memory controller into the Athlon (at the cost of requiring a new chip design for new memory types). I'd expect to see more of that in the future: speed boosts are drying up, and moving things like memory and bus controllers on-die is low-hanging fruit.
      • Multi-stage processors have latching mechanisms between stages that release on a clock pulse. I think what he meant by "data ready flags" was to allow the latches between the stages to unlock automatically, instead of being dependent on a chip-wide clock signal.

        But then, I'm only working on a Bachelor's in Computer Information Systems...what would I know about signalling in a complex silicon device?
        • The same problem remains: how do you prevent glitches on the data ready lines? If an operation takes 1 ns but the data ready line spikes high for 0.1 ns at 0.5 ns, that could cause a disaster.

          And I have a bachelor's in computer engineering, which means I have designed both synchronous and asynchronous circuits.
    • I'm not so sure about that. Down the years, the trend has usually been that companies have always released specialized chipsets or mini-CPUs that can take over some part of the CPU's workload. While this has worked in the short run (think math coprocessor), the CPU has become sufficiently powerful over time to negate the advantage. Look at it this way: If Intel/AMD releases a quad-core or octa-core CPU, in which each core is more powerful than the fastest single-core today, any of those cores could take up
      • by complete loony ( 663508 ) <Jeremy.Lakeman@g ... .com minus punct> on Tuesday May 09, 2006 @07:50PM (#15297655)
        Of course this argument falls down with graphics processing. While it is true that today's CPUs could probably process the graphics engines from games 5-7 years old, the bandwidth and processing requirements of current-generation games are very different from the types of problems CPUs normally handle. It's a type of problem that generic CPUs can't keep up with. Physics may be a similar type of problem, one that can be performed far more efficiently than a current CPU can handle. That said, there has to be a large market for such a device to fund the R&D for future revisions, or the generic CPU will catch up again.

        With graphics, small visual differences between hardware implementations are not a big problem. Physics processing needs a standard interface, and precise specs on what the output should be. If there is only going to be one vendor, and one proprietary interface, this market will fail.

        • Of course this argument falls down with graphics processing. While it is true that today's CPUs could probably process the graphics engine from games 5-7 years old,

          Not really. Quake 3 and Tux Racer are a couple of examples where the software-based approach isn't cutting it,
          and that's with an Athlon 64 3000+ with integrated graphics. You get either very bad graphics quality compared to a dedicated card, or a slideshow.

          It's a type of problem that generic CPUs can't keep up with. Physics may be a similar type of problem, one
    • I/O is one of the areas that could really use some help. I envision a contactless bus where expansion devices are powered by induction; high-power devices could have good ol' electrical contacts. Just as PCI Express features 1-n lanes support, my fantasy bus uses multiple fiberoptic connections, with some slots supporting more than others for additional bandwidth.

      The only thing on the motherboard would be the bus arbitrator. Everything else would go into a module. Modules would also be able to not only

    • While studying for my EE, I often wondered what the purpose of having a clock was
      No clock, no data. Did you pass?
  • Anandtech too ... (Score:3, Informative)

    by Anonymous Coward on Tuesday May 09, 2006 @06:30PM (#15297239)
    The link: http://www.anandtech.com/video/showdoc.aspx?i=2751 [anandtech.com]

    Short summary: Great for synthetic benchmarks, probably not real-world ready.
  • by mobby_6kl ( 668092 ) on Tuesday May 09, 2006 @06:37PM (#15297290)
    Without question, one of the hottest topics throughout the industry this year has been the advent of the discrete physics processor or "PPU" (Physics Processing Unit). Developed by a new startup company called Ageia, this new physics processor gives game developers the opportunity to create entirely new game-play characteristics that were not considered possible using standard hardware. Since its original inception, both CPU and GPU vendors have come to the spotlight to showcase the ability to process physics on their respective hardware. However, the Ageia PhysX PPU is the only viable solution which is readily available to consumers.

    For the foreseeable future, the only vendors which will be manufacturing and selling physics processors based on the Ageia PhysX PPU are ASUS and BFG. With ASUS primarily focusing on the OEM market, BFG will enjoy a monopoly of sorts within the retail channel, as they will comprise the vast majority of all available cards on store shelves. Today, we will be running a retail sample of BFG's first ever Physics processor through its paces. Judging from the packaging alone, you can tell that this box contains something out of the ordinary. Housed in an unusual triangular box with a flip-down front panel, consumers can glimpse the card's heatsink assembly through a clear plastic window.

    BFG Tech PhysX
    Card And Bundle

    Flipping the box, consumers are presented with a quick listing of features complete with summaries and a small screen-shot. Most importantly, the package also lists the small handful of games which actually support the PPU hardware. This short list consists of City of Villains, Ghost Recon Advanced Warfighter, and Bet on Soldier: Blood Sport.

    Upon opening the packaging, we are presented with a standard fare of accessories. Beyond the card itself, we find a power cable splitter, a driver CD, a demo CD, and a quick install guide. Somewhat surprisingly, we also find a neon flyer warning of a driver issue with Ghost Recon Advanced Warfighter that instructs users to download the latest driver from Ageia to avoid the problem. This is a bit disheartening, as there are only three games which currently support this hardware. With this in mind, it is hard not to feel as though the hardware is being rushed to market a bit sooner than it should have been.

    Directing our attention to the card itself, we find a rather unassuming blue PCB with a somewhat standard aluminum active heatsink assembly. Amidst the collection of power circuitry, we also find a 4-pin Molex power connector to feed the card, as a standard PCI slot does not provide an adequate power source for the processor. At first glance, the card looks remarkably similar to a mainstream graphics card. It's not until you see the bare back-plate with no connectivity options that you realize this is not a GeForce 6600 or similar product.

    Thankfully, the BFG PhysX card does not incorporate yet another massive dual-slot heatsink assembly as so many new pieces of high-end hardware do these days. Rather, we find a small single-slot active heatsink that manages to effectively cool the PPU while keeping noise at a minimum. Removing the heatsink, we were pleased to find that BFG has done an excellent job of applying the proper amount of thermal paste and that the base of the heatsink was flat with no dead spots. After powering the system, we see that BFG has dressed the card up with three blue LEDs to appease those with case windows.

    With the heatsink removed, we have our first opportunity to glimpse the Ageia PhysX PPU in all its glory. Manufactured on a 0.13u process at TSMC, the die is comprised of 125 million transistors. Overall, the size of the die is slightly larger than the memory modules which surround it. Looking closely at the board, we see that the 128MB of memory consists of Samsung K4J55323QF-GC20 GDDR3 SDRAM which are rated for a maximum frequency of 500MHz. Unfortunately, neither BFG nor Ageia have disclosed what frequency the PPU memory and core operate at, so we are unsure
    • http://www.amd.com/us-en/assets/content_type/DownloadableAssets/So32v64-56k.wmv [amd.com]

      Nice comparison concerning current 32-bit applications/limitations versus 64-bit. If this video is TRUE, then I won't bother with a PPU - my Athlon 64 3000+ may already be able to handle those extra physics calculations, while any WELL-PROGRAMMED game will use any extra resources I have available for extra object/texture/physics rendering.

      Sorry, IMHO, PPU is at a loss. Mod down at will.
  • Skeptical (Score:5, Interesting)

    by HunterZ ( 20035 ) on Tuesday May 09, 2006 @06:38PM (#15297296) Journal
    From what I was able to read of the article before it got slashdotted, it sounds like games that can take advantage of it require installation of the Ageia drivers whether you have the card or not. This leads me to believe that without the card installed, those games will use a software physics engine written by Ageia, which is likely to be unoptimized in an attempt to encourage users to buy the accelerator card.

    Also, it's likely to use a proprietary API (remember Glide? EAX?) that will make it difficult for competitors to create a wider market for this type of product. I really can't see myself investing in something that has limited support and is likely to be replaced by something designed around a non-proprietary API in the case that it does catch on.
    • Comment removed based on user account deletion
      • Ever heard of OpenGL? If you don't have a card, the software driver will do the same thing, but slower. Same deal over here. I doubt it will be unoptimized anyway; developers wouldn't put up with that.

        Yes, except that OpenGL was and is an open standard. It's not controlled by one company who is trying to push a product that accelerates software which uses their API.
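
        (To make the comparison concrete, here is a minimal Python sketch of the "one API, hardware if present, software fallback if not" pattern; every class and function name here is invented for illustration and is not Ageia's actual SDK.)

        class SoftwarePhysics:
            """Plain CPU path: fine for a handful of bodies, slow for thousands."""
            def simulate(self, bodies, dt):
                for b in bodies:
                    b["vy"] -= 9.81 * dt
                    b["x"] += b["vx"] * dt
                    b["y"] += b["vy"] * dt

        class HardwarePhysics(SoftwarePhysics):
            """Stand-in for 'hand the whole batch to the PPU and wait for results'."""
            def simulate(self, bodies, dt):
                super().simulate(bodies, dt)     # pretend this ran on the card

        def create_physics_device(hardware_present):
            # The game only ever talks to this factory; the backend is a runtime detail.
            return HardwarePhysics() if hardware_present else SoftwarePhysics()

        bodies = [{"x": 0.0, "y": 10.0, "vx": 1.0, "vy": 0.0} for _ in range(4)]
        engine = create_physics_device(hardware_present=False)   # no PPU installed
        for _ in range(60):
            engine.simulate(bodies, dt=1.0 / 60.0)
        print(round(bodies[0]["y"], 2))                           # same answer either way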
    • Also, it's likely to use a proprietary API (remember Glide? EAX?) that will make it difficult for competitors to create a wider market for this type of product.

      I also remember that in its day Glide was faster and resulted in higher quality 3d than OpenGL or DirectX.

      LK
      • Re:Skeptical (Score:5, Insightful)

        by HunterZ ( 20035 ) on Tuesday May 09, 2006 @07:25PM (#15297542) Journal
        I also remember that in its day Glide was faster and resulted in higher quality 3d than OpenGL or DirectX.

        For a while, since 3dfx was the only one innovating at the time. Once they got hold of the market, nobody else could break in, because the games only supported Glide and nobody else was able to make Glide-compatible hardware due to it being a proprietary API.

        Then nVidia came along with superior cards that only supported Direct3D and OpenGL, because Glide was 3dfx proprietary. Game developers were forced to switch to D3D/OpenGL to support the new, wider array of hardware. Since 3dfx cards were overly optimized for Glide, this resulted in games that ran poorly on 3dfx hardware but great on nVidia. The rest is history.

        EAX is a similar story. Creative owns it, but what has happened is that many game developers don't bother to take advantage of it, instead relying on DirectSound3D or OpenAL as the lowest common denominator. The widespread use of SDKs such as the Miles Sound System also helps allow transparent use of various sound API features, though, so mileage varies. Personally, I've been without Creative products for years now and haven't missed them one bit. I'm currently waiting for the next generation of DDL/DTS Connect sound cards to come out, and then I'll give those a shot.

        The same thing is likely to happen here: competitors will make their own products, but because they won't be able to use the PhysX engine, they will make their own. It will be an open API, because they'll have to band together to get game developers to support their cards. Ageia will be forced to add driver support for the standard API, but it won't perform as well on their cards. If they're smart, they'll either open the API early on, or else release new hardware built around the open API. This is all assuming the PPUs even catch on, of course.

        The problem with the PC gaming hardware market is that when there's only one company making a certain type of product, they tend to stop innovating. Then, when someone else develops a competing product they try to use marketing to stay ahead instead of coming up with more competitive products. Sometimes gamers see through the marketing (3dfx) and sometimes they have a harder time doing so (EAX). It will be interesting to see how it turns out this time.
        • Although they may not use the latest cheap hardware, free software advocates tend to see through the marketing most of the time. If you're ever curious, you might think to ask a free software advocate; it's possible they're not entirely crazy.
    • This leads me to believe that without the card installed, those games will use a software physics engine written by Ageia, which is likely to be unoptimized in an attempt to encourage users to buy the accelerator card.

      I find myself a bit puzzled by what this thing's actually supposed to do for me. Given that there are currently no applications that require it (since it's not actually shipping yet, requiring it would be the kiss of death), supporting the PhysX can make no difference to the actual gameplay --- because any game needs to be able to run on machines without it. This means all it'll be good for is eye candy. Is it really worth spending money on the PhysX so you can get slightly prettier explosions, when inst

      • I'm not sure what indicates that it's a DSP. I'm not much of a hardware guy, but aren't DSPs intended to operate on a stream of data? I don't think that's what's going on here.

        It looks like the way they're setting it up is that they're building a physics engine that can offload some of its processing to this card. Apparently this is reflected in these initial games in the form of additional dynamic objects in the game environments.
    • Certainly there would eventually need to be some kind of standard DirectX abstraction for physics. I would not bet on vendors agreeing on one by themselves. Anyone know if Microsoft has any plans for a physics API?

      -matthew
    • Re:Skeptical (Score:3, Insightful)

      by aarku ( 151823 )
      It absolutely is optimized in software. That claim is ridiculous. My own little informal tests have put it well above Newton and ODE for a lot of cases, and who knows about Havok (too damn expensive to try).

      I think most people don't realize it's a great physics engine by itself that has the added bonus of supporting dedicated hardware. Plus, a lot of the larger developers presumably have source access, so if it doesn't look optimized or if there are big /* LOL THIS'll MAKE EM BUY A CARD */ comments... well.
    • It's also going to disappear without much fanfare because it's pointless. It's far better to spend the extra money on a dual-core (or more) processor which can be used for all your computing needs, rather than on fewer general-purpose cores plus an expensive single-use card. Dual-core processors make it redundant before it's even come out.
  • by Eideewt ( 603267 ) on Tuesday May 09, 2006 @06:39PM (#15297306)
    I think that while this card can do some amazing physics stuff, we aren't ready to make use of that capability for anything more than a little eye candy. Not in networked games, at least. Trying to keep everyone's world in sync is hard enough as it is, without adding even more objects that need to appear in the same place for everyone.
    • Not in networked games, at least. Trying to keep everyone's world in sync is hard enough as it is, without adding even more objects that need to appear in the same place for everyone.

      Actually, I think this problem could be solved with a little bit of creative coding. You see, you don't really need to send the complete position of every object during the movement. You could just send the starting point of each object, and the amount of force applied to it, and let the PPU on each client computer work out t
      • Regardless, that's a lot of objects to send force info for, plus everyone needs to get their objects moving at the same time. I'm sure creative coding can do a lot to mitigate the problem, but if it were easy to keep a bunch of clients in sync then we wouldn't ever see lag.
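
        (A crude Python sketch of the parent's "send the starting state and the forces, not the positions" idea: both clients replay the same force log and should land on the same world, but only if every machine's floating-point math is bit-for-bit identical, which is exactly where the sync problem creeps back in. All names here are illustrative.)

        def step(state, force, dt=1.0 / 60.0, mass=1.0):
            x, v = state
            v += (force / mass) * dt       # same integration on every client
            x += v * dt
            return (x, v)

        def simulate(initial_state, force_log):
            state = initial_state
            for force in force_log:
                state = step(state, force)
            return state

        initial = (0.0, 0.0)
        forces = [10.0] * 30 + [0.0] * 30              # the only data sent over the wire
        client_a = simulate(initial, forces)
        client_b = simulate(initial, forces)
        assert client_a == client_b                    # holds only with identical FP behaviour
        print(client_a)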
  • Coral Cache link (Score:2, Informative)

    by Anonymous Coward
  • Ghost Recon video (Score:5, Informative)

    by jmichaelg ( 148257 ) on Tuesday May 09, 2006 @06:46PM (#15297356) Journal
    Anandtech posted these video sequences [anandtech.com] to show what you see with and without the card.

    The Anandtech article [anandtech.com] states that the physics hardware slows down the framerates, which Ageia can't possibly be happy about.
  • short for "abated"
  • no titles yet (Score:2, Interesting)

    by jigjigga ( 903943 )
    I've been following them for a long time; their software demos blew my mind a few years ago (the one with the towers made of bricks that you could destroy was oh so fun). We should wait for real games to make use of the physics. Ghost Recon uses it as a gimmick. The tech demo game listed in one of the articles is a real showing of what the card is capable of. When the game engines catch up and use it as an integral part rather than a gimmick, it will usher in a new era of gaming. It really will, look at what
  • by cubeplayr ( 966308 ) on Tuesday May 09, 2006 @07:02PM (#15297444)
    Is there any competition for Ageia? Reviews are all fine and dandy, but product comparisons are where the decisions should be made. It should be based on which PPU can perform a given task faster/better. Competition would also drive each competitor to better their own product to beat the other. However, they shouldn't be mutually exclusive (i.e. if you use Product A, then you can't use a program with only Product B support).

    I wonder how long it will be before there is mainstream demand for a separate physics unit (probably as soon as games require them). It sounds like a great idea to take some of the load off the CPU. Does this mean that game performance will now be more directly linked to the speed and power of the GPU and PPU, and that the CPU will be more of an I/O director and less of a number cruncher?

    I've seen numerous posts of people saying that they do not have any available PCI slots. Will the introduction of a new type of card lead to larger motherboards with more slots or might it lead to a small graphics card that does not monopolize the PCI space? Also, there is the concern of adding another heat source to the mix.

    "Get you facts first - then you can distort them as you please." -Mark Twain
    • The competition is engines like Havok FX which run on a graphics card and provide "effects physics". This requires very modern graphics hardware but not a special card. Presumably the downside is that the extra load on the GPU reduces framerates or graphics quality in some other way, but I don't know enough to say. It'll be interesting to see how it works out at any rate.
  • by throx ( 42621 ) on Tuesday May 09, 2006 @07:10PM (#15297482) Homepage
    I really don't see a custom "Physics Processor" being a long-lived add-on for the PC platform. It's essentially just another floating point SIMD processor with specialized drivers for game engine physics. With multicore+hyperthreaded CPUs coming out very soon, the physics engines can be offloaded to extra processing units in your system rather than having to fork out money for a card that can only be used for a special purpose.

    In addition, there's already a hideously powerful SIMD engine in most gaming systems, loosely called "the video card". With the advent of DirectX 10 hardware, which lets the GPU write its intermediate calculations back to main memory rather than forcing it all out to the frame buffer, a whole bunch of physics processing can suddenly be done on the GPU.

    Lastly, the API to talk to these cards is single-vendor and proprietary. That's never been a recipe for longevity (unless you're Microsoft), so it won't really take off until DirectX 11 or later integrates a DirectPhysics layer that lets multiple hardware vendors compete without game devs having to write radically different code.

    So, between multicore/hyperthreaded CPUs and DirectX 10-class GPUs on one side, and a proprietary single-vendor API to the card on the other... cute hardware, but not a long-term solution.
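
    (To put the "it's essentially another floating point SIMD processor" point in concrete terms, here's a toy NumPy integration step: the same few float operations applied to thousands of bodies in one batch. The body count and state layout are made up for illustration.)

    import numpy as np

    N = 10_000                                           # made-up number of rigid bodies
    pos = np.zeros((N, 3), dtype=np.float32)
    vel = np.random.randn(N, 3).astype(np.float32)
    gravity = np.array([0.0, -9.81, 0.0], dtype=np.float32)

    def integrate(pos, vel, dt=1.0 / 60.0):
        # One explicit-Euler step for the whole batch, no per-body loop:
        # exactly the kind of data-parallel work SIMD units, GPUs and a PPU eat up.
        vel = vel + gravity * dt
        pos = pos + vel * dt
        return pos, vel

    for _ in range(60):                                  # one simulated second
        pos, vel = integrate(pos, vel)
    print(pos.mean(axis=0))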
    • "With multicore+hyperthreaded CPUs coming out very soon, the physics engines can be offloaded to extra processing units in your system rather than having to fork out money for a card that can only be used for a special purpose."

      I think the same argument used to be made for 3D accelerator cards many years ago. Still hasn't come to pass. The basic problem is that dedicated hardware will always be more powerful than a generic processor. The real question is whether or not the physics problems that are offlo
    • Fear: When you see B8 00 4C CD 21 and know what it means

      I think that it's something like:

      mov ax, 4C00h
      int 21h

      I think that interrupt 21h is the DOS interrupt for things like printing text to the screen and other miscellaneous stuff. If I had to guess, I'd say function 4Ch is "exit program." Of course, I could be completely wrong.

      I was a BIOS programmer for several years... I did a lot of work on the BIOSes for the NForce-1, NForce-2, and NForce-4 chips. So with a sig like that, I'm guessing you w

    • With multicore+hyperthreaded CPUs coming out very soon

      Eh? I already have a multicore CPU, and don't consider myself to be an early adopter by any means. If you mean specifically "CPUs with multiple cores and hyperthreading support", then

      a) I believe they're already available (although I'm not in the market at the moment so am not really keeping up with it)
      b) HT was never that big a deal, performance-wise

      In addition, there's already a hideously powerful SIMD engine in most gaming systems loosely called "the
      • The HT+Multicore bit was more referring to going more and more mainstream, not that there weren't systems out there already.

        That's all very well, but don't forget that that SIMD engine is already busy rendering scenes. I'd hate to have to choose between nice visuals and a decent frame rate, good physics and a decent frame rate, or nice visuals and good physics and a sucky frame rate.

        So what's the point of putting a totally incompatible SIMD engine in there then? Put in something that can be used to improv

  • If it doesn't have a fan, and at least one additional power connector, how can anyone take it seriously as cutting-edge hardware?

    And that's not even mentioning a lack of DRM. Doesn't Hollywood own gravity these days? I'm sure a patent was filed somewhere - or was it a copyright?

  • much like that cat that ate cheese then crouched in front of the mouse hole...

  • Doesn't look like a very good performance improvement for the money. In fact, the CPU makers' new "dual-core" marketing push may just eat up the dollars for something like this. If you simply move your physics engine to hardware, it only solves one part of a larger, and very delicate, puzzle.
  • Just as the FPU was once an addon separate from your CPU, I wonder how much longer this so-called PPU will remain an addon separate from your GPU. I imagine that soon we'll have graphics cards with built-in PPUs, or even GPUs with PPUs built into the die as we have FPUs built into CPUs now.
    • To me this looks like a preview of what the Cell will be capable of, although this looks slightly less flexible. All this represents is a multi-core SIMD floating point unit, much like a graphics card but with a more rounded instruction set, except that it's all the way out on the PCI bus rather than being in core.

      Unlike the GPU, which is quite happy sitting out on its own doing what it's told, to my mind a physics engine should be more interactive.

      Graphics processor:
      Transform polygons in to pi
    • AMD has licensed the technology behind what is supposed to be a VERY fast FPU device. It can supposedly do the math faster than a quad-core AMD CPU. Intel also attempted to get the rights, but AMD apparently got them, and with their HyperTransport is supposed to be in a better position to actually use it. Obviously it's not released yet, but I believe there was an article posted here a month or so ago talking about it. So yeah, separate CPUs CAN be an advantage for specialized tasks...

      Also, DRC has come out with a prog
  • Shifting functions away from the main CPU is not new, and adding a card or device to handle non-core functions is not new, so it's the specific functions that the card handles that are new(ish). Prior art would include the FPU chip, co/extra-processor cards (like PC cards for a Mac, the Sidecar for the Amiga), MPEG-2 cards, etc.
  • I want to see how they will implement this in X11 or Xgl-type desktops. When my icons collide with each other, I want it done realistically! When I kill Firefox because it's frozen, I want to see it shatter into a million pieces! And then have those pieces push around the rest of my desktop.

    This isn't serious, of course, but the reason I say this is I wonder if there are applications for things other than video games.

  • by Animats ( 122034 ) on Tuesday May 09, 2006 @11:27PM (#15298598) Homepage
    It sounds like they're looking at demos with an older version of the Ageia API. In the 2.4 release, cloth (even tearable cloth) is supported. The demo of this was shown at GDC, but it may not be in the current crop of games. It's less impressive than one might expect.

    Still, I would have expected a bigger improvement in performance on existing stuff. There may be too much of a bottleneck getting in and out of the physics processor, which is the usual problem with coprocessors. I'd expect more improvement in fluids, particles, hair and cloth physics, which usually don't feed back into the gameplay engine and thus can be done concurrently with the main engine work. If you're banging boxes around, the main game engine probably has to wait for the physics engine to get the new box positions, so there's no big win there. Even if you have feedback to the game engine from cloth, you can probably delay it a cycle, so that when the cape gets caught in the door, it doesn't yank on the character until one cycle later.
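
    (A rough Python sketch of that "delay it a cycle" idea: kick off the effects physics for frame N in the background, render frame N using the results from frame N-1, and only pick up the new results next frame, so the main loop never stalls on the coprocessor round-trip. The names and timings are invented.)

    import time
    from concurrent.futures import ThreadPoolExecutor

    def simulate_effects(frame):
        time.sleep(0.01)                       # stand-in for the PPU round-trip latency
        return f"cloth/particle results for frame {frame}"

    def game_loop(frames=5):
        pool = ThreadPoolExecutor(max_workers=1)
        pending = pool.submit(simulate_effects, 0)               # prime the pipeline
        for frame in range(1, frames):
            next_pending = pool.submit(simulate_effects, frame)  # start frame N early
            results = pending.result()                           # consume frame N-1
            print(f"frame {frame}: rendering with '{results}'")
            pending = next_pending
        pool.shutdown()

    game_loop()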

  • What's the point of this processor? A dual-core (or even quad-core) CPU with a dual-SLI GPU isn't enough?! We need a physics processing unit now? Man, do they ever invent ways to siphon more money from a fool's wallet. $300, good lord, that is a lot of money. Can't developers utilize the processors that are already in the computer for the extra physics? With questionable support in the near term and a total lack of a competing product, I am not shelling out a thing for this.
  • You guys do know what happened to the idea of separate "486DX" and "486SX" lines, right?

    No, I don't see anyone trying to sell us "x86 math coprocessors" any more either.
