
Ageia PhysX Tested

ScuttleMonkey posted more than 8 years ago | from the useless-in-the-short-term dept.


MojoKid writes "When Mountain View, California start-up Ageia announced a new co-processor architecture for Desktop 3D Graphics that off-loaded the heavy burden physics places on the CPU-GPU rendering pipeline, the industry applauded what looked like the enabling of a new era of PC Gaming realism. Of course, on paper and in PowerPoint, things always look impressive, so many waited with baited breath for hardware to ship. That day has come, and HotHardware has fully tested a new card shipped from BFG Tech, built on Ageia's new PPU. But is this technology evolutionary or revolutionary?"


179 comments

Revolutionary (1)

uzor (787499) | more than 8 years ago | (#15297222)

Well, since there hasn't been anything like it before, it would be Revolutionary by definition. However, I think that it will be a little while before we can really make any intelligent conclusion on the matter, as it is still way too early in the development cycle for any kind of "review" to be valid. What, with one game and one demo as all that is available? Too soon.

Evolutionary (2, Interesting)

phorm (591458) | more than 8 years ago | (#15297466)

Not necessarily true. While dedicated cards for physics haven't existed, dedicated cards for other operations have, and many of the physics calculations themselves are still being done in games, just with an extra load on the CPU in software rather than on a dedicated unit. As physics becomes a bigger focus in the realism of 3D games, perhaps it is in fact a foreseeable evolutionary step that specific devices would exist to process it.

Re:Evolutionary (1)

Mydron (456525) | more than 8 years ago | (#15297986)

While dedicated cards for physics haven't existed

Dedicated cards? Probably. Dedicated computers? Definitely, especially if you consider that the very first computer [wikipedia.org] was built to essentially perform physics calculations (artillery trajectories).

Hardly revolutionary.

Spelling fix. (0)

Anonymous Coward | more than 8 years ago | (#15297226)

It's bated breath, not baited.

Bated as in masturbated.

Re:Spelling fix. (1)

Gideon Fubar (833343) | more than 8 years ago | (#15297282)

actually, it's bated as in abated, as in stopped.

but whatever turns you on...

hear that sound? (0)

Anonymous Coward | more than 8 years ago | (#15297563)

it sounded something like *SWOOOOOOOOOOOOOOSH* ...

Re:Spelling fix. (1)

SredniVashtar (237017) | more than 8 years ago | (#15297314)

Actually, I just assumed that people had eaten a lot of sashimi and "baited" was intentional.

Looks like... (5, Funny)

nathan s (719490) | more than 8 years ago | (#15297236)

...they could use a card dedicated to keeping their server up when Slashdot finds it. It's already down for me.

Re:Looks like... (3, Informative)

Anonymous Coward | more than 8 years ago | (#15297260)

Re:Looks like... (2, Funny)

Grey_14 (570901) | more than 8 years ago | (#15297336)

whoa, for a second there I thought you were linking to such a card,

Re:Looks like... (2, Funny)

uzor (787499) | more than 8 years ago | (#15297517)

That would be cool. An Anti-Slashdot-Effect Hardware Accelerator!! Sign me up!

Re:Looks like... (1)

Tyrion Moath (817397) | more than 8 years ago | (#15297386)

Even the mirror was already Slashdotted. Impressive.

Anandtech already did a review - a while back (2, Informative)

Ruff_ilb (769396) | more than 8 years ago | (#15297396)

http://www.anandtech.com/video/showdoc.aspx?i=2751 [anandtech.com]

"The added realism and immersion of playing Ghost Recon Advanced Warfighter with hardware physics is a huge success in this gamer's opinion. Granted, the improved visuals aren't the holy grail of game physics, but this is an excellent first step. In a fast fire fight with bullets streaming by, helicopters raining destruction from the heavens, and grenades tearing up the streets, the experience is just that much more hair raising with a PPU plugged in."

Re:Looks like... (1)

OwnedByTwoCats (124103) | more than 8 years ago | (#15297551)

That link didn't work, either. :-(

Re:Looks like... (0)

Anonymous Coward | more than 8 years ago | (#15297268)

Service Unavailable...

Sigh... So much for the slashdot effect diminishing...

anybody got a mirror? :/

Re:Looks like... (0)

Anonymous Coward | more than 8 years ago | (#15297424)

So you are saying their server isn't evolutionary or revolutionary, but stationary.

Re:Looks like... (2, Funny)

CODiNE (27417) | more than 8 years ago | (#15297485)

Naw man, that's just a REALISTIC rendering of how their server would respond if each connection was like a shot from a BB gun. That's some seriously FINE GRAINED Physics processing!

Wave of the future... (3, Insightful)

JoeLinux (20366) | more than 8 years ago | (#15297238)

Since Mainframes, I've always thought it makes more sense to modularize hardware.

While studying for my EE, I often wondered what the purpose of having a clock was, since so many of the individual chips had often finished their calculations before the next clock cycle came around.

I think we are going to see the clock go away, replaced with "Data Ready" lines, which will also help heavily in determining the bottlenecks in a given system (Hint: it's the system that is taking the longest to put up the "Data Ready" flag).

I also think that optics will be the way of the future. Quantum will be like Mechanical Television: cute idea, but impractical for mass production.

Optics. Think of it this way: Imagine a bus that can address individual I/O cards with full duplex, simply by using different colors for the lasers. Motherboards are going to get a lot smaller.

That's my opinion, anyway.

Joe

---
Q:Why couldn't Helen Keller drive?

A:Because she was a woman.

Re:Wave of the future... (3, Insightful)

daVinci1980 (73174) | more than 8 years ago | (#15297328)

Having a Data Ready flag doesn't solve the problem that a clock solves. How do you know when you can read your 'Data Ready' flag? How do you know that your current reading of 'Data Ready' is really new data, and not the same data you haven't picked up yet?

A clock is a synchronization scheme, and it solves a very low-level issue: How do I synchronize my reads and writes on a physical level?

Many people have tried to create systems that don't have clocks. Without exception, they have all failed or have been unscalable.
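
(For illustration: clockless designs such as the AMULET processors mentioned later in this thread replace the global clock with a request/acknowledge handshake, so a "data ready" signal is only trusted once the previous transfer has been acknowledged. A rough software analogy of that two-phase handshake, sketched with C++ threads rather than real logic gates:)

    #include <atomic>
    #include <thread>

    std::atomic<bool> req{false};   // producer toggles this when new data is valid
    std::atomic<bool> ack{false};   // consumer toggles this when the data has been taken
    int shared_data = 0;

    void producer() {
        for (int v = 1; v <= 3; ++v) {
            while (req.load() != ack.load()) { }   // previous item not yet consumed
            shared_data = v;                       // data lines settle first...
            req.store(!req.load());                // ...then "data ready" is asserted
        }
    }

    void consumer() {
        for (int i = 0; i < 3; ++i) {
            while (req.load() == ack.load()) { }   // nothing new yet
            int v = shared_data;  (void)v;         // safe: producer is blocked until ack
            ack.store(!ack.load());                // acknowledge, releasing the producer
        }
    }

    int main() {
        std::thread p(producer), c(consumer);
        p.join();
        c.join();
    }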

Re:Wave of the future... (1)

misleb (129952) | more than 8 years ago | (#15297564)

Perhaps if the "Data Ready" acted as an interrupt rather than a passive line that is polled.

-matthew

Re:Wave of the future... (0)

Anonymous Coward | more than 8 years ago | (#15297990)

You're a software developer aren't you?

Re:Wave of the future... (1)

Khyber (864651) | more than 8 years ago | (#15297719)

Well, to me, the answer seems simple (downmod at will) but if you're using lasers of different colors, can you not time them by figuring out the differences of their wavelengths? (Hertz/second for a wave?) I'd think this may perhaps be very simple to accomplish if we knew/could control perfectly the wavelengths of light that we use, then create programs that work in this manner.

Re:Wave of the future... (0)

Anonymous Coward | more than 8 years ago | (#15297792)

I'm glad that your post seems simple to someone...

Quantum Computers (1)

compact_support (968176) | more than 8 years ago | (#15297330)

I'm not sure what you've heard quantum computers promise to do. I agree with you that optical computers, or rather, some form of classical computer will be the predominant type.

As far as we know, quantum computers can do (a) few things better than a classical computer:

Factoring numbers (this is why they'll kill public key encryption as it stands now)

Computing discrete logarithms (another function of cryptographic interest)

Search unstructured data (a quantum computer can do this in O(sqrt N) time as opposed to O(N) for a classical computer to run through every element)

Simulate quantum systems (fairly obviously)

Other than that, there's no asymptotic advantage to using quantum computers. For doing your taxes or even hosting the Transgalactic Wikipedia, classical computers are here to stay.

compact_support
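
(To put the search speedup in concrete terms: finding one marked item among N = 1,000,000 unstructured entries takes about N/2 = 500,000 probes on average for a classical scan, while Grover's algorithm needs on the order of (pi/4)*sqrt(N), roughly 785 quantum queries. A real gain, but polynomial rather than exponential, which is why the list above is so short.)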

Re:Quantum Computers (1)

fabs64 (657132) | more than 8 years ago | (#15297547)

Computing least-cost algorithms in N time?

There's a fair few useful algorithms that aren't in P but are in NP.

Re:Wave of the future... (4, Interesting)

AuMatar (183847) | more than 8 years ago | (#15297342)

The purpose of a clock- ease of development. With a clock, you can advance new input into the next pipe stage at known intervals, allowing each stage to finish completely. Without a clock, you need to make sure that no circuit feeds its data into the next part too soon. Doing so would end up causing glitches. For example, if the wire that says to write RAM comes in at time t=0, but the new address comes in at time t=1, you could corrupt whatever address was on the line previously. With a clock, all signals update at the same time.

It's possible to make simple circuits go the clockless route. Complex circuits are nearly impossible. There's no way a P4 could be made clockless; the complexity of such an undertaking is mind boggling. Even testing it would be nearly impossible.

The problem with data ready flags is the same as with the rest of the circuit- how do you prevent glitches without a latching mechanism?

And this isn't about modularizing hardware. It's about adding extra processing power with specific hardware optimizations for physics computation. Whether it's a good idea or not depends on how much we need the extra power. I'm not about to run out and buy one, though.

Actually, in desktops today the trend is to remove modularization. AMD got a nice speed boost by moving the memory controller into the Athlon (at the cost of requiring a new chip design for new memory types). I'd expect to see more of that in the future: speed boosts are drying up, and moving things like memory and bus controllers onto the CPU is low-hanging fruit.

Re:Wave of the future... (-1, Redundant)

JoeLinux (20366) | more than 8 years ago | (#15297561)

Re:Wave of the future... (1)

AuMatar (183847) | more than 8 years ago | (#15297681)

An ARM has nowhere near the performance or complexity of a Pentium chip. They're nice for what they do, but comparing the two of them is laughable; we're talking at least an order of magnitude difference in processing power, and probably two in complexity. Beyond that I won't comment, because I'm unable to find decent third-party commentary on the chip, but I would be highly surprised if the full thing was truly clockless. At the very least the interface to memory is synchronous; their own webpage calls it a "standard synchronous interface". My bet is that parts of the chip are asynchronous and capable of being shut off, and parts aren't. But I haven't examined one to say for certain.

Re:Wave of the future... (2, Informative)

Short Circuit (52384) | more than 8 years ago | (#15298010)

Multi-stage processors have latching mechanisms between stages that release on a clock pulse. I think what he meant by "data ready flags" was to allow the latches between the stages to unlock automatically, instead of being dependent on a chip-wide clock signal.

But then, I'm only working on a Bachelor's in Computer Information Systems...what would I know about signalling in a complex silicon device?

Re:Wave of the future... (2, Interesting)

asliarun (636603) | more than 8 years ago | (#15297347)

I'm not so sure about that. Down the years, the trend has been that companies release specialized chipsets or mini-CPUs that can take over some part of the CPU's workload. While this has worked in the short run (think math coprocessor), the CPU has become sufficiently powerful over time to negate the advantage. Look at it this way: If Intel/AMD releases a quad-core or octa-core CPU, in which each core is more powerful than the fastest single-core today, any of those cores could take up the physics processing workload. Best of all, this can happen without sacrificing performance on the other threads that're running. Furthermore, if Intel/AMD realizes that physics processing is becoming increasingly important, they will add special processing units for it in future CPUs and come out with an additional instruction set, just like they've done for MMX/SSE. This would almost totally negate the value of having these specialized co-processors, albeit only in the long run. This will work as a quick fix for an immediate problem, though.

To cut a long story short, I think that these specialized chips solve today's problem, not tomorrow's. I predict that this company will get bought out by either nVidia/ATI or Intel/AMD.
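
(For context, the MMX/SSE route described above is already how physics code gets fed to the CPU's vector units today. A minimal sketch of the kind of 4-wide packed-float update involved, assuming any SSE-capable x86 compiler:)

    #include <xmmintrin.h>   // SSE intrinsics

    // pos[i] += vel[i] * dt, four floats at a time (n assumed to be a multiple of 4 here)
    void integrate_sse(float* pos, const float* vel, float dt, int n) {
        __m128 vdt = _mm_set1_ps(dt);
        for (int i = 0; i < n; i += 4) {
            __m128 p = _mm_loadu_ps(pos + i);
            __m128 v = _mm_loadu_ps(vel + i);
            p = _mm_add_ps(p, _mm_mul_ps(v, vdt));   // p += v * dt
            _mm_storeu_ps(pos + i, p);
        }
    }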

Re:Wave of the future... (1)

Watson Ladd (955755) | more than 8 years ago | (#15297476)

But physics developers don't take advantage of the many libraries, like BLAS, available to speed up vector computation. Instead the compiler is left to do the dirty work. Radical thought: let's use code that takes advantage of machine- and processor-specific flags to optimize matrix and vector operations, and use that to do matrix and vector stuff. This card isn't open, but I suspect it is just an optimized sparse-matrix-solving chip.
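
(What "just use BLAS" looks like in practice, assuming a CBLAS implementation such as the reference library or ATLAS is installed; the axpy update below is the bread and butter of most integrators:)

    #include <cblas.h>

    // y = a*x + y over flat float arrays; vendor BLAS builds are typically
    // hand-tuned for the host CPU's SIMD units.
    void scaled_add(float a, const float* x, float* y, int n) {
        cblas_saxpy(n, a, x, 1, y, 1);
    }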

Re:Wave of the future... (4, Insightful)

complete loony (663508) | more than 8 years ago | (#15297655)

Of course this argument falls down with graphics processing. While it is true that today's CPUs could probably process the graphics engine from games 5-7 years old, the bandwidth and processing requirements of current-generation games are very different from the types of problems CPUs normally handle. It's a type of problem that generic CPUs can't keep up with. Physics may be a similar type of problem, one that can be performed far more efficiently than a current CPU can manage. That said, there has to be a large market for such a device to fund the R&D for future revisions, or the generic CPU will catch up again.

With graphics, small visual differences between hardware implementations are not a big problem. Physics processing needs a standard interface, and precise specs on what the output should be. If there is only going to be one vendor, and one proprietary interface, this market will fail.

Re:Wave of the future... (1)

drinkypoo (153816) | more than 8 years ago | (#15297388)

I/O is one of the areas that could really use some help. I envision a contactless bus where expansion devices are powered by induction; high-power devices could have good ol' electrical contacts. Just as PCI Express features 1-n lanes support, my fantasy bus uses multiple fiberoptic connections, with some slots supporting more than others for additional bandwidth.

The only thing on the motherboard would be the bus arbitrator. Everything else would go into a module. Modules would also be able to not only have multiple lanes per slot, but also take up multiple slots, and use aggregate bandwidth for communications, so if a module needs two lanes but needs to physically be two slots in size, it can get those two lanes from single-lane slots and not need to take up, say, a one-lane slot AND a multi-lane slot in order to get the needed number of lanes.

Ideally, all of our interfaces will move to fiber as well, with conductors only for power, like the supposedly planned future upgrade to 3.2Gbps firewire.

Asynchronous Logic and CPUs (0)

Anonymous Coward | more than 8 years ago | (#15297454)

If that were the case, what you'd get is asynchronous logic. This has been done before; there have been many asynchronous digital computers. There was even an ARM-compatible asynchronous CPU designed. Look up the AMULET project.

What I really think you are talking about is a system that on a macroscopic level is asynchronous, but on a microscopic level is synchronous (e.g. each chip has its own internal clock, or each part of the chip has its own clock; however, there is no more "system" clock).

See http://www.cs.man.ac.uk/async/background/return_async.html [man.ac.uk]

Re:Wave of the future... (1)

dilvish_the_damned (167205) | more than 8 years ago | (#15297507)

While studying for my EE, I often wondered what the purpose of having a clock was
No clock, no data. Did you pass?

Re:Wave of the future... (1)

HoboMaster (639861) | more than 8 years ago | (#15297533)

"Q:Why couldn't Helen Keller drive?

A:Because she was a woman."

Q: Why can't Ray Charles drive?

A: Because he's blind. Racist.

Re:Wave of the future... (0)

Anonymous Coward | more than 8 years ago | (#15297613)

It's "Q: Why can't Ray Charles read?" you dumb cracker.

Re:Wave of the future... (1)

TerranFury (726743) | more than 8 years ago | (#15297686)

A lot of people say that the future is optical, but I'm not sure the physics allows it. Optical wavelengths are much, much longer than the wavelength of the electron until you get up to X-rays. That means that feature sizes on some hypothetical optical chip would necessarily be much larger than feature sizes on an electronic chip. Fiber might be appropriate, as you said, for tying chips together on the motherboard, but that's about it.

Re:Wave of the future... (1)

smackt4rd (950154) | more than 8 years ago | (#15297825)

I think we are going to see the clock go away, replaced with "Data Ready" lines, which will also help heavily in determining the bottlenecks in a given system (Hint: it's the system that is taking the longest to put up the "Data Ready" flag).
Exactly :) I took a class on asynchronous logic design this semester. Pretty interesting stuff. (The prof is quoted in this article :) It's still not widely accepted yet, but it does have a lot of advantages over synchronous circuits. http://www.embedded.com/showArticle.jhtml?articleID=181500998 [embedded.com]

Anandtech too ... (3, Informative)

Anonymous Coward | more than 8 years ago | (#15297239)

The link: http://www.anandtech.com/video/showdoc.aspx?i=2751 [anandtech.com]

Short summary: Great for synthetic benchmarks, probably not real-world ready.

Re:Anandtech too ... (1)

arthurh3535 (447288) | more than 8 years ago | (#15297431)

Actually, City of Villains (with the currently-being-tested i7) fully supports the Ageia PhysX processor (hardware and software).

Very cool things with cars smashing to pieces in certain missions.

Looks like PPUs don't help webservers (0)

Anonymous Coward | more than 8 years ago | (#15297245)

Slashdotted instantly. Perhaps PPUs will be good for rendering simulations of the Slashdot Effect.

s/baited breath/bated breath/g (0)

Anonymous Coward | more than 8 years ago | (#15297264)

... honestly

some breath smells like bait (-1, Flamebait)

Anonymous Coward | more than 8 years ago | (#15297453)

so if your comment is waiting for a flame, is it flame bait or flame bate ;^)

/.'ed (0, Redundant)

njvic (614279) | more than 8 years ago | (#15297275)

Someone needs to come up with a chip to offload slashdot traffic!

Will it fly? (1)

IlliniECE (970260) | more than 8 years ago | (#15297288)

It's a neat idea, but at what they're charging, will many people add yet another card to their motherboard? (Heck, my PCI-e slots are already jammed full.)

Slashdotted, but I got the first page (4, Informative)

mobby_6kl (668092) | more than 8 years ago | (#15297290)

Without question, one of the hottest topics throughout the industry this year has been the advent of the discrete physics processor or "PPU" (Physics Processing Unit). Developed by a new startup company called Ageia, this new physics processor gives game developers the opportunity to create entirely new game-play characteristics that were not considered possible using standard hardware. Since its original inception, both CPU and GPU vendors have come to the spotlight to showcase the ability to process physics on their respective hardware. However, the Ageia PhysX PPU is the only viable solution which is readily available to consumers.

For the foreseeable future, the only vendors which will be manufacturing and selling physics processors based on the Ageia PhysX PPU are ASUS and BFG. With ASUS primarily focusing on the OEM market, BFG will enjoy a monopoly of sorts within the retail channel, as they will comprise the vast majority of all available cards on store shelves. Today, we will be running a retail sample of BFG's first ever Physics processor through its paces. Judging from the packaging alone, you can tell that this box contains something out of the ordinary. Housed in an unusual triangular box with a flip-down front panel, consumers can glimpse the card's heatsink assembly through a clear plastic window.

(Image: BFG Tech PhysX Card and Bundle)

Flipping the box, consumers are presented with a quick listing of features complete with summaries and a small screen-shot. Most importantly, the package also lists the small handful of games which actually support the PPU hardware. This short list consists of City of Villains, Ghost Recon Advanced Warfighter, and Bet on Soldier: Blood Sport.

Upon opening the packaging, we are presented with the standard fare of accessories. Beyond the actual card itself, we find a power cable splitter, a driver CD, a demo CD, and a quick install guide. Somewhat surprisingly, we also find a neon flyer warning of a driver issue with Ghost Recon Advanced Warfighter that instructs users to download the latest driver from Ageia to avoid the problem. This is a bit disheartening as there are only three games which currently support this hardware. With this in mind, it is hard to not feel as though the hardware is being rushed to market a bit sooner than it probably should have been.

Directing our attention to the card itself, we find a rather unassuming blue PCB with a somewhat standard aluminum active heatsink assembly. Amidst the collection of power circuitry, we also find a 4-pin molex power connector to feed the card, as a standard PCI slot does not provide an adequate power source for the processor. At first glance, the card looks remarkably similar to a mainstream graphics card. It's not until you see the bare back-plate with no connectivity options that you realize this is not a GeForce 6600 or similar product.

Thankfully, the BFG PhysX card does not incorporate yet another massive dual-slot heatsink assembly as so many new pieces of high-end hardware do these days. Rather, we find a small single-slot active heatsink that manages to effectively cool the PPU while keeping noise at a minimum. Removing the heatsink, we were pleased to find that BFG has done an excellent job of applying the proper amount of thermal paste and that the base of the heatsink was flat with no dead spots. After powering the system, we see that BFG has dressed the card up with three blue LED's to appease those with case windows.

With the heatsink removed, we have our first opportunity to glimpse the Ageia PhysX PPU in all its glory. Manufactured on a 0.13u process at TSMC, the die is comprised of 125 million transistors. Overall, the size of the die is slightly larger than the memory modules which surround it. Looking closely at the board, we see that the 128MB of memory consists of Samsung K4J55323QF-GC20 GDDR3 SDRAM chips, which are rated for a maximum frequency of 500MHz. Unfortunately, neither BFG nor Ageia has disclosed what frequency the PPU memory and core operate at, so we are unsure how close to the theoretical limit these components are running. We have been hearing rumors of PhysX cards running memory at speeds over 700MHz, though these are obviously different cards than the BFG model seen here, as the chips would be running in excess of 200MHz over their maximum rated operating frequency.

Re:Slashdotted, but I got the first page (1)

Novotny (718987) | more than 8 years ago | (#15297370)

This has been all over the web in recent days - a lot of the reviews seem to be saying the card is under-utilised at the moment. Whether that changes or not depends on whether the big system builders start to deploy it as standard, and/or the next-gen console titles require it to port adequately. Either way, this could be an albatross - we really need to see what nvidia/ati can do...

I'll wait for 64-bit TYVM... (2, Informative)

Khyber (864651) | more than 8 years ago | (#15297746)

http://www.amd.com/us-en/assets/content_type/DownloadableAssets/So32v64-56k.wmv [amd.com]

Nice comparison concerning current 32-bit applications/limitations versus 64-bit. If this video is TRUE, then I won't bother with a PPU - my Athlon 64 3000+ may already be able to handle those extra physics calculations, while any WELL-PROGRAMMED game will use any extra resources I have available for extra object/texture/physics rendering.

Sorry, IMHO, PPU is at a loss. Mod down at will.

Skeptical (4, Interesting)

HunterZ (20035) | more than 8 years ago | (#15297296)

From what I was able to read of the article before it got slashdotted, it sounds like games that can take advantage of it require installation of the Ageia drivers whether you have the card or not. This leads me to believe that without the card installed, those games will use a software physics engine written by Ageia, which is likely to be unoptimized in an attempt to encourage users to buy the accelerator card.

Also, it's likely to use a proprietary API (remember Glide? EAX?) that will make it difficult for competitors to create a wider market for this type of product. I really can't see myself investing in something that has limited support and is likely to be replaced by something designed around a non-proprietary API in the case that it does catch on.
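
(The detect-and-fall-back arrangement described above usually looks something like the sketch below. The type and function names here are invented for illustration; they are not Ageia's actual SDK API.)

    // Hypothetical engine-side dispatch; names are illustrative only.
    struct PhysicsBackend {
        virtual void step(float dt) = 0;
        virtual ~PhysicsBackend() {}
    };

    struct SoftwareBackend : PhysicsBackend {
        void step(float dt) override { /* CPU solver runs here */ (void)dt; }
    };

    struct PpuBackend : PhysicsBackend {
        void step(float dt) override { /* hand the frame's work to the card */ (void)dt; }
    };

    // Stub; a real check would query the vendor driver for the hardware.
    bool ppu_present() { return false; }

    PhysicsBackend* make_backend() {
        if (ppu_present()) return new PpuBackend();   // hardware path
        return new SoftwareBackend();                 // same interface, slower CPU fallback
    }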

Re:Skeptical (1)

JFMulder (59706) | more than 8 years ago | (#15297394)

Ever heard of OpenGL? If you don't have a card, the software driver will do the same thing, but slower. Same deal over here. I doubt it will be unoptimized anyway; developers wouldn't put up with that.

Re:Skeptical (1)

HunterZ (20035) | more than 8 years ago | (#15297470)

Ever heard of OpenGL? If you don't have a card, the software driver will do the same thing, but slower. Same deal over here. I doubt it will be unoptimized anyway; developers wouldn't put up with that.

Yes, except that OpenGL was and is an open standard. It's not controlled by one company who is trying to push a product that accelerates software which uses their API.

Re:Skeptical (1)

Lord Kano (13027) | more than 8 years ago | (#15297417)

Also, it's likely to use a proprietary API (remember Glide? EAX?) that will make it difficult for competitors to create a wider market for this type of product.

I also remember that in its day Glide was faster and resulted in higher quality 3d than OpenGL or DirectX.

LK

Re:Skeptical (5, Insightful)

HunterZ (20035) | more than 8 years ago | (#15297542)

I also remember that in its day Glide was faster and resulted in higher quality 3d than OpenGL or DirectX.

For a while, since 3dfx was the only one innovating for a while. Once they got hold of the market, nobody else could because the games only supported Glide, and nobody else was able to make Glide-supported hardware due to it being a proprietary API.

Then nVidia came along with superior cards that only supported Direct3D and OpenGL because Glide was 3dfx proprietary. Game developers were forced to switch to D3D/OpenGL to support the new wider array of hardware. Since 3dfx cards were overly-optimized for Glide, this resulted in games that ran crappy on 3dfx hardware but great on nVidia. The rest is history.

EAX is a similar story. Creative owns it, but what has happened is that many game developers don't bother to take advantage of it, instead relying on DirectSound3D or OpenAL as the lowest common denominator. The widespread use of SDKs such as Miles Sound System does also help to allow transparent use of various sound API features, though, so mileage varies. Personally, I've been without Creative products for years now and haven't missed them one bit. I'm currently waiting for the next generation of DDL/DTS Connect sound cards to come out, and then I'll give those a shot.

The same thing is likely to happen here; competitors will make their own products, but because they won't be able to use the PhysX engine they will make their own. It will be an open API, because they'll have to band together to get game developers to support their cards. Ageia will be forced to add driver support for the standard API, but it won't perform as well on their cards. If they're smart, they'll either open the API early on, or else release new hardware built around the open API. This is all assuming the PPUs even catch on, of course.

The problem with the PC gaming hardware market is that when there's only one company making a certain type of product, they tend to stop innovating. Then, when someone else develops a competing product they try to use marketing to stay ahead instead of coming up with more competitive products. Sometimes gamers see through the marketing (3dfx) and sometimes they have a harder time doing so (EAX). It will be interesting to see how it turns out this time.

Re:Skeptical (1)

TimboJones (192691) | more than 8 years ago | (#15297606)

The problem with the PC gaming hardware market is that when there's only one company making a certain type of product, they tend to stop innovating.
s/PC gaming hardware/any/

Of course, as you say, if the market is lucrative enough, competitors will come in and compete with superior products using non-proprietary standards.

SSDD

glide history (1, Interesting)

Anonymous Coward | more than 8 years ago | (#15297816)

Your history of Glide is a little backwards. OpenGL predates Glide and was a clean rewrite of SGI's original graphics API with input from other graphics vendors at the time. The current graphics pipeline was a solved problem by the early '90s. The primary problem that Glide solved was how to make a $30000 workstation into a $300 graphics card. The answer was to throw out most of the pipeline and make a passthrough card that didn't even do video. Glide itself wasn't anything particularly fancy and mostly consisted of functions to send untransformed polygons to the hardware. It took a few weeks for there to be a Glide->OpenGL compatibility layer.

3DFX simply failed to keep up with NVIDIA. NVIDIA did an incredible job integrating both video cards and graphics engines together in the RIVA128 chipset, as well as adding a basic lighting and transform pipeline to the hardware. They also did a much better job supporting the standard software APIs of OpenGL and DirectX. They still do a much better job with drivers than ATI.

    Michael

Re:Skeptical (0)

Anonymous Coward | more than 8 years ago | (#15297659)

I also remember that in its day Glide was faster and resulted in higher quality 3d than OpenGL or DirectX.

Solely because 3DFX wanted to force people to develop to their proprietary API, in order to lock out the competition, and therefore they crippled their DirectX and OpenGL drivers.

This was bad for everyone apart from 3DFX - both consumers and developers suffered. Thank God Microsoft came along and used their monopoly power to commoditise the 3D card market by forcing developers to code to an API that was available for 3DFX's competitors to target. NVidia released a card with DirectX drivers that made 3DFX's proprietary best look like shit, and the rest is history.

Re:Skeptical (1)

david.given (6740) | more than 8 years ago | (#15297448)

This leads me to believe that without the card installed, those games will use a software physics engine written by Ageia, which is likely to be unoptimized in an attempt to encourage users to buy the accelerator card.

I find myself a bit puzzled by what this thing's actually supposed to do for me. Given that there are currently no applications that require it (because, since it's not actually shipping yet, requiring it would be the kiss of death), supporting the PhysX can make no difference to the actual gameplay --- because any games need to be able to run on machines without it. This means all it'll be good for is eye candy. Is it really worth spending money on the PhysX so you can get slightly prettier explosions, when instead you can spend the same amount of money on a better GPU or CPU so you'll get prettier everything?

I'd also rather like to know what it actually is. There's practically no technical details out there. Obviously it's basically a DSP-ish processor on a card, with its own RAM, etc, but is it their own DSP or an off-the-shelf core?

Because if it is a reasonably normal vector processor, then it'll be a shit hot one, and I'd love to see what else people can do with it. Screw games --- can it encode video? Process audio? Could you use it as the decoder for a software radio? What about speeding up statistical analysis for, say, really high-grade text searching or spam filtering? Is it suitable for simulating really big neural networks at a reasonable speed? What interesting applications could you come up with for a really high end vector processor attached to a high end PC, that people haven't already come up with?

Re:Skeptical (1)

HunterZ (20035) | more than 8 years ago | (#15297557)

I'm not sure what indicates that it's a DSP. I'm not much of a hardware guy, but aren't DSPs intended to operate on a stream of data? I don't think that's what's going on here.

It looks like the way they're setting it up is that they're building a physics engine that can offload some of its processing to this card. Apparently this is reflected in these initial games in the form of additional dynamic objects in the game environments.

What this card does (0)

Anonymous Coward | more than 8 years ago | (#15297614)

This card isn't exactly a DSP; it's not so much designed to process streaming signals. It's more like a vector processor optimized for physics calculations such as vector scaling and angular velocity/acceleration calculations: you know, general calculations you would do in your low-level college physics or mechanical engineering classes.

The processor can do very fast calculations on large vectors better (faster, more throughput, I don't know, I didn't design it) than a CPU, which is a GENERAL purpose processor. A CPU can do nearly anything, at the cost of efficiency. That is why graphics cards exist. The CPU can do the DirectX calculations, but a video card does them much faster by having a specialized architecture (vertex shaders, pixel shaders, fixed function shaders, etc).

What Ageia did is make a software physics engine that can run on the CPU, but can do the same thing much faster and on a greater scale on their specialized card. So instead of calculating 20 fragments from a grenade it can calculate 100 (not exact numbers, just an example). In theory it can make games much more realistic. Imagine a rock slide with hundreds of rocks instead of just 3 small boulders falling at you, or a waterfall instead of a trickle of a few drops. You get the idea.
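
(A rough picture of the math being offloaded: the per-object update is trivial; what eats CPU time is doing it for thousands of fragments every frame. A minimal CPU-side sketch, not Ageia's code, of the kind of independent, data-parallel arithmetic a PPU is built to batch:)

    struct Particle { float px, py, pz, vx, vy, vz; };

    // Semi-implicit Euler step over every debris fragment.
    void step_particles(Particle* p, int count, float dt) {
        const float g = -9.81f;            // gravity, m/s^2
        for (int i = 0; i < count; ++i) {
            p[i].vy += g * dt;             // integrate velocity
            p[i].px += p[i].vx * dt;       // integrate position
            p[i].py += p[i].vy * dt;
            p[i].pz += p[i].vz * dt;
        }
    }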

Re:Skeptical (1)

misleb (129952) | more than 8 years ago | (#15297715)

I find myself a bit puzzled by what this thing's actually supposed to do for me. Given that there are currently no applications that require it (because, since it's not actually shipping yet, requiring it would be the kiss of death), supporting the PhysX can make no difference to the actual gameplay --- because any games need to be able to run on machines without it. This means all it'll be good for is eye candy. Is it really worth spending money on the PhysX so you can get slightly prettier explosions, when instead you can spend the same amount of money on a better GPU or CPU so you'll get prettier everything?

Well, if you've already maxed out your CPU (where upgrading would mean a new motherboard and RAM) and you already have a good GPU, why not consider getting a physics processor to give new games an extra kick? I know I'd consider it with my current system. I just got Oblivion and I am running with a good GPU but the bare minimum CPU; I think offloading the physics from the CPU would speed the game up nicely without having to gut my machine.

I'm not exactly a hardcore gamer, but I know those guys will do anything to squeeze some extra performance out of games. Sometimes it can mean the difference between kicking ass and becoming brick mortar.

-matthew

Re:Skeptical (1)

misleb (129952) | more than 8 years ago | (#15297624)

Certainly there would eventually need to be some kind of standard DirectX abstraction for physics. I would not bet on vendors agreeing on one by themselves. Anyone know if Microsoft has any plans for a physics API?

-matthew

Re:Skeptical (2, Insightful)

aarku (151823) | more than 8 years ago | (#15297726)

It absolutely is optimized in software. That claim is ridiculous. My own little informal tests have put it high and above Newton and ODE for a lot of cases, and who knows about Havok. (Too damn expensive to try.)

I think most people don't realize it's a great physics engine by itself that has the added bonus of supporting dedicated hardware. Plus, a lot of the larger developers presumably have source access, so if it didn't look optimized, or if there were big /* LOL THIS'll MAKE EM BUY A CARD */ comments... well, Unreal 3 and everyone else [unity3d.com] wouldn't be using it, would they?

Maybe the world isn't ready (2, Insightful)

Eideewt (603267) | more than 8 years ago | (#15297306)

I think that while this card can do some amazing physics stuff, we aren't ready to make use of that capability for anything more than a little eye candy. Not in networked games, at least. Trying to keep everyone's world in sync is hard enough as it is, without adding even more objects that need to appear in the same place for everyone.

Re:Maybe the world isn't ready (0)

Anonymous Coward | more than 8 years ago | (#15297609)

Keeping networked players in sync wouldn't be any more difficult with this. The only info that ever absolutely needs to be transmitted is player input, since at its most basic every object's velocity is just scripted position + player input.
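
(What the parent is describing is deterministic lockstep: if every machine runs an identical, deterministic simulation and only player inputs cross the network, the physics stays in sync without transmitting object state. A bare-bones sketch with hypothetical hook functions, not taken from any shipping engine:)

    #include <vector>

    struct Input { int player; float move_x, move_y; bool fire; };
    struct World { /* all object state lives here, never on the wire */ };

    void apply_input(World&, const Input&) { /* move a player, spawn a grenade, ... */ }
    void step_physics(World&, float dt)    { /* deterministic solver */ (void)dt; }

    // Every peer runs this with the same inputs, in the same order, with the same dt.
    void lockstep_tick(World& w, const std::vector<Input>& inputs, float dt) {
        for (const Input& in : inputs) apply_input(w, in);
        step_physics(w, dt);
    }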

Slashdotted in under a minute... (0, Offtopic)

ZSpade (812879) | more than 8 years ago | (#15297317)

The one time I go to actually read the artical and it's been Slashdotted.

Here is an alternative artical for anyone interested:

AGEIA PhysX tested with GRAW and Cell Factor [pcper.com]

Re:Slashdotted in under a minute... (0)

Anonymous Coward | more than 8 years ago | (#15297351)

If you wanna do something great, first learn how to spell "article"

Ghost Recon video (5, Informative)

jmichaelg (148257) | more than 8 years ago | (#15297356)

Anandtech posted these video sequences [anandtech.com] to show what you see with and without the card.

The Anandtech article [anandtech.com] states that the physics hardware slows down the framerates, which Ageia can't possibly be happy about.

it's BATED breath, dammit (5, Insightful)

Rimbo (139781) | more than 8 years ago | (#15297373)

short for "abated"

Re:it's BATED breath, dammit (1)

The Ape With No Name (213531) | more than 8 years ago | (#15297427)

Thank you. I can't stand "baited breath." It makes me think people have a nightcrawler in their mouths.

Wait for the response with a wikipedia link.....

Re:it's BATED breath, dammit (4, Funny)

drewmca (611245) | more than 8 years ago | (#15297525)

I think your point is mute.

I wish I could mod this up 100 points. (4, Insightful)

joe_n_bloe (244407) | more than 8 years ago | (#15297820)

Some day Slashdot will allow people to edit their posts for grammar and spelling, or perhaps there will be a Slashdot editor who knows grammar and spelling.

Of course... (1)

sinfree (859988) | more than 8 years ago | (#15297384)

It isn't worth much until games actually start using it.

no titles yet (2, Interesting)

jigjigga (903943) | more than 8 years ago | (#15297418)

I've been following them for a long time; their software demos blew my mind a few years ago (the one with the towers made of bricks that you could destroy, oh so fun). We should wait for real games to make use of the physics. Ghost Recon uses it as a gimmick. The tech demo game listed in one of the articles is a real showing of what the card is capable of. When the game engines catch up and use it as an integral part rather than a gimmick, it will usher in a new era of gaming. It really will; look at what happened with hardware 3D.

IT'S BOTH! (0)

null etc. (524767) | more than 8 years ago | (#15297432)

But is this technology evolutionary or revolutionary?

Wow, I love how everyone sounds so intellectually sophisticated and interesting when they use such pithy phrases.

Where's the competition? (3, Insightful)

cubeplayr (966308) | more than 8 years ago | (#15297444)

Is there any competition for Ageia? Reviews are all fine and dandy, but product comparisons are where the decisions should be made. It should be based on which PPU can perform a given task faster/better. Competition would also drive each competitor to better their own product to beat the other. However, they shouldn't be mutually exclusive (i.e. if you use Product A, then you can't use a program with only Product B support).

I wonder how long it will be before there is a mainstream demand for a separate physics unit (probably as soon as games require them). It sounds like a great idea to take some of the load off the CPU. Does this mean that now game performance will be more directly linked to the speed and power of the GPU and PPU, and that the CPU will be more of an I/O director and less of a number cruncher?

I've seen numerous posts of people saying that they do not have any available PCI slots. Will the introduction of a new type of card lead to larger motherboards with more slots or might it lead to a small graphics card that does not monopolize the PCI space? Also, there is the concern of adding another heat source to the mix.

"Get you facts first - then you can distort them as you please." -Mark Twain

Re:Where's the competition? (1)

IamTheRealMike (537420) | more than 8 years ago | (#15297700)

The competition is engines like Havok FX which run on a graphics card and provide "effects physics". This requires very modern graphics hardware but not a special card. Presumably the downside is that the extra load on the GPU reduces framerates or graphics quality in some other way, but I don't know enough to say. It'll be interesting to see how it works out at any rate.

PCI Express (1)

ThurstonMoore (605470) | more than 8 years ago | (#15297445)

Why are these cards not PCI Express? Most anyone who would buy one of these would have a motherboard with PCI Express slots.

Re:PCI Express (0)

Anonymous Coward | more than 8 years ago | (#15297860)

Not all mobos have multiple PCI-E slots. Besides I think the standard PCI bus has enough bandwidth to cope with the PPU demands, for now.

Data parallel? (1)

Analog Squirrel (547794) | more than 8 years ago | (#15297458)

I wonder what this will mean to the people who are currently using GPUs for data parallel-ish computation? This is sort of a specialization of the GPGPU idea... I wonder if it would work well for doing "real" physics computation?

I don't see this as long lived (5, Interesting)

throx (42621) | more than 8 years ago | (#15297482)

I really don't see a custom "Physics Processor" being a long-lived add-on for the PC platform. It's essentially just another floating point SIMD processor with specialized drivers for game engine physics. With multicore+hyperthreaded CPUs coming out very soon, the physics engines can be offloaded to extra processing units in your system rather than having to fork out money for a card that can only be used for a special purpose.

In addition, there's already a hideously powerful SIMD engine in most gaming systems, loosely called "the video card". With the advent of DirectX 10 hardware, which lets the GPU write its intermediate calculations back to main memory rather than forcing it all out to the frame buffer, a whole bunch of physics processing can suddenly be done through the GPU.

Lastly, the API to talk to these cards is single-vendor and proprietary. That's never been a recipe for longevity (unless you're Microsoft), so it won't really take off until DirectX 11 or later integrates a DirectPhysics layer to allow multiple hardware vendors to compete without game devs having to write radically different code.

So, between multicore/hyperthreaded CPUs, DirectX10-or-better GPUs, and a proprietary API to the card... cute hardware, but not a long-term solution.
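
(On the multicore point: offloading physics to a spare core needs no special hardware or API at all, just a worker thread. A minimal sketch; a real engine would keep a persistent worker rather than spawning one per frame:)

    #include <thread>

    void simulate_physics(float dt) { /* collision + integration for this frame */ (void)dt; }
    void render_frame()             { /* draw using the previous frame's results */ }

    int main() {
        for (int frame = 0; frame < 1000; ++frame) {
            std::thread physics(simulate_physics, 1.0f / 60.0f);  // runs on another core
            render_frame();                                       // overlaps with the physics work
            physics.join();                                       // sync before the next frame
        }
    }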

Re:I don't see this as long lived (1)

espinafre (973274) | more than 8 years ago | (#15297586)

a DirectPhysics layer

Quick, patent/copyright/trademark that!

Re:I don't see this as long lived (0)

Anonymous Coward | more than 8 years ago | (#15297696)

B8 00 4C CD 21

CLV (had to look that one up)
BRK
JMP $21CD

WTF?

Re:I don't see this as long lived (1)

orkysoft (93727) | more than 8 years ago | (#15297731)

mov ax, 4c00h
int 21h

Now you can be afraid too.

Re:I don't see this as long lived (0)

Anonymous Coward | more than 8 years ago | (#15297797)

Hmm... you could have said all the same things about 3DFX when it came out 10 years ago (or so), and now look where we are.

The 3DFX had a custom API, was a specialised processor, not many games supported it in the beginning, and there were faster CPUs 'just around the corner'.

The PhysX card is just the beginning.

Re:I don't see this as long lived (1)

colganc (581174) | more than 8 years ago | (#15298127)

3Dfx had themselves a hot game called Quake. The difference between Quake and GL Quake was/is night and day: 4x the number of pixels being displayed, the overall picture being rendered was of higher quality, and the entire game ran faster. So far the closest thing to GL Quake for Ageia is Cell Factor. The big difference between Cell Factor and GL Quake is that Cell Factor has a chicken-and-egg problem: the PPU is needed to run Cell Factor, AFAIK. Quake ran, and ran fairly well, without a Voodoo chipset based card.

Sorely Lacking in Pizazz (1)

Nom du Keyboard (633989) | more than 8 years ago | (#15297486)

If it doesn't have a fan, and at least one additional power connector, how can anyone take it seriously as cutting-edge hardware?

And that's not even mentioning a lack of DRM. Doesn't Hollywood own gravity these days? I'm sure a patent was filed somewhere - or was it a copyright?

Super physics graphics cards (0)

Anonymous Coward | more than 8 years ago | (#15297509)

Nobody seems to care about good old fashioned fashion anymore. Everything has to be technically superior these days. The article was slashdotted, but for those who missed it , it wasn't worth reading, the best thing I can say is "Oh".

I like graphic cards that can make people say "we don't need them yet", because this is the attitude that gets their job outsourced to "offshore". Short sighted vision. Which is ironic when it comes to graphic cards.

many waited with baited breath... (1)

Locke2005 (849178) | more than 8 years ago | (#15297546)

much like that cat that ate cheese then crouched in front of the mouse hole...

HotHardware? (1)

Swift Kick (240510) | more than 8 years ago | (#15297558)

Should we petition HotHardware to change their name to Not-So-HotHardware or maybe Lukewarm Hardware at best?

Looks like their servers exploded from the slashdotting.....

short peek (1)

mugnyte (203225) | more than 8 years ago | (#15297745)


Doesn't look like a very good performance improvement for the money. In fact, the CPU makers' new "dual-core" marketing push may just eat up the dollars for something like this. If you simply move your physics engine to hardware, it only solves one part of a larger, and very delicate, puzzle.

atrocious spelling error (1)

planetfinder (879742) | more than 8 years ago | (#15297857)

"so many waited with baited breath" should read "so many waited with bad breath"

Physics Engine !!! (1)

hector1965 (679971) | more than 8 years ago | (#15297956)

Maybe our fellow geeks at Fermilab can find some use for such a physics engine to simulate things instead of running all those expensive experiments ;^). Imagine a Beowulf cluster of these things, maybe running Linux??? Profit!

Re:Physics Engine !!! (1)

failure-man (870605) | more than 8 years ago | (#15298067)

That's exactly what I was thinking when I first heard about these. Forget frame rates, I want to know what the chances are of getting Fluent to use something like this for a three-dimensional, compressible gas-flow simulation (with heat transfer).

After all, there are only so many P4s you can throw at a problem in a public computer lab before people start whining about "wanting to do their homework!". Undergrads . . . . . .

FINALLY! (1)

poind3xt3r (890661) | more than 8 years ago | (#15298036)

Now how can I put this to use on my homework!