
PhysX Dedicated Physics Processor Explored

Zonk posted more than 8 years ago | from the more-than-just-ragdoll dept.


Ned_Network writes "Yahoo! News & Reuters has a story about a start-up that has created a dedicated physics processor for gamers' PCs. The processor undertakes physics calculations for the CPU and is said to make gaming more realistic - examples such as falling rocks, exploding debris and the way that opponents collapse when you shoot them are cited as advantages of the chip. Only 6 current titles take advantage of the chip, but the FAQ claims that another 100 are in production."


another flash website... (0, Offtopic)

tomstdenis (446163) | more than 8 years ago | (#15233330)

Yeah...

I'd explore the website except their webdev team is stupid.

Flash is meant for things Flash is meant for. Not menus.

That's HTML.

Tom

Re:another flash website... (1)

Odiumjunkie (926074) | more than 8 years ago | (#15233354)

Oh, is that what they used? I was wondering why it wasn't displaying properly in lynx.

Re:another flash website... (0, Offtopic)

tomstdenis (446163) | more than 8 years ago | (#15233387)

I use Mozilla; I just don't have Flash installed. Truth be told, though, did they actually use Flash for something highly scripted or interactive? Or was it just their really really neato super cool uber sweet way of doing a simple button menu?

People like that were probably hypercard junkies in the 80s and are getting their fix today.

Tom

Re:another flash website... (1)

bersl2 (689221) | more than 8 years ago | (#15233461)

They want animated vector graphics. SVG will provide that, but it's not ready yet. Flash will have to do for now.

Re:another flash website... (2, Insightful)

mrchaotica (681592) | more than 8 years ago | (#15233987)

Well, that's just too bad for them, now isn't it? 'Cause the important people -- namely us, the readers of the site -- care about usability, not Flash!

If they can't do animated vector graphics the right way, they shouldn't do them at all!

Joke (0, Redundant)

Ramble (940291) | more than 8 years ago | (#15233332)

Next up, Alzheimer's cure found.

Is that what I think it is. (2, Funny)

kitsunewarlock (971818) | more than 8 years ago | (#15233335)

Is that a chess game in that list? Why would a chess game need a physics engine? Perhaps the programmers would like to use an engine for animations (the king falling down, perhaps?) instead of frame-by-frame and filler animation.

Re:Is that what I think it is. (0)

Anonymous Coward | more than 8 years ago | (#15233391)

Is that a chess game in that list? Why would a chess game need a physics engine?

Perhaps you have never played Battle Chess [wikipedia.org] !

Re:Is that what I think it is. (5, Insightful)

TubeSteak (669689) | more than 8 years ago | (#15233408)

Chess games rely on brute computation to up the difficulty level.

Anything the programmers can do to examine more moves into the future is a good thing for them. Even Deep Blue couldn't look more than 30 moves into the future. Dunno about the 'son of' Deep Blue.

Animations, etc. consume trivial amounts of CPU/graphics power compared to examining the next XY possible moves in a chess game.

Re:Is that what I think it is. (1)

Kagura (843695) | more than 8 years ago | (#15233760)

Yeah, no kidding! Even Deep Blue couldn't defeat more than the world's best chess player [wikipedia.org] !

Chess isn't governed by physics (1)

ergo98 (9391) | more than 8 years ago | (#15233990)

Chess games rely on brute computation to up the difficulty level.

Yeah, but as the OP asked -- what in the world would a physics coprocessor have to do with a chess game?

Purpose-specific devices, such as sound-processing DSPs, video card GPUs, or in this case a physics processor, beat out general-purpose chips (like the AMDs and Intels that we know and love) because they've been designed for a very specific task. Where a general-purpose device might require 1000 operations for an FFT, a DSP might require three, because that's one of its primary purposes.

Nonetheless, that performance advantage most certainly doesn't carry over to non-domain-specific tasks.

So the original question holds -- what in the world would a chip built specifically for physics have to do with chess? While there have been chess processors (such as Deep Blue), these certainly weren't built following the rules of physics...

Re:Chess isn't governed by physics (1, Interesting)

Anonymous Coward | more than 8 years ago | (#15234041)

It's not simply a DSP. It's a fully programmable physics chip - which PROBABLY means it's a single-instruction, multiple-data type of chip (much like the programmable pixel shader logic in a GPU).

This type of chip would be vastly superior to a standard CPU for calculating possible moves.

Though, while it might help with chess move logic, it wouldn't help with Go logic.

Computer Go is still vastly inferior, and the game is more difficult. Why I brought up Go, I have no idea.

Re:Chess isn't governed by physics (4, Interesting)

kitsunewarlock (971818) | more than 8 years ago | (#15234118)

Actually, I was thinking of Go when I read your post... then I saw the word and was like "wow".
You are probably thinking of it because Go is pseudo-famous (among engineers who have attempted it, and in Japan) as a game that cannot easily be turned into a proper computer simulation. While chess has 16 opening moves, Go has... well, 12 decent ones, but statistically 361. Finding the variations in a game of Go would just... be impossible currently. It is commonly said that no game has ever been played twice. This may be true: on a 19×19 board, there are about 3^361 × 0.012 ≈ 2.1×10^170 possible positions, most of which are the end result of about (120!)^2 ≈ 4.5×10^397 different (no-capture) games, for a total of about 9.3×10^567 games. Allowing captures gives as many as 10^(7.49×10^48).

There's more go games then theorized protons in the visable universe!
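A quick back-of-the-envelope check of those magnitudes, in C++ and working in log10 so nothing overflows (just a sketch to sanity-check the quoted figures, not taken from any Go reference):

    #include <cmath>
    #include <cstdio>

    int main() {
        // Roughly 1.2% of the 3^361 raw board states are legal positions.
        double log_positions = 361.0 * std::log10(3.0) + std::log10(0.012);
        // Each position ends roughly (120!)^2 no-capture games; lgamma(n+1) = ln(n!).
        double log_games_per_position = 2.0 * std::lgamma(121.0) / std::log(10.0);
        std::printf("legal positions  ~ 10^%.0f\n", log_positions);                          // ~10^170
        std::printf("no-capture games ~ 10^%.0f\n", log_positions + log_games_per_position); // ~10^568
        return 0;
    }

That comes out around 10^170 positions and 10^568 games, in line with the 2.1×10^170 and 9.3×10^567 figures above.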

Correction (1, Informative)

Anonymous Coward | more than 8 years ago | (#15234336)

There are more go games than theorized protons in the visible universe.

Re:Chess isn't governed by physics (1)

corvair2k1 (658439) | more than 8 years ago | (#15235313)

Chess has eighteen opening moves due to the knights.

Re:Chess isn't governed by physics (1)

MyGirlFriendsBroken (599031) | more than 8 years ago | (#15235474)

Chess has eighteen opening moves due to the knights.

That would be 20 opening moves then. 2 for each pawn and 2 for each knight, (2x8)+(2x2) = 20 ;-)

Re:Chess isn't governed by physics (1)

corvair2k1 (658439) | more than 8 years ago | (#15235492)

I knew I was going to miss something! Maybe I can cover my mistake by insisting that nobody would ever move their knights onto the edge of the board. ;)

Re:Is that what I think it is. (1)

mrchaotica (681592) | more than 8 years ago | (#15234025)

That makes me wonder: is the chess algorithm suitable for running on a GPU, or even possibly this physics chip (i.e., this [gpgpu.org] kind of thing)?

Re:Is that what I think it is. (2, Interesting)

Qa1 (592969) | more than 8 years ago | (#15235272)

Actually, Deep Blue was "Fast and Dumb" - it could indeed search fast and thus foresee many moves ahead ("into the future"), but it didn't have a good sense of which moves are worth checking out. If there are 10 moves available in the position, DB would generally check all of them out. Which meant that:

  1. It wasted a lot of power calculating hopeless and downright stupid moves. That's especially evident when you consider the huge branching factor of exploring all moves in each position.
  2. It would make mistakes if the position required calculating beyond 20-30 moves ahead - i.e. making a strategic move, as opposed to tactical (short-range, immediate apparent profit) moves.
  3. Contrary to popular opinion, DB wasn't the best chess computer that could be built at the time. It was the strongest chess-playing hardware ever created (at that point). The software received very little attention, and if you swapped the generic DB engine for a decent program on the same hardware, it would be much better. In fact, you could substantially reduce the hardware and still get a stronger chess game with a better program. DB was very dumb, even dumber than the dumbest "fast searcher" professional-level playing software.

It's pretty evident that fast searching has reached its limits. The branching factor makes "more muscle" (as per the famous "brute force" method) pretty useless. The current top programs are the "smart searchers": Hiarcs especially (the epitome of a very wise, very "slow" program), and also Shredder [telia.com] . In fact, even the formerly "fast and dumb" programs need to be smarter than they used to be to remain competitive at the top of the computer chess league. But, as mentioned above, none of them ever was as dumb as the fastest, dumbest program ever: Deep Blue.

Re:Is that what I think it is. (1)

Qa1 (592969) | more than 8 years ago | (#15235291)

Sorry, I miss-clicked the Submit instead of Preview button. Here are some format corrections and clarifications of parent post:

Actually, Deep Blue was "Fast and Dumb" - it could indeed search fast and thus foresee many moves ahead ("into the future"), but it didn't have a good sense of which moves are worth checking out. If there were 10 moves available in the position, DB would generally check all of them out. Which meant that:

  1. It wasted a lot of power calculating hopeless and downright stupid moves. That's especially evident when you consider the huge branching factor of exploring all moves in each position.
  2. It would make mistakes if the position required calculating beyond 20-30 moves ahead - i.e. making a strategic move, as opposed to tactical (short-range, immediate apparent profit) moves.

Contrary to popular opinion, DB wasn't the best chess computer that could be built at the time. It was the strongest chess-playing hardware ever created up until that point. The software received relatively little attention, so if you swapped the rather generic software engine DB used for a decent professional program on the same hardware, it would deliver much stronger performance. In fact, you could substantially reduce the hardware and still get a stronger chess game with a better program. DB was very dumb, even dumber than the dumbest "fast searcher" professional-level playing software.

It's pretty evident that fast searching has reached its limits. The branching factor makes "more muscle" (as per the famous "brute force" method) pretty useless. The current top programs are the "smart searchers": Hiarcs especially (the epitome of a very wise, very "slow" program), and also Shredder [telia.com] . In fact, even the formerly "fast and dumb" programs need to be smarter than they used to be to remain competitive at the top of the computer chess league. But, as mentioned above, none of them ever was as dumb as the fastest, dumbest program ever: Deep Blue.
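For the curious, the "smart searcher" versus "fast searcher" split largely comes down to how aggressively the search prunes. A minimal negamax-with-alpha-beta sketch in C++ (hypothetical evaluate/legal_moves stubs, not Deep Blue's or any real engine) shows where the savings come from - whole subtrees are abandoned as soon as a line is provably refuted:

    #include <algorithm>
    #include <vector>

    struct Position { /* board state (placeholder) */ };

    // Hypothetical stand-ins for a real move generator and static evaluator.
    std::vector<Position> legal_moves(const Position&) { return {}; }
    int evaluate(const Position&) { return 0; }

    // Negamax with alpha-beta pruning. A pure brute-force searcher visits every
    // child; the pruning searcher breaks out of a node once alpha >= beta,
    // skipping moves that cannot change the result.
    int search(const Position& pos, int depth, int alpha, int beta) {
        std::vector<Position> moves = legal_moves(pos);
        if (depth == 0 || moves.empty()) return evaluate(pos);
        for (const Position& next : moves) {
            alpha = std::max(alpha, -search(next, depth - 1, -beta, -alpha));
            if (alpha >= beta) break;  // cutoff: this line is already refuted
        }
        return alpha;
    }

Better move ordering and evaluation make those cutoffs happen earlier, which is the sense in which a "smart" program gets more out of the same hardware.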

Re:Is that what I think it is. (0)

Anonymous Coward | more than 8 years ago | (#15233786)

Most likely this will be used in some Super-Xtr3m3(tm) chessboard display mode, probably with (as you mentioned) pieces falling around the board and whatnot. Lots of commercial chess-program companies try to sell their product with fancy graphics and displays (ooOOoo! Look at the shadows on the rook!). Things like "improved handling of passed pawns" or "15 ELO better than the previous version" don't look as nice on packaging as screenshots of photorealistic chess boards.

Physics Good, Fire Bad (4, Insightful)

Cy Sperling (960158) | more than 8 years ago | (#15233357)

I like the idea of offloading physics processing to a specialized card. Seems like it should up the ante for games to move beyond just ragdoll physics for characters and into more environmental sims as well. I would love to see volumetric dynamics like fog that swirls in reaction to masses moving through it. A deeper physics simulation hopefully means more to do rather than just more to look at. Playing with gameworld physics from an emergent-gameplay standpoint has real play value versus larger, prettier textures.

Re:Physics Good, Fire Bad (1, Insightful)

Anonymous Coward | more than 8 years ago | (#15233479)

The problem is that either it's just eye candy or it isn't. If it's not just eye candy, and actually affects how you play the game, then you can't sell the same game to people without the card. This is a problem.

At this point, there's only one game that takes any advantage of dual-core CPUs. Most games are still targeted towards low-end 2GHz/GeForce MX systems. Seems kind of ridiculous to run headlong into specialized PHYZICKS processors when high-end games already fail to take better advantage of existing hardware.

Physics Good, Fire Bad-AI better. (0)

Anonymous Coward | more than 8 years ago | (#15233817)

Some of the new game engines will use dual-core CPUs. Now, what really gives a game replayability is good AI. Compare a scripted game to a game with good AI. The former loses its replayability because you know what's going to happen next. The latter doesn't have this problem.

Re:Physics Good, Fire Bad (1)

LBt1st (709520) | more than 8 years ago | (#15235261)

People said the same about GPUs. But unlike GPUs, we don't have any killer apps to get people to want these things. Back then, we had id pumping out AAA titles that demanded the latest tech. There are no games out there that have people thinking, "I've got to get one of those!"

Re:Physics Good, Fire Bad (2, Interesting)

Babbster (107076) | more than 8 years ago | (#15233515)

I like the idea, too, though in practice I've got two big questions:
1) Is it going to come down in price? Considering that "mid-range" GPUs are going for around $300, this card at $300 (okay, $299) represents a doubling of the cost to bring a gaming system "up to speed." Right now, with only one option, it's a one-time thing but we all know that if it's successful there will be upgrades.
2) Is this really going to make a huge difference in a world where dual-core CPUs are becoming mainstream, and more cores are coming in the future? Is the performance advantage of their specially designed physics processor so important that, say, an eight-core CPU in 2008 couldn't perform similarly (given enough memory for the software engine), making the existing PhysX cards obsolete?

Considering that one of the titles they tout - Ghost Recon for the Xbox 360 - already implements their technology in software (and they brag about how great it is there), I just don't think that this add-in card has any staying power.

Re:Physics Good, Fire Bad (1)

jericho4.0 (565125) | more than 8 years ago | (#15233708)

"Is the performance advantage of their specially designed physics processor so important that, say, an eight-core CPU in 2008 couldn't perform similarly?"

Yes. In general, purpose built hardware can do its job orders of magnitude faster than a general purpose CPU. For example, the 3D performance of an old low end video card will still smoke the software renderer on a high end CPU.

The traditional PC players seem to be set on multiple copies of the same core. CPUs like the Cell, or KiloCore, are taking a middle path, mixing general-purpose hardware with hardware that is less flexible, but much faster for certain tasks. I believe this approach will deliver the most for consumers.

Re:Physics Good, Fire Bad (1)

Babbster (107076) | more than 8 years ago | (#15234295)

Your point about purpose-built versus general-purpose processors is well taken, and it's a big part of Ageia's marketing. As others have noted, though, right now a developer has to cater to that particular hardware when designing the game. This is something that has been done before (I remember having to choose my audio card from a list in the DOS days) but it requires an installed base to really take off.

I think you hit on something potentially big, though, in your second paragraph. Many have talked about adding a PPU to graphics cards. If Ageia got together with Intel or AMD and got their physics processor designed into a multi-core AMD or Intel CPU, that could really get a lot of gamers to upgrade. Right now, the CPU is just about the last thing the average gamer (or even the hardcore gamer) replaces because, invariably, the main bottleneck is graphics performance.

Re:Physics Good, Fire Bad (1)

ivan256 (17499) | more than 8 years ago | (#15234667)

Intel and AMD would both be quick to tell you that the vector engines built into their processors (remember MMX, and SSE?) are perfectly suited for these tasks, and that this is a software problem. One of the two of them would do well to come out with a library for this kind of thing (that only works on their processor, of course), and put these guys out of business in the process.
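For what it's worth, the data-parallel math a physics engine leans on does map straight onto those vector units; a minimal sketch using the standard SSE intrinsics (illustrative only, not any vendor's actual physics library):

    #include <xmmintrin.h>  // SSE intrinsics, the floating-point successor to MMX

    // Advance one body's position by velocity * dt, four floats at a time.
    // This is the kind of inner loop Intel/AMD would argue belongs on SSE
    // rather than on a dedicated add-in card.
    void integrate(float pos[4], const float vel[4], float dt) {
        __m128 p = _mm_loadu_ps(pos);
        __m128 v = _mm_loadu_ps(vel);
        __m128 h = _mm_set1_ps(dt);
        p = _mm_add_ps(p, _mm_mul_ps(v, h));  // p += v * dt in all four lanes
        _mm_storeu_ps(pos, p);
    }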

Re:Physics Good, Fire Bad (1)

ivan256 (17499) | more than 8 years ago | (#15234656)

For example, the 3D performance of an old low end video card will still smoke the software renderer on a high end CPU.

That is because of the I/O bottleneck, more so than because the purpose-built processor is more powerful for the particular task. That's why 2D acceleration is still important, even though modern CPUs can render 2D scenes significantly faster than real time. I/O-intensive tasks, of which graphics display is one, are well suited to specialized hardware.

Physics is not an I/O-intensive process, and the calculations required are well suited to general-purpose CPUs.

Interestingly enough, it's much easier to obtain venture capital for a piece of hardware than an API...

And here's the real sticking point (2, Insightful)

Sycraft-fu (314770) | more than 8 years ago | (#15233743)

I think many games are going to find it's not really usable without mandating it. Let's say I make a multiplayer game and I want players to be able to do things like throw objects at each other, bash down doors, and so on. The PhysX proves to be ideal, allowing me to do all the calculations I need for my realistic environment. However, now I have a problem: there's no way to simplify things for non-PhysX computers that still makes the game act the same. The actual gameplay is influenced by having this physics engine, and there's no going both ways.

Well, that clearly isn't going to work; not enough people will own it to mandate it.

OK, that means you are stuck using it for eye candy: physics effects that make things look cooler, but don't really change gameplay. Hmmm, well, at $300 just for eye candy, you face some stiff competition. I bet $300 spent on a PhysX doesn't make games as pretty as $300 spent on a GeForce 7900 does.

We'll see, but I think your processor argument has a lot of merit. Is this thing going to be far enough ahead to outpace processors for some time to come? Because I don't think it's the kind of thing people will upgrade every year, and I think there's going to be a lot of inertia to overcome. I mean, I'm intrigued, and $300 is not out of the range I'd consider spending on an add-in card if I like what it does. However, I've got to wait and see if it's got any legs and if the difference is big enough for me to care. During that time, I have to guess people will improve physics in software and start using dual cores for that. Right now I have a processor core that sits almost idle during games, just tending to system tasks. I have to ask how much more you could get out of it when it's used, and how close to the PhysX accelerator you can come. The answer may be close enough that I don't care to purchase one.
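Spinning the physics step off onto that mostly idle second core is, at least in principle, not much code; a rough sketch (hypothetical step functions, and a deliberately simplified loop with no frame pacing or synchronization of shared state):

    #include <atomic>
    #include <thread>

    void simulate_physics_frame() { /* hypothetical: advance the physics world */ }
    void render_and_handle_input() { /* hypothetical: everything else in the frame */ }

    std::atomic<bool> running{true};

    // Physics runs on the second core while the main thread renders.
    void physics_worker() {
        while (running.load()) simulate_physics_frame();
    }

    int main() {
        std::thread physics(physics_worker);
        for (int frame = 0; frame < 1000; ++frame) render_and_handle_input();
        running.store(false);
        physics.join();
        return 0;
    }

The hard part isn't the thread; it's keeping the physics state and the rest of the game consistent without stalling either core.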

Re:And here's the real sticking point (1)

CastrTroy (595695) | more than 8 years ago | (#15233825)

I'm sure people said the same thing about 3D video cards when they first came out. Really, at that point, it's all just eye candy, and while you can do some stuff in software at reasonable speeds, no software renderer can compete with a 3D card that does everything in hardware. They don't make games anymore that run on software rendering. Most games require a $150 video card to run. I think the physics cards will come down to this price range.

I've often wondered why they don't make a game like an FPS that would run at Quake 1 graphics on an old computer, but scale up to HL2-type graphics if you have a newer machine. I remember in university we used to resort to playing Quake 2, just so everybody could play, even though Quake 3 was a better game.

Re:And here's the real sticking point (2, Interesting)

abandonment (739466) | more than 8 years ago | (#15234189)

People DID say this with the first generation of dedicated 3D hardware chips, which is effectively why 3dfx went out of business. There wasn't enough installed base to make the developer cost worthwhile for the benefit of reaching such a small market.

There are several things wrong with the Ageia business model:

1) They mandate that you use THEIR physics engine in order to access the physics hardware - there is no low-level hardware API that any engine can access - so by supporting their hardware, you exclude yourself from using well-known and stable physics engines such as Havok, ODE (for the open-source crowd), etc. for your games. This is a major issue from a development standpoint.

2) The cost issue (which others have brought up). The added cost vs. the benefit of actually having these chips installed is simply too much for hardware vendors to see this as a worthwhile thing to add to machines. Currently Ageia is relying on the hardcore gamer crowd seeing this as something that MUST be supported by games, which is a bad way to go about things. Until they sign on a vendor like Dell or HP to actually build machines with these chips, it's a no-go for developers.

---------

Re: 1) I've heard that Havok & Nvidia are partnering to create a bundled video card with an extra dedicated CPU for physics in a single card - so instead of having a single GPU, you will be able to have a GPU and a PPU on a single card in your machine. This will bring the cost down significantly and actually make it worth supporting, both for the hardware vendors looking to build machines at the lowest cost and for developers - Nvidia's marketing muscle and existing OEM chain will guarantee that vendors actually build machines using their cards.

As well, from their experience in the video world, I'm guessing that Nvidia's version will provide a low-level API for accessing the hardware, which any physics vendor can then support, instead of forcing developers to use THEIR physics engine (whether it's Havok or otherwise).

Until this happens, the concept of a dedicated processor for physics is inevitably going to go the way of 3dfx. Perhaps Ageia will be bought by ATI looking to create their own dedicated GPU / PPU combination, but otherwise I don't see it catching on.

With dedicated 3D graphics, at least there are OTHER applications/reasons that a general mass-market consumer might want such a card - e.g. the Aero-style 3D-ish interfaces, etc. With a physics processor, unless you are playing games that require it, it's an unnecessary add-on.

Re:And here's the real sticking point (1)

Babbster (107076) | more than 8 years ago | (#15234358)

Until they sign on a vendor like Dell or HP to actually build machines with these chips, then it's a no-go for developers.

It should be noted that Dell/Alienware is (and has been for at least a month or more) offering the PhysX card as a build-to-order option. :)

Re:And here's the real sticking point (1)

QuantumLeaper (607189) | more than 8 years ago | (#15234506)

You also forgot Sony; they are going to be including it in their upcoming product, which should sell 100,000,000 or so if it sells as well as their last one, which was called the PlayStation 2.

Re:And here's the real sticking point (1)

NeMon'ess (160583) | more than 8 years ago | (#15235378)

People DID say this with the first generation of dedicated 3D hardware chips, which is effectively why 3dfx went out of business. There wasn't enough installed base to make the developer cost worthwhile for the benefit of reaching such a small market.

No. 3dfx went out of business because the Voodoo4 shipped way behind schedule and wasn't optimized for 32-bit rendering. It did 16-bit rendering fast, but only about on par with nVidia's chip, which could do 32-bit and make everything look much nicer.

Now I do totally agree that Ageia is quite wrong to require their physics engine. It reminds me of 3dfx's Glide-only games. The difference will be that Ageia won't have 50% of the PPU market share for very long, and at $300 will never get much of the total market share before nVidia stomps in. It'll be like how the GeForce added T&L support over the TNT2 Ultra: the original GeForce didn't render polygons much faster than the TNT2 Ultra, but a year later, when T&L-supporting games came out, the GeForce rendered much prettier frames.

Re:And here's the real sticking point (1)

Sycraft-fu (314770) | more than 8 years ago | (#15235021)

But graphics can be made to scale without changing gameplay. Quake 1 played fine in software; it just didn't look as good. Physics is a more integral part of gameplay. Used just as eye candy, I'm not sure it'll be effective enough to sell people on a $300 part. Especially because it needs to be a lot better than what software offers. I remember getting my first 3D card; the difference was night and day. Well worth the money to me. How well will the PhysX do?

That goes double when games start using dual-core processors to help. I already have a dual-core processor. How much benefit will there be when things like physics are spun off so they can run on the second core that's currently nearly unused? Probably not as much as the accelerator, but maybe enough that I don't care. How about compared to a quad core, which is coming later this year?

Also, most games DON'T require a $150 accelerator. I'd say over 80% will actually play on an integrated Intel GMA 900 or 950. They don't run WELL, but they are playable. A person in my WoW guild was doing just that. However, over 95% run playably or better on a $50 accelerator. Really, you can get a Radeon 9600 Pro for $50 and that'll run WoW well, Quake 4 playably, AoE 3 pretty well, etc. You're not going to get all the eye candy, but it's plenty to play on. I know more than a few people who have that class of hardware and are fine with it.

Graphics accelerators were able to rise precisely because games could support them and look better, but they didn't have to. When they first came out, GLQuake was probably the only thing you could get. Slowly more games started supporting them, but always with software fallback. Finally they were popular enough that games started mandating them, but that took quite a while.

The problem I see here is that games will have to go one of two ways. They can support it just as an effects enhancement, but then it's going to have to be used only on things that don't affect gameplay, and I don't know if those will be enough to make people think it's worth the money. Or they could make the gameplay dependent on it, but then you have to mandate the card. You can't very well have a game where some people use it and play one way and some don't and play another. Take a racing game: if you have it, the physics are real and it's more challenging, since your car can slide out and such. If you don't, they are simplified so it's easier to race. Well, that means those without it are at an advantage, and the people who had it would turn it off so they could competitively race against those without.

I'm not saying it's doomed for certain; I'm just saying that I see real problems with its adoption.

Re:Physics Good, Fire Bad (1)

zerocool^ (112121) | more than 8 years ago | (#15234857)


Not to mention, you point out that a good graphics card will cost you $300... and for another $300, I'd rather have another identical card and rock some SLI.

~Will

Re:Physics Good, Fire Bad (1)

darkhitman (939662) | more than 8 years ago | (#15234461)

With physics such an integral part of today's killer games, I think this is a logical expansion... think of the possibilities, people--if HL2 can do what it did with a single processor, think what it could have accomplished with a separate processor for physics.

My only concern is, of course, logistics. How expensive is this? Will games risk being developed for it, given the risk of it becoming obsolete? How hard is it to develop for? Etc., etc. And can I jam one into my current computer without buying (yet another) mobo?

Game Play Processing Unit (4, Funny)

9mm Censor (705379) | more than 8 years ago | (#15233373)

I want a GPPU. A card to enhance the game play of vids. Screw graphics and physics. I want a card that makes games more fun.

Already exists. Kinda (3, Insightful)

Opportunist (166417) | more than 8 years ago | (#15233529)

It's called "creativity" and is normally used only in the development of games. Actually has been for ages before studios found it too expensive, and realized it's cheaper to develop games without it.

Re:Already exists. Kinda (1)

cgenman (325138) | more than 8 years ago | (#15233995)

It's called blow, and it was outlawed in the 90's.

Re:Game Play Processing Unit (1)

Skuld-Chan (302449) | more than 8 years ago | (#15233603)

I want a GPPU. A card to enhance the game play of vids. Screw graphics and physics. I want a card that makes games more fun.

Assuming its technically possible to improve the gameplay of "vids" (videos?) I'm all with you :).

Creativity (0)

Anonymous Coward | more than 8 years ago | (#15233865)

I believe it's called a Sense of Creativity. It doesn't cost much, but it has to be installed in the game designer prior to the game going through implementation. I think you can grow one over a long period of time through a new agricultural process known as "going Outside."

phys processor makes more sense on the gfx card (1)

majid_aldo (812530) | more than 8 years ago | (#15233374)

I mean, already the only reason people buy a mid- to upper-range card is to play games. It makes a lot of sense to put it on the graphics card.

Admittedly, I'm not addressing whether this chip is useful.

Re:phys processor makes more sense on the gfx card (1)

Jordanis (955796) | more than 8 years ago | (#15235025)

I think there are a couple of reasons not to.

For starters, GPU boards are already pretty huge. I don't think there's physically room. Then there's the issue of heat--you'd be localizing even more heat to one card. Not good. Finally, in a pure marketing sense, I think it's easier to get people to buy a $400 GPU and then a $300 PPU than to drop $700 on a combo board.

Separate is better for the consumer, anyway--more consumer choice about which products to buy. But I think the real central issue to why the combo won't happen is the physical size and heat problems.

no way in hell (1)

B0red At W0rk (876713) | more than 8 years ago | (#15233382)

There's no way in hell this will take off in the mass market unless it's incorporated on a graphics card or something. Nobody except hardcore techies is going to buy a separate component just for game physics.

Re:no way in hell (2, Insightful)

WML MUNSON (895262) | more than 8 years ago | (#15233525)

I'm going to assume you weren't around when 3D accelerators first came into existence and everyone was saying the same thing as what you just said.

Improved physics matters only to "hardcore techies"? I challenge you to explain Half-Life 2's success without including the use of physics in your answer.

Physics is an emerging area in gaming, and huge quantities of resources are being poured into its improvement. A card that not only offloads the physics calculations to a separate chip, but as a result gives us the capability for more and better in-game physics, is absolutely a great idea. Puzzles can become more interesting, visuals can become more immersive thanks to improved particle physics (just for starters), you'll have creative ways to destroy your enemies without shooting them directly, destructible environments... and the list keeps going.

It's only a matter of time until these take off. Some folks might have a tough time finding an empty slot for one of these on their motherboard (with all the QUINTUPLE-SLI configs people have nowadays), but they'll just upgrade to a bigger case and a board with more slots, especially if developers keep stepping on board.

Games probably won't REQUIRE one for quite some time, but I would expect these to be about as widespread as 5.1+ sound cards in just a few years.

Re:no way in hell (1)

B0red At W0rk (876713) | more than 8 years ago | (#15233740)

I understand all this, but with graphics you can see the impact of your investment in the card directly on the screen. Physics is too abstract to market to the average Joe. What next, an AI chip? People are willing to invest only so much money in a gaming rig. I don't see this kind of add-on taking off unless Intel and AMD hit a speed limit.

Re:no way in hell (1)

WML MUNSON (895262) | more than 8 years ago | (#15233927)

That demo of Cellhunter (whatever game that was) on the linked site showed a HUGE impact of your investment right on the screen. I'm going to guess you didn't view that video. :)

How much more are you looking for from video cards besides higher resolutions, textures, AA capabilities, and whatnot?

If you don't increase interactivity and such within these games and just keep improving the visuals, you will still have a dead, lifeless game world to play in no matter how good it looks.

Like I said before, when 3D accelerators first came out, people said the same things about them.

Re:no way in hell (1)

RobertLTux (260313) | more than 8 years ago | (#15233733)

Oh, I could think of a few non-gaming things that could use this. For XGL 2.5.67: 1) "throwable" windows; 2) smashable windows/objects (bin that POS shareware app, since it has spyware, and watch it break into itty bitty bits); 3) just keep dreaming.

Just in time for... (0, Offtopic)

Kj0n (245572) | more than 8 years ago | (#15233385)

Duke Nukem Forever!

Sorry guys, I just couldn't resist.

Re:Just in time for... (2, Funny)

thepotoo (829391) | more than 8 years ago | (#15233482)

Dear Kj0n:

Thank you for your suggestion. We at 3D Realms pride ourselves on taking into account suggestions made by fans.
Unfortunately, we regret to inform you that Duke Nukem Forever will not be shipping with support for the referenced PhysX Dedicated Processor, because Infinium Labs' "Phantom" Console, our primary release platform, will not include such a card.

It is possible that a future port to Windows x128 will include support for the card. Please expect 15-20 year delays while we add support for the processor.

Thank you for your understanding.

Sincerely,
3D Realms.

I wonder..... (1)

allaunjsilverfox2 (882195) | more than 8 years ago | (#15233409)

Would they put an extra port on a motherboard to give it its own bandwidth, or would they be forced to use the existing ports, which I admit haven't even begun to be fully utilized? The only place I can see this having any use is possibly in render farms. Otherwise, I'm buying the cheapest card for the best value, regardless of name brand, reviews, etc.

Re:I wonder..... (1)

CastrTroy (595695) | more than 8 years ago | (#15233834)

I think that's the plan with PCI Express: put multiple high-speed ports on the same motherboard, for video cards and anything else that requires high-speed access.

Cellfactor video looks pretty cool... (4, Informative)

vertinox (846076) | more than 8 years ago | (#15233415)

Check out this link: http://physx.ageia.com/footage.html [ageia.com]

Go to the section that says "I'm old enough" with the CellFactor video and take a look at the flash movie. CellFactor could almost be the poster-child game for the mother of all physics engines. It looks like it puts Half-Life 2 to shame. (Although I wonder: if your character has that much psychic power, enough to fly through the air and throw jeeps at people, then why bother with having a gun?)

I really dig the blood particle effects as someone is gibbed while standing on the ledge and the blood just splashes down the side of the platform.

And you can really tell the difference in particle debris in the comparison videos at the top. However, I wonder if the same effect can be achieved with cranking up your settings on a high-end gaming rig without the card. I'd wait till some third-party hardware review site gives the final verdict.

Re:Cellfactor video looks pretty cool... (2, Insightful)

TubeSteak (669689) | more than 8 years ago | (#15233487)

However, I wonder if the same effect can be achieved with cranking up your settings on a high end gaming rig without the card.
TFA points out that even a high end gaming rig can't handle all the objects the chip allows the game to generate:
But before starting the demonstration, Hegde had to lower the resolution of the game.

The reason? The chip can generate so many objects that even the twin graphics processors in Hegde's top-end PC have trouble tracking them at the highest image quality.
Basically, the tech in this chip is ahead of its time. It would seem like a wise idea to have PhysX-enabled games (optionally) benchmark your rig & automagically limit the # of objects generated so that the gaming experience doesn't drop below a certain FPS at your chosen graphics settings.
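That auto-limiting idea is essentially a feedback loop on the frame rate; a rough sketch of what the engine-side logic might look like (hypothetical hook, not Ageia's actual API, and the thresholds are arbitrary):

    // Hypothetical adaptive debris budget: shrink the object cap when the frame
    // rate dips below target, and let it creep back up when there is headroom.
    int adjust_debris_budget(int budget, double measured_fps, double target_fps,
                             int min_budget, int max_budget) {
        if (measured_fps < target_fps * 0.9) {
            budget = budget * 3 / 4;             // back off quickly on a dip
        } else if (measured_fps > target_fps * 1.1) {
            budget = budget + budget / 10 + 1;   // recover slowly when comfortable
        }
        if (budget < min_budget) budget = min_budget;
        if (budget > max_budget) budget = max_budget;
        return budget;
    }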

Re:Cellfactor video looks pretty cool... (1)

cgenman (325138) | more than 8 years ago | (#15233557)

On a side note, notice how all of those games are FPSes?

CellFactor seems to really take advantage of the idea of using physics as part of the gameplay itself.

The Ghost Recon videos could easily be replicated using non-colliding particle systems which simply pass through geometry before wearing out. Heck, add a dirt-cheap ground-level collision plane and you're all set. In the heat of an explosion it would look just about as realistic, and without the additional hundred dollars in hardware to upgrade every year. As it is, they disappear after about 3 seconds anyway.

And Bet On Soldier's glowing particle systems are neat, but the gameplay doesn't change one bit.

CellFactor seems to really take advantage of the hardware, and is a game I'd buy at launch to boot (even though the time to crate [oldmanmurray.com] is about 1 second). The rest of what they're showing is nice, but not what you would want physics for. Realistic debris? Come on, we can do better.

Re:Cellfactor video looks pretty cool... (1)

zerocool^ (112121) | more than 8 years ago | (#15234877)


Watching that video, it does look cool. But the first thing that comes to mind is "tech demo". That's what that game looks like. I can't think of any reason it's cool other than showing off a bunch of physics; and I also can't imagine that the commercially standard hardware available at the release of that game won't be sufficient to run it just fine.

It does look cool. But, c'mon. Essentially, they're trying to sell you a $350 game. And that blood? I haven't seen blood that fake since Rise of the Triad.

~w

G4... (0)

corychristison (951993) | more than 8 years ago | (#15233417)

I saw this on G4 Tech-TV a while back... damn near a year ago, actually. I think he mentioned they will come with a price tag of around $600-$700 USD..

No... I didn't read the article.

Price is $299 (0)

Anonymous Coward | more than 8 years ago | (#15233559)

Which definitely makes it affordable. And the article is right: current games have started to seem clunky because the great graphics really make the terrible physics more noticeable. Before, the biggest limit to realism was graphics.

Slots? (1)

Dr. Eggman (932300) | more than 8 years ago | (#15233452)

OK, as far as I can tell, the PhysX will be PCI, at least at first. I am upgrading my computer soon, and I'm trying to leave plenty of room for the future. To that end, I'm looking to get a mobo with 2 PCIe x16 slots (which I am guesstimating would be the slot type the PhysX would use in a future variant; I'll have two other sizes as well, but that was unintentional). But to get a mobo with 2 PCIe x16 slots, it comes in the form of an Nvidia SLI mobo. Does anyone know if these SLI-capable boards will accept something other than a graphics card in the second PCIe x16 slot, for example a PhysX card that uses PCIe x16?

I suppose it's not exactly dire, as the mobo in question also has 3 PCI slots, chosen specifically to be able to hold my current cards plus a transitional PCI PhysX... but it's good to know these things.

Re:Slots? (1)

Joe The Dragon (967727) | more than 8 years ago | (#15233488)

Most of the x16 SLI boards have an x4 slot as well. How much bandwidth does this card need?

is 4.5% APR supported by Ageia? (3, Interesting)

ignatz72 (891623) | more than 8 years ago | (#15233476)

From the article: "The consumers will see how the games behave better," Hegde said.

But in the same article, they mention that the extra particles the processor generates swamps the DUAL gpu setup he's got in a demo system. How many of you want to wager the demo system is a hoss in it's own right?

Apparently this card isn't going to help those of us holding out with our Athlon XP AGP systems that perform fine on current gen games, if a current bleeding edge rig can't cut it. :(

SO now I have to plan for a quad AM2 CPU, quad dual-sli chip GPU w/ 32 Gigs of memory? Damnit all to hell...

*/me researches mortgage rates to subsidize next box-build*

Re:is 4.5% APR supported by Ageia? (1)

Joe The Dragon (967727) | more than 8 years ago | (#15233581)

Put a HW raid card in there as well

Massively destructible & collateral damage. (5, Informative)

Ned_Network (952200) | more than 8 years ago | (#15233486)

Bah! They cut some of the best bits of my submission!

The price of $300 seems a bit steep right now to a casual player like me, but this bit from the site's FAQ I find very appealing:

Buildings and landscapes are now massively destructible with extreme explosions of thousands of shards of glass and shrapnel that cause collateral damage
The PPU seems to be available as a PCI card [ageia.com] but is also available in off-the-shelf machines [ageia.com] from Dell & Alienware.

There's a comparison video [ageia.com] showing the difference between Tom Clancy's Ghost Recon Advanced Warfighter with & without the PhysX installed, and a couple of hi-res [ageia.com] videos [ageia.com] that are available by FTP, so they can't be cached by Coral, I don't think.

What I really have to wonder, if this thing is as good as they reckon, is why I haven't heard of it before?

Re:Massively destructible & collateral damage. (1)

leland242 (736905) | more than 8 years ago | (#15234824)

This was on Attack of the Show on G4 months ago...

Looked cool, good idea, but if no one supports it - either gamemakers or consumers, it's not likely to go anywhere.

Here's the problem with this (4, Insightful)

SlayerDave (555409) | more than 8 years ago | (#15233492)

There is no common, open API for physics. Rather, there are several proprietary, closed APIs which offer similar functionality, but have no common specification. For instance, there are Havok [havok.com] , Ageia [ageia.com] , Open Dynamics [ode.org] , and Newton [newtondynamics.com] , just to name a few. The PhysX chip from Ageia only accelerates games written with their proprietary library in the game engine. Other games written with Havok, for instance, should receive no benefit at all from the installed PPU. On the other hand, Havok and NVIDIA have a GPU-accelerated physics library [havok.com] , but games without Havok (or users without NVIDIA SLI systems) won't get the benefit.

On the other hand, graphics cards make sense for consumers because there are only two graphics APIs, OpenGL and DirectX, and they offer very similar functionality under the hood (but significantly different high-level APIs). So a graphics card can accelerate games written with either OpenGL or DirectX, but that's not the case with the emerging PPU field. In graphics, the APIs developed and converged on common functionality long before hardware acceleration was available at the consumer level, but I don't think the physics API situation is stable or mature enough to warrant dedicated hardware add-in cards at this time.

However, I think there are two possible scenarios that could change this.

1) Havok and Ageia could create open or closed physics API specifications and make them available to chip manufacturers, e.g. ATI and NVIDIA, which have the market penetration and manufacturing capability to make PPUs widely available. I could imagine a high-end PCIe card that had both a GPU and a PPU on-board.

2) Microsoft. Think what you will about them, but DirectX has greatly influenced the game industry and is the de-facto standard low-level API (although there are notable exceptions, such as id [idsoftware.com] ). Microsoft could introduce a new component of DirectX which specifies a physics API that could then be implemented in hardware.

But unless one of those things happens, I don't think proprietary PPUs are going to make a lot of sense for consumers.
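Until a common API exists, the usual engine-side workaround is to hide whichever physics middleware is present behind a thin interface of the game's own; a minimal sketch of that pattern in C++ (names are illustrative, not any real SDK):

    #include <memory>

    // Engine-side abstraction over whatever physics backend is installed.
    class PhysicsBackend {
    public:
        virtual ~PhysicsBackend() = default;
        virtual void step(float dt) = 0;               // advance the simulation
        virtual int  spawn_rigid_body(float mass) = 0; // returns an object handle
    };

    class SoftwareBackend : public PhysicsBackend {    // CPU fallback path
    public:
        void step(float dt) override { (void)dt; /* integrate on the CPU */ }
        int  spawn_rigid_body(float mass) override { (void)mass; return next_id_++; }
    private:
        int next_id_ = 0;
    };

    // A PPU- or GPU-accelerated backend would implement the same interface, so
    // game code never needs to know which library or card sits underneath.
    std::unique_ptr<PhysicsBackend> make_backend(bool accelerator_present) {
        // if (accelerator_present) return std::make_unique<AcceleratedBackend>(); // hypothetical
        (void)accelerator_present;
        return std::make_unique<SoftwareBackend>();
    }

It keeps a title portable across middleware, but it does nothing for the deeper problem described above: each backend still only accelerates the hardware its own vendor supports.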

Nitpick (4, Informative)

loqi (754476) | more than 8 years ago | (#15233553)

ODE isn't closed and proprietary.

Re:Nitpick (2)

SlayerDave (555409) | more than 8 years ago | (#15233571)

True, my mistake.

Yeah sure (1)

SmallFurryCreature (593017) | more than 8 years ago | (#15233556)

All you are saying is that physics engines are now in the same state as GPUs were when they first emerged. Hell, at the time game mags even had little icons to show which games supported which cards.

No common interface, and the game makers just had to make sure to include code for the cards they thought were important enough.

This lasted quite a long time until things settled down. Oh, but wait, NO!

Check Tomb Raider: Legend. It has a special option, "next-gen content", which is claimed to be optimized for Nvidia cards. Granted, some bugs seem to get in the way, BUT it seems clear that even in the days of DirectX there is still room for games having extra content depending on your card.

What we are going to see is that this company is going to proudly list the games that support it, and be very optimistic about listing all the games that could possibly support it because the game company didn't flatly turn them down.

Some very successful games will come along that don't give a shit. Some mediocre games will look better because they support it, until finally this product either dies (like virtual reality helmets) or stays with us (like GPUs).

Which one will be the case? Frankly, I don't know. Graphics in a way are easy, and if you remember, it didn't take a redesign of the game to add the Monsterboard patch for games like Tomb Raider and Quake. The games stayed the same.

The physics that is just pretty pictures would still not be easy. All the Tomb Raider and Quake patches had to do was release higher-res versions of the super-high-res textures the developers had anyway. Is it going to be as easy to add increased physics to a game?

Is there going to be any demand for just-pretty-pictures physics, or are people going to want to see the gameplay affected before they invest in this card?

We will just have to wait and see. I think the battle is going to be whether gamers find that it improves their game. Game makers will deliver the code if there is a demand, just like they did with the first GPUs.

Re:Yeah sure (1)

SlayerDave (555409) | more than 8 years ago | (#15233593)

I agree with most of your points, but the game market of 2006 is very different from that of 1995. One significant difference is that the GPU market has stabilized: there are OpenGL and DirectX, and all modern cards support both. I imagine a similar thing has happened in the sound card market, but I don't know for sure. I think it would be difficult to introduce a new, expensive piece of hardware that only improved certain games in today's market. Consumers today expect that if they shell out $300 for a card, it will improve all of their games, not just a small fraction.

Re:Yeah sure (1)

racermd (314140) | more than 8 years ago | (#15234671)

The point about how the 3D video card market has stabilized, and how the sound card market has done so in a similar manner, is a little off.

With 3D video cards, there are 2 major players - ATI and nVidia. Both support Direct3D and OpenGL. The only real differences are what extras are thrown into the mix with each new revision of their respective chips.

With sound cards (for gaming purposes, anyway), it's pretty much all Creative Labs. There are really only multiple variations of EAX for accelerated sound in games. With Creative being the only major player in the market, they can do pretty much anything they want with the direction of EAX. Some other manufacturers have licensed the older EAX versions to include on their own cards (don't ask me which, I'm too lazy to go look it up).

The major *realistic* difference is that slightly bad video is so much more noticeable to the average person than slightly bad sound. Furthermore, it's easier to compare two screenshots on the web than it is to compare two audio clips. Therefore, an A/B comparison to detect differences is much easier with visual content than with audio content. Not that it's impossible, but the internet as it is today is still primarily a visual medium.

Re:Here's the problem with this (1)

GaryPatterson (852699) | more than 8 years ago | (#15233704)

We need an OpenPL to sort this out. Something very much like OpenGL, but for physics.

Imagine defining (for example) a feather. You create a simple model and a nice texture with an alpha map. You define a few OpenGL parameters and that'll render nicely on any GPU.

Then you assign it some physics parameters - mass, air resistance, shape, density - and that feather can then be instantiated with all the parameters needed to control it.

Now think of a chicken panicking and running away. A bunch of feathers fall out as the chicken starts moving, and they * without any CPU interaction * float naturally to the ground. Dust swirls as the chicken kicks up little footprints, again * without any CPU interaction *.

Suddenly we've got a system that can both draw and manage objects in the game. The game code would need to be able to read back properties of some objects, but most of the actual object management could be done on the card without any CPU input. The CPU's now free for AI and world management.

I think we'll see a move towards physics-based realism that requires cards like this one. I really hope that we see a parallel move towards an open physics language.
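As a thought experiment, declaring that feather in an "OpenPL"-style API might look something like the following; every type and call here is hypothetical, sketched only to match the parameters named above:

    // Hypothetical "OpenPL"-style description: anything declared this way would
    // be simulated entirely on the card, with no per-frame CPU involvement.
    struct PhysicsMaterial {
        float mass;            // kilograms
        float air_resistance;  // drag coefficient
        float density;         // kilograms per cubic metre
    };

    struct PhysicsObject {
        PhysicsMaterial material;
        const char*     collision_shape;  // e.g. a convex hull named "feather_hull"
        bool            cpu_managed;      // false: the card owns this object's motion
    };

    int main() {
        PhysicsObject feather{{0.001f, 0.95f, 50.0f}, "feather_hull", false};
        // plInstance(feather, /*x=*/1.0f, /*y=*/2.0f, /*z=*/0.5f);  // hypothetical call
        (void)feather;
        return 0;
    }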

Indeed we do need an open physics API.... (1)

mrchaotica (681592) | more than 8 years ago | (#15234081)

...and here it is! [ode.org]

Re:Here's the problem with this (1)

octopus72 (936841) | more than 8 years ago | (#15234008)

Since Havok was threatened with being pushed out of business by the NovodeX API and PhysX hardware, they seem to have signed an agreement with Nvidia. Though I don't believe this solution will be as good as Ageia's cards (even in SLI), because the GPU must do other shader processing and is generally not designed for such tasks (although it has the advantage that all processing is done on one card and geometry data doesn't have to take round trips).

If I were a game developer, I'd be confused about which API to pick. I'm sure NovodeX will not support acceleration on GPUs (via SM3.0), because then people wouldn't have a reason to buy the PhysX card. On the other side, Nvidia sees accelerated Havok as an opportunity to sell more cards in SLI setups or to persuade people to buy better graphics cards instead of PhysX.

PhysX seems to have attracted many developers (100 games to come), and is a pretty expensive solution for users wanting hardware acceleration. On the other side, Havok will be accessible to mid-range users who don't have an additional $200 for a physics card, or who want SLI for other, ordinary titles. After all, most gamers buy whatever is hyped (with their parents' money), so for now I think Ageia has a serious edge, but ATI and Nvidia will probably react in a year or two with more dedicated hardware capabilities. I wonder if Havok made an exclusive deal with Nvidia, because excluding ATI would be a disaster for the success of the API.

Which API will win? Ageia shouldn't have any interest in supporting graphics card shaders in their SDK/API, while Havok might try to support PhysX to at least have a unified platform solution, because they earn solely from selling middleware. I think these two will be around for some time (each with a certain advantage), unless all developers pick the same side. Users will certainly benefit from the competition. Agreed, the best way would be to have a single card with both chips (with an efficient interconnect and, because of that, a redesigned API, so it is a long-term goal), but I doubt that GPU makers want to share that piece of cake.

Re:Here's the problem with this (1)

zerocool^ (112121) | more than 8 years ago | (#15234900)


And this is exactly what's wrong.

If you go to their site, and you watch the video clips, you think "Wow, what have I been missing". But, what's happened in reality is this:

if (physx_card_present)
    explosion(do_extra_shit);
else
    explosion(normal_boring);


That's all. Proprietary API and exclusive deals with game manufacturers mean that people who have the card see extra shit, even if their normal graphics card setup could have handled it without one. I'd like to see the exact same code run on a $300 graphics card and on this $300 technological wonder, and then on two $300 graphics cards in SLI. I bet it wouldn't look any different. You ask the hardware to do something; if it's got the horsepower, it does it. When you start assuming, you're just fucking the consumers.

I'm tired.

~Will

I guess I need to get my eyes checked... (2, Funny)

andphi (899406) | more than 8 years ago | (#15233499)

Because I could have sworn the article was about a "Dedicated Physics Professor", not a peripheral processor. For a moment, I had visions of a computer program that teaches advanced physics to its users. Silly /me

Viable? (1)

drwiii (434) | more than 8 years ago | (#15233519)

With dual-core CPUs taking hold, and quad-core CPUs on the way, is the addition of a fixed-purpose processor really a viable long-term solution?

They seem to think so [ageia.com] , but then again they have an interest in selling fixed-purpose processors.

Re:Viable? (0)

Anonymous Coward | more than 8 years ago | (#15234898)

"With dual-core CPUs taking hold, and quad-core CPUs on the way, is the addition of a fixed-purpose processor really a viable long-term solution?"

Note that Tim Sweeney (of Unreal fame) said the same thing about 3D accelerators - that CPUs would catch up and overtake them. He was, of course, incorrect. The fact is, without specialization in 3D video processing, modern CPUs simply could not render the graphics modern GPUs can at respectable frame rates. Think about what a specialized processor plus high-end memory buys you in terms of being able to process specialized tasks much faster than a modern CPU.

Think about it this way: it's all about the amount of processing power you couple with memory bandwidth. Modern CPUs (and GPUs, for that matter) are still by and large choked by memory bandwidth - notice the performance effect of DDR 266 vs. DDR 400 RAM in many games. Fast memory and gigabytes per second of local memory bandwidth are critical for getting specialized tasks done much quicker than any modern CPU could; general-purpose CPUs simply cannot compete with specialized add-in cards.

Basically A Poor Man's Cell Type Co-Processor (0)

Anonymous Coward | more than 8 years ago | (#15233675)

There are two things that are going to help put the nail in the coffin of the x86 game market:

1) MMORPGs

2) Ageia

Of course there will always be people making games for home computers, but the market has been in decline for the past five years and there is virtually no chance it will ever recover. MMORPGs are leading PC gamers to buy fewer and fewer PC games every year. And products like PhysX make it painfully clear to PC gamers how woefully far behind the x86 architecture is compared to modern chips like Cell.

The main problem with PhysX is that you can't use it for any meaningful gameplay elements without:

1) Creating two disjoint versions of your game that can't be played together online

2) Creating extra testing/patching/version overhead since you are shipping two different games

With graphics, almost everything you do is just an interpretation of your game logic, and how you display it to each individual user can be completely different. You can't put out a game that requires a PhysX card and have any hope of making your development money back, because the number of people who will pay for the card on top of already very expensive PC graphics cards is tiny. So you end up restricting your use of the PhysX card to fluff or eye-candy elements, i.e. stuff that looks cool but has no actual effect on gameplay, like lots of debris flying around in explosions.
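In code, that split tends to show up as a tag on every simulated body. A hypothetical sketch (names invented, not from any shipping engine) of keeping gameplay physics on the deterministic CPU path and handing only cosmetic debris to an optional accelerator:

// Sketch of the gameplay/eye-candy split described above.
// Names and structure are hypothetical, not from any shipping engine.
#include <cstddef>
#include <vector>

enum SimClass {
    GAMEPLAY,   // affects hit detection, movement, scoring: must behave the same
                // for every player, so it stays on the plain CPU code path
    COSMETIC    // sparks, extra debris, cloth flutter: may go to a PPU if present
};

struct Body {
    SimClass cls;
    float pos[3];
    float vel[3];
};

void stepFrame(std::vector<Body>& bodies, bool ppuAvailable, float dt) {
    for (std::size_t i = 0; i < bodies.size(); ++i) {
        Body& b = bodies[i];
        if (b.cls == GAMEPLAY || !ppuAvailable) {
            // deterministic CPU integration, shared by every client
            for (int k = 0; k < 3; ++k) b.pos[k] += b.vel[k] * dt;
        } else {
            // hand off to the accelerator; if it drops or delays these bodies,
            // nothing the other players depend on changes
        }
    }
}

int main() {
    std::vector<Body> bodies;
    Body crate  = { GAMEPLAY, {0, 0, 1}, {0, 0, 0} };  // players can stand on this
    Body debris = { COSMETIC, {0, 0, 1}, {2, 0, 3} };  // pure eye candy
    bodies.push_back(crate);
    bodies.push_back(debris);
    stepFrame(bodies, true, 1.0f / 60.0f);
}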

And even if every pc gamer for the sake of argument went out and bought these cards you end up making a mess for developers to deal with because you end up with three different disjoint worlds in your game engine:

1) x86 world - your main game logic and data structures

2) GPU world - your visual representation of your game logic and data

3) Physics world - your 3D/physical (but non-graphical) representation of your game logic and data

All three of these worlds are separated from one another by a relatively slow bus.

You might as well ditch the worthless x86 chip and link the PPU to the GPU by a very fast custom bus...which just happens to look like the PS3 architecture...

Re:Basically A Poor Man's Cell Type Co-Processor (1, Interesting)

Anonymous Coward | more than 8 years ago | (#15233778)

"You might as well ditch the worthless x86 chip and link the PPU to the GPU..."

Funny you should say that: one of my friends, a very senior engineer at NVidia, has been talking about the same thing for the past year or so. He says NVidia views the x86 chips that drive PC gaming systems as a worthless relic that it would like to make irrelevant, and wants PC game developers to essentially start writing their entire game engines on the GPU.

He seems to be just gushing with excitement over what they are doing in partnership with Sony - it sounds like the PS3 is just the beginning.

No wonder Microsoft went through all the trouble to switch to the more powerful PPC chips and ditched x86.

Multiplayer (5, Insightful)

lord_sarpedon (917201) | more than 8 years ago | (#15233772)

There's a major flaw. Multiplayer gameplay requires certain client-side behaviors to be deterministic, otherwise clients fall out of sync, and physics is one of those. If Bob uses a PhysX card and an explosion lands a box in position X, but Alice, without a PhysX card, has the same box in position Y, then there is a problem; both can't be right. The server would have to correct for discrepancies like that because the position of a box affects gameplay: bullets and players can hit it. Perhaps more position updates would have to be sent to make sure Alice's box ends up in the same spot as Bob's. But what about mid-flight? I suppose this doesn't matter for blood smears and purely aesthetic effects, but as the videos show, that's not where PhysX really shines.

This puts a physics accelerator in an entirely different class from a graphics card. You can adjust your graphics settings freely, but in multiplayer the quality of your physics simulation can only be as good as the least common denominator without breaking gameplay for some of the parties involved. Sure, AGEIA could provide non-accelerated versions of everything in its library that produce the same results when the hardware isn't available, but then you are offloading the entire workload of an add-on card onto the CPU; imagine running Doom at full settings using software rendering. Extreme example, but it defeats the very purpose of the card if developers are limited because most of their customers might not have it.
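One way servers cope with that kind of drift is to checksum the gameplay-relevant state every so often and let the server's copy win. A minimal sketch of the idea (my own construction, not anything from AGEIA's SDK or any real engine):

// Minimal desync check: hash the gameplay state each tick and compare against
// the server's hash. Illustrative construction only.
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

struct BodyState { float pos[3]; float vel[3]; };

uint64_t hashState(const std::vector<BodyState>& s) {
    uint64_t h = 0xcbf29ce484222325ULL;                  // 64-bit FNV-1a offset basis
    for (std::size_t i = 0; i < s.size(); ++i) {
        unsigned char raw[sizeof(BodyState)];
        std::memcpy(raw, &s[i], sizeof raw);
        for (std::size_t k = 0; k < sizeof raw; ++k) {
            h ^= raw[k];
            h *= 0x100000001b3ULL;                       // FNV prime
        }
    }
    return h;
}

// Client side: if our hash disagrees with the server's, throw away the local
// prediction and snap to the authoritative snapshot the server sent with it.
bool needsCorrection(const std::vector<BodyState>& local, uint64_t serverHash) {
    return hashState(local) != serverHash;
}

int main() {
    std::vector<BodyState> bob(1), alice(1);
    bob[0].pos[0] = 1.0f; bob[0].pos[1] = 2.0f; bob[0].pos[2] = 0.0f;
    alice[0] = bob[0];
    alice[0].pos[1] = 2.1f;                    // Alice's box drifted
    uint64_t serverHash = hashState(bob);      // pretend Bob matches the server
    std::printf("alice needs correction: %s\n",
                needsCorrection(alice, serverHash) ? "yes" : "no");
}

It catches the divergence, but the correction itself is exactly the extra traffic and snapping described above.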

Re:Multiplayer (0)

Anonymous Coward | more than 8 years ago | (#15233984)

Most of the physics is done server-side for multiplayer games anyway, so having a physics processor on the client won't have as much impact, although on the server it could be very useful. On the client, it would be limited mostly to particle effects and other things that don't affect gameplay. OTOH, if the server is using one of these cards and, say, a truck blows into 100 pieces, each of them will have to be updated across the network, multiplying the bandwidth required to play.
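That multiplication is easy to ballpark. Assuming (my numbers, purely illustrative) each piece needs a compressed position plus orientation of about 28 bytes, sent 20 times a second to 32 players:

// Ballpark of the extra traffic from replicating 100 physics-driven pieces.
// Every per-entity size and rate below is an assumption for illustration.
#include <cstdio>

int main() {
    const double pieces        = 100;
    const double bytesPerPiece = 28;   // compressed position + orientation
    const double updatesPerSec = 20;   // a typical snapshot rate of the era
    const double players       = 32;

    double perClient = pieces * bytesPerPiece * updatesPerSec;   // bytes/s to one client
    double serverOut = perClient * players;                      // server upstream
    std::printf("per client: %.0f KB/s, server upstream: %.0f KB/s\n",
                perClient / 1024, serverOut / 1024);             // roughly 55 and 1750 KB/s
}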

Re:Multiplayer (0)

Anonymous Coward | more than 8 years ago | (#15234750)

imagine running Doom at full settings using software rendering.
It ran just fine on my 486DX.

Probably OpenSource underneath (1)

Usquebaugh (230216) | more than 8 years ago | (#15233794)

Does anyone know where their engine came from?

Has anybody seen this card in person?

This is something that open source could be doing. Are the ODE folks at http://www.ode.org/ [ode.org] responding to this?

My guess is that this engine is open source and running on some sort of FPGA. It would help if a standard along the lines of OpenGL could be drafted.

Forget games, there's a large market for physics models in design houses.
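For reference, this is roughly what ODE gives you on the CPU today: a minimal falling-sphere program against its public C API. Untested sketch; the init/cleanup calls are from recent releases, and you'd link with -lode.

// Minimal ODE (http://www.ode.org/) example: one sphere falling under gravity.
// Untested sketch; build against a recent ODE and link with -lode.
#include <ode/ode.h>
#include <cstdio>

int main() {
    dInitODE();                              // init/cleanup calls exist in newer releases
    dWorldID world = dWorldCreate();
    dWorldSetGravity(world, 0, 0, -9.81);

    dBodyID ball = dBodyCreate(world);
    dMass m;
    dMassSetSphereTotal(&m, 1.0, 0.1);       // 1 kg sphere, 10 cm radius
    dBodySetMass(ball, &m);
    dBodySetPosition(ball, 0, 0, 10);        // start 10 m up

    for (int i = 0; i < 60; ++i) {           // one simulated second at 60 Hz
        dWorldQuickStep(world, 1.0 / 60.0);
        const dReal* p = dBodyGetPosition(ball);
        std::printf("t=%.2fs  z=%.3f m\n", (i + 1) / 60.0, p[2]);
    }

    dWorldDestroy(world);
    dCloseODE();
    return 0;
}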

Re:Probably OpenSource underneath (0)

Anonymous Coward | more than 8 years ago | (#15233814)

"Has anybody seen this card in person?"

The PhysX card is essentially a less powerful version of the first Cell chip, the Broadband Engine.

Essentially it's banks of raw floating-point power that leave desktop PCs in the dust. Almost anything of interest in a modern game engine is some form of cranking through massive amounts of floating-point data.

Probably Envy underneath (0)

Anonymous Coward | more than 8 years ago | (#15233897)

"My guess is that this engine is OpenSource and running on some sort of FPGA."

Why would you guess that? Envy?

Probably shouldn't talk out of your ass. (1)

Homestar Breadmaker (962113) | more than 8 years ago | (#15234280)

What is with morons and FPGAs lately? It's not an FPGA. Nobody uses FPGAs for performance-sensitive tasks like this. And it's their own code, which works together with their own physics engine that has been around for quite a while (it used to be called NovodeX).

Re:Probably shouldn't talk out of your ass. (1)

sophanes (837600) | more than 8 years ago | (#15234783)

Yeah, this gives me the shits as well. Many people here seem to have gotten it into their little heads that FPGAs == programmable hardware == open source goodness, with little understanding of FPGA technology and its limitations.

FPGAs (especially the big, high-speed-grade devices) are *very* expensive, and while impressive gate density is being achieved, their performance is constrained by their internal interconnect topology and technology. Consequently, FPGAs are good for glue logic and specialised functions (e.g., bit-serial designs, network processors) but bad at most other things (e.g., general-purpose processors, signal processing, vector operations). More often than not a cheaper and faster solution is to just use a µC.

Re:Probably OpenSource underneath (1)

Mithrandir (3459) | more than 8 years ago | (#15234529)

I hang around on the ODE mailing lists and I've not seen any mention of this card, or even of implementing the Havok/NVidia-SLI style of setup.

The project stalled somewhat for a while after the main developer with write access to the head sort of disappeared. In the last month they've moved to SourceForge and Subversion, merged all the old unstable code into the head, and added a wiki. They're pushing very quickly towards a 0.6 release (0.5 has been out for about 2-3 years now). This has caused quite a bit of rejuvenation of interest recently, so maybe we'll see something along those lines pop up in the post-0.6 code?

Sounds great, but too expensive. (1)

supabeast! (84658) | more than 8 years ago | (#15233951)

I can see paying $300 for a 3D card - I've done it plenty of times - but $300 more to tweak out some physics effects? Not a chance for a gaming machine. They should get support for these things written into popular particle effects systems for video editors - $300 for real-time high-quality particles would sell like a charm in the visual effects world.

I'm guessing that Ageia is hoping for a buyout by Nvidia or ATI. Getting this technology built into GPUs would be a great selling point, and a great way to almost guarantee that the developer's chips would be the ones to end up in next-next-gen consoles.

Re:Sounds great, but too expensive. (1)

MobileTatsu-NJG (946591) | more than 8 years ago | (#15234849)

"I can see paying $300 for a 3D card - I've done it plenty of times - but $300 more to tweak out some physics effects? Not a chance for a gaming machine."

Depends on what they do with it. The biggest drawback to modern FPSes is that the environment isn't destructible. Supposedly, this type of card will help deal with that. Honestly, I was hoping that the PS3 and the 360 would usher in this new era, but so far it's looking like we're still another generation away from that.

In any event, if a few games capitalize on the use of this sort of processor, it'll become the next big thing to add to your rig. (or it'll become standard on video cards...)

"They should get support for these things written into popular particle effects systems for video editors - $300 for real-time high-quality particles would sell like a charm in the visual effects world."

Probably. Particle, rigid-body, and soft-body calculations are pretty expensive CPU-wise. Unfortunately, 3D companies these days don't seem to be taking advantage of these sorts of cards the way they could. I'm still wholly unimpressed with how little they use modern video cards. I imagine there are reasons for it, but it still stinks. I wouldn't expect anything like this to come to fruition any time soon.

Scientific Research (2, Interesting)

Bipedismaximus (713734) | more than 8 years ago | (#15233954)

So I understand this is for games, but could it help scientific research such as molecular dynamics or other physics simulations? What is the accuracy? What types of calculations can it speed up?

Re:Scientific Research (0)

Anonymous Coward | more than 8 years ago | (#15234007)

I read online once that Ageia is hoping to find uses for physics hardware beyond gaming; they are just trying to get established right now.

so low (1)

goarilla (908067) | more than 8 years ago | (#15234045)

This seems like wonderful tech, but I don't get why they are aiming so low. Physics is a vital part of games, yes it is ... but this makes me think they are only after the easy money. I do think that if the specs were open (call it GPL'd if you want), not only would the game market benefit from this tech but so would research centres, universities, ...

I'm curious about scientific applications.... (2, Interesting)

ShyGuy91284 (701108) | more than 8 years ago | (#15234274)

The physics simulation needed for a variety of scientific problems has always demanded incredible processing power (the Earth Simulator, for example). I'm wondering how accurate they can make this simulation, and whether it would beat traditional CPU-powered methods. It makes me want to compare it to the GRAPE clusters used for some highly specialized force-calculation research (I know the University of Tokyo and Rochester Institute of Technology have them).

Great for single player, bad for multiplayer? (2, Interesting)

Anonymous Coward | more than 8 years ago | (#15234528)

Everyone knows that computer technology just gets better and better as time goes on, but your ISP is still stuck in the past while the execs go out and play a few rounds of golf. How do they expect to run these huge physics calculations over the internet in a massive game like, say, Battlefield 2? I honestly don't know the first thing about physics or how this stuff gets across a network, but Counter-Strike: Source doesn't even let you take advantage of the 5-6 physics barrels in a map, and even those barrels are rumoured to cause plenty of lag! What kind of effect would a realistically modelled house demolition have on network performance? Is our shitty bandwidth gonna force us back to the gaming stone age on 8-player servers, with pretty physics as the only trade-off to make up for the missing 24 players?

Re:Great for single player, bad for multiplayer? (3, Insightful)

MobileTatsu-NJG (946591) | more than 8 years ago | (#15234864)

"How do they expect to run these huge physics calculations over the internet in a massive game like say for instance Battlefield 2?"

I can offer an uninformed theory. If an event such as "barrel at position X explodes" is passed to the other players, then the processing is done at the client end for all of the players. If the event is defined properly, they should all reach the same conclusion.

Unfortunately, as I'm writing this, I can start to see the problem. Okay, I apologize, but I'm going to do a 180 here. Imagine a car crashes through a brick wall and a hundred bricks go flying away. That alone should work fine. But if another player runs into the path of one of the bricks and it bounces off of him, suddenly it's no longer as predictable. His latency along with everybody else's latency means ONE of the computers has to make the decision of where everything goes. That, in and of itself, is probably okay, but then you have to pass a great deal more data along to let the other clients know what's happening.

So... yeah, I see your point.

This is a good idea but... (1)

sentientbrendan (316150) | more than 8 years ago | (#15234880)

this should definitely be a feature attached to the video card. Either that, or they should bundle physics accelerators with graphics accelerators. Also, as others have mentioned, it's important that we get a standard API for this if it's to catch on...

Really, it would have been a lot better to introduce this technology on a console than on the PC. If the PS3, for instance, were to come with this, developers would get a chance to play around with it in earnest and prove its usefulness, if any, to the consumer. As it is, I doubt there will be really strong support for it, and uptake will probably be pretty slow.