Slashdot: News for Nerds

3dfx Unveils Info Regarding Voodoo 4 & 5

Hemos posted more than 14 years ago | from the better-graphics-coming-your-way dept.

Graphics

A reader wrote to us about the latest press release from 3dfx regarding the Voodoo 4 and 5. The V4 and V5 will apparently be released in March of 2000. The V4 will be single-processor, while the V5 will come in both a consumer and a professional version, supporting up to 4 and up to 32 VSA-100 processors, and up to 128 MB and 2 GB of RAM, respectively. The V4 and V5 release is rolled in with the VSA-100 announcement - definitely worth checking out.


249 comments

hella lame (0)

Anonymous Coward | more than 14 years ago | (#1531613)

3dfx sux, first post (yet again) -- I suck worse.

two news for the price of one (0)

butfala (39229) | more than 14 years ago | (#1531614)

I'm posting for this one, hope it'll be replicated on the other :)

2GB of RAM!? (0)

Anonymous Coward | more than 14 years ago | (#1531615)

Sweet moses! 2GB of RAM! Is that right!? hehehe

Oh baby! (1)

Rob the Roadie (2950) | more than 14 years ago | (#1531616)

For the consumer market, products based on the VSA-100 deliver from 333 megatexels/megapixels per second up to 1.47 gigatexels/gigapixels per second fill rates using 16-128 MB of video memory and one to four processors per board.

This has got to be the greatest bit of kit I've seen in a long time!

Powerful? Without question!

Linux support? I sincerely hope so!

post (0)

Anonymous Coward | more than 14 years ago | (#1531617)

first

we'll see...oh and NVIDIA rules (1)

JEDi_ERiAN (79402) | more than 14 years ago | (#1531618)

We'll see when this board's released if it truly has these specs. Plus, don't forget about Nvidia; they have released new hardware so quickly this past year that I wouldn't put it past them to have a Voodoo4/5 killer out by that time (or before). The GeForce shows promise, but we'll have to wait until software is coded for these chips in order to see it perform in all its glory.

>two news for the price of one (0)

Anonymous Coward | more than 14 years ago | (#1531619)

Now look what you have done! You scared the other one away.

Why is everyone so excited? (3)

The Wing Lover (106357) | more than 14 years ago | (#1531620)

I can't figure out why everyone is so happy about 3dfx putting out another Voodoo chip. They're pushing a proprietary interface (Glide) when a perfectly good standard (OpenGL) already exists. They're using market pressure to get game manufacturers to adopt their standard, and lawsuits against developers who try to write Glide wrappers so that Glide-only games can be played on other video cards.

Doesn't this sound a bit like another company that everyone is up in arms about?


- Drew

The usual... (3)

pen (7191) | more than 14 years ago | (#1531621)

Let's get this off our chests...
  • Linux support?
  • This is just another trick by Microsoft
  • Wow, I'd like to see a Beowulf cluster of these..
Personally, I think that this is great... let's hope iD is keeping up and giving us RealGuts(tm) in Quake 4.

Remember how the cheapo motherboards used to be able to allocate some of the system RAM for video RAM? It would be pretty funny if these cards could do the opposite.

--

SMP video (1)

wakko (16983) | more than 14 years ago | (#1531622)

I've heard of SMP computers, but SMP video cards?

Looks like we'll have to all compile our kernels for SMP machines now.
--

Hehe...this guy's funny (0)

Anonymous Coward | more than 14 years ago | (#1531623)

But I do like a sense of humor. It's nice to see 3dfx sticking to the Glide standard rather than some proprietary OpenGL nonsense that doesn't port anywhere. Ever tried to run a TNT in anything except Windows? The 3D works like shit. Sure, Glide might be slower in Linux/FreeBSD than Windows, but it is a simple, fast, and more efficient protocol than the crap that nvidia is turning out any day. But don't fret, nvidia users, you own a very nice 32MB card that can do wonders on a 2D desktop... look at X go!

umh...crap? (3)

marcos76 (113405) | more than 14 years ago | (#1531624)

3dfx unveils good numbers for its new 3D architecture... but it lacks geometry acceleration. Do you still want 200 fps at 1600x1200 in 32-bit with 3 big polys per frame? :) No thanks, I'd prefer a lower fill rate and a higher polygon count.

jaw drops (1)

lubricated (49106) | more than 14 years ago | (#1531625)

This thing is incredible. My jaw dropped to the floor and didn't stop till it went down a few floors. This is the first time 3dfx will put something out that is good for more than just gaming. They are using a completely new architecture this time. It's about time; I was wondering when the kings of 3D would retake their crown.

About time. (0)

Kid Zero (4866) | more than 14 years ago | (#1531626)

I can't wait to see how this card works. It has to be better than Nvidia's unsupported T&L, Matrox's unsupported Environment Bump Mapping, and the now-ancient Riva TNT2 Ultra. I doubt Nvidia will have anything out by then that can touch this.

Re:The usual... (1)

jonnythan (79727) | more than 14 years ago | (#1531627)

That's great and all, but these little babies are wonderful Beowulfs all by themselves :)))

serious question (1)

Bocephus (6835) | more than 14 years ago | (#1531628)

Tom's Hardware noted that the new parallel-processing video card from ATI, the Rage MAXX, has to wait 2 frames to accept a user input, as opposed to 1 for most single-chip solutions. Even though SLI has every chip working on the same frame, does it still suffer from the same delayed-input problem?

It wouldn't have been a problem in the days of the Voodoo2 SLI setup, since any player with one could get frame rates typically twice as fast or faster than pretty much every other 3D accelerator out there, so a 2-frame lag would take the same or less time than a single-frame delay elsewhere. However, with the ungodly frame rates offered by a single GeForce 256 with Double Data Rate RAM, if there were a two-frame delay for someone with a Voodoo5 5500, in a LAN game of Quake III the Voodoo5 user would be toast.
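
A quick back-of-the-envelope on that point (the frame rates below are made-up examples, not benchmarks): pipeline latency is just the number of delayed frames divided by the frame rate, so a 2-frame delay at a high frame rate can still be shorter than a 1-frame delay on slower hardware.

    #include <stdio.h>

    /* Illustrative only: latency contributed by an N-frame render pipeline,
       latency_ms = frames_of_delay / frames_per_second * 1000. */
    static double latency_ms(int frames_of_delay, double fps)
    {
        return frames_of_delay / fps * 1000.0;
    }

    int main(void)
    {
        printf("2-frame delay @ 90 fps: %.1f ms\n", latency_ms(2, 90.0));
        printf("1-frame delay @ 40 fps: %.1f ms\n", latency_ms(1, 40.0));
        printf("1-frame delay @ 90 fps: %.1f ms\n", latency_ms(1, 90.0));
        return 0;
    }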

What? (1)

rikkitikki (91982) | more than 14 years ago | (#1531629)

Still no geometry acceleration?? Bah!

hrm... (4)

Haven (34895) | more than 14 years ago | (#1531630)

> Windows 95, 98, NT4.0 and Windows 2000 drivers: Allows you to run the Voodoo5 6000 AGP with all popular operating systems.

Okay, not only am I defending Linux on this one, I am also wondering where the Mac drivers are. If 3dfx wanted to have some incredible benchmarks they should write a Mac driver and throw it into a G4. They say the Voodoo5s aren't only for gamers, so why not port the drivers to the most popular graphics design platform?

Why does anyone care? (5)

jalefkowit (101585) | more than 14 years ago | (#1531631)

I fail to understand why this stuff excites people. I've always thought that the market for add-on 3D graphics cards was going to develop a lot like the market for add-on sound cards did, and so far I'm seeing nothing that indicates otherwise.

What I mean is -- consider for a moment how the market for add-on sound cards developed. Up to 1992, sound on the x86 PC was basically nonexistent, unless you owned a flaky almost-compatible like the Tandy 1000. Then the multimedia tidal wave hit and suddenly there was consumer demand for hardware sound support -- and a market sprang up to fill the demand.

Once the demand for sound cards sprang up, the market developed through 3 distinct stages in the next 5 or so years:

  1. Race for Market Position: Five thousand companies hit the market selling sound cards that are all completely incompatible with each other. Software developers pull their hair out trying to decide which to support. Consumers pull their hair out trying to decide which to buy. Eventually one (Creative Labs' Sound Blaster) ekes out enough sales to justify making it the default choice for software developers to support, which launches a virtuous circle of consumers buying it because that's what the software supports and developers supporting it because that's what the consumers have.
  2. Hegemony through De Facto Standards. Soon the virtuous circle described above means that, for good or ill, the Sound Blaster becomes the de facto standard in the marketplace. Other products either become Sound Blaster compatible or are consigned to the margins. Creative maintains its profit margins by releasing a new board every so often (SB, SB Pro, SB16, SB32), upping features and performance. But eventually the feature set becomes Good Enough (TM) for most users, and adding new features becomes a less and less compelling reason for consumers to upgrade. (In the sound card market, this happened, IMHO, with the release of the Sound Blaster 16.) This puts downward pressure on prices, which broadens the market for these Good Enough products (and strains the market for the latest and greatest), which leads to...
  3. Integration and Commoditization. The fact that suddenly the hardware is cheap enough for everyone to own leads to integration -- the Good Enough hardware starts to become part of the motherboard, and the software APIs get rolled into the OS. This effectively kills the mass market for upgrade hardware -- if you can get a Good Enough sound card built right into your PC at the point of purchase, why spend $200 for the Latest and Greatest, especially since you'll never use most of those snazzy features anyway?

So this is where we are today in sound cards -- while a few enthusiasts care about buying the latest Sound Blaster Live! or whatever, the vast majority of users are happy with the 16-bit audio that's hardwired into their motherboards. It's Good Enough!

And that's what's going to happen in the 3D card marketplace, IMHO, fairly soon. We've already passed through stage 1 (I remember agonizing over whether to buy a Voodoo1 or a Rendition Verite card) and stage 2 (with 3Dfx milking their brand name for all it's worth through the Voodoo3). But now Good Enough 3D hardware is starting to come integrated on motherboards, and 3Dfx's Voodoo-only APIs have been almost entirely forsaken in favor of Direct3D, which is integrated into the OS. I've run 3D games on cheapo PCs using this integrated hardware, and while the performance isn't great, it's Good Enough -- while the add-on card companies fight over which card can provide 80 fps in Q3Test, or other "features" which would be lost on the average consumer anyway. So watch for it -- in a year I'd be amazed if there's still a market for whizbang add-on cards. Most people will be just fine with the Voodoo2-level hardware they'll get free with their PC.

-- Jason A. Lefkowitz

Divided (3)

Hobbex (41473) | more than 14 years ago | (#1531632)


Upon seeing the specs for that baby, part of me just screams I want it, but the other, more rational part of me wonders what the point really is.

I mean, great: gigatexels per second. As much RAM as I currently have on my motherboard. Meaning what? I can now play Quake3 at 4,000x3,000 resolution? Yay. Yes, I know about anti-aliasing, but this is overkill even for that unless you're running very high resolutions (1024x768 and above).

Read my lips: 90% of all speed problems with games on current hardware are the geometry setup bogging down the processor. Unless you play at the above-mentioned resolutions, or happen to have dual Athlon 700s and are playing at 100 fps already (and if I am right in assuming that this does not have a geometry chip like the GeForce), this card will be exactly 0% faster for you.

In my opinion Nvidia have taken a much wiser approach to the whole 3D acceleration question, concentrating on the weakest point instead of just pouring in endless amounts of pixel fillrate that the processor can't feed anyway unless you are staring at a blank wall.

-
We cannot reason ourselves out of our basic irrationality. All we can do is learn the art of being irrational in a reasonable way.

Re:Why is everyone so excited? (1)

redd (17486) | more than 14 years ago | (#1531633)

Just a bit of pedantry..

Glide significantly out-performs OpenGL (OpenGL is just too complicated, but I thought everyone knew this).

As for the licensing.. I know :-(

roll out crystal space.. :-)

Re:Divided (1)

jonnythan (79727) | more than 14 years ago | (#1531634)

This is great, and I totally agree with you, but I'd just like to know where your "90%" stat comes from.

Re:not new (0)

Anonymous Coward | more than 14 years ago | (#1531635)

This 'SMP video' is not new at all.

Look at the Voodoo 1 :
- 1 pixelFX
- 1 texelFX

Now Voodoo 2 :
- 1 pixelFX (but running a bit faster)
- 2 texelFX (work on the same pixel, apply different textures, that's multitexturing)

Voodoo 2 was just a 'SMP' version of Voodoo 1.

The ATI Rage Fury Maxx uses the same technique to boost frame rates (two Rage128 chips working in parallel).

Details... and analysis (3)

bwoodring (101515) | more than 14 years ago | (#1531636)

The page with detailed info concerning these boards is www.3dfx.com/prod/voodoo/newvoodoo.html

The really interesting thing is that *once again* 3dfx has promised us more than it will deliver. On the low end (Voodoo4 4500) these babies are getting smoked by the GeForce 256, which will be half a year older! The GeForce can do 480 megapixels per second, about 1.3 times as fast as a Voodoo4 (which clocks in at 367 megapixels per second).

If the past is any indication it will take at least a few more months beyond that for the Voodoo5 to be released (ignore what 3dfx says), by which time Nvidia will probably already have a better card.

In summary, the Voodoo4 is slower and less feature-rich than the GeForce 256, plus it won't be out for 4 more months. It could take longer for the Voodoo5, which will probably be an anachronism before it is released.

Come on 3dfx! This is *not* the technology that will keep us ahead of the PSX2!!!
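
For what it's worth, the arithmetic behind the "about 1.3 times as fast" figure, using only the fill rates quoted above:

    #include <stdio.h>

    int main(void)
    {
        /* Fill rates in megapixels/sec, as quoted in the post above. */
        double geforce = 480.0;
        double voodoo4 = 367.0;

        printf("GeForce 256 vs Voodoo4: %.2fx the fill rate\n", geforce / voodoo4);
        return 0;
    }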

Re:serious question (0)

Anonymous Coward | more than 14 years ago | (#1531637)

Simply said, no. SLI (Scan Line Interleave) is not the same process as the one the ATI cards are using. Basically, each card is working on the same frame at the same time: one draws the even lines, the other draws the odd lines. So once a frame is done, you can take user input.
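
A minimal sketch of what that scan-line split amounts to (the real hardware does this in silicon, of course; this just shows the assignment):

    #include <stdio.h>

    int main(void)
    {
        int height = 8;   /* tiny example framebuffer height */
        int y;

        /* Both chips work on the same frame: one takes the even scanlines,
           the other takes the odd ones. */
        for (y = 0; y < height; y++)
            printf("scanline %d -> chip %d\n", y, y % 2);
        return 0;
    }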

Memory technology? (1)

spinkham (56603) | more than 14 years ago | (#1531638)

I've been waiting to replace my old 90 Mpixel Voodoo2 (which is still pretty decent on most games, really)...
If Linux drivers come out, I'll probably go for the Voodoo5 6000 quad beasty.. That should hold out for a while..

Anyone find anything on the memory technology yet?
This card would have to have some massive memory bandwidth to keep up with those fillrates..
I know even the SDRAM GeForce is memory bandwidth limited at a much lower fillrate.
It appears from the specs that each graphics chip has its own dedicated 32 megs (with up to 64 being addressable by each chip), so that is one memory trick I'm sure they are using.. Any other details?

(Anyone notice that 3dfx, long saying "32 bits doesn't make much difference" cuz they didn't have the technology, is now pushing it ;-)

Re:Why does anyone care? (1)

Anonymous Coward | more than 14 years ago | (#1531639)

And nobody will ever need more than 640k...

24 bit depth buffer (0)

FurmanJ (93518) | more than 14 years ago | (#1531640)

Unfortunately a 24-bit z-buffer will not cut it for any sort of advanced application.

Old SGI Indigos (1991!) had a 24-bit z-buffer and one could frequently see through polygons.
JJ
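
A back-of-the-envelope sketch of why 24 bits runs out of precision with a conventional 1/z-weighted depth buffer; the near/far planes here are made up purely for illustration:

    #include <stdio.h>
    #include <math.h>

    /* Approximate eye-space size of one depth-buffer step at distance z,
       for a standard perspective (1/z-weighted) depth buffer. */
    static double z_step(double z, double znear, double zfar, int bits)
    {
        double steps = pow(2.0, bits) - 1.0;
        return (zfar - znear) * z * z / (zfar * znear * steps);
    }

    int main(void)
    {
        double znear = 0.1, zfar = 10000.0;   /* assumed planes, not from the post */

        printf("24-bit z, step at z = 100      : %.5f units\n",
               z_step(100.0, znear, zfar, 24));
        printf("24-bit z, step at the far plane: %.1f units\n",
               z_step(zfar, znear, zfar, 24));
        return 0;
    }

With a large far/near ratio the steps near the far plane get huge, which is exactly when polygons start poking through each other.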

Re:Hehe...this guy's funny (2)

FauxPasIII (75900) | more than 14 years ago | (#1531641)

WOW !! You are amazing, I've never seen such a high concentration of incorrectness in one post. Let's do this blow-by-blow, shall we ?



> It's nice to see 3dfx sticking to the Glide standard rather than some proprietary OpenGL nonsense that doesn't port anywhere.

Glide is a proprietary interface owned and VIOLENTLY copyrighted by 3dfx. They sue the ass off of anybody who tries to figure out how it works, make a wrapper, etc. Search the Slashdot archives for details. OpenGL, on the other hand, is an open, complete graphics API that is FULLY portable to basically ALL platforms.



> Ever tried to run a TNT in anything except Windows? The 3D works like shit.

Oh, then I guess the fact that I was just playing Quake 3 was just my imagination, because it looked great at 1024x768x32 bit color. I could have SWORN I got a TNT2.
Try updating your drivers.


> Sure, Glide might be slower in Linux/FreeBSD than Windows, but it is a simple, fast, and more efficient protocol than the crap that nvidia is turning out any day.

nVidia didn't 'turn out' OpenGL, SGI did. And Glide actually works FASTER in Linux than in Windows. Even when you're making concessions, you're blatantly wrong. Dolt.


> nvidia users, you own a very nice 32MB card that can do wonders on a 2D desktop...look at X go!

The only part of your post with a shred of accuracy. The 2D on this card is almost as impressive as the 3D.

Re: The answer (1)

Anonymous Coward | more than 14 years ago | (#1531642)

Why should they bother writing Mac drivers when Macs don't even have an AGP slot to plug the card into??

Question. (0)

Anonymous Coward | more than 14 years ago | (#1531643)

Does anyone out there have any idea what comes after geometry acceleration, if anything?

I am... (3)

Anonymous Coward | more than 14 years ago | (#1531644)

a typical Slashdotter...I will pay $5000 for a video card, but $40 for Word Perfect is a travesty, because I can't see the source code.

Re:About time. (0)

Anonymous Coward | more than 14 years ago | (#1531645)

Er? Unsupported? Wow, didn't know I was doing unsupported OpenGL transformations. It's a wonder any of my programs run. Er, no, wait, someone forgot to label the post flamebait. Damn! I guess I didn't need to go recompile everything with the magic Transformation and Lighting flag set to true (DTL_FINALLY_SUPPORTED)...

Re: No (0)

Anonymous Coward | more than 14 years ago | (#1531646)

All processors are working on the same frame.

So you don't have delayed input.

BUT performance wise, the ATI approach is better.

Re:Why does anyone care? (2)

jonnythan (79727) | more than 14 years ago | (#1531647)

Dude... I don't want to touch any of that. First, your eyes can take in and process orders of magnitude more information, bit for bit, than your ears. Sound cards these days are approaching levels of auditory saturation... and pushing standard, affordable speaker setups to their absolute limits. Hence the leveling off of sound card innovation.

Now..... ugh. Your argument is the equivalent of saying that people will be happy with their PIII 550s next year and won't ever need to buy an Athlon 1 GHz. This... makes no sense at all. Video cards still have a LONG LONG way to go before they reach the maximum levels of performance they can achieve with monitors. Until we have a graphics card that processes information on a pixel-by-pixel basis with 64 million colors and real-world geometry at over 1600x1200 resolution at 60 fps, we won't be anywhere near graphics cards leveling off.

Sound cards have already pushed speakers to their limits. Video cards have a long, long way to go.
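
Some rough numbers behind that (the overdraw factor and color depth are assumptions, not anyone's spec):

    #include <stdio.h>

    int main(void)
    {
        double width = 1600.0, height = 1200.0, fps = 60.0;
        double overdraw = 3.0;         /* assumed average overdraw per frame */
        double bytes_per_pixel = 4.0;  /* 32-bit color, ignoring z and textures */

        double pixels_per_sec = width * height * fps * overdraw;
        printf("Fill rate needed  : %.0f Mpixels/sec\n", pixels_per_sec / 1e6);
        printf("Color writes only : %.2f GB/sec\n",
               pixels_per_sec * bytes_per_pixel / 1e9);
        return 0;
    }

Add z reads/writes and multiple texture fetches per pixel and the bandwidth demand multiplies several times over.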

Re:Why is everyone so excited? (2)

Haven (34895) | more than 14 years ago | (#1531648)

don't forget about the 3dfx mini-opengl driver.

Re:It appears you are a bit confused... (0)

Anonymous Coward | more than 14 years ago | (#1531649)

You're the one who's wrong. Glide is a proprietary API that 3dfx cooked up to lock people into their proprietary standard. While OpenGL is relatively new on windoze, it's the open standard the Big Boys use in the Unix world, for serious stuff.

Current Generation Good Enough? (1)

GoofyBoy (44399) | more than 14 years ago | (#1531650)

>But now Good Enough 3D hardware is starting to come

For most people, "Good Enough" hardware is here right now. A huge majority of gamers should be happy with the current generation of 3D cards; the improvements between generations are getting smaller and smaller. Also, the limiting factor in multiplayer is not the 3D card but your modem.

Maybe it's just me, but pretty pictures are impressive for the first hour. After that I want game play. It's the software that is more important to me.

I still have to go home and see how Q3Demo plays on my machine :)

3dfx picks the "right" features (1)

billybob jr (106396) | more than 14 years ago | (#1531651)

How do you know more geometry will be better? I'm not sure myself, but IMO 3dfx has been king, more or less, up to this point because they chose the best features to include. That is the heart of engineering: deciding what to include and what to sacrifice, and which trade-offs are the best trade-offs. Other manufacturers have included more and "better" features, yet when you look at price and speed and quality, I feel that 3dfx has been the best so far. Although the TNT2s are very good products also, and I'm sure the GeForce will be too.

What makes geometry better than fillrate?

Re:Question. (1)

spinkham (56603) | more than 14 years ago | (#1531652)

More T&L...
Right now it's just a little triangle setup and lighting; there is much more that can be done in hardware..
And we need faster T&L, faster memory interfaces, more fillrate, etc...

Multi-processor based video cards? (1)

Cyberllama (113628) | more than 14 years ago | (#1531653)

I wonder if this heralds an era in which a video accelerator has several processors dedicated to individual tasks(i.e. one for bump mapping, one for texture mapping, etc.) We could be on the virge of very fast graphics indeed.

Re: The answer (2)

Haven (34895) | more than 14 years ago | (#1531654)

makes sense... but they could port the PCI versions...

Re:Why does anyone care? (1)

jonnythan (79727) | more than 14 years ago | (#1531655)

That sounded bad... what I meant to get across was that video cards have a long way to go before they saturate the capabilities of monitors. Saturating a monitor's resolution is already possible, but graphics cards can, in theory, virtually mimic the real world on a 2D surface given good algorithms and ample processing power. I'm not too sure about how this would be done... but they need tremendous amounts of power and clock cycles to mimic the intricacies of the real world. The evolution of graphical power will continue to move quickly and steadily as we progress, just as the power of CPUs will.

Re: The answer (1)

nosferatu-man (13652) | more than 14 years ago | (#1531656)

Check your facts. [apple.com]

Re:serious question (1)

b0b0 (87979) | more than 14 years ago | (#1531657)

The GeForce 256 has proven itself to be no more than a logical extension of the TNT2 line of cards. The drivers are even the same for both cards. My overclocked Voodoo3 3000 is competitive with the GeForce 256 up until >1280x1024, and my poor card doesn't have a fabled 'nVidia GPU'. 3dfx know what they are doing, and to all my fellow gamers out there, save your ducats until March. Quake3 or UT on a quad-processor 128MB card? Oh ya.....

Re:I am... (0)

Anonymous Coward | more than 14 years ago | (#1531658)

Many of the /. old guard (as opposed to the teenage Windows trolls that Linux's increasing popularity has sent our way) understand the philosophy behind open source software. It seems that you do not. Most mature Slashdotters would happily pay for an open-source product, not because they have to, but because they want to. This is actually pretty much like the closed source world, since the vast majority of people have copied their software from someone else rather than pay the producer - but there are still plenty of people who pay the producer. They like having a printed manual. They like having support. They like having the software on its own CD, in case of system failures.

Sound is different to 3D graphics (1)

xmedar (55856) | more than 14 years ago | (#1531659)

If you have seen ray-traced graphics and compared them to the current crop of 3D graphics accelerators, then you will realise there is a huge difference in quality, and it will take many years before you have that level of graphics in realtime; maybe then it will end up as an integrated part. I would guess we are 5 to 10 years away from that.

Re:Details... and analysis (0)

Anonymous Coward | more than 14 years ago | (#1531660)

"Come on 3dfx! This is *not* the technology that will keep us ahead of the PSX2!!!" The PSX2 has a 2.4 gigapixel per second fillrate with Z-buffer and alpha blend enabled, and a polygon rate of 75 million/sec. Some way to go yet :) The graphics chip of the PSX2 (the "Graphics Synthesizer") consists of 43 million transistors.

Re:jaw drops (0)

Anonymous Coward | more than 14 years ago | (#1531661)

Huh? Good not only for gaming? What else would you use this card for? It has a huge fillrate, but doesn't offload geometry processing from the CPU. Professional graphics (CAD and modelling) involve huge numbers of shaded polygons, but relatively few textures. Games, on the other hand, like a few big polygons with multiple textures. Games want fillrate; professional apps want triangle rate.

multiple os support, pmesa? (0)

Anonymous Coward | more than 14 years ago | (#1531662)

Yes, read the FAQ: it says it will have BeOS and Linux support! Let's see if pmesa will support it.

re: 'heart of engineering' (2)

SethJohnson (112166) | more than 14 years ago | (#1531663)

That's what it might look like at face value, but I have rarely found that engineering makes these decisions:
> because they chose the best features to include
This is usually done by the marketing department, and often to the chagrin of the engineers, who would like to think they know better...

Seth

Nothing new. (2)

Skinka (15767) | more than 14 years ago | (#1531664)

Examples of "SMP video":
  • Quantum's dual Voodoo1.
  • 3dfx Voodoo2 (SLI)
  • ATI Rage Fury Pro 128 Max Turbo Fast Thing, or whatever the hell it is called.
  • Bitboys Glaze3D (if/when it comes out).
Everyone implements it a bit differently, but the idea of SMP video is not new. CAD people, who use cards that cost more than a decent car, have had this stuff forever.

Re:The usual... (0)

Anonymous Coward | more than 14 years ago | (#1531665)

RAM sitting on the PCI bus? Uh, no thanks. Not only do you have to go through PCI, but the PHB also (PCI Host Bridge, not the *other* PHB).

AGP might be better for this, but still, isn't AGP optimized for writes? Most processor ops are reads so this would be ugly.

RE: YOUR an idiot. (was Re:Why does anyone care?) (0)

Anonymous Coward | more than 14 years ago | (#1531666)

That made me laugh! Actually, you're an idiot for not knowing how/when to use the contraction of "you" and "are". Duh. Go back to grammar school.

Re:The usual... (0)

Anonymous Coward | more than 14 years ago | (#1531667)

it was a joke...pay attention. The poster wasn't serious.

Re:Memory technology? (1)

spinkham (56603) | more than 14 years ago | (#1531668)

Ok, on page 54 of their Quantum3D PDF presentation, it mentions the Quantum3D Alchemy (the high-end cards with 8-32 processors) and its memory bandwidth of "over 100GB/sec".
I assume this is extrapolated from the 32-processor version, each chip with its own memory interface..
That means memory bandwidth per processor is about 3 GB/sec.
Methinks this will be a bit memory limited in games that don't use its texture compression...
However, if games start supporting 3dfx's texture compression, they should fly...
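
The arithmetic behind that per-chip estimate (only the "over 100GB/sec" figure comes from the presentation; the bytes-per-pixel guess is an assumption):

    #include <stdio.h>

    int main(void)
    {
        double total_gb = 100.0;   /* quoted for the 32-chip Quantum3D Alchemy */
        int chips = 32;
        double per_chip_gb = total_gb / chips;

        printf("Bandwidth per chip: ~%.1f GB/sec\n", per_chip_gb);

        /* Rough guess: a 32-bit color write plus a 32-bit z read and write
           is ~12 bytes per pixel before any texture fetches. */
        double bytes_per_pixel = 12.0;
        printf("Untextured pixels that supports: ~%.0f Mpixels/sec\n",
               per_chip_gb * 1e9 / bytes_per_pixel / 1e6);
        return 0;
    }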

And the cycle continues... (1)

Pufferfish (100833) | more than 14 years ago | (#1531669)

I can't see why people are up in arms about this particular release; it's just the continuation of the graphics-card cycle. nVidia, S3, and the rest will continue to release new chipsets along with 3dfx. 3dfx is far from king of the hill: namely, they release products at such a slow rate that for much of the time it's better to get a TNT2 Ultra, for instance, instead of a Voodoo3.

Sometimes 3dfx will be on top, sometimes nVidia will be reigning champion (the NV15 is in the works, I hear...)

The Savage2000 and ATI Maxx are almost out...it really doesn't matter. These cards are way more powerful than anyone really needs, or will need, for a while. Until software comes out that really needs a couple hundred megatexels a second, I don't really care who's on top of the hill (and by that time, there will be even more powerful cards coming out)...

Re: these cards come in PCI (1)

SethJohnson (112166) | more than 14 years ago | (#1531670)

And not only that, but there are Voodoo 3 drivers available in Beta from 3dfx for those cards to run on Macs. I'd imagine they will continue evolving that code base to support the 4 and 5 cards on the Mac platform as well.
Seth

Yeeeow! (1)

schnurble (16727) | more than 14 years ago | (#1531671)

OK. The neat little Shockwave Flash thinger on the 3dfx website says "So powerful, it's kind of ridiculous." Actually, I find it highly ridiculous.

Come on. Do we -really- need to be able to frag at 3200x2400x32bpp at 60 frames per second? Well, ok, I guess piping this to a 57" big screen TV would be nice.

But seriously, isn't this approaching overkill? I find Quake II to be quite fun on a Voodoo3 3000 AGP. Granted, my shitty 14" monitor is the limit, and is why I'm only running at 8x6. But alas.

finally (2)

Haven (34895) | more than 14 years ago | (#1531672)

Now do we have the hardware to support Virtual Reality?

Re:Question. (1)

Haven (34895) | more than 14 years ago | (#1531673)

speed

Re:3dfx picks the "right" features (0)

Anonymous Coward | more than 14 years ago | (#1531674)

For "serious" uses, and for more realistically curved organic models, geometry is more important. If you're making a games card (and 3dfx is), then fillrate's fine - after all, you'll mainly be rendering Metallo-Clank-o-Matic Killer War Robots etc. to a fullscreen display. However, in the future you may get more organic-looking games. Kingpin, for example, looks way better on a Matrox G400 than on a Voodoo3. Of course, if no one has high poly-count cards, then these games will be longer in coming, because the market won't be there.

Also, for real work, i.e. CAD and film work, you want lots and lots of very small polygons.

Re:Divided (1)

DonFarfisa (61785) | more than 14 years ago | (#1531675)

I'm not trying to be flamebait, but I think most "hardcore gamers" decide how great a card is before they even try it. If the numbers are high, it rules. 2GB sounds awfully extreme to me, and can the architecture on the card handle all of that? I mean, what kind of game has 2GB of textures in it? I guess you could write several "versions" of each texture, but would that make it much faster than just re-rendering it? I'm curious to see if there's much difference. According to numbers, it should look amazing, but I'm not expecting much.

Re:PCI Version Already Supports Macs (2)

ecampbel (89842) | more than 14 years ago | (#1531676)

Macintouch [macintouch.com] is reporting that the PCI version is already supported by the company's existing Macintosh drivers. You can read the FAQ [3dfx.com] yourself. No doubt, if there is enough interest, drivers for AGP Macintoshes will be forthcoming.

Each are gambles (2)

Stiletto (12066) | more than 14 years ago | (#1531677)

T&L is a gamble. High fillrate is a gamble. Bump mapping is a gamble. Any new feature a chip manufacturer puts on their chip is a gamble.

No one knows what game companies are going to try next.

There is no way of telling whether hardware transformation and lighting is going to make any difference at all in future games. Sure, nvidia is going to tell you that future games will depend on it! There is no way of knowing that 5 gazillion texels/sec is going to really make much of a difference to future games, although 3dfx doubtless wants you to think that. No one knows whether game companies are going to stuff their games with bump-mapped polygons, no matter how much Matrox tells you it's the truth.

The point is, each of these hardware developers is hedging its bets that game companies will favor its technology.

As for us consumers, I would take a "wait and see" approach. I'd never go out and buy the latest and greatest until I see what games run well on them and what games don't. Specs from pre-release hardware are meaningless, and even released hardware that runs a FEW games spectacularly is nothing to base a purchase on.

Look for the architecture that stands the test of time, and has support for the platform and games you play.

Re:Why does anyone care? (0)

Anonymous Coward | more than 14 years ago | (#1531678)

I disagree with you. Many of the key problems in computer graphics exhibit greater than linear complexity -- for instance, global lighting, shadows, multiple reflections. Additionally, the human visual system is extremely acute. For instance, it takes a number of hours for Pixar's rendering farms to output a single frame of Toy Story 2 and, while TS2 is startlingly realistic, no one would mistake it for reality. I think the hard-core gamers/graphics afficianados (sp?) are going to be willing to pay the $200+ premium for a high end card until the day comes when a computer can real-time render images of a comparable quality to live-action movies, which isn't going to happen anytime soon (I would guess not in the next decade at least). However, I agree that the public at large is not going to be joining that group -- as you say, a typical MB or bundled card is now as good as a 1-2 yr old high-end card. Additionally, between game consoles, sub-$500 PCs and the emerging thin clients, I think the $200 video card premium is going to start being a lot harder for non-fanatics to accept.

Re:Details... and analysis (1)

MassacrE (763) | more than 14 years ago | (#1531679)

The GeForce 'chip' does 480 megapixels/sec, true, but without textures. The RAM on the GeForce boards is not fast enough to serve textures at that speed. When the double-data-rate RAM boards come out, it will actually go at 480, but in the meanwhile this is a meaningless number - you are limited by the speed of texture and pixel data through memory before you get close to 480 Mpix/sec.

I would say they are about the same speed (before the GeForce gets new RAM), but that is disappointing considering that the 3dfx also cannot do geometry.

Re:Question. (0)

Anonymous Coward | more than 14 years ago | (#1531680)

Hardware b-spline surfaces and NURBs?

I saw nothing about fast writes (1)

Alfthemack (17146) | more than 14 years ago | (#1531681)

In order to push that many pixels/texels, fast writes will need to be enabled, since the current memory bus is just too slow.

Also, is the current PC-133 MHz standard fast enough (for workstations, not gamers) to send 2GB worth of textures?

I don't know. That's why I'm asking you guys...
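
For rough scale, here is the theoretical peak of PC-133 SDRAM (real-world throughput would be noticeably lower, and the 2GB figure is just the full texture set mentioned above):

    #include <stdio.h>

    int main(void)
    {
        /* PC-133 SDRAM: 133 MHz x 64-bit bus, theoretical peak. */
        double peak_gb = 133.0e6 * 8.0 / 1e9;
        double textures_gb = 2.0;   /* a full 2GB texture set */

        printf("PC-133 peak bandwidth: %.2f GB/sec\n", peak_gb);
        printf("Best case to stream 2GB of textures: %.1f seconds\n",
               textures_gb / peak_gb);
        return 0;
    }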

Re:Multi-processor based video cards? (1)

monstar (62285) | more than 14 years ago | (#1531682)

>We could be on the virge of very fast graphics
>indeed.

virge, LOL! please tell me that was intentionally funny (the virge being a highly crap excuse for a 3d card)

Re:STUPID IDIOT (0)

Anonymous Coward | more than 14 years ago | (#1531683)

Winmodems do suck. Do you know what a winmodem does? It makes the CPU do the work that a $16 chip could do, for a 10% performance hit. Stupid, stupid, stupid. Also, winmodems destabilise your system, since they have to have privileged memory and CPU time.

BTW, USB mouse and keyboard support is in Linux kernel 2.3 as standard now, and can be patched into kernel 2.2 too.

USB mice are superior to standard serial mice (higher sampling rate). Winmodems are inferior to real modems. Your argument was, I hope, based merely on incomplete data, rather than any intent to disseminate misinformation...

Re:Yeeeow! (0)

Anonymous Coward | more than 14 years ago | (#1531684)

I'm not sure where you pulled those figures from, but you're dreaming. This new series of cards 'was targeting 1024x768x32 at 60 fps' according to the interview with Scott Sellers on their website. I'm assuming this is with all their t-buffer goodness enabled too (it must be, for such sh*tty numbers!). Overkill??? Far from it. Try to run next-gen stuff on that Voodoo3 kit of yours and you'll be a little disappointed. Trust me, I had the same card, and I was starting to see the limits of it. Graphics hardware will ALWAYS need to be faster because the software developers will fill the void with better textures/more polygons/higher resolutions/etc. If you don't believe this, maybe you should go back to playing Doom on your 386...

Re:Yeeeow! (1)

Snack Cake (71094) | more than 14 years ago | (#1531685)

Do you seriously think that we have reached the ultimate plateau of graphics excellence, and there is nothing more to achieve? These boards aren't designed for playing Quake II at 800x600; they are designed for Quake III and other yet-to-be-released games. Current PC games still don't compare to the quality of demos for the PSX2, which itself isn't on par with the quality of movie effects or animation (e.g., A Bug's Life, The Phantom Menace). Faster is still definitely better; the only question is whether 3dfx is making their cards fast the right way.

Re:Divided (1)

platypus (18156) | more than 14 years ago | (#1531686)

Well, you could implement a fully ram-caching apache in this thingy and blow nt in the next mindcraft test ;-).

It's fillrate, not HW T&L (0)

Anonymous Coward | more than 14 years ago | (#1531687)

I saw an interesting preview over on http://www.sharkyextreme.com/ this past week. The ATI Rage Fury Maxx (2 processors, each rendering a different frame) smoked the GeForce 256 in just about every test.

Funny, that ATI card doesn't have hardware T&L that everyone's claiming is going to end world hunger and lots of other stuff. So, why's the ATI card faster? It's got way better fillrate (aka polygon painting) than the GeForce. So, as benchmarked, the GeForce may have a spiffy T&L engine, but it's crippled by its slower fillrate on current games.

For now, I'd say that 3dfx is right: fillrate is king, T&L isn't worth spending the $ or transistors on until the fillrate problem is 'solved' (hah)

Re:I saw nothing about fast writes (0)

Anonymous Coward | more than 14 years ago | (#1531688)

It's fast enough if the textures are loaded onto the card at startup and left there until you're done. I couldn't imagine having more than 2 GB of texture data loaded into a scene at once. I imagine just because I said that, it'll happen soon enough. Anyway, that card (the 32-processor one from Quantum3D) is gonna be $40,000 (not kidding) anyway, so who cares. Are you gonna put one in for Quake 3?

What About When I'm Done Playing Games? (2)

Nightspore (102270) | more than 14 years ago | (#1531689)

All of these Nvidia GeForce/3dfx Voodoo 4 and 5 boards are technically amazing but this level of consumer 3D hardware is in desperate need of a new killer app. I have a Voodoo 1 and I'll likely be more than satisfied with the performance of Quake Arena on that thing. I simply refuse to drop hundreds of dollars more to play games at higher resolutions/framerates.

Why, when we all have a global network right in front of us that is ablaze with information and commerce, is no one strapping a hardware-accelerated 3D engine onto the net? When am I going to be able to navigate the web in 3D? When can I use my 3dfx or Nvidia board to do real work, or to shop, or to explore real information and news? Why is the web still 2D? Wake the fuck up. Screw VRML - I'm not even asking for any sort of server tech - just give me a fly-through 3D-abstraction of the HTML/XML content that is already there. If people know the engines are out there they will start to build for them.

Bottom line - I won't give 3dfx or anyone else more of my money to play another FPS. I will part with more money for 3D hardware when I can use a Voodoo 5 6000 to give me an Ono-Sendai Cyberspace VII-like window into the net.

Night

Do game developers want fill rate or T&L? (more) (1)

kinesis (13238) | more than 14 years ago | (#1531690)

Billy "Wicked" Wilson of Voodoo Extreme asked a bunch of high-profile devs exactly this question. [voodooextreme.com]

The majority response was that if they had to choose, they'd pick a card with accelerated geometry processing and a mediocre fill rate over a card with an insane fill rate and no geometry acceleration.

What does that tell you about the direction the game developers want to go? They want to build games with higher-polygon engines/content. My guess is that's what we're gonna see.

Idiot! You've been had. (0)

Anonymous Coward | more than 14 years ago | (#1531691)

No one's that dense. This guy was yanking your chain.

Caveats (2)

TraumaHound (30184) | more than 14 years ago | (#1531692)

First some quotes (from the press release):

Re: the Voodoo4
The boards, which render two fully featured pixels per clock, will deliver between 333 and 367 megatexels/megapixels per second

Re: the Voodoo5 5000/5500
The board, which renders four fully featured pixels per clock, will deliver between 667 and 733 megatexels/megapixels per second fill rate

Re: the Voodoo 5 6000
The Voodoo5 6000 AGP, which renders eight fully featured pixels per clock, will deliver between 1.33 and 1.47 gigatexels/gigapixels per second fill rate


Now, if you'll notice, they state how many "fully featured pixels per clock" each card delivers. Also, notice that the V4 does 2, the V5 5500 does 4, and the V5 6000 does 8. Along with that, as I guess one would expect, the V5 6000 has double the fillrate of the 5500, which has double the fillrate of the V4.

So? What's my point? Well, with the Voodoo2 -- which could render two pixels per clock -- the full fill rate was achieved only if the app was rendering two pixels per clock (i.e. multitexturing). If the app wasn't multitextured, the effective fillrate was actually only half the "marketing" fillrate. I think this was also the case with the Voodoo3, although I'm not positive.

I'm not saying that this is definitely the case with these cards, but:
  • Correct me if I'm wrong, but I think these cards are still based on the same basic architecture as the V1, V2, and V3.
  • 3dfx is somewhat notorious for advertising the higher "marketing" fillrate as opposed to the true fillrate.
  • The fact that they qualify the fillrate of each card by stating the number of rendered pixels per clock kind of worries me.

If this is the case, apps that don't take full advantage of the high-end cards (i.e. have less than 8-pass multitexturing) may leave you with nothing more than a glorified and expensive Voodoo4.
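
To make the worry concrete, here is the same what-if expressed as a little calculation. It assumes the advertised rate counts every texture unit on every pixel, which 3dfx has not confirmed; the Voodoo5 line below is purely hypothetical:

    #include <stdio.h>

    /* What-if from the post above: if the advertised fill rate assumes
       'assumed_tex' textures per pixel, an app applying only 'app_tex'
       textures would see a proportionally lower effective rate. */
    static double effective_rate(double advertised, int assumed_tex, int app_tex)
    {
        if (app_tex > assumed_tex)
            app_tex = assumed_tex;
        return advertised * (double)app_tex / (double)assumed_tex;
    }

    int main(void)
    {
        /* Voodoo2: ~90 Mtexels advertised, assuming dual texturing. */
        printf("Voodoo2, single-textured app   : %.0f Mpixels/sec\n",
               effective_rate(90.0, 2, 1));

        /* Voodoo5 6000: 1470 Mtexels advertised; *if* that assumes 8 per
           pixel, a dual-textured app lands right around Voodoo4 territory. */
        printf("Voodoo5 6000, dual-textured app: %.0f Mpixels/sec\n",
               effective_rate(1470.0, 8, 2));
        return 0;
    }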

The importance of PCI: dual-head kings of tomorow? (0)

Ricardo Casals (103689) | more than 14 years ago | (#1531693)

A lot of people don't realize how important PCI is for dual-head displays. And I really like 3dfx for keeping PCI products around. Why? This way we can have dual-head displays with two graphics cards of the same quality, one on AGP and the other on PCI, and they deliver a great amount of performance. Here is 3dfx, your dual-head display king of the next millennium. Nobody matches these babies. And that is why people shouldn't criticize PCI technology, at least until we can have two AGP slots (I don't see that happening in the mainstream any time soon). So stop bashing PCI; it will take 3dfx where nobody else can go! :-)

Linux IS THE 3D POWERHOUSE (0)

Anonymous Coward | more than 14 years ago | (#1531694)


Good lord, now this news... is there any reason to doubt how much of a 3D powerhouse Linux/Intel really is??

Heck, a properly configured Xeon machine with a TNT card smokes an SGI Onyx2 in most performance and 3D benchmarks. And SGI calls itself an innovator?? Please, they've done nothing for 3D/animation/graphics compared to what Linux has done for the world.

Look for Red Hat boxes to invade the movie scene soon...

Re:Why is everyone so excited? (1)

ddwalker (67412) | more than 14 years ago | (#1531695)

3dfx appeals to the greed in developers with their Glide API. By this I mean that the developers can port to Glide, and the framerates are 10-15% better than OpenGL or Direct3D. The reality is that if the developers want to kill the Glide API, they should just exercise some restraint and NOT code to it. It's not like 3dfx doesn't support OpenGL (although I have yet to see a fully compliant ICD, correct me if I missed it... since I use TNTs for development now) or Direct3D.

As for being happy about 3dfx putting out another competing 3D board... you better damn well believe I'm happy. I love competition... it makes BOTH of the leaders better in the long run. Think about how competition has improved those frame rates and image quality just over the past 3 years (that's even in spite of 3dfx's initial head start with Glide). Simply amazing really...

-- DW

Re:Why does anyone care? (1)

jmatthew3 (100802) | more than 14 years ago | (#1531696)

Yes, I'll agree with your assessment of the sound card market, and yes, there is always a group of followers who will be happy with older equipment, but graphics cards still have a long way to go before commoditization.

It's much easier to reproduce a sound that sounds real than a 3D image that looks real, and it won't be for another 5-10 years that the video card market becomes totally commoditized.

Now, the low end is already that way, and will slowly continue in that fashion for some time.

-jm3

Impressive...wrong, but impressive... (0)

Anonymous Coward | more than 14 years ago | (#1531697)

About 8 months ago, I saw two guys kissing in the park. That was the gayest thing I had ever seen until I saw the junk you're blowing out your hole.

Alright, here we go folks, one by one...

Starting off, OpenGL is slow. Period. While the licensing issues of Glide are a bit iffy, it sure as hell significantly beats the pants off anything else. This isn't something too open to debate, as benchmarks that we don't see from Mindcraft show us over and over and over...

3D works well with OpenGL in linux? Well that's news to my developer ears. I could have sworn the first thing you see in the video section of the UT demo readme is that anything attempting OpenGL is dog-ass slow. In fact, I seem to find that in _any_ game except Q3 (kudos to Carmack).

Glide is faster in Linux? Okay, lemme think...no. It's cut down to about 3/4 of the Win9x speed. Sometimes worse. Ever tried, bub?

I'm sure the 2D looks very nice, and I'm sure my 4MB generic Cirrus PCI card does a 1024x768x32bpp desktop just as nicely. Smart guy. No, just kidding. The poster that you're flaming might be a little out of touch, but come on man.

Re:STUPID IDIOT (1)

bornholtz (94540) | more than 14 years ago | (#1531698)

Do you know what a winmodem does? It makes the CPU do the work that a $16 chip could do, for a 10% performance hit.

Good. I saved $16. Why do I really care if it uses 10% of the CPU? My PC is idle most of the time anyway. Linux is single-user on my machine anyway.

Re:STUPID IDIOT (1)

billybob jr (106396) | more than 14 years ago | (#1531699)

Hmm... It's fair to say that most people don't use 100% of the CPU very often. Why is saving 16 dollars (the manufacturer's cost; the consumer's cost difference will be greater) at the expense of giving up 10% of your CPU so bad? It's not my choice personally. But for the general market, there's no reason not to save a few bucks and let a $50, 3- or 4-hundred-MHz CPU do the work.

Re:Why does anyone care? (2)

mikera (98932) | more than 14 years ago | (#1531700)

I agree with you about the initial stages, but not regarding the later ones. I don't think we are anywhere close to seeing commoditisation of 3D graphics cards.

The point is that sound cards are fundamentally simple. They pump out waveforms, and that's about it. The system isn't at all complex, and a 44kHz output is fine for human ears. Sound content is by and large recorded, so there's not a lot of scope for sound processing beyond a bit of filtering and a few special effects. It was obvious very early on that a CD-quality sound card was going to turn up soon, and that this would be enough for most people.

3D graphics cards are a completely different ballgame. Sure, they need to be able to reach sufficient resolutions, bit depths and refresh frequencies. No problem, 1600*1200*32 at 60Hz is about as much as a human eye needs. That kind of spec can and will become a commodity. But 3D rendering is insanely complex. Nothing in the world is even remotely close to being able to render complex realistic scenes in realtime. Even having a Pentium processor for every pixel wouldn't be fast enough to do complex raytracing. Even if we double rendering capabilities every year, we're still over twenty years away from being able to do this in realtime on a commodity platform.

The point is, there will always be scope for enormous innovation in the 3D cards market to implement new techniques. There are unlimited optimisations and innovations to be made. 3D cards are all about producing an approximation of a visual scene, and the winner will be the card that makes the best approximations, the most "realistic" scene, while maintaining some kind of compatibility with standards. Take, for example, hardware T&L. Good idea, now becoming feasible to implement. Once it catches on, cards without it won't have a chance. But even hardware T&L will get replaced as cards start to implement scene description languages etc. My guess is that you will see ever more processing delegated to the graphics subsystem until you have what is in effect a fully programmable, dedicated graphics computer. These will become the standard, and start to ramp up ever more powerful specs, rather like PCs at the moment.

Clearly the graphics supercomputer-on-a-card is a long way off right now. This means there is going to be technological "leapfrogging" for the foreseeable future. Sure, some people will be happy once technology gets to a certain point. These are the people who are already content with a decent hi-res 2D card to do their word processing and run a few business apps. But in the 3D arena, there will *always* be something that looks and feels a lot better just around the corner, and that is exactly what all the hardcore gamers will want to buy.

Still, I reckon that the first device to create photorealistic 3D scenes in realtime won't be a graphics card at all. It'll be a very clever genetic algorithm that "paints" the scene. Just give me ten years or so......
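
The "over twenty years" estimate is just doubling arithmetic; the millionfold gap below is an assumed round number for illustration, not a measurement:

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double gap = 1.0e6;         /* assumed shortfall vs. realtime photorealism */
        double years = log2(gap);   /* at one doubling of capability per year */

        printf("Closing a %.0fx gap at one doubling/year: ~%.0f years\n",
               gap, years);
        return 0;
    }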

Re:It's fillrate, not HW T&L (1)

rikkitikki (91982) | more than 14 years ago | (#1531701)

Yeah, I remember that. Not really a good test of the NV10. They were comparing fill rates at 640x480. The NV10 shines in fill rate when you start doing 1024x768, 1280x1024, 1600x1200, etc. Also, I don't remember any of the tests taking advantage of hardware T&L. All the T&L code for those tests is done in software, even if you have the hardware for it. So, again, those tests didn't really touch anything the NV10 does well.

Re:Each are gambles (1)

billybob jr (106396) | more than 14 years ago | (#1531702)

Exactly... 3dfx has done a good job in the past of making good trade-offs in their designs. I will definitely be waiting to see how these two technologies compare to each other before buying a card.

Ray tracing? (0)

Anonymous Coward | more than 14 years ago | (#1531703)

Genuine ray tracing, probably backwards at first, later forwards. Lots of parallelism, one or more chip/functional unit/whatever per screen pixel. Enough RAM to texture map the planet.

Winmodems (1)

Molly (32733) | more than 14 years ago | (#1531704)

I think the real reason that Linux has no drivers for Winmodems is not that they suck, but that the manufacturers have chosen not to make programming information available.

I haven't ever used a Winmodem, so I don't know how hard they suck, if at all.

Molly.

It's still bad engineering (0)

Anonymous Coward | more than 14 years ago | (#1531705)

Performance issues aside, Winmodems suck because they won't work without Windows, and their manufacturers won't give out enough specs to allow them to work with other operating systems. Is this a MS plot? Who knows. It seems like a better "low-cost" alternative would be to build the modem processor into the motherboard, like the sound chips and video processors on many bulk systems.

Re:Divided (1)

spinkham (56603) | more than 14 years ago | (#1531706)

This is because each chip has its own dedicated memory, in order to give decent overall memory rates..
2GB / 32 chips per high-end card = 64 megs a chip, the max that the chips can address...
The reason they have the consumer market top out at 4 chips is that there are diminishing returns with each chip..
2 should be nearly twice as fast, 4 less than four times as fast, 8 perhaps 6 times as fast, etc..
I don't believe this architecture can pull off very linear scaling.
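
The memory arithmetic, plus a toy diminishing-returns model that roughly matches the guesses above (the 90% per-doubling efficiency is invented for illustration, not measured):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        /* Memory per chip on the 2GB, 32-chip board. */
        printf("Memory per chip: %.0f MB\n", 2048.0 / 32);

        /* Toy model: every doubling of chip count is only 90% efficient. */
        double efficiency = 0.90;
        int n;
        for (n = 1; n <= 8; n *= 2) {
            double speedup = n * pow(efficiency, log2((double)n));
            printf("%d chip(s) -> ~%.1fx speedup\n", n, speedup);
        }
        return 0;
    }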

Re:The importance of PCI: dual-head kings of tomor (3)

Alfthemack (17146) | more than 14 years ago | (#1531707)

How you managed to avoid having your post moderated down to flamebait is beyond me.

Matrox offers an AGP card with two outputs. It's called the G400. Unlike the V4 and V5, it's already released and available.

Anyway, I'm somewhat off topic. But I needed to correct this. The issue with PCI is bandwidth and texture swapping. If the card truly can have up to 2GB of RAM (yes, there are many simulation visualizations and mappings that can use this), you'll need more than the 533 MB/s provided by the PCI bus. Even full AGP 4x (w/ RDRAM) has only 1.06GB/s. An approximately 2s delay is damn noticeable.

If they can put multiple graphics processors on one card, why can't they put multiple output ports on the same card?
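
The transfer-time arithmetic behind the "approximately 2s delay" point, using theoretical peak bus rates (real-world throughput is lower):

    #include <stdio.h>

    int main(void)
    {
        double texture_mb = 2048.0;   /* a full 2GB texture set */
        double pci_mb_s   = 533.0;    /* 64-bit/66MHz PCI peak */
        double agp4x_mb_s = 1066.0;   /* AGP 4x peak, ~1.06 GB/s */

        printf("Over PCI (533 MB/s): %.1f s\n", texture_mb / pci_mb_s);
        printf("Over AGP 4x        : %.1f s\n", texture_mb / agp4x_mb_s);
        return 0;
    }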

Re: 'heart of engineering' (1)

billybob jr (106396) | more than 14 years ago | (#1531708)

Good point; the bottom line is that 3dfx and nVidia and all the other companies are building these chips and cards to make money. 3dfx has gotten some bad press because they didn't include 32-bit color on their Voodoo3 line. To me, that doesn't seem indicative of a company that is throwing on features mindlessly to market their product. On the other hand, one could argue that 3dfx didn't include 32-bit color for other reasons:
  1. 3dfx sucks, nVidia rules
  2. they just missed the boat on this one
I don't know which is the case; probably no one but employees of 3dfx truly knows. I do like 3dfx, but my BS meter goes up a little when they rant about no one needing 32-bit color and 60 fps being more important. Come on, any company is going to say what they have is better.

Re:Each are gambles (1)

ddwalker (67412) | more than 14 years ago | (#1531709)

Yes, these are all gambles in a sense. But some things are not gambles. Anti-aliased polygons can make a scene look a lot better (on the order of magnitude that bilinear and trilinear interpolation does)... so I would consider this to be less of a gamble. I imagine that nVidia will support this soon; 3dfx may have just done it first. As for the "revolutionary" T-buffer [tm]... this is just the accumulation buffer, and I've been wondering when someone would support it.

The lighting in T&L, I think, may be a bit more of a gamble. Most games still use precalculated lightmaps. I must admit that hardware lights look really nice and allow for a lot of cool effects, but the problem is the limit on the number of active lights in a scene (which results in the coder resorting to lightmaps again, or perhaps implementing a partial lightmap/dynamic light engine). As for the transforms... this is a good idea and one that I think we'll see more of. I would wager that there will be a point when a 3D card has ALL of these nifty features in hardware (meaning everything that OpenGL can do and more is done in the hardware *drool*), but until then... we get to wait and see what the 3D card makers and game writers think is MOST important.

So I agree that these things are all gambles in a way... but I think they are all bound to be supported by everyone eventually anyway... it's really just a matter of priorities.

Re:The usual... (1)

Ziviyr (95582) | more than 14 years ago | (#1531710)

Hey, it's a great idea. Probably slower than normal RAM, but A LOT faster than virtual HD RAM... Just give it a priority in between those two. (Well, I dunno if Windows/Linux has RAM priorities like the Amiga....)

I hope the _best_ card wins (1)

billybob jr (106396) | more than 14 years ago | (#1531712)

> Of course, if no one has high poly-count cards, then these games will be longer in coming, because the market won't be there.

I truly hope that nVidia can gather enough support for the GeForce's geometry engine so that this battle is fought and won on which solution is technically better, not which company bullies the other better.

Re:Why does anyone care? (1)

spinkham (56603) | more than 14 years ago | (#1531714)

I agree. Sound cards can be compared much better to 2D cards, which have basically leveled off...
All most sound cards do is D/A conversion and *very* minimal processing.. Sound cards are mostly analog devices with a D/A converter.
2D cards are the same sort of basic thing: a very small amount of windowing processing, a D/A conversion, and analog components.
3D cards, on the other hand, are very complex processors, in recent generations as complex as the general-purpose CPUs in our computers. Just as CPUs are getting pretty fast but are not yet (nor ever will be, I suspect ;-) fast enough, current 3D cards aren't either, and are still evolving at least as fast as, if not faster than, CPUs.
However, you can also get a less-than-cutting-edge graphics card (as well as CPU) and still have a decent system.
I for one am still using a Voodoo2, and am only now, 2 years after the purchase, looking to upgrade. I also have my K6-2 300 from the same era running my gateway/server, though my main workstation has a Celeron 400 in it (commodity tech, bought darn cheap).