
Nvidia and AMD Hug It Out, SLI Coming To AMD Mobos

timothy posted more than 2 years ago | from the feed-your-kill-o-watt dept.

MojoKid writes "In a rather surprising turn of events, NVIDIA has just gone on record that, starting with AMD's 990 series chipset, you'll be able to run multiple NVIDIA graphics cards in SLI on AMD-based motherboards, a feature previously only available on Intel or NVIDIA-based motherboards. Nvidia didn't go into many specifics about the license, such as how long it's good for, but did say the license covers 'upcoming motherboards featuring AMD's 990FX, 990X, and 970 chipsets.'"

120 comments

Simple reason really (0)

Anonymous Coward | more than 2 years ago | (#35972414)

Those Bulldozer chips must really be good if NVIDIA want the best possible benchmark scores.

Re:Simple reason really (5, Insightful)

fuzzyfuzzyfungus (1223518) | more than 2 years ago | (#35973008)

Hard to say (until Bulldozer drops) whether NVIDIA thinks that they are really good, or just good enough. Since NVIDIA no longer has an Intel chipset business, tying SLI to Intel platforms no longer serves to move more product; it only restricts the size of their potential market (since anybody who wants the, often quite aggressive, price/performance of an AMD part won't be buying more than one NVIDIA card, at most).

So long as they are confident that AMD's CPUs will be good enough not to bottleneck SLI configurations, trying to sell multiple cards to people who purchase AMD CPUs seems only reasonable. If, of course, they think that the CPUs will be even better than good enough, the approach is even more reasonable, so it doesn't tell us too much about which it is.

Re:Simple reason really (1)

Joce640k (829181) | more than 2 years ago | (#35973240)

anybody who wants the, often quite aggressive, price/performance of an AMD part won't be buying more than one NVIDIA card

{snork}

Re:Simple reason really (4, Insightful)

hairyfeet (841228) | more than 2 years ago | (#35974070)

And what exactly was that snork for? The "bang for the buck" has been firmly on the AMD side of the aisle for quite some time, as you can see here [cpubenchmark.net] on this handy chart. Thanks to the Intel socket roulette it has gotten to the point that, depending on the socket, one can build a complete AMD quad for less than an Intel dual core and motherboard alone; that is how much extra you are paying for all those ads (and the kickbacks to OEMs, of course).

Since it came out that Intel was rigging their compiler and bribing OEMs (where the hell is the antitrust bust anyway?) I've been selling AMD exclusively and my customers couldn't be happier. The lower price of AM3 boards and CPUs means they can get better features and more options for less money, and having those extra cores helps to future-proof a system. I mean, how can you complain when you can get an AMD triple OEM PLUS a good ASRock motherboard for under $100, or when I can deliver a fully loaded AMD quad WITH a Geforce 210 AND a wireless keyboard/mouse combo and Win 7 HP x64, all for $550 while still making a decent profit? I certainly can't, and my customers couldn't be happier with the performance. Not everyone is only worried about ePeen benchmarks, you know.

As for TFA this isn't really surprising and the only shock is that Nvidia didn't do this sooner. Intel in their usual douchebag manner killed the Nvidia chipset business (again WTF? Where the hell is the antitrust already?) by refusing them the right to build on anything past LGA775, thus forcing everyone on Intel boards to have a craptastic Intel IGP whether you wanted it or not, so working out a deal to have SLI on AMD seems only natural.

To me the bigger question is why the hell they don't sell a 1x GPU board designed for PhysX. There are many of us, I'm sure, without dual x16 boards who would be happy to buy an Nvidia chip just to add PhysX, but screwing us over by disabling PhysX support if it detects an AMD GPU is just stupid, especially since that pretty much kills any point of having Nvidia on AMD, with nearly all AMD boards having Radeon IGPs now. If they want to sell more GPUs, a 1x board dedicated to PhysX would probably sell like hotcakes for those of us on AMD platforms.

Personally, until they quit screwing us with the drivers I'll keep Nvidia strictly for HTPCs (you can get a Geforce 210 for like $20 after MIR) and, for those that want to game on the cheap, stick with the HD4850 (can't beat a 256-bit bus for $60 refurb, and it makes a cheap Crossfire monster) or the HD5770.

Re:Simple reason really (3, Insightful)

fuzzyfuzzyfungus (1223518) | more than 2 years ago | (#35974778)

In fairness to Intel, if you need pure bang, particularly if it is necessary that said bang be delivered to a single-threaded application, AMD has nothing on them.

However, as you say, AMD has plenty of more-than-fast-enough offerings that are dirt cheap, and tend to be supported by slightly cheaper motherboards. Given that many modern games tend to be GPU bound much of the time, gamers on budgets are generally pretty well served by cheaping out a bit on the CPU and going up a model or two on the GPU side. Since NVIDIA sells no CPUs (barring Tegra, which is irrelevant here) and no longer has a chipset business worthy of note, they'd be fools not to try to scoop up the "good enough AMD rig and enough cash left over for a slightly ludicrous video card" demographic.

Now, given Intel's strength, they aren't about to cancel SLI support on Intel, despite being fucked over on the chipset side; but ignoring AMD doesn't make much sense.

Re:Simple reason really (1)

hairyfeet (841228) | more than 2 years ago | (#35979712)

Exactly, and I'd add that since most games are multiplatform even those that have PC enhancements are not really gonna slam the CPU enough to justify the huge price difference.

I like to game and looked at both sides when I built my machine, but for just $600 I got an AMD Phenom II 925 quad, dual 500GB HDDs, a nice ECS business-class motherboard with solid caps and plenty of USB headers, an HD4650 1GB, 8GB of DDR2 800MHz, a DVD burner and a nice black case with 8 USB ports. I just recently had my HD4650 replaced with an HD4850, and frankly the amount of bling bling "ooh purty!" I get on my 1600x900 monitor is just nuts; if I went Intel I'd have had to sink closer to $1000 or go with a shittier CPU like a Pentium or a dual core.

As I said, after the Intel bribery and compiler scandals came out I switched my shop to AMD-only sales (because I believe in free markets and putting my money where my mouth is) and my customers couldn't be happier. You can pick up 2.8GHz-3.2GHz AMD chips cheap, and unless, as you said, your needs are strictly single-core performance (and even then I'd argue most folks wouldn't "feel" a difference in the chips) there just really isn't a point in using Intel unless you are like my two gamer customers and care about nothing but benchmarks.

BTW, for those that look at benchmarks? You might want to keep in mind that Intel has rigged their compilers so that anything you compile with them checks for the GenuineIntel vendor string: if it detects it, the program runs the SSE-and-above code paths, and if not it drops down to x87, even though AMD has supported SSE and above for half a decade now. So unless they say they compiled the benchmark with a non-Intel compiler, ANY and ALL benchmarks should be seen as suspect. It would be like saying a Mustang consistently destroys a Camaro on top end while conveniently neglecting to mention you tied a boat anchor to the Chevy.

So in the end, since switching I haven't found a reason for using Intel in anything except for those wanting ePeen or a money-is-no-object workstation. Everyone else, including gamers like me, will run great on AMD and will be able to use the savings to bump up graphics or RAM. I've even been going the AMD route on laptops, going so far as handing my oldest one of the new Turion X2 laptops and selling AMD netbooks, and again folks couldn't be happier. The new Radeon IGPs mean that when my oldest isn't in class he and his friends sit around a break room in college and have frag fests or play their MMOs, they get pretty good battery life, and the Radeon IGPs make video smooth as butter thanks to hardware acceleration of most formats.

Sorry about the length; I just hope this gets some of my fellow /.ers to take another look at AMD. I'm an old enough greybeard to remember what it was like to have Intel without competition, and it wasn't pretty, with assraping prices for even their shit CPUs. If you believe in free markets and fair competition you really need to support AMD after the Intel compiler and bribery bullshit, and with the money you save you can crank up the graphics or add an SSD for more performance.

Re:Simple reason really (1)

rhook (943951) | more than 2 years ago | (#35981384)

Everything in your rant about Intel has already been addressed. First AMD sued Intel; that case was settled over a year ago. Then the FTC gave Intel an anticompetitive smackdown on top of that, which was settled nearly a year ago.

http://download.intel.com/pressroom/legal/AMD_settlement_agreement.pdf [intel.com]

http://www.ftc.gov/opa/2010/08/intel.shtm [ftc.gov]

Under the settlement, Intel will be prohibited from:

        conditioning benefits to computer makers in exchange for their promise to buy chips from Intel exclusively or to refuse to buy chips from others; and
        retaliating against computer makers if they do business with non-Intel suppliers by withholding benefits from them.

In addition, the FTC settlement order will require Intel to:

        modify its intellectual property agreements with AMD, Nvidia, and Via so that those companies have more freedom to consider mergers or joint ventures with other companies, without the threat of being sued by Intel for patent infringement;
        offer to extend Via’s x86 licensing agreement for five years beyond the current agreement, which expires in 2013;
        maintain a key interface, known as the PCI Express Bus, for at least six years in a way that will not limit the performance of graphics processing chips. These assurances will provide incentives to manufacturers of complementary, and potentially competitive, products to Intel’s CPUs to continue to innovate; and
        disclose to software developers that Intel computer compilers discriminate between Intel chips and non-Intel chips, and that they may not register all the features of non-Intel chips. Intel also will have to reimburse all software vendors who want to recompile their software using a non-Intel compiler.

Re:Simple reason really (0)

Anonymous Coward | more than 2 years ago | (#35974756)

Best possible? [hardocp.com] Maybe for single-card, single-display. But SLI scaling sucks compared to AMD's CrossFireX on their latest series.

Mix and match? (1)

eddy (18759) | more than 2 years ago | (#35972426)

I assume they still won't let you mix AMD and nVidia video cards. Asshats. (Think dedicated PhysX.)

Re:Mix and match? (3, Informative)

Anonymous Coward | more than 2 years ago | (#35972454)

You already can use an Nvidia card as a dedicated PhysX card alongside an AMD card, but in order not to create a bottleneck and actually lose performance, you need an Nvidia card that is more or less on par with your AMD card. So if you have, say, an AMD 6970, you would need something like an Nvidia 460 at the very least to get a big enough performance boost for it to even be worth the extra cash instead of just going CrossFire.

When did they change that? (0)

Anonymous Coward | more than 2 years ago | (#35972496)

Last thing I heard is that nVidia disables PhysX as soon as a Radeon is in the system. It's been a while but I haven't heard of a change of policy in that regard.

Re:When did they change that? (1, Interesting)

Tukz (664339) | more than 2 years ago | (#35972556)

Hacked drivers solve that problem.

Re:When did they change that? (1)

tepples (727027) | more than 2 years ago | (#35978844)

I thought that since Windows Vista introduced the kernel mode code signing requirement, hacked drivers required the user to reboot into "Test Mode", which places an always-on-top banner at all four corners of the screen. What am I missing?

Re:When did they change that? (1)

rhook (943951) | more than 2 years ago | (#35981408)

I do not know why this myth keeps getting spread; only the 64-bit versions of Vista and 7 check for signed drivers, and they give you an option to install the driver if it is not signed. In fact, you can disable the driver signing check quite easily, if you wish.

no problem for me (2)

sourcerror (1718066) | more than 2 years ago | (#35972974)

I can run PhysX fine on my machine, and it has both an AMD proc and an AMD graphics card (HD 3800).

Re:no problem for me (1)

damnbunni (1215350) | more than 2 years ago | (#35973224)

That's because there are now software PhysX drivers.

Originally you needed a dedicated hardware PCI card that did PhysX and nothing but PhysX.

The problem there was that so few people had the hardware that no one would develop games for it, and since there were no games for it no one bought the card. The point of software PhysX was to let people without the card run the games, so the developer's investment in PhysX wasn't a total waste... and then there'd be enough PhysX games to get people to buy a PhysX card.

It didn't work out that way, especially after nVidia bought Ageia.

Re:Mix and match? (3, Informative)

Anonymous Coward | more than 2 years ago | (#35973460)

Parent is incorrect, and it's therefore no surprise that he provided no evidence or even a supporting argument for his assertions.

'PhysX' is a marketing term and an API currently only hardware-accelerated through nVidia cards. Adding more AMD cards, as the parent suggests, doesn't do squat if what you want is PhysX on a hardware path. Games typically only have two paths, software or PhysX, so the load either lands on the main CPU (you only have AMD card(s)) or on the GPU (you have an nVidia card with PhysX enabled).

He's also incorrect about "bottlenecks" (what are those? Surely not PCIe lanes) and really, anyone who modded that informative did so without knowledge of the issues. Quite the opposite is true, even a lower-end card dedicated to this task will provide much better performance than having to go through the software path.

Of course, this would all be pretty moot if AMD could provide an API-conforming hardware path for THEIR cards, but for some reason that isn't happening.

Other posts talk about the old dedicated PhysX cards. Those are as relevant as an S3 Virge in this discussion. Forget about them.

And yes, nVidia explicitly disables physx when it detects a non-nVidia card is installed. You can use hacked drivers. That's not the point.

As I said. Asshats.

Re:Mix and match? (0)

Anonymous Coward | more than 2 years ago | (#35973170)

Didn't dedicated PhysX cards fail miserably? I can't see the point.

Re:Mix and match? (1)

rhook (943951) | more than 2 years ago | (#35981432)

I wouldn't say they failed; in fact I know quite a few people who went out and bought one. What happened was that after nVidia bought Ageia they decided to implement PhysX in CUDA and ditch the dedicated card altogether.

Re:Mix and match? (1)

rhook (943951) | more than 2 years ago | (#35981404)

My motherboard (MSI Fuzion 870a) lets me mix CrossFireX and SLI cards.

http://www.newegg.com/Product/Product.aspx?Item=N82E16813130297 [newegg.com]

Powered by the Fuzion technology that offers Non-Identical & Cross-Vendor Multi-GPU processing, the MSI 870A Fuzion allows you to install two different level and brand graphics cards (even ATI and NVIDIA Hybrid) in a single system, providing flexible upgradability and great 3D performance.

RAM (1, Interesting)

im_thatoneguy (819432) | more than 2 years ago | (#35972448)

I would be more excited if they had announced a new initiative to enable fast memory access between the GPU and system RAM.

2GB for visualization is just too small. 8GB would be a good start, even if it was DDR3 and not DDR5. Something like Hypertransport that could enable low latency, high bandwidth memory access for expandable system memory on the cheap.

Either that, or it's high time we got 8GB per core for GPUs.

Re:RAM (4, Interesting)

adolf (21054) | more than 2 years ago | (#35972630)

I would be more excited if they had announced a new initiative to enable fast memory access between the GPU and system RAM.

Do you really think so? We've been down this road before and while it's sometimes a nice ride, it always leads to a rather anticlimactic dead-end.

(Notable examples are VLB, EISA, PCI and AGP, plus some very similar variations on each of these.)

2GB for visualization is just too small. 8GB would be a good start, even if it was DDR3 and not DDR5.

Maybe. I've only somewhat-recently found myself occasionally wanting more than 512MB on a graphics card; perhaps I am just insufficiently hardcore (I can live with that).

That said: If 512MB is adequate for my not-so-special wants and needs, and 2GB is "just too small" for some other folks' needs, then a target of 8GB seems to be rather near-sighted.

Something like Hypertransport that could enable low latency, high bandwidth memory access for expandable system memory on the cheap.

HTX, which is mostly just Hypertransport wrapped around a familiar card-edge connector, has been around for a good while. HTX3 added a decent speed bump to the format in '08. AFAICT, nobody makes graphics cards for such a bus, and no consumer-oriented systems have ever included it. It's still there, though...

Either that, or it's high time we got 8GB per core for GPUs.

This. If there is genuinely a need for substantially bigger chunks of RAM to be available to a GPU, then I'd rather see it nearer to the GPU itself. History indicates that this will happen eventually anyway (no matter how well-intentioned the new-fangled bus might be), so it might make sense to just cut to the chase...

Uncanny valley (1)

mangu (126918) | more than 2 years ago | (#35973142)

Maybe. I've only somewhat-recently found myself occasionally wanting more than 512MB on a graphics card; perhaps I am just insufficiently hardcore (I can live with that).

That said: If 512MB is adequate for my not-so-special wants and needs, and 2GB is "just too small" for some other folks' needs, then a target of 8GB seems to be rather near-sighted.

The most awesome upgrade I ever had was when I went from EGA to a Tseng SVGA card with 1 MB memory. The next awesomest was when I upgraded from a 4 MB card to a Riva TNT2 with 32 MB. Every time I upgrade my video card there's less shock and awe effect. I'm willing to bet that going from 2 GB to 8 GB would be barely perceptible to most people.

I think the top graphics cards today have gone over the local maximum point of realism. What I have been noticing a lot lately is the "uncanny valley" effect. The only upgrade I'd seriously consider today would be to absolute lifelike perfection, anything less isn't worthwhile.

Probably the next step in graphics cards will be real time ray tracing [brightsideofnews.com], I think that would be the next line of development that would justify an upgrade.

Re:Uncanny valley (3, Interesting)

adolf (21054) | more than 2 years ago | (#35973602)

You took my practical argument and made it theoretical, but I'll play. ;)

I never had an EGA adapter. I did have CGA, and the next step was a Diamond Speedstar 24x, with all kinds of (well, one kind of) 24-bit color that would put your Tseng ET3000 (ET4000?) to shame. And, in any event, it was clearly better than CGA, EGA, VGA, or (bog-standard IBM) XGA.

The 24x was both awesome (pretty!) and lousy (mostly due to its proprietary nature and lack of software support) at the time. I still keep it in a drawer -- it's the only color ISA video card I still have. (I believe there is also still a monochrome Hercules card kicking around in there somewhere, which I keep because its weird "high-res" mode has rarely been well supported by anything else.)

Anyway...porn was never better than when I was a kid with a 24-bit video card, able to view JPEGs without dithering.

But what I'd like to express to you is that it's all incremental. There was no magic leap between your EGA card and your Tseng SVGA -- you just skipped some steps.

And there was no magic leap between your 4MB card (whatever it was) and your 32MB Riva TNT2: I also made a similar progression to a TNT2.

And, yeah: Around that time, model numbers got blurry. Instead of making one chipset at one speed (TNT2), manufacturers started bin-sorting and producing a variety of speeds from the same part (Voodoo3 2000, 3000, 3500TV, all with the same GPU).

And also around that time, drivers (between OpenGL and DirectX) became consistent, adding to the blur.

I still have a Voodoo3 3500TV, though I don't have a system that can use it. But I assure you that I would much rather play games (and pay the power bill) with my nVidia 9800GT than that old hunk of (ouch! HOT!) 3dfx metal.

Fast forward a bunch and recently, I've been playing both Rift and Portal 2. The 9800GT is showing its age, especially with Rift, and it's becoming time to look for an upgrade.

But, really, neither of these games would be worth the time of day on my laptop's ATI x300. This old Dell probably would've played the first Portal OK, but the second one...meh. And the x300 is (IIRC) listed as Rift's minimum spec, but the game loses its prettiness in a hurry when the quality settings are turned down.

But, you know: I might just install Rift on this 7-year-old x300 laptop, just to see how it works. Just so I can have the same "wow" factor I had when I first installed a Voodoo3 2000, when I play the same game on my desktop with a 3-year-old, not-so-special-at-this-point 9800GT.

The steps seem smaller, these days, but progress marches on. You'll have absolute lifelike perfection eventually, but it'll take some doing to get there.

Re:RAM (1)

Apocros (6119) | more than 2 years ago | (#35973546)

2GB for visualization is just too small. 8GB would be a good start, even if it was DDR3 and not DDR5.

Maybe. I've only somewhat-recently found myself occasionally wanting more than 512MB on a graphics card; perhaps I am just insufficiently hardcore (I can live with that).

That said: If 512MB is adequate for my not-so-special wants and needs, and 2GB is "just too small" for some other folks' needs, then a target of 8GB seems to be rather near-sighted.

I may be misinformed, but I'm pretty certain systems like Win7 with Aero keep normal 2D window contents (like firefox opened to /.) as textures stored exclusively in the framebuffer. A 1600*1200 window is going to be a little under 8MB, which means you can probably only keep about 70 such window textures resident in a 512MB framebuffer, assuming there's no other data stored there.

I wouldn't call myself hardcore, but others might. At work I usually have upwards of 30+ windows open at a time, with multiple web browsers (each with multiple tabs), xterms, emails, documents, etc. I don't really randomly switch between all of them; the workflow is very much like a stack with an occasional flush/purge of really old stuff. When I've got a seriously deep stack building, it's easy to have twice as many windows open. And in such scenarios, switching to an older window results in it being a blank, gray rectangle for several seconds. I assume this is because the texture that was that window was forced out of the framebuffer per an LRU policy, and has to be rerendered before it can be displayed.

I have long suspected, but haven't made an effort to prove, that moving to a video card with a 2GB framebuffer would dramatically improve my desktop experience under such heavy-usage scenarios.
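
(To put that arithmetic in concrete terms, here is a minimal sketch of the window-texture math above; the 4-bytes-per-pixel figure and the assumption of no compression or mip levels are assumptions for illustration, not something stated in the thread.)

```python
# Rough estimate of how many redirected window textures fit in video memory,
# assuming uncompressed 32-bit RGBA textures -- illustration only.
BYTES_PER_PIXEL = 4

def window_texture_mb(width, height):
    return width * height * BYTES_PER_PIXEL / 2**20

def windows_that_fit(vram_mb, width=1600, height=1200):
    return int(vram_mb // window_texture_mb(width, height))

print(f"1600x1200 window: ~{window_texture_mb(1600, 1200):.1f} MB")  # ~7.3 MB
print(f"512 MB card:  ~{windows_that_fit(512)} windows")             # ~69
print(f"2 GB card:    ~{windows_that_fit(2048)} windows")            # ~279
```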

Re:RAM (1)

adolf (21054) | more than 2 years ago | (#35973746)

Assuming what you say is true (and I believe it can be if the API is properly implemented, for better or worse), then regular PCI Express x16 cards should be perfectly adequate.

Not to be offensive, but you used all that verbiage, and failed to realize that nobody needs or wants high-performance tab switching in Firefox. It'd be nice if it happened within a single monitor refresh period (60Hz, these days), but nobody notices if it takes a dozen times as long or more due to PCI Express bus transfers from main RAM.

In a game, or with an engineering visualization, however, is a different story.

But go ahead and chew on the math for that and see what you come back with. I think you'll find that neither lack of video RAM nor the transfer speed between the GPU and system RAM is your limiting factor (spoken as someone who frequently has just as much, or more, stuff open at once as you do).

Re:RAM (2)

Apocros (6119) | more than 2 years ago | (#35974588)

It's anecdotal, to be sure... All I can tell you is that when I have tons of windows open on Win7, then switching to old ones takes a while to repaint (and it's quite noticeable). With few windows open, it's effectively instantaneous (i.e presumably within a few VSYNCs). And, no offense taken, but I absolutely do want high-performance tab/window switching in my desktop applications. If I don't have to wait for contents to be repainted, then I don't want to.

And yes, I'm quite well aware that transfers from system memory to the GPU (or any other device) over PCIe are plenty fast for normal desktop operations. That's why I suspect the framebuffer size, rather than bandwidth to/from system memory, is the limiting factor. It's the only resource that would be approaching its limit with a huge number of windows open.

Other anecdotal point, WinXP doesn't show this same behavior with similar numbers of windows. Haven't had a chance to play with compiz or OSX, so I cannot comment on those. My *nix usage is generally limited to remote connections over NX, VNC, or just a remote xterm.

Re:RAM (1)

Korin43 (881732) | more than 2 years ago | (#35977882)

It's anecdotal, to be sure... All I can tell you is that when I have tons of windows open on Win7, then switching to old ones takes a while to repaint (and it's quite noticeable). With few windows open, it's effectively instantaneous (i.e presumably within a few VSYNCs). And, no offense taken, but I absolutely do want high-performance tab/window switching in my desktop applications. If I don't have to wait for contents to be repainted, then I don't want to.

Since video memory transfers are so fast, it seems more likely that you're seeing normal swapping behavior -- Windows sees that you have 30 windows open but you're currently only using a few of them, so the rest get swapped out (even if you have a bunch of free memory). On Linux you can change the "swappiness" to fix this. You could see if there's a similar fix on Windows 7 (back when I used Windows XP I just disabled swap files and got a massive performance improvement).

Re:RAM (1)

Apocros (6119) | more than 2 years ago | (#35978984)

Well, yeah, could maybe be swapping. But the system can't use the video card framebuffer as general-purpose memory. So swapping the contents to disk would really only make sense if the usage of the framebuffer is at a very high percentage. Hence my (untested) belief that having a larger framebuffer would help. But maybe they're using the same memory management/swap code as for the normal system VM, which tends to page out to disk earlier than absolutely necessary. So it could be that the "swappiness" of data in the framebuffer is unnecessarily high. Disabling the swapfile might be a good test for this... Thanks for that suggestion. :)

Re:RAM (1)

Korin43 (881732) | more than 2 years ago | (#35981378)

I don't mean the video card's memory being swapped, just the memory for the programs you want to use being swapped. If the program itself isn't ready, it doesn't matter how fast the video card can display it.

Re:RAM (1)

bored (40072) | more than 2 years ago | (#35981426)

That's generally not how it works. Both X and the old Windows GDI were on-demand painters. Basically they simply had the application repaint the screen as necessary, clipping the non-visible regions. Of course caching a portion of the painting speeds things up, but generally if you're running out of RAM the image is just thrown away. So having 200 windows open doesn't require sufficient RAM/graphics memory to contain 200 maximized windows.

Re:RAM (1)

Korin43 (881732) | more than 2 years ago | (#35978002)

Also, just to be more specific about how fast PCI Express [wikipedia.org] is, a PCI Express 3.0 x16 slot transfers roughly 16 GB/s (about 128 Gb/s) in each direction. Your 8 MB texture should be able to get across it in around half a millisecond. To put that into perspective, rendering the screen at 60 fps means one frame every ~17 milliseconds, so even if the texture were transferred from main memory every frame, the frame budget would still be more than 30x longer than the transfer.
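
(A rough check of that arithmetic, as a minimal sketch; the ~16 GB/s figure for a PCIe 3.0 x16 link and the 32-bit 1600x1200 texture are assumptions for illustration.)

```python
# How long does one window-sized texture take to cross a PCIe 3.0 x16 link,
# compared with a 60 fps frame budget?  Back-of-the-envelope only.
texture_bytes = 1600 * 1200 * 4          # ~7.3 MB, assuming 32-bit RGBA
pcie3_x16_bytes_per_s = 16e9             # ~16 GB/s per direction (assumed)

transfer_s = texture_bytes / pcie3_x16_bytes_per_s
frame_budget_s = 1 / 60                  # one frame at 60 fps

print(f"transfer:     {transfer_s * 1e3:.2f} ms")   # ~0.48 ms
print(f"frame budget: {frame_budget_s * 1e3:.2f} ms "
      f"({frame_budget_s / transfer_s:.0f}x longer)")  # ~16.67 ms, ~35x
```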

Re:RAM (1)

drinkypoo (153816) | more than 2 years ago | (#35973654)

Once upon a time it used to be common for high-end video cards to have display memory and texture memory at different speeds, and sometimes you even got SIMM or DIMM slots for the texture memory, and you could add more. Are there not still cards like this in existence with a halfway decent GPU?

Re:RAM (1)

adolf (21054) | more than 2 years ago | (#35973976)

None that I've seen, though I've been wondering the same since I wrote that reply. (The video cards I remember had DIP sockets for extra RAM, but the concept is the same.....)

Perhaps the time is now for such things to return. It used to be rather common on all manner of stuff to be able to add RAM to the device -- I even used to have a Soundblaster AWE that accepted SIMMs to expand its hardware wavetable synth capacity.

Re:RAM (1)

lennier1 (264730) | more than 2 years ago | (#35972648)

Exactly. Professional engines like mental ray, finalRender and V-Ray are moving towards GPU rendering, but right now there's nothing between high-end gaming cards and professional graphics cards with more RAM (and a shitload of money to shell out for it).

Ummmm.... How? (3, Interesting)

Sycraft-fu (314770) | more than 2 years ago | (#35972748)

You realize the limiting factor in system RAM access is the PCIe bus, right? It isn't as though that can magically be made faster. I suppose they could start doing x32 slots, which is technically allowed by the spec, but that would mean more cost both for motherboards and graphics cards, with no real benefit except to people like you who want massive amounts of RAM.

In terms of increasing the bandwidth of the bus without increasing the width, well Intel is on that. PCIe 3.0 was finalized in November 2010 and both Intel and AMD are working on implementing it in next gen chipsets. It doubles per lane bandwidth over 2.0/2.1 by increasing the clock rate, and using more efficient (but much more complex) signaling. That would give 16GB/sec of bandwidth which is on par with what you see from DDR3 1333MHz system memory.

However, even if you do that, it isn't really that useful; it'll still be slow. See, graphics cards have WAY higher memory bandwidth requirements than CPUs. That's why they use GDDR5 instead of DDR3. While GDDR5 is based on DDR3, it runs at much higher speed and bandwidth. With their huge memory controllers you can see cards with 200GB/sec or more of bandwidth. You just aren't going to get that out of system RAM, even if you had a bus that could transfer it (which you don't).

Never mind that you then have to contend with the CPU which needs to use it too.

There's no magic to be had here to be able to grab system RAM and use it efficiently. Cards can already use it, it is part of the PCIe spec. Things just slow to a crawl when it gets used since there are extreme bandwidth limitations from the graphics card's perspective.
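
(For a sense of scale, here is a minimal sketch of where such bandwidth figures come from; the bus widths and per-pin data rates below are assumed, representative numbers for cards and memory of the era, not figures taken from this thread.)

```python
# Peak theoretical memory bandwidth = (bus width in bytes) * (data rate per pin).
def mem_bandwidth_gb_s(bus_width_bits, data_rate_gbps_per_pin):
    return bus_width_bits / 8 * data_rate_gbps_per_pin  # GB/s

gddr5_card   = mem_bandwidth_gb_s(384, 4.0)    # assumed high-end GDDR5 card   -> ~192 GB/s
ddr3_dual_ch = mem_bandwidth_gb_s(128, 1.333)  # dual-channel DDR3-1333 (peak) -> ~21 GB/s
# ...and a PCIe 3.0 x16 link tops out around 16 GB/s per direction, smaller still.

print(f"GDDR5 card:        ~{gddr5_card:.0f} GB/s")
print(f"Dual-channel DDR3: ~{ddr3_dual_ch:.0f} GB/s")
```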

Re:Ummmm.... How? (1)

fuzzyfuzzyfungus (1223518) | more than 2 years ago | (#35973058)

While it doesn't solve the problem of DDR3 being slower than GDDR5, AMD has been pushing their "Torrenza [wikipedia.org]" initiative to have assorted specialized 3rd-party processors be able to plug directly into the HyperTransport bus (either in an HTX slot, or in one or more of the CPU sockets of a multi-socket system). That would give the hypothetical GPU both fast access to the CPU and as-fast-as-the-CPU access to large amounts of cheap RAM.

Ludicrously uneconomic for gaming purposes, of course; but there are probably some compute tasks where the ability to have 32GB of comparatively slow RAM would be much handier than 1GB or 2 of the fast stuff.

Re:Ummmm.... How? (1)

Twinbee (767046) | more than 2 years ago | (#35973140)

For nice fast RAM access, doesn't the new AMD Fusion GPU share the same silicon with the CPU anyway? Nvidia is planning something similar with their upcoming Kepler/Maxwell GPUs.

The future is surely one where you'll be able to buy a single fully integrated CPU/GPU/RAM module. Not very modular, maybe, but speed, programming ease, power efficiency, size and weight would be amazing and more than make up for it.

Re:Ummmm.... How? (2)

Rockoon (1252108) | more than 2 years ago | (#35975014)

For nice fast RAM access, doesn't the new AMD Fusion GPU share the same silicon with the CPU anyway?

Indeed, and the Fusion chips are trouncing Atom-based solutions in graphics benchmarks mainly for this very reason.

The problem, though, is that it can't readily be applied to more mainstream desktop solutions, because then you have the CPU and GPU fighting over precious memory bandwidth. For netbooks and the like, it works well because GPU performance isn't expected to match even midrange cards, so only a fraction of DDR2/DDR3 bandwidth is acceptable. Even midrange desktop graphics cards blow the doors off DDR3 memory bandwidth, so this really isn't the route to go.

...and before you think of it, you can't just up the memory bandwidth of the CPU to desktop GPU levels, because that goes hand-in-hand with increased latency (the reason that Intel failed to develop Larrabee). GPUs don't suffer much from increased latency, but CPUs and their random access patterns suffer greatly from it.

Re:Ummmm.... How? (1)

Khyber (864651) | more than 2 years ago | (#35974084)

"You realize the limiting factor in system RAM access is the PCIe bus, right?"

Not even close. NOT BY MILES.

DDR3 - 2.133GT/s
PCI-E lane Specification 2.0 - 5GT/s
PCI-E lane Specification 3.0 - 8GT/s

Actual RAW bandwidth is in plentiful supply on the PCI-E lanes.

Re:Ummmm.... How? (0)

Anonymous Coward | more than 2 years ago | (#35974192)

My i7 system has 30GB/s mem bandwidth. Not sure how that translates to PCI-E gigatransfers. A 16-lane card is capable of 4GB/s peak each direction, according to Wikipedia. The killer is latency, though.

Stop looking at raw bandwidth (1)

Sycraft-fu (314770) | more than 2 years ago | (#35975738)

Look at effective real-world transfers. DDR3 RAM at 1333MHz gets in the realm of 16-20GB/sec when actually measured in the system. Each transfer moves 64 bits per channel, and the channels are interleaved (think RAID-0 if you like) to further increase transfer speeds.

PCIe transfers 1 bit per transfer per lane. Hence an x16 PCIe 3 slot gets about 16GB/sec of throughput.
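
(To make the "raw GT/s vs. effective throughput" point concrete, a quick sketch; the link widths and encoding overheads are standard spec values, used here purely for illustration.)

```python
# Usable bandwidth depends on link width and encoding, which is why comparing
# per-lane GT/s numbers alone is misleading.
def ddr3_gb_s(mt_per_s, channels=2, bus_bits=64):
    # Each DDR3 channel is 64 bits wide, so every transfer moves 8 bytes.
    return mt_per_s * (bus_bits // 8) * channels / 1000

def pcie_gb_s(gt_per_s, lanes, encoding_efficiency):
    # Each lane moves 1 bit per transfer; 8b/10b (gen 2) or 128b/130b (gen 3)
    # encoding eats part of the raw rate.
    return gt_per_s * lanes * encoding_efficiency / 8

print(f"DDR3-1333, dual channel: ~{ddr3_gb_s(1333):.1f} GB/s peak")           # ~21.3
print(f"PCIe 2.0 x16:            ~{pcie_gb_s(5, 16, 8/10):.1f} GB/s/dir")     # ~8.0
print(f"PCIe 3.0 x16:            ~{pcie_gb_s(8, 16, 128/130):.1f} GB/s/dir")  # ~15.8
```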

Re:RAM (0)

Anonymous Coward | more than 2 years ago | (#35973046)

Either that, or it's high time we got 8GB per core for GPUs

Are you suggesting 8GB of memory per GPU processing core?
Nvidia GTX 590 = 1024 CUDA cores = 8192GB memory.
AMD Radeon HD 6990 = 3072 streaming processors = 24576GB memory.
Please tell me you bleed money from your eyes when you cry about your month-old "outdated" video card.

Even if you're referring to 8GB per card (16GB for dual-chip cards), that's still a lot of money to burn on tech with no common usage.
I'm sure Nvidia, AMD, and even Intel would happily take a chunk of your trust fund and build you a card with as much memory as you would like.

Re:RAM (1)

Khyber (864651) | more than 2 years ago | (#35973972)

"2GB for visualization is just too small"

Only for a shitty coder. Use a megatexture and/or procedurally generated textures, and you'll only require 8-64MB of video memory, with the rest going to framebuffer and GPU. Did people stop paying attention to Carmack or what?

Re:RAM (1)

guruevi (827432) | more than 2 years ago | (#35974878)

Why don't you get a pro video card if you need that? For games you don't need much more than 1GB these days (data for a couple of frames of 1080p just aren't that big). If you need to visualize anything better, you usually go with a Quadro or a Tesla (up to 6GB per card, up to 4 per computer), a Quadro Plex (12GB in an external device) or a rack-based GPU solution (however much you can put in your rack).

Re:RAM (0)

Anonymous Coward | more than 2 years ago | (#35978562)

8GB per core on a GPU? That'd be like 33TB of RAM.

I say yes?

Seems a smart move (5, Insightful)

Red_Chaos1 (95148) | more than 2 years ago | (#35972500)

All the exclusion did was hurt nVidia's sales among people who stay loyal to AMD and refuse to go Intel just for SLI. Allowing SLI on AMD boards will boost nVidia's sales a bit.

Re:Seems a smart move (3, Insightful)

osu-neko (2604) | more than 2 years ago | (#35972616)

All the exclusion did was hurt nVidia's sales among people who stay loyal to AMD and refuse to go Intel just for SLI. Allowing SLI on AMD boards will boost nVidia's sales a bit.

It works both ways. nVidia has loyal customers, too, and with CPUs and mobos so much cheaper than a good GPU these days, there are plenty of people who buy the rest of the system to go with the GPU rather than the other way around.

In any case, more choices are good for everyone, customers most of all.

Re:Seems a smart move (5, Insightful)

adolf (21054) | more than 2 years ago | (#35972706)

In any case, more choices are good for everyone, customers most of all.

Interoperability is particularly good for everyone. Choice just follows naturally.

Re:Seems a smart move (1)

Kjella (173770) | more than 2 years ago | (#35973156)

I don't think it's a secret that Intel has the fastest processors if you're willing to pay $$$ for it. And since a dual card solution costs quite a bit of $$$ already, I doubt there are that many that want to pair an AMD CPU with a dual nVidia GPU.

Re:Seems a smart move (0)

Anonymous Coward | more than 2 years ago | (#35975806)

Well consider me a prime example of your doubt. I am an AMD fan when it comes to CPUs BUT an nVidia fan when it comes to graphics cards. I have my reasons, and I know I am not alone.

Re:Seems a smart move (1)

644bd346996 (1012333) | more than 2 years ago | (#35979698)

For the hardcore gamers who don't have unlimited budgets, it might be logical to buy the cheapest CPU that won't bottleneck your games, and pair it with the fastest graphics cards you can afford. Particularly if your games can use the GPU for the physics engine, you might not need even AMD's high-end CPUs to keep a pair of NVidia cards busy.

The enemy of my enemy is my friend? (1)

lanner (107308) | more than 2 years ago | (#35972704)

With Intel the only one making SLI motherboards, NVidia needed to buddy up with AMD as leverage? Just a guess.

Re:The enemy of my enemy is my friend? (2)

compro01 (777531) | more than 2 years ago | (#35972728)

Intel and AMD are the only games in town for current desktop chipsets, as all the other competitors have dropped out. VIA, gone, SiS, gone (and good riddance), Acer Labs, gone, Nvidia, gone.

It makes plenty of sense not to cut out half the companies in the market, even if that same company also competes with you.

Re:The enemy of my enemy is my friend? (2)

gadget junkie (618542) | more than 2 years ago | (#35972980)

It might be that the name of the game is "Let's all gang up on Intel"... Given that Intel has squeezed Nvidia out of motherboards, and AMD has integrated graphics all to herself for now, getting closer is sensible, because the dominating player has a) signaled that it wants to enter your arena, and b) probably reached, as Microsoft has, a performance plateau after which further technology advances are not as valuable to consumers.
Having said that, the SLI market is, and will remain, marginal in terms of the number of system installations. Given the pace of advance, we may be reaching in graphics what we have reached in processors: the basic market needs are more than adequately satisfied by entry-level systems.

Re:The enemy of my enemy is my friend? (0)

Anonymous Coward | more than 2 years ago | (#35973050)

That's why I am surprised they haven't announced an SLI-capable Opteron chipset.

Not all that surprising (4, Insightful)

Sycraft-fu (314770) | more than 2 years ago | (#35972714)

nVidia and AMD got along great before AMD bought ATi. nVidia really helped keep them floating back when AMD couldn't make a decent motherboard chipset to save their life. nForce was all the rage for AMD heads.

Well it is in the best interests of both companies to play nice, particularly if Bulldozer ends up being any good (either in terms of being high performance, or good performance for the money). In nVidia's case it would be shooting themselves in the foot to not support AMD boards if those start taking off with enthusiasts. In AMD's case their processor market has badly eroded and they don't need any barriers to wooing people back from Intel.

My hope is this also signals that Bulldozer is good. That nVidia had a look and said "Ya, this has the kind of performance that enthusiasts will want and we want to be on that platform."

While I'm an Intel fan myself I've no illusions that the reason they are as cheap as they are and try as hard as they do is because they've got to fight AMD. Well AMD has been badly floundering in the CPU arena. Their products are not near Intel's performance level and not really very good price/performance wise. Intel keeps forging ahead with better and better CPUs (the Sandy Bridge CPUs are just excellent) and AMD keeps rehashing, and it has shown in their sales figures.

I hope Bulldozer is a great product and revitalizes AMD, which means Intel has to try even harder to compete, and so on.

Re:Not all that surprising (2)

serviscope_minor (664417) | more than 2 years ago | (#35972730)

Well AMD has been badly floundering in the CPU arena.

At the top end, where power and money are no object and you want the fastest single-thread performance, then yes, Intel is the clear winner, which is why I buy Intel for desktop development tasks, where I want really good per-thread performance.

For number-crunching servers, AMD's 4x12-core systems have a slight edge, though which is faster depends rather heavily on the workload. The edge is bigger when power and price are taken into account.

Re:Not all that surprising (2)

Sycraft-fu (314770) | more than 2 years ago | (#35972798)

I've never found any case where they are winners performance wise. 4-core Intel CPUs outperform 6-core AMD CPUs in all the multi-threaded tasks I've looked at, rather badly in some cases. In terms of servers, Intel offers 10-core CPUs which I imagine are faster than AMD's 12-core CPUs, much like in the desktop arena, though I will say I've not done any particular research on the server side.

Likewise, the power consumption story is well in Intel's favor in the desktop arena. A Phenom II 1100T has a 125-watt TDP, while any of the new quad-core Sandy Bridges has only a 95-watt TDP and much higher performance. For example, Anand found in their x264 encoding test that a 2500K did 100fps and an 1100T did 89fps, and in 3DSMax 9 the 2500K scored 17.4 while the 1100T scored 14.5. These are heavily multi-threaded tests, yet the SB processor comes out on top despite having fewer cores and costing the same (it gets worse if you compare it to the 2600, but that costs more).

AMD's offerings just are not that good these days and tossing more cores at the problem really doesn't seem to gain them much other than to put them close to what you get with Intel by using 50% more cores, and then only for tasks that have 6 (or more) threads they can heavily use.

I'm really hoping Bulldozer is a better showing.

Re:Not all that surprising (3, Interesting)

Narishma (822073) | more than 2 years ago | (#35973028)

If you only consider the CPU then what you say is true, but you also have to take into account that AMD motherboards generally cost less than Intel ones.

Re:Not all that surprising (5, Informative)

Khyber (864651) | more than 2 years ago | (#35974168)

"4 core Intel CPUs outperform 6 core AMD CPUs in all the multi-threaded tasks I've looked at, rather badly in some cases."

Do raw x86 without any specialized instructions (minus multi-core stuff) and you'll find the opposite happening: AMD wins hands-down.

That's why AMD powers our food production systems. We don't need the specialized instructions like SSE3/4/4a/etc. and AMD's raw x86 performance wins.

Intel NEEDS those specialized instructions added on to keep pace.

Re:Not all that surprising (1)

Rockoon (1252108) | more than 2 years ago | (#35975404)

Intel NEEDS those specialized instructions added on to keep pace.

Note that Intel's compilers refuse to use those instructions when their output runs on AMD CPUs and, unfortunately, the popular scientific libraries are all compiled with one of Intel's compilers (ICC or their Fortran compiler) and only use the SIMD paths if they see "GenuineIntel" output from CPUID.

One of the most renowned software optimization experts studies this in detail in his blog. [agner.org]
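
(For readers unfamiliar with how that kind of dispatching works, here is a minimal conceptual sketch; it is not ICC's actual code, just an illustration of choosing a code path from the CPUID vendor string rather than from the feature flags the CPU reports.)

```python
# Conceptual illustration only -- not ICC's real dispatcher.  The point is that
# the code path is picked from the vendor string, not from whether the CPU
# actually reports SSE support.
def pick_code_path(vendor_string: str, has_sse2: bool) -> str:
    if vendor_string == "GenuineIntel" and has_sse2:
        return "vectorized SSE2 path"
    # An AMD CPU that reports SSE2 support still lands here.
    return "generic x87 path"

print(pick_code_path("GenuineIntel", True))   # vectorized SSE2 path
print(pick_code_path("AuthenticAMD", True))   # generic x87 path
```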

Re:Not all that surprising (1)

644bd346996 (1012333) | more than 2 years ago | (#35979736)

Nowadays, you can actually force ICC to emit code that will use up to SSSE3 on AMD CPUs, but only if you don't use runtime code-path selection. (More specifically, you have to tell ICC that the least-common-denominator code path should use SSSE3, which defeats the purpose of runtime code-path selection. ICC will always choose the slowest available code path for an AMD CPU, but you can prevent it from including a non-SSE code path.)

Re:Not all that surprising (1)

serviscope_minor (664417) | more than 2 years ago | (#35974644)

I've never found any case where they are winners performance wise

The rest of your post (mostly) focusses on desktop class processors, where Intel certainly win on the high end. In the lower and mid range, AMD are more competitive, especially in flops/$.

In the quad-socket server market, things are a bit different. AMD's 12-core clock speeds have been creeping up, whereas Intel's large multicore processors clock relatively slowly. One reason Intel wins the top spots on the desktop is faster clocks combined with doing more per clock. In the 4-socket market, they have lost the advantage of faster clocks, so it's a trade-off between slightly faster per-thread performance and slightly more cores.

Additionally, AMD have typically had worse memory performance on the desktop, but in the quad socket server market, they both have quad channel memory. AMD also has lower latency L1 cache.

But ultimately the performance is amazingly close. There are some workloads where AMD win by a large margin, and some workloads where Intel do.

then only for tasks that have 6 (or more) threads they can heavily use

If you're buying a cluster of quad socket servers, the chances are that your workload already scales cleanly across hundreds of cores, so that's not much of a problem.

AMD's quad-socket systems also tend to be a few thousand dollars cheaper.

Re:Not all that surprising (2)

Rockoon (1252108) | more than 2 years ago | (#35975234)

Indeed, AMD is still crushing Intel's 4-chip solutions in performance [cpubenchmark.net]

This is certainly due to Intel not really refreshing its server lines at all, focusing mainly on the desktop space, while AMD has steadily updated its server lines.

Let's not forget that AMD is also about to unveil its Bulldozer cores, while Intel has only recently updated its desktop chips. Until this year, Intel had an extremely hard time competing in performance per dollar in the desktop space, and I expect that even if Bulldozer doesn't match i7 levels, AMD will regain the performance-per-dollar crown that it held up until this year.

Certainly if you are spending $1000 on a CPU, you would go Intel today. But if you are spending $250 on a CPU, the choice today isn't so clear at all.

Re:Not all that surprising (1)

LWATCDR (28044) | more than 2 years ago | (#35976592)

But at each dollar range AMD usually wins. Frankly, at this point the CPU is rarely the bottleneck for most desktop users. They will usually get a lot bigger bang for the buck with more RAM, faster drives, and a better video card than with a faster CPU. If you are a hard-core gamer then yeah, but for 95% of PC users a Core 2 Duo or an X2, X3, or X4 is more than good enough.

Re:Not all that surprising (1)

Deathlizard (115856) | more than 2 years ago | (#35973498)

nVidia and AMD got along great before AMD bought ATi. nVidia really helped keep them floating back when AMD couldn't make a decent motherboard chipset to save their life. nForce was all the rage for AMD heads.

During the Athlon XP era, AMD did make a good chipset in the AMD 750. The problem was that all of the mobo manufacturers at the time were using the VIA 686B southbridge instead of the AMD 766, and the 686B had a bus-mastering bug which tended to cause crashes and eventually hard drive corruption.

Just about all of the chipsets out there before nForce sucked when it came to reliability. VIA's would crash, AMD's would work well if you could find one with an AMD southbridge (but good luck with that), and forget about ALi or SiS.

Then nForce came out with dual-channel DDR RAM and HyperTransport, which widened the bus significantly and, most importantly, did not crash under heavy load. You could totally saturate the bus on an nForce and it would still go strong, unlike any other chipset at the time, which would saturate on just a hard drive copy. nForce2 was even better and was the chipset to beat on the Socket A platform.

It's a shame that this announcement is most likely going to result in the end of nForce chipsets. Nvidia hasn't announced a new chipset for either Intel or AMD in years, Intel supports SLI, and now that AMD supports SLI as well, it just lends weight to the rumors that Nvidia is killing the chipset division.

Re:Not all that surprising (1)

rhook (943951) | more than 2 years ago | (#35981790)

It's a shame that this announcement is most likely going to result in the end of nForce chipsets. Nvidia hasn't announced a new chipset for either Intel or AMD in years, Intel supports SLI, and now that AMD supports SLI as well, it just lends weight to the rumors that Nvidia is killing the chipset division.

Nvidia left the chipset market nearly 3 years ago.

Re:Not all that surprising (1)

fahlesr1 (1910982) | more than 2 years ago | (#35975910)

nVidia and AMD got along great before AMD bought ATi. nVidia really helped keep them floating back when AMD couldn't make a decent motherboard chipset to save their life. nForce was all the rage for AMD heads.

My desktop has an ASUS A8N-SLI motherboard based on the nForce 4 chipset. Think its about time for me to upgrade?

SLI: Sorely Lacking IMO (5, Informative)

RagingMaxx (793220) | more than 2 years ago | (#35972740)

Having built my last two gaming rigs to utilize SLI, my opinion is that it's more trouble than it's worth.

It seems like a great idea: buy the graphics card at the sweet spot in the price/power curve, peg it for all it's worth until two years later when games start to push it to its limit. Then buy a second card, which is now very affordable, throw it in SLI and bump your rig back up to a top-end performer.

The reality is less perfect. Want to go dual monitor? Expect to buy a third graphics card to run that second display. Apparently this has been fixed in Vista / Windows 7, but I'm still using XP and it's a massive pain. I'm relegated to using a single monitor in Windows, which is basically fine since I only use it to game, and booting back into Linux for two-display goodness.

Rare graphics bugs that only affect SLI users are common. I recently bought The Witcher on Steam for $5; this game is a few years old and has been updated many times. However, if you're running SLI, expect to be able to see ALL LIGHT SOURCES, ALL THE TIME, THROUGH EVERY SURFACE. It only affects SLI users, so apparently it's a "will not fix". The workaround doesn't work.

When Borderlands first came out, it crashed regularly for about the first two months. The culprit? A bug that only affected SLI users.

Then there's the heat issue! Having two graphics cards going at full tear will heat up your case extremely quickly. Expect to shell out for an after-market cooling solution unless you want your cards to idle at 80C and easily hit 95C during operation. The lifetime of your cards will be drastically shortened.

This is my experience with SLI anyway. I'm a hardcore gamer who has always built his own rigs, and this is the last machine I will build with SLI, end of story.

Re:SLI: Sorely Lacking IMO (0)

Anonymous Coward | more than 2 years ago | (#35972750)

but I'm still using XP

why?

Re:SLI: Sorely Lacking IMO (0)

Plasmoid2000ad (1004859) | more than 2 years ago | (#35972888)

^ This. You are using SLI but living with the XP 3GB 32-bit memory cap??? Remember, the more GPU RAM you have, the less address space for your regular RAM... and you have 2 GPUs with RAM.

Re:SLI: Sorely Lacking IMO (1)

darthdavid (835069) | more than 2 years ago | (#35973304)

To be fair he could be using XP X64 (though then you get the joy of about the worst driver support you're going to find in a mainstream OS and all sorts of freaky bugs and 'issues')...

Re:SLI: Sorely Lacking IMO (1)

Khyber (864651) | more than 2 years ago | (#35974188)

XP-x64 was NOT mainstream. It was basically a tech demo based off of Server 2k3.

Re:SLI: Sorely Lacking IMO (2)

RagingMaxx (793220) | more than 2 years ago | (#35973544)

When I built this box, Vista had just come out. So I installed Windows XP, obviously. With two GeForce 8800GT cards, each with 512MB RAM I still have 2.25 GB RAM left for the system, which is plenty. I haven't had a problem running any game released in the last five years and hitting a steady 60fps minimum, which happens to be the refresh rate of my display. So thanks for the advice, but it's not really that helpful and totally ignores the rest of the points I raised.

I'd love to install Windows 7 64-bit, but I don't have $300 AUD laying around, and if I did I'd really rather not give it to MS, thanks. And yes I know how to pirate software, but as a developer myself I choose not to.

Re:SLI: Sorely Lacking IMO (1)

hairyfeet (841228) | more than 2 years ago | (#35974568)

If you decide you need more RAM and don't have the money for Win 7 x64 (which really is a kick-ass OS, BTW), you might want to look into SuperSpeed RAMDisk [superspeed.com]; with PAE it will let you use the RAM XP can't see as a RAM disk, which will really give a machine a kick in the pants.

I have a couple of customers still on XP, as well as keeping an XP partition myself for some old software that doesn't like Win 7 (Cubase), and having 4.5GB of RAM as a temp drive really speeds things up. It is butt simple to use, you can set it to save between sessions, and it's a really nice piece of software, although in the end, if you are planning on keeping that box a while, Win 7 really is the way to go; the support for NCQ and SuperFetch makes Win 7 a better bet IMHO.

As for TFA, personally I'm waiting for Nvidia to get bought by Intel. After the DOJ didn't say squat when it came to Intel rigging compilers or bribing OEMs, I figured anything and everything goes, and Intel IGPs still suck. Nvidia selling SLI to their former rivals at this point seems logical to me, as Intel has been openly hostile to Nvidia for quite some time. Just look at how they wiped out the Nvidia chipset business by refusing to allow them on any socket past LGA775.

A couple of questions, though: does Nvidia still disable PhysX if it detects a Radeon GPU? Because if so, this is gonna be kinda pointless, as pretty much all AMD boards have Radeon IGPs now. And does anyone know if the Ageia PhysX boards are worth playing with? I've noticed them going for around $35 online, and for those of us running Radeon GPUs without Crossfire it might be nice if they let us run PhysX.

Re:SLI: Sorely Lacking IMO (1)

rhook (943951) | more than 2 years ago | (#35981836)

I'd love to install Windows 7 64-bit, but I don't have $300 AUD laying around

Ouch, Windows 7 Ultimate can be had for as little as $179 USD over here (system builder OEM); Home Premium is only $100.

Re:SLI: Sorely Lacking IMO (1)

drinkypoo (153816) | more than 2 years ago | (#35973678)

^ This. You're using SLI but living with XP's 32-bit ~3GB memory cap??? Remember, the more GPU RAM you have, the less address space is left for your regular RAM... and you have two GPUs with RAM.

Is that actually an issue with PAE?

I'm using XP for gaming even though I have 8GB of RAM in my desktop system, because some of the games I play do not run on Windows 7, and none of them require it. The rest of the time I run Ubuntu, which just bloody screams on this hardware compared to XP. XP may be old, but Microsoft is still selling it.

Re:SLI: Sorely Lacking IMO (1)

Khyber (864651) | more than 2 years ago | (#35974210)

Yes, it is an issue, considering that even with PAE, Windows XP 32-bit (Professional version) is LOCKED to a 4GB maximum.

http://msdn.microsoft.com/en-us/library/aa366778(v=vs.85).aspx#physical_memory_limits_windows_xp [microsoft.com]

You simply CANNOT use more.

Re:SLI: Sorely Lacking IMO (1)

drinkypoo (153816) | more than 2 years ago | (#35974346)

OK, but it says it's locked to 4GB of physical RAM, not that it's locked to 4GB of address space. Your link does not back up the specific claims of the GP comment. The link specifically says that the driver manufacturer can place that wherever they want if PAE is enabled.

I don't know if nVidia actually does this, but I do know that some apps work fine with PAE enabled and some get more crashy, so I added an option to my boot.ini that lets me disable PAE when I want. Disabling PAE also disables NX, so I don't run that way all the time.
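For reference, a dual-entry boot.ini along those lines might look roughly like the sketch below (the ARC paths are placeholders and have to match the actual install; /NOPAE boots the non-PAE kernel and /EXECUTE turns DEP off, which hardware NX effectively loses anyway once PAE is gone):

    [boot loader]
    timeout=10
    default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows XP (PAE + DEP)" /fastdetect /noexecute=optin
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows XP (no PAE)" /fastdetect /execute /nopae

Picking the second entry at the NTLDR menu then gives the no-PAE, no-NX environment without editing anything between reboots.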

In either case, AFAIK on 32 bit windows no app can ever use more than 3GB anyway...

Re:SLI: Sorely Lacking IMO (2)

slackbheep (1420367) | more than 2 years ago | (#35973206)

Has SLI really been so troublesome? My last system warranted a full replacement by the time I was thinking about going SLI, so I ended up going with an AMD system and Crossfire instead. I've yet to have an issue gaming with dual monitors, aside from a couple of games force-minimizing everything on the second monitor when they go full screen, and that was fixed by alt-tabbing out of and back into the client. It may be worth noting, however, that I haven't tried XP in years.

Re:SLI: Sorely Lacking IMO (1)

RagingMaxx (793220) | more than 2 years ago | (#35973392)

On Windows XP, with the latest nvidia drivers, any card running in SLI mode will only output to one of its ports. The secondary card's outputs don't work at all.

If you want you can disable SLI every time you exit a game (it only takes about two minutes!), but don't expect Windows to automatically go back to your dual monitor config. It's like you have to set your displays up from scratch every time.

As annoying as it is though, the dual monitor limitation is really just an annoyance. Having to disable SLI to play certain games is absolutely ridiculous. Why did I buy that second card again? Oh yeah, to play games!

Re:SLI: Sorely Lacking IMO (1)

jittles (1613415) | more than 2 years ago | (#35973730)

I am running two GTX 460s in SLI and am not having any heat problems (the cards stay around 40C at idle and 50-55C maxed out). I also don't have any problems with my dual-monitor config. Finally, turning SLI on and off is as simple as right-clicking on the desktop, opening the NVIDIA control panel, going to SLI, and hitting disable. I can't see that taking longer than about 10 seconds.

Re:SLI: Sorely Lacking IMO (1)

RagingMaxx (793220) | more than 2 years ago | (#35973892)

So, to paraphrase your entire post, "My computer and operating system are totally different to yours, and I am not experiencing the problems you are having."

Good talk.

Re:SLI: Sorely Lacking IMO (1)

Khyber (864651) | more than 2 years ago | (#35974322)

You have problems reading.

Also you're doing it wrong.

http://i.imgur.com/shrBtl.jpg [imgur.com]

SLI + quad monitor under XP. Yes, I game on it. Go read your manual and figure out what you're doing wrong, because it's guaranteed to be YOU.

Re:SLI: Sorely Lacking IMO (1)

RagingMaxx (793220) | more than 2 years ago | (#35974706)

From the nvidia driver download [nvidia.co.uk] page for their latest driver release:

*Note: The following SLI features are only supported on Windows Vista and Windows 7: Quad SLI technology using GeForce GTX 590, GeForce 9800 GX2 or GeForce GTX 295, 3-way SLI technology, Hybrid SLI, and SLI multi-monitor support.

I've tried every possible configuration available; it does not work. But thanks for your helpful and informative post, which yet again fails to invalidate my experience by way of your (highly questionable and completely unsubstantiated) claims.

Re:SLI: Sorely Lacking IMO (2)

Khyber (864651) | more than 2 years ago | (#35975070)

"(highly questionable and completely unsubstantiated)"

Oh, I'm sorry, raw photographic evidence isn't enough for you?

"The following SLI features are only supported on Windows Vista and Windows 7: Quad SLI technology using GeForce GTX 590, GeForce 9800 GX2"

Try to remember when the 9800GX2 came out. Revert to those drivers.

Quit using the newer drivers. Support for XP was present in older driver revisions.

Re:SLI: Sorely Lacking IMO (1)

RagingMaxx (793220) | more than 2 years ago | (#35975562)

OK, first of all, the nvidia ForceWare Release 180 drivers were the first drivers to support multi-monitor SLI. From the Tom's Hardware story [tomshardware.com] at the time:

Big Bang II is codename for ForceWare Release 180 or R180. The biggest improvement is the introduction of SLI multi-monitor. Yes, you’ve read it correctly, Nvidia has finally allowed more than one monitor to use multiple video cards at once, something it’s been trying to do since SLI’s introduction back in 2004.

From the nvidia 180 driver [nvidia.com] release:

*Note: The following SLI features are only supported on Windows Vista: Quad SLI technology using GeForce 9800 GX2, 3-way SLI technology, Hybrid SLI, and SLI multi-monitor support.

Even the SLI Zone (an official nvidia site set up for the 180 release) page for multi monitors [slizone.com] states:

System requirements > Microsoft® Windows® Vista 32-bit or 64-bit

Now if you're right and some mythical nvidia driver exists that supports dual monitors on Windows XP, just link to it. Or even a single article or forum post explaining how to make it work. Even if it means rolling back my drivers, I will do it and I will come back here and say "thank you Khyber, thank you for showing me the way, even though you were kind of a dick about it."

That's of course totally disregarding the fact that I shouldn't have to roll back my drivers and lose out on all the driver improvements and bugfixes from the last four years that make half the games I own playable. All of which leads right back to my original point, which is that SLI is more trouble than it's worth. Have a look through the bugfix section of almost any nvidia driver release and there will be an entire section devoted to SLI-only bugfixes.

In hindsight, instead of spending 10-20 hours over the last five years trying to get dual monitors to work, struggling with new games that crash constantly due to SLI bugs, driver updates and rollbacks, reinstalls, whatever, I should have just taken on 10-20 hours of additional paid work, which would have easily paid for a new video card every two years, saving me the massive hassle.

Oh and your "raw photographic evidence" is some random photo with a single display running XP? Are those other displays supposed to be connected to the same box? Is the box even running SLI with all the displays attached to the same card? I don't know because there's no fucking way for me to tell.

Re:SLI: Sorely Lacking IMO (1)

Khyber (864651) | more than 2 years ago | (#35976510)

"single display"

You don't see the monitor built into the actual computer box, or those other monitors to my left, either? Those are running off the same system.

Four monitors, Windows XP, SLI 9800GTX+ GPU inside of a Zalman case.

I built the thing, I know what's inside. That's my office.

Re:SLI: Sorely Lacking IMO (1)

sexconker (1179573) | more than 2 years ago | (#35976866)

The only way that is working for you is if SLI is disabled while at your desktop.

Re:SLI: Sorely Lacking IMO (1)

Khyber (864651) | more than 2 years ago | (#35980092)

http://www.tech-forums.net/pc/f78/sli-dual-monitors-works-168567/ [tech-forums.net]

*yawn*

Windows XP x64 also doesn't need a workaround; since it's based on Server 2k3, it just WORKS.

Re:SLI: Sorely Lacking IMO (1)

sexconker (1179573) | more than 2 years ago | (#35980872)

That guide refers to using 1 (total) monitor on the SLI cards, and additional monitors on ADDITIONAL cards that aren't part of the SLI set.
You're wrong.
You got called out.
Deal with it.

Re:SLI: Sorely Lacking IMO (1)

Khyber (864651) | more than 2 years ago | (#35974268)

"Has SLI really been so troublesome"

Yes. 3Dfx would get pretty much 100% scaling.

You don't get that with nVidia or AMD/ATi, which IMHO makes it totally fucking useless.

Re:SLI: Sorely Lacking IMO (1)

bananaquackmoo (1204116) | more than 2 years ago | (#35975892)

+1. Your experiences basically mimic mine. SLI doesn't even win in terms of bang for buck. People think, "oh, I can buy a second video card later on and boost performance"... but you might as well buy a previous-gen high-end card for the same price, same performance, and lower power requirements (than running two).

Re:SLI: Sorely Lacking IMO (1)

UnknownSoldier (67820) | more than 2 years ago | (#35976646)

> Having built my last two gaming rigs to utilize SLI, my opinion is that it's more trouble than it's worth.
My current gaming rig has two 5770s in XFire (XFire = the AMD/ATI version of nVidia's SLI). I got them almost a year ago -- I paid ~$125 x 2 and saved at least $50 over the 5870.

I regularly play L4D, BFBC2, RB6LV2. I too have mixed opinions on XFire/SLI, but for different reasons.

> worth until two years later when
No one says you have to wait 2 years :-) I waited 6 months.

Regardless of when you buy, you ARE saving money. Of course there are trade-offs as your post correctly mentions.

> Then buy a second card, which is now very affordable,
I am surprised you didn't mention the REAL reason why the perceived advantage doesn't pan out in practice... Try _finding_ a second card that is STILL SELLING and that IS _compatible_ with your existing card. (A lot of the time it has to be made by the same manufacturer as the first one!) You also need to be a little wiser up front when you buy your initial video card -- which model will still be selling in 2 years? For AMD/ATI, the xx70 cards seem to be the ones that stick around.

> A bug that only affected SLI users.
1) This is shitty programming -- be vocal so that game devs can fix their game AND nVidia / ATI can fix their drivers!
        *glares at EA/Dice for Battlefield Bad Company 2*
2) Again, you missed the "reality" -- until the bug is fixed, you are forced to play the game with only 1 GPU. This sucks. At least ATI has been getting better with their XFire "game profiles" now.

> Having two graphics cards going at full tear will heat up your case extremely quickly.
I have to question your case cooling. My bottom case fan is "pull", my top case fan is "push". I haven't noticed any extreme temps.

Again, the "reality" you missed to mention is noise (dBA) and Load. Running 2 GPUs is a little louder and uses a little more juice.
i.e.
http://techreport.com/articles.x/19404/10 [techreport.com]

> and this is the last machine I will build with SLI, end of story.
I've been custom-building gaming rigs since the '80s. This is my first XFire rig and I _would_ consider it again. Basically, the price points for me are...

~ $150 video card (XFire/SLI)
        OR
~ $350 video card (single card)

I usually wait 1-2 years before buying brand-new games, as I'm sick and tired of paying $60 when I could pay ~$20. XFire has let me play all my current and past games with everything cranked and still get 60 Hz.

Cheers

SLI/Crossfire War Bad for Consumers (1)

BrendaEM (871664) | more than 2 years ago | (#35973908)

When the SLI/Crossfire war began, it was bad for consumers.
Fie, fie, fie on your proprietary video bus arrangements!
I wish consumers would band together and demand an end to it.

[I usually used to buy an AMD processor and an Nvidia video card. I missed the chipset updates, so this is good news for me.]

Of course (1)

HaZardman27 (1521119) | more than 2 years ago | (#35973978)

Didn't NVIDIA stop making motherboard chipsets? It would make sense that they would attempt to get their tech to work on as many platforms as possible.

Yeah... no thanks (0)

Anonymous Coward | more than 2 years ago | (#35974302)

The last time I had a system with an AMD CPU and nVidia chips, this happened [nvidiasettlement.com]. After the years of runaround and other bullshit I had to put up with, I'd sooner buy a Sony product than something from nVidia.
