
NVIDIA's New Flagship GeForce GTX 580 Tested

CmdrTaco posted more than 3 years ago | from the waiting-for-4d dept.

Graphics 149

MojoKid writes "Even before NVIDIA's GF100 GPU-based GeForce GTX 480 officially arrived, there were myriad reports claiming the cards would be hot, loud, and power-hungry. Of course, NVIDIA knew that well before the first card ever hit store shelves, so the company got to work on a revision of the GPU and card that would attempt to address these concerns. Today the company launched the GeForce GTX 580, and as its name suggests, it's a next-gen product, but the GF110 GPU powering the card is largely unchanged from the GF100 in terms of features. However, refinements have been made to the design and manufacturing of the chip, along with its cooling solution and PCB. In short, the GeForce GTX 580 turned out to be the fastest single-GPU card currently on the market. It puts up in-game benchmark scores 30% to 50% higher than AMD's current flagship single-GPU card, the Radeon HD 5870. Take synthetic tests like Unigine into account and the GTX 580 can be up to twice as fast."


149 comments


Good write ups, good card (4, Informative)

Vigile (99919) | more than 3 years ago | (#34174534)

Re:Good write ups, good card (1)

Pojut (1027544) | more than 3 years ago | (#34174590)

Always been a big fan of [H]ard|OCP...they definitely have some of the best forums in the enthusiast scene.

Re:Good write ups, good card (1)

MojoKid (1002251) | more than 3 years ago | (#34174692)

Agreed!

Re:Good write ups, good card (2, Interesting)

Moryath (553296) | more than 3 years ago | (#34174744)

The problem is, how much does it cost? Radeon 5770s can be had for $120 at Newegg after rebate, so why the hell would I need to waste $500 on this card? I could hook up a pair of 5770's for much less and get similar performance.

And what the hell games on the PC is it actually supposed to be required to play?

The last-gen AMD cards do just fine, from back when they were beating NVidia cards. And I'm willing to bet that the "next gen" AMD card will see similar performance increases when it hits next month.

Re:Good write ups, good card (2, Funny)

afidel (530433) | more than 3 years ago | (#34174810)

The 5770 will also cost you significantly less in electricity and cooling during the warm months =)

Re:Good write ups, good card (1)

robthebloke (1308483) | more than 3 years ago | (#34175926)

.... although during the cold winter months, there's nothing better than roasting marshmallows on a quadro!

Re:Good write ups, good card (1)

tyrione (134248) | more than 3 years ago | (#34176876)

The 5770 will also cost you significantly less in electricity and cooling during the warm months =)

Yeah, I'm sure it's really competing against that 95%-efficient multi-stage gas furnace you should have in your house but don't because of the cost, or the electric baseboard heat still running in your house, not to mention the electric range, and so on.

Re:Good write ups, good card (2, Informative)

afidel (530433) | more than 3 years ago | (#34177630)

According to this [overclockersclub.com] graph, the difference at idle is almost 90W, or a difference of $180 over 2 years if you leave your PC on 24x7. And I was talking about during the summer, where the added BTUs are paid for in power draw and then again in AC draw.

P.S. My furnace is 93% efficient and my AC is 16 SEER, and my house is only 1,200 sq ft, so in percentage terms it can be a large cost.
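For a rough sense of how that $180 figure falls out (the electricity price here is an assumption, roughly $0.11/kWh, not something taken from the graph):

```python
# Rough check of the 90 W idle gap quoted above; the $/kWh rate is an assumption.
idle_delta_watts = 90
hours = 24 * 365 * 2                          # two years, PC left on 24x7
kwh = idle_delta_watts * hours / 1000         # ~1577 kWh
print(f"{kwh:.0f} kWh, ${kwh * 0.11:.0f}")    # ~1577 kWh, ~$173 -- in the ballpark of $180
```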

Re:Good write ups, good card (2, Insightful)

vistapwns (1103935) | more than 3 years ago | (#34174826)

Depends on whether you want 30 fps or 60 fps, whether you want high levels of AA and AF or none, and whether you want high resolution or medium resolution. I'm getting a GTX 580 to replace my GTX 480, which I plan to sell, because of the lower noise and improved performance. I find 30 fps choppy in fast-paced FPS games, so I tend to go for 60, along with all the options cranked up.

Re:Good write ups, good card (1)

Vigile (99919) | more than 3 years ago | (#34174974)

"just fine" differs from person to person. No, GTX 580s aren't required to play PC games and most of the time the lower cost GTX 460/HD 6850s are fine. But sometimes more power is just better.

Re:Good write ups, good card (0)

Anonymous Coward | more than 3 years ago | (#34175222)

"just fine" differs from person to person. No, GTX 580s aren't required to play PC games and most of the time the lower cost GTX 460/HD 6850s are fine. But sometimes more power is just better

Agreed. But sadly, that's (mostly) not true for the PC gaming/enthusiast market. I know of no applications besides CAD/CAM that can take advantage of a card like this, and that has been true for a few years now. Software is lagging further and further behind the capabilities of the hardware.

We really need new software development paradigms. Not just language/tool support, but runtime support is severely lacking. By now, we should have had a plethora of different applications running on such a card: audio encoding, compression, encryption, gaming AI. I know about CUDA, but why aren't we seeing such applications? Are they held back because of lacking OS support? Lacking driver support? Lacking deployment infrastructure? Lacking developer initiative? Is the GPU architecture (disparate memory) unsuitable? Or is CUDA just woefully inadequate to express parallel problems, seeing as it's based on (one of) the most primitive of imperative languages?

Disappointed minds want to know...

Re:Good write ups, good card (1)

Moryath (553296) | more than 3 years ago | (#34175702)

Precisely the point I was making.

Even Crysis - which at one point was the "go-to" benchmark game - performs very well on a single 5770.

There are "games" which are basically tech demos meant to stress cards, and that's the category into which Crysis falls. For everyone else, the existing games are either a console port (in which case they are tuned for 5 year old hardware anyways) or are tuned down enough to run on 5 year old hardware (sometimes even those crapass Intel-onboard video solutions that come from manufacturers like Dell) in order to widen their possible-sales market wide enough to break even.

Re:Good write ups, good card (2, Interesting)

robthebloke (1308483) | more than 3 years ago | (#34176808)

By now, we should have had a plethora of different applications running on such a card: audio encoding, compression, encryption, gaming AI. I know about CUDA, but why aren't we seeing such applications?

Flash, web based video, Power DVD, and various others at the consumer end of the spectrum (where accuracy is not important). When I first bought an ION based netbook (about 12 months ago), half the websites on the net could not play video on it without dropping a hideous number of frames. Since I've owned it, there has been a gradual stream of updates to various libs/SDK/apps (flash video was the most obvious!) that have made my netbook usable (by utilising the ION GPU).

Are they held back because of lacking OS support? Lacking driver support? Lacking deployment infrastructure? Lacking developer initiative? Is the GPU architecture (disparate memory) unsuitable? Or is CUDA just woefully inadequate to express parallel problems, seeing as it's based on (one of) the most primitive of imperative languages?

Disappointed minds want to know...

It's much simpler than that - it's all about available dev time. For any given app, any new feature has to work on all available systems (and by that I mean systems with an Intel GPU). This means you have to target your code to run on the CPU first. Later, if you have time (or performance is sucky enough to warrant the development effort) you can add a GPU codepath in places where it makes sense. Sadly, most users don't tend to notice the difference between an app using 30% of the CPU and one using 5%. As a result, GPU codepaths tend to get dropped down the priority list somewhat.

Writing code for the GPU is not fun (well, it is fun in the hobby project sense, but not so much for a paid job). You have to target your code for GL2.1 Intel, GL2.1 ATI, GL2.1 NVidia, GL3.3 ATI, GL3.3 Nvidia, GL4.0 ATI, GL4.0 Nvidia. At best you've just added an extra week to your QA process. At worst it's batted back and forth between QA and the dev team for a month or more. The bean counting senior management do a quick cost/benefit analysis, and almost always find that the added development time cannot be justified.

Finally..... There were a few features lacking from GPUs (until very recently) that tended to prevent them from being used in serious environments (the lack of double precision or ECC memory support springs to mind). That is slowly changing, but until the costs of developing for the GPU start to fall, I doubt you'll see too many apps moving to the GPU.....
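A minimal sketch of the "CPU first, GPU codepath if it's worth it" pattern described above; CuPy is just one possible GPU backend here, and the function is a made-up example rather than anything from a shipping app:

```python
import numpy as np

# CPU path by default; GPU path only if a CUDA library and device are actually present.
try:
    import cupy as xp                      # hypothetical choice of GPU backend
    xp.cuda.runtime.getDeviceCount()       # raises if no CUDA device is available
except Exception:
    xp = np                                # fall back to the CPU implementation

def brighten(image, gain=1.2):
    # Same code runs on either backend; only the array module differs.
    data = xp.asarray(image, dtype=xp.float32)
    return xp.clip(data * gain, 0.0, 255.0)
```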

Re:Good write ups, good card (2, Interesting)

robthebloke (1308483) | more than 3 years ago | (#34176874)

p.s. As for CAD/CAM software: it actually doesn't push the GPU as much as you might expect. Those apps tend to use the simplest single-pass shading available, so they don't actually need too many GPU cores. What's more important for them is lots and lots of fast GDDR5 RAM....

SLI/Crossfire isn't always valid (2, Insightful)

Sycraft-fu (314770) | more than 3 years ago | (#34175042)

For one, there are a lot of motherboards that don't support it, even new, reasonably high-end boards. I have an Intel P35 board with a Core 2 Quad at home, but it has only one x16 slot. At work, a Dell Precision T1500 with an i7: again, only one x16 slot. Crossfire/SLI cannot be done in these cases. You have to buy a single, heavier-hitting card if you want performance.

Also you need to do a bit more research if you think multi-card solutions work well all the time. They can, but they also can have some serious problems. Some games work great, others can't use a second card at all. There is something to be said for the simplicity of a single card that does what you need.

In terms of needing the speed? Well depends on what you have and your tastes. You certainly don't need it to play any game, all games are playable on less. However you might need it if you desire extremely high resolutions and high frame rates. If you have a 30" monitor and want to drive it at its native, beyond HD rez (2560x1600) you need some heavy hitting hardware to take care of that, particularly if you'd like the game to run nice and smooth, more around 60fps than around 30. You then need still more if you'd like to crank up anti-aliasing and so on.
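To put rough numbers on that (plain arithmetic, not benchmark data):

```python
# Pixel throughput needed at 60 fps, before any AA multiplies the shading work.
for name, w, h in [("2560x1600", 2560, 1600), ("1920x1080", 1920, 1080)]:
    print(f"{name} @ 60 fps: {w * h * 60 / 1e6:.0f} Mpixels/s")
# 2560x1600 @ 60 fps: 246 Mpixels/s -- roughly double 1080p; 4x supersampling quadruples it again.
```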

Now that clearly isn't for everyone, but that's fine. There is no reason not to have a high end as well as a mid range. You shouldn't hate on people who want more performance than you do. In fact, you should thank them. Know why the 5770 is so cheap? Because the 5870 is not. That high-end card financed the development of the new tech; it recouped a lot of the R&D costs, making an economical midrange card a reality.

This is why nobody seems to be able to break in and compete with nVidia and ATi in graphics. Newcomers target the midrange or low end, because development costs on high-end stuff are so high. However, nVidia and ATi have extremely solid mid and low range lineups, because they can take the tech in their high-end cards and scale it down.

Re:SLI/Crossfire isn't always valid (0)

Anonymous Coward | more than 3 years ago | (#34175276)

Also you need to do a bit more research if you think multi-card solutions work well all the time. They can, but they also can have some serious problems. Some games work great, others can't use a second card at all.

Citation needed.

I'm the owner of an i7-930 with two AMD 5870s. AFAIK Crossfire mode is a driver setting, not an in-game setting. However, you do have me curious whether I was duped by the salesman when I designed this particular build...

Re:SLI/Crossfire isn't always valid (1)

QuantumBeep (748940) | more than 3 years ago | (#34177826)

Unless you have six monitors, you were probably duped. A single 5870 is a goddamned powerhouse.

Re:SLI/Crossfire isn't always valid (2, Interesting)

Slime-dogg (120473) | more than 3 years ago | (#34176154)

If you have a 30" monitor and want to drive it at its native, beyond HD rez (2560x1600) you need some heavy hitting hardware to take care of that, particularly if you'd like the game to run nice and smooth, more around 60fps than around 30. You then need still more if you'd like to crank up anti-aliasing and so on.

Isn't the point of AA to make things look better at lower resolutions? Running at resolutions beyond HD, even on large screens, eliminates any need for FSAA. At that point, you just don't get jaggies that need to be smoothed.

Re:SLI/Crossfire isn't always valid (2, Informative)

Guppy (12314) | more than 3 years ago | (#34176320)

Running at resolutions beyond HD, even on large screens, eliminates any need for FSAA. At that point, you just don't get jaggies that need to be smoothed.

You still get pixel-shimmer though, which FSAA greatly reduces.

Re:SLI/Crossfire isn't always valid (0)

Anonymous Coward | more than 3 years ago | (#34177028)

doubtful

Re:SLI/Crossfire isn't always valid (1)

kalirion (728907) | more than 3 years ago | (#34176754)

Running at resolutions beyond HD, even on large screens, eliminates any need for FSAA. At that point, you just don't get jaggies that need to be smoothed.

You most definitely have jaggies without AA, even at high resolutions. They're especially noticeable on "thin" objects like grass/foliage, power lines, etc. The more fine detail in the scene, the more jaggies.
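A toy illustration of why that is: supersampling-style AA renders extra samples per pixel and averages them, turning a hard stair-step into fractional coverage (a simplified sketch, not how MSAA is implemented in hardware):

```python
import numpy as np

def render_edge(size):
    # A hard diagonal edge: 1.0 on one side, 0.0 on the other -- worst case for jaggies.
    y, x = np.mgrid[0:size, 0:size]
    return (x > y).astype(float)

target = 8
aliased = render_edge(target)                                       # direct render, 0/1 steps
hi_res = render_edge(target * 2)                                    # 2x2 samples per pixel
smoothed = hi_res.reshape(target, 2, target, 2).mean(axis=(1, 3))   # average each 2x2 block

print(aliased[3])    # [0. 0. 0. 0. 1. 1. 1. 1.]   -> stair-stepped edge
print(smoothed[3])   # [0. 0. 0. 0.25 1. 1. 1. 1.] -> fractional coverage softens the edge
```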

Re:Good write ups, good card (1)

AltairDusk (1757788) | more than 3 years ago | (#34175550)

Why get two 5770's when Newegg has the 5870 for just under $270 right now?

Re:Good write ups, good card (1)

Moryath (553296) | more than 3 years ago | (#34175714)

I can buy two 5770's, AND have $50 in my pocket, for the price of one 5870...

Re:Good write ups, good card (1)

robthebloke (1308483) | more than 3 years ago | (#34176928)

..... although that $50 will leave your pocket instantly when you need to buy a new SLI-capable PSU.

Re:Good write ups, good card (1)

makomk (752139) | more than 3 years ago | (#34176390)

Alternatively, you could get two Radeon 6870s for slightly less - that's got some really nice Crossfire scaling results in reviews. Sadly this review doesn't include them, but they seem to pull ahead a fair amount. Also, ATI's new top end GPU is due out in a couple of weeks (which is probably why no-one's offering any kind of 6870x2 card).

no shortages of reviews (-1, Troll)

adeelarshad82 (1482093) | more than 3 years ago | (#34174550)

seems like there was no shortage of GTX 580 reviews this morning http://bit.ly/dlinFY [bit.ly] http://goo.gl/fmfJM [goo.gl]

Re:no shortages of reviews (-1, Redundant)

Anonymous Coward | more than 3 years ago | (#34174782)

seems like there was no shortage of GTX 580 reviews this morning

http://bit.ly/dlinFY [bit.ly]
http://goo.gl/fmfJM [goo.gl]

seems like there was no shortages of GTX 580 reviews this morning

http://bit.ly/dlinFY [bit.ly]
http://goo.gl/fmfJM [goo.gl]

http://managementsoftwares.org/: yeah true buddy

Competition is good. (4, Insightful)

QuantumBeep (748940) | more than 3 years ago | (#34174552)

I am very glad to see the performance crown handed back and forth.

Now if only this was happening in the CPU market...

Re:Competition is good. (1)

aliquis (678370) | more than 3 years ago | (#34175692)

Maybe in the ARM vs Atom championships?

No Alpha, no high-end PPC :/ (Power and weird Sun/Fujitsu chips still there. And yeah, I do understand that you mean AMD64 compatible processors. Maybe one day.)

Re:Competition is good. (1)

TheEyes (1686556) | more than 3 years ago | (#34175896)

Maybe in the ARM vs Atom championships?

When it comes to Atom, AMD is making a bid with their upcoming Ontario lineup [anandtech.com] for netbook/nettop dominance.

Re:Competition is good. (0)

Anonymous Coward | more than 3 years ago | (#34176532)

Do you know anything about ARM vs Atom performance?

I've got the impression ARM is roughly equal at much lower energy requirements. Is there a place for Atom and its equivalents at all?

Re:Competition is good. (0)

Anonymous Coward | more than 3 years ago | (#34176358)

Back and forth? nVidia has always been on top. Plus their drivers, while not perfect, are a hell of a lot better than anything from ATI.

Re:Competition is good. (2, Informative)

QuantumBeep (748940) | more than 3 years ago | (#34176598)

I'm gonna feed this troll.

What about Radeon 9700, 9800, x800, 4800, 5800 before Fermi, and 6850 before GF110?

Also, ATI cards play games and do it well. I don't know what driver issues you're talking about.

Re:Competition is good. (1)

robthebloke (1308483) | more than 3 years ago | (#34177024)

You can go back further than that...... The original 32MB ATI Rage Fury was the first consumer card that supported full 32-bit 3D acceleration (and could handle high-end graphics apps such as Maya). An amazing card in its time...

CPU, GPU... (1)

TheBilgeRat (1629569) | more than 3 years ago | (#34174588)

At what point are Nvidia and AMD going to supplant the need for an Intel or AMD CPU? This (graphics) processor on a card is blazingly fast.

Re:CPU, GPU... (0)

Anonymous Coward | more than 3 years ago | (#34174704)

This (graphics) processor in a card is blazingly fast.

never?
Speed alone does not a good general purpose CPU make. (as you yourself note..)

Re:CPU, GPU... (0)

Anonymous Coward | more than 3 years ago | (#34174726)

It's blazingly fast at highly parallel workloads, yes, but not at the kind of scalar branchy code that the Intel CPU is blazingly fast at.

Different beasts for different purposes.

Re:CPU, GPU... (4, Informative)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#34174822)

In an absolute, architectural sense, essentially never. A screamingly fast vector processor isn't going to do much for all your x86 code, never mind all the little housekeeping chores that the CPU does (most modern ones include the system RAM controller(s), do a lot of peripheral wrangling, may be the root of the PCIe bus, and so forth).

In a "designing your next gaming build" sense, they largely already have. Unless you are a money-is-no-object-e-penis-must-get-longer type gamer, you can generally get better bang for your buck by going with a cheaper CPU and spending the savings on a nicer graphics card. It depends on the game, and there are situations where a truly epic(2x or 3x of the top of the line GPU ganged together with SLI or crossfire) graphics system will be CPU bound without the best CPU available; but Joe Gamer is, most of the time, better off with a third tier CPU and a second tier GPU, or a 2nd tier CPU and a 1st tier GPU.

In smaller systems (where board footprint really counts) or in cheap systems (where package costs and board size really count), the integration of CPU and GPU into a single package proceeds apace, with AMD rolling low-end ATI tech into certain of their newer parts, and Intel trying to make their GMA stuff suck less. The only real wild card is Nvidia: unlike Intel or AMD, they have no x86 cores to speak of; on the other hand, their GPU-computing initiatives are arguably the most advanced in terms of tool and driver maturity. The question is, will they eventually produce an Nvidia equivalent to AMD and Intel's CPU/GPU combo packages (perhaps by buying VIA, who has adequate-but-deeply-unexciting x86 assets but utter shit GPUs), or will they persist purely as a maker of high-end gaming GPUs and GPU-based compute cards?

Unless the hereditary line of the "PC" as we know it is wholly extinguished, there will always be an x86 CPU floating around somewhere in the block diagram (and, in other types of systems, likely an ARM CPU); but it is already the case that the CPU has gotten fast enough to hit diminishing returns for many applications, and the GPU (or just the embedded h.264 decoder) is where the action is.

Re:CPU, GPU... (1)

Idbar (1034346) | more than 3 years ago | (#34175682)

Isn't NVidia already releasing CPU/GPU hybrids with ARM processors on their Tegra line?

Re:CPU, GPU... (2, Informative)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#34176446)

They are definitely releasing Tegra-branded ARM SoCs that include their own GPU tech as one of the functional blocks. If that is what you mean by "hybrid", then yes.

To the best of my knowledge, though, neither Nvidia, with their ARM SoCs, nor Intel with their on-package GMAs, nor AMD with their upcoming on-die ATI tech are creating what you might call a full "hybrid"(ie. a CPU whose instruction set also includes GPU-esque instructions, like MMX or SSE on steroids). At present, they are all just more heavily integrating, for economic and latency reasons, a discrete "CPU" block and a discrete "GPU" block.

Re:CPU, GPU... (1)

QuantumBeep (748940) | more than 3 years ago | (#34177880)

They are falling down on the execution; the additional cost of Ion 2 over GMA seems to push netbooks and nettops into the low-end desktop PC price range.

About 6 days from never (1)

Sycraft-fu (314770) | more than 3 years ago | (#34174916)

What may happen, and what AMD would like to see happen, is for GPU functions to become a part of the CPU, so that GPUs go away because CPUs can do it. However, that'll be because CPUs have GPU-like logic in addition to their own. GPUs are great and blazingly fast... but only at some things. I've written up the list before and don't feel like doing it again, but more or less you find that some things run great on a GPU and others run for shit. More or less it has to be floating point, massively parallel, and have very little branching. Some things meet those criteria, others don't.

CPUs are CPUs because they are good at everything. They can do any task well. GPUs are specialized processors; they do only certain tasks, but do them much better. You can go a step further to ASICs, Application Specific Integrated Circuits. They do one and only one thing, but are amazing at it. ASICs are how a tiny, cheap, 5-watt switch can forward 16 Gb/sec of traffic. Try having a computer do that and you'd need a beefy system to process all of it, but a tiny ASIC on the switch can handle it. However, it does nothing but switch packets; it is completely inflexible in its design.
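A small illustration of that split (the workloads and sizes are made up for the example):

```python
import numpy as np

n = 1_000_000
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)

# GPU-friendly: floating point, every element independent, no branching.
saxpy = 2.0 * x + y

# GPU-hostile: each step depends on the previous result and branches on the data.
acc = 0.0
for v in x[:1000]:                  # trimmed so the serial toy loop stays quick
    acc = acc + v if acc < v else acc * 0.5
```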

Re:About 6 days from never (2, Insightful)

Rockoon (1252108) | more than 3 years ago | (#34175156)

What may happen, what AMD would like to see happen, is for GPU functions to become a part of the CPU, that GPUs go away because CPUs can do it. However that'll be because CPUs have GPU like logic in addition to their own.

The problem, as Intel found out with Larrabee, is that a cache that works well for CPU tasks does not work well for GPU tasks, and vice versa. For a GPU, bandwidth is everything, while for a CPU it's the latency that matters most.

Our CPUs' L1 caches are 32K/64K in size because smaller caches have significantly lower latencies than larger ones. It's quite obvious that a 64K cache is way too small for a GPU, which could literally process 64K of data in only a few of its clock cycles.

Intel never could solve the problem. Larrabee could either be a GPU with poor CPU-capabilities, or a multi-core CPU with poor GPU-capabilities.

Maybe in the future... not with today's memory types.
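A back-of-the-envelope version of the 64K point above (the lane count and width are assumptions for illustration, not the specs of any particular GPU):

```python
lanes = 512                      # assumed parallel ALU lanes
bytes_per_lane_per_clock = 4     # one float32 operand each, per clock
cache_bytes = 64 * 1024
print(cache_bytes / (lanes * bytes_per_lane_per_clock))   # 32.0 -- a few dozen cycles to stream 64 KB
```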

Re:About 6 days from never (1)

David Greene (463) | more than 3 years ago | (#34175814)

The problem, as Intel found out with Larrabee, is that a cache that works well for CPU tasks does not work well for GPU tasks, and vise-versa. For a GPU the bandwidth is everything, while for a CPU its the latency that matters most.

That's often application-dependent, but generally you are correct. One obvious solution is to implement specialized caches used by different instructions. Just as we have instruction caches and data caches, it's not hard to imagine scalar caches and vector/streaming caches. We certainly have enough transistors.

This is not a new idea. People have been talking about paired temporal/non-temporal caches for years.

Re:About 6 days from never (1)

robthebloke (1308483) | more than 3 years ago | (#34177092)

I assume you know Larrabee has been reborn as Knights Ferry and Knights Corner...? I for one am looking forward to getting my hands on the final hardware. For my purposes (film FX), it's exactly what we've been screaming out for, for over a decade...

here is a problem I have with these cards (1)

Osgeld (1900440) | more than 3 years ago | (#34174594)

It's not really driving down the cost of the old cards as much as they used to. How much does this thing cost, and why does an 8800GTX still cost between $200 and $400? It's not uncommon to see GTX 280s for nearly 500 bucks.

Yes, I know they have cheaper cards (I have a GTS 250), but still.

Re:here is a problem I have with these cards (1)

spire3661 (1038968) | more than 3 years ago | (#34174666)

I think the 8800 GTX is an anomaly in pricing due to the difficulty of finding replacement parts now, and how very popular it was in SLI rigs. It keeps its value as a replacement part.

Re:here is a problem I have with these cards (1)

QuantumBeep (748940) | more than 3 years ago | (#34177916)

The 8800GTX is not "worth" $200 of gaming performance. It's worth maybe $65 in a few select cases where power consumption isn't a concern.

It costs that much because they're a niche item.

Purely out of curiosity... (2, Interesting)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#34174604)

This GTX 580 is a 3 billion transistor chip (not counting the RAM on the same card, just the GPU die itself). Does anybody know what year the number of transistors on the entire planet reached the number on this die?

Re:Purely out of curiosity... (3, Funny)

Lieutenant_Dan (583843) | more than 3 years ago | (#34174706)

August 29th, 1997. At that point we lost all communication with Skynet.

Re:Purely out of curiosity... (4, Funny)

sexconker (1179573) | more than 3 years ago | (#34175168)

August 29th, 1997. At that point we lost all communication with Skynet.

And Michael Jackson turned 39.
Coincidence? You decide.

Re:Purely out of curiosity... (1, Insightful)

Anonymous Coward | more than 3 years ago | (#34174948)

That's only 7462 Intel 286 processors. A very low number. So somewhere between 1954 and 1982.

Re:Purely out of curiosity... (2, Informative)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#34175310)

Yeah, I'd assume that it is somewhere around the time that integrated circuits hit the street. 3 billion discretes, especially with what transistors used to cost, seems a touch unlikely; but it cannot have been long after the integrated stuff became available.

Re:Purely out of curiosity... (0)

Anonymous Coward | more than 3 years ago | (#34175038)

I wonder how long it will be before we'll be able to say "There are as many transistors on a GPU (or CPU, as the case may be) as there are people on the planet". One year? Two?

Re:Purely out of curiosity... (1)

interkin3tic (1469267) | more than 3 years ago | (#34175106)

I don't know, but I do know that we can be sure that transistors are not people. Hard drives though are another story. CAVIAR GREEN IS PEOPLE!!!

Get 'em while they're hot (1)

leptechie (1937384) | more than 3 years ago | (#34174632)

I'm more mystified by the form factors of these things with every new release - I really miss being able to actually see the PCB on my new hardware. At what point is it going to be more expedient for me to simply place my GPU in its own little box outside the case to expedite cooling, perhaps with a dedicated power supply and of course lots of fans... a Graphics Appliance, if you will.

Re:Get 'em while they're hot (2, Interesting)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#34174856)

With a 244 watt TDP, I suspect that they need every inch of the front of the card, and are constrained only by PCIe form-factor concerns from using more of the back, just to keep the thing from burning out without a fan that sounds like a legion of the damned every time you boot the thing. The entire front of the card is a combination of heatsink(and not your extruded aluminum jobby, a phase-change vapor chamber unit) and a shroud to direct air flow.

If you want to see the board, back off a few price/performance tiers, and you'll get a 90% bare PCB with a dinky little slug of aluminum or copper on the main chip.

49.4 GigaTexels/sec Fillrate... (1)

digitaldc (879047) | more than 3 years ago | (#34174672)

...ought to be enough for anybody.

Re:49.4 GigaTexels/sec Fillrate... (0)

Anonymous Coward | more than 3 years ago | (#34174828)

why does that remind me of Bill Gates...

Re:49.4 GigaTexels/sec Fillrate... (1)

Joce640k (829181) | more than 3 years ago | (#34175296)

I dunno, why does it remind you of Bill Gates? It's not like he ever said that or anything...

Re:49.4 GigaTexels/sec Fillrate... (1)

aliquis (678370) | more than 3 years ago | (#34175724)

Compare that to the DS ;)

Next gen? (2, Interesting)

Issarlk (1429361) | more than 3 years ago | (#34174880)

So, this card is about as fast, and consumes about the same power as a 480, but it's "next gen" anyway ?

That looks like a 480 with the 4 replaced by a 5. Hardly a revolution.
Just watercool the 480, it's how it's supposed to be used.

Re:Next gen? (2, Informative)

Issarlk (1429361) | more than 3 years ago | (#34175544)

To those rating me as troll:

From TFA:
Unigine Heaven Benchmark v2.0: 18% better
580: 879
480: 742

Quake Wars: 14% better
580: 176 FPS
480: 154 FPS

Far Cry 2: 14% better
580: 109 FPS
480: 95 FPS

Aliens vs. Predator: 16% better
580: 43 FPS
480: 37 FPS

...

Power consumption: 96% of that of the 480
580: 377
480: 392
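For reference, those percentages follow directly from the quoted scores (a quick check using only the numbers listed above):

```python
scores = {
    "Unigine Heaven 2.0":  (879, 742),
    "Quake Wars":          (176, 154),
    "Far Cry 2":           (109, 95),
    "Aliens vs. Predator": (43, 37),
}
for name, (gtx580, gtx480) in scores.items():
    print(f"{name}: {100 * (gtx580 / gtx480 - 1):.1f}% faster")
# Roughly 14-19% faster across the board, at about 96% of the 480's measured power draw.
```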

Woot, a 15% increase in performance for the same consumption! Clearly the 580 is, "as its name suggests... a next-gen product."
If you mean same-gen as the 480, right. If you mean next-gen compared to the 480, clearly not.

Synthetic Benchmarks - (3, Interesting)

Fibe-Piper (1879824) | more than 3 years ago | (#34174966)

Does anyone assume that the synthetic benchmarks achieved by either AMD or NVIDIA are representative of anything more than these companies' efforts to tweak their driver sets against the pre-existing criteria for getting a "good score"?

Both companies, I believe, have been accused over the years of doing just that, while pointing the finger at the other for taking part in shenanigans.

So go read some non-synthetic ones (2, Informative)

Sycraft-fu (314770) | more than 3 years ago | (#34175620)

HardOCP is famous for their real gameplay ratings. They go and actually play through the game while testing performance. They then find the highest settings that the reviewer finds playable. While there is some subjectivity to it, they do back it up with FPS numbers, and it is the same reviewer trying everything out. So it gives real, in-game, actually-playing results. I find it maps nicely to what actually happens when I get a card and play games.

http://hardocp.com/article/2010/11/09/nvidia_geforce_gtx_580_video_card_review [hardocp.com]

Re:So go read some non-synthetic ones (1)

Fibe-Piper (1879824) | more than 3 years ago | (#34177136)

My point was not meant to imply that it is impossible to quantify the performance of graphics cards - rather that breathless statements like the following:

"Take synthetic tests like Uningine into account and the GTX580 can be up to twice as fast."

...need to be taken with a HUGE grain of salt

Re:Synthetic Benchmarks - (1)

Bios_Hakr (68586) | more than 3 years ago | (#34175986)

>>Does anyone assume that the synthetic benchmarks achieved by either AMD or NVIDIA are representative of anything more than these companies' efforts to tweak their driver sets against the pre-existing criteria for getting a "good score"?

In short, no.

However, we have sites like HardOCP and AnandTech that run the cards through a variety of games and give the results. You can look at the results and decide if your current card is better or worse than the new card.

If you are trying to decide between a brand-new NVIDIA and a brand new ATI, well, they will both be pretty damn good. And if one gets a few FPS more, well, most people probably won't care.

But if you are deciding to purchase an upgrade, the new card will almost certainly be better than what you have now. Then it becomes a cost/benefit analysis. And that's a personal decision.

Re:Synthetic Benchmarks - (1)

ifrag (984323) | more than 3 years ago | (#34176162)

I thought this was why software like 3DMark Vantage has simulations which are basically equivalent to the rendering performed in actual games. There are two full runs of very detailed 3D scenes which actually must be done by the card; there's no way to sneak around actually rendering them. The user is presented with the rendering on screen in real time while running the benchmark.

Also, the large gauntlet of actual game benchmarks helps give weight to any synthetics actually meaning something. I haven't seen a video card review any time recently that didn't include at least 4 or so actual game tests with it. Nobody is really going to believe a review that doesn't include at least an assortment of those.

Re:Synthetic Benchmarks - (1)

QuantumBeep (748940) | more than 3 years ago | (#34178034)

Remember the FX5900 and 3DMark03?

That's why we don't trust synthetics.

Awww come on... (0, Flamebait)

sudden.zero (981475) | more than 3 years ago | (#34174972)

everyone knows if it's an NVIDIA it's gotta be good....wink..wink.

anti-overclocking technology (1)

0111 1110 (518466) | more than 3 years ago | (#34175040)

I hope someone can figure out how to bypass the anti-overclocking tech. Otherwise, AMD is going to have an easier ride this round. Why are all manufacturers so damn evil? What's wrong with a little overclocking to boost speeds? When I'm spending this much money on a video card the least they could do is allow me to boost my speeds a little. They've also made water cooling the card pointless with their new current limiter. It's so easy to hate Nvidia. I'll buy from whichever company has the fastest card (without totally unreasonable pricing), but Nvidia's behavior is just so low. If anything they should be making it easier to overclock as the motherboard manufacturers do. An easy to adjust voltage control would be a much better feature than a current limiter. It would even give you the ability to undervolt to run quiet and cool.

Re:anti-overclocking technology (1)

Tr3vin (1220548) | more than 3 years ago | (#34175352)

There is a difference between motherboards and GPUs/CPUs. The motherboards use overclocking as a feature that you pay extra for. GPUs and CPUs sell based on their clock, so the idea of an end user overclocking their chip is frightening to the manufacturer. They want you to pay extra, just like you did for the motherboard, for the overclocked chip. An easy to overclock chip isn't an enticing business move. It is far better to sell small gains for a large price.

That's Crazy (1, Troll)

MogNuts (97512) | more than 3 years ago | (#34175150)

Am I the only one dumbfounded by how incredible these things are, and how much progress has been made with graphics cards and PC gaming? You can get a graphics solution running Crysis at max settings at almost 60 fps for only $300 (GTX 460 SLI). With that, you can run every other game at max settings, 1900x1200, at like 100 FPS. I don't ever remember PC gaming giving this much bang for your buck or making this much available.

I'm not knocking consoles, they have their places. But it just hurts to play any game at the low resolution of a console and its low graphics settings. And I find myself thinking Steam is even easier plug-and-play than the consoles are. And all the new advanced graphics features are in the new cards. I can't imagine the consoles ever catching up. And it wouldn't be economical for the console makers to do so. I think the future of gaming really may lie in the PC.

Just my opinion. Please, no flamewars here. I have both a PC and a 360 and I appreciate them both in different ways. I'm just floored at how far PC gaming has come in only 3 years.

P.S. I know the responses are coming, so I'll just put a disclaimer that I have my computer hooked up to my HDTV in my living room and yes I run most recent and current games at max or high graphics settings on an ancient 8800 GTS 512 with a Core 2 Duo and 2GB RAM.

Re:That's Crazy (1)

whiteboy86 (1930018) | more than 3 years ago | (#34175772)

how much progress has been made with graphics cards

Yep, these are now about 10x as power hungry as they used to be..

Re:That's Crazy (1)

PitaBred (632671) | more than 3 years ago | (#34175780)

Just remember to factor in your power usage prices. If you game a lot, it might be more economical to buy more of a card upfront and it'll use less power overall.

Yeah but... (1)

blind biker (1066130) | more than 3 years ago | (#34175416)

Is it powerful enough to run Civilization V?

Re:Yeah but... (1)

Massacrifice (249974) | more than 3 years ago | (#34176822)

Is it powerful enough to run Civilization V?

Only if you run a Beowulf cluster of them.

Terrible Summary (2, Interesting)

Godai (104143) | more than 3 years ago | (#34175490)

The /. summary ends with:

It can put up in-game benchmark scores between 30% and 50% faster than AMD's current flagship single-GPU, the Radeon HD 5870.

But if you read the original article, the one flaw in the (otherwise good) nVidia card is that it still loses to the 5970, which is -- according to the article -- 'about a year old'. So why is the article mentioned in the summary talking about the 5870 as if it's the flagship? Clearly the 5970 is. Or am I missing something?

Re:Terrible Summary (1)

Delarth799 (1839672) | more than 3 years ago | (#34175646)

See the little part there where it mentions flagship SINGLE-GPU? SINGLE is the key word, meaning one. The 5970 has TWO GPUs on the card.

Re:Terrible Summary (1)

imaginieus (897756) | more than 3 years ago | (#34175686)

The 5970 has two GPUs built into one card. It would be more comparable to two nVidia cards running in SLI than to a single-GPU card.

Re:Terrible Summary (0)

Anonymous Coward | more than 3 years ago | (#34175856)

No, what's comparable to the 5970 is another single-card solution of the same size. What primarily matters to the customer is the one-card performance/size/heat/cost. What matters to the financials of each company is primarily the performance per unit of silicon area. It matters practically to *no one* that the 5970 is an MCM instead of an SCM with larger silicon area per die, other than how it impacts the above criteria.

It's true that 2x 5970 might not scale as well as 2x GTX 580 because the 5970 is already technically at 2x, but that shows up in benchmarks.

Re:Terrible Summary (1)

kevloral (786401) | more than 3 years ago | (#34176268)

Common users just don't care if the graphics card they are about to buy has one GPU, a couple of them, or a hundred. They care about performance, cost, and possibly energy consumption. Therefore, comparing a Radeon 5970 to a GTX 580 seems completely justified to me given that the MSRP is nearly the same for both of them. On the other hand, comparing a GTX 580 ($500) to a Radeon 5870 ($300) does seem a bit misleading to me.

Re:Terrible Summary (0)

Anonymous Coward | more than 3 years ago | (#34176552)

Mod parent up. Which do you prefer for parallel tasks: a faster and more energy-efficient dual-CPU system, or a slower and power-hungry single-CPU system of the same size?

Re:Terrible Summary (2, Informative)

blankinthefill (665181) | more than 3 years ago | (#34175822)

It can put up in-game benchmark scores between 30% and 50% faster than AMD's current flagship single-GPU, the Radeon HD 5870.

The 5970 is a dual-GPU solution. TBH, it's no surprise that it's faster than a single-GPU solution that is a year newer. I would expect the last-gen card in a dual-GPU setup (this, or SLI/Crossfire) to outperform the latest next-gen card, especially when the new card is really just an iteration of the architecture used in the last gen. Nothing really surprising about it at all. And I bet you if you get two of the GTX 580s in SLI, they'll stomp the 5970. That's a bit more of an apples-to-apples comparison (although not 100%, since there are specific bottlenecks that tend to keep two GPUs on a card from performing as well as two discrete GPUs in SLI).

Re:Terrible Summary (1)

makomk (752139) | more than 3 years ago | (#34177274)

I would expect the last gen card in a dual GPU setup (this, or SLI/Crossfire) to outperform the latest next gen card, especially when the new card is really just an iteration of the architecture used in the last gen.

Why? Bear in mind that there aren't any new features in the "next gen" 580 over the previous generation - the only improvement is performance. Having about the same performance as a card that's been on the market for a year, at a similar power consumption and price tag, isn't exactly great progress.

And I bet you if you get two of the GTX 580's in SLI, they'll stomp the 5970.

I'd hope so, given that each of those two cards individually costs the same as the 5970 and uses nearly as much power. For a dual-580 setup, we're talking $1,000 just for the cards alone, not including the cost of the computer required to run them. (Remember - you'd need a power supply rated not far short of 1 kW and an expensive premium motherboard. Unlike AMD's Crossfire, SLI only runs on approved boards.) Sure, there's probably nothing that'd beat it - but who has the money for that kind of setup?
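Roughly tallying that up (card price and TDP are as quoted in this thread; the figure for the rest of the system is an assumption):

```python
card_price, card_tdp = 500, 244            # per GTX 580, as quoted above
rest_of_system = 200                       # assumed watts for CPU, board, drives, fans
gpu_load = 2 * card_tdp
print(f"cards: ${2 * card_price}, GPU TDP: {gpu_load} W, system load: ~{gpu_load + rest_of_system} W")
# ~$1,000 in cards and ~690 W of load -- hence a PSU rated well above 800 W once headroom is added.
```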

Re:Terrible Summary (1)

webmistressrachel (903577) | more than 3 years ago | (#34175844)

The Radeon 5870 is a single-GPU card, as is the Geforce GTX 580. The 5970 is a dual-GPU card, and so can't be fairly compared to the NVidia board.

The article compares ATi's fastest single GPU card (their "flagship") to NVidia's fastest single GPU card, and finds NVidia's to be faster.

Re:Terrible Summary (1)

Godai (104143) | more than 3 years ago | (#34176050)

Thanks all, I get it now. I didn't see mention of the distinction between the 5970 and the 5870 (which is fair, it's a 580 review, not an ATI review). Though I was skimming, so it's possible they pointed that out early and I just missed it.

Though why is the dual-GPU card using less power than the single-GPU nVidia card? Is there some subtle interplay between dual GPU processors I'm not aware of that makes them use less power, or is the nVidia 580 just a hog?

Re:Terrible Summary (0)

Anonymous Coward | more than 3 years ago | (#34176388)

The 5970 has two GPU cores, so it's not in the same class as the single-core 580.

Fast open source drivers coming.. (2, Interesting)

xiando (770382) | more than 3 years ago | (#34175576)

..never, as long as Nvidia refuses to release even a hint of documentation and insists that GNU/Linux users accept their Binary Blob World Order. I don't really care if this new card is faster than the fastest AMD card; at least I can (ab)use the AMD cards for something. I still have a Nvidia PCI (not PCIe) card on some shelf which does NOT work with the Binary Blob under GNU/Linux, nor does it work with the nouveau joke of a free driver.

Re:Fast open source drivers coming.. (1)

Hatta (162192) | more than 3 years ago | (#34176130)

So how good is the AMD open source driver? How much luck have you had running 3d games under Wine with it?

Re:Fast open source drivers coming.. (0)

Anonymous Coward | more than 3 years ago | (#34177434)

So how good is the AMD open source driver? How much luck have you had running 3d games under Wine with it?

The r600c driver on an R4980 gives me steady 60 fps in Warsow. Maybe it's being capped by Vsync, which I didn't bother to look into. I don't know about Wine / other.

Re:Fast open source drivers coming.. (1)

xiando (770382) | more than 3 years ago | (#34178028)

So how good is the AMD open source driver? How much luck have you had running 3d games under Wine with it?

Both the AMD r600c and the new r600g free software drivers are slow, and the Phoronix benchmark story is that the evil binary blob is faster than both of them. Still, there is a very big difference between AMD and Nvidia; AMD worker-drones regularly work on the driver and the OpenGL support through Mesa, and they are making documentation available as fast as they can write it. As for Wine: I do not have a license for any 3D games, or other Windows software for that matter, so I haven't tried running anything at all under Wine. If you know any good free software Windows games then I could download and try them. I do know you can play the Linux version of World of Padman using the free AMD drivers, and it's quite fun.

Re:Fast open source drivers coming.. (0)

Anonymous Coward | more than 3 years ago | (#34176272)

I'm curious, is this an "I don't trust the code unless I can read it?" or an "ALL CODE SHOULD BE FREE BECAUSE ALL THINGS BELONG TO THE PEOPLE" argument?

I can understand one, but the other is grounds for dismissal.

I have all sorts of nvidia cards laying around and they all "work" with the free drivers enough to run desktop environments. Unless you are trying to do 3d/gaming, but then you'd happily install the nvidia driver which works easily.

Re:Fast open source drivers coming.. (1)

Shark (78448) | more than 3 years ago | (#34176956)

I think what scares them most is that an open source driver would not intentionally cripple OpenGL rendering. The three-to-four-times markup on Quadro cards is nasty business if you ask me. I'm fine with a card being more expensive because it offers you testing and support on professional apps, but comparing a Quadro 3700 with an 8800GT does not shine a very good light on nVidia.

Mind you, I suspect AMD is equally bad, I just never looked at that market in detail.

Re:Fast open source drivers coming.. (1)

Machtyn (759119) | more than 3 years ago | (#34177820)

Heck, I just loaded Ubuntu for the first time on my gaming computer after a hard drive crash. (I've been threatening to do this for a long time.) While my experience has been mostly positive (Canonical has done a very good job of making the install process for software and drivers easy), I'm still mystified why I can't turn on all the nice graphics features of my OpenGL games... yes, they're running through Wine. Perhaps I haven't hit the right sites that contain the info I need. At this point, I'm not sure whether to blame the card/driver, the game designer (for making it Windows-only), or Wine's inability to pass the full capability of the hardware to the application at hand.

NVidia GTX N+110: Scrap your room heater edition! (1)

Seth Kriticos (1227934) | more than 3 years ago | (#34176008)

The fastest single GPU consumer card available. TDP 244W! You can safely scrap your room heater now!

Re:NVidia GTX N+110: Scrap your room heater edition (1)

maestroX (1061960) | more than 3 years ago | (#34177450)

nVidia's previous flagship GTX 480 uses more power both at high load and at idle, and the competitor's ATi 5xxx range uses more juice at idle, so it's definitely an improvement in all areas except price.

As its name suggests? (1)

highfidelitychris (1448915) | more than 3 years ago | (#34177532)

The name only suggests to me that they need a naming convention instead of just mashing the keyboard each release.

Summary inaccurate (1)

Khyber (864651) | more than 3 years ago | (#34178014)

Looks like HardOCP hasn't been testing AMD's most recent flagship product, the Radeon HD 6xxx series, a single card of which eats a GTX 480 for breakfast.
