
NVIDIA Driver Update Causing Video Cards To Overheat In Games

Soulskill posted more than 4 years ago | from the i-thought-this-only-happened-to-ati dept.

After a group of StarCraft II beta testers reported technical difficulties following the installation of NVIDIA driver update 196.75, Blizzard tech support found that the update introduced fan control problems that were causing video cards to overheat in 3D applications. "This means every single 3D application (i.e. games) running these drivers is going to be exposed to overheating and in some extreme cases it will cause video card, motherboard and/or processor damage. If said motherboard, processor or graphic card is not under warranty, some gamers are in serious trouble playing intensive games such as Prototype, World of Warcraft, Farcry 3, Crysis and many other games with realistic graphics." NVIDIA said they were investigating the problem, took down links to the new drivers, and advised users to revert to 196.21 until the problem can be fixed.

155 comments

Wow (0)

Anonymous Coward | more than 4 years ago | (#31369038)

That's hot.

Re:Wow (5, Funny)

Anonymous Coward | more than 4 years ago | (#31369102)

Prototype, World of Warcraft, Farcry 3, Crysis

One of these things is not like the others~

Re:Wow (0)

Anonymous Coward | more than 4 years ago | (#31369650)

You mean FarCry 3 or WoW?

Wow realistic? (0)

Anonymous Coward | more than 4 years ago | (#31369072)

WoW realistic? psssssssssshawwwww

Re:Wow realistic? (3, Insightful)

Beelzebud (1361137) | more than 4 years ago | (#31369202)

It's not realistic, but it can be a very demanding game, especially when raiding with 24 other people, and a room full of boss spells going off at once.

Glad it didn't fry mine. (4, Interesting)

Beelzebud (1361137) | more than 4 years ago | (#31369078)

Oddly enough, I've played World of Warcraft and Fallout 3 quite a bit since upgrading to these drivers, and my performance has been much better than with the previous Win7 x64 driver. I hear the fan ramping up like it should, and the card hasn't gotten close to overheating. Maybe it's only affecting certain models. I have an 8800 Ultra.

Re:Glad it didn't fry mine. (1)

brucmack (572780) | more than 4 years ago | (#31369608)

Yes, it probably only affects newer cards.

The newer cards have so many execution units that they aren't actually able to run all of them full-out at the same time - it would take too much power and produce too much heat. The logic behind this is that for most applications, total performance is bottlenecked somewhere, so the entire chip is never going to be active at once. Apparently something in their driver update has either changed this or (more likely) broken the logic that throttles the card back if it is running too hard.
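The throttling behavior being described can be sketched in a few lines of Python. To be clear, every name and limit below is invented for illustration; NVIDIA's real governor lives in driver/firmware and is far more involved:

```python
# Hypothetical sketch of a thermal/power governor of the kind described
# above: step the clock down when either limit is exceeded, and back up
# when there is headroom. All names and numbers are made up.

def next_clock(temp_c, power_w, clock_mhz,
               temp_limit=95.0, power_limit=200.0,
               min_clock=300, max_clock=1500, step=50):
    if temp_c > temp_limit or power_w > power_limit:
        return max(min_clock, clock_mhz - step)  # throttle back
    return min(max_clock, clock_mhz + step)      # regain performance

# A broken update could effectively skip the limit check, letting the
# chip run every unit flat-out until it overheats.
```

If the check in the first branch is bypassed, the sketch degenerates to "always raise the clock," which matches the failure mode the parent is speculating about.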

Re:Glad it didn't fry mine. (0)

Anonymous Coward | more than 4 years ago | (#31370248)

Unless you work at nVidia, I'm not buying it. What you say is equivalent to building a quad core processor but only being able to run three cores at one time. I don't think you know what you're talking about and, as such, you are taking up valuable space.

The cards are designed inside of a thermal and power envelope, and this is part of where the 6- and 8-pin PCIe power plugs straight from the PSU come in.

Also: The article is pretty uninformative. Are these cards that are burning out laptop parts or desktop parts? But, it sounds as though it is just a fan issue. And yes the firmware/driver should detect the temp rising and throttle the card back, but it doesn't mean that the whole card can't run all out...it just means that someone put the wrong fan ramp up / temps table into the driver.

Re:Glad it didn't fry mine. (2, Insightful)

maxwell demon (590494) | more than 4 years ago | (#31370478)

While I don't know much about GPUs, I think it makes sense. AFAIK the GPU contains quite specialized hardware for certain tasks; unlike the CPU cores which are all identical generic hardware. In which case it indeed makes sense to have more units in total than can be used at once.

To fix your CPU analogy:

Imagine a CPU which has different types of cores. Some cores are efficient integer units, but don't do floating point. Others are very good at floating point, but only have rudimentary integer capabilities. Floating-point-heavy applications usually don't do much integer processing, and vice versa. Now imagine that some physical limitation (heat, power supply, whatever) only allows a certain number of cores to be active at the same time, but die space allows for more. If you put exactly as many cores on your CPU as your physical limitations allow, then you have to decide: either you put many floating point cores on your die, and you'll have excellent floating point performance but suck at integer-heavy applications; or you put many integer cores on it, and your integer performance will be great but you'll suck at FP; or you use about the same number of integer and floating point units, and get mediocre performance at both.

However if you put more cores on the die than you can run at the same time, then you can give the FP-heavy app many FP cores and get great FP performance (the lack of fast-integer cores won't hurt the FP-heavy app), and give the integer-heavy application many integer cores and get great integer performance (the lack of fast-FP cores won't hurt the int-heavy application).
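The parent's argument can be made concrete with a toy allocator (all core counts and the power budget are invented for illustration):

```python
# Toy model of the point above: a die with more specialized cores than
# its power budget can light up at once can devote the whole budget to
# whichever core type the current workload needs.

def best_performance(n_int, n_fp, budget, workload):
    """Activate up to `budget` cores, preferring the type the
    workload ('int' or 'fp') benefits from. Returns a tuple of
    (preferred cores active, other cores active)."""
    preferred, other = (n_int, n_fp) if workload == "int" else (n_fp, n_int)
    used = min(preferred, budget)
    used_other = min(other, budget - used)
    return used, used_other

# 8 int cores + 8 fp cores, but only 8 powered at once: an fp-heavy app
# still gets 8 fp cores and an int-heavy app 8 int cores, which a fixed
# 4+4 die could never offer either of them.
```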

Re:Glad it didn't fry mine. (1)

Prof.Phreak (584152) | more than 4 years ago | (#31370474)

or some intern optimized a complicated piece of logic by noticing it's essentially an idle loop---a very important idle loop.

Re:Glad it didn't fry mine. (2, Funny)

Minwee (522556) | more than 4 years ago | (#31370718)

or some intern optimized a complicated piece of logic by noticing it's essentially an idle loop---a very important idle loop.

You mean it wasn't a Speed-up Loop [thedailywtf.com] ?

YouTube RivaTuner Guide to work around this (1)

allcoolnameswheretak (1102727) | more than 4 years ago | (#31371144)

So nVidia FINALLY acknowledged that there is a problem with their newer graphics cards.

I've been having this problem for over 5 months, since I got a new GTX 275. Games would crash or freeze because the fan duty cycle would stay fixed at 40% even with temperatures higher than 75°C. I reported my problems to the nVidia forums, but people there said it had nothing to do with the driver, and was probably a manufacturer BIOS or chipset issue. Still, since the problem can be solved in software using RivaTuner, I don't see why nVidia can't take responsibility and provide a fix for this issue in their drivers.

Here is a YouTube guide on how to configure RivaTuner so that you can play your games again:
http://www.youtube.com/watch?v=rXr6IIj1sLY&feature=player_embedded# [youtube.com]

Keep in mind that for some reason, you have to have RivaTuner -AND- the Hardware Monitoring both running for the settings to take effect.
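For the curious, a fan profile of the sort RivaTuner applies is essentially a temperature-to-duty-cycle lookup. A rough sketch in Python (the breakpoints here are invented for illustration, not RivaTuner's defaults):

```python
# A manual fan curve: duty cycle rises with GPU temperature instead of
# sticking at 40% the way the broken driver reportedly did.
# (temp °C threshold, duty %) pairs, invented for illustration.
FAN_CURVE = [(40, 40), (60, 55), (75, 75), (85, 100)]

def fan_duty(temp_c, curve=FAN_CURVE):
    duty = curve[0][1]
    for threshold, percent in curve:
        if temp_c >= threshold:
            duty = percent  # take the highest breakpoint we've passed
    return duty

# The reported failure mode was 40% duty even above 75 °C; a curve like
# this would instead push the fan to 75% there and 100% past 85 °C.
```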

Re:Glad it didn't fry mine. (1)

gad_zuki! (70830) | more than 4 years ago | (#31371118)

>Oddly enough, I played World of Warcraft and Fallout 3 quite a bit since upgrading to these drivers, and my performance has been much better than the previous win7 64x driver.

If you read the release notes you'll see big performance gains on a lot of games from this driver. This is something I've never seen from Nvidia. Anyone have the details on what happened? Maybe they found some new way to be efficient or found some long standing bug.

Crappy Nvidia driver has multiple issues (1)

flyingfsck (986395) | more than 4 years ago | (#31369104)

Apart from the fan problem, is this version more stable? The last version causes my laptop to crash every few minutes, making it unusable, so I have to run the VESA driver.

Re:Crappy Nvidia driver has multiple issues (1, Insightful)

Anonymous Coward | more than 4 years ago | (#31369148)

Laptop? You should probably use the drivers from your laptop manufacturer, they often customize things to get clock frequencies etc right for their specific model.

Re:Crappy Nvidia driver has multiple issues (0)

Anonymous Coward | more than 4 years ago | (#31369280)

Or they don't offer drivers at all for certain operating systems.

Re:Crappy Nvidia driver has multiple issues (3, Insightful)

yacc143 (975862) | more than 4 years ago | (#31369318)

What a stupid recommendation. I mean, they usually stop providing updates the moment the next model comes out, and consumer laptop models seldom have a sales life much beyond 6-12 months (and that assumes you buy one on the first day it's out). Plenty of consumer laptops stay quite usable way longer than 12 months.

Hence you are forced to use the upstream drivers.

Re:Crappy Nvidia driver has multiple issues (1)

ZosX (517789) | more than 4 years ago | (#31369368)

I always use the untouched nvidia drivers for my laptop. If they can't figure out how to make drivers for their own chipsets then shame on them. I mean, most integrated nvidia chipsets have fairly fixed clock frequencies. I've even overclocked my laptop's GPU slightly, without a great deal more heat generated. (thanks evga!) I always get the best performance from stock nvidia drivers. I tried the dox drivers but found them to be problematic and no faster than the nvidia stock. I guess YMMV of course......

Re:Crappy Nvidia driver has multiple issues (1)

SecondaryOak (1342441) | more than 4 years ago | (#31369574)

Maybe they do that when their machine is first released. I doubt they do that with each new driver version.

Re:Crappy Nvidia driver has multiple issues (1)

kannibal_klown (531544) | more than 4 years ago | (#31370294)

Laptop? You should probably use the drivers from your laptop manufacturer, they often customize things to get clock frequencies etc right for their specific model.

In my limited experience, the laptop manufacturer releases drivers at a very slow pace and they stop releasing new versions after a while. After some point the most they might do is release a new version if a new OS comes out.

For the most part that's alright (especially in non-gaming environments), but some games need a more recent version of a driver due to a fix or a new feature.

I tried playing Champions Online with the manufacturer's 1+ year-old video drivers on my laptop after a reformat. The game warned me of old drivers and performed horribly. I then installed something more recent directly from nVidia: the message went away and the performance went way up.

Re:Crappy Nvidia driver has multiple issues (4, Informative)

scalarscience (961494) | more than 4 years ago | (#31369550)

This issue is related to automatic fan control not working due to improper registry keys, so GPUs that run warm (the 9800 series, for instance) can quickly overheat and potentially suffer damage. I'm having no issues with mine, but I set fan profiles manually, as I'm using a machine that has a very hot MCH & FB-DIMMs (2008 Xeon) and don't want the GPU contributing more. However, for anyone interested (and using a GT200, or at least G80/G92 on up), here's the fix: http://forums.nvidia.com/index.php?showtopic=161767 [nvidia.com]

Re:Crappy Nvidia driver has multiple issues (1)

mmalove (919245) | more than 4 years ago | (#31369674)

I wonder if this is what was happening to me then?! I run a 9800M GS (laptop version of the 9800). Been overheating for months now, finally resorted to using ntune to underclock the processors by 25%. Fixed the crashing with minimal impact to WOW (hurray for modern GPUs on a 5 year old game).

Re:Crappy Nvidia driver has multiple issues (1)

realityimpaired (1668397) | more than 4 years ago | (#31369996)

It being a laptop, usually there isn't actually a fan specific to the video card. My last laptop had an 8600M GT 256MB (I only retired it on Wednesday for the new one; that system was almost 3 years old). Inside the case, there was a heatpipe from the video card to a set of fins that was right next to the fins for the CPU heatsink (with a similar heatpipe setup), and a separate fan that blew air through both sets of fins. The cooling fan wasn't actually connected to either the graphics card or the CPU, and was controlled directly by the system's BIOS as a case fan.

I have not opened the case on the new laptop (nor will I until it's out of warranty), but I imagine it's a very similar setup... the new one has a Core i7 920QM and a 1GB Radeon HD4670. Pretty much every laptop I've owned has been that kind of setup.

If you're noticing that it's overheating, it could be something much more mundane than a bad driver... How old is this laptop, and do you use it in a dusty environment, or somewhere it's possible for stuff like pet hair to get into the fan? I live with 2 cats and a dog, and have had to crack the case on a laptop to pull out a layer of felt that builds up between the heatsink fins and restricts airflow.

Re:Crappy Nvidia driver has multiple issues (0)

Anonymous Coward | more than 4 years ago | (#31370210)

wearing that felt as some sort of Wig replacement or WOW trophy is not considered acceptable behaviour. But go ahead anyway.

Re:Crappy Nvidia driver has multiple issues (1)

mmalove (919245) | more than 4 years ago | (#31370464)

While I appreciate the advice, I've cracked open the innards of the laptop and do clean it regularly. It is indeed set up as you describe with the heat sink pipe leading to a single fan/exhaust system. Maybe the fan on that's just choking independent of driver issues.

Ah well, it runs significantly cooler underclocked - should carry me till I'm ready to replace the system I think.

Convenience (0)

Anonymous Coward | more than 4 years ago | (#31369110)

Playing wow while making eggs and bacon without leaving the PC?

Actually, that gives me a nerdier idea (2, Funny)

Moraelin (679338) | more than 4 years ago | (#31369752)

Playing wow while making eggs and bacon without leaving the PC?

Actually, that sounds even better to me. It's just a watercooling block and a nozzle away from a coffee maker. Just imagine it. The non-virtual Java Machine :P

As an ATI user... (-1)

Anonymous Coward | more than 4 years ago | (#31369142)

... all I have to say is "neeener neeener nah nah!"

Processor damage, really? (3, Insightful)

cbope (130292) | more than 4 years ago | (#31369156)

Wait a minute... just how is an overheating graphics card causing damage to a CPU? As an EE, I'd love to hear the basis for that. Even motherboard damage is extremely unlikely, unless the card bursts into flames and torches the PCIe slot. Or the graphics card gets hot enough to re-flow solder, which then drips onto the PCIe slot or motherboard components. Not to mention most cases are vertically oriented these days. Not a chance in hell, I'd say.

I'm not saying there isn't an issue, but it sounds like the issue is just a bit over-hyped... or someone has an agenda and just wants to bash NVIDIA.

Re:Processor damage, really? (0)

Anonymous Coward | more than 4 years ago | (#31369192)

The GPU explodes. That's how.

Re:Processor damage, really? (1, Interesting)

theeddie55 (982783) | more than 4 years ago | (#31369228)

If it causes a short circuit, the feedback could easily blow the CPU, though in practice any half-decent power supply should cut out before that can happen.

Re:Processor damage, really? (1)

cbope (130292) | more than 4 years ago | (#31370374)

Umm... not likely. A short circuit in the power circuit of the GPU would only affect the graphics card itself and probably the power supply. At the most it would probably trip the over-current protection in the power supply which would simply shut down. The only electrical connections between the CPU and GPU are data lanes and these are not sufficient to bring down a CPU. They are only signal circuits, not power. Even shorting a signal to ground is unlikely to do damage. Remember, a binary zero is 0V (or very close to 0V).

Re:Processor damage, really? (5, Interesting)

mkairys (1546771) | more than 4 years ago | (#31369238)

Laptops for example generally have the same heat pipe connected to the CPU and GPU. If one overheats, so can the other.

Re:Processor damage, really? (1, Informative)

Anonymous Coward | more than 4 years ago | (#31369358)

And this is an additional problem, since all decent GPUs can survive much higher temperatures than CPUs.

Water cooling from the same reservoir & same cycle and such is fine, but a shared heatpipe would be questionable in most (but not all) cases. The difference in max operating temperature is just too high.

Re:Processor damage, really? (4, Informative)

mkairys (1546771) | more than 4 years ago | (#31370108)

Spot on. My 8600GT started overheating in my laptop and while it survived, my CPU was hitting 105C and would shut down randomly and required the processor, motherboard and many other components to be replaced (the heat ruined the life of the battery). The GPU was holding out at the temperatures fine but because of the heat pipe it was connected to, it was cooking the CPU in the process.

Re:Processor damage, really? (2, Informative)

Manip (656104) | more than 4 years ago | (#31369308)

The slot can be damaged by overheating cards, and if it is your only 16x slot then you could wind up throwing away the entire motherboard. Typically, though, this is seen when a card overheats multiple times, causing the material to expand and contract until it eventually fails (as opposed to this case, where cards just die).

My only guess about CPU damage is unregulated power spikes but that is just conjecture. Plus if anything was going to get damaged by power spikes it wouldn't be the CPU it would be the RAM.

Re:Processor damage, really? (0)

L4t3r4lu5 (1216702) | more than 4 years ago | (#31369430)

If the fan fails to spin up and the gfx overheats, the ambient temp in the case rises. Without good airflow it would be easy for an overheating gfx card to seriously affect the CPU heatsink's heat dispersion properties. An ambient temp of 100f means your cpu will be that temperature at least. I don't know the physics, but my "seat of my pants" maths tells me that you'll add 50% onto that temperature from the cpu, bringing up the core temp to almost 150f / 60c. My old prescott ran at that temperature, and it wasn't pleasant.

Plus, aren't most tower cases built with the gfx directly underneath the CPU? I never understood why the hottest component would be lowest in the case...

Re:Processor damage, really? (1)

Calinous (985536) | more than 4 years ago | (#31369580)

AT cards for desktop cases had the processor and memory on the right side while the expansion cards were grouped on the left side. When tower AT cases appeared, they put the hottest item (the processor, at that time running without a heat sink; I've seen even a 486SX in a Dell running very hot) at the top of the case, near the power supply (the fans in the PSU were the only source of ventilation in those cases). The hard drives sat near openings in the front of the case, so whatever airflow there was would cool them too.

Moving to ATX boards, the processor remained the hottest element (Pentium at 15W with active cooling, 486 at 10W with passive or active cooling) while the video cards of the time had passive cooling or none whatsoever. Also, to simplify life for case builders, the ATX layout is similar to the AT layout (I've had a motherboard that would accept either AT or ATX power supplies - a Soyo mainboard for K7 processors - and I think there were cases that accepted both AT and ATX mainboards).

Now that most of the heat comes from the video card(s) (in the usual extreme gaming rig you have some 100W from the CPU and 200+W from the video cards), things have changed. I think Intel's BTX standard separated the expansion slots, but BTX is almost dead in terms of availability, and we only have ATX in userland (there are other standards in server land, but those are even harder to find than BTX).

Re:Processor damage, really? (1)

fuzzyfuzzyfungus (1223518) | more than 4 years ago | (#31369984)

BTX never really took off in the "third party components and DIY" sector, even back when Intel was cranking out chips that really could have used the extra cooling help; but it was a pretty big hit in corporate basic-box land. To this day, a substantial proportion of PCs from the various vendor's business lines are either actually BTX or heavily BTX inspired in terms of cooling layout.

Re:Processor damage, really? (0)

Anonymous Coward | more than 4 years ago | (#31369312)

Strange an EE does not understand something every overclocker does. I guess education can't make up for experience.

Re:Processor damage, really? (1)

yacc143 (975862) | more than 4 years ago | (#31369370)

Well, I'm not an EE, but I seem to remember from university, that changing temperatures can lead to changes in voltage/current. Then you've got the extreme case of a short circuit.

So I think it's quite possible to have motherboard damage, e.g. GPU takes more power than is good for the MB, MB dies. Slowly or quickly, depending upon how extreme the effect is.

As an official example, see GPUs that have a separate power connection, where the documentation explicitly states that the GPU and/or the motherboard can die if it's left unconnected, because the GPU will overdraw the PCIe-provided power.
So obviously, something the GPU does can affect the MB and other parts of the system, phrased nicely by the manufacturers.

Re:Processor damage, really? (1)

asdf7890 (1518587) | more than 4 years ago | (#31369392)

Wait a minute... just how is an overheating graphics card causing damage to a CPU?

Depends on the airflow in your case, and many cases are not well laid out in that respect. If the card is pumping out more heat than usual and this isn't being drawn out correctly, it may build up in the case generally, reducing the ability of the CPU's heatsink and fan to cool it properly. Similarly, if the heat builds up for an appreciable amount of time (say, over the course of a long gaming session), you may also find drives and other components start failing from overheating, though the CPU is the item most at risk from this collateral warming and would most likely be the first to go (rescuing other parts by failing first, hopefully stopping the heat build-up, since no more heat-generating tasks would be run on it or handed to the GPU by it) if the situation became extreme enough.

Another GPU-killing-CPU-by-heat scenario exists in liquid cooled systems. If the GPU and CPU share coolant and the GPU heats up so much that it overwhelms the liquid cooling arrangement there may not be a sufficient temperature gradient between the coolant and the CPU for the CPU to be usefully cooled.

I'd file both of these situations under "quite unlikely, but far from impossible".

Re:Processor damage, really? (1)

databyss (586137) | more than 4 years ago | (#31369694)

The excessive heat can overwhelm the standard cooling system on a PC.

As an EE, I'm sure you're well aware that heat has negative effects on CPUs and other electrical components.

Re:Processor damage, really? (1)

wisnoskij (1206448) | more than 4 years ago | (#31370082)

I have had a video card overheat and break my motherboard.

I am not sure about the technical side but I imagine that the motherboard was not designed to run at extreme temperatures.

Re:Processor damage, really? (1)

mcgrew (92797) | more than 4 years ago | (#31370628)

Or the graphics card gets hot enough to re-flow solder, which then drips onto the PCIe slot or motherboard components. Not to mention most cases are vertically oriented these days. Not a chance in hell, I'd say.

In hell you wouldn't even need to turn the PC on for all the solder to melt!

Re:Processor damage, really? (1)

idontgno (624372) | more than 4 years ago | (#31370792)

You're forgetting laptops. Everything integrated in close proximity on one motherboard, with shared fixed-capacity cooling. If the driver update pushes a heat-constrained laptop GPU harder, you could easily exceed whole-system thermal limits leading to CPU or MB damage.

The less obvious case is if a desktop system is ventilated just well enough to handle normal heat from its components, and the GPU goes into thermal overdrive because of this driver. In that case, intra-case temps will go up and, if not noticed, overheat other components. Probably not damage the CPU at that point, but thermal shutdown is likely.

I'm not saying there isn't an issue, but it sounds like the issue is just a bit over-hyped... or someone has an agenda and just wants to bash NVIDIA.

NVIDIA is doing a damn fine job of bashing themselves. They lost my trust with their piss-poor chip engineering and deceptive PR and warranty practices in the Bumpgate fiasco. This driver screwup doesn't help that. After years of devoted NForce/GeForce fanboism, I'm now thoroughly on the AMD/ATI bandwagon. We'll give NVIDIA another look when they appear to have gotten their crap together.

Planned obsolescence... (0)

Anonymous Coward | more than 4 years ago | (#31369182)

... programmed obsolescence, literally :-).

If it ain't broke.. (3, Insightful)

Mascot (120795) | more than 4 years ago | (#31369198)

WoW seems an odd companion to those other games, I've always felt the CPU was the primary bottleneck in that beast, but be that as it may..

For me, I can't recall ever solving an issue or getting noticeable performance improvements from upgrading graphics drivers. I have, however, had several issues introduced by it.

Nowadays I stick to the old "if it works don't try to fix it" mantra, with a few exceptions. For example, I kept up-to-date for a bit after Win7 release, assuming there would be teething issues for a few revisions. If buying a bleeding edge recently released card I would also stay on top of drivers for a month or two. But other than that, just leave them be I say.

Re:If it ain't broke.. (0)

Anonymous Coward | more than 4 years ago | (#31369348)

Well you should grab a copy of GPU-Z http://www.techpowerup.com/downloads/SysInfo/GPU-Z/ [techpowerup.com] some time and watch the gpu load graph when you idle in the login screen with the fancy dragon ....

Re:If it ain't broke.. (1)

lowlymarine (1172723) | more than 4 years ago | (#31369400)

Now watch the same graph once you log into Dalaran, as it drops off to something like 10-20% usage. Admittedly your processor usage on a quad-core is still going to be 30% as well, so it's just horrible, horrible coding on Blizzard's part, but still.

Re:If it ain't broke.. (0)

Anonymous Coward | more than 4 years ago | (#31371188)

The problem with Dalaran is your hard drive. Watch it next time you go there. It's why you see toons popping up slowly. The game has to load all the textures from the drive.

Re:If it ain't broke.. (2, Interesting)

Mascot (120795) | more than 4 years ago | (#31371038)

I'm not sure how GPU-Z showing me below 30% load (on an almost two year old card) is proving the point you implicitly appear to try to make..?

Re:If it ain't broke.. (2, Informative)

L4t3r4lu5 (1216702) | more than 4 years ago | (#31369404)

The shadows implemented with v3 crippled WoW graphics performance. I have a C2Q Q6600@2.8GHz, 4GB DDR2 RAM, and an 8800GTX running everything at max settings except shadows (blob only), 1920x1200 with min 60fps. If I turn shadows up one level I get 40 fps; full shadows bring the thing to a crawl even in open areas like the Shimmering Flats.

I can easily see the gfx being a bottleneck with the shadows up, but other than that I agree. Loading the other players in Dala is horrid.

Re:If it ain't broke.. (0)

Anonymous Coward | more than 4 years ago | (#31369702)

I used to moonlight as a hardware reviewer specializing in graphics cards, and I can assure you, performance and quality DO differ between driver versions.

I cannot give any recent examples since I'm no longer in the business, but I've seen drivers that gave up to 33 FPS more in newly released games in comparison with previous driver versions. That said, those gains were sometimes fueled by quality drops, but not always.

Re:If it ain't broke.. (0)

Anonymous Coward | more than 4 years ago | (#31370218)

That seems a bit contradictory. You state that you have never had an issue resolved by upgrading a driver, yet say an upgraded driver has caused issues for you? So let me guess: you never updated beyond that point either, since it's quite possible a newer release of the driver fixes the issues the previous one caused? To offer the counter view, Mass Effect 2 didn't get into the menu screen on my machine, an issue that was fixed by updating the driver. If yours works, then good for you.

What about added features to offload graphical processing? The introduction of PhysX makes a very noticeable difference to games which support it.

The "if it ain't broke" mentality may work for office types, but there are a lot of people who push their cards to the limit, and the differences between drivers can have quite a significant impact.

Re:If it ain't broke.. (1)

Blakey Rat (99501) | more than 4 years ago | (#31370454)

More to the point, World of Warcraft has "realistic graphics?" Even if you ignore the art style, which is as far from realistic as you can get, the engine is something like 5-10 years older than all the other games listed there and quite frankly looks like ass.

I wish people would proofread before they publish an article that thousands will read.

You Can't See Inside! (0)

Anonymous Coward | more than 4 years ago | (#31369260)

that's what you get for using proprietary software!

Nvidia driver causing overheating? Oh really. (0)

Anonymous Coward | more than 4 years ago | (#31369282)

I'm no fan of Nvidia or ATI, but I have to question Blizzard and their programmers and beta testers on this. Leaving WoW sitting at the title or login screen has caused overheating in my 8800GTs for over a year now, no matter what drivers I've used. I've noticed temperatures reaching 105-110 Celsius in under 2 minutes flat, as recently as last week when I made the mistake of letting WoW sit at the login screen. This only ever occurs in said game and accompanying areas. Similarly, my laptop's ATI 3650 tends to jump to 75+ Celsius in said areas of WoW. Pardon me, but I'm a little skeptical about Nvidia's drivers being the ultimate source of the problem.

Re:Nvidia driver causing overheating? Oh really. (5, Insightful)

omglolbah (731566) | more than 4 years ago | (#31369330)

A game should not be able to cause an overheat in a card, ever.
The card's firmware or hardware should throttle down before damage occurs.

If not the design is broken. Simple as that.

Re:Nvidia driver causing overheating? Oh really. (0)

Anonymous Coward | more than 4 years ago | (#31369390)

The best way to solve this is to turn v-sync on and triple buffering off in your nvidia panel, as that'll cap your framerate at 60 (or whatever your monitor's refresh rate is).

I've had a few 8800 Ultras and GTX 280s, and anything that's 'high performance' generally uses a lot of power and renders at high framerates.

(Also, laptop cards use hardly any power, so they generate hardly any heat, assuming you have a decent cooler. My laptop with an 8800GTX in it never goes above 55C in games like the STALKER series.)
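The reason a frame cap helps is simple: the render loop idles out the rest of each frame's time budget instead of redrawing a static login screen as fast as the GPU allows. A minimal sketch (the `render_frame` callback is hypothetical; real v-sync blocks on the display's refresh rather than sleeping):

```python
import time

# Sketch of a 60 FPS frame limiter: render, then sleep out whatever is
# left of the 1/fps second budget, so the GPU sits idle between frames.

def run_capped(render_frame, fps=60, frames=3):
    budget = 1.0 / fps
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()
        elapsed = time.perf_counter() - start
        if elapsed < budget:
            time.sleep(budget - elapsed)  # idle instead of re-rendering
```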

Re:Nvidia driver causing overheating? Oh really. (0, Offtopic)

fostware (551290) | more than 4 years ago | (#31370824)

I have a XPS 1730. My SLi 8800GTXs *idle* at 55C :)

I'll admit the "laptop" tag doesn't fit that well though...

(BTW, it's an insurance replacement for what was both a portable games and work machine, before anyone bitches about heat being a result of my choices...)

A little more info from the story (4, Interesting)

L4t3r4lu5 (1216702) | more than 4 years ago | (#31369304)

The EVGA tool has been used to manually set fan speed to 77% to compensate. I see no reason for other low-level customisation tools (RivaTuner etc) to not behave in the same way.

If you get a performance boost from this new driver, download RivaTuner or a similar tool and manually set the fan speed for gaming.
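On Linux the same manual override can be scripted with the stock `nvidia-settings` tool. The `GPUFanControlState` and `GPUTargetFanSpeed` attribute names below are assumptions taken from nvidia-settings' documentation and have varied between driver generations, so treat this as a sketch, not gospel:

```python
import subprocess

def fan_override_cmds(gpu, fan, percent):
    """Build the nvidia-settings invocations that pin a fan at a fixed speed.

    Attribute names are assumptions and may differ between driver versions.
    """
    if not 0 <= percent <= 100:
        raise ValueError("fan speed must be a percentage")
    return [
        # Enable manual fan control on the GPU, then set the target speed.
        ["nvidia-settings", "-a", f"[gpu:{gpu}]/GPUFanControlState=1"],
        ["nvidia-settings", "-a", f"[fan:{fan}]/GPUTargetFanSpeed={percent}"],
    ]

def apply_fan_override(gpu=0, fan=0, percent=77):
    """Run the override commands (requires a running X session with the driver)."""
    for cmd in fan_override_cmds(gpu, fan, percent):
        subprocess.check_call(cmd)
```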

Terrible design (3, Insightful)

QuoteMstr (55051) | more than 4 years ago | (#31369458)

Software should not be able to destroy hardware, period. The GPU's cooling system should be designed to operate safely for sustained periods at peak load --- anything less is artificially crippling the hardware and leads to both security and reliability problems.

Great job, NVIDIA: now, malware can not only destroy your files, but destroy your expensive graphics card as well.

Re:Terrible design (1)

ZeRu (1486391) | more than 4 years ago | (#31369750)

Software should not be able to destroy hardware, period

There's a piece of software which is able to do that for a long time, if not used properly. It's called BIOS.

Re:Terrible design (0, Flamebait)

QuoteMstr (55051) | more than 4 years ago | (#31369874)

Sure, you can flash the BIOS with garbage. But you can restore it with the right equipment -- the hardware itself lives on. You can't physically destroy the computer that way.

Re:Terrible design (2, Insightful)

MikeBabcock (65886) | more than 4 years ago | (#31370826)

Wanna bet? I can tell my BIOS to shut off all the case fans and not sound the overheat alarm.

Re:Terrible design (0)

Anonymous Coward | more than 4 years ago | (#31369804)

This is more likely due to a defective card. Should have RMA'd it. I haven't gotten any crashes on any of my nvidia cards in years.

Re:Terrible design (0)

Anonymous Coward | more than 4 years ago | (#31369888)

What's the difference between hardware and software? I'm not being ridiculous, it's a genuine if somewhat rhetorical question. Graphics cards have firmware and microcode, clock frequencies can be changed via software, etc.. When you get down to the bare bones of the system, the distinction between hardware and software becomes blurred, and the clear separation between the two is not so obvious.

Re:Terrible design (3, Funny)

Kleppy (1671116) | more than 4 years ago | (#31370002)

"Software should not be able to destroy hardware, period."

Tell that to Toyota.....

Re:Terrible design (1, Interesting)

rotide (1015173) | more than 4 years ago | (#31370106)

Software (read: applications) isn't destroying hardware in this case. The hardware itself is now "faulty" as the drivers have a pretty bad bug.

In my mind, this is no different than taking the heatsink/fan off a CPU. That's a hardware issue. It doesn't matter what games, etc., you run; you risk killing that CPU because the CPU is under an abnormal operating condition.

While drivers are in control in the case we have here with nVidia, I see the drivers as part of the hardware since they were released by the manufacturer.

Re:Terrible design (1, Flamebait)

QuoteMstr (55051) | more than 4 years ago | (#31370150)

I see the drivers as part of the hardware since they were released by the manufacturer.

Congratulations! You've won the "Stupidest Thing Dan Has Read In The Last 24 Hours" award.

Re:Terrible design (2, Funny)

rotide (1015173) | more than 4 years ago | (#31370216)

Do you care to discuss something? Or does it simply make you feel better to make fun of people that disagree with you and/or have a different opinion?

Re:Terrible design (1)

maxwell demon (590494) | more than 4 years ago | (#31370240)

I see the drivers as part of the hardware since they were released by the manufacturer.

So Mac OS X is hardware, too, because it's released by the hardware manufacturer, i.e. Apple?

Re:Terrible design (1)

rotide (1015173) | more than 4 years ago | (#31370318)

If you want to take my post out of context, I guess.

But my point, in this case, is what happens if the firmware was causing the problem? Ok, that's software too, should that "never" damage hardware as well? I mean, it's code written and compiled, right?

When it comes to video cards, there are at least two pieces of software released by the manufacturer that run the card: one is the firmware, the other is the driver. If either one has a bug, it's software causing the hardware to fail.

I took the OP to mean "application" level software and I stand by my post. If he meant any software at all then he'd have to explain all the firmware screw ups over the years.

Re:Terrible design (1)

VGPowerlord (621254) | more than 4 years ago | (#31371040)

Software (read: applications) isn't destroying hardware in this case. The hardware itself is now "faulty" as the drivers have a pretty bad bug.

In my mind, this is no different than taking the the heatsink/fan off a CPU. That's a hardware issue. Doesn't matter what games, etc, you run, you risk killing that CPU because the CPU is under an abnormal operating condition.

Er, no, because the hardware clearly still works fine with older drivers.

While drivers are in control in the case we have here with nVidia, I see the drivers as part of the hardware since they were released by the manufacturer.

If it was an issue with the Firmware, then it'd be a hardware issue, as firmware is part of the card itself. Drivers are a piece of software on the computer that knows how to talk to the hardware device that can be changed out by the user as needed.

The real issue here is that the firmware has no override if it thinks the driver is giving it the wrong fan control values.

Re:Terrible design (1)

maxwell demon (590494) | more than 4 years ago | (#31370138)

Old monitors could be killed by software as well (by just selecting a too high sync frequency). Later monitors added a protection against that.
Also, don't some motherboards allow you to set the CPU voltage in the BIOS? I guess that means you could fry your CPU from software as well.

Re:Terrible design (1)

syousef (465911) | more than 4 years ago | (#31370420)

Software should not be able to destroy hardware, period

Good luck with that since software controls the hardware. Whether it's in bios or drivers, software that operates hardware is going to be able to fry it if written poorly.

The GPU's cooling system should be designed to safety operate for sustained periods at peak load --- anything less is artificially crippling the hardware and leads to both security and reliability problems.

Yes, that's why they built a fan or heat sink into the graphics card.

Great job, NVIDIA: now, malware can not only destroy your files, but destroy your expensive graphics card as well.

You must be new to computing because the ability for a virus to destroy hardware is not new. The only reason it's not done more often is that there's no money or glory to be made in such asshole behaviour. So instead viruses focus on stealing bank account details.

Silence (1)

DrYak (748999) | more than 4 years ago | (#31370554)

Software should not be able to destroy hardware, period. The GPU's cooling system should be designed to safety operate for sustained periods at peak load

And that's certainly the strategy in the corporate* world (for servers, for example).

On the other hand, some other people, the kind who only occasionally play games and use their computer most of the time for office-type work (i.e. non graphically intensive tasks), would appreciate not having to endure the sound of an Airbus A380's takeoff coming out of their computer case every single moment the computer is on.

Thus the fans aren't running at a constant speed, but vary their speed to constantly find the perfect balance between silence and keeping the card from catching fire under load.
Thus you have a small chip controlling the fans. Of course, to simplify Q/A, in-field bug fixing, etc., this small chip has a small firmware. (Just imagine a non-programmable chip controlling the fans, and the same bug: every single card produced with the bug has to be recalled and replaced - a logistical nightmare.)
This firmware is set up by the drivers.
A buggy driver *could* damage the hardware by setting the fan too low. And all that because the end user *wants* a fan that slows down when it's not necessary, for the sake of some silence.

The only thing that could have been done is adding a safeguard which fires a software alarm and either shuts down or massively underclocks the 3D core when a temperature threshold is crossed. (That's how CPUs are protected in case of a faulty fan.)

*: And then, there's the question of wear on mechanical parts like fans - in server land, it's more a question of balance between mechanical wear of the fans and the server surviving a /. effect (and the wear on the computer due to thermal expansion).
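That kind of safeguard is easy to sketch in host software, too. This is a hedged stand-in for what the firmware ought to do on its own; the threshold values are made up for illustration:

```python
def thermal_action(temp_c, throttle_at=90.0, shutdown_at=105.0):
    """Decide what a thermal watchdog should do at a given core temperature.

    Thresholds are illustrative, not taken from any real card's spec sheet.
    """
    if temp_c >= shutdown_at:
        return "shutdown"   # past the danger point: cut power before damage
    if temp_c >= throttle_at:
        return "throttle"   # underclock the 3D core until it cools off
    return "ok"

# A watchdog loop would poll the sensor every few seconds and act on the
# result, e.g.:
#   while True:
#       act_on(thermal_action(read_core_temp()))
#       time.sleep(2)
```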

Re:Terrible design (1)

null8 (1395293) | more than 4 years ago | (#31371276)

That is not realistic. If you want to give people the possibility of a BIOS update to fix hardware bugs, then you can also overwrite your BIOS with garbage that applies incorrect voltages, which will physically destroy your mainboard; it once happened to me. If you know how, you can even load new microcode, which can kill a CPU. One can theoretically open multiple tristate gates and cause some kind of short circuit. I mean, you can say "no one should kill another person, period," and everyone will agree with it, but it's also not realistic.

Er.... (0)

Anonymous Coward | more than 4 years ago | (#31369518)

Farcry 3? Really?

nVidia and the dreaded nv4_disp.dll bug (1)

CuteSteveJobs (1343851) | more than 4 years ago | (#31369628)

Don't expect it fixed... ever! In 2005 I bought a "top end" nVidia card that worked fine most of the time, but occasionally it would go through fits where it threw up a BSOD announcing that an infinite loop was detected in the display driver nv4_disp.dll.

Many reported it to nVidia - me included - but they ignored everyone through every avenue. The bug stayed there through releases of new generation nVidia cards, and Google shows people still finding the bug and trying to "fix" it to this day.

I can only presume nVidia knew about it, but the problem would have required a card recall. So they just ignored it and kept selling the buggy cards. Many solutions were suggested by users, posted and tried, but none worked. No solutions ever came from nVidia, who wouldn't say a word on the issue. Their FAQ fobbed you off to the OEM who of course had no clue. Last time I checked you couldn't even submit a bug report through their site. They may be successful, but they have the worst tech support ever. Don't expect a fix. In the end I tossed the card.

http://www.google.com/search?q=nvdisp+4+nvidia+bsod [google.com]

nVidia and bug reports (1)

achurch (201270) | more than 4 years ago | (#31369818)

I haven't heard of that particular problem, but I should point out that nVidia does in fact accept bug reports (on Linux, just run nvidia-bug-report.sh and it'll tell you where to mail your report), and I have actual experience with reporting a bug (nvidia_drv crashed X when switching to another virtual console while an OpenGL window was minimized) and having it fixed.

my 8800 ultra died a few days ago (0)

Anonymous Coward | more than 4 years ago | (#31369808)

And I had installed the new drivers a week before that. Coincidence, possibly... but funnily enough the gfx died after playing a long stint of Aion; the PC locked up (like a heat-related crash), and after a reboot blue lines were going across the screen in the BIOS. :(

Oh well, I guess NVIDIA thought maybe they could up their card sales because people needed to replace their old ones... well guess what, NVIDIA! I bought ATI and I'm not looking back. /hugs new 5870

Far Cry 3 (4, Informative)

Karem Lore (649920) | more than 4 years ago | (#31369820)

Hi,

Please do tell where I can get Far Cry 3....Unless bittorrent has seriously moved into time travel of course...

196.21 had issues as well (1)

Darcojin (146576) | more than 4 years ago | (#31369842)

I had to revert back to the 195.62 driver because 196.21 was causing my system to randomly lock up, even more so when I was playing games such as Star Trek Online. Boy, am I glad I didn't see the newer one. I will tell you this, however: these last two driver revs from Nvidia are sure starting to make me look more closely at ATI again.

Re:196.21 had issues as well (0)

Anonymous Coward | more than 4 years ago | (#31370188)

It certainly is unfortunate to hear about these overheating issues. I bought an ATI 4850 for myself, and I was constantly having overheating issues in most any 3d application (100C or higher). I cleaned the card off as best I could a number of times and verified that the fan speed was going as intended - 100% speed when over 90C sounds like a jet engine taking off. After reapplying thermal grease, it generally stays around 90C. When I initially bought the card, it ran at about 65-75C under load.

In Windows XP SP 2 and 3, I consistently have driver issues. Any online version of the ATI drivers just wouldn't work with the card at all, and the CD drivers would boot me up in a desktop resolution of approximately 320x240 (I couldn't tell for sure, but it was way smaller than 640x480). Eventually I figured out that I needed to install the drivers without ASUS's "GamerOSD" software, and everything would work fine.

I also bought my mom a 5770, and I found that any online version of the ATI drivers causes bluescreens in WoW for her. Thus far, the only thing that doesn't BSOD her is the drivers that were included on the CD. She's on Win XP SP 3.

Due to these issues, I was pretty set on waiting for Nvidia to release a card that is close in performance/cost ratio to the 58xx series ATI cards. Now I'm not so sure on which side I'll aim for :(.

WoW has realistic graphics? (1)

Drethon (1445051) | more than 4 years ago | (#31370198)

Since when?

Re:WoW has realistic graphics? (1)

maxwell demon (590494) | more than 4 years ago | (#31370604)

You only think it's unrealistic because you think the unrealistic graphics the Matrix gives you is the reality. The real reality of course looks exactly like WoW graphics.

WoW and realistic in the same sentence! (1)

carlhaagen (1021273) | more than 4 years ago | (#31370258)

That's very odd. Also odd is that from the article it seems that the overheating has to do with how realistic the game looks; as if the card just KNOWS the content looks realistic, and suffers a spell of worry, feeling stressed about performing, and thus not managing to cope. Oh, the poor GPUs, they deserve better. Spread the love.

SUE THEM!!! (0)

Anonymous Coward | more than 4 years ago | (#31370670)

Anyone whose hardware is damaged should SUE THESE BASTARDS!!!!

evil... (1)

GNUPublicLicense (1242094) | more than 4 years ago | (#31371182)

nvidia is evil since they don't publish their hardware programming manuals like AMD(ATI) and Intel do. Buy AMD(ATI) or Intel. Avoid nvidia like hell till they release their manuals.