
Nvidia's Fermi Architecture Debuts; Nouveau Driver Already Working

timothy posted more than 2 years ago | from the those-ndas-must-have-really-hurt-your-backs dept.


crookedvulture writes "Nvidia has lifted the curtain on reviews of its latest GPU architecture, which will be available first in the high-end GeForce GTX 680 graphics card. The underlying GK104 processor is much smaller than the equivalent AMD GPU, with fewer transistors, a narrower path to memory, and greatly simplified control logic that relies more heavily on Nvidia's compiler software. Despite the modest chip, Nvidia's new architecture is efficient enough that The Tech Report, PC Perspective, and AnandTech all found the GeForce GTX 680's gaming performance largely comparable to AMD's fastest Radeon, which costs $50 more. The GTX 680 also offers other notable perks, like a PCI Express 3.0 interface, dynamic clock scaling, new video encoding tech, and a smarter vsync mechanism. It's rather power-efficient, too, but the decision to focus on graphics workloads means the chip won't be as good a fit for Nvidia's compute-centric Tesla products. A bigger GPU based on the Kepler architecture is expected to serve that market." Read on below for good news (at least if you prefer Free software) from an anonymous reader.

Update: 03/22 19:35 GMT by T: Mea culpa -- that headline should say "Kepler," rather than Fermi; HT to Dave from Hot Hardware (here's HH's take on the new GPU).

Our anonymous friend writes "The open-source Nouveau driver project, which reverse-engineers the official NVIDIA driver to provide a free software alternative, has made some big accomplishments. Nouveau announced today that it has same-day Kepler support and has now been de-staged in the Linux kernel. The GeForce GTX 680 'Kepler' launch happened just hours ago, yet Nouveau, a project that NVIDIA 'officially' does not support, has somehow already managed initial mode-setting support with early hardware. The de-staging in the Linux kernel means that the driver is now at version 1.0 with a stable ABI."


70 comments

Fermi this pal! (-1)

Anonymous Coward | more than 2 years ago | (#39442483)

Fr15t p05t! It's a celebration biatches!!!

Wrong architecture! (5, Informative)

kz26 (1017248) | more than 2 years ago | (#39442545)

I believe you mean Kepler, not Fermi, in the story title.

Re:Wrong architecture! (0)

Anonymous Coward | more than 2 years ago | (#39442653)

I believe you mean Kepler, not Fermi, in the story title.

They read that wrong.

Re:Wrong architecture! (2)

Cainam (10838) | more than 2 years ago | (#39442749)

Exactly. Fermi launched two years ago.

It even mentions Kepler in the summary. (1)

dstyle5 (702493) | more than 2 years ago | (#39442857)

"A bigger GPU based on the Kepler architecture is expected to serve that market." Doh.

Fermi ? (0)

Anonymous Coward | more than 2 years ago | (#39442563)

Isn't it Kepler?
I thought Fermi was the previous generation's name...

Re:Fermi ? (4, Funny)

billcopc (196330) | more than 2 years ago | (#39442593)

Exhibit A: "Posted by timothy"

The prosecution rests, your honor.

Re:Fermi ? (2)

WrongSizeGlass (838941) | more than 2 years ago | (#39442735)

Exhibit A: "Posted by timothy"

The prosecution rests, your honor.

Your honor, we're asserting an affirmative defense based on the fact that it's nap time.

Re:Fermi ? (1, Informative)

Lunix Nutcase (1092239) | more than 2 years ago | (#39442955)

The saddest part is that the summary correctly mentions that it's Kepler. Timothy once again shows off either his piss-poor editing skills or the fact that he's illiterate.

Re:Fermi ? (3, Funny)

Anonymous Coward | more than 2 years ago | (#39442989)

What's it got to do with Timothy?

A careful reading of the source clearly shows...

Oh. Never mind.

fail (1)

Muramas95 (2459776) | more than 2 years ago | (#39442623)

someone made a total fail post here

In b4 (0, Funny)

Anonymous Coward | more than 2 years ago | (#39442649)

OMGAWD its too fast for me i only play 15 year old games. wheres mah lunix support???

Never change slashbots.

Re:In b4 (1)

webheaded (997188) | more than 2 years ago | (#39443029)

The summary actually has an entire paragraph about Linux support, and about how impressive it is that they immediately had support in the OSS driver. I don't think anyone is going to be making that claim unless they are retarded.

Troll AC, I know, but still...quit being a dumb ass.

Re:In b4 (0)

Anonymous Coward | more than 2 years ago | (#39443183)

We both know nobody around here reads the summary.

Re:In b4 (2)

chill (34294) | more than 2 years ago | (#39443331)

Epic fail on your part. Nouveau got it to light up. Gaming support comes from acceleration support.

From the actual article on Phoronix:

There isn't any acceleration support yet for Kepler, or anything besides mode-setting on Nouveau, but this is welcome all the same: early Kepler adopters won't need to fall back to the xf86-video-vesa driver and likely some less-than-ideal resolution.
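
To make the distinction concrete: "mode-setting support" means the kernel driver can enumerate displays and program resolutions through the KMS interface, with no GPU acceleration involved at all. Below is a minimal sketch of that layer in plain C against libdrm; the device path and build line are assumptions (build with something like gcc kms.c $(pkg-config --cflags --libs libdrm)).

    /* Minimal KMS sketch: list connected outputs and their preferred mode.
     * This is roughly the level of support "mode-setting only" refers to;
     * drawing anything fast is a separate (acceleration) problem. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);   /* assumed device node */
        if (fd < 0) { perror("open"); return 1; }

        drmModeRes *res = drmModeGetResources(fd);
        if (!res) { fprintf(stderr, "not a KMS-capable device\n"); return 1; }

        for (int i = 0; i < res->count_connectors; i++) {
            drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
            if (!conn)
                continue;
            if (conn->connection == DRM_MODE_CONNECTED && conn->count_modes > 0)
                printf("connector %u: %dx%d @ %u Hz\n", conn->connector_id,
                       conn->modes[0].hdisplay, conn->modes[0].vdisplay,
                       conn->modes[0].vrefresh);
            drmModeFreeConnector(conn);
        }
        drmModeFreeResources(res);
        close(fd);
        return 0;
    }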

Next Consoles... (1)

tgetzoya (827201) | more than 2 years ago | (#39442833)

We're probably looking at the GPU for the next-gen Xbox/PlayStation consoles, likely in a multi-GPU configuration.

Re:Next Consoles... (2)

Narishma (822073) | more than 2 years ago | (#39442939)

The rumours all point to the next-gen consoles having AMD GPUs in them, though.

Re:Next Consoles... (2)

dstyle5 (702493) | more than 2 years ago | (#39446805)

With the original Xbox, Nvidia burnt its bridges with Microsoft over licensing, and Microsoft moved to ATI for the 360, since ATI would design the GPU for Microsoft but let Microsoft own it.

I don't know what licensing agreement Sony and Nvidia had for the PS3, but if I'm Sony and I see what the other guys are doing, I'd rather go with the more flexible GPU design house. That and ATI's Fusion experience would probably help tip the scales in their favor too.

Re:Next Consoles... (0)

Anonymous Coward | more than 2 years ago | (#39442967)

Nope

Re:Next Consoles... (0)

Anonymous Coward | more than 2 years ago | (#39443191)

Hahaha no. The consoles have already been confirmed to be using roughly the best GPUs of 2009. Even Fermi (the previous generation of GPUs) will be outpacing them.

Re:Next Consoles... (0)

Anonymous Coward | more than 2 years ago | (#39444245)

Neither Microsoft nor Sony are going to do business with the nVidia assholes ever again. They learned their lessons with the first xbox and the PS3. AMD will supply the GPU for all 3 consoles.

Nouveau (5, Interesting)

Narishma (822073) | more than 2 years ago | (#39442919)

If the Nouveau project doesn't get support from Nvidia, how did they manage to support this new chip before its release? Did they have access to one of the cards sent to the press?

Re:Nouveau (4, Insightful)

Kryis (947024) | more than 2 years ago | (#39442971)

It doesn't *officially* get support from Nvidia. That isn't the same as not getting support at all.

Re:Nouveau (1)

shish (588640) | more than 2 years ago | (#39443429)

It doesn't *officially* get support from Nvidia

To be fair, the summary says it officially doesn't get support, which to me conjures images of the CEO phoning up some newspapers and saying "We don't support open source work. That is all, *click*"...

Re:Nouveau (1)

Ranguvar (1924024) | more than 2 years ago | (#39443985)

They have secretaries to deal with the open-source 'weirdos'.

Not what it means (1)

Sycraft-fu (314770) | more than 2 years ago | (#39444461)

It means more that nVidia helps them out in some ways, but at nVidia's discretion. They also aren't going to help you if something doesn't work, and so on.

So nVidia officially supports their binary driver; this one they're willing to help out when they want, but that's it.

Re:Not what it means (0)

Anonymous Coward | more than 2 years ago | (#39445517)

Well, I would love it if that were true. Nvidia has never helped nor hindered us.

I wouldn't mind having free hardware shipped to my doorstep to have fun with, but when that happens, it's either companies or users that send me their cards.

Re:Nouveau (1)

Lunix Nutcase (1092239) | more than 2 years ago | (#39442987)

Nouveau, a project that NVIDIA 'officially' does not support, has somehow already managed initial mode-setting support with early hardware.

Straight from the summary....

Re:Nouveau (1)

Narishma (822073) | more than 2 years ago | (#39444253)

Yes, I read that. That's why I was asking how they got the hardware.

Re:Nouveau (-1)

Anonymous Coward | more than 2 years ago | (#39443153)

To say Nouveau supports much of anything is being generous. When they say "it works" they mean they can turn on the display, set the resolution, and maybe display some stuff.

Full support, as in accelerated graphics, is almost non-existent with Nouveau.

In other words, Nouveau is not that great and barely useful in the scheme of things.

Re: Nouveau (5, Informative)

Anonymous Coward | more than 2 years ago | (#39443569)

Troll spotted

As a Nouveau dev, I can tell you what's wrong with Nouveau, and it is not the lack of acceleration!

First of all, we have 2D and 3D acceleration (up to OpenGL 3, plus toy DirectX 10/11 support that runs Unigine Heaven) for all cards back to the TNT2 (of course, no hardware OpenGL 3 there). OpenGL has been good enough for me to play many games at decent framerates and have a composited desktop running on all my cards minus one. That one is the half-Fermi/half-Kepler nvd9, which still needs some love.

Up until the G50, there was mostly no real power management. Clocks were set at boot time, and that was enough for us.
The G50 introduced reclocking support on mobile GPUs. The boot clocks were no longer set to the stock values but only to lower clocks (say, half the normal frequency). Most desktop GPUs still lacked power management.
The GT215 extended the laptop power management scheme to desktops.
Fermi, of course, kept that scheme but pushed it a little further: boot clocks are now terribly low (core = 50MHz, memory = 100MHz).

On my GTX 460, Nouveau is perfectly usable on KDE 4.8 (I get 100fps with KWin and the OpenGL backend), but games are obviously really slow, about 30fps in Xonotic.

At the same clocks, Nouveau's performance is about 80% of the proprietary driver's, which is not bad. Our real problem is that we need reclocking support to get more performance out of the cards. We have been working on it for about 1.5 years and, trust me, it isn't the easiest part of the hardware to reverse engineer.

So, what's the current state of reclocking support?
- G50 -> GT200: Clocks can be set to the desired frequency and the operation should be stable. Some cards don't work, but we are ironing out the corner cases. In some cases the screen turns black for a few ms while reclocking; that's a bug I'm working on.
- GT215 -> GF100: Clocks can be set for all engines and memory, but the end result usually doesn't work because of some black voodoo we aren't doing right yet. It is being addressed.
- GF100 series: Only the engines can be reclocked; there's nothing but very experimental memory reclocking. It is being worked on.
- Kepler: Hey, it was released today; most of us haven't gotten our hands on anything yet.

If reclocking is supported on your card, dynamic reclocking is a piece of cake (compared to reclocking) and the support for it has already been written.

To sum up, we have hardware acceleration on all cards except nvd9 (unless you use some microcode from the blob) and Kepler. The only problem with 3D is the lack of proper power management, but it is being worked on and we have made great progress. As the cards are all different but in fact do the same thing for power management (even across generations), I have good hopes that Kepler will be fully functional 3D-wise before a new series comes out.

Remember that, contrary to the blob, we DO support cards older than the GeForce 7, AND we provide out-of-the-box, open-source hardware acceleration that is already more than sufficient for desktop usage. Also, remember that this work is mostly done by a core team of fewer than 10 people, most of us being students and only one of us being paid by Red Hat.

Martin Peres, PhD student working on power management on Nouveau

Re: Nouveau (4, Insightful)

PopeRatzo (965947) | more than 2 years ago | (#39443777)

Martin Peres, thank you for your hard work. It is no small thing that you do with Nouveau, especially considering the general lack of appreciation shown by some.

Anybody who uses their time and talent to develop OSS stuff deserves a lot of respect, and at least a little thanks, IMO.

I'm not an OSS dev, or a dev of any kind, but as a professional music recordist I have done a lot of work with OSS devs in the audio realm. Although I've sometimes been too impatient with the progress of OSS music production on Linux, there has been some pretty impressive work done in the last couple of years, to the point where I was able to do my first all-OSS music production project last year and get absolutely first-rate results. There are still rough patches, but today there is finally the possibility of serious creative audio work using all OSS, thanks to a lot of people like you.

So, salut!

Re: Nouveau (1)

Anonymous Coward | more than 2 years ago | (#39443823)

Thanks. I do this work because I learn a lot from it AND I get to improve Linux and push towards more openness. I don't mind the lack of appreciation because I know why I'm doing this.

I have done a lot of work with OSS devs in the audio realm. Although I've sometimes been too impatient with the progress of OSS music production on Linux, there has been some pretty impressive work done in the last couple of years, to the point where I was able to do my first all-OSS music production project last year and get absolutely first-rate results.

Good to know! I used to do some computer-assisted music (MAO) on Linux a while back. I really loved JACK, but I was such a noob at it that I grew tired of it and stuck to playing the instruments, although I used Ardour to record some fun little music projects.

Salut ;)

Re: Nouveau (0)

Anonymous Coward | more than 2 years ago | (#39444341)

Is there really 3D acceleration for pre-NV30 cards?

Last time I checked there was some code in Mesa, but it was never finished.
Most of the dev was done on Gallium-capable cards (nv40, nv50).

Also, Nouveau didn't support video overlay with KMS. This made watching movies on old cards (nv10) slow.

Re: Nouveau (1)

Anonymous Coward | more than 2 years ago | (#39444643)

Is there really 3D acceleration for pre-NV30 cards?

Last time I checked there was some code in Mesa, but it was never finished.
Most of the dev was done on Gallium-capable cards (nv40, nv50).

Well, it should work, but as very few users are actively using it and reporting bugs, it isn't our main focus. However, it is being partially rewritten because libdrm's API has been drastically updated, so you may expect some improvements/fixes.

Real support starts at nv30 (that driver has been massively rewritten and should bring up to a 100% speed improvement in Nexuiz). That work isn't released yet: nouveau_vieux should be updated first so that the new libdrm can be merged; then all the drivers will start using it and nv30 will land in Mesa.

Also, Nouveau didn't support video overlay with KMS. This made watching movies on old cards (nv10) slow.

Hmm, we do have Xv support with KMS. Videos, even full HD, are perfectly smooth on all my cards (all of them being PCIe, so quite new).

Does that answer your questions? You can join us in #nouveau on Freenode if you have more.

Re: Nouveau (1)

bzipitidoo (647217) | more than 2 years ago | (#39444599)

Does Nouveau really have 3D acceleration now? The last time I tried Nouveau was a few years ago, and it didn't have good 3D acceleration then. It was fine otherwise, but Google Earth was so horribly slow I had to switch back to the proprietary driver.

Re: Nouveau (1)

Anonymous Coward | more than 2 years ago | (#39444867)

And this was the OP's point before he got modded into oblivion. Then some dev comes along and basically repeats the same thing, except "oh, things are actually really good!", and gets modded up.

The fact is, Nouveau's performance sucks balls compared to the official nVidia driver. Also, good luck getting multiple monitors to work (they barely work with the official driver, mostly due to X.org/RandR's crappiness).

Re: Nouveau (1)

Anonymous Coward | more than 2 years ago | (#39445345)

When did you last test Nouveau?

Nouveau can be fast and really reach 80% of the blob's speed when clocked properly (and I'm not even sure this one was clocked properly; it is a GT 220, which has very experimental reclocking code): http://openbenchmarking.org/embed.php?i=1201287-BY-NOUVEAURE42&sha=a43fdd7&p=2

The following example shows my point better. Fermis are slow *when gaming* due to missing reclocking: the first two cards are not set to the right frequencies, while the last one is clocked at the right frequency at boot (I own this one, so I can tell). The last one also shows that Nouveau can outperform the blob: http://openbenchmarking.org/embed.php?i=1201135-BY-MESA80NOU33&sha=a43fdd7&p=2

After reclocking, the main problem with Nouveau is stability across all boards, but when it does work, performance is more than enough for everyday usage (not gaming usage). We will never have performance parity, but that's not our goal. We want something good enough.

Re: Nouveau (1)

Tet (2721) | more than 2 years ago | (#39449447)

Also, good luck getting multiple monitors to work

I've had far more success with multiple monitors using Nouveau than with the proprietary Nvidia drivers. I'm currently running with 4 monitors using Nouveau, and have been for many years. Further, in the last few years I haven't encountered anyone else who's had problems with multihead support in Nouveau either, and we have an office full of people using it here.

Re: Nouveau (1)

batkiwi (137781) | more than 2 years ago | (#39445009)

There is one, and only one, reason that I use the Nvidia proprietary driver:

VDPAU.

Re: Nouveau (0)

Anonymous Coward | more than 2 years ago | (#39445185)

It is being worked on. But video decoding is so complicated that you shouldn't expect results any time soon unless you accept running some non-free microcode. In that case, it is already done for some cards, IIRC.

Re: Nouveau (1)

Zebedeu (739988) | more than 2 years ago | (#39448669)

As a user of the Nouveau driver on a system where the binary blob causes a lot of instability (which Nouveau doesn't have), you have my eternal gratitude for your efforts.

I don't play games on my PC, so for me the performance is more than satisfactory. Desktop composition and effects are smooth enough, and I can play videos at any quality without hiccups.
The only issues I have now are that I can't set the brightness level (apparently that works in the new kernel; I'll check once I upgrade my Ubuntu installation), and that my battery seems to run out quicker than with the binary drivers. But it's nice to know that's being worked on.

So thanks for your work. I understand it can sometimes be a thankless job, and people on the internet tend to focus on the negatives, so I hope you know the positive effect you're having on the computing world -- if it weren't for Nouveau, I'd probably be running Windows right now.

Re: Nouveau (1)

Ancantus (1926920) | more than 2 years ago | (#39484981)

I know you probably got a lot of comments like this, but thank you Martin Peres for your work.

But can it compare to AMD (0)

Anonymous Coward | more than 2 years ago | (#39443275)

for mining bitcoins?

Bad for GP-GPU computing (5, Informative)

Anonymous Coward | more than 2 years ago | (#39443283)

Firstly, this new architecture (GK104) has a greater number of cores (192, versus 32 in the Fermi architecture) sharing a single control logic within a streaming multiprocessor (SM). Internally, each SM is SIMD, so this move is bad for divergent kernels, i.e., algorithms containing if-then-else constructs. Secondly, as usual from Nvidia, the GeForce brand has poor double-precision performance, only 1/8 of the single-precision rate. The AMD Radeon HD 7000 family, on the other hand, doubles that fraction to 1/4, making it much faster at DP operations, which is a must for scientific computing.
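
For readers wondering why if-then-else constructs hurt on SIMD hardware: all threads in a warp share one instruction stream, so when lanes disagree on a branch, the hardware executes both paths in sequence with the inactive lanes masked off. A hedged CUDA sketch of the effect; the kernel names and the even/odd split are purely illustrative.

    #include <cuda_runtime.h>

    // Adjacent lanes take different branches, so every 32-wide warp runs
    // both paths back to back with half its lanes masked each time.
    __global__ void divergent(const float *in, float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        if (threadIdx.x % 2 == 0)
            out[i] = in[i] * in[i];          // path A: even lanes
        else
            out[i] = sqrtf(fabsf(in[i]));    // path B: serialized after A
    }

    // Same work, but the branch is uniform within each warp (lane / 32),
    // so no serialization occurs.
    __global__ void uniform(const float *in, float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        if ((threadIdx.x / 32) % 2 == 0)
            out[i] = in[i] * in[i];
        else
            out[i] = sqrtf(fabsf(in[i]));
    }

    int main()
    {
        const int n = 1 << 20;
        float *in, *out;
        cudaMalloc(&in, n * sizeof(float));
        cudaMalloc(&out, n * sizeof(float));
        divergent<<<(n + 255) / 256, 256>>>(in, out, n);  // time these two
        uniform<<<(n + 255) / 256, 256>>>(in, out, n);    // e.g. with nvprof
        cudaDeviceSynchronize();
        cudaFree(in);
        cudaFree(out);
        return 0;
    }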

Re:Bad for GP-GPU computing (1)

0123456 (636235) | more than 2 years ago | (#39443373)

A GPU manufacturer optimising their cards for 3D graphics performance? Shocking!

Re:Bad for GP-GPU computing (0)

Anonymous Coward | more than 2 years ago | (#39443617)

A GPU manufacturer artificially deoptimizing their cards for GP-GPU performance.

Re:Bad for GP-GPU computing (1)

0123456 (636235) | more than 2 years ago | (#39443785)

Hint: the G in GPU stands for 'Graphics'. They only started offering them as compute cards when the graphics market began to run out of steam.

And I'm guessing that they're salivating at the prospect of being able to sell dedicated compute cards for 10x the price of 3D cards rather than having cheapskates just load their systems with cheap consumer 3D hardware.

Re:Bad for GP-GPU computing (1)

epyT-R (613989) | more than 2 years ago | (#39447061)

why call the consumer a cheapskate for using the full capabilities of his hardware? you should be calling the company grubby for artificial scarcity.

Re:Bad for GP-GPU computing (1)

epyT-R (613989) | more than 2 years ago | (#39447151)

actually, no, nvidia artificially limits performance to specific profiles.. geforce has shitty gpgpu performance, quadro has decent gfx and gpgpu, and their 'tesla' stuff is all gpgpu.

Re:Bad for GP-GPU computing (0)

Anonymous Coward | more than 2 years ago | (#39443787)

The AnandTech article states that the GK104 has 192 fp32-only processing units and 8 fp64 processing units per SMX.

That puts fp64 throughput at 1/24 of fp32, not 1/8.

Ref: http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/2

The results are here:
http://www.tomshardware.com/reviews/geforce-gtx-680-review-benchmark,3161-15.html

So it's great for games, rubbish for GPGPU.

is there a 192 fp64 chip (0)

Anonymous Coward | more than 2 years ago | (#39446485)

In fact I've got a GTX 550 and it's got 192 fp32 stream processors. Don't know if Nvidia is being very honest... is there a 192-unit fp64 chip out there somewhere, and how much does it cost?

Re:Bad for GP-GPU computing (1)

Ranguvar (1924024) | more than 2 years ago | (#39443997)

NVIDIA artificially limits their double-precision performance to boost sales of their Quadro chips.

Re:Bad for GP-GPU computing (3, Informative)

jensend (71114) | more than 2 years ago | (#39444447)

The double-precision situation is a lot worse than that. For GK104, fp64 performance is only 1/24 of fp32. Prior to this, nV's consumer cards did fp64 at 1/12 (midrange) or 1/8 (high-end) of fp32; I guess that wasn't enough handicapping to protect their Tesla line, so they bumped it up.

If you need more precision than fp32 and want to use nV consumer GPUs, you should consider software emulation. A very simple software double emulation scheme can give you 1/6 to 1/4 of fp32 performance. Of course it's less precise than fp64: it has 48 significand bits (double fp32's 24, less than fp64's 53) and 8 exponent bits (same as fp32, 3 fewer than fp64), and to get ~1/4 of fp32 performance you have to skip a lot of error/NaN/inf handling. But it's probably sufficient for many applications where people use fp64. Even software "quad-single" (96 significand bits using 4 32-bit floats) would likely be faster than nV's native fp64.

OTOH, AMD doesn't have much reason to handicap its cards; as you mention, its cards do fp64 at 1/4 of fp32, and that's with full IEEE 754 compliance. They used to be at a big disadvantage for GPGPU, but with their new compute-oriented GCN architecture and their now-huge fp64 lead for $2000 cards, I think a lot of GPGPU folks will switch.
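
For anyone curious what the "double-single" emulation mentioned above looks like, it is usually built from error-free transformations such as Knuth's two-sum. Here is a minimal CUDA sketch of just the addition step; the df64 type and function names are mine, and real double-single libraries add the error handling skipped here. It also assumes the compiler doesn't reassociate fp32 adds, so avoid -use_fast_math.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Double-single value: hi + lo, ~48 significand bits from fp32 hardware.
    struct df64 { float hi, lo; };

    __device__ df64 df64_add(df64 a, df64 b)
    {
        // Knuth two-sum: s + err equals a.hi + b.hi exactly.
        float s   = a.hi + b.hi;
        float bv  = s - a.hi;
        float err = (a.hi - (s - bv)) + (b.hi - bv);

        // Fold in the low-order words, then renormalize (quick-two-sum).
        err += a.lo + b.lo;
        df64 r;
        r.hi = s + err;
        r.lo = err - (r.hi - s);
        return r;
    }

    // Accumulate 0.1f a million times: naive fp32 accumulation drifts
    // visibly, while df64 stays near the exact sum of the fp32 inputs.
    __global__ void sum01(float *result, int n)
    {
        df64 acc   = {0.0f, 0.0f};
        df64 tenth = {0.1f, 0.0f};
        for (int i = 0; i < n; i++)
            acc = df64_add(acc, tenth);
        *result = acc.hi;
    }

    int main()
    {
        float *d, h;
        cudaMalloc(&d, sizeof(float));
        sum01<<<1, 1>>>(d, 1000000);
        cudaMemcpy(&h, d, sizeof(float), cudaMemcpyDeviceToHost);
        printf("df64 sum: %f (expect ~100000)\n", h);
        cudaFree(d);
        return 0;
    }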

Re:Bad for GP-GPU computing (1)

jensend (71114) | more than 2 years ago | (#39444465)

That should say "sub-$2000 cards" - I forgot that slashdot eats less than signs unless you use HTML entities.

Re:Bad for GP-GPU computing (1)

Anonymous Coward | more than 2 years ago | (#39444883)

I don't know where you get your numbers from. Fermi-class hardware (C20xx) has 1/2 the fp64 performance (~450 GFLOPS) compared to fp32 (~1 TFLOPS), and the old Tesla (C10xx) has about 1/8 or so.

Realistically, unless you load tiny chunks of data and wail on them from shared memory for a good long time, it doesn't matter, because there's no way main memory bandwidth (let alone streaming data in over PCIe) can keep up anyway.

You fail english (1)

jensend (71114) | more than 2 years ago | (#39447341)

Try reading, it's fun!

If you had bothered to read my post, you would have noticed I said those were the performance figures for GK104 and consumer cards. Of course Tesla has fp64 at 1/2 fp32, but to get a worthwhile Tesla card you're looking at ~$2000.

Re:Bad for GP-GPU computing (0)

Anonymous Coward | more than 2 years ago | (#39445819)

Graphics doesn't really need fp64; try to find a pixel shader that uses a single fp64 instruction. Why add units that do nothing but burn leakage and area in graphics applications, and make gamers pay for them?

Re:Bad for GP-GPU computing (0)

Anonymous Coward | more than 2 years ago | (#39445159)

Rubbish. The SM (or SMX, as they decided to call it) has 4 warp schedulers, each of which can schedule up to two instructions from a warp. Warps will still be 32-wide SIMD (just as on Fermi and on Tesla). Divergent kernels will be approximately as bad as on Fermi.

Still, as you say, double precision does suck (possibly more than previously). Memory bandwidth is not up compared to the 480 (or 580?). I've not heard about any interesting new GPGPU/CUDA features yet, which means there's probably nothing groundbreaking going on there.

Re:Bad for GP-GPU computing (1)

rodsoft (892241) | about 2 years ago | (#39528979)

Firstly, this new architecture (GK104) has a greater number of cores (192, versus 32 in the Fermi architecture) sharing a single control logic within a streaming multiprocessor (SM). Internally, each SM is SIMD, so this move is bad for divergent kernels, i.e., algorithms containing if-then-else constructs.

Actually, this is not true. The SIMD width (warp size) is still 32. Divergent kernels won't suffer more with Kepler. Maybe you got the wrong impression because Nvidia's architecture diagram might be oversimplified.
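
The warp width is easy to verify at runtime rather than inferring it from Nvidia's block diagrams; a small host-side CUDA sketch (device index 0 is an assumption):

    #include <cstdio>
    #include <cuda_runtime.h>

    int main()
    {
        cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
            fprintf(stderr, "no CUDA device found\n");
            return 1;
        }
        // Kepler reports warpSize == 32, same as Fermi and Tesla before it.
        printf("%s: compute capability %d.%d\n", prop.name, prop.major, prop.minor);
        printf("warp size: %d threads\n", prop.warpSize);
        printf("multiprocessors: %d\n", prop.multiProcessorCount);
        return 0;
    }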

Stupid Nvidia (4, Funny)

SgtDink (1930798) | more than 2 years ago | (#39443581)

Once again they don't have same day OS/2 support. Do they seriously expect to remain viable if they don't know who there customers are?

Re:Stupid Nvidia (0)

Anonymous Coward | more than 2 years ago | (#39443693)

Where?

Re:Stupid Nvidia (1)

SgtDink (1930798) | more than 2 years ago | (#39443753)

sry "they're customers"

Re:Stupid Nvidia (1)

omnichad (1198475) | more than 2 years ago | (#39443949)

their?

Re:Stupid Nvidia (0)

Anonymous Coward | more than 2 years ago | (#39444395)

thems

Re:Stupid Nvidia (0)

Anonymous Coward | more than 2 years ago | (#39447131)

their?

Its' painful to watch him struggle

Re:Stupid Nvidia (0)

Anonymous Coward | more than 2 years ago | (#39446087)

I was already gleeful at the thought of digging out my Warp and Windows 3.1 floppies to get that real, object-oriented desktop experience and still run a few Notepads and Solitaires on the side.

O.o (0)

Anonymous Coward | more than 2 years ago | (#39444639)

That's weird. I've had Fermi since last summer. Hope I didn't break the NDA.

Open Source will help TDR Fix? (0)

Anonymous Coward | more than 2 years ago | (#39444801)

Nvidia Fermi cards have had problems since the 280.xx drivers: they cause TDRs while browsing with Firefox and using Flash Player. It's a documented problem and they can't find the cause (read their forums). Let's hope the open source community can fix this!

Pulled a fast one... (1)

Retron (577778) | more than 2 years ago | (#39448285)

NVidia have pulled a fast one here, which doesn't seem to have been widely picked up yet.
The codename for the 680 is GK104. The 460 and 560 cards were based on the cut-down GF104 and GF114 GPUs respectively and were midrange parts. The 480 and 580 high-end parts were based on the full GF100 and GF110 GPUs respectively and had a 384-bit memory bus (rather than the 256-bit bus used on the GF1x4 parts).

In other words, the 680 is really what would otherwise have been called the 660; it's just that nVidia has worked out they can make some extra cash by marketing it as a high-end part. Don't be at all surprised when, in a few months' time, a 685 or 690 appears, based on the "full" GK100 (with a 384-bit memory bus and a fair bit of extra oomph).
