
NVIDIA Previews GF100 Features and Architecture

CmdrTaco posted more than 4 years ago | from the visualize-this dept.

Graphics 101

MojoKid writes "NVIDIA has decided to disclose more information regarding its next-generation GF100 GPU architecture today. Also known as Fermi, the GF100 GPU features 512 CUDA cores, 16 geometry units, 4 raster units, 64 texture units, 48 ROPs, and a 384-bit GDDR5 memory interface. If you're keeping count, the older GT200 features 240 CUDA cores, 32 ROPs, and 80 texture units, but the geometry and raster units, as they are implemented in GF100, are not present in the GT200 GPU. The GT200 also features a wider 512-bit memory interface, but the need for such a wide interface is largely negated in GF100 because it uses GDDR5 memory, which effectively offers double the bandwidth of GDDR3, clock for clock. Reportedly, the GF100 will also offer 8x the peak double-precision compute performance of its predecessor, 10x faster context switching, and new anti-aliasing modes."
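For a rough sense of why the narrower bus is not a step backwards, here is a back-of-the-envelope peak-bandwidth comparison (a sketch only; the data rates are illustrative assumptions, since final memory clocks have not been announced):

    #include <cstdio>

    // Theoretical peak bandwidth in GB/s: (bus width in bits / 8) * effective data rate in GT/s.
    static double peak_gbs(int bus_bits, double data_rate_gts) {
        return (bus_bits / 8.0) * data_rate_gts;
    }

    int main() {
        // Illustrative data rates: GDDR5 moves four bits per pin per memory clock,
        // GDDR3 only two, hence "double the bandwidth, clock for clock".
        double gt200 = peak_gbs(512, 2.5);  // 512-bit GDDR3 at ~2.5 GT/s -> ~160 GB/s
        double gf100 = peak_gbs(384, 4.8);  // 384-bit GDDR5 at ~4.8 GT/s -> ~230 GB/s
        printf("GT200-style: %.0f GB/s, GF100-style: %.0f GB/s\n", gt200, gf100);
        return 0;
    }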


101 comments


Wait... (3, Insightful)

sznupi (719324) | more than 4 years ago | (#30807274)

Why more disclosure now? There doesn't seem to be any major AMD or, gasp, Intel product launch in progress...

Re:Wait... (5, Insightful)

Anonymous Coward | more than 4 years ago | (#30807312)

Because I needed convincing not to buy a 5890 today.

Re:Wait... (2, Interesting)

ThePhilips (752041) | more than 4 years ago | (#30808086)

I do not need convincing: 5870 (and likely rumored 5890) simply do not fit my PC case.

The question left open is whether the GF100-based cards would. Or rather: would a GF100 card, together with the PSU it would likely require, fit in my case?

Re:Wait... (2, Interesting)

Kjella (173770) | more than 4 years ago | (#30808590)

Considering the rumor is that it'll pull 280W, almost as much as the 5970, my guess would be no. I settled for the 5850 though; plenty of oomph for my gaming needs.

Re:Wait... (1)

Stepnsteph (1326437) | more than 4 years ago | (#30809034)

Agreed. I ordered the 5850 just last night. The bundled deal at Newegg comes with a free 600 watt Thermaltake power supply (limited time of course: http://www.newegg.com/Product/Product.aspx?Item=N82E16814102857 [newegg.com] ). I'll be gaming at 1920x1080, so this should be quite enough for me for a fair while (though I wish I could've justified a 2 GB card).

Normally I wouldn't do $300 for a vid card. I've paid the $600 premium in the past and that made me realize that the $150 - $200 cards do just fine. Last night's expenditure was to top out this system (card + 8 GBs RAM). Basically I'm betting on it lasting me (as a gaming machine) until DDR3 & related technologies drop to reasonable prices.

Perhaps by then nvidia will have a mid-to-low range version of this new toy that's worth buying.

Re:Wait... (1)

dosilegecko (1609441) | more than 4 years ago | (#30809162)

Yeah, the 5850 outperforms my 8800GTX by a huge margin, it was only $300, and it's much shorter.

Re:Wait... (2, Interesting)

L4t3r4lu5 (1216702) | more than 4 years ago | (#30808716)

No, the question is:

Is the price / performance difference worth the investment in the pricier card, or does opting for the cheaper option allow me to buy a case which will fit the card for a net saving?

If GF100 price > 5870 + New case, you have an easy decision to make.

Re:Wait... (2, Informative)

Cornelius the Great (555189) | more than 4 years ago | (#30808760)

Going by the sizes reported by those who saw the actual card at CES, it's ~10.5 inches, similar to the 5870.

I would wait for a GF100 or 5870 refresh first. AMD is rumored to be working on a 28nm refresh that should be available by mid-year. GlobalFoundries has been showing off wafers fabbed on a 28 nm process [overclock.net], and rumors indicate that we'll be seeing 28nm GPUs by mid-year. I would imagine that nvidia is planning a 28nm refresh of GF100 not long after. Smaller GPU = less power = smaller PCB, so the cards will be shorter.

Re:Wait... (1)

sznupi (719324) | more than 4 years ago | (#30808984)

28nm... that should be interesting for Nvidia, considering TSMC's 40nm process is still painful for them; and I don't see Nv going eagerly to GlobalFoundries.

Re:Wait... (1)

TheKidWho (705796) | more than 4 years ago | (#30810336)

28nm isn't on the roadmap for Global Foundries for production until 2011 at the earliest.

Re:Wait... (2, Insightful)

Tridus (79566) | more than 4 years ago | (#30808870)

Yeah, seriously. The board makers don't take this problem as seriously as they should. The GTX 260 I have now barely fit in my case, and I only got that because the ATI card I wanted outright wouldn't fit.

It doesn't matter how good the card is if nobody has a case capable of actually holding it.

Re:Wait... (2, Interesting)

w0mprat (1317953) | more than 4 years ago | (#30810554)

With AMD's 5000 series now coming down in price, by the time NVIDIA gets its 100 series shipping it won't be *that* much faster, since it's a similar generational leap and a similar process size, but it will be high priced until a significant amount of stock hits the channel... oh, and to sting the early adopter fanboys.

Just like the success of the AMD 4800 cards, many people will go for the significantly better bang for the buck in the 5800 line. It looks like AMD is in a good position.

Re:Wait... (2, Informative)

hairyfeet (841228) | more than 4 years ago | (#30812480)

Bingo! Give that man a cigar! By shouting about its vaporware now (and by my definition, until I can shell out money and have one in my hot little hands, it is vaporware), Nvidia is trying to stem the blood flow from getting its ass kicked by AMD. It is looking more and more like buying ATI was a damned smart move, as even when AMD hasn't got the hottest benchmarking chips it can offer a compelling "whole smash" top-to-bottom solution, with even the IGP chips being quite impressive.

But right now Nvidia is getting its ass royally kicked by the 4xxx and 5xxx series, which is forcing Nvidia to either keep lowering prices or lose market share, a position it can't afford, especially after the bad solder fiasco took a big bite out of its bottom line. Since they don't really have a product to compete with the new 5xxx series, they pretty much have to spam the market with press in the hopes of keeping the hardcore gamers from jumping to AMD.

Frankly, as a lifelong Intel + Nvidia man, going back to the 486DX and the TNT series, I ended up jumping to camp AMD. The bang for the buck on AMD PCs is truly scary, both on CPU and GPU. Sure, I am not gonna beat a Core i7, but when I can buy a 925 quad with 8MB of total cache for $140, and outfit a really nice quad core rig for under $750? I just couldn't resist. I'll probably pick up a 5xxx this summer to replace my 4650, as I'm not that hardcore of a gamer, but the hardware acceleration for all the major formats and how nicely it offloads Windows 7 effects to the GPU definitely makes me interested.

From what I have read, this new Nvidia is gonna be a real space heater, a P4 of a GPU, and with the price of power going nowhere but up I think I'll pass. AMD has been good about lowering power with the second and third gens, so by the time summer rolls around I wouldn't be surprised if they cut power for the 5xxx by a good 30-40%. I finally got rid of the last of my P4 space heaters; I really don't want to go that route with my GPU.

Re:Wait... (2, Insightful)

Anonymous Coward | more than 4 years ago | (#30807436)

At the end of TFA it states that the planned release date is Q1 2010, so releasing this information now is simply an attempt to capture the interest of those looking to buy now/soon... with the hope they'll hold off on a purchase until it hits the store shelves.

Re:Wait... (5, Informative)

galaad2 (847861) | more than 4 years ago | (#30807456)

280W power drain, 550mm^2 chip size => no thanks, I'll pass.

http://www.semiaccurate.com/2010/01/17/nvidia-gf100-takes-280w-and-unmanufacturable [semiaccurate.com]

Re:Wait... (4, Insightful)

afidel (530433) | more than 4 years ago | (#30807596)

Compared to the watts you would need to run a Xeon or Opteron to get the same double-precision performance, it's a huge bargain.

Re:Wait... (4, Insightful)

Calinous (985536) | more than 4 years ago | (#30807764)

But most of us will compare it with the watts needed to run two high-end AMD cards.

Re:Wait... (4, Informative)

EmagGeek (574360) | more than 4 years ago | (#30807820)

I think he's talking about dissipation of such a large amount of power in such a small package size.

The die is nearly a square inch in area, and 280W is a tremendous amount of power to dissipate through it.

Cooling these things is going to be an issue for sure.

Re:Wait... (2, Informative)

afidel (530433) | more than 4 years ago | (#30807908)

Prescott dissipated 105W from only 112 mm^2 - about twice the power density of this chip - so I don't think cooling will be a major problem.
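A quick sanity check on that comparison, using the figures quoted in this thread (the 280W draw and 550 mm^2 die are themselves only rumors, so treat this as a rough sketch):

    #include <cstdio>

    int main() {
        // Power density in W/mm^2, using the numbers floated in this thread.
        double prescott = 105.0 / 112.0;  // ~0.94 W/mm^2
        double gf100    = 280.0 / 550.0;  // ~0.51 W/mm^2 (rumored power and die size)
        printf("Prescott: %.2f W/mm^2, GF100: %.2f W/mm^2, ratio: %.1fx\n",
               prescott, gf100, prescott / gf100);  // ratio comes out to roughly 1.8x
        return 0;
    }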

winter (0)

Anonymous Coward | more than 4 years ago | (#30808432)

Prescott dissipated 105W from only 112 mm^2 - about twice the power density of this chip - so I don't think cooling will be a major problem.

Especially as this announcement came out during winter (for those of us in the northern hemisphere).

Re:winter (1)

Cornelius the Great (555189) | more than 4 years ago | (#30808802)

But this card won't be available until March (spring, for those of us in the northern hemisphere). Nvidia missed its opportunity to make my feet all warm and toasty. ;-)

Re:Wait... (1)

A Friendly Troll (1017492) | more than 4 years ago | (#30808782)

Prescott dissipated 105W from only 112 mm^2 - about twice the power density of this chip - so I don't think cooling will be a major problem.

That's why it was called Preshot and why Intel gave up on NetBurst. It was a bitch to cool down, and - unlike GPUs - it had the benefit of being cooled with bigger heatsinks and larger fans.

Prescott... (1)

earnest murderer (888716) | more than 4 years ago | (#30811474)

also made your PC sound like a vacuum cleaner.

Re:Wait... (0)

Anonymous Coward | more than 4 years ago | (#30807732)

I'll pass, too. I prefer them to be perki, not just fermi, on my GF.

Re:Wait... (1)

TheKidWho (705796) | more than 4 years ago | (#30808850)

No one knows what the power draw actually is for Fermi except the engineers at Nvidia; 280W is highly suspect.

Also, Semiaccurate? Come on, all that site does is bash Nvidia because of the writer's grudge against them.

Remember when the G80 was being released and Charlie said it was "too hot, too slow, too late"? Yeah, that turned out real well, didn't it...

Re:Wait... (1)

Hal_Porter (817932) | more than 4 years ago | (#30808614)

The engineers had a cool idea and they asked the sales guys. And the sales guys said "Dude, that's fucking awesome! What are you waiting for? Stick it on the web!"

Linux (0)

Anonymous Coward | more than 4 years ago | (#30807356)

...but will it run Linux?

Re:Linux (2, Funny)

XavierGr (1641057) | more than 4 years ago | (#30807404)

I thought the saying/meme is: "But will it run Crysis?"

Re:Linux (1)

cptnapalm (120276) | more than 4 years ago | (#30810018)

It changed back to the "But will it run Linux?" meme because, with the current hardware roadmap from the hardware manufacturers, nothing will be able to run Crysis until Duke Nukem hits store shelves.

Re:Linux (1)

flex941 (521675) | more than 4 years ago | (#30808914)

.. it will probably not even run _on_ Linux.

Should AMD sue them too? (1)

i_ate_god (899684) | more than 4 years ago | (#30807368)

For making the better GPU? :P

Re:Should AMD sue them too? (1)

ThePhilips (752041) | more than 4 years ago | (#30808300)

Well... Intel's past anti-competitive practices were never really a secret. Dell in the past was constantly bragging about the deals it was getting by remaining loyal to Intel.

Also, AMD/ATI make the better GPUs at the moment, as far as consumers are concerned. Buying a GT200 card now is pointless, as it is a well-known fact that nVidia literally abandons support for the previous GPU generation when they release a new one. Waiting for GF100-based cards just to find that you have to give an arm and a leg to afford one (especially in this economy)... Better to get one or two 5870/5850s and enjoy the ride now.

Re:Should AMD sue them too? (0)

Anonymous Coward | more than 4 years ago | (#30809146)

Oh noes! A company chose to only buy from one vendor! THE HORROR!!! THE HORROR!!!!

Re:Should AMD sue them too? (1)

ThePhilips (752041) | more than 4 years ago | (#30809798)

THE HORROR!!! THE HORROR!!!!

The problem was that Intel treated different OEMs differently. There were many other vendors who were capable of selling in volumes, yet Intel dealt only with selected OEMs.

And that is discriminatory, unfair and often illegal.

P.S. IANAL

Re:Should AMD sue them too? (3, Informative)

Lunix Nutcase (1092239) | more than 4 years ago | (#30809218)

Buying a GT200 card now is pointless, as it is a well-known fact that nVidia literally abandons support for the previous GPU generation when they release a new one.

Such bullshit. For example, the latest GeForce 4 drivers date to Nov 2006, which is when the GeForce 8 series came out, 4 years after the initial GeForce 4 card. Even the GeForce 6 has Win7 drivers that came out barely 2 months ago, and that's 5 series back from the current 200 series.

Re:Should AMD sue them too? (1)

ThePhilips (752041) | more than 4 years ago | (#30809714)

As a proud ex-owner of a TNT2 and a GF7800 I can personally attest: yes, the drivers exist.

But all those problems that pretty much everybody experienced, complained about, and reported to nVidia never got fixed in the drivers for the older cards. (*) And those were ordinary stability, screen corruption, and game performance problems.

I'm sorry but I have to conclude that they do in fact abandon support. OK, that's my personal experience. But with two f***ing cards at different times I got pretty much the same experience with nVidia.

(*) As I haven't owned newer cards I can't confirm whether the problems were actually fixed for them. The list of changes supplied with the drivers even stops mentioning the older cards: IIRC one of the problems I had with the 7800 was fixed in a driver update, yet it was billed as a GF8000-specific fix.

Re:Should AMD sue them too? (1)

Lunix Nutcase (1092239) | more than 4 years ago | (#30809930)

I'm sorry but I have to conclude that they do in fact abandon support.

Which is an entirely different claim. You said they abandon a product right after the next generation, which is completely unadulterated bullshit. Of course they will eventually abandon support for old products, because they get no revenue from doing so and most of the older cards they drop support for have a tiny market share.

Re:Should AMD sue them too? (1)

TheKidWho (705796) | more than 4 years ago | (#30812040)

You think ATI is any better? They abandon support after 3 years too, to the point where you'd better not even bother trying to install newer drivers on your system.

Re:Should AMD sue them too? (1)

Lunix Nutcase (1092239) | more than 4 years ago | (#30814726)

What he's complaining about is what ALL hardware companies do. No piece of hardware has indefinite support unless you're paying them a bunch of money to do so.

Re:Should AMD sue them too? (1)

acohen1 (1454445) | more than 4 years ago | (#30810004)

Seriously, nobody should complain about the lack of Win7 driver support for cards that came out before XP SP1.

Re:Should AMD sue them too? (1)

Lunix Nutcase (1092239) | more than 4 years ago | (#30810532)

Especially when one of the cards he's whining about getting no support anymore is a fucking 11-year-old TNT2.

Wow, that article is terribly written... (2, Informative)

dskzero (960168) | more than 4 years ago | (#30807374)

I understand that most of the time people who write about computers aren't exactly literature graduates, but wtf, at least write correctly. Use a spell checker or have someone proofread it.

Anandtech (4, Informative)

SpeedyDX (1014595) | more than 4 years ago | (#30807398)

Anandtech also has an article up about the GF100. They generally have very well written, in-depth articles: http://www.anandtech.com/video/showdoc.aspx?i=3721 [anandtech.com]

Re:Anandtech (4, Interesting)

dunezone (899268) | more than 4 years ago | (#30807570)

According to the article at Anandtech, the following are still unknown: clock speeds, power usage, pricing, performance. Pretty much all the deal-breakers are unknown.

"The GPU will also be execute C++ code." (1)

maxwell demon (590494) | more than 4 years ago | (#30807430)

From the article:
"The GPU will also be execute C++ code."

They integrate a C++ interpreter (or JIT compiler) into their graphics chip?

Re:"The GPU will also be execute C++ code." (2, Informative)

TeXMaster (593524) | more than 4 years ago | (#30807642)

From the article: "The GPU will also be execute C++ code."

They integrate a C++ interpreter (or JIT compiler) into their graphics chip?

That's a misinterpretation of part of the NVIDIA CUDA propaganda stuff: better C++ support in NVCC

Re:"The GPU will also be execute C++ code." (1)

LordKronos (470910) | more than 4 years ago | (#30807670)

Without more details, I suspect they've just made a more capable language that lets you write your shaders and stuff in something that looks just like C++.

Re:"The GPU will also be execute C++ code." (1)

TheKidWho (705796) | more than 4 years ago | (#30809690)

No, the GPU can execute native C++ code now. It's one of the big new features of Fermi for GPGPU.

Re:"The GPU will also be execute C++ code." (1)

LordKronos (470910) | more than 4 years ago | (#30809968)

Although I may not have worded it very well, that's pretty much what I meant. I was thinking that others might be under the incorrect impression that it could offload code bound for the CPU and run it on the GPU instead.

Re:"The GPU will also be execute C++ code." (1)

Suiggy (1544213) | more than 4 years ago | (#30810092)

You have been misled. Their CUDA compiler toolchain uses LLVM as the backend to compile C++ code into CUDA machine code. There is no such thing as "native C++ code," but there is "standard C++ code," which is what nVidia's marketing goons really mean.
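For anyone wondering what "C++ on the GPU" looks like in practice: it's standard C++ constructs (templates, operator overloading, and so on) in device code that still has to be compiled offline and launched as a kernel, not an x86 binary running on the GPU. A minimal, hypothetical sketch in CUDA C++:

    #include <cstdio>

    // A templated device function: ordinary C++ that the CUDA compiler turns into GPU code.
    template <typename T>
    __device__ T square(T x) { return x * x; }

    // Kernel using the template; it still needs the __global__ qualifier and a kernel launch.
    __global__ void square_all(const float* in, float* out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = square(in[i]);
    }

    int main() {
        const int n = 256;
        float h_in[n], h_out[n];
        for (int i = 0; i < n; ++i) h_in[i] = static_cast<float>(i);

        float *d_in, *d_out;
        cudaMalloc((void**)&d_in, n * sizeof(float));
        cudaMalloc((void**)&d_out, n * sizeof(float));
        cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);

        square_all<<<(n + 127) / 128, 128>>>(d_in, d_out, n);
        cudaMemcpy(h_out, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);

        printf("square(10) = %.0f\n", h_out[10]);  // prints 100
        cudaFree(d_in);
        cudaFree(d_out);
        return 0;
    }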

Re:"The GPU will also be execute C++ code." (1)

TheKidWho (705796) | more than 4 years ago | (#30810268)

You're picking at semantics here.

Re:"The GPU will also be execute C++ code." (1)

Suiggy (1544213) | more than 4 years ago | (#30810366)

Well, I've already had to correct a number of other people on a couple of other forums I frequent who misunderstood. They thought it would be possible to run compiled Windows/Linux x86/x86-64 executables that were originally C++, without any translation whatsoever. They thought C++ uses a virtual machine somewhat akin to the JVM or .NET, and that such executables were not native machine code. And we're not even getting into platform-specific libraries. In other words, they thought it would be a complete replacement for x86 processors.

Re:"The GPU will also be execute C++ code." (1)

TheKidWho (705796) | more than 4 years ago | (#30810710)

Yes well, most forums tend to be filled with what I coin Intelligent-Idiots. Especially ones that focus on gaming.

You're right however, I did state it wrong.

Re:"The GPU will also be execute C++ code." (1)

dskzero (960168) | more than 4 years ago | (#30807702)

That's rather ambiguous.

double-precision (1)

pigwiggle (882643) | more than 4 years ago | (#30807442)

is where it's at for scientific computation. Folks are moving their codes to GPUs now, betting the double-precision performance will get there soon. An 8x increase in compute performance looks promising, assuming it translates into real-world gains.
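As a concrete example of the kind of kernel those codes lean on, here is a double-precision AXPY (y = a*x + y), a staple of scientific computing; this is a generic CUDA sketch, nothing GF100-specific:

    #include <cstdio>
    #include <vector>

    // daxpy: y = a*x + y in double precision.
    __global__ void daxpy(int n, double a, const double* x, double* y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        std::vector<double> hx(n, 1.0), hy(n, 2.0);

        double *dx, *dy;
        cudaMalloc((void**)&dx, n * sizeof(double));
        cudaMalloc((void**)&dy, n * sizeof(double));
        cudaMemcpy(dx, hx.data(), n * sizeof(double), cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy.data(), n * sizeof(double), cudaMemcpyHostToDevice);

        daxpy<<<(n + 255) / 256, 256>>>(n, 3.0, dx, dy);
        cudaMemcpy(hy.data(), dy, n * sizeof(double), cudaMemcpyDeviceToHost);

        printf("y[0] = %.1f (expected 5.0)\n", hy[0]);
        cudaFree(dx);
        cudaFree(dy);
        return 0;
    }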

Can someone who is more knowledgeable tell me... (2, Interesting)

Immostlyharmless (1311531) | more than 4 years ago | (#30807458)

Why is it that they would stick with a 256-bit memory bus (aside from the fact that, clock for clock, it's really the same speed as a 512-bit bus of slower memory)? Is it just because the rest of the card is the bottleneck? I can't recall another card where, all other things being equal, a wider bus didn't result in a sizable increase in performance. It was obviously implemented in the previous generation of cards, so why not stick with it, use the GDDR5, and end up with a card that's even faster?

Can anyone explain to me why they would do this (or not do this, depending on how you look at it)?

Re:Can someone who is more knowledgeable tell me.. (1, Informative)

Anonymous Coward | more than 4 years ago | (#30807536)

A wide memory bus is expensive in terms of card real estate (wider bus = more traces), which increases cost. It also increases the amount of logic in the GPU and requires more memory chips for the same amount of memory.

Re:Can someone who is more knowledgeable tell me.. (2, Interesting)

afidel (530433) | more than 4 years ago | (#30807662)

This monster is already 550 mm^2; I don't think the couple million transistors needed to do a 512-bit bus would be noticed, nor would the cost of the pins to connect to the outside. The more likely explanation is that they aren't memory-starved, and that trying to route the extra high-speed lanes on the board was either too hard or was going to require more layers in the PCB, which would add significant cost.

Re:Can someone who is more knowledgeable tell me.. (1)

Calinous (985536) | more than 4 years ago | (#30807740)

They _think_ that the card won't be memory-starved at the usual loads. More memory lanes also mean higher complexity in assuring the same "distance" (propagation time) for all the memory chips.

People think that the newest AMD card (the 5970?) is huge; I wonder how big cards with Fermi will be, and how much bigger they would have to be if they needed even more memory chips and memory lanes.

Re:Can someone who is more knowledgeable tell me.. (1)

John Whitley (6067) | more than 4 years ago | (#30810916)

nor would the cost of the pins to connect to the outside.

Are you kidding? The pin driver pads take up more die real-estate than anything else (and they suck up huge amounts of power as well). Even on now-ancient early 80's ICs, the pads were gargantuan compared to any other logic. E.g. a logic module vs. a pad was a huge difference... like looking at a satellite map of a football field (pad) with a car parked beside it (logic module). These days, that's only gotten orders of magnitude worse as pin drivers haven't shrunk much at all when compared to current logic process sizes.

There's a reason that there's a fair bit of active research towards integrating optical off chip communication directly into current silicon processes. The hope is that such approaches will represent a big improvement in chip die size, power dissipation, and available bandwidth -- all just from removing the pad drivers and pins from the equation.

Re:Can someone who is more knowledgeable tell me.. (1)

afidel (530433) | more than 4 years ago | (#30811080)

Huh? The outer left and right rows on this [techtree.com] picture are the memory controllers; that's what, 5-10% of the total die area? Adding 1/3rd more pins would add a couple percent to the overall cost of the chip. Now, on lower-end parts where there are half as many logic units it would be more significant, but there's a reason that lower-end parts have less memory bandwidth (and they need less, since they can process less per clock).

Re:Can someone who is more knowledgeable tell me.. (1)

TheKidWho (705796) | more than 4 years ago | (#30812298)

Increasing the bus size has the effect of increasing the perimeter of the chip, which drives up costs because of the increased die area.

Re:Can someone who is more knowledgeable tell me.. (1)

Creepy (93888) | more than 4 years ago | (#30808196)

Not to mention there is certainly not a 1:1 gain in speed from doubling the bandwidth. Double bandwidth is nice for, say, copying blocks of memory, but it doesn't help for performing operations, and sometimes added latencies can make it underperform slower memory - early DDR3, for instance, had CAS latencies double or more those of DDR2 without a huge gain in bandwidth (800 to 1066) and often could be beaten by much cheaper DDR2. Without a more comprehensive analysis it is hard to say which is faster.
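To put rough numbers on that: CAS latency is counted in clock cycles, so what matters is the absolute latency in nanoseconds, which barely improved even as the cycle count climbed (a sketch using typical timings of the era, not measured figures):

    #include <cstdio>

    // Absolute CAS latency in nanoseconds: CL cycles divided by the I/O clock,
    // which is half the effective data rate for DDR2/DDR3.
    static double cas_ns(double data_rate_mts, int cl) {
        return cl * 1000.0 / (data_rate_mts / 2.0);
    }

    int main() {
        // Illustrative timings: CL5 was common for DDR2-800, CL7 for early DDR3-1066.
        printf("DDR2-800  CL5: %.1f ns\n", cas_ns(800.0, 5));   // ~12.5 ns
        printf("DDR3-1066 CL7: %.1f ns\n", cas_ns(1066.0, 7));  // ~13.1 ns
        return 0;
    }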

Costs more (3, Informative)

Sycraft-fu (314770) | more than 4 years ago | (#30808018)

The wider your memory bus, the greater the cost. Reason is that it is implemented as more parallel controllers. So you want the smallest one that gets the job done. Also, faster memory gets you nothing if the GPU isn't fast enough to access it. Memory bandwidth and GPU speed are very intertwined. Have memory slower than your GPU needs, and it'll be bottlenecking the GPU. However have it faster, and you gain nothing while increasing cost. So the idea is to get it right at the level that the GPU can make full use of it, but not be slowed down.

Apparently, 256-bit GDDR5 is enough.

Re:Costs more (1)

hattig (47930) | more than 4 years ago | (#30811168)

Apparently, 256-bit GDDR5 is enough.

(figures from http://www.anandtech.com/video/showdoc.aspx?i=3721&p=2 [anandtech.com] )

GF100 has a 384-bit memory bus, likely with a 4000MHz+ data rate. HD5870 has a 4800MHz data rate, so let's assume the same.

The GTX285 had a 512-bit memory bus, with a 2484MHz data rate.

So the bandwidth is (384/512) * 4800 / 2484 = 1.45x higher.

Re:Costs more (1)

BikeHelmet (1437881) | more than 4 years ago | (#30813644)

Have memory slower than your GPU needs, and it'll be bottlenecking the GPU. However have it faster, and you gain nothing while increasing cost. So the idea is to get it right at the level that the GPU can make full use of it, but not be slowed down.

My old 7900GS was the first card where I felt like the memory wasn't being fully utilized by the GPU.

It had a near negligible performance impact running 4xAA on most games.

My next card (8800GS) had a higher framerate, but also a bigger hit from 4xAA.

What's with the terrible naming (5, Insightful)

LordKronos (470910) | more than 4 years ago | (#30807498)

So we've had this long history with nvidia part numbers gradually increasing: 5000 series, 6000 series, etc., up until the 9000 series. At that point they needed to go to 10000, and the numbers were getting a bit unwieldy. So, understandably, they decided to restart with the GT100 series and GT200 series. So now, instead of continuing with a 300 series, we're going back to a 100. So we had the GT100 series and now we get the GF100 series? And GF? Seriously? People already abbreviate GeForce as GF, so now when someone says GF we can't be sure what they are talking about. Terrible marketing decision IMHO.

Re:What's with the terrible naming (2, Insightful)

ZERO1ZERO (948669) | more than 4 years ago | (#30807882)

I find your ideas intriguing and would like to subscribe to your newsletter.

Re:What's with the terrible naming (1)

Zantetsuken (935350) | more than 4 years ago | (#30807936)

No, no, no... the GF abbreviation is for "Girl Friend 100"

Re:What's with the terrible naming (2, Informative)

TheKidWho (705796) | more than 4 years ago | (#30808754)

GF100 is the name of the chip. The cards will be called the GT300 series.

Re:What's with the terrible naming (4, Funny)

carou (88501) | more than 4 years ago | (#30810660)

GF100 is the name of the chip. The cards will be called the GT300 series.

Great! That's not confusing at all.

Re:What's with the terrible naming (0)

Anonymous Coward | more than 4 years ago | (#30809986)

Type, Architecture, chip
Gpu, Fermi, 100
Gpu, Tesla, 200

The actual cards will probably be called something like GTX380 or whatever.

wait a minute... (2, Insightful)

buddyglass (925859) | more than 4 years ago | (#30807658)

What happened to GDDR4?

Re:wait a minute... (3, Funny)

Calinous (985536) | more than 4 years ago | (#30807748)

Was renamed GDDR5...
      Only joking

Re:wait a minute... (1)

grimJester (890090) | more than 4 years ago | (#30807874)

Well, they could have gone to GDDR4, but thought... [theonion.com]

Re:wait a minute... (1)

nedlohs (1335013) | more than 4 years ago | (#30808214)

The sad thing is, the cheapo non-brand razor I used this morning had 7 blades... And a strip...

Re:wait a minute... (1)

scotch (102596) | more than 4 years ago | (#30808482)

How is that sad? 

Re:wait a minute... (1)

nedlohs (1335013) | more than 4 years ago | (#30808640)

Because in just a few years reality overtook the exaggeration in an Onion article.

Insufficient Hyperbole (1)

zippthorne (748122) | more than 4 years ago | (#30812974)

I was disappointed by that article as well. It was obvious even then that they simply weren't exaggerating enough.

Re:wait a minute... (1)

Spatial (1235392) | more than 4 years ago | (#30808108)

According to Wikipedia, it was used with some of AMD's 1000 and 2000 series GPUs.

Re:wait a minute... (1)

maestroX (1061960) | more than 4 years ago | (#30811524)

What happened to GDDR4?

It was bundled with Leisure Suit Larry 4.

Tesselation could rescue PC gaming (4, Insightful)

Colonel Korn (1258968) | more than 4 years ago | (#30808374)

Now that graphics are largely stagnant in between console generations, the PC's graphics advantages tend to be limited to higher resolution, higher framerate, anti-aliasing, and somewhat higher texture resolution. If the huge new emphasis on tessellation in GF100 strikes a chord with developers, and especially if something like it gets into the next console generation, games may ship with much more detailed geometry which will then automatically scale to the performance of the hardware on which they're run. This would allow PC graphics to gain the additional advantage of an order-of-magnitude increase in geometry detail, which would make more of a visible difference than any of the advantages it currently has, and it would occur with virtually no extra work by developers. It would also allow performance to scale much more effectively across a wide range of PC hardware, allowing developers to simultaneously hit the casual and enthusiast markets much more effectively.

Re:Tesselation could rescue PC gaming (0)

Anonymous Coward | more than 4 years ago | (#30810064)

As I understand it, tessellation is nothing new. OpenGL has had primitives for it in place for some time, and ATI has had tessellation functions - with relatively wide developer support - in hardware for nearly a decade now (circa 2001).

So to clarify your point, the "new emphasis" on tessellation is only new in an NVidia / Windows / DirectX context. In all other contexts, it's been around for some time.

Re:Tesselation could rescue PC gaming (0)

Anonymous Coward | more than 4 years ago | (#30810182)

XBox 360 already has hardware tessellation.

Tessellation isn't magic, and experts I've talked to don't think the visual differentiation will ever be that extraordinary. I think you might agree if you look at any of the DX11 Tessellation videos that have been posted to YouTube.

Re:Tesselation could rescue PC gaming (1)

crc79 (1167587) | more than 4 years ago | (#30810736)

I remember seeing something like this in the old game "Sacrifice". I wonder if their method was similar...

Re:Tesselation could rescue PC gaming (1)

John Whitley (6067) | more than 4 years ago | (#30811276)

Now that graphics are largely stagnant in between console generations

I'm afraid that you've lost me. XBox to XBox 360, PS2 to PS3, both represent substantial leaps in graphics performance. In the XBox/PS2 generation, game teams clearly had to fight to allocate polygon budgets well, and it was quite visible in the end result. That's not so much the case in current generation consoles. It's also telling that transitions between in-game scenes and pre-rendered content aren't nearly as jarringly obvious as they used to be. And let's not forget the higher resolutions that current consoles are expected to seamlessly tackle.

Console makers won't be interested in stopping this trend until cost or engineering concerns block them, or until customers stop caring. I expect the latter to happen only around the time uncanny-valley-crossed photorealistic scenes can be simulated and rendered in realtime.

Re:Tesselation could rescue PC gaming (1)

coxymla (1372369) | more than 4 years ago | (#30814628)

I think he meant that until the next generation of consoles launches, graphics within a generation are fairly stagnant. This has pretty much always been the case, but these days seems to be affecting the PC games market more than it used to.

Re:Tesselation could rescue PC gaming (1)

MistrBlank (1183469) | more than 4 years ago | (#30814504)

I'd hardly consider graphics stagnant.

A perfect example is the difference between Mirror's Edge on the Xbox and PC. There is a lot more trash floating about, and there is a lot more physics involved, with glass being shot out and blinds being affected by gunfire and wind in the PC version. Trust me when I say that graphics advantages are still on the PC side, and my 5770 can't handle Mirror's Edge (a game from over a year ago) at 1080p on my home theatre. Now it's by no means a top-end card, but it is relatively new. So yes, there is still plenty of room to move forward in processing, even at a resolution equal to our console brethren, and developers have already taken advantage of dialing detail down.

So I don't get what you're saying. It seems to me like what you are saying already is happening and it isn't doing squat for "rescuing" PC gaming.

What do you mean by rescue? (3, Informative)

mjwx (966435) | more than 4 years ago | (#30815892)

The PC market isn't going anywhere. Not even EA is willing to abandon it, despite the amount of whinging they do.

Now that graphics are largely stagnant in between console generations

Graphical hardware power is a problem on consoles, not the PC. Despite their much-touted power, the PS3 and Xbox 360 cannot do FSAA at 1080p. Most developers have resorted to software solutions (hacks, for all intents and purposes) to get rid of jaggedness.

Most games made for consoles will work the same, if not better, on a low-end PC (assuming they don't do a crappy job on the port, but Xbox-to-PC porting is pretty hard to screw up these days). The problem with PC gaming is that it is not utilised to its fullest extent. Most games are console ports, or PC games bought up at about 60% completion and then consolised.

the PC's graphics advantages tend to be limited to higher resolution

PC graphics from 1280x1024 upwards tend to look pretty good. Compare that to the Xbox (720p) or PS3 (1080p), which still look pretty bad at those resolutions. Check out screenshots of Fallout 3 or Far Cry 2; the PC version always looks better no matter the resolution. According to the latest Steam survey, 1280x1024 is still the most popular resolution, with 1680x1050 second.

anti-aliasing, and somewhat higher texture resolution

If you have the power, why not use it.

If the huge new emphasis on tessellation in GF100 strikes a chord with developers

Don't get me wrong, however; progress and new ideas are a good thing, but the PC gaming market is far from in trouble.

Someone please tell me (1)

maroberts (15852) | more than 4 years ago | (#30809562)

What video card do people recommend you fit in your PC nowadays?
a) on a budget (say £50)
b) average (say £100)
c) with a bigger budget (say £250)

Bonus points if you can recommend a good (fanless) silent video card....

Re:Someone please tell me (2, Informative)

TheKidWho (705796) | more than 4 years ago | (#30809772)

a) 5670 or GT240 if you can find one cheap enough... However, depending on how British pounds convert, the true budget card is a GT 220 or a 4670.
b) 5770 or GTX260 216 core
c) Radeon 5870 or 5970 if you can afford it.

Re:Someone please tell me (1, Informative)

Anonymous Coward | more than 4 years ago | (#30809890)

What's that funny squiggly L-shaped thing where the dollar sign is supposed to be? :-P

I'm running a single Radeon 4850 and have no problem with it whatsoever.

A friend of mine is running two GeForce 260 cards in SLI mode which make his system operate at roughly the same temperature as the surface of the sun.

We both play the same modern first person shooter games. If you bring up the numbers, he might get 80fps compared to my 65fps. However I honestly cannot notice any difference.

The real difference is that he spent over $400 (approx. squiggly L-shape650) compared to my $125 (approx. squiggly L-shape230) and has to open the windows to his room in the middle of winter to cool it down.

Re:Someone please tell me (0)

Anonymous Coward | more than 4 years ago | (#30810014)

Sorry, got the conversions backwards! Damn the economy!
400 USD ~ 245 GBP
125 USD ~ 75 GBP
Oh, forgot to aim for the bonus points as well: my system runs so quietly I can hardly tell when it's running. My friend's sounds like an Airbus A380 preparing for takeoff.

Re:Someone please tell me (0)

Anonymous Coward | more than 4 years ago | (#30811320)

My friend's sounds like an Airbus A380 preparing for takeoff.

And NVIDIA's GF100 sounds like a Boeing Dreamliner, full of paying passengers, actually flying a commercial route.

Oh, wait ...

Re:Someone please tell me (1)

Spatial (1235392) | more than 4 years ago | (#30814178)

A: Nothing. Save your money and get a B-type card later; these cards are not good value. Alternatively, try the used market for a cheap Geforce 8800GT or Radeon HD4850, which will serve you pretty well.

B: Radeon HD4870. Great card, extremely good value. [overclockers.co.uk]

C: Radeon HD5850 kicks ass. Diminishing value for money here though. [overclockers.co.uk]

Beware the future upgrade (1)

Megahard (1053072) | more than 4 years ago | (#30811242)

from GF100 to MRS100.

Beware the future downgrade (0)

Anonymous Coward | more than 4 years ago | (#30814836)

GF100 to MRS100 is easy; it is the future downgrade from MRS100 to MR50/MS50 that's the real killer ;^)

I keep waiting for... (0)

Anonymous Coward | more than 4 years ago | (#30811504)

"But wait, there's still more!"
