
NVIDIA GeForce GTX TITAN Uses 7.1 Billion Transistor GK110 GPU

timothy posted about a year ago | from the one-transistor-earthling-please dept.

Graphics 176

Vigile writes "NVIDIA's new GeForce GTX TITAN graphics card is being announced today, and it utilizes the GK110 GPU first announced in May of 2012 for the HPC and supercomputing markets. The GPU touts 4.5 TFLOPS of computing horsepower provided by 2,688 single precision cores, 896 double precision cores, a 384-bit memory bus and 6GB of on-board memory, double the frame buffer of AMD's Radeon HD 7970. With 7.1 billion transistors and a 551 mm^2 die size, GK110 is very close to the reticle limit for current lithography technology! The GTX TITAN introduces a new GPU Boost revision based on real-time temperature monitoring and support for monitor refresh rate overclocking that will entice gamers, and with a $999 price tag the card could also be one of the best GPGPU options on the market." HotHardware says the card "will easily be the most powerful single-GPU powered graphics card available when it ships, with relatively quiet operation and lower power consumption than the previous generation GeForce GTX 690 dual-GPU card."
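
For readers who want to sanity-check the headline figure, here is a minimal back-of-the-envelope sketch. The ~837 MHz base clock is an assumption not given in the summary; only the core count comes from the article.

```python
# Back-of-the-envelope check of the 4.5 TFLOPS figure quoted above.
# The ~837 MHz base clock is an assumption (not stated in the summary);
# one fused multiply-add per core per clock counts as two FLOPs.
cores = 2688          # single precision CUDA cores (from the summary)
clock_hz = 837e6      # assumed base clock
flops_per_clock = 2   # one FMA = 2 floating point operations

peak_sp = cores * clock_hz * flops_per_clock
print(f"Peak single precision: {peak_sp / 1e12:.2f} TFLOPS")  # ~4.50 TFLOPS
```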

176 comments

What's the point? (2)

i kan reed (749298) | about a year ago | (#42945155)

All games that have the budget for graphics these days are targeted at console limitations. I can't really see any reason to spend that much on a graphics card, except if you're a game developer yourself.

Re:What's the point? (1)

h4rr4r (612664) | about a year ago | (#42945179)

The biggest reason with this class of cards is always epeen. The next biggest might be that, with the PS4 and Xbox 720 on the horizon, if money is no object you won't have to upgrade for a long time.

Re:What's the point? (1)

Ruede (824831) | about a year ago | (#42945657)

Word!
My years-old NVIDIA 8800 GTS 640MB is still usable today.

Re:What's the point? (1)

ByOhTek (1181381) | about a year ago | (#42946115)

That card has been around for a lot more than 2 years. It's from eight generations ago.

Re:What's the point? (2)

Khyber (864651) | about a year ago | (#42946271)

You might want to redo that number, considering several 'generations' are just fucking rebadges of prior-gen tech.

Re:What's the point? (1)

durrr (1316311) | about a year ago | (#42945981)

Multi display gaming and 4K monitors right around the corner may also give it a run for its money.

Re:What's the point? (0)

Anonymous Coward | about a year ago | (#42946163)

X-Plane 10 does not support SLI. There is at least one application for a single-GPU card with better performance than a GeForce 680. When the interstate system in the game uses real data, it's very realistic [x-plane.com]. No other single application I am aware of needs 75GB for a complete installation.

Re:What's the point? (2, Informative)

Anonymous Coward | about a year ago | (#42945199)

Really? Like right in the summary man: for the "HPC and supercomputing markets"

Not so you can run Quake at 500,000 fps

Re:What's the point? (1)

Jeremy Erwin (2054) | about a year ago | (#42945649)

Of course, if you read beyond the summary, you'll discover references to "Crysis 3", "ultra fast small form factor gaming PCs", "display overclocking" and other gimmicks that have no place in a supercomputing environment. In fact, the only real concession to HPC is double precision performance, which I suspect is only marginally useful in games.

Re:What's the point? (0)

Anonymous Coward | about a year ago | (#42945409)

All games

Be careful with that broad generalization.

Re:What's the point? (1, Troll)

Sockatume (732728) | about a year ago | (#42945505)

In five years' time they'll be able to crank these out as integrated graphics chips for low-end Dell laptops. They might as well ship them to a handful of enthusiasts and ahead-of-the-curve game developers now.

Re:What's the point? (1)

jitterman (987991) | about a year ago | (#42945555)

I'm not 100% convinced of that. While it's only one example in a sea of thousands, it looks like CD Projekt's next iteration of The Witcher will be a real step up. Also, there are always people who want and can afford the bleeding edge, even if they don't need it.

Re:What's the point? (5, Insightful)

Sockatume (732728) | about a year ago | (#42945881)

The Next Big Thing is all-real-time lighting. Epic has been demoing a sparse voxel based technique that just eats GPU power.

Re:What's the point? (3, Informative)

Luckyo (1726890) | about a year ago | (#42946207)

That is simply not happening this decade. The jump in required computing power is ridiculous, while the current "fake lighting" is almost good enough. At the same time, you can't really utilize the current GPU types efficiently for real time lighting because that's simply not what they're optimized for.

Re:What's the point? (2)

Joce640k (829181) | about a year ago | (#42947023)

By the time 'real lighting' (whatever that is) becomes possible the current fake lighting will also be able to do far more than it does today. Bang-for-buck, the current techniques will always win because the cost of simulating 'real' is exponential.

Re:What's the point? (1)

DigitAl56K (805623) | about a year ago | (#42945563)

I've been looking into GPU-assisted rendering recently. Blender introduced the Cycles renderer not so long ago, and it runs on nVidia cards to accelerate ray-traced rendering (apparently there were some problems with AMD). This allows for real-time previews, but performance is obviously limited by the card, and currently also by the memory on the card, which can constrain your scene setup. There is also support for acceleration in LuxRender. This card is a welcome addition to the lineup for me, since nVidia's 6xx series was not as strong performance-wise as some of the 5xx series cards for this purpose (at least, I believe one of the Cycles developers stated this at some point, and a number of Cycles and LuxRender benchmarks led me to the same conclusion), and the prospect of buying dual 6xx cards was very pricey both in up-front cost (two cards fully loaded with 3GB each, plus a big PSU) and in keeping them running (power bill). I haven't bought anything yet, but this is definitely interesting.

If you're using it just for gaming, yeah, it's in the over-over-overkill category.
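
As a rough illustration of the setup the parent describes, the sketch below switches Cycles over to CUDA rendering from Blender's Python console. The exact preference path has moved between Blender releases, so treat the property names as indicative rather than exact.

```python
import bpy

# Point Cycles at the GPU instead of the CPU. The compute-device preference
# lived under user_preferences.system in the Blender 2.6x era this thread is
# about; newer releases moved it, so adjust the path for your version.
bpy.context.user_preferences.system.compute_device_type = 'CUDA'

scene = bpy.context.scene
scene.render.engine = 'CYCLES'   # make sure the Cycles renderer is active
scene.cycles.device = 'GPU'      # render (and preview) on the CUDA device
```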

Re:What's the point? (4, Interesting)

fuzzyfuzzyfungus (1223518) | about a year ago | (#42945575)

All games that have the budget for graphics these days are targeted at console limitations. I can't really see any reason to spend that much on a graphics card, except if you're a game developer yourself.

Buying the absolute-top-of-range card (or CPU) almost never makes any sense, just because such parts are always 'soak-the-enthusiasts' collectors' items; but GPUs are actually one area where (while optional, because console specs haven't budged in years) you can actually get better results by throwing more power at the problem on all but the shittiest ports:

First, resolution: 'console' means 1920x1080 maximum, possibly less. If you are in the market for a $250+ graphics card, you may also own a nicer monitor, or two or three running in whatever your vendor calls their 'unified' mode. A 2560x1440 is pretty affordable by the standards of enthusiast gear. That is substantially more pixels pushed.

(Again, on all but the shittiest ports) you usually also have the option to monkey with draw distance, anti-aliasing, and sometimes various other detail levels, particle effects, etc. Because consoles provide such a relatively low floor, even cheap PC graphics will meet minimum specs, and possibly even look good doing it; but if the game allows you to tweak things like that (even in an .ini file somewhere, just as long as it doesn't crash), you can throw serious additional power at the task of looking better.

It is undeniable that there are some truly dire console ports out there that seem hellbent on actively failing to make use of even basic things like 'a keyboard with more than a dozen buttons'; but graphics are probably the most flexible variable. It is quite unlikely (and would require considerable developer effort) for a game that can only handle X NPCs in the same cell as the player on the PS3 to be substantially modified for a PC release that has access to four times the RAM or enough CPU cores to handle the AI scripts or something. That would require having the gameplay guys essentially design and test parallel versions of substantial portions of the gameplay assets, and potentially even require re-balancing skill trees and things between platforms.

In the realm of pure graphics, though, only the brittlest 3D engines freak out horribly at changing viewport resolutions or draw distances, so there can be a reward for considerably greater power. (For some games, there's also the matter of mods: Skyrim, say, throws enough state around that the PS3 teeters on the brink of falling over at any moment. However, on a sufficiently punchy PC, the actual game engine doesn't start running into (more serious than usual) stability problems until you throw a substantially more cluttered gameworld at it.)

Re:What's the point? (1)

Talderas (1212466) | about a year ago | (#42945643)

there's also the matter of mods: Skyrim, say, throws enough state around that the PS3 teeters on the brink of falling over at any moment. However, on a sufficiently punchy PC, the actual game engine doesn't start running into (more serious than usual) stability problems until you throw a substantially more cluttered gameworld at it.

That's why you mod Skyrim so that bodies take a longer time to disappear... like, say, 30 days instead of 10, and you crank down the cell respawn time from 10/30 days to 2/3 days.

Or you install the mod that summons maggots... hundreds of writhing maggots...

Actually right now (4, Informative)

Sycraft-fu (314770) | about a year ago | (#42946047)

Console rez means 1280x720, perhaps less. I know that in theory the PS3 can render at 1080, but in reality basically nothing does. All the games you see out these days are 1280x720, or sometimes even less. The consoles allow for internal resolutions of arbitrary amounts less and then upsample them, and a number of games do that.

Frame rate is also an issue. Most console games are 30fps titles, meaning that's all they target (and sometimes they slow down below that). On a PC, of course, you can aim for 60fps (or more, if you like).

When you combine those, you can want a lot of power. I just moved up to a 2560x1600 monitor, and my GTX 680 is now inadequate. Well ok, maybe that's not the right term, but it isn't overpowered anymore. For some games, like Rift and Skyrim, I can't crank everything up and still maintain a high framerate. I have to choose choppy display, less detail, or a lower rez. If I had the option, I'd rather not.
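
To put rough numbers on the gap described above, here is a small pixels-per-second comparison; the resolutions and frame rates are the ones quoted in these comments, not measurements.

```python
# Rough pixels-per-second comparison between a common console target and the
# 2560x1600 panel mentioned above; numbers come from the comments, not benchmarks.
def pixel_rate(width, height, fps):
    return width * height * fps

console = pixel_rate(1280, 720, 30)    # typical console render target
desktop = pixel_rate(2560, 1600, 60)   # 2560x1600 monitor at 60 fps

print(f"Console: {console / 1e6:.0f} Mpixels/s")   # ~28 Mpixels/s
print(f"Desktop: {desktop / 1e6:.0f} Mpixels/s")   # ~246 Mpixels/s
print(f"Ratio:   {desktop / console:.1f}x")        # ~8.9x more pixels pushed
```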

Re:Actually right now (2)

Khyber (864651) | about a year ago | (#42946297)

"I know that in theory the PS3 can render at 1080, but in reality basically nothing does."

Mortal Kombat, Disgaea 3, Valkyria Chronicles, DBZ Budokai Tenkaichi, all of these are 1080p true-resolution games.

Re:Actually right now (1)

Anonymous Coward | about a year ago | (#42946587)

And that's what, 1% of the total base of games for the PS3? Nice. I would consider that "basically nothing."

Re:What's the point? (2, Interesting)

K. S. Kyosuke (729550) | about a year ago | (#42946121)

First, resolution: 'console' means 1920x1080 maximum, possibly less. If you are in the market for a $250+ graphics card, you may also own a nicer monitor, or two or three running in whatever your vendor calls their 'unified' mode. A 2560x1440 is pretty affordable by the standards of enthusiast gear. That is substantially more pixels pushed.

And almost all those pixels go to waste. I'm still waiting for display units that would be able to track in which direction you're actually looking and give the appropriate hints to the graphics engine. You'd save a lot of computational power by not displaying the parts of the scene that fall into the peripheral vision area in full resolution. Or, alternatively, you could use that computational power to draw the parts you *are* looking at with a greater amount of detail.

Re:What's the point? (1)

Joce640k (829181) | about a year ago | (#42947057)

Buying the absolute-top-of-range card(or CPU) almost never makes any sense

Why does it have to make sense if you've got the money?

Re:What's the point? (1)

alen (225700) | about a year ago | (#42945607)

It's not for games anymore.

Hedge funds use NVIDIA-branded servers with GPUs for trading. There are lots of scientific uses as well: medicine, oil and gas exploration, etc. How do you think they know where to frack for natural gas or dig sideways for the hard-to-reach oil?

Re:What's the point? (2)

marcello_dl (667940) | about a year ago | (#42946267)

> how do you think they know where to frack for natural gas or dig sideways for the hard to reach oil?

Rhabdomancy!

*ducks*

Re:What's the point? (3, Insightful)

Anonymous Coward | about a year ago | (#42945611)

I have no need for this therefore nobody does.

Why do people find this argument convincing? It's just dumb.

Re:What's the point? (1)

locopuyo (1433631) | about a year ago | (#42945743)

There are a lot of good PC games with great graphics that completely ignore consoles. You just have to look harder. While console games have advertisements all over TV and stores, PC games have stuck to the more niche advertisements they have always used. Look at PC gaming websites. The high-end PC games target future hardware, and you won't be able to get a high frame rate on the highest settings even if you have the latest and greatest card.

Even for ports a super high end graphics card is beneficial. Console games are typically rendered at 1280x720 or even lower resolutions and typically target 30 frames per second, 60 if you are lucky. PC games are typically run at 1920x1080+ and people have 120hz+ monitors, some have multimonitor setups with enormous resolutions.

Re:What's the point? (1)

Luckyo (1726890) | about a year ago | (#42946235)

1080p is budget in the PC world. Cutting-edge enthusiasts are looking at 2160p (4K) and 3D monitors (which require double the frame rate). The current high end chokes on these unless run in SLI.

Re:What's the point? (1)

Jaqenn (996058) | about a year ago | (#42945745)

Oculus Rift is going to be asking you to push dual monitors at 60 fps with VSync enabled at whatever resolution they settle on. That's difficult for many cards today.

Re:What's the point? (1)

durrr (1316311) | about a year ago | (#42946147)

They use a single monitor split in half. And the resolution is quite low.
It's unlikely that they'll suddenly opt for dual 4k monitors unless they plan to release the retail version by 2018

Re:What's the point? (1)

zlives (2009072) | about a year ago | (#42947313)

from my understanding, the production version will have a substantially higher res... not sure about the 4k bit though!

Re:What's the point? (0)

Anonymous Coward | about a year ago | (#42945801)

You've never played BF3 or Crysis 3 on anything higher than medium settings, which is what consoles play at (but with FSAA to make it look nicer).

You are missing out and deluding yourself if you think consoles are "good enough".

Re:What's the point? (1)

sl4shd0rk (755837) | about a year ago | (#42945833)

except if you're a game developer yourself.

That would explain the problem with most games out there. They pimp all these super-bitching graphics effects but most people do not have $1000 to spend on their gfx.

Re:What's the point? (1)

K. S. Kyosuke (729550) | about a year ago | (#42946039)

I can't really see any reason to spend that much on a graphics card, except if you're a game developer yourself.

TFLOPS are the new inches. *ducks*

Re:What's the point? (1)

cant_get_a_good_nick (172131) | about a year ago | (#42946049)

This probably isn't targeted at any gamer you know. GPUs are now better thought of as vector/parallel processing machines. I'm sure a lot of Wall Street firms will pick these up, and the "graphics" card will never ever drive a monitor.

The other class would be guys who need to do visualizations. We've been promised "real time raytracing" for years now. Maybe Industrial Light and Magic will pick some up.

Re:What's the point? (1)

Luckyo (1726890) | about a year ago | (#42946087)

3D on high-resolution screens is one of the biggest reasons for this. Most of the current budget stuff does fine rendering non-3D at 1080p. 2160p at 120FPS for a 3D monitor? Even SLI setups choke.

If you're buying a thousand USD video card, you likely have a similar monitor to use with it.

Re:What's the point? (1)

glittermage (650813) | about a year ago | (#42946343)

Please don't generalize. Star Citizen [robertsspa...stries.com] is a PC game set to be released in late 2014; it has no console aims and a pretty awesome pre-release budget.

I would buy one of these cards when the price drops to the $700 range. I budget $350 per card and use two GPUs in two different PCs (one with two NVidia GTX 560 Tis and one with two HD 7850s). So yes, I would buy one of these GPUs to help with power consumption and support the economy.

Re:What's the point? (0)

Anonymous Coward | about a year ago | (#42946425)

Pssst.... Bitcoin miners use GPUs

GK110 vs. 7970 (3, Interesting)

Anonymous Coward | about a year ago | (#42945171)

Hmm. $999 for 4.5 TF/s vs. $399 for 4.3 TF/s from AMD Radeon 7970. Hard to choose.

Re:GK110 vs. 7970 (2, Insightful)

NoNonAlphaCharsHere (2201864) | about a year ago | (#42945247)

Hmm. $999 (2013) for 4.5 TF/s vs. $15 million (1984) for 400 MF/s from Cray-XMP. Hard to believe.
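
Taking both sets of figures above at face value (and ignoring inflation and the single-vs-double-precision caveats raised further down the thread), the price/performance gap works out to roughly eight orders of magnitude:

```python
# FLOPS-per-dollar using the numbers quoted above (peak single precision for
# the TITAN, the Cray's 1984 list price), ignoring inflation.
cray_flops, cray_price   = 400e6, 15e6     # Cray X-MP, 1984
titan_flops, titan_price = 4.5e12, 999.0   # GeForce GTX TITAN, 2013

cray_fpd  = cray_flops / cray_price        # ~27 FLOPS per dollar
titan_fpd = titan_flops / titan_price      # ~4.5e9 FLOPS per dollar
print(f"Improvement: {titan_fpd / cray_fpd:.1e}x")  # ~1.7e8, i.e. ~170-million-fold
```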

Re:GK110 vs. 7970 (2)

CodeReign (2426810) | about a year ago | (#42945313)

Are you measuring acceleration of calculations? TF already contains a time unit.

Re:GK110 vs. 7970 (0)

Anonymous Coward | about a year ago | (#42945399)

TFLOPS is per second... TFLOP is not. TF is... just not right.

Re:GK110 vs. 7970 (5, Funny)

Anonymous Coward | about a year ago | (#42945433)

Maybe a tera-farad? One heck of a capacitor....

Re:GK110 vs. 7970 (4, Funny)

fuzzyfuzzyfungus (1223518) | about a year ago | (#42945587)

Hmm. $999 (2013) for 4.5 TF/s vs. $15 million (1984) for 400 MF/s from Cray-XMP. Hard to believe.

This is why I've stopped buying hardware altogether and am simply saving up for a time machine... Importing technology from the future is, by far, the most economically sensible decision one can make.

Re:GK110 vs. 7970 (3, Informative)

bytestorm (1296659) | about a year ago | (#42945779)

I think this new board does ~1.3TF of double-precision (FP64), whereas the Radeon 7970 does about 947 GFLOPS, which, while not double, is a significant increase (radeon 7970 src [anandtech.com], titan src [anandtech.com]). They also state the theoretical FP32 performance is 3.79 TF for the Radeon 7970, which is lower than the number you gave. Maybe yours is OC; I didn't check that.

tl;dr version: FP64 performance is 37% better on this board.

Re:GK110 vs. 7970 (0)

Anonymous Coward | about a year ago | (#42946245)

Probably used 7970 GHz edition [anandtech.com] numbers.
2048 stream processors * 1.05GHz core * 2(muladd) = 4.3 TF peak SP
DP rate on Tahiti is 1/4 SP, so that'd be 1.075 TF DP ...

Btw, If you are into that sort of number wankery, Dual-Tahiti cards have been out for a while, ~$900, 7.3TF SP / 1.8TF DP.

Re:GK110 vs. 7970 (2)

Shinobi (19308) | about a year ago | (#42946823)

Nvidia: Easy to use, easy to program for, good I/O capability, good real-world performance, hence their popularity in the HPC world.

AMD: Awesome on paper. However, crap programming interfaces, Short Bus Special design in terms of I/O, and unless something's changed during the last month, it's STILL completely fucking retarded in requiring Catalyst Control Center and X RUNNING on the machine to expose the OpenCL interface (yeah, that's a hit in the HPC world.....)

I'm going with Nvidia or Intel, thank you very much.....

Serious stuff (2, Informative)

Anonymous Coward | about a year ago | (#42945183)

And here I was, thinking that the TI-83 has pretty cool graphics.

Alternative usage with payback? (0)

Anonymous Coward | about a year ago | (#42945213)

Can I throw a bunch of these in my Bitcoin rig and pay back my initial fiat outlay with purestrain mined bitcoin? I don't see why that wouldn't work, and it could level the playing field against the ASICs that are coming online now and pushing older GPUs into the background.

Add-on CPU (2)

Anonymous Coward | about a year ago | (#42945249)

Wow. 3x as many transistors as a Core i7 3960X? I guess the days are finally here when you buy your graphics card and then figure out what kind of system to add on to it, rather than the other way around.

Re:Add-on CPU (2)

JTsyo (1338447) | about a year ago | (#42945475)

GPU to CPU is not a 1 to 1 comparison.

Re:Add-on CPU (0)

Anonymous Coward | about a year ago | (#42945933)

I know, but it is still pretty amazing. Also, if you're spending $1000 on a card, the rest of the system does seem a bit like an afterthought.

Re:Add-on CPU (1)

Novogrudok (2486718) | about a year ago | (#42945603)

> I guess the days are finally here when you buy your graphics card and then figure out what kind of system to add on to it

Of course! That has been the case for a long time already. If you want to install, for example, 2 NVidia GTX cards in SLI configuration, you need to think about having enough power, then measure your case, because some cards are longer than a miniATX motherboard and may stick out into the space occupied by your SSD RAID. The motherboard should have PCIe slots spread out wide enough for cards with a lot of heat sinking. Stick in as much RAM as you can. And only then do you add the latest Intel CPU; it does not matter which one, as long as it has a bigger number, like i7 (and definitely not i5!) ;).

Re:Add-on CPU (2)

fuzzyfuzzyfungus (1223518) | about a year ago | (#42945635)

I wonder what kind of yields Nvidia is getting... 3 times as many transistors as one of Intel's fancy parts, and on a slightly larger process (28 vs. 22nm); that's a serious slice of die right there.

On the plus side, I imagine that defects in many areas of the chip would only hit one of the identical stream processors, which can then just be lasered out and the part discounted slightly, rather than hitting something critical to the entire chip working. That probably helps.

Re:Add-on CPU (1)

jandrese (485) | about a year ago | (#42947067)

That is in fact exactly what they do. Usually everything but the highest end part will have at least one core disabled in hardware, that being the core that failed testing.

Re:Add-on CPU (1)

Luckyo (1726890) | about a year ago | (#42946273)

A GPU is about slamming more of the small cores into the package. Processing power scales in a linear fashion with the number of cores because of how parallelizable graphics calculations are.

CPUs cannot do this. They need powerful generalist cores and their support structures instead, so they can't just increase the number of cores and expect a linear increase in performance. So they don't grow as big as GPUs.
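
The scaling argument can be made concrete with Amdahl's law; the parallel fractions in this sketch are illustrative guesses, not measured workloads.

```python
# Amdahl's law: overall speedup from n cores when a fraction p of the work
# can be parallelized. The fractions below are illustrative only.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

print(f"{speedup(0.999, 2688):.0f}x")  # ~729x: an almost perfectly parallel,
                                       # graphics-style workload rewards many small cores
print(f"{speedup(0.60, 16):.1f}x")     # ~2.3x: a typical CPU workload barely
                                       # benefits from piling on more big cores
```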

Most Powerful GPU (2)

Westwood0720 (2688917) | about a year ago | (#42945395)

"will easily be the most powerful single-GPU powered graphics card available when it ships"

Yep, for the first week or two. I'll stick with my 670 that runs BF3 at max settings with 50+ FPS. A graphics card like the Titan is as useless as Anne Frank's drumset for the typical gamer.

Re:Most Powerful GPU (2)

l3v1 (787564) | about a year ago | (#42945529)

"for the typical gamer"

"targeting towards HPC (high performance computing) and the GPGPU markets"

Nuff said.

Re:Most Powerful GPU (0)

Anonymous Coward | about a year ago | (#42945641)

I would assume these to be for hardcore enthusiasts only, for example if you do multiscreen gaming.
Among, of course, other uses such as rendering machines.

Hardware is waaay ahead of software... (3, Insightful)

dtjohnson (102237) | about a year ago | (#42945439)

Software (other than games) that can actually benefit from this type of hardware is scarce and expensive. This $1000 card will probably be in the $5 bargain box at the local computer recycle shop before there is any significant software in widespread use that could put it to good use.

Re:Hardware is waaay ahead of software... (1)

Zocalo (252965) | about a year ago | (#42945595)

There's plenty of software in fairly widespread use already that can use this much power, although whether you class it as "significant" or not probably depends on your field. You do need to think beyond rendering pretty pictures on a screen at high framerates, at which it's obviously going to excel, though. I'm more curious how these cards will stack up for stuff like transcoding production quality video (I can flatten my current card with Sony Vegas), running the numerous @Home type distributed computing apps that support GPUs (lots of people running these), brute forcing encryption/passwords (computer crime/forensics) and other stuff of that nature.

What about artificial intelligence? (1)

elucido (870205) | about a year ago | (#42945857)

The software isn't coming because until recently it's been hard to program to take advantage of the hardware. When I can use Python to interact with this hardware, then the software will come from people like me.
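
For what it's worth, Python access to this class of hardware arguably already exists via PyCUDA. A minimal sketch, assuming the pycuda package and the CUDA toolkit are installed:

```python
import numpy as np
import pycuda.autoinit                  # creates a CUDA context on import
import pycuda.driver as drv
from pycuda.compiler import SourceModule

# A trivial elementwise kernel, compiled at runtime by nvcc.
mod = SourceModule("""
__global__ void scale(float *out, const float *in, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * factor;
}
""")
scale = mod.get_function("scale")

n = 1 << 20
a = np.random.rand(n).astype(np.float32)
out = np.empty_like(a)

# 256 threads per block, enough blocks to cover all n elements.
scale(drv.Out(out), drv.In(a), np.float32(2.0), np.int32(n),
      block=(256, 1, 1), grid=((n + 255) // 256, 1))

assert np.allclose(out, a * 2.0)
```

The kernel itself is still CUDA C, but all the host-side plumbing (allocation, copies, launches) is plain Python.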

Re:Hardware is waaay ahead of software... (1)

cozziewozzie (344246) | about a year ago | (#42945969)

Software (other than games) that can actually benefit from this type of hardware is scarce and expensive.

I write software that can actually benefit from this type of hardware.

Re:Hardware is waaay ahead of software... (1)

TomR teh Pirate (1554037) | about a year ago | (#42946001)

I'm not so sure. Even Folding@Home is highly parallelized in such a way that folks running on modern GPUs are getting way more points than systems relying solely on x86 platforms.

Re:Hardware is waaay ahead of software... (0)

Anonymous Coward | about a year ago | (#42946487)

But can it run Crysis?

Re:Hardware is waaay ahead of software... (1)

JBMcB (73720) | about a year ago | (#42946689)

Adobe Photoshop, After Effects and Premiere. Pretty much every modern video encoder and decoder. Pretty much every online distributed computing initiative (BOINC, SETI@home, Folding@home, Bitcoin).

Wolfram Mathematica. MATLAB/Simulink. ArcView. Maple. Pretty much all simulation/engineering/visualization software (Ansys, OrCad, NX, etc...).

Pretty much every 3D and compositing package in existence (3ds Max, Maya, Softimage, Mudbox, Flame, Smoke, Media Composer, VRay, DaVinci, BorisFX, Red, Nuke, Vegas, Lightwave, Cinema4D).

There are also various CUDA/OpenCL accelerated versions of random codecs - LAME, FLAC, FAAC, Opus, etc...

Refresh Rate Overclocking (0)

Anonymous Coward | about a year ago | (#42945463)

...so it includes support for modelines?

3d gaming. (1)

Anonymous Coward | about a year ago | (#42945487)

I have a 3d 120hz monitor, it would be nice to be actually able to play new release games at 120fps minimum with all the eye candy on, in at least 1920x1080.

Question for the HPC/maths crowd (2)

benjfowler (239527) | about a year ago | (#42945605)

I thought that most HPC users needed double-precision maths.

Why, then, would a card aimed at the HPC market have so many single-precision cores alongside the double-precision cores?

Re:Question for the HPC/maths crowd (1)

Dizzer (251533) | about a year ago | (#42945697)

Oftentimes, mixed-precision calculations are used for an optimum balance between performance and accuracy. The single-precision cores are very helpful for that.
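
A classic example of what the parent means is mixed-precision iterative refinement: do the bulk of the work in single precision, then polish the answer with double-precision residual corrections. The NumPy sketch below is CPU-only and purely illustrative of the idea.

```python
import numpy as np

def mixed_precision_solve(A, b, iters=3):
    """Solve Ax = b: do the solves in float32, refine the residual in float64."""
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                    # residual in double precision
        dx = np.linalg.solve(A32, r.astype(np.float32))  # correction, again in single
        x += dx.astype(np.float64)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 500))
b = rng.standard_normal(500)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(A @ x - b))   # residual close to what a full float64 solve gives
```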

Re:Question for the HPC/maths crowd (1)

mc6809e (214243) | about a year ago | (#42946491)

I thought that most HPC users needed double-precision maths.

Why, then, would a card aimed at the HPC market have so many single-precision cores alongside the double-precision cores?

I'm not sure it has separate DP cores alongside SP cores.

It's possible that the double-precision features of the card are implemented by taking the outputs of the single-precision circuits and building on them, so that there is no separate DP core -- just extra circuitry added to the SP cores.

Re:Question for the HPC/maths crowd (1)

mc6809e (214243) | about a year ago | (#42946541)

Note that the number of single-precision cores divided by the number of double-precision cores is exactly 3:

2688/896 = 3

It isn't too much of a stretch to assume that NVIDIA has figured out a way to use 3 SP cores to make a DP core.

Re:Question for the HPC/maths crowd (1)

shadowofwind (1209890) | about a year ago | (#42946699)

Most electromagnetics applications that I have experience with don't actually need double precision, but scientists tend to use double precision anyway because they don't want to hassle with making sure that numerical issues aren't hurting them with single precision. If you have the time, you can try to characterize how much precision you need, and write your application mostly in single precision, with double precision in any critical places that require it. Often the measurement errors in the data you're working on are much larger than the errors introduced with a judicious use of single precision. Most people don't want to mess with this, they want to work on the scientific problem, not the digital implementation problem. But if speed performance is really important, you can often get better performance with single precision without sacrificing accuracy.
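
The kind of numerical issue the parent alludes to is easy to demonstrate with a toy example: in single precision, a small addend can vanish entirely next to a large running value, while double precision keeps it.

```python
import numpy as np

# Classic single-precision absorption: a small addend vanishes next to a
# large running value, because it falls below the last representable bit.
big, small = np.float32(1.0e8), np.float32(1.0)
print((big + small) - big)    # 0.0 in float32

big, small = np.float64(1.0e8), np.float64(1.0)
print((big + small) - big)    # 1.0 in float64, with digits to spare
```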

I've got a GTX 680 M (1)

GodfatherofSoul (174979) | about a year ago | (#42945719)

And I'd say it's way overpowered. Right now, I can play BF3 and Eve simultaneously with no problems. I got it for future-proofing my gaming needs. Hardware has to be ahead, though. If it wasn't, gamers would be in a constant cycle of upgrading hardware. By getting the latest/greatest, I've found that I can go about 5 years before needing an upgrade to stay current.

Re:I've got a GTX 680 M (1)

Westwood0720 (2688917) | about a year ago | (#42945835)

Right now, I can play BF3 and Eve simultaneously w/ no problems.

I'm considering the same. Mine in monitor two, BF3 in monitor one. I just don't have the attention span to do both. That and I'm not sure if my 670 can handle it. =P

Re:I've got a GTX 680 M (1)

Luckyo (1726890) | about a year ago | (#42946301)

Try running at 2160p at 120FPS for 60FPS 3D on the highest settings. It's been tried. Result: you need 2x680 in SLI not to get severe FPS drops.

all this power nowhere near realism. (1)

Vince6791 (2639183) | about a year ago | (#42945763)

And this card will run your current games at 200+ fps at 1080p, while newer games will run at 40fps, which means you will then have to buy a newer card. What is the wattage usage? A 1000 watt PSU minimum requirement.

What we need is gameplay, which is missing in today's PC games; this is why I always find myself going back to emulators. These days PCs are consuming watts like air conditioners.

AMD's solution... (0)

Anonymous Coward | about a year ago | (#42945871)

add more cores

Are we still building custom PCs? (0)

Anonymous Coward | about a year ago | (#42946011)

How can we still claim to be building custom PCs when half the processing power is confined to a little sealed appliance? The way things are going PC gamers will just be putting custom faceplates on their AMD/NVidia consoles.

Re:Are we still building custom PCs? (1)

Luckyo (1726890) | about a year ago | (#42946337)

Start with the monitor. Running something like this needs a good monitor to show the results on. That likely means some sort of 4K 3D 120FPS monitor.

That also means you'll probably want a lot of RAM, a high-end i7 and an SSD to feed it, and obviously if you're about high fidelity you'll likely drop another 1k on a sound card and a 5.1 or even 7.1 system to output the signal to (though for gamers 7.1 kinda sucks; most games can only do 5.1).

It is NOT the limit of current technology (1)

tlhIngan (30335) | about a year ago | (#42946033)

7.1B transistors in 551mm^2? That's atrociously low transistor density.

Most of us probably use things that are 1/8th the size with 16B+ transistors on them. You probably know them as little 32GB+ memory cards.

The thing is - memory devices (all memory - flash (NOR/NAND), RAM (SRAM/DRAM) etc) are the most transistor-dense things around - their sheer density makes it so that they're limited by how much silicon area they can use - if you double the silicon area, you double the storage. Moore's law helps here because you can stick twice as many transistors on for twice the storage, but the same silicon area.

Even silicon area isn't that impressive - a good dSLR will have a camera sensor with a large silicon area. Hell, there are FPGAs with just as big silicon dies as well.

In fact, in that chip, the vast majority of those 7.1B transistors will probably occupy less than 25% of the entire area - being used for on-chip caches and temporary buffers and memory, which are extremely dense structures. The rest of the area is taken up by very few transistors. Instead, what takes up the silicon area is... wiring. Even with 10-layer metal, there are miles of wiring inside the chip - typically around the logic parts. The reason for this is that logic is often called "random" because there isn't generally a regular rhythm to the blocks (sure, each processor is regular, but within each one it isn't).

Finally - you can fab a chip up to the size of the wafer you use - it'll just be uneconomic, because every wafer is flawed, so the yield you get goes down exponentially as the size ramps up. Those FPGAs I mentioned? Easily $30K a pop. Yes, $30,000 for ONE CHIP. Because yields are horrendously low due to how big they are (if you're lucky, you'll get one good one per wafer, which has to pay for all the bad ones).

Yield is a huge issue - the bigger the chip, the greater the chance of it encountering an imperfect part of the wafer, which cuts yield down. In addition, a larger die means you can fit fewer on a wafer, so you have fewer chips to pay for the fixed costs of a wafer and processing.

551mm^2 is probably the limit for economic production of this chip, but it's hardly stressing the technology.
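
The "yield goes down exponentially" behaviour is usually expressed with a simple Poisson defect model; the defect density used below is a made-up illustrative number, not foundry data.

```python
import math

# Simple Poisson yield model: the fraction of defect-free dice falls off
# exponentially with die area. The 0.25 defects/cm^2 figure is illustrative only.
def poisson_yield(die_area_mm2, defects_per_cm2=0.25):
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

for area in (160, 294, 551):   # roughly Ivy Bridge, GK104, and GK110 die sizes
    print(f"{area} mm^2 -> {poisson_yield(area):.0%} defect-free")
```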

Re:It is NOT the limit of current technology (1)

slew (2918) | about a year ago | (#42947265)

Even silicon area isn't that impressive - a good dSLR will have a camera sensor with a large silicon area. Hell, there are FPGAs with just as big silicon dies as well.

Not all technologies are comparable...

Canon's APS-H sensor is 550mm^2; it has ~120Mpixels, which at about 4 transistors/pixel isn't very dense (mostly photodiode area).

State of the art DRAM is 4Gbit, which at about 2 transistors/bit is only 8G transistors.
Samsung's leading-edge NAND flash chip is the TLC (triple level cell, or 3 bits/transistor) 17.2B transistor chip (about 48 Gibits; generally, high-density flash drives are built out of several lower-density chips and a flash drive controller chip).

Intel's Ivy bridge processor has 1.4B transistors on a 160mm^2 die and their 10-core Xeon has 2.6B transistors.
Xilinx Virtex-7 2000T FPGA has 6.8B transistors (but it uses stacked silicon interconnect 2.5D technology)

It may not be at the limit of wafer technology, but it's right up there with the Xilinx Virtex-7 2000T...

However, you may not be aware that random defect density often isn't the only limiter to die size. There are also limits to the reticle size and the stepper. Often you have run-out-limited, alignment-limited or stepper-limited sizes when stamping out dies on a wafer. For TSMC, that limit is generally about 600mm^2 (Intel apparently can do 700mm^2), so 551mm^2 is pretty close to maxing this out. Thus, if you could pattern a wafer-scale device, you would eventually reach the random-defect-density-limited yield limit; however, long before this, the current stepper photo-lithographic alignment technology would have been greatly exceeded.

4k Resolution Gaming? (1)

Anonymous Coward | about a year ago | (#42946045)

Probably would not be able to run my $100,000 Triple display 85" 4k resolution computer screens so that I can play angry birds at 60fps.
Does it mention you can SLI these cards? Seriously I doubt one would be enough for my gaming needs.

Re:4k Resolution Gaming? (1)

Luckyo (1726890) | about a year ago | (#42946355)

These are about running games like BF3 at highest quality at those resolutions in 3d (120FPS). Current gen high end chokes unless in SLI when doing this.

No this is not a Troll comment (0)

Anonymous Coward | about a year ago | (#42946075)

But what about for mining bitcoins?

Yes, I know that it's dubious, but it might make mining bitcoins temporarily profitable again.

And this isn't meant as a troll comment, I'm just looking for opinion from the bitcoin community.

Mission Accomplished (0)

Anonymous Coward | about a year ago | (#42946227)

Finally, I can have my dream of more transistors on my graphics card than there are people on this planet. And since we used the advancing graphics card vs losing people way, no one had to die!

Hardly "close", certainly "big". (2)

chrysrobyn (106763) | about a year ago | (#42946389)

With 7.1 billion transistors and a 551 mm^2 die size, GK110 is very close to the reticle limit for current lithography technology!

I believe there are two modern lithography lens manufacturers, one at 32mm x 25mm and the other at 31mm x 26mm, although I'm having trouble seeing publicly available information to confirm that. Either way, 800mm2 is the approximate upper bound of a die size, minus a bit for kerf, which can be very small. Power7 [wikipedia.org] was a bit bigger. Tukwila [wikipedia.org] was nearly 700mm2. Usually chips come in way under this limit and get tiled across the biggest reticle they can. A 6mm x 10mm chip might get tiled 3 across and 4 up, for example.
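
To make the tiling arithmetic concrete (using the 32mm x 25mm field size assumed above, and ignoring kerf):

```python
# How many copies of a die fit in one exposure field (reticle), ignoring kerf.
def dies_per_field(die_w_mm, die_h_mm, field_w_mm=32.0, field_h_mm=25.0):
    return int(field_w_mm // die_w_mm) * int(field_h_mm // die_h_mm)

print(dies_per_field(10, 6))    # 3 across x 4 up = 12 copies of a 6mm x 10mm chip
print(dies_per_field(25, 22))   # a ~550 mm^2 die: only one fits per exposure
```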

CUDA (1)

updatelee (244571) | about a year ago | (#42947315)

Hopefully their CUDA performance isn't crippled like it was on the 6xx series. The 580 has significantly better CUDA performance than the 680.

UDL
