
NVIDIA Unveils Dual-GPU Powered GeForce GTX 690

samzenpus posted about 2 years ago | from the check-it-out dept.

Graphics

MojoKid writes "Today at the GeForce LAN taking place in Shanghai, NVIDIA's CEO Jen-Hsun Huang unveiled the company's upcoming dual-GPU powered flagship graphics card, the GeForce GTX 690. The GeForce GTX 690 will feature a pair of fully functional GK104 "Kepler" GPUs. If you recall, the GK104 is the chip powering the GeForce GTX 680, which debuted just last month. On the upcoming GeForce GTX 690, each of the GK104 GPUs will be paired with its own 2GB of memory (4GB total) via a 256-bit interface, resulting in what is essentially GeForce GTX 680 SLI on a single card. The GPUs on the GTX 690 will be linked to each other via a PCI Express 3.0 switch from PLX, with a full 16 lanes of electrical connectivity between each GPU and the PEG slot. Previous dual-GPU powered cards from NVIDIA relied on the company's own NF200 bridge, but that chip lacks support for PCI Express 3.0, so NVIDIA opted for a third-party solution this time around."

93 comments

Wake me up when GK110 hits. (0)

Anonymous Coward | about 2 years ago | (#39839779)

Zzzzzzz..

Re:Wake me up when GK110 hits. (-1)

Anonymous Coward | about 2 years ago | (#39840093)

And some people think they announced Half-Life 3

Re:Wake me up when GK110 hits. (-1)

Anonymous Coward | about 2 years ago | (#39840113)

Indeed.

I'm sure that this thing will perform admirably until a design flaw causes it to burn out. The card will continue to ship even after said flaw is discovered, and they will play dumb and shirk responsibility when the public learns about it.

Re:Wake me up when GK110 hits. (2)

poly_pusher (1004145) | about 2 years ago | (#39840331)

Not likely; this is the third generation since the original Fermi, and it has the lowest power draw of the three. I have a 480 and a 580 that are both still performing spectacularly.

Re:Wake me up when GK110 hits. (4, Interesting)

poly_pusher (1004145) | about 2 years ago | (#39840289)

That's what I'm waiting for as well. Nvidia got pretty lucky with GK104. Most speculation is that it was intended to be the GTX 660 and GK110 was supposed to be the 680. However, GK104 turned out faster than AMD's fastest offering, so why not sell it as the 680? The specs for GK110, "Big Kepler," are pretty intimidating and worth waiting for. I was also dissatisfied with the 2GB of memory on GK104; there are 4GB cards coming out, but they're around 800 bucks. GK110 will come with 4GB standard.

I do have to hand it to Nvidia. The power requirements for the current 680 are very low and performance is quite impressive, but GK110 is going to be a monster...

Re:Wake me up when GK110 hits. (0)

Anonymous Coward | about 2 years ago | (#39840623)

And the power requirements for the 690 are even more impressive.

Re:Wake me up when GK110 hits. (1)

drinkypoo (153816) | about 2 years ago | (#39844209)

Amen. I have a GT240 specifically because it's low-power. I'm not installing any graphics card so power-hungry it needs its own magical power connector. I did once and it was a mistake.

Re:Wake me up when GK110 hits. (1)

poly_pusher (1004145) | about 2 years ago | (#39852017)

Well... I actually want the power-slurping beast! 300 watts (400 with a hearty overclock) is fine with me for the biggest, baddest card out there, and that's what I'm expecting from GK110. I guess what I was trying to get at is that if the current 680 is any indication of performance per watt, then GK110 is gonna be a whole lotta whoa...

From the TFA: the top right connector is different (2)

TheCouchPotatoFamine (628797) | about 2 years ago | (#39839799)

The top right connector is different; any idea why? I have cables that look like that too, and in a moment of lazy weakness (and with no comments here yet) I'd love it if someone cleared that up for us.

Re:From the TFA: the top right connector is differ (0)

Anonymous Coward | about 2 years ago | (#39839839)

It's a single-link DVI connection, versus the lower row, which are dual-link DVI connections.

Re:From the TFA: the top right connector is differ (4, Informative)

Anonymous Coward | about 2 years ago | (#39840051)

They're all dual-link (at least the connectors are - that doesn't guarantee the hardware behind them is). Single-link connectors have two blocks of nine pins on each side, and the middle block of nine pins is only on dual-link connectors. The top connector is dual-link DVI-D, while the others are dual-link DVI-I. A DVI-D port will not support a VGA adapter.

Sure... (5, Funny)

froggymana (1896008) | about 2 years ago | (#39839817)

But can it mine bitcoins?

Re:Sure... (2, Interesting)

cnettel (836611) | about 2 years ago | (#39839885)

Can it mine bitcoins while running Crysis at 240 FPS and 4K resolution?

Re:Sure... (3, Funny)

Anonymous Coward | about 2 years ago | (#39840013)

Yeah, but can it do all that....on weed???

yes, it can (1)

Anonymous Coward | about 2 years ago | (#39840129)

Mine bit coins, run Crysis, and ignite your weed.

But can it feel love?

Re:Sure... (0)

Anonymous Coward | about 2 years ago | (#39846791)

Yes, but not all that well; the GTX 680 is not a monster for CUDA or OpenCL, and it's actually worse than a GTX 580. Two 680s will be better than one, but ATI is a better choice for computing (as opposed to graphics) for the moment. Presumably there will be a later Kepler device that is tuned for computing; I don't see NVidia abandoning the supercomputing market.
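If you want a feel for what you're getting compute-wise, a device query is the usual first step. A minimal sketch, assuming a CUDA toolkit is installed; note it only reports SM count, clock and compute capability, not the FP32:FP64 ratio that is the real reason GK104 loses to GF110 here:

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp p;
            cudaGetDeviceProperties(&p, i);
            // multiProcessorCount and clockRate give a crude feel for raw throughput;
            // they say nothing about the FP64 rate that matters for a lot of compute work.
            printf("%s: %d SMs @ %.0f MHz, compute capability %d.%d\n",
                   p.name, p.multiProcessorCount, p.clockRate / 1000.0, p.major, p.minor);
        }
        return 0;
    }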

Great (1, Informative)

Xenkar (580240) | about 2 years ago | (#39839931)

It is pretty much impossible right now to get a GTX 680 unless one wants to get gouged due to the short supply.

When will nVidia get enough chips out that my searches stop coming up out of stock?

Re:Great (1)

Anonymous Coward | about 2 years ago | (#39839963)

Use auto-notify on Newegg, and if you miss it once or twice, complain; you'll get the next one. That's how I got an EVGA GeForce GTX 590 Classified last year. No need to upgrade for quite a while. It's still kickass.

Re:Great (0)

Anonymous Coward | about 2 years ago | (#39840009)

Maybe in America, but in Europe there are plenty.
Probably because only western Europeans can afford it nowadays ;D

When the demand is filled (2)

Sycraft-fu (314770) | about 2 years ago | (#39842389)

It isn't like they are doing this on purpose. The 680 is just a card that a lot of people want. The thing is, they can only have them produced so fast. TSMC is their sole supplier, and they only have one 28nm production line up and running. That line is still having some troubles (TSMC has been a bit overambitious with its half-node plans and has had trouble getting them started), so total yields aren't what they might like.

Then the real problem is just that everyone wants a piece. TSMC has a lot of customers who want 28nm chips, so a single customer can only get so many wafers per day. They aren't going to snub another company to try and fill nVidia's demand; they have to think long term, and that means keeping everyone happy.

However, demand for these things isn't infinite. It isn't like people are buying them, tossing them in a hole, and then buying more. As more people get their fill, it'll stabilize. How long that will take, I can't say.

This new card isn't likely to affect things much, because there won't be many of them made or sold. It is going to cost $900-1000. There aren't many people who will spend that kind of scratch on a gaming video card. It'll be a low production run.

Oh man! (4, Funny)

multiben (1916126) | about 2 years ago | (#39839979)

Minesweeper is going to look great on this thing!

Re:Oh man! (1)

Delarth799 (1839672) | about 2 years ago | (#39840001)

To hell with Minesweeper, just imagine how badass Solitaire will be!

Re:Oh man! (0)

Anonymous Coward | about 2 years ago | (#39840041)

especially if you win

Re:Oh man! (0)

Anonymous Coward | about 2 years ago | (#39840649)

It was the most beautiful thing I have ever seen. Words cannot describe it, let alone the emotions that it evoked. My life is now complete.

Obama ate a dog. (-1)

Anonymous Coward | about 2 years ago | (#39839993)

Obama ate a dog.

Re:Obama ate a dog. (-1)

Anonymous Coward | about 2 years ago | (#39840059)

It's not as greasy as you might think.

Re:Obama ate a dog. (-1, Offtopic)

turkeyfish (950384) | about 2 years ago | (#39840089)

and the GOP nominated one, which pretty much tells you how this election will go.

Re:Obama ate a dog. (0)

Anonymous Coward | about 2 years ago | (#39841903)

And? Lots of people eat dog. It's no different than eating a rabbit, a cow, a fish, a bird or any other animal.

I remember how this ends... (-1)

Moryath (553296) | about 2 years ago | (#39840055)

The last company to get all "multiple cores" and "SLI on a board" happy was 3dfx. Who NVidia bought out when they... oh yeah, crashed and burned.

Whoops.

Re:I remember how this ends... (3, Insightful)

Anonymous Coward | about 2 years ago | (#39840103)

Except Nvidia has had SLI-based multi-GPU boards since at least the 8000 series, whereas 3dfx hit the limits of their Voodoo architecture and required external wall power by the time the Voodoo5 came out. For all the extra hassle, you had a card that performed about as well as a GeForce 256 but also took a spot on your power strip. That's why 3dfx died, not because of SLI boards.

Re:I remember how this ends... (0)

Anonymous Coward | about 2 years ago | (#39840595)

If you ask me, wall outlets were a very good idea. GPUs are the number one reason we have to upgrade our power supplies. And the necessity for power requirements to be correct means that bringing your own power supply can be the source of a plethora of bugs and crashes. Consistent power and precise currents for a power-hungry 3.5-billion-transistor chip are a necessity. Pairing the power supply with the board means resolving a very real problem most end customers don't know exists.

On a side note, Anonymous Coward postings deserve scoreability (without which most people never see these posts, however valid they might be).

Re:I remember how this ends... (1)

jaymemaurice (2024752) | about 2 years ago | (#39842471)

If you ask me, wall outlets were a very good idea. GPUs are the number one reason we have to upgrade our power supplies. And the necessity for power requirements to be correct means that bringing your own power supply can be the source of a plethora of bugs and crashes. Consistent power and precise currents for a power-hungry 3.5-billion-transistor chip are a necessity. Pairing the power supply with the board means resolving a very real problem most end customers don't know exists.

I disagree and think it's quite valid to need a new internal power supply when the hardware consumes more power... a separate power supply means another point of failure and a pain in the ass for nothing, really. I wouldn't, however, buy a new power supply just because mine doesn't have enough leads or the right type of leads (ones that do the same thing, just with a different plug).

On a side note, Anonymous Coward postings deserve scoreability (without which most people never see these posts, however valid they might be).

Welcome to Slashdot. Get an account; it's free.

Re:I remember how this ends... (0)

Anonymous Coward | about 2 years ago | (#39842723)

Why would I want to lose my privacy? If I were that stupid I could have a 4-digit ID, considering when I started reading Slashdot.

Re:I remember how this ends... (1)

Khyber (864651) | about 2 years ago | (#39842795)

You must be stupid, as ACs can get rated/modded up.

4-digit UID my ass. Maybe 7.

Re:I remember how this ends... (2, Insightful)

O('_')O_Bush (1162487) | about 2 years ago | (#39840187)

This is a slashvertisement; nothing revolutionary is being reported here. They've been making dual-GPU cards like this since at least the GX2 steps in the line, and maybe before.

Re:I remember how this ends... (0)

Anonymous Coward | about 2 years ago | (#39841173)

The drop tanks on WW2 fighters weren't revolutionary, but they were nonetheless very, very useful to B-17 pilots, since fighter escorts could stick with them all the way to the target.

Fuck revolution. Give me evolution 7 days a week.

Re:I remember how this ends... (1)

fuzzyfuzzyfungus (1223518) | about 2 years ago | (#39840645)

The last company to get all "multiple core happy" and "SLI On A Board" happy was 3dfx. Who NVidia bought out when they... oh yeah, crash and burned.

Whoops.

I'm pretty sure that both ATI and Nvidia (or one of their OEM partners at the time) have kicked out a 'logically speaking, this is two cards in SLI/Crossfire; but on one card!!!' product for basically every generation since the introduction of the respective GPU-linking features.

The hard part is the fancy tricks that make cooperation between two separate GPUs work at all. Once the vendors decided that they did, in fact, want that to work, the rest is constrained largely by the fact that people willing to pay $1k for a graphics card aren't all that common (especially now that motherboards with 4 PCIe 16x slots are quite reasonably available, so even one's excessive desires can be satisfied with cheaper, more common, single-chip cards).

Re:I remember how this ends... (0)

Anonymous Coward | about 2 years ago | (#39841185)

If $500 extra for the top of the line card makes my million dollar trainer work better, I'll buy it all day long.

Re:I remember how this ends... (1)

symbolset (646467) | about 2 years ago | (#39842119)

People in HPC buy these things in 10,000 lots. Now that you can put 4 of them in one server, that's going to happen more and more. It's not all about the videogames any more.

Re:I remember how this ends... (1)

wisty (1335733) | about 2 years ago | (#39842711)

Tianhe-1A has 7,168 Teslas and is the fastest supercomputer using GPUs. Titan (formerly Jaguar) will have 18,000 GPUs. Amazon probably has quite a few.

The very top HPC projects may buy 10,000 lots, but most don't.

Re:I remember how this ends... (1)

fuzzyfuzzyfungus (1223518) | about 2 years ago | (#39843823)

At least for the present generation, I'm pretty sure that all the Tesla boards are one GPU per card. Nvidia supports, and sellers offer, arrangements with a fair number of cards per node, including external enclosures for expanding systems that can't accommodate all those cards internally (4 cards in a 1U connected by a special PCIe cable seems to have replaced the previous toaster-shaped 'deskside' chassis, with 'deskside' now handled by motherboards with loads of PCIe x16 slots); but the two-GPUs/one-card arrangements only seem to crop up on the gaming side.

I don't know exactly why this is: whether routing traces for 6GB of RAM per GPU is bad enough with only one GPU on the board, whether it's a thermal thing, or whether customer demand is such that it's cheaper to produce only single-GPU cards (plus multi-slot chassis for the heavy users) rather than multiple flavors of card with various chip populations...

CUDA Double Precision? (1)

turkeyfish (950384) | about 2 years ago | (#39840083)

Does anyone know if this new card will be capable of taking advantage of double precision under CUDA as is the case with some of their other high end Tesla boards?

Re:CUDA Double Precision? (1)

Anonymous Coward | about 2 years ago | (#39840171)

Yes.

http://en.wikipedia.org/wiki/CUDA

Re:CUDA Double Precision? (4, Interesting)

cnettel (836611) | about 2 years ago | (#39840419)

They are. However, their relative FP64 performance has dropped compared to the previous generation. If I remember correctly, there is now separate silicon to do FP64, rather than just a modified path in the FP32 cores. In the previous architecture we were down to 1/12 of FP32 performance: on some of the Fermi chips, only a third of the cores could do FP64, and only at half speed. In the new chip the FP64 cores can do full-speed calculations, but there are only 8 of them versus 192 conventional cores, giving a 1/24 performance ratio.

However, Ryan Smith at Anandtech [anandtech.com] speculated that the existence of dedicated FP64 cores means that a future Fermi based on Kepler will be a mean beast, if they do a tape-out with exclusively FP64 cores. The only thing holding back double-precision then will be memory bandwidth (which would be a large enough deterrent in many cases).

Re:CUDA Double Precision? (1)

cnettel (836611) | about 2 years ago | (#39840553)

Uh, replying to myself. I of course meant that a future TESLA based on Kepler would be a beast, not Fermi.

Re:CUDA Double Precision? (1)

The Master Control P (655590) | about 2 years ago | (#39841581)

Every card that supports compute capability 1.3 (-arch=sm_13 to nvcc) or later supports IEEE 754 double precision, i.e. every card made in at least the last 3 years, on a brief check of the wiki table. Your FLOPS may vary, though; 2.0 is vastly better than 1.3 in this regard.
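For anyone who wants to see what that means in practice, here's a minimal double-precision sketch (illustrative names, not from the post). Build it with a DP-capable target, e.g. nvcc -arch=sm_13 axpy.cu (sm_30 for Kepler); toolchains targeting pre-1.3 hardware warn and demote double to float:

    #include <cstdio>
    #include <cuda_runtime.h>

    // y = a*x + y in double precision; this exercises the FP64 units discussed above
    __global__ void axpy(int n, double a, const double *x, double *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        double *x, *y;
        cudaMalloc((void **)&x, n * sizeof(double));
        cudaMalloc((void **)&y, n * sizeof(double));
        cudaMemset(x, 0, n * sizeof(double));   // contents don't matter here;
        cudaMemset(y, 0, n * sizeof(double));   // we only care that the DP path runs
        axpy<<<(n + 255) / 256, 256>>>(n, 2.0, x, y);
        cudaDeviceSynchronize();
        printf("double-precision kernel ran\n");
        return 0;
    }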

How long before video processors are external? (0)

Anonymous Coward | about 2 years ago | (#39840411)

I'd really appreciate having this as an external device even if it cost $100 more.

I read about external PCI Express years ago; is it still happening?

Re:How long before video processors are external? (0)

Anonymous Coward | about 2 years ago | (#39841799)

Not happening any time soon in consumer-class gear. Intel basically has no intention of creating a high-enough-bandwidth external interface on consumer-level gear, because if such an interface were GPU-capable it would also be compute-capable and would threaten their CPU-plus-integrated-GPU hegemony over the discrete GPU. About the best you're going to see for now is Thunderbolt (aka Light Peak, basically external PCIe), which is 20 G*bits*/sec and isn't even near PCIe gen 3 x16...
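For what it's worth, the rough numbers behind that comparison (plain host arithmetic, so easy to check):

    #include <cstdio>

    int main() {
        // PCIe 3.0: 8 GT/s per lane with 128b/130b encoding, per direction
        double pcie3_lane = 8.0 * (128.0 / 130.0) / 8.0;   // ~0.985 GB/s per lane
        double pcie3_x16  = 16.0 * pcie3_lane;             // ~15.8 GB/s
        // first-generation Thunderbolt: 20 Gbit/s total
        double thunderbolt = 20.0 / 8.0;                   // 2.5 GB/s
        printf("PCIe 3.0 x16 ~%.1f GB/s vs Thunderbolt ~%.1f GB/s (~%.0fx apart)\n",
               pcie3_x16, thunderbolt, pcie3_x16 / thunderbolt);
        return 0;
    }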

Re:How long before video processors are external? (1)

symbolset (646467) | about 2 years ago | (#39842197)

The run length on PCIe 3.0 is quite limited (about a foot, I believe, though it's not given). There is an ePCIe spec, and there are external devices that will do external PCIe 2.0 and are used for external card chassis or to host SSD storage, with run lengths up to two meters. While it's theoretically possible to do a laptop dock with one of these inside it, I don't see that happening anytime soon because there's not enough market for it. As the frequencies increase, the distance a workable signal can propagate is reduced (very roughly).

For a while there was some talk about there not ever being a PCIe 4.0 spec because the run length would be down to mere centimeters - not enough even to get out to add-in cards. I see now that they've found a way - or at least think they have.

As a game developer... (2, Interesting)

Anonymous Coward | about 2 years ago | (#39840547)

As a game developer, I can tell you that the only thing that significantly affects frame rate in a GPU-bound game is GFLOPS. And as the owner of a 3-year-old PC with a stock power supply, I'm most interested in the "x40" cards, because those are the highest cards you can install in a machine with a stock 350W power supply.

According to what I see on Wikipedia [wikipedia.org], NVIDIA apparently pulled a fast one this generation and re-branded some 500 series cards as the PCIe 2.0 x16 versions, while all the cards with impressive performance are PCIe 3.0 x16. The impressive ones get ~2x higher GFLOPs/W.

Old PCs like mine can't use PCIe 3.0, so this means the GF116-based GT 640 that gives 415 GFLOPS at 75W is still the fastest card that you're likely to find in the 2-3-year-old PC of a game enthusiast who updates his card every generation. Compare that to the GT215-based GT 240 from 2009, which gets 386 GFLOPS at 69W, and you can see that there is ZERO reason to upgrade this generation, unless you also plan on upgrading your motherboard.

So yes, you can get a GK107-based GT 640 with 730 GFLOPs at 75W, but you have to upgrade your machine and get a PCIe 3.0 x16 motherboard. Boo, NVIDIA. BOO.
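For the curious, a quick check of the GFLOPS-per-watt figures quoted above (the numbers are the ones from this post, nothing new):

    #include <cstdio>

    int main() {
        struct Card { const char *name; double gflops, watts; };
        const Card cards[] = {
            { "GT 240 (GT215, 2009)", 386.0, 69.0 },
            { "GT 640 (GF116)",       415.0, 75.0 },
            { "GT 640 (GK107)",       730.0, 75.0 },
        };
        for (unsigned i = 0; i < sizeof(cards) / sizeof(cards[0]); ++i)
            printf("%-20s %.0f GFLOPS / %.0f W = %.1f GFLOPS/W\n",
                   cards[i].name, cards[i].gflops, cards[i].watts,
                   cards[i].gflops / cards[i].watts);
        return 0;
    }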

Re:As a game developer... (1)

fuzzyfuzzyfungus (1223518) | about 2 years ago | (#39840665)

PCIe 3.0 is, at least in magical specifications-actually-working-as-planned land, backwards compatible with 2.0 and 1.0, though obviously only at the highest mutually available speed between the two devices.

Is this optimistic theory a horrible pack of lies in general, are Nvidia products specifically broken in this respect, or do the newer ones make assumptions about bus speed that cause them to underperform on PCIe 2.0 boards?

Re:As a game developer... (0)

Anonymous Coward | about 2 years ago | (#39840897)

If it actually works, I wonder why they even bother selling the GF116-based GT 640, since it only gets 57% as many GFLOPs as the GK107-based version (at the same 75W).

I really hope you're right. I'd love to upgrade to a GK107-based GT 640 without having to ditch my Q6600 for a same-speed-3-years-later i5. :)

Re:As a game developer... (1)

symbolset (646467) | about 2 years ago | (#39842211)

Are you sure about that? I thought PCIe cards were only backward compatible one generation of the spec (3.0 to 2.0 for ex).

Re:As a game developer... (2)

fuzzyfuzzyfungus (1223518) | about 2 years ago | (#39843777)

According to the PCI-SIG [pcisig.com] (or at least their press flacks; actual standards are members-only), revision 3 is compatible with both 2 and 1, and 2 is compatible with 1 (excepting one minor hiccup where 2.1 increased the allowable PCIe x16 slot power draw compared to 2.0 and earlier, so there are 2.1 cards in the wild that are logically compatible with 2.0 and 1.0 but which will only function with auxiliary power).

Internet anecdote suggests that this glorious vision may or may not actually be 100% realized ("your BIOS isn't our problem, go cry to your vendor", etc.), with graphics cards being the most common offenders (probably both because they are the most common user-installed PCIe peripheral, and because most motherboards won't POST properly if they can't find a working video device, while all but the really dysfunctional ones can ignore missing or confused NICs and such)...

I don't have enough personal experience to give any odds, but the SIG says yes and some people say 'not in my case'....

Re:As a game developer... (0)

Anonymous Coward | about 2 years ago | (#39840699)

Well I did some looking and discovered that I actually have a 1.x motherboard and a 2.0 card. Wikipedia [wikipedia.org] says "PCIe 2.0 cards are also generally backward compatible with PCIe 1.x motherboards, using the available bandwidth of PCI Express 1.1. Overall, graphic cards or motherboards designed for v2.0 will work with the other being v1.1 or v1.0a."

The section on PCIe 2.1 says you may need a BIOS update to use 2.1 cards in a 1.x slot, and the section on PCIe 3.0 does not even mention backwards compatibility. I haven't been able to find anything on the web saying whether you can use 3.0 cards in 1.x motherboards. Does anyone here know?

Re:As a game developer... (0)

Anonymous Coward | about 2 years ago | (#39841403)

As a game developer supposedly developing 3d games, why don't you have a better machine?

You sound like you are letting yourself be limited to some arbitrary figure due to stubbornness rather than anything else. Buy yourself a decent computer, with a decent power supply, and a decent graphics card!

And putting a PCI-e 3.0 device into a computer that only supports PCI-e 2.x does not hurt performance with the current generation of cards. Otherwise, the reviewers would have noticed (since many reviewers are using Sandy Bridge 1155-based systems, which don't have PCI-e 3.0).

I call BS. (1)

Jennifer3000 (921441) | about 2 years ago | (#39841563)

As an alleged "game developer", you'd think that you could afford more than an "old" "3-year old PC". Sure - you might keep a few older machines around as backups, or for testing on 32-bit OSes, legacy peripheral support, etc., but you're not "developing" on a modern PC? That's ludicrous. You are lying.

Re: raising your call (0)

Anonymous Coward | about 2 years ago | (#39842035)

Golden rule: if you don't want your game to run like molasses for most users, then you have to develop on the machine you think your average user has.

p.s. I also test on an older laptop with integrated video. The laptop represents the game's minimum requirements.

Re: raising your call (1)

symbolset (646467) | about 2 years ago | (#39842225)

Then this card is so advanced you don't need to be testing on it at all. If you want to target high-end gamers then get yourself a rig with PCIe 3.0 slots. Or at least one to develop on.

Can someone explain... (1)

msobkow (48369) | about 2 years ago | (#39840851)

Can someone explain to me why general-purpose CPU-memory interfaces don't have this kind of bandwidth to keep the newer 6- and 8-core monsters well fed with data and code to crunch?

Re:Can someone explain... (1)

Darinbob (1142669) | about 2 years ago | (#39841621)

Because gamers pay big bucks for a couple more FPS. Office workers won't get one tiny bit of speed out of a faster CPU. Scientists have real computers to use instead of PCs.

Re:Can someone explain... (1)

Osgeld (1900440) | about 2 years ago | (#39842125)

"Scientists have real computers to use instead of PCs."

Really? Then what do they use?

OK, sure, I'm sure somewhere somebody has a Cray... which is powered by a fuckload of x64 CPUs, but really, I bet almost all of them are using Dell laptops with i7s and Nvidia Quadros.

Re:Can someone explain... (1)

msobkow (48369) | about 2 years ago | (#39844597)

That supercomputer/cluster market is precisely why I would have thought there would be a market for super-bandwidth CPUs. Such systems tend to use the highest of the high-end processors already, along with custom memory interfaces and backbones to speed up the communications within the cluster.

Some posters seem to have assumed I was talking about PCs. I specifically said CPU because I wasn't concerned about maintaining compatibility with desktop architectures, but the really big data crunching engines that live in data centers and labs.

Re:Can someone explain... (0)

Anonymous Coward | about 2 years ago | (#39841653)

The memory on a graphics card is soldered directly to the circuit board. If you want sockets and flexibility in which components it accepts, you have to accept a speed penalty.

Re:Can someone explain... (1)

The Master Control P (655590) | about 2 years ago | (#39841687)

There are a lot of caveats to achieving the 150+ GB/s theoretically available on a modern GPU, chief among them that all your memory read/write operations must occur in groups of 64 or 128 bytes (you can access 1 byte, but the smallest physical IO transaction is 32B, with 64/128 preferred).

Plus, your GPU doesn't have to deal with some random manufacturer's memory chips hiding behind plug interfaces. If I take 1/3 of the RAM out of one of my boxes (the farthest of 3 slots), memory timing magically tightens up and bandwidth goes from 8.5 to 10 GB/s.
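To make the 64/128-byte point concrete, a small illustrative sketch (made-up kernel names, assuming a CUDA toolkit). In the first kernel a warp's 32 consecutive 4-byte reads collapse into a few 128-byte transactions; in the second the reads are 128 bytes apart, so every warp read splinters into separate transactions:

    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void copy_coalesced(const float *in, float *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = in[i];                     // neighbouring threads read neighbouring words
    }

    __global__ void copy_strided(const float *in, float *out, int n, int stride) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = in[(i * stride) % n];      // addresses within a warp are 'stride' words apart
    }

    int main() {
        const int n = 1 << 24;
        float *in, *out;
        cudaMalloc((void **)&in,  n * sizeof(float));
        cudaMalloc((void **)&out, n * sizeof(float));
        dim3 block(256), grid((n + 255) / 256);
        copy_coalesced<<<grid, block>>>(in, out, n);
        copy_strided<<<grid, block>>>(in, out, n, 32);  // 32 floats = 128 bytes between neighbours
        cudaDeviceSynchronize();
        printf("time the two kernels with cudaEvent or a profiler to see the gap\n");
        return 0;
    }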

Re:Can someone explain... (1)

symbolset (646467) | about 2 years ago | (#39842239)

It's not magic. The lead length to reach those far DIMMs is actually a prominent part of why overall memory accesses slow down.

Re:Can someone explain... (1)

slew (2918) | about 2 years ago | (#39841913)

Three things:

1. Data width: CPUs use one channel of 64-bit-wide DIMMs (sometimes two if you're lucky), while you can find high-end GPUs with 12 to 16 32-bit channels to DRAM chips (a rough calculation follows below). It's hard to find that many spare pins on a CPU package.

2. DIMMs: People who buy CPUs want to plug in memory modules, and the physics of connectors and their electrical limitations limit performance. For example, DDR3 DIMMs need read/write "leveling" (per-bit-lane compensation for clock time-of-flight across the DIMM), while GPUs tend to use soldered-in DRAMs and can control clock skew at the board level and achieve better electricals.

3. Specialized DRAM issues: DRAMs made for GPUs are designed by companies (like Samsung) to be bleeding edge in performance and price (sometimes 3x the price for 25% more clock speed). These specialized DRAMs also tend to have a wider native data interface than the ones used in DIMMs (e.g., 32-bit vs. 8- or 16-bit). That's not something that designers of CPU chipsets would have targeted (back when there were chipsets), and now that CPUs are directly connected to DIMMs, it's even less economical to target something this specialized. The perf/$ ratio isn't that good, so it's only attractive to someone who wants perf above all.

There's no technical reason why it couldn't be done, but there hasn't been a *general* market for it (you asked why *general*-purpose CPU-memory interfaces don't have this bandwidth).
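Rough numbers for point 1, just bus width times data rate, using the commonly published GTX 680 and dual-channel DDR3-1600 figures:

    #include <cstdio>

    int main() {
        // GTX 680: 256-bit GDDR5 interface at 6 GT/s effective
        double gpu = (256.0 / 8.0) * 6.0;        // 192 GB/s
        // typical desktop of the era: two 64-bit channels of DDR3-1600
        double cpu = 2.0 * (64.0 / 8.0) * 1.6;   // 25.6 GB/s
        printf("GPU ~%.0f GB/s vs CPU ~%.1f GB/s (~%.1fx)\n", gpu, cpu, gpu / cpu);
        return 0;
    }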

$1000 a pop. (0)

Gordo_1 (256312) | about 2 years ago | (#39841469)

TSMC's yield on 28nm has been really low. They priced it sky high because they simply don't have enough chips to make many of these monsters -- supply and demand I suppose.

The real story, in my mind, is how the tech press will go gaga over a part that few will ever own, and how that will inevitably help frame the entire nVidia 6xx product line and sell parts that are not the GTX 690. I guess it's no different from Chevrolet building a high-performance sports car to improve the perception of the bowtie logo.

I'm not sure I understand all this mumbo-jumbo... (1)

ZeroPly (881915) | about 2 years ago | (#39841865)

... just tell me how much it would cost for 4 of these with the SLI bridge thingie so I can make WoW run faster.

Re:I'm not sure I understand all this mumbo-jumbo. (1)

symbolset (646467) | about 2 years ago | (#39842339)

About $4000, and $3000 more for the box to put it in.

Re:I'm not sure I understand all this mumbo-jumbo. (1)

drinkypoo (153816) | about 2 years ago | (#39844195)

About $4000, and $3000 more for the box to put it in.

$3500, you forgot the power supply

whoppie (1)

Osgeld (1900440) | about 2 years ago | (#39842111)

Now I can play my Xbox 360 ports (which would run pretty decently on a GeForce 8 series) at 180 fps instead of 120; let me just shit myself.

Here, Nvidia, have 1000 bucks!

Re:whoppie (1)

benthurston27 (1220268) | about 2 years ago | (#39866337)

It's still not going to magically fix the fact that the game is designed for an Xbox 360 controller and not a keyboard and mouse, though.

heat? (0)

Anonymous Coward | about 2 years ago | (#39869479)

how's the heat on these things? Are they better than a space heater for my toesies?
