Slashdot: News for Nerds



Nvidia GeForce GTX 780 Ti Review: GK110, Fully Unlocked

Unknown Lamer posted about 9 months ago | from the that's-some-hardware-there dept.

Graphics 88

An anonymous reader writes "Nvidia lifted the veil on its latest high-end graphics board, the GeForce GTX 780 Ti. With a total of 2,880 CUDA cores and 240 texture units, the GK110 GPU inside the GTX 780 Ti is fully unlocked. This means the new card has one more SMX block, 192 more shader cores, and 16 more texture units than the $1,000 GTX Titan launched back in February! Offered at just $700, the GTX 780 Ti promises better gaming performance than the Titan, yet the card has been artificially limited in GPGPU performance, no doubt to make sure the pricier card remains relevant to those unable or unwilling to spring for a Quadro. The benchmark results largely bear out the GTX 780 Ti's on-paper specs. The card was able to beat AMD's just-released flagship, the Radeon R9 290X, by anywhere from single-digit percentages up to more than 30%, depending on the variability of AMD's press and retail samples."
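The summary's core counts follow directly from GK110's SMX structure; a quick sanity check, using Kepler's published figures of 192 CUDA cores and 16 texture units per SMX:

```python
# Sanity-check the summary's GK110 numbers: each Kepler SMX block
# contains 192 CUDA cores and 16 texture units.
CORES_PER_SMX = 192
TMUS_PER_SMX = 16

gtx_780_ti_smx = 15              # fully unlocked GK110
gtx_titan_smx = gtx_780_ti_smx - 1  # the Titan ships with one SMX disabled

print(gtx_780_ti_smx * CORES_PER_SMX)  # 2880 CUDA cores, as the summary says
print(gtx_780_ti_smx * TMUS_PER_SMX)   # 240 texture units
print((gtx_780_ti_smx - gtx_titan_smx) * CORES_PER_SMX)  # 192 extra cores vs. Titan
print((gtx_780_ti_smx - gtx_titan_smx) * TMUS_PER_SMX)   # 16 extra texture units
```
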



heh (5, Funny)

jakobX (132504) | about 9 months ago | (#45378683)

Offered at JUST 700 dollars. Nice try, anonymous Nvidia.

Re:heh (1)

arbiter1 (1204146) | about 9 months ago | (#45378719)

That is just at stock MHz; GK110 is known to get a good 200 MHz overclock, and at least another 250-500 MHz on memory, so that lead gets a bit bigger. The AMD card, given how HOT it runs, can't be overclocked.

Re:heh (2)

dimeglio (456244) | about 9 months ago | (#45379415)

Who said you can't overclock the R9? Just add better cooling. Amazing how suddenly there's a problem because of heath.

Re:heh (2)

ifiwereasculptor (1870574) | about 9 months ago | (#45382501)

Amazing how suddenly there's a problem because of heath.

Unless you're talking about The Dark Knight, you may have misspelled something.

Re:heh (1)

smash (1351) | about 9 months ago | (#45397921)

By that reasoning, just apply more (equivalent to AMD) cooling to the GeForce and overclock it more?

Re:heh (2)

Fwipp (1473271) | about 9 months ago | (#45378723)

For a $150 price premium over the 290X, I'd expect more than "single-digit percentages." I know there's always a tax on the high end cards, but 27% pricier for (up to) 9% speed doesn't seem like a great trade-off.
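The premium arithmetic here checks out; a quick sketch, assuming the R9 290X's $549 launch price (consistent with the "$150 price premium" above):

```python
# Price/performance trade-off sketch. The $549 R9 290X launch price is an
# assumption consistent with the "$150 premium" mentioned in the comment.
r9_290x_price = 549
gtx_780ti_price = 700

premium = gtx_780ti_price / r9_290x_price - 1
print(f"price premium: {premium:.0%}")  # ~28%

# Even granting the optimistic end of "single digits":
speedup = 0.09
perf_per_dollar_ratio = (1 + speedup) / (1 + premium)
print(f"relative perf/$ vs. 290X: {perf_per_dollar_ratio:.2f}")  # below 1.0: worse value
```
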

Re:heh (2)

Fwipp (1473271) | about 9 months ago | (#45378733)

Aaaaaand I just looked back at the summary and noticed the "up to 30%." I should really double-check before I post these things.

Re:heh (5, Informative)

Billly Gates (198444) | about 9 months ago | (#45378867)

Tom's Hardware is known to be biased, as they take in ad money and have partnerships with Nvidia and Intel. They put in x87 non-IEEE FPU tests where Intel's own chips win, and declare anything AMD/ATI a loser as a result, rather than measuring real-world performance. For example, they do not benchmark the later versions of Skyrim, which have proper FPU support.

For a more accurate benchmark, click here []

Re:heh (4, Insightful)

PopeRatzo (965947) | about 9 months ago | (#45379189)

No, you were right the first time. Toms Hardware is the only site claiming these benchmark victories for the nVidia card. I'm not saying they allow their advertising department to influence their reporting and rankings, but it's a bit fishy that they're such an outlier regarding the flagship video cards of the two manufacturers.

It's also worth noting that comparing these cards without taking AMD's Mantle technology into account is, to say the least, incomplete.

Re:heh (1)

Necronomicode (859935) | about 9 months ago | (#45379979)

This sounds a bit biased to me. Firstly, Tom's isn't the only site. Secondly, taking into account something which currently doesn't exist to test is pretty tricky.

What performance advantage are you going to give the Mantle API without knowing anything about it? The best they can do is give the figures they currently get for both cards and add an addendum stating that Mantle may improve performance for the AMD cards. To be fair, they should probably also add that in real-world situations the ATI card may suffer performance degradation due to heat.

From my perspective the current AMD offering is too hot, the Nvidia is too expensive. AMD are being cheap releasing a card which may not perform due to temps, Nvidia were being cheap not releasing the Ti before they needed to due to competition. Swings and roundabouts as they say, but competition is good.

I'm genuinely interested in what Mantle will bring to the table - it better be good or AMD are gonna get roasted for the hype train they've created, but fingers crossed.

Re:heh (2)

PopeRatzo (965947) | about 9 months ago | (#45380695)

These are both reference cards. Once Gigabyte and MSI and Sapphire get hold of them, I think the AMD cards will run cooler and the nVidia cards will become a little more cost-effective.

All in all, it's great to have them actually competing.

Re:heh (1)

Dr Max (1696200) | about 9 months ago | (#45381381)

Water cooling is getting pretty cheap, and it's quite effective; it also eliminates a lot of noise. If you run Linux it's worthwhile going AMD; they are more open and it shows (like being able to give a virtual machine access to the graphics card).

Re:heh (1)

aliquis (678370) | about 9 months ago | (#45381563)

But aren't their drivers also slower? Because that's why I _don't_ want to go AMD on Linux.

Re:heh (1)

Dr Max (1696200) | about 9 months ago | (#45381599)

What? I always thought it was the other way around. Maybe with SteamOS coming out Nvidia has stepped up its game, but I haven't seen any proof yet. Feel free to enlighten me.

Re:heh (1)

aliquis (678370) | about 9 months ago | (#45381951)

As far as I know the Nvidia drivers have always been better.

Linus and friends may have liked AMD more for providing more information and hence enabling a better open-source driver.

Personally I wouldn't want to use an open-source driver if it gave half the performance, even if that would be better than, say, 10% of the performance =P. And at least previously I was under the impression that not even the closed driver from AMD was competitive with the Nvidia one. But that could have changed, I suppose.

Also, Nvidia's driver exists for FreeBSD and Solaris, which AMD's doesn't, AFAIK.

Re:heh (1)

aliquis (678370) | about 9 months ago | (#45381543)

It's also worth noting that comparing these cards without taking AMD's Mantle technology into account is, to say the least, incomplete.

If it's game benchmarks what are they supposed to do?

For anything except Battlefield 4 I think it's correct, because that's the only title I know of which makes use of it so far.

Maybe it would be nice to see a demo which really uses it vs some other traditional method and in the future for more games but for now this is what we have.

Re:heh (1)

edxwelch (600979) | about 9 months ago | (#45384537)

Mantle won't appear in Battlefield 4 until later, but it'll be very interesting to see how it benefits performance.

Re:heh (0)

Anonymous Coward | about 9 months ago | (#45392303)

That's not the only technology that's incomplete. Try their crapalyst drivers under ANY OS; they're phailwhale.

That's the entire reason I moved back to Nvidia after being in crapalyst hell for 4+ years (on a notebook, where it's not simple to swap GPUs: it would involve fabricating new heatsinks and fan mounts, not to mention finding a card that's electrically compatible (yes, MXM is a standard, but manufacturers mess with the standard, e.g. ASUS sometimes reverses the connections on MXM cards), then finding/hacking a vBIOS that works with your notebook BIOS, or finding/hacking that to work as well).

I just decided that enough is enough; so long, ATI, and your crap software. Your hardware used to be competitive (if I need a stove now, I'll buy a Hawaii) and cheaper (why does this sound familiar with AMD?), but it's just not worth the crap-quality driver headache (and the fact that they drop support so quickly now without bothering to fix all the bugs, just throwing up their hands and saying 'there, that's as good as we can get'... rrrriiiiggghhhhttttt).

(I ordered one of these, as I had been contemplating a Titan or a plain 780, but TBH these cards aren't for everyone. Most people could be completely happy with much cheaper/less capable cards, and I'm not even talking about the R9 290X here, more like the $300-400 range... I don't even really need this right now, but I fully intend to add more monitors before long, and the 670 just isn't going to scale well beyond two, maybe not even to two, depending on what I end up buying. And yes, I game, so don't bother with office et al.; those would be just as happy with bottom-feeder cards/IGP...)

Re:heh (1)

smash (1351) | about 9 months ago | (#45397923)

AMD's mantle technology was taken into account. They ran all the common software currently utilising it.


Anonymous Coward | about 9 months ago | (#45378759)

For the parent is an idieto!


Anonymous Coward | about 9 months ago | (#45378887)

The price/performance mismatch between AMD and Nvidia is a trap that recently cost me several hundred dollars and a bunch of hassle. AMD has significantly better price/performance but it is incompatible with CUDA and PhysX.

Mod grandparent up to save "idieto"s like me from ourselves :P


arbiter1 (1204146) | about 9 months ago | (#45379043)

Well, considering the performance hit the new AMD 290 cards take because of heat, the number is a bit closer than people think. Unless you run the fan at 50%+ (keep in mind this is on an open-air test bench), over long periods of gaming the AMD card slows down as heat soak sets into the cooler, throttling to prevent overheating. AMD had a good card on paper but failed to control heat. The biggest reason the card is set to run at 95C was AMD wanting to beat Nvidia; the problem ended up being like when the 7790 was released: Nvidia had a card ready to go to beat it.

So stop using CUDA? (3, Insightful)

dutchwhizzman (817898) | about 9 months ago | (#45379249)

OpenCL is the future, why use CUDA if you have a choice?

Re:So stop using CUDA? (0)

Anonymous Coward | about 9 months ago | (#45379755)

CUDA is still much better in many ways. Also, nVidia, being somewhat obnoxious that way, seem to have only barely enough interest in opening things up. I would not be surprised if they did something to make OpenCL irrelevant should it ever become enough of a competitor.

Re:So stop using CUDA? (5, Insightful)

Anonymous Coward | about 9 months ago | (#45380015)

Ironically, in the industry I work in (computer vision, embedded signal processing, etc.) we've slowly been moving AWAY from OpenCL. It's dead/stagnant, still hasn't caught up to where CUDA was 3 years ago, and, to put it in general terms, there's fragmentation in support for STANDARD (as in, the specification) features of OpenCL, and worse, it prevents you from using simple things like images or barriers properly.

On top of that, radically different implementations obviously have radically different optimization processes: for one particular kernel/function, we were looking at maintaining 13 differently optimized versions to target different devices.

Nowadays we use CUDA compiled to PTX (for nVidia, x86_64, and ARM NEON), CAL for legacy AMD, HSAIL for GCN AMD, and GLSL for Intel.

Say what you will about CUDA/PTX vendor lock-in: PTX is far more device-agnostic than OpenCL is, just as portable (thanks to great open-source efforts like LLVM (plus the vendor support nVidia/AMD/Intel/etc. provide) and GPUOcelot), and far more mature than HSAIL.

That's my basis for NOT using OpenCL; where's yours?

Re:So stop using CUDA? (0)

Anonymous Coward | about 9 months ago | (#45382421)

Just to add to this: CUDA also has way better dev tools, e.g. GDB for debugging, nvprof, and Visual Studio integration (profiling etc.). I also prefer the offline C++-ish CUDA compiler over OpenCL (which additionally is C only). Not only does it have much better diagnostics during compilation, but you can also inspect its output easily (either PTX or device ASM via cuobjdump).

Also, CUDA being basically C++ with some fluff enables people to write well-performing libraries (Thrust), which is harder for OpenCL. Compilation times may suck hard for CUDA, but hey, with CUDA those are paid at build time rather than run time (OpenCL), so who cares?

Re:So stop using CUDA? (1)

cheesybagel (670288) | about 9 months ago | (#45385021)

OpenCL caches the binaries after the first compile. Your NVIDIA driver will be doing the same thing, except it's compiling from PTX binaries to the specific card rather than straight from source. There are debuggers and profilers for OpenCL. Plus, LLVM also supports OpenCL.

Re:So stop using CUDA? (1)

cheesybagel (670288) | about 9 months ago | (#45385029)

Also Adobe seems to disagree with you since they have been shifting from CUDA to OpenCL for some time now.

Re:So stop using CUDA? (0)

Anonymous Coward | about 9 months ago | (#45380017)

Because CUDA is the past and present. If you inherited tons of poorly written, poorly documented code that uses CUDA, you use CUDA. If the OpenCL implementation doesn't support X feature you need (non-inlined function calls for my purposes) then you use CUDA. If you want better documentation, you use CUDA. The list goes on.

Re:So stop using CUDA? (0)

Anonymous Coward | about 9 months ago | (#45382085)

OpenCL is the future, why use CUDA if you have a choice?

On Linux, OpenCL is an AMD-exclusive API with limited support from NVIDIA and almost non-existent support from Intel, which in practice gives it the same hardware independence CUDA has: none. Intel's drivers only support the CPU as a computing back-end; the unofficial OpenCL driver for Intel cards is at version 0.2 and has just enough implemented to execute a kernel.

CUDA has active support from NVIDIA, great tutorials, great development tools, OptiX (I know real-time ray tracing is not for everyone), and is slightly less restrictive as a language (in OpenCL everything has to be passed in via kernel args; CUDA can set global state, which makes it easier to hide implementation details from calling functions).

Re:So stop using CUDA? (0)

Anonymous Coward | about 9 months ago | (#45385385)

OpenCL has always had the drum beating for it with none of the goosestepping behind it.

Re:heh (0)

Anonymous Coward | about 9 months ago | (#45378793)

Nvidia has CUDA and PhysX support which means they can charge a premium unless AMD starts taking GPGPU seriously enough to offer a compatibility layer.

Given that AMD took over a year to add non-inline function call support to OpenCL (crippling GPGPU for applications like Blender), I wouldn't hold my breath.

Re:heh (1)

Billly Gates (198444) | about 9 months ago | (#45378877)

That is marketing fluff.

Every GPU supports these things in hardware. What Nvidia did was refuse to support them in DirectX, requiring proprietary CUDA code to do things that ATI does through DirectX. In other words, Nvidia is the IE of video cards.

Re:heh (1)

arbiter1 (1204146) | about 9 months ago | (#45378945)

PhysX was a separate card/chip when it was announced; Nvidia bought the company that made it. So why SHOULD Nvidia take already-made code and recode it, spending their own money to help the competition? Pretty stupid thing to do. Nvidia DID offer a licensing option to AMD/ATI, but they turned it down, so in the end it's on AMD for not supporting it.

Re:heh (0)

Anonymous Coward | about 9 months ago | (#45380035)

Nope, nvidia's OpenCL implementation included the feature just fine. AMD's OpenCL implementation was the broken one, and they didn't fix it for over a year despite the fact that their slow response time prohibited compatibility with a number of pro apps. They have nobody to blame but themselves.

Promises that these problems will eventually be fixed are fluff. Assertions that porting is trivial are marketing fluff. The large set of existing programs that you cannot run on AMD cards, especially scientific and professional apps, is a fact.

Re:heh (0)

Anonymous Coward | about 9 months ago | (#45378985)

Actually, MANTLE is much, much better for GPGPU than CUDA or PhysX or OpenCL: much higher performance, much better control of the hardware.

Re:heh (1)

smash (1351) | about 9 months ago | (#45382221)

not cross platform = don't care. OpenCL runs on AMD, Nvidia and Intel (on Mac, no doubt support for all processors will follow on other platforms in due course).

Re:heh (1)

Shinobi (19308) | about 9 months ago | (#45382559)

OpenCL "runs" on AMD with really crap real-world performance, bad drivers and absolutely retarded software requirements (such as needing X running to be able to expose the OpenCL interface...)
OpenCL "runs" on nVidia, but CUDA is better in all ways(and can be cross-compiled to various other platforms, as another poster in another thread has already shown)
OpenCL "runs" on Intel, with crap performance(and on Linux you only get the CPU as a target device...)

And that's not even going into all the faults with OpenCL as a solution, how badly designed and how divorced from the real world it is etc. Despite claims to the contrary, AMD and Intel have just about abandoned OpenCL. That's what MANTLE is about for AMD, for example.

CUDA offers better developer support, better flexibility and, in the real world, better performance, and more solid software, which is why a huge part of the HPC world uses CUDA and not OpenCL.

Re:heh (1)

cheesybagel (670288) | about 9 months ago | (#45385127)

You ignore all the Android devices which use OpenCL, including the ARM processors from Apple, Qualcomm, Samsung, and others. Mali, Exynos, and PowerVR have OpenCL acceleration. I have tried to use 3rd-party CUDA implementations. They all suck. The reason is fairly obvious: they support the standard to various degrees, and the "fragmentation" is worse than in OpenCL. The main problem with OpenCL is that the open-source implementations suck, and NVIDIA does not update their implementation to anything past OpenCL 1.1. That is the problem. AMD's OpenCL drivers and tools are excellent, both on the CPU and GPU. Plus, there are several people modifying clang to compile OpenCL to NVIDIA's PTX format. Eventually clang may target all the platforms. As for CUDA...

Re:heh (1)

smash (1351) | about 9 months ago | (#45397937)

Pretty much. And it is in neither AMD's nor Nvidia's interest to support OpenCL over their own proprietary stuff. No matter; Intel is on board, and eventually what Intel supports will win out, as Intel ships a shitload more units than anyone else.

Re:compared (1)

Chemisor (97276) | about 9 months ago | (#45378863)

This card is only $700. And how much are you paying for that iPhone and its data plan that you almost never use?

Re:compared (2)

Billly Gates (198444) | about 9 months ago | (#45378919)

Well according to those who are broke the iPhone is free of charge!

It won't cost anything to use at all, compared to those silly users who pay up front... well, off I go to pay $120 a month for my one-user phone bill. Wow, I can't imagine why it is so high and why I can't leave for 2 years?!

Re:compared (0)

Anonymous Coward | about 9 months ago | (#45379445)

Because you're a sucker. I pay half that for my iPhone 5S 2 year plan with 6gb of data plus all the bells and whistles.

Re:compared (1)

Billly Gates (198444) | about 9 months ago | (#45379865)

I pay $50 a month for my cell phone. Multiply what you pay x 24? You paid $1200

Doesn't sound like I'm the sucker, if you ask me, and it explains why poor people remain poor.

Re:compared (0)

Anonymous Coward | about 9 months ago | (#45389917)


He pays half that: $60.
You pay $50.

So he gets a better (read: newer) phone than you for $240 more over the contract.

Compared to whatever price you paid (let's pretend it was a Nexus, ~$300), he is still kicking your ass.
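Laying out the arithmetic in this sub-thread (the quoted monthly rates over a 24-month contract; handset prices are excluded, since nobody above states theirs):

```python
# Total paid over a 24-month contract for each poster's quoted monthly rate.
MONTHS = 24
plans = {"$120/mo": 120, "$60/mo ('half that')": 60, "$50/mo": 50}

for label, monthly in plans.items():
    print(f"{label}: ${monthly * MONTHS} over {MONTHS} months")

# The $10/month gap between the $60 and $50 plans is the "$240" figure above:
print((60 - 50) * MONTHS)  # 240
```
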

Re:heh (1)

supremebob (574732) | about 9 months ago | (#45379223)

I like the part where they said "Depending on the variability of AMD's press and retail samples."

The variability in the results was mostly caused by some last-minute driver changes that produced a performance boost in the Radeon 290 and 290X cards, but the submitter seems to make it look like some sort of "golden sample" conspiracy from AMD.

More like fully unleashed (1)

cosmin_c (3381765) | about 9 months ago | (#45378725)

Basically the Titan was a publicity stunt. The 780 Ti is faster and a lot cheaper. Then again, the R9 290s are just as fast as a Titan and a lot cheaper, but it seems they have some cooling (throttling) issues. One should wait for the custom-cooled R9 290s if the target is games, and just get a 780 Ti if you really need the CUDA processing (e.g. Adobe applications like Premiere). Also, as for the price: it's top-end, so it's normal for it to be expensive. One can game with a GTX 670 or a GTX 770, which are more than enough and a lot cheaper.

Re:More like fully unleashed (1)

Anonymous Coward | about 9 months ago | (#45379039)

The Titan is and always has been geared toward entry-level professional 3D graphics applications. nVidia noticed that many just-starting-out game/movie creators were using hacks to make their sub-$1,000 gaming cards usable instead of the $2,000-8,000 professional series. This is why it has 6GB of RAM: it's a cheap option for those just getting into that type of work.

The fact that it worked great for games was a side benefit, allowing nVidia to grab some more money from those that really want the best.

Re:More like fully unleashed (0)

Anonymous Coward | about 9 months ago | (#45379251)

Well, the 280X blows the Titan out of the water for double-precision FLOPS per dollar, if you don't mind using OpenCL instead of CUDA (depends on your application): it gives you nearly a teraflop of DP for $300. Funnily enough, it even beats the 290 and 290X, since AMD decided to limit DP performance on them too. I'm surprised at how little attention it's been getting.
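Using the poster's own figures ("nearly a teraflop of DP for $300") against the Titan's list price and a roughly 1.3 TFLOPS FP64 spec (an approximate published number, used here purely for illustration), the value gap is easy to see:

```python
# DP FLOPS per dollar, from the figures in the comment above.
# The Titan's ~1.3 TFLOPS FP64 is an approximate published spec,
# used only to illustrate the comparison.
r9_280x = {"dp_tflops": 1.0, "price": 300}   # "nearly a teraflop ... for $300"
titan   = {"dp_tflops": 1.3, "price": 1000}

for name, card in [("280X", r9_280x), ("Titan", titan)]:
    gflops_per_dollar = card["dp_tflops"] * 1000 / card["price"]
    print(f"{name}: {gflops_per_dollar:.1f} DP GFLOPS per dollar")
# The 280X comes out well over twice the DP throughput per dollar.
```
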

Re:More like fully unleashed (1)

wagnerrp (1305589) | about 9 months ago | (#45379969)

That has always made me curious. Since early on, ATI has claimed higher compute performance in every generation, and yet nVidia owns that market. Is OpenCL an inferior interface to CUDA? Did nVidia just get there first with CUDA? Are they buying the market with programming support? Is it just that the compute market runs Linux, an area where nVidia has been stellar and ATI has been awful?

Re:More like fully unleashed (0)

Anonymous Coward | about 9 months ago | (#45380099)

OpenCL is not an inherently inferior interface, but CUDA was first and better documented. You're half right: they basically bought the supercomputing market with better support.

Worse, AMD has been doing a very poor job of supporting their own OpenCL implementation. They let critical bugs fester for years and they don't have a compatibility layer of any kind for CUDA. I think they're counting on MANTLE to save them, but that's a shitty bet: people too clueless/lazy to get max performance out of OpenCL won't be able to get max performance out of MANTLE. People who *could* get max performance out of OpenCL won't have anything to gain by switching to MANTLE, except incompatibility.

I really wish AMD would get their shit together. I'm a scientist paying for my predecessor's GPGPU lessons through premiums on my graphics cards. It sucks.

Re:More like fully unleashed (1)

UnknownSoldier (67820) | about 9 months ago | (#45380875)

CUDA is beautifully designed. It is trivial to pick up: the docs are great, there are plenty of examples, and it is not verbose like OpenCL but concise and compact. Basically, nVidia has supported CUDA significantly better than AMD has supported OpenCL.

At the end of the day, sure, eventually both of them reach feature parity, but you'll probably get up to speed faster with CUDA. I keep checking out OpenCL from time to time, and it is just easier to use CUDA.

Re:More like fully unleashed (1)

cheesybagel (670288) | about 9 months ago | (#45385171)

The main difference is that you cannot inline OpenCL in C like you can with CUDA, because it's done differently. The rest is fluff, and I do not consider CUDA beautifully designed. In fact it is a lot more cryptic than OpenCL.

Re:More like fully unleashed (0)

Anonymous Coward | about 9 months ago | (#45379933)

GCN has been supported by Mercury/compute in Premiere Pro 6+ for a while now.

I've owned a reference 7970 since launch; at 1.3 GHz it's practically a two-year-old Titan killer... with full compute and full colour gamut for half the price.

Re:More like fully unleashed (2)

UnknownSoldier (67820) | about 9 months ago | (#45380837)

> Basically the Titan was a publicity stunt.

Right, you want to tell that to the world's #1 supercomputer, which has 18,688 Tesla K20X GPUs. []

Second, you are glossing over the facts that:

Titan = 6 GB VRAM, 780 Ti = 3 GB
Titan FP64 rate = 1/3 of FP32; 780 Ti = 1/24 of FP32

Gamers couldn't give a shit about that. The fact that the Titan could also game was a bonus. It was never primarily targeted at gamers, but at budget scientific computing, or both: processing 3 TB of data (so the extra 3 GB of VRAM provides a nice bonus) AND gaming at 120+ Hz.

Re: More like fully unleashed (0)

Anonymous Coward | about 9 months ago | (#45382981)

You're forgetting gamers wanting to drive multiple displays. That eats RAM.

Sticking with ATI (1)

Billly Gates (198444) | about 9 months ago | (#45378899)

At least I do not have the driver issues that plague Nvidia cards, like the 320.x drivers, which are known to brick cards and mess with Aero on Windows 7.

Before I get modded down or flamed galore for this: I am talking about 2010-2013 ATI drivers vs. Nvidia ones. I kept having quality issues when I was an Nvidia fanboy, then switched to an ATI 5750 and now a 7850 with HDMI, and things have been great since! 2002, with the ATI Rage Pros... well, that is a different story altogether compared to today.

If I were to buy a new GPU (if I still had my ATI 5750) I would go with ATI again, as even the 270X has 2 trillion ops per damn second of power, which is incredible and a better $/performance value.

Re:Sticking with ATI (1)

arbiter1 (1204146) | about 9 months ago | (#45378917)

So one small bad set of Nvidia drivers vs. all the years of bad AMD drivers? How many years did AMD's CrossFire not even really work right, and whose tool was it that was released before AMD even did something about it? If you want to complain about driver issues, keep in mind AMD has had issues since day 1, and still does.

Re:Sticking with ATI (0)

Anonymous Coward | about 9 months ago | (#45378931)

I see no evidence of any bad ATI driver issues since 2011, besides the 7970, which shipped with its own separate driver set for a few months.

Re:Sticking with ATI (1)

arbiter1 (1204146) | about 9 months ago | (#45378997)

I will just use the CrossFire issue as one example: how many years have people on the forums complained about stuttering and bad performance with it? 4-5+ years? It took an NVIDIA tool, of all things, before AMD fixed it.

Re:Sticking with ATI (0)

Anonymous Coward | about 9 months ago | (#45379503)

4 or 5 years? They can't make a mouse cursor work after a decade.

So what? (2)

xhrit (915936) | about 9 months ago | (#45378935)

What does it matter, since no game that will be released in the next 10 years is going to need more graphics power than the shitty Xbox One can crap out?

Re:So what? (2)

Billly Gates (198444) | about 9 months ago | (#45378963)

What does it matter, since no game that will be released in the next 10 years is going to need more graphics power than the shitty Xbox One can crap out?

... uh, Crysis, and anything at 4K. This card is still not fast enough at that resolution.

Graphics still are not photorealistic, and we have a long way to go before that happens. At 1080p these would suffice, but Battlefield 4 barely plays at a lousy 40 fps at 4K, even with this Titan?!

Re:So what? (1)

smash (1351) | about 9 months ago | (#45382243)

I think the guy's point is that no one is going to write games that REQUIRE that, because they're all likely to be directly ported between Xbone/PS4/PC, as it's all PC hardware.

Re:So what? (1)

nhat11 (1608159) | about 9 months ago | (#45399601)

4K, easily... in fact, 2+ monitors are becoming the norm. People keep underestimating the need for more powerful graphics cards.

TITAN has one advantage...possibly (1)

_Shad0w_ (127912) | about 9 months ago | (#45378989)

The main advantage the TITAN has over the 780 Ti is the memory: 6GB compared to the 780 Ti's 3GB. If you're only looking at running 1080p that's not such an issue, but if you're one of those people with more money than sense who's looking at running a 4K panel, it is.

I really don't know why they didn't stuff 6GB on the 780 Ti. My *580* has 3GB. Two series down the line, I'd really expect a bit more RAM on it as standard (3GB wasn't the standard memory configuration for a 580; that was 1.5GB, I think).

Re:TITAN has one advantage...possibly (4, Informative)

gman003 (1693318) | about 9 months ago | (#45379217)

The other advantage of the Titan is the double-precision performance. Almost all of Nvidia's cards, including the 780 Ti, run double-precision floating-point calculations at 1/24th the rate of single-precision, but for the Titan and the Tesla pure-GPGPU cards, it's 1/3rd the rate.

While I'm not sure if that's an actual hardware difference, or if it's some software limitation, or a mix of both or whatever, it's definitely real. That's the main reason a Titan is still $1000 - it's being sold as a low-end compute card, not a high-end gaming card.
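To put those ratios in concrete terms, here's a sketch assuming roughly 5 TFLOPS of single-precision throughput for the 780 Ti and 4.5 TFLOPS for the Titan (approximate published figures, used only for illustration):

```python
# FP64 throughput implied by each card's FP32 rate and its DP ratio.
# The FP32 TFLOPS values are approximate published specs, not measurements.
cards = {
    "GTX 780 Ti": {"fp32_tflops": 5.0, "dp_ratio": 1 / 24},
    "GTX Titan":  {"fp32_tflops": 4.5, "dp_ratio": 1 / 3},
}

for name, c in cards.items():
    fp64 = c["fp32_tflops"] * c["dp_ratio"]
    print(f"{name}: ~{fp64:.2f} TFLOPS FP64")
# Despite the lower FP32 rate, the Titan comes out roughly 7x faster
# at double precision, which is what justifies its compute-card pricing.
```
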

Re:TITAN has one advantage...possibly (1)

Pinhedd (1661735) | about 9 months ago | (#45380371)

It's a limitation implemented in firmware/microcode for marketing purposes.

The Titan, GTX 780 and GTX 780 Ti all use the same physical chip.


Who's buying these cards? (1)

rsilvergun (571051) | about 9 months ago | (#45379143)

Are there that many people running multi-monitor flight sims and driving sims? I know there's the guy on here who bought a $1,000 card 10 years ago, sells it every year for $800, and then buys another $1,000 card with the proceeds. But I can't believe there are that many people on the bleeding edge. Heck, Crysis 3 ran on a 360...

Re:Who's buying these cards? (0)

Anonymous Coward | about 9 months ago | (#45379419)

Did you see that note about GPGPU performance? If you're shelling out more than $300 for a graphics card you either:
1) Have more money than sense
2) Are a professional who needs the performance for something other than computer games

Re:Who's buying these cards? (0)

Anonymous Coward | about 9 months ago | (#45379767)

Let's see here...

1) Have more money than sense

Not really... anyone want to pay me more?

2) Are a professional who needs the performance for something other than computer games

Yup, bingo. Though let's be honest, at the end of the day, when it's not being used for high performance computing, there's no reason not to use it for computer games too....

Re:Who's buying these cards? (1)

Nemyst (1383049) | about 9 months ago | (#45379837)

Flight sims and driving sims haven't been graphics showpieces for a while now. Shooters are usually where it's at, and games like Crysis 3 or Battlefield 4 can put very high-end cards through their paces, especially at >1080p resolutions. Crysis 3 ran on a 360 at a sub-720p resolution with a lot of settings notched down... You can see the difference between PC and consoles very easily.

Re:Who's buying these cards? (1)

Billly Gates (198444) | about 9 months ago | (#45379887)

Shit, there are people who pay $250,000 for a freaking bathtub and don't blink at buying a $50,000 Ford F-350 complete with $10,000 shocks, struts, and tires to show off how much money they have compared to you and me on the road.

People have money, and even if the HPs and Dells are in decline, the high-end motherboard market is taking off as more people play PC games. So yes, $400 is cheap as part of an $1,800 computer to MMO with their friends.

Not everyone is a poor young professional with student loan debt. That demographic is growing, though.

Re:Who's buying these cards? (1)

Cederic (9623) | about 9 months ago | (#45384085)

More to the point, the poor young professionals that grew up PC gaming have now cleared their student debt, the mortgage is a pitiful percentage of their net pay and the wife's earning as much as they are.

Gaming isn't just for kids.

Re:Who's buying these cards? (1)

Billly Gates (198444) | about 9 months ago | (#45386189)

Still have $40k on mine and no house yet.

Value cards it is for me, for now. If you graduated after 2006, $40k to $100k in debt with only $10-12/hr temp jobs seems to be the new norm in the Great Recession.

But if you graduated in the 1990s, you paid $15k to $25k and could get a house for a quarter of what it costs someone graduating today. Economists don't count homes, services, rent, or food in inflation indexes, which is silly, because those are a big problem too; nor do they count debt, only income.

Re:Who's buying these cards? (0)

Anonymous Coward | about 9 months ago | (#45380071)

If fps drops below 60, it's very annoying. The higher the fps, the better (assuming the monitor's refresh rate is high enough). For competitive shooters this isn't just a preference; it's important for playing well. If you look at the results, there are games in which only the GTX 780 and stronger average above 60 fps at a modest 1920x1080. At higher resolutions the situation is even worse.
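One way to see why drops below 60 fps hurt is to look at frame time rather than frame rate; a small fps drop costs disproportionately many milliseconds per frame (simple arithmetic, nothing benchmark-specific):

```python
# Convert an fps target to the per-frame time budget in milliseconds.
def frame_time_ms(fps):
    return 1000.0 / fps

# Going from 60 to 45 fps adds ~5.6 ms per frame; 60 to 30 doubles it.
for fps in (120, 60, 45, 30):
    print(f"{fps:>3} fps -> {frame_time_ms(fps):5.1f} ms per frame")
```

At 60 fps the budget is about 16.7 ms; at 30 fps it balloons to 33.3 ms, which is why sub-60 stutter is so visible.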

still lousy at hashing passwords (0)

dutchwhizzman (817898) | about 9 months ago | (#45379263)

It's nice that Nvidia is lowering its prices, but these cards are just not that competitive if you use them for password hashing or OpenCL. I had been using Nvidia for the last 12 years, but I recently switched to an AMD card since, at half the price, it was still faster at brute-forcing crypto than the Nvidia board was. I think Nvidia should work on its OpenCL performance, and AMD should work on the number of shaders and such on its chipsets.
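For readers unfamiliar with the workload: password cracking is just hashing candidate strings until one matches a target digest. A CPU-only Python sketch (illustrative; real crackers run this as OpenCL/CUDA kernels across thousands of shader cores, which is why GPU throughput matters here):

```python
import hashlib
import itertools
import string

def brute_force_md5(target_hex, alphabet=string.ascii_lowercase, max_len=4):
    """Exhaustively hash short candidate strings until one matches target_hex.
    Returns the matching plaintext, or None if nothing matches."""
    for length in range(1, max_len + 1):
        for combo in itertools.product(alphabet, repeat=length):
            candidate = "".join(combo)
            if hashlib.md5(candidate.encode()).hexdigest() == target_hex:
                return candidate
    return None

# Recover a known 3-character password from its MD5 digest.
print(brute_force_md5(hashlib.md5(b"abc").hexdigest()))
```

The search space grows exponentially with length, and every candidate hash is independent, which is why the embarrassingly parallel GPU version is orders of magnitude faster than this loop.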

Re:still lousy at hashing passwords (0)

Anonymous Coward | about 9 months ago | (#45379787)

Um... what legitimate use do you have to be doing that anyway? Inquiring minds and all.

Re:still lousy at hashing passwords (0)

Anonymous Coward | about 9 months ago | (#45381217)

Maybe he got Adobe data? not legitimate though.. :D

Re:still lousy at hashing passwords (1)

smash (1351) | about 9 months ago | (#45382269)

Security audit? If i can crack password hashes with a single GPU then who knows how quickly a determined attacker can break them.

Another PAY-FOR-PLAY Slashdot ad (-1)

Anonymous Coward | about 9 months ago | (#45379917)

Do we know how much money Slashdot's owners are now making from this site every year for posting advertisements posing as 'stories'? Of course, Slashdot's owners are now placing trojanware in SourceForge downloads via the Windows installers.

Nvidia has been paying large sums of money to ANY major technical site that agrees to denigrate AMD's new 290 family, or laud Nvidia's less new 780 family, using the specific language and bullet points provided by Nvidia.

The 780TI mentioned above is an absolute atrocity, costing vastly more than AMD's 290, using the SAME power when matching performance in taxing games, and only offering a meagre, non-future proof, 3GB of memory. Despite costing a fraction of the amount demanded by Nvidia, the overclocked 290 essentially draws equal with the overclocked 780TI when used at resolutions and game settings that high-end gamers expect.

Given that NEITHER card makes sense in the long term (at the end of 2014, AMD will have much better parts available on the 20nm process, and Nvidia, as usual, will lag behind by at least 3 months until its 20nm parts also arrive), the AMD 290 has by far the most disposable price point.

It gets FAR worse for Nvidia. AMD has built in super-sophisticated sound processing AND Mantle support, which will give the 290 a whole new lease of life in your secondary computer system when you retire the card from primary use at the end of next year. And in your secondary system, its excessive power use can be simply removed by downclocking the card by a modest amount.

On the other hand, Nvidia dumps the 780 architecture entirely at Spring next year, when it releases its Maxwell family of parts (on 28nm, the current OLD process). So, the 780 is COMPLETELY out-of-date, while the 290 has the second generation of GCN, AMD's cross platform GPU architecture used in BOTH new consoles, its forthcoming ARM parts, its new x86 APU parts, and its current range of discrete graphics cards.

Nvidia is spending money like water to BUY favourable coverage because AMD has Nvidia backed into a corner. And worse, Nvidia has obsolete 2GB cards priced where AMD offers 3GB, and obsolete 3GB cards where AMD offers 4GB. And YES, the amount of memory does matter, because console ports at the end of 2014 will need 3GB at least for 1080P resolution, requiring Nvidia owners to have to significantly drop their quality settings to maintain framerate.

Nvidia is already demanding that sites review the cards using 2xMSAA anti-aliasing (a dreadful low quality setting) rather than 4xMSAA (the first setting where anti-aliasing looks good) because their tiny memory sizes collapse their performance compared to AMD when decent anti-aliasing is selected.

Nvidia is currently targeting the developers of the better INDY games to only use single threaded rendering, and the dreadful PhysX for the in-game physics, because ALL AAA game developers, no matter what their previous relationship with Nvidia, are now switching to proper coding methods and middleware, and these methods massively improve new console performance, and PC performance when the owner has 4+ CPU cores, and an AMD GCN GPU card.

Blech (0)

Anonymous Coward | about 9 months ago | (#45380491)

These cards run too hot and noisy, and will shortly be replaced by more efficient and less costly parts. Not worth it.

Linux (1)

fa2k (881632) | about 9 months ago | (#45382285)

Great that there's competition. I'm sure this blows the AMD ones away in Linux, but do they still have the tearing problems for video? Video has looked like crap on both my recent nVidia cards because frames tear across the middle of the screen, though only when using two monitors. The exception is the few video players that use VDPAU, but I can't expect all my videos to be playable on those. While I've preferred AMD before, I'm trying not to care about the brand as long as they don't do something hostile to users (Sony), so I'd love to get an nVidia card if they got their non-accelerated video playback stuff together.

This is spam (0)

Anonymous Coward | about 9 months ago | (#45385531)

I'm aware that Slashdot recently acquired new owners, and I hadn't noticed any major differences until now, but this is troubling.

This post appears to have been written by a 13-year-old on an intro-to-copywriting course. Do the people running this site seriously think Slashdotters are going to fall for a thinly veiled piece of (poor) advertising masquerading as a serious topic for discussion?

This is BS

Fully unlocked... yet limited? (1)

naturaverl (628952) | about 9 months ago | (#45392247)

"the GK110 GPU inside the GTX 780 Ti is fully unlocked" ... "yet the card has been artificially limited in GPGPU performance" ... So which is it? To me, "fully unlocked" can't be true of a card that is "artificially limited".