
NVIDIA Predicts 570x GPU Performance Boost

ScuttleMonkey posted more than 4 years ago | from the lets-talk-about-diminishing-returns dept.


Gianna Borgnine writes "NVIDIA is predicting that GPU performance is going to increase a whopping 570-fold in the next six years. According to TG Daily, NVIDIA CEO Jen-Hsun Huang made the prediction at this year's Hot Chips symposium. Huang claimed that while the performance of GPU silicon is heading for a monumental increase in the next six years — making it 570 times faster than the products available today — CPU technology will find itself lagging behind, increasing to a mere 3 times current performance levels. 'Huang also discussed a number of "real-world" GPU applications, including energy exploration, interactive ray tracing and CGI simulations.'"

295 comments

anonymous coward predicts first post (-1, Troll)

Anonymous Coward | more than 4 years ago | (#29236263)

suck my asshole, homos

Goody! (5, Funny)

Anonymous Coward | more than 4 years ago | (#29236273)

Then we can use our GPUs as our CPUs!

FOUL!! (0, Redundant)

Runaway1956 (1322357) | more than 4 years ago | (#29236331)

No fair! That obvious response was mine! I want it back! But the question remains: should we use our CPUs as GPUs?

Nope! (4, Funny)

goombah99 (560566) | more than 4 years ago | (#29237177)

Then we can use our GPUs as our CPUs!

No no. GPUs only become CPUs when they are 570.34567 times faster. You will note that he said precisely 570 times faster. That is, he did not say an even 600 or 1000 or 500, but precisely 570, so we can assume he knew it was not 570.34567.

haha yeah right (-1, Troll)

Anonymous Coward | more than 4 years ago | (#29236277)

Just like how Intel said they'd have 4nm chips? Haha, we don't believe that shit. We know far more than the companies do about fabrication. Queue up 1000 posts about how 560x is unrealistic.

Re:haha yeah right (3, Interesting)

4D6963 (933028) | more than 4 years ago | (#29236349)

Intel said 4 nm for 2022; that's 13 years from now. What exactly makes you doubt that claim, apart from the fact that deadlines are often missed? Let me rephrase that: what makes you think this one will slip any more than any other?

Also, queue a dozen+ posts explaining to the armchair pundits how 560x is possible.

Re:haha yeah right (0)

Anonymous Coward | more than 4 years ago | (#29236433)

WHOOPAWW. It's the new woosh.

Re:haha yeah right (2, Informative)

Martin Blank (154261) | more than 4 years ago | (#29236569)

The IEEE figures that semiconductor tech will be at the 11nm level around 2022. Intel and Nvidia both claim that they'll be significantly further along the path than the IEEE's roadmap. Maybe they're right, and I hope they are, but there are some very significant problems that appear as the process shrinks to that level.

Re:haha yeah right (1)

4D6963 (933028) | more than 4 years ago | (#29236653)

Undoubtedly. However, it's too early to call; for all we know, they might just make it.

570x is not that far (2, Funny)

gravos (912628) | more than 4 years ago | (#29236883)

Keep in mind that is only ~3x per year, because 3^6 = 729. If Moore's law holds with a 2x every 18 months, that would be 16x in 6 years, and 570/16 = 35.6. The sixth root of 35.6 is about 1.8, so they only have to improve the architecture by a bit under 2x every year and ride Moore's law.
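A quick back-of-the-envelope check of that arithmetic (my own sketch, not anything from NVIDIA's slides):

    # Rough compound-growth check of the 570x-in-6-years claim.
    years = 6
    target = 570

    per_year = target ** (1 / years)          # ~2.88x per year overall
    moore = 2 ** (years / 1.5)                # 2x every 18 months -> 16x
    arch_gap = target / moore                 # ~35.6x must come from elsewhere
    arch_per_year = arch_gap ** (1 / years)   # ~1.8x per year

    print(f"needed overall per year:     {per_year:.2f}x")
    print(f"Moore's law alone, 6 years:  {moore:.0f}x")
    print(f"remaining architectural gap: {arch_gap:.1f}x")
    print(f"architectural gain per year: {arch_per_year:.2f}x")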

Re:haha yeah right (2, Insightful)

PIBM (588930) | more than 4 years ago | (#29236743)

Stupid, I know, but I would have had more confidence in a 500x increase, just because it has fewer significant digits and a wider implied error margin.

Re:haha yeah right (0)

Anonymous Coward | more than 4 years ago | (#29237073)

I'd be more inclined to believe they might get their current GPUs working properly within 6 years.
They can't do bumps right and have had real problems with 45nm GPUs.

A 500x increase sounds like an estimate pulled out of his head/ass/PR department, but 570x is more like some geek-calculated figure.
Besides, he meant 5.70x faster, but the Nvidia GPU burned out and miscalculated.

Re:haha yeah right (1)

vertinox (846076) | more than 4 years ago | (#29236903)

Intel said 4 nm for 2022; that's 13 years from now. What exactly makes you doubt that claim, apart from the fact that deadlines are often missed? Let me rephrase that: what makes you think this one will slip any more than any other?

I dunno. Most CEOs don't make claims unless their business plan includes them; otherwise they look like fools at the next shareholder meeting. That doesn't stop them from making claims that don't come true.

Remember Steve Jobs saying they would break 3.0GHz with the IBM chip by the next WWDC... And then they didn't... Coincidentally, they dropped IBM shortly after and went with Intel.

Anyways... Intel seriously uses Moore's Law as its roadmap, so it's a self-fulfilling prophecy.

Also, queue a dozen+ posts explaining to the armchair pundits how 560x is possible.

Simple. Move the goal posts.

Re:haha yeah right (1)

RightSaidFred99 (874576) | more than 4 years ago | (#29237023)

Anyways... Intel seriously uses Moore's Law as its roadmap, so it's a self-fulfilling prophecy.

No, they don't. It's descriptive, not something that ties your hands or, conversely, guarantees anything.

Predictions of the future (1)

mollog (841386) | more than 4 years ago | (#29236301)

I see a few tags that cast doubt on the prediction. Why? I'll bet there were skeptics of Moore's Law when that became widely disseminated.

What troubles me is that this sort of Cell-style GPU computing is not more widely used in everyday applications. We who program for a living feel like we have been engaging in 'self stimulation' for years and wish there were some new target platform/market that we could do some interesting work in.

Re:Predictions of the future (-1, Troll)

Anonymous Coward | more than 4 years ago | (#29236391)

Panties Stink!
They really, really stink!
Sometimes they're red, sometimes they're green,
Sometimes they're white or black or pink
Sometimes they're satin, sometimes they're lace
Sometimes they're cotton and soak up stains
But at the end of the day, it really makes you think
Wooooooo-wheeeee! Panties stink!

Sometimes they're on the bathroom floor
Your girlfriend- what a whore!
Sometimes they're warm and wet and raw
From beneath the skirt of your mother-in-law
Brownish stains from daily wear
A gusset full of pubic hair
Just make sure your nose is ready
For the tang of a sweat-soaked wedgie
In your hand a pair of drawers
With a funky feminine discharge
Give your nose a rest, fix yourself a drink
cause wooooooo-wheeeeeee! panties stink!

Re:Predictions of the future (3, Insightful)

TheRealMindChild (743925) | more than 4 years ago | (#29236425)

Well, it comes down to simple math. For the performance to get to 570-fold more than what it is now, in the same style package, either:
  1. The GPU has to become 570-fold more efficient
  2. The GPU has to become ~570-fold smaller so they can fit 570 of the things onto a card

Both seem highly unlikely.

Re:Predictions of the future (0)

Anonymous Coward | more than 4 years ago | (#29236545)

The claim is further undermined by the fact that nvidia's last two "generations" of GPUs were just rebrands and die shrinks.

Re:Predictions of the future (4, Informative)

volsung (378) | more than 4 years ago | (#29237081)

The GeForce 9 series was a rebrand/die shrink of GeForce 8, but the GTX 200 series has some major improvements under the hood:

* Vastly smarter memory controller including better batching of reads, and the ability to map host memory into the GPU memory space
* Double the number of registers
* Hardware double precision support (not as fast as single, but way faster than emulating it)

These sorts of things probably don't matter to people playing games, but they are huge wins for people doing GPU computing. The GTX 200 series has also seen a minor die shrink during the generation, so I don't know if the next generation will be more of a die shrink or actually include improved performance. (Hopefully the latter to keep up with Larrabee.)

Re:Predictions of the future (4, Insightful)

LoudMusic (199347) | more than 4 years ago | (#29236693)

Well, it comes down to simple math. For the performance to get to 570-fold more than what it is now, in the same style package, either:

  1. The GPU has to become 570-fold more efficient
  2. The GPU has to become ~570-fold smaller so they can fit 570 of the things onto a card

Both seem highly unlikely.

You don't feel it could be a combination of both? Kind of like they did with multi-core CPUs? Make a single unit more powerful, then use more units ... wow!

There is more than one way to skin a cat.
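To make the "combination of both" point concrete, here is a trivial sketch. The split between per-unit speedup and unit count is invented for illustration; it is not NVIDIA's roadmap:

    # The overall factor is the product of per-unit speedup and unit-count growth.
    per_unit_speedup = 6      # hypothetical: each core gets 6x faster
    more_units = 95           # hypothetical: 95x as many cores fit on the card
    print(per_unit_speedup * more_units)   # 570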

Re:Predictions of the future (1, Informative)

Anonymous Coward | more than 4 years ago | (#29236773)

Even generously assuming they achieve an 8x die shrink, they'd need to be producing chips with a ~41,000mm^2 die area. (Their current chips are already the biggest around, at 576mm^2.)

Re:Predictions of the future (0)

Anonymous Coward | more than 4 years ago | (#29236893)

Because using more units is cheating? C'mon, someone can probably find a way to glue 570 GPUs together now. That doesn't mean that GPUs saw a 570-fold increase in performance in the time it took me to come up with my brilliant plan.

Re:Predictions of the future (1)

Twinbee (767046) | more than 4 years ago | (#29236901)

Or 3d-erize the chip?

Re:Predictions of the future (1, Insightful)

Anonymous Coward | more than 4 years ago | (#29237169)

Doing multiple layers either via lamination or deposition would make sense. But then there's this problem: How do you get the heat out of it? Those things aren't exactly running cool as they are now.

But then again, maybe they figured something out that we don't know.

Re:Predictions of the future (1)

fuzzyfuzzyfungus (1223518) | more than 4 years ago | (#29236709)

Even more curious, he claims that GPUs will see 570x improvement, with CPUs only getting 3x.

One wonders what nigh-miraculous improvement in process, packaging, logic design, etc. will improve GPUs by hundreds of times, while somehow being virtually useless for CPUs...

Re:Predictions of the future (1, Informative)

Anonymous Coward | more than 4 years ago | (#29237151)

Simple. GPUs are already massively parallel in a way that is actually usable. While the gigahertz race has pretty much stopped, the transistor race is still on, but getting the most out of a multi-core CPU for a single application is non-trivial. GPUs, however, are a completely different ballgame, where the performance of the card pretty much scales with the number of shader cores.

Re:Predictions of the future (5, Interesting)

BikeHelmet (1437881) | more than 4 years ago | (#29236911)

Or... not.

Currently, CPUs and GPUs are stamped out from pre-made blocks. Basically, they take a bunch of pre-made blocks of transistors (millions of blocks, billions of transistors in a GPU), etch those into the silicon, and out comes a working GPU.

It's easy - relatively speaking - and doesn't require a huge amount of redesign between generations. When you get a certain combination working, you improve (shrink) your nanometre process and add more blocks.

However, compiler technology has advanced a lot recently, and with the vast amounts of processing power now available, it should be simpler to keep more complex blocks fully utilized. A vastly more complex block, with interconnects to many other blocks, could perform better at a swath of different tasks. This is evident when comparing the performance hit from anti-aliasing: previously even 2xAA had a huge performance hit, but nVidia altered their designs, and now multisampling AA is basically free.

I recall seeing an article about a new kind of shadowing that was going to be used in DX11 games. The card used for the review got almost 200fps at high settings - with AA enabled that dropped to about 60fps, and with the new shadowing enabled, it dropped to about 20fps. It appears the hardware needs a redesign to be more optimized for whatever algorithm it uses!

Two other factors you're forgetting...

1) 3D CPU/GPU designs are coming slowly, where the transistors aren't just on a 2D plane... that would allow vastly denser CPUs and GPUs. If a processor had minimal leakage, and low power consumption, 500x more transistors wouldn't be a stretch.

2) Performance claims are merely claims. Intel claims a quad-core gives 4x more performance, but in many cases it's slower than a faster dual-core.

570x faster for every game? Doubtful. 570x faster at the most advanced rendering techniques being designed today, with AA and other memory-bandwidth hammering features ramped to the max? Might be accurate. A high end GPU from 6 years ago probably won't get 1fps on a modern game, so this estimate might even be low.

A claim of 250x the framerate in Crysis, with everything ramped to the absolute maximum, might even be accurate.

But general performance claims are almost never true.

Re:Predictions of the future (3, Insightful)

eln (21727) | more than 4 years ago | (#29236429)

I don't doubt the prediction at all, I just have concerns about the vat of liquid nitrogen I'm going to have to immerse my computer in to keep that thing from overheating, and the power substation I'm going to need to build in my backyard to power it.

Re:Predictions of the future (1)

LoudMusic (199347) | more than 4 years ago | (#29236681)

I don't doubt the prediction at all, I just have concerns about the vat of liquid nitrogen I'm going to have to immerse my computer in to keep that thing from overheating, and the power substation I'm going to need to build in my backyard to power it.

But GPUs today are somewhat more than 570x more powerful than they were several years ago and we haven't had to submerge them in a vat of liquid nitrogen yet, so what makes you think that's going to be the case in the next 570x power increase? (whenever that happens ...)

Re:Predictions of the future (1)

eln (21727) | more than 4 years ago | (#29236705)

Maybe not, but they do require a lot more cooling and power than they did before.

Re:Predictions of the future (1)

negRo_slim (636783) | more than 4 years ago | (#29236821)

Perhaps that holds true for the top-end models, but as the market for roll-your-own HTPCs has shown (at least in terms of cooling), there are plenty of passive heatsink options available.

Jen-Hsun Huang is full of shit (3, Insightful)

Rix (54095) | more than 4 years ago | (#29236469)

He constantly runs his mouth without any real thought to what he's saying. It's just attention whoring.

Re:Jen-Hsun Huang is full of shit (1, Flamebait)

vivek7006 (585218) | more than 4 years ago | (#29236515)

Mod parent up.

Jen-Hsun Huang is a certified clown who just a short while back was running around saying things like 'we will open a can of whoop-ass on Intel'.

What a dumbass ...

Re:Predictions of the future (2, Interesting)

javaman235 (461502) | more than 4 years ago | (#29236513)

It's easy to get a 570x increase with parallel cores. You will just have a GPU that is 570 times bigger, costs 570 times more, and consumes 570 times more energy. As far as any kind of real breakthrough goes, though, I'm not seeing it from the information at hand.

There is something worthy of note in all this, though, which is that the new way of doing business is through massive parallelism. We've all known this was coming for a long time, but it's officially here.

Re:Predictions of the future (5, Informative)

Anonymous Coward | more than 4 years ago | (#29236547)

The prediction is complete nonsense. It assumes that CPUs only get 20% faster per year (compounded). That would only be true if they did not add more cores to the CPU. And finally, GPUs are hitting the same thermal/power-leakage wall that CPUs hit several years ago; they will at best get faster in lockstep with CPUs.

A GPU is not a general-purpose processor, as a CPU is. It is only good at performing a large number of repetitive single-precision (32-bit) floating-point calculations without branching. Double-precision (64-bit) calculations (double in C speak) are 4 times slower than single precision on a GPU. And the second you have an "if" in GPU code, everything grinds to a halt: conditionals effectively break the GPU's SIMD (single instruction, multiple data) model and stall the pipeline.
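For what it's worth, the 20%-per-year CPU figure in that first paragraph does compound to roughly the "3x" in the prediction (a quick check of my own, assuming yearly compounding):

    # 20% single-thread improvement per year, compounded over 6 years.
    print(f"{1.2 ** 6:.2f}x")   # ~2.99x, i.e. the "3x" CPU figure in the prediction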

Re:Predictions of the future (0)

Anonymous Coward | more than 4 years ago | (#29236761)

Moore didn't predict that processors would be 736x more powerful in 240 months.

In other news... (5, Informative)

Hadlock (143607) | more than 4 years ago | (#29236317)

In other news, ATI is selling their 4870 series cards for $130 on newegg, which are twice as fast as an Nvidia 9800GTS which is the same price (at least on Left 4 Dead, Call of Duty, and any other game that matters). ATI is blowing Nvidia out of the water in terms of performance per dollar and will continue to do so through at least the middle of next year. See here:

http://www.tomshardware.com/charts/gaming-graphics-cards-charts-2009-high-quality/benchmarks,62.html [tomshardware.com]

Yeah, I'd be making outrageous statements too if I were Nvidia.

Re:In other news... (2, Informative)

Hadlock (143607) | more than 4 years ago | (#29236373)

Here's the L4D comparo, sorry for the wrong link:

http://www.tomshardware.com/charts/gaming-graphics-cards-charts-2009-high-quality/Left4Dead,1455.html [tomshardware.com]

The 9800GT and 8800GT are both in the 40-60fps range, while the 4870 (single GPU) is in the 106fps range. It's a pretty staggering difference.

Re:In other news... (0)

Anonymous Coward | more than 4 years ago | (#29236459)

I don't understand what they are measuring.

I use my card to play videos in all sorts of ways, and NVIDIA is ahead even on older cards using DirectShow etc., at least 2-3x faster than the fastest ATI. I would never recommend an ATI card to my customers; the ATI drivers are even buggier than NVIDIA's...

Re:In other news... (1)

clarkn0va (807617) | more than 4 years ago | (#29236971)

Agreed. My primary use for the NVIDIA GPU is watching HD. Let's do some math.

1080 * sqrt(570) ≈ 25784

I like. Considering even the most basic of today's GPUs, the Ion and Tegra for example, are capable of 1080p, Mr. Nvidia is predicting that my handheld 6 years hence will be able to smoothly decode mkvs and output them in real time to my new UltraMegaFullHD(TM) 25784p TV? Bring on the future!
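The arithmetic there does check out if you assume the entire 570x goes into pixels (my sketch, taking the commenter's premise at face value):

    import math

    # Spend a 570x budget purely on resolution: both dimensions scale by sqrt(570).
    lines = 1080 * math.sqrt(570)
    print(round(lines))   # ~25785 vertical lines -- the "25784p" figure, give or take rounding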

Re:In other news... (2, Informative)

Spatial (1235392) | more than 4 years ago | (#29236733)

Troll mod? No, this is mostly true.

While his example is wrong (Nvidia's competitor to the HD4870 is the GTX 260 c216), AMD do have better value for money on their side. The HD4870 is evenly matched but a good bit cheaper.

The situation is similar in the CPU domain. The Phenom IIs are slightly slower per-clock than the Core 2s they compete with, but are considerably cheaper.

Re:In other news... (3, Informative)

MrBandersnatch (544818) | more than 4 years ago | (#29237113)

Depending on the vendor, it is now possible to get a 275 for less than a 4890, and a 260 for only slightly more than a 4870; at lower price points it's very competitive too. My point is that both NV and ATI are on pretty level ground again, and the ONLY reason I now choose NV over ATI is the superior NV drivers (on both the Linux and Windows side)... oh, and the fact that ATI pulled a fast one on me with their AVIVO performance claims. Shame on you, ATI!

Re:In other news... (3, Interesting)

TeXMaster (593524) | more than 4 years ago | (#29237109)

In other news, ATI is selling their 4870 series cards for $130 on newegg, which are twice as fast as an Nvidia 9800GTS which is the same price (at least on Left 4 Dead, Call of Duty, and any other game that matters). ATI is blowing Nvidia out of the water in terms of performance per dollar and will continue to do so through at least the middle of next year. See here:

http://www.tomshardware.com/charts/gaming-graphics-cards-charts-2009-high-quality/benchmarks,62.html [tomshardware.com]

Yeah, I'd be making outrageous statements too if I were Nvidia.

Even when it comes to GPGPU (general-purpose computing on the GPU), ATI's hardware is much better than NVIDIA's. However, the programming interfaces for ATI suck big time, whereas NVIDIA's CUDA is much more comfortable to code for, and it has an extensive range of documentation and examples that give developers all they need to improve their NVIDIA GPGPU programming. NVIDIA also has much more aggressive marketing.

As a sad result, NVIDIA is often the platform of choice for GPU-based HPC, despite having inferior hardware. And I doubt OpenCL is going to fix this, since it basically standardizes only the low-level API, leaving NVIDIA with its superior high-level API.

Orrr... (0)

Anonymous Coward | more than 4 years ago | (#29236321)

CPUs will start looking more like GPUs... with multiple stream processing units... kind of like the Cell processor...

But how? (3, Insightful)

Anonymous Coward | more than 4 years ago | (#29236333)

I read the article, but I don't see any explanation of how exactly that performance increase will come about. Nor is there any explanation of why GPUs will see the increase but CPUs will not. Anyone have a better article on the matter?

Re:But how? (0)

Anonymous Coward | more than 4 years ago | (#29236471)

Parallelism... and creative accounting.

A GPU with 48 stream processing units counts as one GPU...

A die with 48 cores counts as 48 CPUs.

Re:But how? (3, Insightful)

Spatial (1235392) | more than 4 years ago | (#29236579)

It's Nvidia. Aren't they always saying things like this?

It'll come about because BUY NVIDIA GPUS THEY ARE THE FUTURE, CPU SUX

Re:But how? (0)

Anonymous Coward | more than 4 years ago | (#29236967)

This "CPU SUX" - I'm unfamiliar with it. Is it the tenth SU meaning the older version is SUIX? Was the fifth version SUV? The third version SUIII?

Or does it mean, "System Unified eXtension" ?

I'm older and I can't keep up with all this new technology and acronyms.

Arbitrary number? (0)

forceman130 (1233754) | more than 4 years ago | (#29236383)

I'm just curious how they came up with the 570x number - as opposed to say, 565x or 593x.

Re:Arbitrary number? (5, Funny)

eln (21727) | more than 4 years ago | (#29236483)

The marketing guys originally wanted to say 1000x, but when they ran it past the engineers, the engineers couldn't stop laughing at such a ridiculous assertion. The marketing guys kept lowering the number, but the engineers just couldn't stop laughing. 570x is how low they got before the engineers passed out from laughing so much, which the marketing guys interpreted as agreement.

Re:Arbitrary number? (0)

Anonymous Coward | more than 4 years ago | (#29236909)

Here's the math [xbitlabs.com]

It goes something like this:


2015 Projection

CPU-Alone: 1.2^6 = 3X

CPU+GPU: 50 * 1.????underpants??? = 570X

Re:Arbitrary number? (1)

Twinbee (767046) | more than 4 years ago | (#29236717)

I'm wondering how you came up with 565x and 593x instead of, say, 564.8x or 593.82745109200174822x.

And you know what... (1, Troll)

JustNiz (692889) | more than 4 years ago | (#29236423)

that will be the minimum spec for Windows 2026, even though Windows 2026 won't have any more useful functionality than XP has.

Re:And you know what... (1)

characterZer0 (138196) | more than 4 years ago | (#29236651)

It will at least have the same useful functionality over XP that Server 2008 has: non-admin users can schedule tasks, and PowerShell.

Re:And you know what... (0)

Anonymous Coward | more than 4 years ago | (#29236731)

Holding out for Windows 2600... for nostalgia reasons

Re:And you know what... (0)

Anonymous Coward | more than 4 years ago | (#29237103)

Predictable, lame, FUD. You know damn well you're writing from your XP partition.

Good to know! (5, Insightful)

CopaceticOpus (965603) | more than 4 years ago | (#29236431)

Thanks for the heads up, Nvidia! I'll be sure to hold off for 6 years on buying anything with a GPU.

Re:Good to know! (1)

kramulous (977841) | more than 4 years ago | (#29236999)

That was my immediate thought as well.

We're about to drop $250K on a GPU cluster and if performance increases to that amount in 6 years, why on earth would we buy now?

Dammit, there's just no win when you fork out for clusters (of any kind).

We should spend $50K now, stick the rest into stocks, and buy another $50K every year. Of course, the dudes up the tree don't like that kind of thinking.

put an umbrella on it (0, Troll)

gmermnstinsmermwords (1627107) | more than 4 years ago | (#29236435)

This marks a Huge mile marker in our collective ambitions to compose all modules for the fullframe everlasting nature graphics art rendering systems

So... (3, Funny)

XPeter (1429763) | more than 4 years ago | (#29236439)

I have to wait six years to play Crysis?

Re:So... (1)

Spatial (1235392) | more than 4 years ago | (#29236485)

That Crysis joke is getting old. You can run it easily on a $100 GPU now.

Now Arma 2...

Re:So... (1)

XPeter (1429763) | more than 4 years ago | (#29236539)

That may be so, but I've yet to see a graphics setup that can run it on a 2560x1600 30" monitor at Very High settings with 8x AA enabled.

I looked up the Arma sys req, and it's not as intense as Crysis was for the time.

Now Alan Wake...

And one more? (1)

mustafap (452510) | more than 4 years ago | (#29236499)

> 'Huang also discussed a number of "real-world" GPU applications, including energy exploration, interactive ray tracing and CGI simulations.'"

Add to that 'MD5 collisions, etc.'

GPU coding really is going to separate the men from the boys. I sense a return to the old days, where people had to think about coding, and where brilliant discoveries were made.
( like this: http://en.wikipedia.org/wiki/HAKMEM [wikipedia.org] )

Darn, pity I'm too old now. I'll have a play though...

The math (1)

StreetStealth (980200) | more than 4 years ago | (#29236501)

6 years = 72 months

Moore's Law states a doubling in transistors (but we'll call it performance) at every 18 month interval, so:

72/18 = 4 Moore cycles

2^4 = 32

So in six years, Gordon Moore says we should have 32x the performance we have now.

But it's indeed interesting... Silicon was a much easier-to-predict medium in the 20th Century. And yet here we have these two mature, opposing approaches to silicon-based computing, represented by the CPU and the GPU, with some predicting unprecedented growth for one and stagnation for the other. What will happen? How will hard material and process limitations affect the development of these? Will something exotic like artificial diamond-based ICs disrupt the market? An exciting time is in store for the sector, no doubt.

Re:The math (0)

Anonymous Coward | more than 4 years ago | (#29236791)

2^4=16, actually.

And Moore's law doesn't say shit about performance. It only talks about the transistor density. So, if twice as many transistors means twice the performance, then Moore's law can also be applied to performance. But, if you need 10 times as many transistors to convert a single-cycle processor to a 5-stage pipeline processor, performance might only increase by a factor of 1.5. In general, performance won't double anywhere near as quickly as Moore's law might seem to suggest.
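Putting the corrected numbers in one place (my sketch of the parent's and this reply's arithmetic):

    # Moore's law over 6 years: one doubling every 18 months.
    cycles = 72 / 18      # 4 doublings
    print(2 ** cycles)    # 16.0x the transistors, not 32x
    # Whether 16x the transistors becomes anywhere near 16x the performance
    # depends entirely on how well the extra transistors can be kept busy.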

Re:The math (1)

JJJK (1029630) | more than 4 years ago | (#29237153)

You're right, nothing about performance in that law. But with GPUs, things are a bit different... if you can squeeze twice as many shader units onto the die, you'll probably get almost twice the performance if you stick to the special class of "GPU-compatible" programs (those that need massive parallelism with little synchronization etc).

Though I would have expected the GPU people to use some of those extra transistors to implement double precision and generally make the GPU cores look more like CPU cores (more pipelining, branch prediction) to make them better suited for more complex problems like raytracing.

Re:The math (1)

McNihil (612243) | more than 4 years ago | (#29236845)

Or what he is actually saying is that they (nVidia) will ship more than 9 generations (~9.15) within 6 years... 1.5 generations/year... which I believe is fairly doable, and actually slightly slower than the 6-month release cycle we have been accustomed to since 1998.

In other words "business as usual"
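That generation count falls straight out of the claim if you assume each generation doubles performance (my assumption for the arithmetic, not something Huang stated):

    import math

    generations = math.log2(570)       # doublings needed to reach 570x
    print(f"{generations:.2f}")        # ~9.15 generations
    print(f"{generations / 6:.2f}")    # ~1.53 generations per year over 6 years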

Re:The math (4, Insightful)

BikeHelmet (1437881) | more than 4 years ago | (#29236985)

So in six years, Gordon Moore says we should have 32x the performance we have now.

No - 32x the transistors.

You're not accounting for how using those transistors in a more optimized way (better suited to modern rendering algorithms) will affect performance.

Just think about it - a plain old FPU and SSE4 might use the same number of transistors, but when the code needs to do a lot of fancy stuff at once, one is definitely faster.

(inaccurate example, but you get the idea)

GPUs need more RAM for us (4, Insightful)

Entropius (188861) | more than 4 years ago | (#29236541)

I do high-performance lattice QCD calculations as a grad student. At the moment I'm running code on 2048 Opteron cores, which is about typical for us -- I think the big jobs use 4096 sometimes. We soak up a *lot* of CPU time on some large machines -- hundreds of millions of core-hours -- so making this stuff run faster is something People Care About.

This sort of problem is very well suited to being put on GPUs, since the simulations are done on a four-dimensional lattice (say 40x40x40x96 -- for technical reasons the time direction is elongated) and since "do this to the whole lattice" is something that can be parallelized easily. The trouble is that the GPUs don't have enough RAM to fit everything into memory (which is understandable, they're huge) and communications between multiple GPUs are slow (since we have to go GPU -> PCI Express -> Infiniband).

If Nvidia were to make GPUs with extra RAM (could you stuff 16GB on a card?) or a way to connect them to each other by some faster method, they'd make a lot of scientists happy.
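To put a rough number on "not enough RAM": here is a minimal estimate of just the gauge field for the lattice size mentioned above. The storage layout (four single-precision SU(3) link matrices per site, no compression) is my assumption, not the poster's figures:

    # Rough memory footprint of the gauge field alone for a 40x40x40x96 lattice.
    sites = 40 * 40 * 40 * 96        # 6,144,000 lattice sites
    bytes_per_site = 4 * 9 * 2 * 4   # 4 links x 9 complex entries x 2 floats x 4 bytes
    print(f"{sites * bytes_per_site / 2**30:.2f} GB")   # ~1.65 GB for the gauge field alone
    # Fermion fields, solver workspace, etc. come on top of that, so a ~1 GB
    # card of that era could not hold a full problem of this size.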

Re:GPUs need more RAM for us (0)

Anonymous Coward | more than 4 years ago | (#29236751)

You mean like the Tesla [nvidia.com]?

Re:GPUs need more RAM for us (1)

Entropius (188861) | more than 4 years ago | (#29237085)

That product was actually specifically mentioned in the plenary talk at the 2009 Lattice Gauge Theory conference as the most likely contender for doing QCD on GPUs. It's still got the problem I mentioned, though -- not enough RAM to store everything, and not enough bandwidth to talk to the other units that are storing it.

Re:GPUs need more RAM for us (0)

Anonymous Coward | more than 4 years ago | (#29236763)

NVIDIA's Quadro FX series goes up to 4 GB :P

Re:GPUs need more RAM for us (0)

Anonymous Coward | more than 4 years ago | (#29236831)

Not only for scientists, but for game players.

I dream of the day I can max out GTA4, but I only have 1GB of video RAM.

Re:GPUs need more RAM for us (1)

Chirs (87576) | more than 4 years ago | (#29236837)

Could you grab some motherboards with multiple expansion slots and load them up with dual-GPU boards?

Re:GPUs need more RAM for us (4, Interesting)

Entropius (188861) | more than 4 years ago | (#29237101)

You can -- that's what people are trying now. The issue is that in order for the GPUs to communicate, they've got to go over the PCI Express bus to the motherboard, and then via whatever interconnect you use from one motherboard to another.

I don't know all the details, but the people who have studied this say that PCI Express (or, more specifically, the PCI Express to Infiniband connection) is a serious bottleneck.

Re:GPUs need more RAM for us (1)

ae1294 (1547521) | more than 4 years ago | (#29236889)

If Nvidia were to make GPUs with extra RAM (could you stuff 16GB on a card?) or a way to connect them to each other by some faster method, they'd make a lot of scientists happy.

Do you really need to ask them to do this for you? I'd think if you are a grad student you might be able to get together with some Electrical Engineering students, rig something up, and turn a profit! The only thing you really need to know is how much memory the GPU can address, whether you can get hold of the source for the drivers, etc.

A video card isn't much more than a GPU with memory soldered on to it...

Re:GPUs need more RAM for us (1, Interesting)

Anonymous Coward | more than 4 years ago | (#29237031)

I do a number of molecular dynamics simulations myself, and while computational science on GPUs has been intriguing, for my purposes it's been hampered by the lack of double precision. That may never arrive, as it's not necessary for actual graphics, but if nVidia wants to market to a big cadre of computational scientists, that's what this community would need.

Re:GPUs need more RAM for us (1)

Entropius (188861) | more than 4 years ago | (#29237129)

That too.

It turns out single precision is enough for lattice QCD. The step that requires the most CPU time doesn't need to generate an exact result; it only needs to get close. If the result is too far off then you wind up wasting time, but the result will still be valid.

(This is the Metropolis procedure, if you're familiar with it: the accept/reject step takes care of any computational errors that occur)
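For readers who haven't met it, the accept/reject step looks roughly like this (a generic Metropolis sketch, not the poster's actual code):

    import math
    import random

    def metropolis_accept(delta_action: float) -> bool:
        """Accept a proposed update with probability min(1, exp(-delta_action)).

        The proposal can come from an approximate (e.g. single-precision)
        calculation; as long as delta_action itself is evaluated carefully,
        the accept/reject step keeps the sampled distribution correct. A sloppy
        proposal just lowers the acceptance rate, wasting time rather than
        biasing the result.
        """
        if delta_action <= 0:
            return True
        return random.random() < math.exp(-delta_action)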

Re:GPUs need more RAM for us (0)

Anonymous Coward | more than 4 years ago | (#29237045)

I find this is a common misconception amongst scientists (and some computer scientists). The most effective way to use GPUs is as streaming processors; think big vector computational units. I don't know much about lattice QCD calculations, but if the problem is truly easily parallelized and sections of the volume can be segmented nicely, there is no reason one card needs the full volume. A streaming model hides the GPU-to-GPU latency because, ideally, no GPUs talk to each other; they work in isolation and the results are merged together in a separate pass. So, in short, extra RAM and lower latency should not be too exciting for scientists with the current 1GB+ GPGPUs when their problems have high computational costs.

You might want to check out http://portal.acm.org/citation.cfm?id=1412825
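A toy version of the "segment the volume, work in isolation, merge in a separate pass" idea (my sketch; a real lattice code couples neighbouring sites, so each slab would also need a halo of boundary data, which is exactly where the inter-GPU communication cost creeps back in):

    import numpy as np

    # Split a 4D volume into slabs along the time axis, process each slab
    # independently (a stand-in for one GPU's kernel), then merge the results.
    volume = np.random.rand(40, 40, 40, 96).astype(np.float32)

    def kernel(slab):
        return np.sqrt(slab)   # placeholder per-site computation

    slabs = np.array_split(volume, 8, axis=3)   # e.g. one slab per GPU
    merged = np.concatenate([kernel(s) for s in slabs], axis=3)
    assert merged.shape == volume.shape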

The big question is. (3, Funny)

Dyinobal (1427207) | more than 4 years ago | (#29236729)

Will I need a separate power supply or two to run these new video cards? Or will they include their own fission reactors?

Think about that for a minute... (1)

dbet (1607261) | more than 4 years ago | (#29236913)

I currently run many shooter games at around 60-80 FPS at 1280x1024, and my GPU is hardly at the top end. So in 6 years, I can run a 12,800x10,240 resolution screen (if such a thing exists) at over 300 FPS. Um, sure.
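Taking the commenter's numbers at face value, the implied pixel rate works out like this (my arithmetic):

    # ~70 fps at 1280x1024 today; multiply the pixel throughput by 570.
    future_rate = 1280 * 1024 * 70 * 570                # pixels per second
    print(f"{future_rate / (12800 * 10240):.0f} fps")   # ~399 fps at 12,800x10,240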

We should not buy NVIDIA products until 2014 (0)

Anonymous Coward | more than 4 years ago | (#29236943)

Why bother buying one now when much better stuff is around the corner? ;)

Five years is a long time to wait with graphics cards, but the jump in speed will be worth it!

Brilliant sales pitch (3, Insightful)

Minwee (522556) | more than 4 years ago | (#29237025)

"Did I mention that our next model is going to be SO amazing that you'll think that our current product is crap? The new model will make EVERYTHING obsolete and the entire world will need to upgrade to it when it comes out. People won't even be able to give away any older products. Sooooo... how many of this year's model will you be buying today?

"Hello? Are you still there?

"Hello?"
