Moore's Law Will Die Without GPUs

CmdrTaco posted more than 4 years ago | from the i've-heard-this-song-before dept.

Hardware 250

Stoobalou writes "Nvidia's chief scientist, Bill Dally, has warned that the long-established Moore's Law is in danger of joining phlogiston theory on the list of superseded laws, unless the CPU business embraces parallel processing on a much broader scale."

An observation (5, Informative)

Anonymous Coward | more than 4 years ago | (#32084140)

Moore's is not a law, but an observation!

Re:An observation (5, Funny)

binarylarry (1338699) | more than 4 years ago | (#32084250)

Guy who sells GPUs says if people don't start to buy more GPUs, computers are DOOMED.

I don't know about you, but I'm sold.

Re:An observation (2, Insightful)

Pojut (1027544) | more than 4 years ago | (#32084314)

Yeah, pretty much this. It's akin to the oil companies advertising the fact that you should use oil to heat your home...otherwise, you're wasting money!

Heat and power consumption. (4, Insightful)

jwietelmann (1220240) | more than 4 years ago | (#32085088)

Wake me up when NVIDIA's proposed solution doesn't double my electrical bill and set my computer on fire.

Re:An observation (0)

Anonymous Coward | more than 4 years ago | (#32084282)

Just as Murphy's Law is...

But, "Moore's Law" rolls off the tongue much better than "Moore's Observation", or "Moore's Hypothesis"

Re:An observation (3, Insightful)

maxume (22995) | more than 4 years ago | (#32084312)

It is also a modestly self-fulfilling prediction, as planners have had it in mind as they were setting targets and research investments.

Re:An observation (5, Informative)

TheRaven64 (641858) | more than 4 years ago | (#32084372)

It's also not in any danger. The law states that the number of transistors on a chip that you can buy for a fixed investment doubles every 18 months. CPUs remaining the same speed but dropping in price would continue to match this prediction, as would things like SoCs gaining more domain-specific offload hardware (e.g. crypto accelerators).
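
For a back-of-the-envelope feel for that formulation, here's a tiny sketch (the 18-month doubling period and the 1-million baseline are just assumed for the example, not Moore's exact figures):

    #include <cmath>
    #include <cstdio>

    // Transistors you get for a fixed budget after `years`, assuming the
    // count doubles every 18 months from an arbitrary baseline of 1 million.
    double transistors_for_fixed_cost(double years) {
        const double baseline = 1e6;
        const double doubling_period_years = 1.5;
        return baseline * std::pow(2.0, years / doubling_period_years);
    }

    int main() {
        for (int y = 0; y <= 9; y += 3)
            std::printf("year %d: ~%.0f transistors for the same money\n",
                        y, transistors_for_fixed_cost(y));
        return 0;
    }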

Re:An observation (3, Insightful)

hitmark (640295) | more than 4 years ago | (#32084928)

yep, the "law" basically results in one of two things, more performance for the same price, or same performance for cheaper price.

thing is tho that all of IT is hitched on the higher margins the first option produces, and do not want to go the route of the second. The second however is what netbooks hinted at.

The IT industry is used to be boutique pricing, but is rapidly dropping towards commodity.

Re:An observation (2, Insightful)

Bakkster (1529253) | more than 4 years ago | (#32085778)

The law states that the number of transistors on a chip that you can buy for a fixed investment doubles every 18 months. CPUs remaining the same speed but dropping in price would continue to match this prediction as would things like SoCs gaining more domain-specific offload hardware (e.g. crypto accelerators).

Actually, parallel processing is completely external to Moore's Law, which refers only to transistor quantity/size/cost, not what they are used for.

So while he's right that for CPU makers to continue to realize performance benefits, parallel computing will probably need to become the norm, it doesn't depend upon nor support Moore's Law. We can continue to shrink transistor size, cost, and distance apart without using parallel computing; similarly by improving speed with multiple cores we neither depend upon nor ensure any improvement in transistor technology.

Re:An observation (1)

contrapunctus (907549) | more than 4 years ago | (#32084650)

It's a perfect example of a law. It offers no explanation, and it predicts. Take Newton's laws of motion: they are just observations too, in the same sense you're using the word.

Re:An observation (1)

Opportunist (166417) | more than 4 years ago | (#32085258)

So? Welcome to science!

A whole lot of "laws" were formulated and used, considered correct and useful until at one day they were proven incorrect. Considering how insignificant Moore's law is when it comes to the scientific community, I could think of worse contradictions.

Re:An observation (0)

Anonymous Coward | more than 4 years ago | (#32084816)

It's a purely economic law, not a technical or physical law. It simply states that chip manufacturers apparently need their chips to be a certain percentage better than their predecessors', otherwise consumers will walk over to a competitor that can offer it. So a more honest statement by Nvidia would have been "we want computer manufacturers to believe that they can gain an edge over their competitors by buying our products".

What's a 'law'? (2, Insightful)

MoellerPlesset2 (1419023) | more than 4 years ago | (#32085322)

Well, no, Moore's Law was never passed by any legislative authority, no.

As for a scientific law, 'laws' in science are like version numbers in software:
there's no agreed-upon definition whatsoever, but for some reason people still seem to attribute massive importance to them.

If anything a 'law' is a scientific statement that dates from the 18th or 19th century, more or less.
Hooke's law is an empirical approximation.
The Ideal Gas law is exact, but only as a theoretical limit.
Ohm's law is actually a definition (of resistance).
The Laws of Thermodynamics are (likely) the most fundamental properties of nature that we know of.

The only thing these have in common is that they're from before the 20th century, really.

Re:An observation (1)

WalkingBear (555474) | more than 4 years ago | (#32085602)

THIS! ^^^

Moore's "law" is a description of a trend in development of technologies. It is not a "law" that governs the physical world, nor is it a a legislative 'Law' that governs people's actions.

You know what will happen if Moore's Law is no longer an accurate predictor of technological growth? Nothing. That's what. A new rule of thumb will come about based on the new growth curves.

CPU's are not holding back Moore's Law (-1, Troll)

Anonymous Coward | more than 4 years ago | (#32084152)

MICROSOFT is. Locked down, proprietary operating systems is. Proprietary, closed source software is.

Open source software isn't. The code is open, so people can inspect the code and see where Moore's Law is being broken, and then fix it.

FAIL (1)

pastafazou (648001) | more than 4 years ago | (#32084246)

I don't know if you're trying to be funny or not with this post, so you earned a FAIL.

Re:FAIL (0)

Anonymous Coward | more than 4 years ago | (#32084352)

Your meta-comment = off-topic = also FAIL. My meta-meta-comment is +5 fantastic though.

Whatever you say, Microsoft Shill (0)

Anonymous Coward | more than 4 years ago | (#32084620)

So, how much are they paying you?

Moderator FAIL (0)

Anonymous Coward | more than 4 years ago | (#32084418)

Come on, that was funny! The poor soul obviously knew you moderators have been out for blood, oh, these last many years or so.

I am The Law (5, Informative)

Mushdot (943219) | more than 4 years ago | (#32084166)

I didn't realise Moore's Law was purely the driving force behind CPU development and not just an observation on semiconductor development. Surely we just say Moore's Law held until a certain point, then someone else's Law takes over?

As for Phlogiston theory - it was just that, a theory which was debunked.

Re:I am The Law (1)

Anonymous Coward | more than 4 years ago | (#32084272)

Moore's law was an observation and prediction that became a self-fulfilling prophecy.

Moore noticed the current trend and said that we could double the number of transistors on a chip for another decade (putting us into the 70s). Chip-makers, however, worked hard to 'keep up', and Moore's law was the metric everyone seemed to use for what 'keeping up' meant. Interestingly, Moore's law has nothing to do with processing speed, which it is usually used in reference to, but only with the number of transistors fit onto a chip.

Technology development vs. natural laws (1)

gwolf (26339) | more than 4 years ago | (#32085708)

Moore's law describes the human ability to develop better processes, leading to better miniaturization and the printing of higher-density transistors in smaller spaces. It is not a law that concerns natural processes, obviously -- and although it does hold true for now, it is bound to reach an end of life.

Moore's law will not be debunked, but we will surely go past it sooner or later. We cannot keep shrinking transistor size forever, as molecules and atoms give us an absolute minimum size, and upon reaching it, no law will replace Moore's - that will be it.

Objectivity? (4, Insightful)

WrongSizeGlass (838941) | more than 4 years ago | (#32084178)

Dr. Dally believes the only way to continue to make great strides in computing performance is to ... offload some of the work onto GPUs that his company just happens to make? [Arte Johnson] Very interesting [wikipedia.org].

The industry has moved away from "more horsepower than you'll ever need!" to "uses less power than you can ever imagine!" Perpetuating Moore's Law isn't an industry requirement; it's a prediction by a guy who was in the chip industry.

Re:Objectivity? (4, Interesting)

Eccles (932) | more than 4 years ago | (#32084390)

The industry has moved away from "more horsepower than you'll ever need!" to "uses less power than you can ever imagine!"

As someone who still spends way too much time waiting for computers to finish tasks, I think there's still room for both. What we really want is CPUs that are lightning-fast and likely multi-parallel (and not necessarily low-power) for brief bursts of time, and low-power the rest of the time.

My CPU load (3 GHz Core 2 Duo) is at 60% right now thanks to a build running in the background. More power, Scotty!

Re:Objectivity? (1)

wwfarch (1451799) | more than 4 years ago | (#32084598)

If your CPU load is only 60% do you really need more power? I frequently top out at 100% and definitely need more power to handle those peaks. A CPU load of 60% doesn't show that same need though. Obviously this is just a snapshot in time and you may very well hit 100% frequently too.

Re:Objectivity? (1)

ShadowRangerRIT (1301549) | more than 4 years ago | (#32085112)

It shows he needs faster hard disks and/or more disks (so his build is reading from one set and writing to another, never reading and writing to the same disk). Could also mean he needs more memory, or faster memory (though memory speed is unlikely to be the issue here). Either that or he needs a compiler that isn't a twenty year old mostly single-threaded piece of crap.

Re:Objectivity? (1)

JasterBobaMereel (1102861) | more than 4 years ago | (#32085058)

Your CPU spends the vast majority of its time waiting ... or doing stuff that the operating system thinks is important and you don't ...

If your CPU is not at 100% then the lag is not due to the CPU

Re:Objectivity? (1)

Profound (50789) | more than 4 years ago | (#32085216)

Get faster disks till your CPU is at 100% if you want a faster build.

Re:Objectivity? (4, Insightful)

PitaBred (632671) | more than 4 years ago | (#32085500)

If your CPU is running at 60%, you need more or faster memory, and faster main storage, not a faster CPU. The CPU is being starved for data. More parallel processing would mean that your CPU would be even more underutilized.

Closed source computation won't fly (2, Insightful)

Morgaine (4316) | more than 4 years ago | (#32085510)

Perhaps nVidia's chief scientist wrote his piece because nVidia wants its very niche CUDA/OpenCL computational offering to expand and become mainstream. There's a problem with that though.

The computational ecosystems that surround CPUs can't work with hidden, undocumented interfaces such as nVidia is used to producing for graphics. Compilers and related tools hit the user-mode hardware directly, while operating systems fully control every last register on CPUs at supervisor level. There is no room for nVidia's traditional GPU secrecy in this new computational area.

I rather doubt that the company is going to change its stance on openness, so Dr. Dally's statement opens up the parallel computing arena very nicely to its traditional rival ATI, which under AMD's ownership is now a strongly committed open-source company.

Nvidia says GPUs are the future? (5, Insightful)

iYk6 (1425255) | more than 4 years ago | (#32084190)

So, a graphics card manufacturer says that graphics cards are the future? And this is news?

Re:Nvidia says GPUs are the future? (5, Funny)

Rogerborg (306625) | more than 4 years ago | (#32084262)

So, a graphics card manufacturer says that graphics cards are the future? And this is news?

THIS! IS! SLASHDOT!

Re:Nvidia says GPUs are the future? (0, Offtopic)

FauxPasIII (75900) | more than 4 years ago | (#32084446)

Ah, what a miracle, to witness the birth of a new Slashdot cliche.

Re:Nvidia says GPUs are the future? (1)

Arakageeta (671142) | more than 4 years ago | (#32085330)

Just a few years behind the meme...

Re:Nvidia says GPUs are the future? (1)

Opportunist (166417) | more than 4 years ago | (#32085294)

Oh boy. I can already see the YouTube videos popping up...

Re:Nvidia says GPUs are the future? (1)

edxwelch (600979) | more than 4 years ago | (#32084596)

1. Nvidia marketing guy makes ambitious claim about GPUs being the future.
2. Slashdot readers debunk claim, posting several +5 Funny remarks and ridiculing it.
4. Nvidia marketing guy runs away shamed, swearing never to make false claims again.
5. Problem solved!

Re:Nvidia says GPUs are the future? (5, Informative)

Anonymous Coward | more than 4 years ago | (#32084782)

Marketing guy?

Before going to Nvidia maybe two years ago, Bill Dally was a professor in (and the chairman of) the computer science department at Stanford. He's a fellow of the ACM, IEEE, and AAAS.

    http://cva.stanford.edu/billd_webpage_new.html

You might criticize this position, but don't dismiss him as a marketing hack. NVidia managed to poach him from Stanford to become their chief scientist because he believed in the future of GPUs as a parallel processing tool, not that he began drinking the kool-aid because he had no other options.

Re:Nvidia says GPUs are the future? (0)

Anonymous Coward | more than 4 years ago | (#32084960)

There's no ??? or profit in that.
Hand in your Underpants Gnome badge and GTFO!

Moores law will apply until it doesn't (5, Informative)

91degrees (207121) | more than 4 years ago | (#32084216)

But the only "law" is that the number of transistors doubles in a certain time (something of a self fulfilling prophesy these days since this is the yardstick the chip companies work to).

Once transistors get below a certain size, of course it will end. Parallel or serial doesn't change things. We either have more processors in the same space, more complex processors or simply smaller processors. There's no "saving" to be done.

Re:Moores law will apply until it doesn't (1)

FlyingBishop (1293238) | more than 4 years ago | (#32084544)

There's plenty of money to be saved.

Re:Moores law will apply until it doesn't (3, Insightful)

camg188 (932324) | more than 4 years ago | (#32084856)

We either have more processors in the same space...

Hence the need to embrace parallel processing. But the trend seems to be heading toward multiple low power RISC cores, not offloading processing to the video card.

Re:Moores law will apply until it doesn't (3, Informative)

hitmark (640295) | more than 4 years ago | (#32085122)

But parallel is not a magic bullet. Unless you can chop the data being worked on into independent parts that do not influence each other, or do so only minimally, the task is still more or less linear and will be done at single-core speed.

The only benefit for most users is that they're more likely to be able to keep doing something while other, unrelated tasks run in the background. But if each task wants to hit the storage media, you're still sunk.
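
To make that concrete, here's a minimal C++ sketch (my own illustration, not anything from TFA): a sum splits cleanly across cores because the partial results are independent, while a chain where each step feeds the next gains nothing from extra cores.

    #include <future>
    #include <numeric>
    #include <vector>

    // Independent halves: two cores can genuinely work at once.
    long parallel_sum(const std::vector<long>& v) {
        auto mid = v.begin() + v.size() / 2;
        auto lower = std::async(std::launch::async,
                                [&] { return std::accumulate(v.begin(), mid, 0L); });
        long upper = std::accumulate(mid, v.end(), 0L);
        return lower.get() + upper;
    }

    // Each step depends on the previous result: inherently serial, so it
    // runs at single-core speed no matter how many cores are available.
    long serial_chain(const std::vector<long>& v) {
        long x = 0;
        for (long e : v) x = x * 31 + e;
        return x;
    }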

Re:Moores law will apply until it doesn't (1)

icebraining (1313345) | more than 4 years ago | (#32085306)

GPU offloading has already appeared, in the form of GPGPU. For example, Windows 7 can perform video transcoding using GPGPU [techeta.com] on the Ion.

No, it's not as useful as a general-purpose chip like the CPU, but with software support it can speed up some tasks considerably.

inevitable (4, Insightful)

pastafazou (648001) | more than 4 years ago | (#32084228)

Considering that Moore's Law was based on the observation that they were able to double the number of transistors about every 20 months, it was inevitable that at some point they would reach a limiting factor. That factor seems to be the process size, which is a physical barrier. As the process size continues to decrease, the physical size of atoms is a barrier they can't get past.

Re:inevitable (0)

Anonymous Coward | more than 4 years ago | (#32084278)

yet.

Re:inevitable (3, Interesting)

Junior J. Junior III (192702) | more than 4 years ago | (#32084392)

At some point, they'll realize that instead of making the die features smaller, they can make the die larger. Or three-dimensional. There are problems with both approaches, but they'll be able to continue doubling transistor count if they figure out how to do this, for a time.

Re:inevitable (1)

vlm (69642) | more than 4 years ago | (#32084568)

but they'll be able to continue doubling transistor count if they figure out how to do this, for a time.

32nm process is off the shelf today. Silicon lattice spacing is 0.5 nm. A single-atom "crystal" leaves a factor of 60 possible. Realistically, I think they're stuck at one order of magnitude.

At best, you could increase CPU die size by two orders of magnitude before the CPU was bigger than my phone or laptop.

Total 3 orders of magnitude. 2^10 is 1024. So, we've got, at most, 10 more doublings left.

Re:inevitable (1)

PrFirmin (1804054) | more than 4 years ago | (#32084914)

How would you cool this? The bigger your CPU die, the more heat.

Re:inevitable (1)

PitaBred (632671) | more than 4 years ago | (#32085538)

Who says we have to keep using silicon?

Notice (1)

Zoidbot (1194453) | more than 4 years ago | (#32084304)

That it seems you deliberately avoided mentioning the Cell BE...

Umm? (5, Insightful)

fuzzyfuzzyfungus (1223518) | more than 4 years ago | (#32084306)

Obviously "NVIDIA's Chief Scientist" is going to say something about the epochal importance of GPUs; but WTF?

Moore's law, depending on the exact formulation you go with, posits either that transistor density will double roughly every two years or that density at minimum cost/transistor increases at roughly that rate.

It is pretty much exclusively a prediction concerning IC fabrication (a business that NVIDIA isn't even in; TSMC handles all of their actual fabbing), without any reference to what those transistors are used for.

Now, it is true that, unless parallel processing can be made to work usefully on a general basis, Moore's law will stop implying more powerful chips and just start implying cheaper ones (since, if the limits of effective parallel processing mean that you get basically no performance improvement going from X billion transistors to 2X billion transistors, Moore's law will continue; but instead of shipping faster chips each generation, vendors will just ship smaller, cheaper ones).

In the case of servers, of course, the amount of cleverness and fundamental CS development needed to make parallelism work is substantially lower. If you have an outfit with 10,000 Apache instances, or 5,000 VMs, or something, they will always be happy to have more cores per chip, since that means more Apache instances or VMs per chip, which means fewer servers (or the same number of single/dual-socket servers instead of much more expensive quad/octal-socket servers), even if each instance/VM uses no parallelism at all and just sits at one core = one instance.

Re:Umm? (1)

senorbum (1795816) | more than 4 years ago | (#32084794)

At some point vendors will reach a barrier as well, though. They can't ship smaller chips once they start hitting various atomic size limits. Also, one of the biggest issues is that in day-to-day computing, programs still aren't written to use multiple cores well even when they could be. And many applications won't benefit from using more than 1-4 cores on a CPU, so throwing thousands at them isn't really going to solve anything.

Re:Umm? (2, Interesting)

fuzzyfuzzyfungus (1223518) | more than 4 years ago | (#32085526)

Certainly, there are challenges to Moore's law, either fundamental physics or sheer manufacturing difficulty; but they have nothing to do with what the transistors are for (aside from modest differences if the issues have to do with manufacturing difficulties: if your 10nm process is plagued by high defect rates, it is probably easier to build SRAM, with tiny functional blocks, test for bad ones, encode the bad block addresses in a little onboard ROM, and have the motherboard BIOS do some remapping tricks to avoid using those, than it is to build CPUs, with large functional blocks, and get pitiful yields).

As for applications, there are definitely huge numbers of them that will see little or no benefit from more cores (either because their devs are lazy/incompetent, or because customers won't pay enough to justify the greater cost of dealing with hairy parallelism bugs, or because they depend on algorithms that are fundamentally linear). However, because of servers and virtualization, demand for more cores should continue unabated on the high end for as long as vendors are able to deliver. If your enterprise has tens or hundreds of thousands of distinct processes, or tens of thousands of distinct VMs, you already possess a crude sort of parallelism, even if every single one of those is dumb as a rock and can only make use of a single core.

Who would have thunk it (3, Interesting)

nedlohs (1335013) | more than 4 years ago | (#32084322)

Guy at company that does nothing but parallel processing says that parallel processing is the way to go.

Moore's law has to stop at some point. It's an exponential function, after all. Currently we are in the 10^6 range (2,000,000 or so), while our lower estimates for atoms in the universe are 10^80.

(80 - 6) * (log(10)/log(2)) = 246.

So clearly we are going to run into some issues with this doubling thing sometime in the next 246 doubles...

Re:Who would have thunk it (2, Insightful)

bmo (77928) | more than 4 years ago | (#32084516)

Parallel processing *is* the way to go if we ever desire to solve the problem of AI.

Human brains have a low clock speed, and each processor (neuron) is quite small, but there are a lot of them working at once.

Just because he might be biased doesn't mean he's wrong.

--
BMO

Re:Who would have thunk it (1)

nedlohs (1335013) | more than 4 years ago | (#32084998)

I don't care about AI (he says ignoring that his PhD dissertation was in the fringe of god-damn-AI)...

I actually do agree with his fundamental claim; that doesn't change the fact that you need to find someone else to say it if you want an article that isn't just PR.

Re:Who would have thunk it (1)

Waffle Iron (339739) | more than 4 years ago | (#32085756)

Human brains have a low clock speed, and each processor (neuron) is quite small, but there are a lot of them working at once.

The brain's trillions of 3D interconnections blow away anything that has ever been produced on 2D silicon.

Current parallel processing efforts are hardly interconnected at all, with interprocessor communication being a huge bottleneck. In that sense, the brain is much less parallel than it seems. Individual operations take place in parallel, but they can all intercommunicate simultaneously to become a cohesive unit.

To match the way the brain takes advantage of lots of logic units, current computer architecture designs and software would all have to be tossed in the trash (including silly GPUs), and the effort would have to start from scratch.

Re:Who would have thunk it (1)

FlyingBishop (1293238) | more than 4 years ago | (#32084562)

The universe regresses infinitely towards smaller and smaller particles. Behind atoms we find electrons, behind electrons we find quarks. Probably we will find some issues within 246 more doubles. But who can say?

Infinite? (2, Informative)

sean.peters (568334) | more than 4 years ago | (#32085270)

The universe regresses infinitely towards smaller and smaller particles. Behind atoms we find electrons, behind electrons we find quarks.

Dude, this is clearly some sense of the word "infinite" of which I haven't been previously aware. A couple things: 1) atoms -> electrons -> quarks is three levels, which is not exactly infinity. 2) I'm not sure if this is what you meant, but electrons are not made of quarks. They're truly elementary particles. 3) No one thinks there's anything below quarks - the Standard Model may have some issues, but no one seriously questions the elementary status of quarks. 4) you can't do anything with quarks anyway - practically speaking, you can't even see an individual quark. They're tightly bound to each other in the form of hadrons.

I think that in practice, we're going to run into problems before we even get to the level of atoms. Lithographic processes can only get you so far - we're already into the extreme ultraviolet, so to get smaller features we're going to have to start getting into x-rays/gamma rays, which have rather unfortunate health and safety issues associated with them, not to mention the difficult engineering problems involved in generating tightly focused beams. And even if you can solve that problem, you have to deal with noise introduced by electrons just leaking from one lead to another. I think 246 doublings is way, way generous.

Re:Who would have thunk it (2, Informative)

wwfarch (1451799) | more than 4 years ago | (#32084628)

Nonsense. We'll just build more universes

Re:Who would have thunk it (0)

cgenman (325138) | more than 4 years ago | (#32084682)

I had always heard estimates of Moore's Law breaking down sometime in 2006 or so, which means we're on borrowed time.

Of course, what he really meant was that to keep improving the power of modern CPUs, we need to massively parallelize them. And he's right. Of course, that's also the direction Intel, AMD, and the other primary CPU makers have been going.

Re:Who would have thunk it (1)

nedlohs (1335013) | more than 4 years ago | (#32084938)

I don't disagree, but if you are going to put up a story on it at least find one written by someone with just a little less bias.

I'm sure there are lots, since it's a pretty obvious fact that you get more bang for your buck (and, maybe more importantly, for your power) from more, less powerful units in parallel than from fewer big units. Well, if the programmers would get with the damn program, anyway. Someone not writing PR for a GPU company must have written one...

Re:Who would have thunk it (1)

TheThiefMaster (992038) | more than 4 years ago | (#32085044)

CPUs with a "feature size" of about 22nm are currently in development. A silicon atom is 110pm across, with the largest stable atoms being about 200 pm. In other words, CPU features are currently about 100-200 atoms across. Can't increase too many more times before that becomes a problem...

HDLs (1)

TerranFury (726743) | more than 4 years ago | (#32084326)

Sometimes I think that parallel programming isn't a "new challenge" but rather something that people do every day with VHDL and Verilog...

(Insert your own musings about FPGAs and CPUs and the various tradeoffs to be made between these two extremes.)

Re:HDLs (1)

imgod2u (812837) | more than 4 years ago | (#32084606)

The relative complexity of a C++ program vs what someone can realistically do in HDL is vastly different. Try coding Office in HDL and watch as you go Wayne Brady on your computer.

Re:HDLs (0)

Anonymous Coward | more than 4 years ago | (#32084728)

While it is true that FPGA development is fundamentally parallel in nature, a significant amount of work goes into creating state machines because some things simply have to be done sequentially. It's just the way it is, get over it.
And I really like the soft core processors that have been developed for FPGAs, and the higher end Xilinx devices with the PowerPC cores on top of the FPGA are really cool as well. I would personally like to see more work in that direction. Combining hard processors with FPGAs. I think we could speed up a lot of things by moving in that direction (of course on a multi-tasking system you would have to be careful not to step on yourself with re-programming the FPGA and stuff).

Re:HDLs (1)

tchuladdiass (174342) | more than 4 years ago | (#32085316)

The solution for multi-tasking & FPGAs is to have multiple FPGA sub-units available, and to limit access to them to device drivers running in kernel space. Need a new FPGA program? Load a driver for it -- if no more FPGA units are available, the driver doesn't load.

Re:HDLs (1)

Kamots (321174) | more than 4 years ago | (#32084876)

Most modern CPUs, and the compilers for them, are simply not designed for multiple threads/processes to interact with the same data. As an exercise, try writing a lockless single-producer single-consumer queue in C or C++. If you could make the same assumption in this two-thread example that you can make in a single-threaded program, namely that the perceived order of operations is the order in which they're coded, it'd be a snap.

But you see, once you start playing with more than one thread of execution, you gain visibility into both CPU reordering and compiler reordering. You also gain visibility into optimizations (such as values being kept in a register and never written out to cache, or invented predictive stores and the like). If you research enough you'll find that while the volatile keyword solves some of the problems, it doesn't solve them all, and it introduces others (it works well for what it's designed for, which is interfacing with hardware; if it's being used for inter-thread comms, it's being misused). You wind up needing architecture-specific memory barriers/fences to instruct the CPU about reordering and when to flush store buffers to cache, and so on. You wind up needing compiler-specific constructs to prevent it from reordering, or from keeping things in registers that you don't want kept there. (Volatile is often used for the latter; note that while volatile variables won't be reordered around each other, the standard says nothing about reordering non-volatile accesses around the volatile ones. Also, it bypasses the cache, which in x86-land introduces CPU reordering that otherwise isn't there (as I think volatile winds up being implemented using CLFLUSH?), as well as unnecessary performance hits -- and performance is evidently important if you're trying to avoid locks.)

Atomicity is a whole different level of fun as well. I was lucky: at the boundary I was dealing with inherently atomic operations (well, so long as I have my alignment correct, which isn't guaranteed by new), but if you're not... it's yet more architecture-specific code.
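
For the curious, here's roughly what that exercise boils down to once you have explicit atomics with acquire/release semantics to lean on (a minimal, illustrative single-producer/single-consumer ring buffer of my own, not production code; it assumes exactly one producer thread and one consumer thread):

    #include <atomic>
    #include <cstddef>

    // Minimal SPSC ring buffer. One thread calls push(), a different
    // single thread calls pop(); the acquire/release pairs do the
    // cross-thread ordering that 'volatile' cannot guarantee.
    template <typename T, std::size_t N>
    class SpscQueue {
        T buf_[N];
        std::atomic<std::size_t> head_{0};  // advanced by the consumer
        std::atomic<std::size_t> tail_{0};  // advanced by the producer
    public:
        bool push(const T& item) {
            std::size_t t = tail_.load(std::memory_order_relaxed);
            std::size_t next = (t + 1) % N;
            if (next == head_.load(std::memory_order_acquire))
                return false;                              // queue full
            buf_[t] = item;
            tail_.store(next, std::memory_order_release);  // publish the slot
            return true;
        }
        bool pop(T& out) {
            std::size_t h = head_.load(std::memory_order_relaxed);
            if (h == tail_.load(std::memory_order_acquire))
                return false;                              // queue empty
            out = buf_[h];
            head_.store((h + 1) % N, std::memory_order_release);
            return true;
        }
    };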

Re:HDLs (1)

PhilHibbs (4537) | more than 4 years ago | (#32085266)

Atomicity is a whole different level of fun as well. I was lucky, at the boundary I was dealing with inherently atomic operations (well, so-long as I have my alignment correct, (not guaranteed by new)), but if you're not... it's yet more architecture-specific code.

That's also the main complication that I raise when the conversation comes around to personality uploading - the brain is a clockless system with no concept of atomicity at all. How do you take a "snapshot" of that?

Stupid CPU makers (0)

Anonymous Coward | more than 4 years ago | (#32084334)

If they were smart, they'd make processors that could execute multiple threads in parallel. Personally, I'd use a flashy name such as "multi-core" or something.

Re:Stupid CPU makers (2, Funny)

PhongUK (1301747) | more than 4 years ago | (#32084404)

PATENT THAT NOW!

Amending the Law (1)

drumcat (1659893) | more than 4 years ago | (#32084412)

Maybe it's time to amend Moore's Law to also account for a Wattage divisor. Same computing power for half the battery drain would be good...

Moore's Rule (1)

KiwiCanuck (1075767) | more than 4 years ago | (#32084440)

It's a rule, not a law. Moore's rule is based on certain assumptions, and we have pushed to the point where some of those assumptions are no longer valid. The limiting factor in Moore's time was the feature size: halving the size and scaling the other parameters accordingly would result in a doubling of the switching speed.

However, we hit a limit with the gate oxide. The gate oxide needs to be halved as well, and eventually you run low on the number of atoms in the gate layer. For example, when you're down to 3 atoms, you cannot divide by 2. The solution to this problem was Hf dioxide. This was also the cause/solution to the increasing static power draw.

The next problem will be in the band gaps. When you shrink the width of the transistor/switch there are fewer and fewer atoms (naturally), and you eventually get to a point where the band gap is no longer well defined. Each atom has its own relative band gap (in a bulk crystal). The summation of atoms gives a clear band gap, but there is a minimum number of atoms that creates this scenario. I don't remember exactly, but 100 atoms seems to be in the back of my mind (100 could be wrong).

IMHO, it is time to take parallel processing seriously.

Consider the source, folks... (1)

Millennium (2451) | more than 4 years ago | (#32084460)

Seriously. The headline for this should read "Moore's Law Will Die Without GPUs, Says GPU Maker."

Or, to put it another way, the GPU maker keeps invoking Moore's Law, but I do not think it means what he thinks it means. You can't double semiconductor density by increasing the number of chips involved.

Let's not play fast-and-loose with the word "law." (4, Informative)

dpbsmith (263124) | more than 4 years ago | (#32084462)

I'm probably being overly pedantic about this, but of course the word "law" in "Moore's Law" is almost tongue-in-cheek. There's no comparison between a simple observation that some trend or another is exponential--most trends are over a limited period of time--and a physical "law." Moore is not the first person to plot an economic trend on semilog paper.

There isn't even any particular basis for calling Moore's Law anything more than an observation. New technologies will not automatically come into being in order to fulfill it. Perhaps you can call it an economic law--people will not bother to go through the disruption of buying a new computer unless it is 30% faster than the previous one, therefore successive product introductions will always be 30% faster, or something like that.

In contrast, something like "Conway's Law"--"organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations"--may not be in the same category as Kepler's Laws, but it is more than an observation--it derives from an understanding of how people work in organizations.

Moore's Law is barely in the same category as Bode's Law, which says that "the radius of the orbit of planet #N is 0.4 + 0.3 * 2^(N-1) astronomical units, if you call the asteroid belt a planet, pretend that 2^-1 is 0, and, of course, forget Pluto, which we now do anyway."

Of course Nvidia would say that... (1)

Lord Byron Eee PC (1579911) | more than 4 years ago | (#32084464)

Of course, Nvidia says that GPUs are the answer. But then again, as a scientific software developer, I completely agree with him. In fact, I'm in the middle of a grant proposal right now to purchase a series of GPUs to complement our CPU based cluster.

How to solve it - rename "Moore's Law"? (3, Insightful)

IBBoard (1128019) | more than 4 years ago | (#32084474)

Moore's Law isn't exactly "a law". It isn't like "the law of gravity" where it is a certain thing that can't be ignored*. It's more "Moore's Observation" or "Moore's General Suggestion" or "Moore's Prediction". Any of those are only fit for a finite time and are bound to end.

* Someone's bound to point out some weird branch of Physics that breaks whatever law I pick or says it is wrong, but hopefully gravity is quite safe!

Re:How to solve it - rename "Moore's Law"? (1)

jackpot777 (1159971) | more than 4 years ago | (#32085080)

...and it ran out in the mid-1970s (it was a ten-year prediction made in 1965).

The end of Moore's Law would be good (1)

gweihir (88907) | more than 4 years ago | (#32084512)

It would mean that development cycles slow down, algorithmics finally win over brute force and that software quality would have a chance to improve (after going downhill for a long time).

GPUs as CPUs? Ridiculous! Practically nobody can program them and very few problems benefit from them. This sounds more like Nvidia desperately trying to market their (now substandard) products.

Moore's Law (1)

Arancaytar (966377) | more than 4 years ago | (#32084526)

What the hell does Moore's Law have to do with parallel computing?

It is concerned with the number of transistors on a single chip. Moore's Law has been dead in practical terms for a while (smaller transistors are too expensive / require too much power and cooling), which is the reason parallel computing is becoming essential in the first place.

TFA fails computer science history forever.

"warned" (0)

Anonymous Coward | more than 4 years ago | (#32084572)

A GPU maker has "warned" us that the rapid pace of obsolescence will decline if we don't use more GPUs? Amazing.

Re:"warned" (0)

Anonymous Coward | more than 4 years ago | (#32084884)

**headdesks at the article**

This reminds me of that "doctor" who was being sponsored by Unilever (in case you're not English, Unilever is a parent company that makes a shitload of products, including butter replacements). This "doctor" wanted to ban butter. That article had a lot of comments similar to yours.

http://www.telegraph.co.uk/health/healthnews/7010677/Ban-butter-to-save-our-hearts-says-doctor.html

I think he has dropped the sponsorship, since I can no longer access the story about him being sponsored by Unilever; it was in the comments.

This is notwithstanding that he's a HEART SURGEON and thus has no dietary/nutrition qualifications. Also, a lot of spreads contain hydrogenated fats, AKA trans fats, which are considered a hazard to general health by nutritionists.

GPUs are hardly in better shape (1)

DrXym (126579) | more than 4 years ago | (#32084818)

Look at any modern GPU and it's trying to shoehorn general-purpose computing functionality into an architecture designed as a graphics pipeline. The likes of CUDA, OpenCL, and DirectCompute may be a useful way to tap extra functionality, but IMO it's still a hack. The CPU has to load up the GPU with a program and execute it almost as if it's a scene, using shaders that actually crunch numbers.

Aside from being a bit of a hack, there are three competing APIs, and some of them are tied to certain combinations of operating system and hardware. I wonder why AMD or Intel haven't produced something analogous to the Cell - a mainstream multicore CPU that also contains a bunch of SPUs purpose-built to blaze through data as fast as possible.
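
To illustrate the "load up the GPU with a program" point, this is roughly the host-side ceremony OpenCL requires for even a trivial kernel (an abridged sketch of my own, with all error checking omitted; the kernel and function names are made up for the example):

    #include <CL/cl.h>

    // OpenCL kernel source, compiled at runtime and shipped to the device.
    static const char* src =
        "__kernel void twice(__global float* a) {"
        "    int i = get_global_id(0);"
        "    a[i] = 2.0f * a[i];"
        "}";

    void run_twice(float* data, size_t n) {
        cl_platform_id platform; cl_device_id device; cl_int err;
        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
        cl_command_queue q = clCreateCommandQueue(ctx, device, 0, &err);

        // Copy the input across the bus into device memory.
        cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, n * sizeof(float), NULL, &err);
        clEnqueueWriteBuffer(q, buf, CL_TRUE, 0, n * sizeof(float), data, 0, NULL, NULL);

        // Build the kernel at runtime and bind its argument.
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
        clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "twice", &err);
        clSetKernelArg(k, 0, sizeof(cl_mem), &buf);

        // Launch one work-item per element, then copy the results back.
        clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, buf, CL_TRUE, 0, n * sizeof(float), data, 0, NULL, NULL);

        clReleaseMemObject(buf); clReleaseKernel(k); clReleaseProgram(prog);
        clReleaseCommandQueue(q); clReleaseContext(ctx);
    }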

Re:GPUs are hardly in better shape (0)

Anonymous Coward | more than 4 years ago | (#32085690)

Intel and AMD haven't produced something like this because a lot of programs/compilers are unable to use more than one of a multitude of identical cores, never mind a set of different ones. Look how long simple instruction set extensions take to be used: SSE3, SSE4, AVX are all supplements that help without affecting too much else. Trying to get different things working on different units at once is a scheduling/design nightmare at many levels, from the BIOS to the firmware to the OS to the programs to the compilers, etc. It's a harebrained mess. And trying to change everything at once doesn't work (see also Itanium).

In other news... (2, Funny)

ajlitt (19055) | more than 4 years ago | (#32084826)

Albert P. Carey, CEO of Frito-Lay warns consumers that the continuation of the Cheddar-Dorito law and the survival of humanity ultimately relies on zesty corn chips.

Misleading headline (1)

AlecC (512609) | more than 4 years ago | (#32084844)

Yes, we need more parallelism. But parallelism does not have to be implemented via GPUs - though a man from Nvidia would say so. GPUs provide a quick-and-dirty form of parallelism which is easy to kludge onto the current PC architecture. Which, given the way the appalling x86 architecture has succeeded by pure kludgy brute force, may be the way we end up going. But actually we would be much better off with an architecture that inherently supports parallelism, such as functional programming. I don't know whether we can overcome the inertia of the huge corporations committed to the current way of doing things, but there are much better ways. A homogeneous system is easier to use than a heterogeneous one, to start with.

SlashScript (1)

Twillerror (536681) | more than 4 years ago | (#32084864)

if story.contains('Moore\'s') and story.contains('die', 'dead', 'end in'):
        story.comment('Moore\'s Law is an observation not a law! and.... IT WILL NEVER DIE!!!')

AMD, Intel, Nvidia (1)

OneAhead (1495535) | more than 4 years ago | (#32084996)

The article ends with "Interestingly, though, only one company has the technology and IP needed to integrate a highly parallel GPU into a CPU... and that's AMD." Although I like AMD and would surely like to see them get a revolutionary "Fusion" product out before anyone else, one has to ask whether the authors have looked under the hood of Intel's Clarkdale and Arrandale Core i5s... These show Intel is rapidly catching up, and a neck-and-neck race may arise between their Sandy Bridge and AMD's Bulldozer. Not to mention the stubborn rumors that Nvidia itself is developing x86 technology...

Here's some background for those of us that have been living in a cave:
http://www.brightsideofnews.com/news/2009/4/15/amds-next-gen-bulldozer-is-a-128-bit-crunching-monster.aspx [brightsideofnews.com]
http://www.tomshardware.com/reviews/future-3d-graphics,2560-9.html [tomshardware.com]

Theories, theories. (1)

jackpot777 (1159971) | more than 4 years ago | (#32085050)

Moore's Law was an observation of a trend, made in 1965, that transistor counts on an integrated circuit had doubled and redoubled over a short period of time, and would continue to do so for at least another ten years (the fact that it has done so for half a century is possibly more than Moore could have hoped for). It was based on observed data that was beyond doubt.

Phlogiston Theory was not a theory in the primary definition of the word (from the Greek theoria, meaning a viewing: the analysis of a set of facts in their relation to one another). It was more a hypothesis assumed for the sake of argument or (what we now know as scientific) investigation, written as a small update to alchemy (earth, air, fire, water). Not to start an off-topic flame war, but the two are analogous to Evolution Theory and Creationism Theory.

Rise of the Machines (0)

Anonymous Coward | more than 4 years ago | (#32085064)

With Moore's Law broken, what will stand in the way of the Robot Apocalypse?

Code Morphing... (1, Interesting)

Anonymous Coward | more than 4 years ago | (#32085066)

It doesn't surprise me, since they hired a bunch of ex-Transmeta engineers to work for them last year. They are more than likely working on running the GPU with a BIOS that boots to whatever instruction set they want on the GPU. That would completely negate Moore's Law, since packing cores onto a chip would directly affect performance.

maybe nvidia should stop making space heaters? (1)

alen (225700) | more than 4 years ago | (#32085092)

Seriously, in the last few years Intel has produced some good CPUs with good power efficiency. Contrast that with Nvidia, where every generation you need more and more power for their cards, and the latest generation puts out something like 250W of heat. Years ago we used a Compaq all-in-one cluster server at work as a space heater; the way Nvidia is going, all you need to do is buy one of their cards and you can heat your house in the winter without buying heating oil.

There is an end... (1)

rickb928 (945187) | more than 4 years ago | (#32085148)

Moore's Law works until the required elements are smaller than quantum objects. Actually, in our current state of technology and anything practical on the horizon, it works until the required elements are smaller than single atoms. Then there is no way to make stuff faster...

Sort of.

While GPUs might 'save Moore's Law', actually they just add other CPUs to each system. So more cores = more performance, and Moore's Law is still relevant.

Now, to change the entire computing paradigm to actually take advantage of parallel processing. GPUs do it, sort of, but we need entirely new operating systems, and probably new physical architecture.

Not that it matters that much to me. I'll just buy whatever is reasonably fast, and leave the bleeding edge to those with more money to spend.

hang on a second... (1)

HamSammy (1716116) | more than 4 years ago | (#32085304)

Moore's law will 'die'.

Moores law is frequently misunderstood (1)

VShael (62735) | more than 4 years ago | (#32085418)

though I rarely see the usual mistakes being made by the slashdot community.

I tried explaining to a friend of mine why it was that his standard desktop configuration in 2004 had a CPU clocking in at 2 GHz, while the standard configurations available last Christmas had CPUs clocking in at 2.4 GHz (in the same price range). He seemed to think it would be in the 8-10 GHz range by now.

Please fire his ass (1)

geekoid (135745) | more than 4 years ago | (#32085550)

for not knowing what Moore's law is.

"ntel’s co-founder Gordon Moore predicted that the number of transistors on a processor would double every year, and later revised this to every 18 months.
well, thats half of the rule.

Moore, meet Amdahl (1)

Ken_g6 (775014) | more than 4 years ago | (#32085596)

Sure, you can add more transistors. And you can use those transistors to add more cores. But how useful will they be? That's what Amdahl's Law [wikipedia.org] tells you. And Amdahl's Law is harder to break than Moore's.

GPUs only add one more dimension to Amdahl's Law: how much of the parallelizable portion of a problem is amenable to SIMD processing on a GPU, as opposed to MIMD processing on a standard multi-core processor.
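
As a quick worked example (numbers picked arbitrarily): Amdahl's Law gives speedup = 1 / ((1 - p) + p / n) for a fraction p of parallelizable work on n processors, so a program that is 90% parallel tops out at 10x no matter how many cores or shader units you throw at it.

    #include <cstdio>

    // Amdahl's Law: overall speedup on n processors when a fraction p of
    // the work parallelizes perfectly and the rest stays serial.
    double amdahl(double p, double n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main() {
        const double cores[] = {2, 4, 16, 256, 1e6};
        for (double n : cores)
            std::printf("p = 0.90, n = %8.0f -> speedup %.2fx\n", n, amdahl(0.90, n));
        // Approaches 1 / (1 - p) = 10x as n grows, however many shaders you add.
        return 0;
    }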

yeah right (1)

alienzed (732782) | more than 4 years ago | (#32085604)

because software developers have the whole parallel software thing figured out perfectly.