
Ask Jonathan Koomey About 'Koomey's Law'

timothy posted about 3 years ago | from the best-guy-to-ask-probably dept.

Power 52

A few weeks back, we posted a story here that described Koomey's Law, which (in the spirit of Moore's Law) identifies a long-standing trend in computer technology. While Moore's prediction centers on the transistor density of microprocessors, Jonathan Koomey focuses instead on computing efficiency — in a nutshell, computing power per watt, rather than only per square nanometer. In particular, he asserts that the energy efficiency of computing doubles every 1.5 years. (He points out that calling this a "law" isn't his idea, or his doing — but it's sure a catchy turn of phrase.) Koomey has agreed to respond to your questions about his research and conclusions in the world of computing efficiency. Please observe the Slashdot interview guidelines: ask as many questions as you want, but please keep them to one per comment.
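
For readers who want a feel for what a 1.5-year doubling implies, here is a minimal sketch of the compounding; the 2010 baseline figure is an arbitrary placeholder for illustration, not one of Koomey's measured data points:

    # Rough illustration of Koomey's trend: computations per joule doubling every ~1.5 years.
    # The baseline value below is an arbitrary placeholder, not a measured number.
    def efficiency(year, base_year=2010, base_ops_per_joule=1e9, doubling_years=1.5):
        return base_ops_per_joule * 2 ** ((year - base_year) / doubling_years)

    for y in (2010, 2013, 2016, 2019):
        print(y, f"{efficiency(y):.2e} ops/J")  # roughly 4x every 3 years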


Your Take on Futurists? (4, Interesting)

eldavojohn (898314) | about 3 years ago | (#37515746)

What is your take on the interpretation of futurists -- like Raymond Kurzweil -- with regard to extrapolating these 'laws' out to extreme distances [wikipedia.org]?

Re:Your Take on Futurists? (0)

Anonymous Coward | about 3 years ago | (#37520404)

Singularity nutjobs... techno-religious headcases.

Re:Your Take on Futurists? (0)

Anonymous Coward | about 3 years ago | (#37594196)

I'll answer instead:

Futurists? Same thing as fortune tellers and soothsayers. They tell you believable bullshit in the areas science can't answer, aimed at people who don't go to church, and in exchange they get attention or even money.

Kurzweil's "singularity" is a great example:
He deliberately drives a scenario to an impossible extreme to get attention.
Think about what happens in real life, when a population grows exponentially, but then a key resource becomes limited more and more. In this case the ability to keep up.
It doesn't just get faster and faster. That's physically impossible.
Instead the lacking resources limit growth until it flattens out into a stable balance where people can just keep up.
I'm depressed I even have to mention that, as it's so blatantly obvious, that any single person here must already know it.

But hey, I know what being blinded by someone who's very sure of himself and secure in himself can do, and how it's a normal neurological phenomenon. I guess I just wish it wasn't...
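
To illustrate the "flattens out" point in the comment above, here is a minimal sketch contrasting pure exponential growth with logistic (resource-limited) growth; all parameters are arbitrary and chosen only to show the shapes of the curves:

    import math

    # Exponential vs. logistic growth: the logistic curve saturates at the
    # carrying capacity K (the limited resource); the exponential does not.
    def exponential(t, x0=1.0, r=0.5):
        return x0 * math.exp(r * t)

    def logistic(t, x0=1.0, r=0.5, K=100.0):
        return K / (1 + (K / x0 - 1) * math.exp(-r * t))

    for t in range(0, 31, 5):
        print(t, round(exponential(t), 1), round(logistic(t), 1))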

GodfatherofSoul's Law (1)

Anonymous Coward | about 3 years ago | (#37515798)

Find an arbitrary pattern or trend, then name it after yourself.

Let's work this backwards ... (2)

PPH (736903) | about 3 years ago | (#37515940)

... and see where the Babbage Engine [slashdot.org] fits on the curve.

Re:Let's work this backwards ... (0)

Anonymous Coward | about 3 years ago | (#37517100)

Pen + paper -> Babbage...

You are trying to use the religion argument ("yeah, but what about before that?") to no end.

Infinity w/ reversible computing? (1)

DriedClexler (814907) | about 3 years ago | (#37515970)

This one doesn't seem to have fundamental physical limits, so long as we eventually transition to reversible computing [wikipedia.org], in which the computer does not use up useful energy because every process it performs is fully reversible (i.e., the original state can be inferred).

All the limits on computation (except those regarding storage) that you hear about (e.g., the Landauer limit) apply to irreversible computing, which is how current architectures work. It is the irreversibility of an operation that causes it to increase entropy.
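
For concreteness, the Landauer limit mentioned above puts a floor of k_B * T * ln(2) on the energy dissipated per bit erased in an irreversible operation; a quick sketch of the numbers at room temperature:

    import math

    k_B = 1.380649e-23   # Boltzmann constant, J/K
    T = 300.0            # roughly room temperature, K

    # Landauer bound: minimum energy dissipated per irreversibly erased bit.
    E_bit = k_B * T * math.log(2)
    print(f"{E_bit:.3e} J per erased bit")        # about 2.87e-21 J
    print(f"{E_bit * 1e15:.3e} J for 1e15 bits")  # only a few microjoules at the limit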

Re:Infinity w/ reversible computing? (3, Insightful)

DriedClexler (814907) | about 3 years ago | (#37516102)

Sorry, the question there wasn't clear. How about "Could the whole process be bypassed by the near-infinite efficiency of reversible computers?"

Re:Infinity w/ reversible computing? (0)

Anonymous Coward | about 3 years ago | (#37516352)

The Landauer limit is a "patch" to protect the second law of thermodynamics from Maxwell's demon. If Fermilab can confirm CERN's "recent" neutrino measurements, then the Landauer limit will have to be recalculated. (If it is still possible to apply it the same way, that is. Particles traveling FTL might make it necessary to redefine the laws of thermodynamics slightly.)

Re:Infinity w/ reversible computing? (1)

vlm (69642) | about 3 years ago | (#37516584)

Particles traveling FTL might make it necessary to redefine the laws of thermodynamics slightly.

Do you have anything other than a W.A.G., or am I missing something? The best argument I can come up with is that Maxwell's demon pretty much needs to operate supersonically, and using some kind of electrical design doesn't help, so operating faster than c wouldn't "help": c is already a zillion in Mach number, and a couple ppm more won't change the result. Everything else in the laws of thermodynamics operates subsonically, way below c...

I do respect that an unknown cause could also have a theoretical effect on thermodynamics (smaller than we've ever been able to measure in a lab), in addition to the speedy neutrino effect.

Re:Infinity w/ reversible computing? (1)

optimism (2183618) | about 3 years ago | (#37516980)

All the limits on computation (except those regarding storage) that you hear about (e.g., the Landauer limit) apply to irreversible computing, which is how current architectures work. It is the irreversibility of an operation that causes it to increase entropy.

No. The loss of power as heat has nothing whatsoever to do with the reversibility of computing operations.

Power is lost (and thermal entropy increased) because of the electrical resistivity of the materials from which our CPUs are made.

If you discover a room-temperature superconductor, please let the rest of the world know, because the researchers haven't found it yet. Bonus points if you can fabricate a modern CPU using that superconducting material.

Re:Infinity w/ reversible computing? (1)

DriedClexler (814907) | about 3 years ago | (#37517348)

Power is lost (and thermal entropy increased) because of the electrical resistivity of the materials from which our CPUs are made.

Right, like I said: it produces entropy because you are using an irreversible process to do it -- in this case, a wire that heats up.

Yes, it's *difficult* right now to get the computation speeds we want using only reversible processes. I didn't intend to suggest otherwise.

Multicore or System on a Chip Speed bumps? (1, Interesting)

eldavojohn (898314) | about 3 years ago | (#37515976)

A lot of consumer grade machines have begun focusing on multicore chips with a lower frequency to provide the same or better perceived computing performance than a high frequency single core chip. What happens when a technology like this subverts our craving for higher transistor density [wikipedia.org] ? Can you argue that your "law" is immune to researchers focusing on some hot new technology like a thousand core processor [slashdot.org] or a beefed up system on a chip in order to improve end user experience over pure algorithm crunching speed?

Re:Multicore or System on a Chip Speed bumps? (1)

LWATCDR (28044) | about 3 years ago | (#37516080)

I do not understand how multi-core subverts our craving for transistor density. You kind of need that density to increase the core count. The real problem comes from the lack of really good tools for parallel programming.

Re:Multicore or System on a Chip Speed bumps? (1)

onepoint (301486) | about 3 years ago | (#37519952)

>>I do not understand how multi-core subverts our craving for transistor density

Look, I do understand what you just said, but think with the herd...

Back in the early 80's there was a phrase used in corporate America: "no one got fired for buying IBM."
Sometime in the mid-to-late 80's the 80386 came out, and corporate America said they had those PCs before they were even sold to the market.
Back then it was a MHz race.

People will no longer brag about speed; they will brag about core count, with the thinking that every core runs at the same speed.

The real race I hope happens is when we start counting how little energy the platform uses once we reach the speed threshold.

Re:Multicore or System on a Chip Speed bumps? (1)

LWATCDR (28044) | about 3 years ago | (#37526468)

Well, I do feel that we are at that speed threshold. Mobile will drive down power, so we are already there. I am waiting for the next big jump. There is no reason why a Tegra II could not power the average desktop PC.

Re:Multicore or System on a Chip Speed bumps? (1)

onepoint (301486) | about 3 years ago | (#37527964)

Really? I feel that the speed threshold is very near. It's not that we already use all the power the chip can deliver, but the consumer will want all the extra power it can get. I believe we are nearing the point where parallel programming takes off so as to take advantage of all the extra cores; most likely the audio/visual segment of the market will create the software for it (game designers, I am willing to bet).

We have seen this already done by splitting the graphics off to its own card, and back in the 80's we had Intel math coprocessors. I am really not smart enough to figure out what will be spun off the chip next. Maybe some I/O stuff, wireless stuff... I am not even sure. But I can bet that it will change the market again.

Who knows, maybe a nifty tablet like in Star Trek; note that in the shows the captain sometimes had multiple tablets on his desk, so go figure what might really happen.

I really do hope that we get to the parallel programming stage of coding; it would utilize more of the CPUs, reduce cost over the long term, and should reduce the amount of energy used. But what do I know, I'm just a simple observer.

Become insure easily (-1)

Anonymous Coward | about 3 years ago | (#37516014)

get insurance easily..cheap insurance guides [thebest-in...guides.com]

Re:Become insure easily (-1)

Anonymous Coward | about 3 years ago | (#37516092)

I find your comment interesting and would like to subscribe to your newsletter.

Re:Become insure easily (-1)

Anonymous Coward | about 3 years ago | (#37516264)

do you have a MSN account here is my MSN info needsgoodcheapinsurancenow@msn.com

Re:Become insure easily (0)

Anonymous Coward | about 3 years ago | (#37519458)

Holy FUCK! Did I just see a spammer fall for that? I'd have sworn all spammers were using bots by now...

How will this affect programmers? (0)

Anonymous Coward | about 3 years ago | (#37516056)

When we eventually hit the physical limits of atoms, will programmers eventually stop their autistic quest for more and more layers, more and more complexity and more and more languages to move a number from one address to another?

Re:How will this affect programmers? (0)

Anonymous Coward | about 3 years ago | (#37516214)

Around the same time builders stop their quest for more efficient tools, since hammers and chisels should be good enough for anyone.

How will programmers affect this? (1)

skids (119237) | about 3 years ago | (#37517084)

While sarcastic, your question is an important one: as computing power has increased, so has the tendency of coders to just ride on top of badly coded underlayers rather than redesign them competently and efficiently. Why bother cutting out bloat that causes an 80% penalty on system efficiency when you can just use a more efficient chipset to get the same result?

So my question is whether Koomey has put any thought into similarly quantifying the opposing software-bloat factor, and what he sees the total balance of the system working out to.

Re:How will programmers affect this? (0)

Anonymous Coward | about 3 years ago | (#37517896)

Why live in a mansion when you could live in a tent?

Re:How will programmers affect this? (0)

Anonymous Coward | about 3 years ago | (#37519348)

Yeah, that's a relevant comparison.

Make the curve longer. (1)

xkr (786629) | about 3 years ago | (#37516210)

I would like to see not only the Babbage engine on your curve, but also the abacus and slide rule. Maybe the physical Rod, too, which used to be used in surveying. (Hey, you try calculating property area using pencil, paper, and a deed.)

Re:Make the curve longer. (1)

Samantha Wright (1324923) | about 3 years ago | (#37517274)

But... why? Those aren't even remotely comparable. What kind of energy do you suppose would be measured -- the amount of effort it takes to groan when someone makes a comment like this that they think is witty?

Re:Make the curve longer. (1)

xkr (786629) | about 3 years ago | (#37517674)

Not witty at all. Evolution is continuous. For example, one can compare energy costs backwards from nuclear to coal to cutting wood. People use energy... it is not that hard to make an estimate of slide-rule energy costs. There used to be people who were paid to work 8 hours a day doing calculations by hand (including military ballistic tables). Why would *you* assume that tube-based computers are comparable to an iPad? The fact that there are general "rules" that appear to apply across an extremely wide range of technologies is what makes observations like Moore's Law interesting in the first place. I am an educator. You should not assume that unusual questions are inappropriate. In fact, there was a fascinating TED talk on the cost of light back through the 18th century. Why would observations on the cost of computing not be as valid an area of study as the cost of artificial light?

Re:Make the curve longer. (1)

Samantha Wright (1324923) | about 3 years ago | (#37519246)

Koomey's law only relates to the amount of power required to operate an electronic device [technologyreview.com] . The very purpose of laws like Koomey and Moore is to describe advances in electronics. While perhaps the amount of energy involved in the unwinding of the mainspring of a mechanical computer can be analysed, I think you'll find that you'd be hard-pressed to get meaningful figures for the energy involved in the operation of an abacus or slide rule—which aren't even complete calculating devices and rely very heavily on the operator's brain to yield meaningful results. It will be decades before we can definitely point to an MRI of a functioning human brain and be able to say "it took x kilojoules to work through that reasoning process"—and even then the results are still incomparable.

I'm not typically the sort of person to dismiss unusual inquiries; in fact, I rather enjoy exploring them and finding new truths out of strange combinations of inputs. But this question is inherently bogus because pre-mechanical computers were operated in completely different ways. Transistors were direct replacements for vacuum tubes; they performed the same functions at an individual level, and the calculating machines that were made from each were equally Turing-complete. An ENIAC in the proper configuration and with sufficient memory, energy and time could conceivably emulate an iPad. Moreover, so could its components. The same cannot be said of a slide rule or abacus. They're not actually computing machines.

Re:Make the curve longer. (1)

dtmos (447842) | about 3 years ago | (#37520002)

They're not actually computing machines.

Well, not generally programmable computing machines.

Re:Make the curve longer. (1)

Samantha Wright (1324923) | about 3 years ago | (#37520450)

The difference engine wasn't generally programmable, either, but we can still establish a sense of its power usage based on how much kinetic energy was required to perform its calculations. Automatic mechanical calculators are much closer to being true computers than completely manual devices like the slide rule, the abacus, or a book of trigonometric tables.

Re:Make the curve longer. (1)

xkr (786629) | about 3 years ago | (#37521458)

OK, I concede they are not programmable. They certainly (in my opinion) should be considered computing machines. However, I left off of my "request list" both programmable analog computers and plug programmable punch card equipment. Today's engineers may laugh, but I was able to do some pretty amazing things with both of those types of hardware. You work with the tools you have.

I don't know if these fit his proposed curve or not. I would just like to see the result of thinking about that question.

Obsolete idea (0)

Anonymous Coward | about 3 years ago | (#37516256)

How can anyone take these "laws" seriously anymore in the era of Cloud computing?

Re:Obsolete idea (1)

Superken7 (893292) | about 3 years ago | (#37516314)

Especially in the era of cloud computing. Why does it matter? Because even if you are not aware of it (Amazon provides you with Infrastructure as a Service), you are running virtual machines which, wait for it, are running on real hardware. :)
And of course Amazon has real limits on how many machines a single datacenter can feed and on what that power consumption costs.

Note that power can easily be 50% of a datacenter's cost, that for every watt spent on computing roughly 2.5 times as much is spent on infrastructure and cooling, and that power-consumption costs can easily exceed the cost of the hardware itself. (Sorry, too lazy to provide citations, but you can look it up; it's not hard to find.)

So yeah, it does matter pretty much. :)
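
A rough back-of-envelope sketch of why this matters, using the "2.5 times as much on infrastructure and cooling" figure from the comment above; the server count, per-server draw, and electricity price are assumptions chosen purely for illustration:

    # Back-of-envelope datacenter electricity cost. The overhead factor follows the
    # comment above; every other number here is an illustrative assumption.
    servers = 10_000
    watts_per_server = 300           # assumed average draw per server
    overhead_factor = 1 + 2.5        # 1 W of compute plus ~2.5 W of infrastructure/cooling
    price_per_kwh = 0.10             # assumed $/kWh

    total_kw = servers * watts_per_server * overhead_factor / 1000
    annual_cost = total_kw * 24 * 365 * price_per_kwh
    print(f"{total_kw:,.0f} kW facility load, ~${annual_cost:,.0f}/year in electricity")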

Re:Obsolete idea (1)

skids (119237) | about 3 years ago | (#37517158)

How can anyone take "cloud computing" seriously? It's really just a much less efficient version of the age old distributed computing paradigm. All it does is enable people who cannot wrap their heads around complex clustering topics to write extremely wasteful applications, and give management a new buzz-word dejour.

Re:Obsolete idea (0)

Anonymous Coward | about 3 years ago | (#37517842)

How can anyone take "cloud computing" seriously? It's really just a much less efficient version of the age old distributed computing paradigm. All it does is enable people who cannot wrap their heads around complex clustering topics to write extremely wasteful applications, and give management a new buzz-word dejour.

I take cloud computing pretty fucking seriously...

http://www.cloudcomputingzone.com/2011/04/amazon-builds-top-500-supercomputer-in-the-cloud/ [cloudcomputingzone.com]

http://www.networkworld.com/news/2011/040611-linux-supercomputer.html [networkworld.com]

http://arstechnica.com/business/news/2011/09/30000-core-cluster-built-on-amazon-ec2-cloud.ars [arstechnica.com]

That means... you need a 30k-core supercomputer for a workday? You don't shell out hundreds of thousands of dollars... you just pay the tidy sum of $10,232.

Know what is the best thing about that?

Let's say you get 2 hours into your sim and you realize you made a mistake in coding, forgot something, or saw initial results and realized you could trim things up to make it run better or turn out better results...
YOU TURN IT OFF AND SPEND NO MORE MONEY UNTIL YOU TURN IT BACK ON

Maybe all you cloud naysayers don't get it.

Tell ya what... http://aws.amazon.com/ec2/pricing/ [amazon.com] -- there is a free tier, 750 hours per month.
If you're a numbers person, you'll recognize that as safely above the 744 hours in a month.
However, you can slice that 750 however you like... which means you can run:
750 instances for an hour,
1500 instances for a half hour, or
3000 instances for 15 minutes.
(You're not charged for the first 10 minutes of running an instance [unless they've changed that].)

Try it... spark up a LAMP instance, clone it a few hundred times, and hit it with a load tester, or run something on it; see how fast you can calculate primes or find digits of pi... I don't care, but before saying "I don't see how you can take it seriously"... use it.

Maybe the naysayers are such because they cannot conceive of what you would use the cloud for?

-@|
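
A quick sketch of the instance-hour arithmetic in the comment above; the free-tier allowance and the $10,232 cluster figure are taken from the comment and the linked articles, and may well have changed since:

    # 750 free instance-hours are fungible: one instance all month, or many
    # short-lived instances. Figures follow the comment above.
    free_tier_hours = 750

    for instances in (1, 750, 1500, 3000):
        minutes_each = free_tier_hours / instances * 60
        print(f"{instances:>5} instances x {minutes_each:>7.1f} minutes each")

    # The 30k-core example: renting beats buying when the job is short.
    cluster_cost_per_workday = 10_232    # dollars, per the article cited above
    print(f"~${cluster_cost_per_workday / 8:,.0f}/hour for an 8-hour run")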

Re:Obsolete idea (1)

skids (119237) | about 3 years ago | (#37518016)

Let's say you get 2 hours into your sim and you realize you made a mistake in coding, forgot something, or saw initial results and realized you could trim things up to make it run better or turn out better results...

Or if you were taking the resources you were about to use more seriously, you might not be so quick to start a run before properly testing your code.

Hey, I'm all for commodity distributed computing. Would be nice if you could sell back, but even still. However, the branding of a suite of decade-old technologies as "cloud computing" has been rather silly. That's all I'm saying.

Re:Obsolete idea (0)

Anonymous Coward | about 3 years ago | (#37518940)

Ok, well if you mainly want to pick on the term, yeah... it is kinda silly.

-@|

Applied to Other Kinds of Computing? (0)

Anonymous Coward | about 3 years ago | (#37516370)

How well does Koomey's Law fit other kinds of computing? For instance, has the energy efficiency of cell phone microprocessors followed the same trend as desktop computers and servers? What about embedded systems like routers and car engine controllers, or specialized hardware like game consoles?

Moral/Ethical (1)

vlm (69642) | about 3 years ago | (#37516456)

OK, J.K., here is a list of the moral/ethical arguments about the path we're on, as seen in your law. You saw the path clearly enough to define a time-based law. Are there any issues I'm not seeing on our current path?

1) Lower energy consumption at point of use
2) Higher energy consumption at manufacturing point
3) faster cpu = bigger programs = more bugs = lower quality of life
4) faster cpu = stronger DRM possibilities
5) Better processing * battery life = better medical devices
6) Better processing * battery life = better 1984 style totalitarian devices
7) Lower energy consumption = less air conditioning demand = decreasing average latitude of data centers = population shifts, or whatever, or something?
8) More money required for both hw and sw development = good for big corps and bad for the little guy

battery capacity vs proc speed (1)

vlm (69642) | about 3 years ago | (#37516496)

Hey J.K., have you run into a law relating battery capacity (either per kg or per liter) to processor speed over time? I bet there is some kind of interesting curve for mobile devices. Or maybe not, dunno; that's why I'm asking a guy with previous success at data analysis in a closely related field...

Here's one: Gates' Law (0)

cjonslashdot (904508) | about 3 years ago | (#37516504)

I have one:

Gates' Law: "The bloatedness of software keeps pace exactly with the increase in power of hardware, to ensure that no actual improvements occur in the end user experience."

Also called Wirth's law (1)

tepples (727027) | about 3 years ago | (#37516754)

This law was originally proposed by Niklaus Wirth [wikipedia.org].

Queen of hearts? (0)

Anonymous Coward | about 3 years ago | (#37516954)

What do you think about the following observation: every X years the number of computing operations we use to perform basic calculations doubles (by virtue of doing those calculations with more complex software, slower languages...), so when you factor in Moore's law (and your own), the amount of useful calculation we do with computers remains more or less constant.

what about coochie law? (-1)

Anonymous Coward | about 3 years ago | (#37517166)

As women get older, their coochies get looser and uglier. Anything we can do about it?

Old news again (0)

Anonymous Coward | about 3 years ago | (#37517660)

Originally posted last week, re-worded and re-posted this week. Can we stop promoting re-posts?

Feynman Quote (2)

yakovlev (210738) | about 3 years ago | (#37517960)

Mr. Koomey, if we take your numbers from the attached article, which may not have been quoted correctly...

Feynman indicated that there was approximately 100 billion times efficiency improvement possible, and 40,000 times improvement has happened so far.

If we take Feynman's number at face value, this means that if computing efficiency improvements continue at the current rate (doubling every 18 months), we will reach the theoretical maximum in 2043.

Based on that, do you believe that we will see a dramatic reduction in efficiency improvements in the next 10-20 years as we approach the theoretical limit, or do you think Feynman was conservative in his estimate?

Thanks!
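
A quick back-of-envelope check of the 2043 figure, taking the quoted numbers at face value and assuming a ~2011 start and a clean 1.5-year doubling:

    import math

    # Feynman's headroom estimate vs. progress so far, as quoted above.
    total_possible = 100e9    # ~100-billion-fold improvement possible
    achieved = 40_000         # ~40,000-fold achieved so far
    doubling_years = 1.5

    remaining = total_possible / achieved          # ~2.5 million-fold left
    doublings_left = math.log2(remaining)          # ~21.3 doublings
    years_left = doublings_left * doubling_years   # ~32 years
    print(f"~{years_left:.0f} years left -> around {2011 + years_left:.0f}")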

This is just Moore's law in another form (0)

Anonymous Coward | about 3 years ago | (#37518454)

There are limits to the management of heat output of microprocessors, so the efficiency scaling is always bound to follow the transistor density.

Haven't we already fallen behind? (0)

Anonymous Coward | about 3 years ago | (#37518458)

The Pentium M (which is powering the computer that I'm using to type this) came out eight years ago. Let's call it 7.5 and make our "Koomey factor" 2^5=32. The ULV chip ran at 1.1GHz and ate 6.4W, and we can add on the power of the 855PM northbridge, which would make the total 8.2W. I don't see any products on the market that are anywhere close to a 32x improvement in performance per watt. Do you?
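
For anyone who wants to redo this arithmetic, a minimal sketch using the figures in the comment (the modern comparison is left open, since no specific current chip is named):

    # Checking the commenter's numbers: ~7.5 years at a 1.5-year doubling = 5 doublings.
    years = 7.5
    koomey_factor = 2 ** (years / 1.5)     # 32x

    base_clock_ghz = 1.1                   # ULV Pentium M, per the comment
    base_watts = 6.4 + 1.8                 # CPU + 855PM northbridge = 8.2 W, per the comment

    target_perf_per_watt = (base_clock_ghz / base_watts) * koomey_factor
    print(f"target: {target_perf_per_watt:.2f} GHz-equivalent per watt")
    print(f"i.e. ~{base_watts / koomey_factor * 1000:.0f} mW for the same 1.1 GHz workload")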

What makes this a non-trivial extension? (0)

Anonymous Coward | about 3 years ago | (#37520302)

What makes your law a non-trivial extension of Moore's law, which states that the transistor count doubles every 18 months due to an increase in density? E&M theory states that if you cut a wire's length in half, its resistance is cut in half. Granted, density in this case is a two-dimensional expansion and wire resistance is a one-dimensional formula, but what makes this different from what a college freshman can infer from R = (resistivity * length) / (cross-sectional area)?

on the other hand (0)

Anonymous Coward | about 3 years ago | (#37523186)

McKusick's theorem... (computer scientist of BSD fame):
"Computing power to the fingertip has remained constant for the last few decades and probably will do so for more."

A tongue-in-cheek comment made in a 1992 lecture: in 1972, hitting a key resulted in a printed character echo after about 50 machine instructions, whereas in 1992 it took many tens of thousands (with rendering, etc.). Now, two decades further on, we can see that it has grown to tens of millions, yet the result is still just a very pretty character echoed to the user.
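
Taking the very rough instruction counts in this comment at face value (50 then, "tens of thousands" read here as ~50,000, "tens of millions" as ~50,000,000), a small sketch of the implied growth rate:

    import math

    # Approximate machine instructions per echoed keystroke, per the comment above.
    samples = {1972: 50, 1992: 50_000, 2012: 50_000_000}

    years = sorted(samples)
    for a, b in zip(years, years[1:]):
        growth = samples[b] / samples[a]
        doubling_time = (b - a) / math.log2(growth)
        print(f"{a}->{b}: {growth:,.0f}x, doubling roughly every {doubling_time:.1f} years")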
