Jonathan Koomey Answers Your Questions

A couple weeks ago, you asked questions of Stanford professor Jonathan Koomey about what has been dubbed Koomey's Law — the idea that the energy efficiency of computing doubles every 1.5 years. Read on for Professor Koomey's answers to the questions you raised.
What makes this a non-trivial extension?
by Anonymous

What makes your law a non-trivial extension of Moore's Law, which states that the transistor count would double every 18 months due to an increase in density? E&M theory states that if you cut a wire's length in half, its resistance is cut in half. Granted, density in this case is a two-dimensional expansion and wire resistance is a one-dimensional formula, but what makes this different from what a freshman in college can infer from R = (resistivity * length)/cross-sectional area?

Jonathan Koomey: First, it’s important to note that we assessed these trends empirically, using measured power data for each computer system in our dataset, and it’s often valuable to confirm with actual measurements what theory implies. Just because the result sounds intuitive to you after the fact doesn’t mean that it isn’t valuable to confirm with real data that the trends actually exist. And of course we discuss in the paper the driving forces behind the reductions in power use per logical switch (and they involve more than just reductions in I squared R losses in the wires). I’ve pasted below two relevant paragraphs from the article:

For vacuum tube computers, both computational speed and reliability issues encouraged computer designers to reduce power use. Heat reduces reliability, which was a major issue for tube-based computers. In addition, increasing computation speeds went hand in hand with technological changes (like reduced capacitive loading, lower currents, and smaller tubes) that also reduced power use. And the economics of operating a tube-based computer led to pressure to reduce power use, although this issue was probably a secondary one in the early days of electronic computing.

For transistorized and microprocessor based computers, the driving factor for power reductions was (and is) the push to reduce the physical dimensions of transistors, which reduces the cost per transistor. In order to accomplish this goal, power used per transistor also must be reduced; otherwise the power densities on the silicon rapidly become unmanageable. Per transistor power use is directly proportional to the length of the transistor between source and drain, the ratio of transistor length to mean free path of the electrons, and the total number of electrons in the operating transistor, as Feynman (2001) pointed out. Shrinking transistor size therefore resulted in improved speed, reduced cost, and reduced power use per transistor (see also Bohr (2007) and Carver Mead’s thinking in the late 1960s, as summarized in Brock (2006, pp. 98-100)).
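
A minimal numerical sketch of that proportionality (an illustration, not from the paper; the lengths, mean free path, and electron counts below are hypothetical placeholders):

```python
# Sketch of the proportionality quoted above: per-transistor switching energy
# scales with (a) the source-to-drain length, (b) that length divided by the
# electron mean free path, and (c) the number of electrons in the device.
def relative_switching_energy(length_nm, mean_free_path_nm, n_electrons):
    """Relative (unitless) switching energy under the stated proportionality."""
    return length_nm * (length_nm / mean_free_path_nm) * n_electrons

# Hypothetical before/after numbers, purely to show the direction of the scaling:
old = relative_switching_energy(length_nm=1000, mean_free_path_nm=50, n_electrons=1e6)
new = relative_switching_energy(length_nm=100, mean_free_path_nm=50, n_electrons=1e4)
print(f"Shrinking the device cuts relative switching energy by ~{old / new:,.0f}x")
```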

In addition, the fact that the trends have now been confirmed empirically means that people can get on with considering the implications of these trends, which I think are under-appreciated. The idea that we’ll be able to use ever more efficient computing technology in distributed applications will revolutionize data collection, communications, and control of processes, and people are only now starting to think about what may become possible.

As one of many examples showing the potential of ultra low power computing, consider the wireless no-battery sensors created by Joshua R. Smith of Intel and the University of Washington (coverage in the NY Times and the Economist). These sensors scavenge energy from stray television and radio signals, and they use so little power (60 microwatts in this example) that they don’t need any other power source. Stray light, motion, or heat can also be converted to meet slightly higher power needs, perhaps measured in milliwatts. The contours of this exciting design space are only beginning to be explored, and they are enabled by the trends identified in our paper.

I wouldn’t underestimate the importance of a shift in industry focus from raw performance to power efficiency for mobile devices. Some of the best engineers will be drawn to the problems of ultra low power computing in the same way as they were drawn to high performance computing (HPC) in the past (no doubt terrific technologists will also continue to focus on HPC, but anytime a new hot area opens up there’s a migration of talent to that new topic).

Finally, I would add that the truly unexpected result was that the trend in computational efficiency extends for a longer period than Moore’s law, all the way back to Eniac in 1946. So these trends in computational efficiency are an inherent characteristic of computers that use electrons for switching, and are not limited to the microprocessor era. I, for one, did not expect that.

Your Take on Futurists?
by eldavojohn

What is your take on the interpretation of Futurists -- like Raymond Kurzweil -- in regards to extrapolating these 'laws' out to extreme distances?

JK: The physicist Niels Bohr once famously said “Prediction is very difficult, especially about the future.” It’s important to be careful in making long-term extrapolations, even if some technological trend has continued for some time. I think it’s fair to say that Moore’s law (and the trends in computational efficiency we identify) have more years to run, given how far we are from theoretical limits, but exactly when we’ll hit a real roadblock is something it will take someone more brash than me to say. I discuss the theoretical limit based on Feynman’s calculations below, and we will eventually reach that, but there may be ways to sidestep those limits. We’ll have to see how clever we can be!

Let's work this backwards ...
by PPH

... and see where the Babbage Engine fits on the curve.

JK: Since the Babbage engine never operated, I’m not sure how we could do this. I believe that some parts of the engine have been created using modern machining practices, but I don’t think anyone has ever made one in complete form. If someone has, I’d be interested to measure its electricity use and estimate its performance (of course, it was designed before the era of electricity). Nordhaus (2007) reports that

Early calculators were “dumb” machines that essentially relied on incrementation of digits. An important step in the development of modern computers was mechanical representation of logical steps. The first commercially practical information-processing machine was the Jacquard loom, developed in 1804. This machine used interchangeable punched cards that controlled the weaving and allowed a large variety of patterns to be produced automatically. This invention was part of the inspiration of Charles Babbage, who developed one of the great precursor inventions in computation. He designed two major conceptual breakthroughs, the “Difference Engine” and the “Analytical Engine.” The latter sketched the first programmable digital computer. Neither of the Babbage machines was constructed during his lifetime. An attempt in the 1990s by the British Museum to build the simpler Difference Engine using early-nineteenth-century technologies failed to perform its designed tasks. (reference: Swade, Doron. The Difference Engine. New York: Viking Press, 2000.)

Nordhaus, William D. 2007. "Two Centuries of Productivity Growth in Computing." The Journal of Economic History. vol. 67, no. 1. March. pp. 128-159. [http://nordhaus.econ.yale.edu/recent_stuff.html]

Nordhaus does attempt to estimate the speed of computation possible by hand calculations as well as abacuses, to compare to more automatic methods.

Infinity w/ reversible computing?
by DriedClexler

This one doesn't seem to have fundamental physical limits, so long as we eventually transition to reversible computing, in which the computer does not use up useful energy because every process it uses is fully reversible (i.e. the original state could be inferred).

All the limits on computation (except regarding storage) that you hear about (e.g. Landauer limit) are on irreversible computing, which is how current architecture works. It is the irreversibility of an operation that causes it to increase entropy.

Could the whole process be bypassed by the near-infinite efficiency of reversible computers?

JK: Here’s the flip answer: Only if you can afford to wait infinitely long for your answer.

Here’s the more serious answer: in principle, reversible computing could have a revolutionary impact, if we could figure out how to do it, and some folks are working on this. But I haven’t seen any near term applications of such devices—if you know of any, please let me know.

Multicore or System on a Chip Speed bumps?
by eldavojohn

A lot of consumer grade machines have begun focusing on multicore chips with a lower frequency to provide the same or better perceived computing performance than a high frequency single core chip. What happens when a technology like this subverts our craving for higher transistor density? Can you argue that your "law" is immune to researchers focusing on some hot new technology like a thousand core processor or a beefed up system on a chip in order to improve end user experience over pure algorithm crunching speed?

JK: First, I would call it (like Moore implied in his own papers) an empirical observation rather than a law.

But in any case, I don’t think that the transition to multicore has “subverted our craving for higher transistor density”; we’re just using the transistors in a different way. The density of chips (measured in components per square centimeter or an equivalent metric) will continue to increase; it’s just that the scaling of clock speeds that drove performance increases for so long is no longer possible (mainly because of high leakage currents inside the chip). So that means we need to make many cores and then modify software to capture that performance.
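
A minimal sketch of what that software change looks like (an illustration, not from the paper): the same workload run on one core and then split across several with Python's multiprocessing; the prime-counting task and the chunk sizes are arbitrary stand-ins.

```python
# Illustrative only: performance now comes from using many cores, which requires
# the software to be structured so the work can be divided among them.
from multiprocessing import Pool

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division (deliberately naive)."""
    lo, hi = bounds

    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))

    return sum(1 for n in range(lo, hi) if is_prime(n))

if __name__ == "__main__":
    chunks = [(i, i + 50_000) for i in range(2, 200_002, 50_000)]

    serial = sum(count_primes(c) for c in chunks)      # one core, one chunk at a time

    with Pool(processes=4) as pool:                    # the same work spread over 4 cores
        parallel = sum(pool.map(count_primes, chunks))

    assert serial == parallel                          # same answer, different wall-clock time
    print(serial, "primes found below 200,002")
```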

At the end of the day, WHAT you choose to do with the computing power is unrelated to the trends we identify, but I would argue the focus of device and software design is inevitably moving towards enhancing the end-user experience because these trends in efficiency are allowing ever more mobile devices to serve people’s immediate needs in an ever more personal way.

How will this affect programmers?
by Anonymous

When we eventually hit the physical limits of atoms, will programmers eventually stop their autistic quest for more and more layers, more and more complexity and more and more languages to move a number from one address to another?

How will programmers affect this?
by skids

While sarcastic, the above question is an important one: as computing power has increased, the tendency of coders to just ride over badly coded underlayers rather than redesign them competently and efficiently has increased. Why bother cutting out bloat that causes an 80% penalty on system efficiency when you can just use a more efficient chipset to get the same result?

So my question is whether you have put any thought into similarly quantifying the opposing software bloat factor, and what you see the total balance of the system working out to.

JK: Software bloat is a real issue, and I agree with your analysis that the ever-improving hardware picture has allowed poor coding practices to continue. But with the shift to multicore, there’s been at least some burden on programmers to change their ways—they have to modify their code to take advantage of multicore performance, so their skills are actually needed to capture increased performance (which is new, or at least a throwback to the early days of computing, when the programmers had so few hardware resources to work with that they had to be extremely parsimonious in their coding).

In the paper, we write:

Whether performance per CPU can grow for many years more at the historical pace is an ongoing subject of debate in the computer industry (Bohr 2007), but near-term improvements are already “in the pipeline”. Continuing the historical trends in performance (or surpassing them) is at this juncture dependent on significant new innovation comparable in scale to the shift from single core to multi-core computing. Such innovation will also require substantial changes in software design (Asanović et al. 2006), which is a relatively new development for the IT industry and it is another reason why whole system redesign is so critical to success.

This really doesn’t address the serious issue you raise about bloatware, which I think is a generic problem that other people more skilled in software design than me can address much better than I can. It’s hard to quantify it because it is so situation specific, but someone at a university somewhere may have tried to do this—I just don’t know.

Applied to Other Kinds of Computing?
by Anonymous

How well does Koomey's Law fit other kinds of computing? For instance, has the energy efficiency of cell phone microprocessors followed the same trend as desktop computers and servers? What about embedded systems like routers and car engine controllers, or specialized hardware like game consoles?

JK: These are all excellent questions (which we raise in the article) and I’m actively seeking data, but I don’t have anything new to report on this yet. I’m also interested in trends in data transmission power efficiencies, because that’s a key limitation on these mobile devices. And I’m digging around for battery capacity data over time as well.

Moral/Ethical
by vlm

Here is the list of moral / ethical arguments about the path we're on, as seen in your law. You saw the path clearly enough to define a time based law. Are there any issues I'm not seeing on our current path?

1) Lower energy consumption at point of use
2) Higher energy consumption at manufacturing point
3) faster cpu = bigger programs = more bugs = lower quality of life
4) faster cpu = stronger DRM possibilities
5) Better processing * battery life = better medical devices
6) Better processing * battery life = better 1984 style totalitarian devices
7) Lower energy consumption = less air conditioning demand = decreasing average latitude of data centers = population shifts or whatever or something?
8) More money required for both hw and sw development = good for big corps and bad for the little guy

JK: Hmmm, I’m not quite sure where you are going with this. There are pluses and minuses to all technological innovations, but I’m pretty sure the benefits will outweigh the costs in this case (as long as we put proper restraints on how collected data can be accessed by the authorities).

Battery Capacity vs. Processor Speed
by vlm

Have you run into a law relating battery capacity (either per kg or per liter) vs processor speed over time? I bet there is some kind of interesting curve for mobile devices. Or, maybe not — that’s why I'm asking a guy with previous success at data analysis in a closely related field...

JK: Great questions. I haven’t seen any quantitative regularity in how battery power densities vary over time, but am actively looking for data. I hope to have something to report about that (along with the other trends I’m investigating, as I describe above). If you know of any good data sources, please let me know.

Queen of Hearts
by Anonymous

What do you think about the following observation: that every X years the amount of computing operations we use to perform basic calculations doubles (by virtue of doing those calculations with more complex software, slower languages...), so when you factor in Moore's law (and your own), the amount of useful calculation we do with computers remains more or less constant.

JK: This is related to the bloatware question above. I haven’t seen any quantitative estimates of the real cost of bloatware, but computing is becoming more widely distributed throughout society, and it’s hard to believe that with the proliferation of more and more mobile devices and all the chips now incorporated in embedded systems we’re doing less useful computing work than in the past. Some folks have tried to quantify total computational work being done, but it’s hard to do: Hilbert, Martin, and Priscila López. 2011. "The World's Technological Capacity to Store, Communicate, and Compute Information." Science. vol. 332, no. 6025. April 1. pp. 60-65.
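
A toy model of the questioner's premise (illustrative only, not measured data): if hardware efficiency doubles every 1.5 years while the operations needed per useful calculation double every X years, useful work per kWh is flat only when X is also 1.5 years, and grows whenever software overhead doubles more slowly than that:

```python
# Toy model, not measured data: net useful calculations per kWh when hardware
# efficiency doubles every 1.5 years and software overhead doubles every X years.
def net_gain(years, hw_doubling=1.5, bloat_doubling=3.0):
    hw = 2 ** (years / hw_doubling)        # computations per kWh (the efficiency trend)
    bloat = 2 ** (years / bloat_doubling)  # operations needed per useful calculation
    return hw / bloat                      # useful calculations per kWh

for x in (1.5, 3.0, 6.0):
    print(f"overhead doubling every {x} years -> net gain over a decade: "
          f"{net_gain(10, bloat_doubling=x):.1f}x")
```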

Feynman Quote
by yakolev

Mr. Koomey, if we take your numbers from the attached article, which may not have been quoted correctly:

Feynman indicated that there was approximately 100 billion times efficiency improvement possible, and 40,000 times improvement has happened so far.

If we take Feynman's number at face value, this means that if computing efficiency improvements continue at the current rate (doubling every 18 months), we will reach the theoretical maximum in 2043.

Based on that, do you believe that we will see a dramatic reduction in efficiency improvements in the next 10-20 years as we approach the theoretical limit, or do you think Feynman was conservative in his estimate?

JK: Your math is correct, as is the quotation of those numbers. If computing efficiency doubles every 1.5 years, it will take 21.3 doublings before we reach the theoretical limits identified by Feynman, which means we will hit that limit in 32 years (i.e. in 2043).
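
Spelled out with the numbers quoted above (the roughly 100-billion-fold ceiling Feynman identified and the 40,000-fold improvement achieved so far), the arithmetic looks like this:

```python
# The arithmetic behind the "32 years / 2043" estimate, using the quoted numbers.
from math import log2

feynman_ceiling = 100e9   # ~100 billion-fold improvement Feynman thought possible
achieved_so_far = 40_000  # improvement already realized at the time of the article
doubling_period = 1.5     # years per doubling of computations per kWh

doublings_left = log2(feynman_ceiling / achieved_so_far)
years_left = doublings_left * doubling_period

print(f"{doublings_left:.1f} doublings left, about {years_left:.0f} years "
      f"(2011 + {years_left:.0f} = {2011 + round(years_left)})")
```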

Here’s what Feynman had to say in the book I cited:

Of course there is a limitation, the practical limitation anyway, that the bits must be of the size of an atom and a transistor 3 or 4 atoms; the quantum mechanical gate I used has 3 atoms. (I would not try to write my bits on to nuclei, I’ll wait till the technological development reaches the atoms before I need to go any further!) That leads us just with (a) the limitations in size to the size of atoms, (b) the energy requirements depending on the time as worked out by Bennett, (c) and the feature that I did not mention concerning the speed of light; we can’t send the signals any faster than the speed of light. Those are the only physical limitations that I know on computers.

If we make an atomic size computer, somehow, it would mean that the dimension, the linear dimension is a thousand to ten thousands times smaller than those very tiny chips that we have now. It means that the volume of the computer is 100 billionth, 10^-11, of the present volume, because the transistor is that much smaller, 10^-11, than the transistors that we make today. The energy requirement for a single switch is also about eleven orders of magnitude smaller than the energy required to switch the transistor today, and the time to make the transitions will be at least ten thousands times faster per step of calculation. So there is plenty of room for improvement in the computer and I leave you, practical people who work on computers, this as an aim to get to. (Feynman, Richard P. 2001. The Pleasure of Finding Things Out: The Best Short Works of Richard P. Feynman. London, UK: Penguin Books.)

So the calculation Feynman did was based on a transistor using just three atoms. In theory, one could use individual nuclei (as Feynman suggests) or there may be another as yet totally unknown way to crack this nut. But using Feynman’s calculation as the ultimate limit, in about three decades (and probably before that) we’re going to hit some kind of limit using our current methods.

But even given that, we’ve got at least another decade of improvements (that’s what my friends at Intel tell me) and probably more. Every decade means roughly a factor of 100 improvement in the power efficiency of computing (doubling every 1.5 years), but there are also vast improvements we can make in our software, as well as in the standby power of these devices (which turns out to be a much bigger power drain than the active power, given that almost all computers have very low average utilization). Hitting these limits may actually force the software designers to get more efficient (we’ll see!). And we’re just at the beginning of using the technologies enabled by these trends to accomplish human goals, so I’m hopeful we’ll be clever and figure out loads of important applications that will become possible with a factor of 100 or 1000 improvement in efficiency over the next 15 years.

Haven't we already fallen behind?
by Anonymous

The Pentium M (which is powering the computer that I'm using to type this) came out eight years ago. Let's call it 7.5 and make our "Koomey factor" 2^5=32. The ULV chip ran at 1.1GHz and ate 6.4W, and we can add on the power of the 855PM northbridge, which would make the total 8.2W. I don't see any products on the market that are anywhere close to a 32x improvement in performance per watt. Do you?

JK: Our focus is on system power, not chip power alone. And you need to calculate what your current system is capable of in computations per kWh (which you can derive from performance per watt) so you can compare to our numbers. But I’ll wager that the current crop of laptops (or the new Mac Mini) will blow away your old machine in terms of computations per kWh at maximum performance (which is what we measure).
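
Here is a sketch of the comparison Koomey suggests (the 8.2 W figure comes from the question; the performance numbers and the modern-system wattage are hypothetical placeholders, not measurements):

```python
# Convert performance per watt into computations per kWh so two whole systems
# can be compared the way the article does. All performance figures below are
# hypothetical placeholders except the 8.2 W system power from the question.
def computations_per_kwh(computations_per_sec, system_watts):
    joules_per_kwh = 3.6e6                         # 1 kWh = 3.6 million joules
    seconds_of_full_load = joules_per_kwh / system_watts
    return computations_per_sec * seconds_of_full_load

old = computations_per_kwh(computations_per_sec=1.0e9, system_watts=8.2)    # Pentium M system
new = computations_per_kwh(computations_per_sec=5.0e10, system_watts=15.0)  # hypothetical modern laptop

print(f"old: {old:.2e} comp/kWh, new: {new:.2e} comp/kWh, ratio ~{new / old:.0f}x")
```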

Comments:
  • by Anonymous Coward
    What if the efficiency of code halves every 1.5 years, due to frameworks and to programmers no longer feeling they need to know how to program efficiently given the preponderance of frameworks and standard libraries? That would negate the gain in energy efficiency or performance.
    • by jd ( 1658 )

      Addressed twice in the article.

      • I was going to read the article but about a quarter of the way through it just felt like someone copied and pasted in a bunch of wikipedia articles and it just wasn't worth the effort for something so boring.

        • by mdf356 ( 774923 )

          I was going to read the article but about a quarter of the way through it just felt like the author copied and pasted in several paragraphs from his paper and other supporting documents and I discovered I am lazy.

          FTFY.

    • by iiiears ( 987462 )

      About software bloat:
          If efficiency is the driving factor, market demand will drive software efficiency when hardware efficiency peaks.
      Unless government intervention influences price (intellectual property carve-outs), the price will be zero plus the cost of transit infrastructure.
      So what else is GNU?

  • Quantum computing... http://en.wikipedia.org/wiki/Quantum_computer http://en.wikipedia.org/wiki/Biocomputers

    If either one of these experiences a breakthrough, down go both Moore's and this. Moore's only has a lifespan until about 2015~2020 btw, hardly a law in the scientific sense.

    In regards to the programming discussion... prove software bloat on your i7 w/ 32GB RAM first. Then price out the spec of your new machine vs how long it would take a dev team to make it run on your current one that's SOOOO bloated.

  • by retroworks ( 652802 ) on Monday October 10, 2011 @04:22PM

    Retroworks Law: Every saving (in computing consumption) C is offset by A (larger display devices) plus B (shorter lifespan of energy embedded in device, i.e. mining/extraction/refining/molding/distribution). C

    For a while we thought that the LCDs were going to reduce the consumption of the CRTs, but we found that the energy saved in the LCDs per hour was used up by the behavior of throwing away 3-year-old 15" LCDs and replacing them with 21" LCDs. That could be recaptured by delivering the older display devices to emerging markets, such as Egypt and Indonesia, but alas we are banning that trade and building shredders for the display devices. Still, the claims of energy saving go up in proportion to marketing new products, which is directly tied to the problem of increased energy consumption.

    In other words, replacing a working functional device with an energy saving device rarely saves any energy.

    • by Anonymous Coward

      Which is just a specific case of the more general Jevons paradox:

      http://en.wikipedia.org/wiki/Jevons_paradox

      • by gknoy ( 899301 )

        (linked) http://en.wikipedia.org/wiki/Jevons_paradox

        the Jevons paradox (sometimes Jevons effect) is the proposition that technological progress that increases the efficiency with which a resource is used tends to increase (rather than decrease) the rate of consumption of that resource.

        • I think there is a more general behavioral issue at play here. The designer of the original morris mini hoped that by improving cornering ability he would reduce the number of crashes. It just allowed the drivers to corner faster for the same number of crashes. Maybe this is because the acceptable number of crashes is a constant and drivers tweak other variables to stay at the desired point. Similarly the money for energy and monitors is a constant so consumers adjust their behavior to spend at the desired

          • It's not an "acceptable" number. It's just that the number of drivers who will push it until they do crash is the same. So it will just take them a little longer to reach the tipping point :)

          • by AK Marc ( 707885 )
            People drive to their preferred level of risk. When you increase the perceived level of risk, drivers slow down. When you decrease risk, drivers drive less safely to remove that margin. When the perception of risk reduction doesn't match reality (ABS) then crashes will increase with the introduction of a safety device. And if you can increase the perception of risk without increasing risk, drivers will drive more safely. This works with making lanes more narrow and other tricks.

            The real problem is tha
    • by Toonol ( 1057698 )
      In other words, replacing a working functional device with an energy saving device rarely saves any energy.

      Sorry, but I think that already exists, as 'The Prius Fallacy'. The total energy savings of a new, energy-efficient car never justify getting rid of an already-existing, working car.
  • I'm in the process of tapering my slashdot contributions to a dull whimper. I used my time posting on slashdot to evolve my thinking. That project has reached a satisfactory plateau and my energies are drifting toward other outlets.

    Without that process in background, it has become increasingly hard to wade into a discussion like this where so many posts are throwing terms around indicative of brains stuck in some gear entirely unlike hard thinking.

    "Bloat" is a parking orbit of intellectual laziness. Hav

    • So, in summary, (and I mean this in a polite way) you have grown up.

      Like me you also see that what were once proudly held beliefs are actually nothing more than dogma, and it's time to stop proclaiming them and move on to other more fruitful pursuits...

      "Java is good", "Gotos are bad", "XML is good", "unstructured text is bad", "Linux is good", "Windows is bad", "AMD CPUs are good", "Intel CPUs are bad", "Nvidia graphics is good", "AMD graphics is bad", "MySQL is good", "Oracle is bad" ...

      Any new methodology wi

  • Why was this not addressed? The question posed by vlm seemed to be dodged.