Whither Moore's Law; Introducing Koomey's Law

Unknown Lamer posted more than 2 years ago | from the more-i-mean-less-power dept.


Joining the ranks of accepted submitters, Beorytis writes "MIT Technology Review reports on a recent paper by Stanford professor Dr. Jon Koomey, which claims to show that the energy efficiency of computing doubles every 1.5 years. Note that efficiency is measured for a fixed computing load, a point soon to be lost on the mainstream press. Also interesting is a graph in a related blog post that really highlights the meaning of the 'fixed computing load' assumption by plotting computations per kWh vs. time. An early hobbyist computer, the Altair 8800, sits right near the Cray-1 supercomputer of the same era."
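For the arithmetic-minded, here is a minimal sketch of what the claimed trend implies for a fixed workload. The closed-form expression is an assumption for illustration (the paper fits historical data rather than quoting a formula); the 1.5-year doubling period comes from the summary above.

<ecode>
# Python sketch: if computations-per-kWh doubles every 1.5 years,
# the energy needed for a *fixed* computing load halves on the same schedule.
def efficiency_multiplier(years, doubling_period=1.5):
    """Factor by which computations per kWh grows over `years`."""
    return 2 ** (years / doubling_period)

# Over 15 years: 2**10 = 1024x more computations per kWh,
# i.e. the same fixed job needs about a thousandth of the energy.
print(efficiency_multiplier(15))  # 1024.0
</ecode>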


Power Hog (5, Interesting)

Waffle Iron (339739) | more than 2 years ago | (#37392450)

My favorite example of computing (in)efficiency is the USAF's SAGE bomber tracking computers introduced in the 1950s. These vacuum tube machines had CPU horsepower probably in the same ballpark as an 80286, but could draw more than 2 megawatts of power each. They didn't decommission the last one until the 1980s.

Re:Power Hog (1)

damburger (981828) | more than 2 years ago | (#37392656)

The words that explain this are 'mission critical'. If a computer that important still works, you need a damn good reason to unplug it and replace it with an untested system. Having something new and shiny is not a good enough reason.

Re:Power Hog (1)

Anonymous Coward | more than 2 years ago | (#37393756)

No, that's not a good explanation. Anyone who can't just ask for more money would build a massively cheaper system, run it in parallel to the old power hog until the new design is sufficiently tested, and then junk the ex-mission-critical space heater. The only reasonable explanation for keeping a tube computer running into the 80s is budget hogging.

Re:Power Hog (1)

marcosdumay (620877) | more than 2 years ago | (#37393812)

Would the unreliability of vacuum tubes be a good reason?

Re:Power Hog (0)

Anonymous Coward | more than 2 years ago | (#37397862)

The vacuum tubes were resistant to the EMP generated by a nuclear blast, and IC chips are not. Shielding is possible, and that's what we do now.

Re:Power Hog (0)

Anonymous Coward | more than 2 years ago | (#37398932)

No. Because the unreliability of vacuum tubes is a well understood thing, and something for which the system was designed. To borrow a phrase, you'd be replacing "known unknowns" with "unknown unknowns."

Re:Power Hog (5, Interesting)

anubi (640541) | more than 2 years ago | (#37392862)

The idea that one could implement a vacuum-tube machine performing at 286 levels is, to me, a miracle in itself. A 6502, maybe; but even the lowly 286 represents a level of sophistication I could not imagine being implemented with vacuum-tube technology.

I've never seen a SAGE, but it must have been quite a machine. In my imagination, it must have been about the size of a Wal-Mart. Given the physical size of the thing, it would amaze me if they could clock it at anything more than 100 kHz or so.

Yes, I do know what a 6SN7 is. And a 12AT7, which I suspect the machine was full of (or its JAN equivalent).

Do the designations 12SA7, 12SK7, 12SQ7, 50L6, 35Z5 still ring a bell with anyone?

Re:Power Hog (0)

Anonymous Coward | more than 2 years ago | (#37392906)

No, old guy. You telling us to get off your lawn yet?

Re:Power Hog (0)

Anonymous Coward | more than 2 years ago | (#37392946)

The standard 5-tube AC/DC tabletop box, no less!

In the days of SAGE, the Semi-Automatic Ground Environment, memory cycle times measured in tens of microseconds were considered state of the art. Some of them were in tens of milliseconds! Nevertheless, that stuff was adequate for developing the newer stuff we enjoy today. Anybody want to venture a guess as to what computing will be like by 2050?

Re:Power Hog (1)

anubi (640541) | more than 2 years ago | (#37393272)

The standard 5-tube AC/DC tabletop box, no less!

Bingo!

I was wondering if anyone out there in Slashdot land was aware of such ancient technology and still lived to tell about it. (No insult intended... just respect)

Re:Power Hog (1)

gmanterry (1141623) | more than 2 years ago | (#37404846)

I repaired 5-tube radios for spending money when I was in high school in the mid-fifties. Fond memories.

Re:Power Hog (0)

Anonymous Coward | more than 2 years ago | (#37393412)

I think you got your microseconds [wikipedia.org] and milliseconds [wikipedia.org] backwards.

Re:Power Hog (0)

Anonymous Coward | more than 2 years ago | (#37395674)

So, microseconds are smaller than milliseconds?

State of the art (the best) was measured in the smaller unit (smaller = faster = better), while stuff that was not state of the art (but still damn impressive at the time) was measured in the larger unit.

Can another AC please weigh in and tell us who read what wrong?

Re:Power Hog (2)

maxwell demon (590494) | more than 2 years ago | (#37396672)

Anybody want to venture a guess as to what computing will be like by 2050?

The standard computer will be one you carry around. It will have the power of today's supercomputers, but a battery life of a full month. However, if you hold it wrong, it won't get a network connection. :-)

Re:Power Hog (1)

Bertie (87778) | more than 2 years ago | (#37393432)

I remember reading many moons ago that Colossus was able to do code-breaking in a couple of hours that a Pentium II-class machine would take a day and a half to do. The beauty of designing towards a single purpose, I suppose.

Not surprising (0)

Anonymous Coward | more than 2 years ago | (#37393826)

This is totally unsurprising to me. Both SAGE and Colossus were designed to do something really specific. The 286 was designed to be much more general-purpose. Therefore it's no surprise that SAGE and Colossus can do their jobs faster, but give them any other task and the 286 would win hands down.

Re:Power Hog (0)

Anonymous Coward | more than 2 years ago | (#37396032)

Because it was massively, massively parallel, and more electro-mechanical than electronic.

Re:Power Hog (0)

Anonymous Coward | more than 2 years ago | (#37393452)

No, no... 6J6 variants were most common: common-cathode twin triodes with high transconductance and low Rp.

Re:Power Hog (0)

Anonymous Coward | more than 2 years ago | (#37393616)

Thanks for the trip down memory lane. A 50L6 audio output, 35Z5 rectifier, 12SA7 converter, and a couple of other 12-volt tubes made up the standard superhet table radio. The filaments were wired in series across the 120 V supply.

Re:Power Hog (1)

veektor (545483) | more than 2 years ago | (#37393766)

Do the designations 12SA7, 12SK7, 12SQ7, 50L6, 35Z5 still ring a bell with anyone?

Sounds like the tube line-up of an All-American 5 tube radio of the octal tube socket era. K1LT

Re:Power Hog (1)

hamster_nz (656572) | more than 2 years ago | (#37394440)

From http://www.computermuseum.li/Testpage/IBM-SAGE-computer.htm [computermuseum.li]

Technical Description

Size: CPU (50 x 150 feet, each); consoles area (25 x 50 feet) (total system=20,000 square feet)

Weight: 250 tons (500,000 lbs)

Architecture: duplex CPU, no interrupts, 4 index registers, Real Time Clock

Word Length: 32 bits

Memory: magnetic core (4 x 64K word); Magnetic Drum (150K word); 4 IBM Model 729 Magnetic Tape Drives (~100K words ea.); all systems with parity checking

Memory Cycle Time: 6 µs

I/O: CRT display, keyboard, light gun, realtime serial data (teletype, 1300 bps modem, voice line)

Performance: 75 KIPS (single-address)

Technology: vacuum tubes (60,000); diodes (175,000); transistors (13,000)

Power Consumption: about 3 Megawatts

Re:Power Hog (1)

aix tom (902140) | more than 2 years ago | (#37399176)

The "light gun" made me curious.

That's some cool tech [nih.gov], even if the plug is almost as big as the gun. ;-)

Re:Power Hog (1)

anubi (640541) | more than 2 years ago | (#37405888)

I would imagine the "light gun" was a photocell held against the face of the display CRT which would respond when the area "shot" by the gun was illuminated.

Due to the nature of a CRT, only the phosphor area addressed by the current in its deflection coils will be illuminated, thereby giving the computer a pulse when it directs the beam to the area the gun operator is "shooting".

We used to build these things for our old IMSAIs and Altairs, as we didn't have mice yet, trackballs were terribly expensive, and 2N5777 phototransistors could be had for less than a buck.

Re:Power Hog (1)

jnork (1307843) | more than 2 years ago | (#37395224)

Some. I have some Dynaco stuff I keep meaning to rebuild. It's a shame the good KT88s aren't available any more...

Re:Power Hog (1)

nusuth (520833) | more than 2 years ago | (#37396278)

The idea that one could implement a vacuum-tube machine performing at 286 levels is, to me, a miracle in itself. A 6502, maybe; but even the lowly 286 represents a level of sophistication I could not imagine being implemented with vacuum-tube technology.

There is no miracle: the machine is about 20 times slower than a 286, going by the KIPS value in hamster_nz's sibling post. It is not general-purpose, either.

Re:Power Hog (0)

Anonymous Coward | more than 2 years ago | (#37397586)

As a guitar player, I recognize many of those. I see them when I'm shopping for real tubes (6L6, 12AX7).

On a side note, I own a bunch of 12AT7 and 12AY7 NOS tubes. They were in my grandfather's attic when he died, along with tons of other electronic stuff. No one knew he was a tinkerer.

Not 286 performance, but close given the time (2)

Quila (201335) | more than 2 years ago | (#37398092)

55,000 tubes vs. 134,000 transistors

Had 256 KB + 16 KB RAM vs. the 512-640 KB common in the 286

75,000 instructions per second vs. 1.2 million (@6 MHz)

SAGE used 52 of them, half online at a time, geographically dispersed, all working on tracking aircraft. But they did communicate with each other, so you might consider this a 1,950,000 instructions per second cluster, beating the first 286s that came out around the time SAGE was stood down.
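For scale, a rough instructions-per-joule comparison can be built from the figures in this thread. The 286's power draw is an assumed ballpark (roughly 3 W, chip only), and comparing a bare chip against a whole installation overstates the gap, so treat this as illustration rather than measurement.

<ecode>
# SAGE figures from the posts above; the 286 wattage is an assumption.
sage_ips, sage_watts = 75_000, 3_000_000   # 75 KIPS, ~3 MW per installation
i286_ips, i286_watts = 1_200_000, 3.0      # 1.2 MIPS @ 6 MHz, ~3 W (assumed)

sage_eff = sage_ips / sage_watts           # ~0.025 instructions per joule
i286_eff = i286_ips / i286_watts           # ~400,000 instructions per joule
print(f"ratio: {i286_eff / sage_eff:,.0f}x")  # ~16,000,000x
</ecode>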

Re:Power Hog (1)

geekoid (135745) | more than 2 years ago | (#37404312)

Well, how many multi-megawatt computers HAD you looked at at the time?

Because there isn't anything magical about the 286.

Re:Power Hog (1)

bar-agent (698856) | more than 2 years ago | (#37392972)

These vacuum tube machines had CPU horsepower probably in the same ballpark as an 80286, but could draw more than 2 megawatts of power each.

Surprisingly, according to the computations-per-kWh chart, transistor computers weren't all that much more efficient than vacuum tube computers. For example, the Commodore 64 is about the same distance below the best-fit line as one of the Univac II entries is.

Re:Power Hog (1)

ChrisMaple (607946) | more than 2 years ago | (#37393512)

You completely misunderstand the chart. A given distance from the line represents relative efficiency for a given year. Absolute efficiency is the vertical axis. The Commodore 64 and other semiconductor computers are newer than tube computers like the Univac II, and more energy efficient.

Tubes are inherently energy hogs. You've got to have at least 50 volts between plate and cathode to get anything close to acceptable performance, and the filament draws a substantial fraction of a watt.

Theoretical limits? (1)

Hatta (162192) | more than 2 years ago | (#37392470)

Is there a limit to how efficient calculation can get? Is there some minimum amount of energy required to do one computation? How do you measure "computations" anyway, and what units would you use? Bits? Inverse bits?

Re:Theoretical limits? (4, Informative)

PaulBu (473180) | more than 2 years ago | (#37392522)

Yes, there is, if you "erase" intermediate results -- look up the 'von Neumann-Landauer limit': at least kT*ln(2) of energy must be dissipated per bit erased in non-reversible computation.

Reversible computation can theoretically approach zero energy dissipation.

Wikipedia is your friend! :)

Paul B.

Re:Theoretical limits? (1)

Khashishi (775369) | more than 2 years ago | (#37392608)

Well, then, we just need to be operating near zero temperature.

True, but... (4, Interesting)

PaulBu (473180) | more than 2 years ago | (#37392654)

I do not think you get net energy savings (using the same basic technology, e.g., CMOS at room temperature or "cold") once you take into account the fact that cooling things down also costs energy! For example, liquid-helium refrigeration costs about 1 kW of wall-outlet power to compensate for 1 W dissipated at 4.2 K.
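To see why the wall-plug figure is so large, compare it with the thermodynamic floor. A quick sketch, using only the Carnot limit and the 1 kW/W figure quoted above:

<ecode>
# Ideal (Carnot) refrigerator: work input needed per watt removed at 4.2 K.
T_cold, T_hot = 4.2, 300.0                # kelvin
carnot_cop = T_cold / (T_hot - T_cold)    # heat moved per unit work, ideally
ideal_w_per_w = 1 / carnot_cop
print(ideal_w_per_w)                      # ~70 W input per W removed, at best
print(ideal_w_per_w / 1000)               # ~0.07: a 1 kW/W plant runs at ~7% of Carnot
</ecode>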

Changing your basic technology to, e.g., some version of superconductor-based logic can help (a lot!), current state of the art (in my very biased opinion, since I am cheering for those guys, and have been involved in related research for years) is here: http://spectrum.ieee.org/semiconductors/design/superconductor-logic-goes-lowpower [ieee.org] ...

Paul B.

Re:True, but... (1)

marcosdumay (620877) | more than 2 years ago | (#37393884)

Send that computer into space, and with big enough radiators you'll have no ongoing energy cost to keep it just above 3 K. Of course, by the time we get anywhere near that limit, somebody can spend some time thinking about how to launch (or manufacture in space) such a computer...

I've seen somebody cite some higher, clock-dependent limit, although I can't remember its name, nor did I understand where it came from when I saw it.

Re:True, but... (0)

Anonymous Coward | more than 2 years ago | (#37394744)

Tell us Sherlock, what would transport the heat away from the computer's surface in vacuum?

Re:True, but... (1)

Michael Woodhams (112247) | more than 2 years ago | (#37395286)

Tell us Sherlock, what would transport the heat away from the computer's surface in vacuum?

Radiation would.

However, cooling to 3 kelvin isn't quite so straightforward as the grandparent makes out. You have to transport the heat from your computer to the radiators, which requires either a temperature gradient or work to pump the heat. If the radiators are at (say) 4 kelvin, the rate at which they radiate that energy is going to be very slow, so you're going to need a lot of radiator surface area per watt.

Stefan-Boltzmann law: the power emitted by a black body is
P = 5.67e-8 A T^4
where P is in watts, A is surface area in m^2, and T is in kelvin. So at T=4K we emit just 1.5e-5 W/m^2. (This isn't accounting for the fact that the background is at 2.7 K rather than 0; accounting for it drops the cooling power by about 3e-6 W/m^2.) The good news is that T^4 means you get a huge improvement in power for a modest increase in temperature. For example, at T=10K we get 5e-4 W/m^2; at T=20K, 8e-3 W/m^2; at 300K, 460 W/m^2.

You can analyse cooling of heat sources in a very similar manner to electricity*. Heat power is analogous to current, temperature to voltage, and we have a heat resistance (degrees/Watt) analogous to electrical resistance (volts/amp). The lower the heat resistance between your heat source and your cold sink, the cooler your heat source. The problem with a 1m^2 radiator is that it has high heat resistance, so you need a huge number in parallel (huge surface area) to cool your source to a few degrees above the cold sink. In this example, the universe is your cold sink. For your computer, the air in the room is your cold sink**.

* in the absence of heat pumps.
** in the vast majority of cases.
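Plugging in the numbers above as a quick check (black-body emissivity of 1 assumed, 2.7 K background subtracted):

<ecode>
# Net power radiated per square metre by a black body at T kelvin
# facing a 2.7 K sky (Stefan-Boltzmann law).
SIGMA = 5.67e-8  # W / (m^2 K^4)

def net_radiated(T, T_bg=2.7):
    return SIGMA * (T**4 - T_bg**4)

for T in (4, 10, 20, 300):
    print(f"{T:3d} K: {net_radiated(T):.2e} W/m^2")
# 4 K: ~1.2e-05 (the 1.5e-5 emitted, minus the ~3e-6 background correction)
# 300 K: ~4.6e+02
</ecode>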

Re:True, but... (0)

Anonymous Coward | more than 2 years ago | (#37395468)

Don't bother. The idea that information management has an energy cost is the only thing that prevents Maxwell's demon from disproving the second law of thermodynamics. If you are anywhere near showing that you can perform calculations at zero energy cost, you will have to fight through hordes of "In this house we obey the laws of thermodynamics!" before you can get your point through.

Re:Theoretical limits? (1)

SnarfQuest (469614) | more than 2 years ago | (#37392848)

Using extreme cooling methods will not improve the energy efficiency. It would probably, if anything, make it worse.

You still must expend X units of energy doing the calculations, and changing the method of moving the heat thus generated away from the circuit does not change the fact that the energy was expended. I also doubt that extreme temperatures will improve the efficiency of a circuit to any useful degree.

Re:Theoretical limits? (0)

Anonymous Coward | more than 2 years ago | (#37393294)

"I also doubt that extreme temperatures will improve the efficiency of a circuit to any useful degree."

Semiconducting vs. superconducting makes a big difference...

Re:Theoretical limits? (0)

Anonymous Coward | more than 2 years ago | (#37394328)

Or they could just use a massively parallel quantum computer with room temperature superconductors powered by a zero-point module.

It sounded like they were considering operating existing hardware at lowered temperatures, not replacing the whole computer with something entirely different. Why didn't he say so at the beginning? Apparently he wanted to compare the ENIAC with a superconducting version of the Alpha running at 4000 GHz, or whatever existing superconducting CPU you have running at home.

Re:Theoretical limits? (4, Informative)

bunratty (545641) | more than 2 years ago | (#37392788)

Yes, reversible computation can theoretically approach zero energy dissipation, but if you use no energy, the computation is just as likely to run forwards as backwards. You still need to consume energy to get the computation to make progress in one direction or the other. Richard Feynman has a good description of this idea in his Lectures on Computation.

Re:Theoretical limits? (0)

Anonymous Coward | more than 2 years ago | (#37393448)

> Wikipedia is your friend! :)

And so is the late, great Richard Feynman. In his book "Lectures On Computation" he describes this limit with beautiful clarity.

Re:Theoretical limits? (1)

compro01 (777531) | more than 2 years ago | (#37392560)

The Landauer limit gives a lower bound on how much energy it takes to change a bit: kT*ln2, where k is the Boltzmann constant and T is the circuit temperature in kelvins.

So, near absolute zero, somewhere on the order of a yoctajoule per bit change.
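Plugging in numbers makes the scale concrete (CODATA value of the Boltzmann constant; the temperatures are chosen for illustration):

<ecode>
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_joules(T_kelvin):
    """Minimum energy dissipated to erase one bit at temperature T."""
    return k_B * T_kelvin * math.log(2)

print(landauer_joules(300))  # ~2.9e-21 J per bit at room temperature
print(landauer_joules(0.1))  # ~9.6e-25 J per bit at 0.1 K
</ecode>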

Re:Theoretical limits? (1)

danlip (737336) | more than 2 years ago | (#37392588)

I think you meant yocto, but seriously, who uses those prefixes? I had to look it up. 10^-24 is much clearer.

Re:Theoretical limits? (0)

Anonymous Coward | more than 2 years ago | (#37392602)

It depends on the logic gates used in the chip architecture. If it's Intel or AMD, then yes, there is a limit. To use as little power as possible you need a chip architecture built on reversible gates. It was proven in theory years ago but never developed into a useful product, because all chip designs would have had to be thrown out and restarted from scratch, breaking all x86 programs in the process; this was before computers were powerful enough for emulation. Intel was very interested until they learned they would lose everything they had invested in x86 designs.

Re:Theoretical limits? (3, Informative)

MajroMax (112652) | more than 2 years ago | (#37392614)

Without reversible computing [wikipedia.org], there is indeed a fundamental lower limit on how much energy a computation takes. In short, "erasing" one bit of data adds entropy to a system, so it must dissipate kT ln 2 of energy as heat. This is an extremely odd intersection between the information-theoretic notion of entropy and the physical notion of entropy.

Since the energy is only required when information is erased, reversible computing can get around this requirement. Aside from basic physics-level problems with building these logic gates, the problem with reversible computing is that it effectively requires keeping each intermediate result. Still, once we get down to anywhere close to the kT ln 2 physical constraint, reversible logic is going to look very attractive.
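A toy illustration of the reversible-gate idea, using the standard Toffoli (CCNOT) construction; this is a sketch of the principle, not of any particular hardware proposal:

<ecode>
from itertools import product

# Toffoli gate: (a, b, c) -> (a, b, c XOR (a AND b)).
# With c = 0 the third output is a AND b, yet the inputs survive,
# so no information is erased and the map is invertible.
def toffoli(a, b, c):
    return a, b, c ^ (a & b)

outputs = {toffoli(*bits) for bits in product((0, 1), repeat=3)}
assert len(outputs) == 8                         # a bijection on 3 bits
assert toffoli(*toffoli(1, 1, 0)) == (1, 1, 0)   # it is its own inverse
</ecode>

The cost is visible even in this toy: the AND of a and b comes bundled with copies of a and b - exactly the intermediate results that reversible logic has to keep around.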

Re:Theoretical limits? (1)

anubi (640541) | more than 2 years ago | (#37392940)

What amazes me is the computation done in biological systems.

When I consider the amount of correlation and replication done by RNA/DNA systems, I am left in the dust, wondering just what happened.

Re:Theoretical limits? (2)

Smidge204 (605297) | more than 2 years ago | (#37393118)

What amazes me is the computation done in biological systems.

When I consider the amount of correlation and replication done by RNA/DNA systems, I am left in the dust, wondering just what happened.

I'm not sure I would classify a polymerization as a "computation." Even then the RNA transcription rate is on the order of ~50 nucleotides per second or so, which isn't all that stunning. The only thing that's really impressive is how interdependent the chemical reactions are, and how sensitive the whole system is.

Don't be fooled by the DNA :: Computer Code analogy - it is very, very wrong.
=Smidge=

Re:Theoretical limits? (2)

inode_buddha (576844) | more than 2 years ago | (#37393170)

What amazes me is the computation done in biological systems.

When I consider the amount of correlation and replication done by RNA/DNA systems, I am left in the dust, wondering just what happened.

Most likely what just happened is you got laid.

Re:Theoretical limits? (2)

Kjella (173770) | more than 2 years ago | (#37395462)

The thing is, even if we could do the whole calculation using reversible computing, then what? If we start over on a new and completely different calculation, we can't use any of the previous intermediaries, and if we clear them -- either before or during the next calculation -- then we've just spent as much energy as doing it the non-reversible way. Reusing past calculations, or lookup tables that are really cached results, is something we already do in many algorithms, so each calculation is likely to be necessary, and then I don't see how reversible computing will do anything but fill the computer with useless intermediaries. We just delay the energy use until we somehow dispose of or reset them.

Private sector (-1)

Anonymous Coward | more than 2 years ago | (#37392534)

While the private sector is working on doubling energy efficiency every couple of years, or maybe doubling the packing density of transistors, the government is working on doubling the debt, the trade deficit, and unemployment, while putting inflation on a different kind of curve, one that may end up looking exponential at some point, and increasing bureaucracy in ways that could be considered hyperbolic, or maybe just functions of power.

The differences are stark.

Re:Private sector (0)

MrEricSir (398214) | more than 2 years ago | (#37392558)

The private sector seems to be doing pretty well at getting in debt and causing unemployment as well. Just look at BofA, Cisco, etc.

Re:Private sector (0)

Anonymous Coward | more than 2 years ago | (#37392762)

Bank of America is "private sector"? Not with all the subsidies, mandates, protections, regulations, moral hazards and government insurance. Cisco? It was always overpriced and it ate well with government contracts.

Re:Private sector (1)

MrEricSir (398214) | more than 2 years ago | (#37392824)

Oh, so those examples are invalid because they're involved with the government, whereas Intel is a valid example despite government contracts?

You are an example of confirmation bias in action.

Re:Private sector (0)

Anonymous Coward | more than 2 years ago | (#37392852)

Did Intel get free money to gamble with while getting regulated to sell mortgages without paying attention to risk as banks did? Did Intel get a bail out?

Private companies CAN go into debt and fail. There IS a difference between those who make stupid or just wrong decisions and fail and lose investor money and those who do stupid things because of government regulations, moral hazards, free money and protection and when they fail they get bail outs.

It is OK for a company to fail. It's not OK for it to get bail outs.

Private companies DO create wealth, as shown by this story and all other stories. Governments do NOT create wealth; they create debt and economic destruction.

Re:Private sector (1)

MrEricSir (398214) | more than 2 years ago | (#37392966)

How does any of this apply to Cisco?

Governments do NOT create wealth; they create debt and economic destruction.

Then how did the USSR have a GDP > 0?

Re:Private sector (0)

Anonymous Coward | more than 2 years ago | (#37397338)

The USSR's GDP was impossible to calculate; it was using slave labor, which is what you do when you pay with fake money.

The USA today is no different - currency is printed, not generated via investment and savings from deferred consumption. Anything the Fed buys with the fiat it prints is stolen, because the Federal Reserve note is not backed by anything, so the 'note', as an IOU, a promise to give something real for the dollars, is an empty promise. They'll give you nothing for it, which means it's counterfeit, and anything bought with the currency printed by the Fed is stolen - be it assets or labor.

By the way, Schiff just testified in front of Congressional Jobs Committee. [slashdot.org]

video [youtube.com] (his participation starts at minute 31 and goes on till the end); here are cuts to his stuff only [youtube.com], and here is his presentation given to the committee as a PDF [house.gov] (the video is very interesting in itself; it goes well beyond his text presentation). My journal entry on it from some time ago [slashdot.org].

This is the guy who understood the crisis and explained it well before it struck, and who understands the next one. [youtube.com]

Re:Private sector (1)

MrEricSir (398214) | more than 2 years ago | (#37403876)

I think you might want to loosen it, that tinfoil hat seems to be cutting off the circulation to your brain.

Re:Private sector (1)

damburger (981828) | more than 2 years ago | (#37392626)

Yeah, because exponential growth is ALWAYS a good sustainable strategy. Especially exponential growth in something like efficiency, which is capped by the very laws of physics.

Re:Private sector (1)

anubi (640541) | more than 2 years ago | (#37393012)

Consider the logistic equation: dQ(t)/dt = Q(t) * (1 - Q(t)). [wikipedia.org]

What it models is a rate of resource extraction proportional to how much of the resource you have already consumed times how much you have left. As the amount you consume grows, your capacity to extract the remaining resource grows, but only to a point - beyond that, there is an ever-dwindling resource left.

It looks exponential at first; then, surprise, it bends over and decays.

This equation has been found to model many depletional phenomena, such as oil well production or the growth of yeast in a petri dish.

I would be very wary of predicting future growth in a finite system based on the ascending part of what is likely a logistic (sigmoid) curve...
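A quick numerical illustration of that shape (forward-Euler integration with an arbitrary small step; parameters chosen only to make the sigmoid visible):

<ecode>
# Integrate dQ/dt = Q(1 - Q) and watch the near-exponential start
# bend over into a sigmoid.
Q, dt = 0.01, 0.01
for step in range(1, 1501):
    Q += Q * (1 - Q) * dt
    if step % 300 == 0:
        print(f"t = {step * dt:4.1f}   Q = {Q:.4f}")
# Q roughly doubles every 0.7 time units at first, then saturates toward 1.
</ecode>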

Re:Private sector (1)

Smidge204 (605297) | more than 2 years ago | (#37393142)

The government is the one who paid (and is paying) for most of the basic research that allows the increase in efficiency and density - and the government is also one of, if not the, biggest customer of the final products.

That is true for far more than just the computer industry, too.
=Smidge=

Transistor count, not computing "power" (0)

Anonymous Coward | more than 2 years ago | (#37392550)

I read the first paragraph of TFA, and what do I see? An incorrect statement of Moore's Law. The correct Moore's Law is that transistor count doubles every 18 months, not this vague and misleading notion of computing "power". I understand where the "computing power" version came from and its usefulness for people without a technical background, but this article is for technically minded people and should use the correct definition.

Golden Girls! (-1)

Anonymous Coward | more than 2 years ago | (#37392556)

Thank you for being a friend
Traveled down the road and back again
Your heart is true, you're a pal and a cosmonaut.

And if you threw a party
Invited everyone you ever knew
You would see the biggest gift would be from me
And the card attached would say, thank you for being a friend.

Battery Size/Efficiency? (1)

DJRumpy (1345787) | more than 2 years ago | (#37392566)

Does this take into account the miniaturization of electronics and the associated increase in battery size? We're seeing this in many mobile platforms. I'm curious if this is taken into account when they consider 'battery life' while possibly ignoring that batteries themselves may be more efficient or simply larger due to more space in the enclosure.

Re:Battery Size/Efficiency? (1)

Bob-taro (996889) | more than 2 years ago | (#37392684)

Does this take into account the miniaturization of electronics and the associated increase in battery size? We're seeing this in many mobile platforms. I'm curious if this is taken into account when they consider 'battery life' while possibly ignoring that batteries themselves may be more efficient or simply larger due to more space in the enclosure.

Errr, I'm not sure what your point was, but it is interesting that even as devices like laptops get more efficient (more computations per unit energy), we make them do *so* much more computing that they still require more power and bigger/better batteries.

Re:Battery Size/Efficiency? (1)

Anti_Climax (447121) | more than 2 years ago | (#37395550)

It's important to note that a large amount of the power in a portable computer is expended on things other than performing calculations. Your LCD probably consumes more energy than your processor - heck, if I leave wifi off on my cell phone, more than 90% of my battery consumption is from the OLED screen. Add in a portable's spinning disks, wifi radio, and various other bits, and you have a system where, even if the processor were 2x as energy efficient, you'd barely reach a double-digit percentage saving in overall energy. Granted, battery tech is getting better and other components are getting more efficient as well, but nowhere near an 18-month exponential rate.
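The arithmetic behind that point is easy to sketch; the component shares below are assumptions for illustration, not measurements of any particular device:

<ecode>
# If the CPU is a modest slice of the power budget, doubling its
# efficiency barely dents the total draw.
budget = {"screen": 0.60, "cpu": 0.15, "radios": 0.15, "storage": 0.10}

new_total = sum(budget.values()) - budget["cpu"] / 2  # CPU now 2x as efficient
print(f"{new_total:.1%} of previous power")           # 92.5%: a ~7.5% saving
</ecode>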

Altair: "news for nerds" (-1)

Anonymous Coward | more than 2 years ago | (#37392634)

Has this site strayed so far from "news for NERDS" that it has to explain what an Altair 8800 is? Haven't many or most of us used or owned one? It was only one of the most popular systems of its day.

What's next, explaining Commodore 64's? Apple II's?

Re:Altair: "news for nerds" (0)

Anonymous Coward | more than 2 years ago | (#37392742)

Maybe because not all nerds are over 45.

The Cray-1... (1)

Gazoogleheimer (1466831) | more than 2 years ago | (#37392644)

The Cray-1 was ECL. The Altair 8800 was TTL. We're now CMOS, but I wouldn't mind an ECL i7, despite the Fluorinert waterfall... (My real point is that there were very serious differences between the Altair 8800 and the Cray-1, beyond the obvious, which lead to significant differences in power dissipation... and speed.)

Additionally, the other thing this article doesn't take into account is the preponderance of battery-powered modern devices -- before, power consumption wasn't really much of a consideration (plus, now it's marketing!).

Re:The Cray-1... (0)

Anonymous Coward | more than 2 years ago | (#37392758)

The data in the graph is showing that, per unit of energy, their ability to compute is similar. That does not say anything about speed, throughput, or idle power consumption of the systems.

Re:The Cray-1... (2)

blair1q (305137) | more than 2 years ago | (#37392760)

There's an even more obvious difference.

The Cray-1 is sitting half a division above the line. As that's a logarithmic ordinate, the Cray is putting out about 3X as many calculations per kWh as the on-the-line entrants.

The Altair-8800 is sitting right on the line, being non-impressive to its contemporaries, while the Cray is blasting them with its laser vision and eating nothing but salads.

Re:The Cray-1... (0)

Anonymous Coward | more than 2 years ago | (#37392970)

Thank you. I was wondering if anyone was going to bring up the "order of magnitude" difference.

whither? (0)

Anonymous Coward | more than 2 years ago | (#37392726)

dictionary.com says:
adverb:
1. to what place? where?
2. to what end, point, action, or the like? to what?
conjunction:
3. to which place.
4. to whatever place.

To what place, Moore's law? Where, Moore's law? To which place, Moore's law? To whatever place, Moore's law?
No, none of those makes sense. WTF? Is this "use an unusual word" day?

Whither thou goest, so goest I.

Re:whither? (0)

Anonymous Coward | more than 2 years ago | (#37393106)

Assuming he doesn't have the word confused with "wither" - i.e., "Shrivel and die, Moore's Law; Introducing Koomey's Law" - this is a legitimate use of the word. However, a question mark, not a semicolon, is required. Per the New Oxford American Dictionary:

whither -
. . . . to what place or state . . . .
what is the likely future of: whither modern architecture?

More examples:

E-book Sales Top 90%;
Whither the Paperback?

New Plastic Trumpets Developed
Whither the "Brass" Section? [referring to the brass section of an orchestra or band]

Nonsense (1)

bloggerhater (2439270) | more than 2 years ago | (#37392792)

Nonsense. What kind of fixed load did they define? How does this fixed load utilize available system resources? I could define a code payload targeted at technologies present in early-90s Pentium CPUs, then run this code on a modern machine for a much greater overall gap in efficiency, producing any target number I want and thus confirming or wildly disproving this law. This hardly qualifies as a constant, let alone a "law." There are just too many factors involved to make any kind of statement like this. Moore's law isn't wrong... it just didn't take into account all the variables. Neither does Koomey's.

Re:Nonsense (1)

geekoid (135745) | more than 2 years ago | (#37404350)

Moore's law is done. Dead. Demised. It has shuffled off this mortal coil... as even Moore expected.

You completely misunderstood what the submitter was saying. However, you did make the same mistake he implied reporters would make.

Classic.

The Cray and the altair had a race.... (0)

Anonymous Coward | more than 2 years ago | (#37392844)

I hope the submitter realizes that it is a logarithmic scale, and that by my best guesstimate reading of the graph, the Cray was about 5 times more efficient than the Altair.

They may indeed "sit next to each other," but that doesn't in this case imply comparable efficiency.

Re:The Cray and the altair had a race.... (1)

emurphy42 (631808) | more than 2 years ago | (#37393116)

Yes, the difference between the top and bottom of the barrel at any given moment is significant (perhaps about 2 orders of magnitude, assuming that the points shown are typical), but the difference between the barrel now and the barrel in 10-15 years is about equally significant. That same Cray was 5 times less efficient than the IBM PC (about 5 years later), and about 1 million times less efficient than your typical modern laptop (about 35 years later).

This is such an absurd point (4, Insightful)

terraformer (617565) | more than 2 years ago | (#37393320)

It's the inverse of Moore's law so yeah, duh....

If your compute power doubles on the same-size die every 1.5 years, then halving the die size while keeping the compute power the same cuts the power in half. This is a very well known phenomenon, and Koomey is doing what he has done for a while: making headlines with little substance and lots of flair.

That Microsoft and Intel paid for this research calls into question what it was they were actually paying for.

Re:This is such an absurd point (1)

LittlePud (1356157) | more than 2 years ago | (#37393664)

I had the same initial reaction when I read TFA. If I had any points I'd mod you up.

Re:This is such an absurd point (0)

Anonymous Coward | more than 2 years ago | (#37394392)

Moore's law is about the cost of transistors. Koomey's law is about energy per computation. Shrinking the die size has an effect on both the cost of transistors and joules per computation, that much is true. It is not true that Moore's law is only due to shrinking die sizes, it is not true that Koomey's law is only due to shrinking die sizes and it is not true that Koomey's law is the inverse of Moore's law. It is logically possible for Moore's law to hold while Koomey's law does not and vice versa.

Re:This is such an absurd point (2)

danhaas (891773) | more than 2 years ago | (#37397192)

With advanced chip refrigeration, like impinging-jet or phase-change cooling, you can achieve very high flops per unit area. The power consumption, though, increases a lot.

Moron - it's a logarithmic scale (0)

Anonymous Coward | more than 2 years ago | (#37393774)

The Altair 8800 is nowhere near the Cray-1. True, they're separated by something less than an order of magnitude (perhaps a factor of 5 or so), but the Altair isn't exactly near the Cray. It'd be fair to say, however, that the Altair is about the same as the PDP-11/20. Learn to read the graph.

Trivial consequence of Moore's law (1)

purplie (610402) | more than 2 years ago | (#37394010)

Isn't this a trivial consequence of Moore's law, if we interpret the latter to mean exponential growth of (computations/time), and additionally make the very reasonable assumption that users' tolerance for power consumption (energy/time) is more or less constant?
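That reading can be written out in a few lines; this is a sketch of the poster's argument, not of Koomey's actual methodology:

<ecode>
# If performance P(t) = P0 * 2**(t / 1.5) (one reading of Moore's law)
# and the tolerated power draw W stays constant, then efficiency P/W
# doubles on exactly the same 1.5-year schedule.
P0, W = 1.0, 1.0
for t in (0, 1.5, 3.0, 4.5):
    print(t, (P0 * 2 ** (t / 1.5)) / W)   # 1.0, 2.0, 4.0, 8.0
</ecode>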

Re:Trivial consequence of Moore's law (1)

erice (13380) | more than 2 years ago | (#37402074)

Isn't this a trivial consequence of Moore's law, if we interpret the latter to mean exponential growth of (computations/time), and additionally make the very reasonable assumption that users' tolerance for power consumption (energy/time) is more or less constant?

Not really. Moore's law actually says nothing about computation. It is about transistor count and cost; it is just that we have come to expect a similar relationship for the end performance delivered by those transistors. I think this result may have more to do with changes in how those transistors are allocated. The fraction of transistors directly involved in computation is shrinking, and I expect those transistors to be rather active and power-hungry. Memory, which has come to dominate modern chips, uses less energy. Modern chips also have much larger and more elaborate support structures - buses, arbiters, control logic, etc. It is unclear how these parts contribute to power efficiency.

Energy Used Creating Efficiency (2)

user flynn (236683) | more than 2 years ago | (#37394600)

What about the energy used creating efficiency?

Are we experiencing an increase in efficiency?

OR

Are we expending ever-increasing amounts of energy creating the appearance of efficiency?

Where are all the interesting CPUs? (0)

Anonymous Coward | more than 2 years ago | (#37395718)

What about the Intel Atom, Nvidia Tesla, Transmeta, ARM, MIPS, TI MSP430, PicoChip, XMOS, Ubicom32, Hyperstone, ...?
The recent entries listed there are pretty boring.

Cray != Altair? (1)

L4t3r4lu5 (1216702) | more than 2 years ago | (#37396078)

I've not read the article (in true Slashdot fashion), but I'm taking issue with the statement "An early hobbyist computer, the Altair 8800, sits right near the Cray-1 supercomputer of the same era" from the stub. Really? Is that meant to be insightful? They're from the same era, so the same research went into getting both to the same point. The Cray has many more CPUs of the same generation as the Altair, so it uses a lot more power. Am I supposed to be surprised by this?

Either way, I don't really see an application for this "law" other than as benchmarking progress. Neat trend to notice, but not exactly useful.

Re:Cray != Altair? (1)

geekoid (135745) | more than 2 years ago | (#37404410)

Benchmarking efficiency: getting as close to Landauer's principle as possible.

So, the same computing power using less electricity.

Reading the post is confusing (1)

hesaigo999ca (786966) | more than 2 years ago | (#37397928)

Are they saying that because Moore's law is slightly off, and someone has proved it is half a year off, we should rename the law after this new scientist?
If I proved something different about the theory of relativity, does that mean Einstein is any less the creator of that theory?
I would hope not...
