
Server Power Consumption Doubled Over Past 5 Years

Zonk posted more than 7 years ago | from the lets-get-that-fusion-thing-going-huh dept.

Power 148

Watt's up writes "A new study shows an alarming increase in server power consumption over the past five years. In the US, servers (including cooling equipment) consumed 1.2% of all electricity in 2005, up from 0.6% in 2000. The trend is similar worldwide. 'If current trends continue, server electricity usage will jump 40 percent by 2010, driven in part by the rise of cheap blade servers, which increase overall power use faster than larger ones. Virtualization and consolidation of servers will work against this trend, though, and it's difficult to predict what will happen as data centers increasingly standardize on power-efficient chips.'" We also had a recent discussion of power consumption in consumer PCs that you might find interesting.


148 comments

poppycock! (0, Redundant)

President_Camacho (1063384) | more than 7 years ago | (#18044134)

That's complete nonsense! In the future, all computers will be a series of tubes, and computations will be done with water [mit.edu], not electricity!

Re:poppycock! (0)

Anonymous Coward | more than 7 years ago | (#18044382)

only if you live in a country with oil.

Re:poppycock! (1)

ergo98 (9391) | more than 7 years ago | (#18044696)

computations will be done with water [mit.edu], not electricity

Do you think they have to water cool those?

Seriously, though, here's a great article on power consumption, though in this case in the home [yafla.com]. This was linked from another recent slashdot article on power consumption [slashdot.org].

Of course, I am biased.

Re:poppycock! (1)

Gospodin (547743) | more than 7 years ago | (#18045436)

If you use water to cool electricity-based computing, wouldn't it make sense to use electricity to cool water-based computing? Where's your head, man?!

ATTENTION DENIZONS OF SLASHDOT (-1, Redundant)

Anonymous Coward | more than 7 years ago | (#18044140)

Be part of history! Introducing a new internet meme, now used for only the second time on Slashdot! Spread the word, post it in every story you see, where appropriate of course! And without further ado, here it is:

"But can I trap my foreskin in it?" (on topic: will my foreskin cause an alarming increase in server power consumption?)

Misleading Title (0)

Anonymous Coward | more than 7 years ago | (#18044160)

One is led to believe that individual servers are using more power, whereas the article indicates that more servers are being deployed.

Re:Misleading Title (1)

Technician (215283) | more than 7 years ago | (#18044480)

One is led to believe that individual servers are using more power, whereas the article indicates that more servers are being deployed.

I wonder if anyone has bothered to do a study of server power consumption per teraflop or per web page served? In the same time frame, how much has the number of servers increased, and how many transactions per second does each server perform?

Unlike cars, which have decreased gas consumption per vehicle by only about 20% on average while carrying the same number of passengers per vehicle, I think servers on average provide a lot more performance on a lot less power. There are just enough more of them (guessing a 10X increase) to increase the total electrical load (2X).

Inconvenient Truth (5, Funny)

JusticeISaid (946884) | more than 7 years ago | (#18044162)

Well, I blame Al Gore ... for inventing the Internet in the first place.

Blasphemy! (2, Funny)

dextromulous (627459) | more than 7 years ago | (#18044432)

How dare you blame the man who has ridden the mighty moon worm!

Re:Inconvenient Truth (0, Offtopic)

Secwind (990088) | more than 7 years ago | (#18044904)

your 300gig account here [gmail.com]

Re:Inconvenient Truth (1)

IflyRC (956454) | more than 7 years ago | (#18045364)

Well, you certainly can't blame the SUV on this one.

Re:Inconvenient Truth (1)

timeOday (582209) | more than 7 years ago | (#18046038)

It's a bogus statistic anyway, just another liberal whacko with his panties in a bunch. Did you RTFA? "US servers now use more electricity than color TVs." Clearly they're scrambling to invent an impressive statistic by discounting black & white televisions. There must be thousands of those still out there! We might as well just give up making computers more efficient.

That's because (1)

Jeremiah Cornelius (137) | more than 7 years ago | (#18044164)

There's a Gizoogle [blogspot.com] of new machines on line!

Re:That's because (0)

Anonymous Coward | more than 7 years ago | (#18046870)

Well, to be blunt: Google does consume a ton of power overall, not per machine. Their current stock of production machines has ~200 watt power supplies. That isn't much, but stack 22,000 of them in a data center and that is a lot of wattage.

PS: I am a former Google Tech.

Server consumption doubles? (2, Insightful)

bilbravo (763359) | more than 7 years ago | (#18044172)

Nah... the figure doubled. I'm sure the overall power consumption in the US (or elsewhere) has not lessened while servers have doubled.
 
Nitpicking, I know...

Re:Server consumption doubles? (2, Insightful)

bilbravo (763359) | more than 7 years ago | (#18044244)

Hit submit, not preview...

I wanted to add, I'm sure that means the number has more than doubled; I'm sure power consumption has grown, so if the percentage doubled, that needs to be multiplied by whatever factor energy consumption OVERALL has increased.

I got too excited about my nitpicking to post my actual thought.

Re:Server consumption doubles? (1)

timeOday (582209) | more than 7 years ago | (#18046086)

Another nitpick: they claim servers use more electricity than TVs. But looking at the graph, half the electricity they're counting for the servers is cooling. Did they count the electricity used to cool the TVs? It might sound silly, since we don't think about "cooling" TVs, but if you're running AC, any appliance you use adds to the heat burden.

water logic (0)

cpearson (809811) | more than 7 years ago | (#18044238)

Wasn't there just a /. article about water-powered logic? ... oh yeah, there was [slashdot.org]

Vista Help Forum [vistahelpforum.com]

Re:water logic (0)

Anonymous Coward | more than 7 years ago | (#18044602)

Well, I have signatures disabled, but I still see this link for Vista Help Forum at the bottom of your post. Now let me ask, why is that? Because you don't realize there are signatures? Or more likely because you want to fucking spam everybody on Slashdot with your retarded site?

Time to add this assknob to your foes list and never read anything that comes out of his keyboard again, folks.

What is so alarming? (1, Funny)

Anonymous Coward | more than 7 years ago | (#18044310)

By definition the 0.6% increase in the fraction of electricity used by servers was matched by a 0.6% decrease in the fraction used by everything else, so everything is good.

Doubled in six years? (0, Redundant)

SNR monkey (1021747) | more than 7 years ago | (#18044356)

I'm sure I read on Wikipedia the other day that server power consumption has tripled in the last six months.

Solution (3, Interesting)

Ziest (143204) | more than 7 years ago | (#18044380)

48 volt DC. Why the hell are we still putting 110 AC into the power supply and stepping it down to 24 volt DC? And what do you get when you do that? HEAT. And to compensate for not having a better power system, you then get to spend a fortune on HVAC to cool the room that you heat by stepping down the voltage. 110 power supplies make sense in the home, but in a data center it is stupid.

Re:Solution (2, Insightful)

Anonymous Coward | more than 7 years ago | (#18044496)

The frustrating part is that some of the equipment has that ability built in; it's just not standardized enough to be used. A bunch of our Cisco gear has a plug for backup power, and we had some DEC equipment years back that did, but they were different plugs and different voltages. If it were standardized, life would be good.
I think what it would take is for UPS manufacturers to standardize a set of voltages (12, 5, 3.3 perhaps) and a plug, so that it would be very easy to replace standard power supplies with a standard DC-in power supply.

Re:Solution (1)

Ungrounded Lightning (62228) | more than 7 years ago | (#18044752)

A bunch of our Cisco gear has a plug for backup power, and we had some DEC equipment years back that did, but they were different plugs and different voltages. If it were standardized, life would be good.

So switch to Redback gear. It can all be powered by telco-standard 48VDC supplies. B-)

Re:Solution (1)

Chandon Seldon (43083) | more than 7 years ago | (#18044512)

So how do you get 12 volt, 5 volt, 3.3 volt, and 1.5 volt DC from that?

48 VDC (1)

Beryllium Sphere(tm) (193358) | more than 7 years ago | (#18044768)

With DC-DC downconverters, which also generate heat (and potentially EMI).

Re:Solution (3, Interesting)

Ungrounded Lightning (62228) | more than 7 years ago | (#18044968)

So how do you get 12 volt, 5 volt, 3.3 volt, and 1.5 volt DC from that?

High-efficiency switching regulators on the blades. (They're actually getting so good that you have less heat loss by putting a local switcher near a power-hungry chip than by bringing its high current in at its low voltages through the PC-board power planes.)

Getting the raw AC->DC conversion out of the way outside the air-conditioned environment saves you a bunch of heat load, as does distributing at a relatively high voltage (such as "relay-rack" standard 48VDC) to reduce I-squared-R losses. And switchers are more efficient with higher raw DC supplies, so going to 48V (about the highest you can while avoiding a touch-it-and-die shock hazard - which is why Bell standardized on it) is much better than 12 or 24.
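To make the I-squared-R point concrete, here's a minimal Python sketch (my own illustration; the cable resistance is a made-up example value, not a telco spec):

# Resistive distribution loss for the same delivered power at
# different DC bus voltages. R is a hypothetical round-trip value.

def cable_loss_watts(power_w, volts, resistance_ohms):
    current = power_w / volts                # I = P / V
    return current ** 2 * resistance_ohms    # P_loss = I^2 * R

R = 0.05  # ohms, hypothetical round-trip cable resistance
for v in (12, 24, 48):
    print(v, "V bus:", round(cable_loss_watts(500, v, R), 1), "W lost")

Quadrupling the voltage (12 -> 48 V) cuts the current by 4x and the I-squared-R loss by 16x, which is why 48 V beats 12 or 24 for distribution runs.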

Re:Solution (1)

WorseThanNormal (1033014) | more than 7 years ago | (#18045264)

Except you shouldn't have to. As one of the founders of Google pointed out about 3 months ago, most if not all of the components in a PC could be designed to run off a common voltage. The only reasons they don't are historical. A lot of the power loss and heat generation is caused not just by converting the 110V AC to 48V DC, but by then converting that down to all the lesser voltages. Converting the components to use a common voltage wouldn't solve the problem, but it would decrease it and would provide a smoother transition. We wouldn't have to worry about the power infrastructure (which needs to be upgraded as well), and it would provide significant power savings.

Or I could be wrong.

Re:Solution (3, Informative)

Wesley Felter (138342) | more than 7 years ago | (#18045532)

As one of the founders of Google pointed out about 3 months ago, most if not all the compnents in a PC could be designed to run off a common voltage. The only reasons they don't are historical.

That's not what the Google paper said. It proposed that power supplies should output only 12V and motherboards should contain many DC-DC converters to generate voltages needed by chips. As chip fabrication technology changes, newer chips need lower voltages to operate optimally (not to mention that lower voltage = lower power); since different chips in a computer are made with different technologies, they need different voltages ranging from 1.8V down to 1.0V.

I am writing my congressman now! (1, Funny)

Anonymous Coward | more than 7 years ago | (#18046740)

I will request that he submit a bill to MANDATE that all electronic devices run from a single voltage, say Pi Volts.

There's no problem in the world that can't be made better by the simple stroke of a pen, right?

Re:Solution (3, Interesting)

hackstraw (262471) | more than 7 years ago | (#18046990)

I didn't read the Google paper (or the FA for that matter), but while we are on the subject, this is something that I don't understand.

Why do servers have AC-to-DC power supplies at all? I don't know about you, but I have my servers on UPSes. So the whole thing goes from AC at the wall, to DC in the UPS for the batteries, then from DC back to AC into the computer, where it converts it back to DC.

I'm not an EE, but why can't AC come from the wall into the UPS and then the UPS spit out DC to the computer?

Granted, the UPS power supply needs to be redundant, because power supplies are the 2nd most likely thing to fail in servers after disks, but what am I missing here? I know there are telco-grade computers that take DC, but these are not available in many options and are typically lower-end boxes. But to me, all of these additional conversions to AC and DC and back again, with the added likelihood of a failure anywhere in the chain, seem a bit non-optimal.

Re:Solution (1)

skelly33 (891182) | more than 7 years ago | (#18044836)

DC power would be dandy if it weren't cost prohibitive to convert older, massive, well-established operating systems to it. And small incremental additions to such an existing, large installation don't justify the added expense of DC power on their own. As a result, it's not so easy for data centers to do this conversion. If it were able to pay for itself in 8 weeks, you might see more activity...

Re:Solution (2, Informative)

NerveGas (168686) | more than 7 years ago | (#18045744)

I wasn't aware that operating systems really cared which voltage was powering the hardware... :-)

While individual systems may vary, I've noticed that the older the facility where I was working, the more likely they were to have DC power - since the facilities were "telco" before they were "telecom", and most telco stuff is DC. Even in newer datacenters, it's only the small outfits that haven't had DC, most of the larger ones have had DC available.

Re:Solution (1)

hutchike (837402) | more than 7 years ago | (#18045130)

Feb 15th: Rackable Systems granted patent for DC power to server racks. [rackable.com]

"...The patented designs, released in 2003, leverage step-down power converters and alternating current (AC) to direct current (DC) power converters--commonly known as rectifiers--to distribute DC power to systems inside a server cabinet. Failover protection may be achieved by replacing a standard AC power supply with a highly reliable DC power card. Additionally, a secondary voltage step-down may be used within each system. This novel method of power distribution may occur inside or outside of the server cabinet, allowing for the flexibility to provide DC power to either a single cabinet or entire row of cabinets populated with systems..."

Fing Amazing (1)

WindBourne (631190) | more than 7 years ago | (#18045418)

The RBOCs have been doing this forever with a lot of their equipment. I wonder how many more insane patents will be granted.

Holy Cow Batman, we're beaten by Captain Obvious (0)

Anonymous Coward | more than 7 years ago | (#18045784)

What a fucking trip. [slashdot.org]

I guess I should hurry up and submit my patent application for making server cases out of plastic and metal.

Re:Solution (5, Insightful)

NerveGas (168686) | more than 7 years ago | (#18045676)


Get a grip on reality.

Even if you switch to 48V DC, you still have to convert 120 VAC to 48 V DC, then down to 12/5/3.3/1.x volts for motors and logic, so all you're doing is moving the conversion from a decentralized setup (a power supply in each computer) to a centralized one (a single large power supply). In the end, however, you still have to get from 120 down to around 1 volt for the CPU, and you're not going to suddenly make an order-of-magnitude change in the efficiency of that - or even near a doubling.

To keep it in perspective, though, there are vastly overshadowing losses which make the small differences in centralized/decentralized conversion efficiency moot. Your 120 VAC leg is probably coming from a 440 VAC lead coming into the building and going through a very large transformer to get 120 VAC - and the 440 VAC that comes in is coming from a much higher voltage that was converted down at least once (and perhaps more) after being transmitted very long distances. The losses in all of that are much, much higher than the losses in conversion that you mention.

Sure, if you could generate and transmit a nice, smooth, regulated 48V DC from the power station to your computer, that would be great - but that's so unfeasible that you might as well wish for a pink unicorn while you're at it.
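A back-of-the-envelope Python sketch of that point (every stage efficiency below is a hypothetical round number, not a measurement):

# End-to-end efficiency is the product of every conversion stage,
# so improving one stage only helps in proportion to its own losses.
from math import prod

grid_stages = [0.97, 0.98]   # HV transmission + 440->120 VAC transformer
ac_psu      = [0.75]         # typical circa-2007 in-server AC->DC supply
dc_path     = [0.92, 0.90]   # centralized AC->48VDC + on-board DC-DC

print("per-server AC supplies:", round(prod(grid_stages + ac_psu), 2))  # ~0.71
print("48V DC distribution:  ", round(prod(grid_stages + dc_path), 2))  # ~0.79

The DC path helps at the margin, but the upstream grid and transformer losses are shared by both paths, so nothing close to a doubling of efficiency is on the table.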

Re:Solution (1)

vertinox (846076) | more than 7 years ago | (#18046282)

Sure, if you could generate and transmit a nice, smooth, regulated 48V DC from the power station to your computer, that would be great - but that's so unfeasable that you might as well wish for a pink unicorn while you're at it.

I wouldn't put it past Google to have already considered building their own power generation facilities.

Actually doing it, on the other hand, is another question in itself.

Re:Solution (0)

Anonymous Coward | more than 7 years ago | (#18046370)

While most of what you say is true, you can still save a lot by doing that 120->48 V conversion outside the building, saving the trouble of having to cool that step of the conversion. AC costs are a large part of total energy expense for data centers. You can make a sizable dent in that by not having all those hot power supplies inside the data center.

Re:Solution (0)

Anonymous Coward | more than 7 years ago | (#18046398)

You don't need to carry it from the power station. You do it locally at the data center with a large rectifier/transformer that can run more efficiently than normal computer power supplies. Then the power supply in the computer saves a step and is left with converting the 48 VDC input to the necessary hardware voltages (12, 5, 3.3, 1.5) without rectifying.

Google released a whitepaper on it a few months ago (they're very interested in server power consumption; you should see the data center they're building over here in The Dalles, OR). You're right that it wouldn't be anywhere near a halving of power consumption. I think they cited around a 1% improvement. For them, that equates to tens of millions of dollars per year.

Somebody pointed out an added benefit that most backup systems are standardized around 48V battery stacks. This saves a step there, too.

Re:Solution (1)

Breakfast Pants (323698) | more than 7 years ago | (#18046906)

Actually, stepping down AC voltage is one of the most efficient energy conversion processes man has ever produced (we can get up to ~99.75% efficiency).

Re:Solution (0)

Anonymous Coward | more than 7 years ago | (#18046818)

Aside from cable loss? See what happens when you need to run 48VDC for 100 feet on a 20 amp circuit...

Ohhh, and then there is a simple equation:

P = I x E

Ahh, yes, and 48VDC is made from AC to begin with. In short, it is cheaper to use AC supplies because you're not burning up a ton of watts in cable loss.
The hype for DC is that it can be made more reliable, because you can stack batteries on the DC bus.
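Working the 48 VDC / 100 ft / 20 A example above in Python, using the standard wire-table figure of roughly 1.0 ohm per 1000 ft for 10 AWG copper (the load wattage is just V times I):

# Cable loss for 20 A at 48 VDC over a 100 ft run (200 ft round trip).
ohms_per_ft = 1.0 / 1000      # 10 AWG copper, approximate
r = 200 * ohms_per_ft         # ~0.2 ohm round trip
amps = 20.0

loss_w = amps ** 2 * r        # I^2 * R ~= 80 W burned in the cable
drop_v = amps * r             # ~4 V, over 8% of the 48 V bus
print(loss_w, "W lost,", drop_v, "V drop")

# The same ~960 W load fed at 120 VAC draws only 8 A, so the loss in
# that same cable falls to about 13 W -- the poster's point exactly.

So over any real distance, low-voltage DC pays a steep I-squared-R tax unless you use very heavy copper.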

Re:Solution (1)

Agripa (139780) | more than 7 years ago | (#18047070)

48 volts DC is indeed too low because of the resistive losses. The server DC distribution standard being considered is actually much higher with 380 volts DC being a major candidate. I presume they are looking at a large scale active power factor corrected boost converter with input voltages from 208 to 277 volts AC outside of the server room.

The servers are actually doing something (2, Informative)

mdsolar (1045926) | more than 7 years ago | (#18044390)

Maybe, if they are sending out data. The standby power use of TVs and such is greater.

Sun's David Douglas, VP Eco Responsibility, estimates that the cost of running computers (power use) will exceed the cost of buying computers in about 5 years: http://www.ase.org/uploaded_files/geed_2007/douglas_sun.pdf [ase.org]. This site has more (mainly corporate) musings on energy efficiency: http://www.ase.org/content/article/detail/3531 [ase.org].
--
Get abundant, get solar. http://mdsolar.blogspot.com/2007/01/slashdot-users-selling-solar.html [blogspot.com]

Re:The servers are actually doing something (2, Funny)

fred fleenblat (463628) | more than 7 years ago | (#18044844)

>> Sun's David Douglas, VP Eco Responsibility, estimates that the
>> cost of running computers (power use) will exceed the cost of
>> buying computers in about 5 years

I think if you're running Linux/Intel, it's already the case.
Maybe the cost of Sun's hardware is so high that the problem is still 5 years out for them.

Re:The servers are actually doing something (1)

mdsolar (1045926) | more than 7 years ago | (#18045520)

You might be right. I know that old Suns can last a very, very long time, but boy did they cost when they were new.

Moore's law (4, Insightful)

k3v0 (592611) | more than 7 years ago | (#18044434)

Considering that processing power has more than doubled over that amount of time, it would seem that we are still getting more bang per watt than before.

"Alarming" increase in "alarming" statistics (4, Insightful)

G4from128k (686170) | more than 7 years ago | (#18044456)

Why does this alarm anyone and is it even really true? Several factors conspire to make this statistic both bogus and unalarming.

1. More computers are classed as "servers." I'd bet that before, many workgroup and corporate IT computers and mainframes weren't classed as "servers." It's the trend toward hosted services, web farms, ASPs, etc. that is moving more computers from dispersed offices to concentrated server farms.

2. More of the economy runs on servers - this would be like issuing a report during the industrial revolution that power consumption by factories increased at an "alarming" rate. Moreover, I'd wager that a good chunk of that server power is paid for by exporting internet and IT-related services.

3. Electricity is only a small fraction of U.S. energy consumption. Most of the energy (about 2/3) goes into transportation (of atoms, not bits).

It's only natural and proper that server power consumption should rise with the increasing use of the internet in global commerce. This report should be cause for celebration, not cause for alarm. (But then celebration doesn't sell news, does it?)

Re:"Alarming" increase in "alarming" statistics (5, Insightful)

fred fleenblat (463628) | more than 7 years ago | (#18044784)

More to the point energy-wise, people using those servers (for on-line shopping, telecommuting, etc.) are saving tons of energy by not driving to the store, the mall, or the office to accomplish everything.

Re:"Alarming" increase in "alarming" statistics (1)

G4from128k (686170) | more than 7 years ago | (#18044950)

Excellent point! The key is not "how much energy is item X using?" but how wisely is that energy being used? Moving bits is greener than moving atoms. YouTube or Bit Torrent is thousands of times "greener" than driving to Blockbuster. Using online banking is greener than driving to the ATM or mailing a check.

Re:"Alarming" increase in "alarming" statistics (2, Interesting)

AusIV (950840) | more than 7 years ago | (#18045220)

I agree. Server power consumption may have doubled over the past 5 years, but what has the increase in data throughput been? Using a mutilated version of Moore's law, I'll assume that each server is doubling its throughput every 18 months. 5 years is 60 months, so each server should have doubled 3 and 1/3 times, meaning each server is over 8 times more productive than it was 5 years ago (it's closer to 10, but we'll round down, as I'm trying to make this a conservative estimate).

It's also safe to say that there are "more" servers than there were 5 years ago, but I'm not even going to venture a guess as to how many more. Assuming we have the exact same number of servers we did 5 years ago, we'd be processing 8 times more data on twice the power - 4 times more data per kilowatt-hour - meaning the cost of processing data has fallen by 75%. My estimates of data throughput may be high, and my server quantity estimate is definitely low, but I'm guessing 75% is a low-end estimate.
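The arithmetic behind that estimate, spelled out as a quick Python check (the 18-month doubling is the poster's assumption, not data from the study):

months = 60
per_server_gain = 2 ** (months / 18)   # ~10.1x raw Moore's-law growth
conservative    = 2 ** 3               # round down to 3 full doublings = 8x

power_growth = 2                       # consumption doubled per the study
data_per_kwh = conservative / power_growth   # = 4x more data per kWh
cost_drop = 1 - 1 / data_per_kwh             # = 0.75, i.e. 75% cheaper

print(per_server_gain, conservative, data_per_kwh, cost_drop)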

Even if we are using more energy, we're getting more bang for our buck. I'd rather have data traveling through servers than on planes and trucks in the form of mail. I'd rather have documents be stored in mass on hard drives than have millions of pages of paper going to waste. Suggesting that this increase of power consumption is alarming is absurd.

Re:"Alarming" increase in "alarming" statistics (1)

Da Fokka (94074) | more than 7 years ago | (#18045686)

Suggesting that this increase of power consumption is alarming is absurd.
I don't think there is anyone here who claims that. However, this does mean that server power is a more suitable candidate to look to for potential energy savings now than it was 5 years ago.

Re:"Alarming" increase in "alarming" statistics (1)

AusIV (950840) | more than 7 years ago | (#18047082)

Suggesting that this increase of power consumption is alarming is absurd.
I don't think there is anyone here who claims that.
From the summary:

A new study shows an alarming increase in server power consumption over the past five years

Computers are powerhogs (1)

tsa (15680) | more than 7 years ago | (#18044466)

In the average home, the refrigerator was the biggest power consumer for a long time. Now that place has been taken by the computer. Computers at home can be switched off when not in use, but for a server this is hardly possible. I'm not a computer hardware designer, but I am curious in what ways the power consumption of computers can be reduced. Using better cooling equipment? Using another semiconductor than silicon for the CPU? Or a radical change in the design of the CPU or other components? Are there experts here who can elaborate on this?

Re:Computers are powerhogs (2, Informative)

Technician (215283) | more than 7 years ago | (#18044786)

Using another semiconductor than silicon for the CPU? Or a radical change in the design of the CPU or orther components? Are there experts here who can elaborate on this?

Performance per watt is a biggie for chip manufacturers. Having a less-than-10-watt server chip is possible, but who wants to use a Palm Pilot for a transaction server?

Having the performance to handle a slashdotting is what is needed in many servers. Performance is first; power consumption is second. That is why performance per watt is an important part of the chip design. Low power is not the main design item; high performance is. Providing that performance at the lowest power possible is the sweet spot chip designers aim for.

Here is additional reading. Look at what the Core 2 Duo and quad are bringing to the server market.
Please note the Woodcrest and Opteron are now obsolete. The Opteron was leading, but the new multi-core chips have started a new round in the performance-per-watt race.

http://www.computerworld.com/blogs/node/2160 [computerworld.com]

http://www.intel.com/performance/server/xeon/ppw.htm [intel.com]

http://www.supermicro.com/newsroom/pressreleases/2006/press081406.cfm [supermicro.com]

http://news.com.com/Chipmakers+admit+Your+power+may+vary/2100-1006_3-6082352.html [com.com]

Re:Computers are powerhogs (1)

Joe The Dragon (967727) | more than 7 years ago | (#18045306)

Core 2 may use less power, but FB-DIMMs eat a lot more than DDR2 ECC, and that is where AMD is better.

Re:Computers are powerhogs (1)

iamlucky13 (795185) | more than 7 years ago | (#18044866)

How about not buying Windows Vista?

All the time this power increase has been happening, chips have been getting more efficient (in terms of power per operation). However, they're also doing a lot more work. 6 years ago a typical new computer was something like a 700-1000 MHz Pentium III (except for the Celeron cheapies) with 128-256 MB of RAM. The computer I built myself this last Christmas is a 2.1 GHz dual core with a gig (for now) of RAM. That's 4-6 times the clock cycles (at 64 bits, no less) and 4-8 times the memory. Power supply capacity has only increased by something like 50%.

What am I saying? Every time hardware improves, we don't use it to cut down power or anything like that. We use it to increase the number of operations we perform. Instead of halving the power requirements of our computers every X months, we focus on doubling the performance.

And of course, Windows Vista is illustrative of this.

Re:Computers are powerhogs (0)

Anonymous Coward | more than 7 years ago | (#18044980)

Cooling equipment (i.e., fans) doesn't really require a whole lot of energy to run for your average desktop computer. The components that usually waste the most energy are those that *produce* the most heat (the CPU and the GPU). Ways to reduce energy consumption for a desktop computer would be:

Replace CRT monitors with an LCD display (this can save around 80 watts).

Use a smaller, more efficient CPU manufacturing technology (e.g., switch from a 90nm Pentium 4 to a 65nm Core 2 Duo).

Replace the mechanical hard-disk drive with a solid-state device (these are quite expensive currently, and have a small capacity, but they'll get bigger, faster, and cheaper in a couple of years).

Or just buy a laptop...

Re:Computers are powerhogs (0)

Anonymous Coward | more than 7 years ago | (#18045002)

Performance-per-watt has been steadily going up and is still going up, so nothing is wrong on that front. To save power what you need is to convince people to "compute less".

Credential revocation ? (1)

cavehobbit (652751) | more than 7 years ago | (#18044478)

This guy actually wrote:
"Virtualization and consolidation of servers will work against this trend, though, and it's difficult to predict what will happen"

Instead of just taking the current trend and projecting it into the future as an infinite progression, therefore concluding that the human race will end sometime near the end of the 21st century?

I think this guy just destroyed any career as a media pundit he may have been planning.

Better start checking out some of those Medical Billing or "Massage Therapy" classes that have taken over at former computer-tech trade schools.

Re:Credential revocation ? (1)

gregleimbeck (975759) | more than 7 years ago | (#18045370)

Virtualization and consolidation of servers will work against this trend, though, and it's difficult to predict what will happen
I predict that within 100 years, computers will be twice as powerful, 10,000 times larger, and so expensive that only the five richest kings in Europe will own them.

cheap blade servers... (2, Informative)

gavint (785035) | more than 7 years ago | (#18044572)

driven in part by the rise of cheap blade servers

Rubbish. One of the biggest myths in server sales today is that blades consume more power. If you fill racks full of them they consume more power per square metre of floor space, not per server. If you need the same number of servers they should consume less power, largely due to the centralised AC/DC conversion.

HP especially are working to make blades some of the most efficient servers on the market.

Re:cheap blade servers... (1)

eneville (745111) | more than 7 years ago | (#18045384)

driven in part by the rise of cheap blade servers

Rubbish. One of the biggest myths in server sales today is that blades consume more power. If you fill racks full of them they consume more power per square metre of floor space, not per server. If you need the same number of servers they should consume less power, largely due to the centralised AC/DC conversion.

HP especially are working to make blades some of the most efficient servers on the market.

I agree. The problem, I think, is that computers today are cheaper than before. It's so easy for people to think that they can improve uptime by distributing the processes across hardware.

IMO this reduces uptime, since there are more components involved. But that's just my opinion of it.

Re:cheap blade servers... (2, Funny)

timeOday (582209) | more than 7 years ago | (#18046206)

But compared to other computers, blade servers have a higher density of processors to other expenses (especially if you include server space), so your $100K buys more CPUs. I know it's certainly arguable which is the most relevant metric, but look at it this way: the ENIAC [bookrags.com] pulled 150,000 watts. Since computers are so much more efficient now, the total burden from computers must have fallen, right? Wrong. Because the economics now allow Google to run 200K computers (a guess, since it's a secret). Sure, if Google had to run on ENIACs, it would have to be plugged straight into the sun. But the total amount of computation "required" is not fixed after all.

Calibrate your BS detectors.. (4, Insightful)

stratjakt (596332) | more than 7 years ago | (#18044622)

"If current trends continue" is almost always followed by a fallacious argument. Current trends rarely continue. Be it world population, transistor density, climatology, and especially at the blackjack table.

Just pointing that out.

Re:Calibrate your BS detectors.. (1)

mcrbids (148650) | more than 7 years ago | (#18045058)

Current trends rarely continue. Be it world population, transistor density, climatology, and especially at the blackjack table.

Except that current trends have continued for 30 years in the case of Moore's law.

Re:Calibrate your BS detectors.. (1)

justthinkit (954982) | more than 7 years ago | (#18046832)

Actually, the world population juggernaut/trend _does_ continue. The current trend in this case is not a precise number of fleshy masses we force onto Mother Earth each day. The current trend is that we cannot afford the population we have, let alone the population we will have tomorrow, next year, next generation.

Re:Calibrate your BS detectors.. (1)

khallow (566160) | more than 7 years ago | (#18047206)

A good example. The UN (WHO, I think) has projected that the global population growth rate has slowed substantially over recent years. They even have a projection that it'll peak and start to decline somewhere around 2050. So world population is an example of what the grandparent was talking about.

Did you know disco record sales were up 300%? (2, Interesting)

Phanatic1a (413374) | more than 7 years ago | (#18044642)

In the US, servers (including cooling equipment) consumed 1.2% of all the electricity in 2005, up from 0.6% in 2000. The trend is similar worldwide. 'If current trends continue' ...then by the year 2100, server rooms and cooling equipment will consume over 300,000% of all the electricity!

Global Warming (1)

CitX (1048990) | more than 7 years ago | (#18044654)

I've always argued among my tech friends that the technology and computer industry, whether it is the manufacturing of the iPod (Greens agree with me here) or the world's servers (Google, etc.), will be the biggest cause of greenhouse gases besides autos in the future.

I always tease them that it is Google and Apple that are destroying the environment...:)

add 2 remove 1 (1)

sys_mast (452486) | more than 7 years ago | (#18044672)

One aspect that nobody seems to be talking about here is server room growth. Without actually looking up the numbers, I would say I've added 2 or 3 servers for every 1 server removed from the racks.
My point is that the MHz/watt (or whatever metric of computing power to electrical draw you want to use) is increasing, but NOT at the rate that technology improves. With each generation we continue to hold on to and use older and older equipment.
Of course, perhaps other shops have the type of budget to always replace older stuff with newer or more power-efficient equipment.

And how much energy did those computers save? (3, Interesting)

WoTG (610710) | more than 7 years ago | (#18044738)

It's not like we plug in computers to sit around idling all day. They're doing stuff. I can send an email anywhere on the planet instead of stuffing an envelope to have it carried by truck, boat, or plane. Cars have better power plants than ever before... they didn't get that way with back-of-the-envelope calculations! A lot of forms that I used to submit by fax or snail mail? All gone electronic.

So, computers are using more power than 5 years ago? Who cares? If it bothers you, then get off the grid and have fun in your cave.

Don't you see the looming crisis? (2, Funny)

NotQuiteReal (608241) | more than 7 years ago | (#18046864)

computers are using more power than 5 years ago

There's your problem, right there. You are thinking on such a short time scale. If you look back 100 years, the amount of electricity being used by computers is INFINITELY more than before. In no time at all, COMPUTERS WILL USE ALL THE ELECTRICITY IN THE UNIVERSE.

Clearly this is a problem. Think about it - those electrical cords have two wires. Electricity comes in one side, swirls around your computer for a bit, heating things up and showing you devil images, then it goes out the other wire, "to the ground", where SATAN lives. I am sure all that electricity "juice" is polluting our groundwater and causing all the hideous mutations we see today. Wasn't one of the earliest large-scale electric projects the TVA, and isn't Tennessee where Al Gore, the ARCHITECT of Global Warming, is from?

And who would benefit from Warm Weather - that's right, SATAN. And Nashville is the home of country music, associated with BANJO MUSIC, need I say more? Sure Al says he is against global warming, but that is just reverse psychology. There is a reason why clueless Tonight Show "Jaywalkers" didn't recognize his photo, but knew he was up to no good.

All the juice in the universe must be a lot - so... when it all seeps into the ground, WHAMO, the whole thing will burst open, unleashing the daemons of the netherworld (which has nothing to do with Netherlands, it's cold there, right now)!

Disclaimers:

All facts taken out of context on purpose.

Two animals were harmed making this post, well, just one, but I kicked it twice.

[omg, I will be SO modded down for this post, but damn, it is scary how easy it is to think like a loon.]

Well (1)

Quick Sick Nick (822060) | more than 7 years ago | (#18044742)

We got served.

Forced Change (1)

MountainLogic (92466) | more than 7 years ago | (#18044794)

Expect to see local government force data centers to get more efficient. Right now there are many moves afoot to reduce the amount of AC (that is, air conditioning, not alternating current) that can be provided to buildings. It will not take much of a push in this direction to make us start talking about "cooling-bound" data centers. For example, in Washington State and other states there are already limits on the amount of heating capacity (BTUs) per square foot, so this is a logical extension.

what do we expect. (1)

ak3ldama (554026) | more than 7 years ago | (#18044920)

Server deployments these days are placed into laterally expandable environments serving up Windows-based .NET server architectures or *nix-based AMP solutions, which are basically highly inefficient in terms of processing power. But everyone is happy, because it's reasonably easy to develop solutions on top of these frameworks. Here's some observed data as I hit my home computer with about one page refresh every one or two seconds:

[james@localhost ~]$ vmstat 2
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 232020 205076 124776 171632 0 0 9 38 17 1 4 10 85 0 0
0 0 232020 205076 124776 171640 0 0 0 20 1089 151 0 6 95 0 0
0 0 232020 204924 124776 171640 0 0 0 0 1097 273 20 6 74 0 0
3 0 232020 204924 124776 171640 0 0 0 0 1094 264 21 5 75 0 0
0 0 232020 204924 124776 171644 0 0 0 50 1101 261 21 5 74 0 0
0 0 232020 204924 124776 171644 0 0 0 0 1098 250 20 6 75 0 0
0 0 232020 204924 124776 171644 0 0 0 0 1105 294 31 5 63 0 0
1 0 232020 204924 124780 171644 0 0 0 42 1099 221 25 6 70 0 0
1 0 232020 204924 124780 171644 0 0 0 0 1097 242 15 5 79 0 0
0 0 232020 204924 124780 171644 0 0 0 124 1118 250 18 6 76 0 0
0 0 232020 204924 124784 171644 0 0 0 42 1099 221 20 5 76 0 0
0 0 232020 204932 124784 171644 0 0 0 0 1087 177 0 5 95 0 0
0 0 232020 204964 124788 171640 0 0 0 42 1090 195 1 6 94 0 0
[james@localhost ~]$

The font chopped up the nice fixed formatting, but what is seen is that on my reasonably fast single-core machine it used approximately 25% of the system's CPU power to serve up a small number of pages per second. This is with an installation of MediaWiki on Linux with MySQL, so it's possible that other solutions are worse, especially when having to render more complex HTML. I am not an expert in this field though, so feel free to add to or correct my statements.

I'm not a physicist but isn't heat the problem? (1)

gelfling (6534) | more than 7 years ago | (#18044930)

Won't heat become a much bigger problem before we get to the point where electricity is constrained? Rack servers are very dense from a BTU/sq ft perspective. Won't we bump against an inability to handle the cooling requirements if we double our power density per sq ft?
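For a sense of scale, here's a rough Python conversion from rack power to cooling load, using the standard factors of 3.412 BTU/hr per watt and 12,000 BTU/hr per "ton" of air conditioning (the rack wattages are hypothetical examples):

BTU_PER_WATT_HR = 3.412
BTU_PER_TON = 12000

for rack_watts in (5000, 10000):   # e.g. before and after doubling density
    btu_hr = rack_watts * BTU_PER_WATT_HR
    print(rack_watts, "W ->", round(btu_hr), "BTU/hr,",
          round(btu_hr / BTU_PER_TON, 1), "tons of cooling")

Every watt of server load is a watt of heat the HVAC plant has to move, so doubling power density per square foot doubles the cooling plant too.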

Re:I'm not a physicist but isn't heat the problem? (1)

Wesley Felter (138342) | more than 7 years ago | (#18045618)

Yes, in many cases cooling is the limit (unless you install rack water/air heat exchangers). Reducing power also reduces heat output, so either way you might as well do it.

But Other Efficiencies Are Gained (2, Informative)

DumbSwede (521261) | more than 7 years ago | (#18044984)

This tends to be the trend with any useful technology. As technologies become more cost effective and energy efficient the rise in demand outpaces the energy savings as the economic advantage they offer is more fully utilized. This happened first with steam powered devices, then automotive, then air travel.

While it may seem disturbing that computers are consuming a larger percentage of energy usage, one has to realize they probably more than offset their own energy use, by allowing other resources to be used more efficiently or by enabling other economic activity that discovers and distributes resources, energy among them.

Re:But Other Efficiencies Are Gained (1)

drinkypoo (153816) | more than 7 years ago | (#18046308)

As technologies become more cost effective and energy efficient the rise in demand outpaces the energy savings as the economic advantage they offer is more fully utilized. This happened first with steam powered devices, then automotive, then air travel.

Rail is more efficient than automobiles, but was still largely replaced by road. The problem with your statement (which does apply HERE) is that often political concerns trump practical ones. We take a step forward, we take a step back. (We take a step forward, we take a step back, a step forward, a step back, and we're doing the cha-cha.)

Re:But Other Efficiencies Are Gained (1)

smartyknickers (1053102) | more than 7 years ago | (#18046746)

Hmm - rail is more [energy] efficient in the purest sense, weight transported per unit-mile - but cars are far more efficient in most real-world scenarios: better for ad-hoc, door-to-door journeys, with no waiting around, no need for advance scheduling, and no indirect routes because that's where the tracks lie.

Rail is only more big-picture efficient in highly dense environments (city metro systems) or situations with regular pre-determined routes (heavy commercial traffic).

Don't worry about it. (1)

sam991 (995040) | more than 7 years ago | (#18044992)

Most companies seem to be using 5 year old equipment anyway.

We darkened the sky... (1)

Stanistani (808333) | more than 7 years ago | (#18045004)

When the machines in their lust for power exhaust the conventional sources... they will turn to the only source left... mankind.

Then we'll all have that inconvenient blue/red pill choice thingy.

Bullshit (2, Insightful)

MindStalker (22827) | more than 7 years ago | (#18045012)

Trend continues? That's like saying people have been using more 120W bulbs than when they used to use 60W bulbs; if this trend continues, everyone will be using 500W bulbs by 2015.

Yeah, computing has gotten cheaper and people are using more of it, but that's because the relative cost of powering it has remained cheap. Don't expect the trend to continue once it becomes expensive compared to other things.

Umm, No (1)

Eddi3 (1046882) | more than 7 years ago | (#18045138)

It's consuming twice the percentage of electricity. Electricity is always in higher demand, and new resources are always being added, which means the amount of electricity being supplied is going up. Ergo, servers are probably using something more like 3 or more times the amount of pure wattage; however, that's now only about twice the amount relative to the ever-increasing supply of electricity.

-Eddie

Quite believable (1)

rongage (237813) | more than 7 years ago | (#18045238)

I can attest to this personally.

I have several white-box servers in a co-lo that together with a good stiff tailwind draw about 4 amps total.

I also have several Dell 1950 and 2950 servers in a data center for my day job. Each one by itself draws about 3 amps (dual supply, 1.5 amps per supply, surging to 3 amps when one of the supplies is turned off for whatever reason). Granted, there are many more fans in the Dell servers than in my whitebox servers, but I have more storage in my whitebox servers.

Re:Quite believable (1)

Spoke (6112) | more than 7 years ago | (#18045466)

Did you measure power factor on all servers? How many is "several" white-box servers? The main power draw in most servers is the processor. I'd bet that "several" means 3-5 single-processor servers, and the Dells are probably dual quad-core boxes.

Dell servers load-balance incoming power over both PSUs, which is why power consumption spikes when you pull the power on one.

Did you measure power factor on your white-box servers? If they don't have power factor correction (preferably active PFC), they likely show up as 50-100% more load to the UPS they are on, and apparent power consumption goes up accordingly even though they are actually drawing less real power. Active PFC (standard on all Dell servers, unfortunately not on any desktop I've seen) is extremely important to have in PSUs for the datacenter.
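A quick Python sketch of why power factor matters for UPS sizing (the wattage and PF values are illustrative, not measurements of any particular server):

# A UPS has to supply apparent power (VA), not just real power (W):
# VA = W / PF, so a low power factor inflates the load the UPS sees.

def apparent_va(real_watts, power_factor):
    return real_watts / power_factor

server_watts = 300.0
print("active PFC, PF ~0.99:", round(apparent_va(server_watts, 0.99)))  # ~303 VA
print("no PFC,     PF ~0.65:", round(apparent_va(server_watts, 0.65)))  # ~462 VA

Same 300 W of real power, but the non-PFC box loads the UPS roughly 50% harder, which is where the 50-100% figure above comes from.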

Trends (5, Funny)

glwtta (532858) | more than 7 years ago | (#18045244)

"This baby is only six months old and she already has one head and two arms; if these trends continue, she'll have 4 heads and 8 arms by the time she's two!"

Ok, so power use doubled... (3, Interesting)

Aphrika (756248) | more than 7 years ago | (#18045340)

...but how much did performance increase by?

Most data centres already maxed out (1)

anticypher (48312) | more than 7 years ago | (#18045522)

The article is the usual tabloid trash which confuses the issues and has some strange tie-ins to Chernobyl in the hopes of spreading panic. Exactly what I've come to expect on /.

The story everywhere is that servers are getting more efficient, smaller, and more dense. This means that data centres all over Europe are at their capacity for supplying electricity and cooling, with lots of empty space they can't rent out. Even the newer centres designed a few years ago are having problems. I hear the same thing about centers in the U.S. The electricity companies are struggling to keep up for the data centres not directly on the main distribution grid, requiring the replacement of transformers and transmission lines. The smarter data centres build right next to major sub-stations/switching stations. Google got smart and are building a massive data centre right next to a hydro-electric dam at one of the main crossroads of the western US power grid.

Currently, customers with dense server racks are asking for 2.5 kVA per square metre, but there aren't any centres that can supply more than 1.5 kVA/m2. There are newer racks coming out that will require 3 kVA/m2 or more. For every watt (or BTU, if you count that way), there needs to be an equal amount of cooling, which is always slightly less efficient.

Many server farm companies are turning to hiring case modding "tuning" specialists who can build water cooled equipment, and lashing up entire floors full of servers that use more efficient cooling so they can put their cooling energy budget into powering the servers. Interesting to see a whole floor of servers without all the fan noise.

On the other side of the equation, consumers want more and more content. Broadband is taking off (well, except in some places [slashdot.org]), carrier bandwidth is keeping pace with more capacity at ever lower costs, but the content providers are running into a wall with being able to grow their server farms fast enough. As server capacity grows, the energy costs will grow, but the amount of content served will be significantly higher.

the AC

SGI's were wicked, power wise. (1)

haeger (85819) | more than 7 years ago | (#18045530)

From what I remember, one of the focus points of MIPS was a low footprint when it came to electrical power. I seem to recall that when AMDs and Intels were about 60W, the equivalent MIPS CPU was about 17W or so.
This was some years ago, so things are probably different now, but at the time this was a big selling point. Same computing power, lower electrical bill.

.haeger

Thats actually not bad (0)

Anonymous Coward | more than 7 years ago | (#18045728)

That's a pretty good figure considering servers power about 100% of the Internet! Hopefully when business figures out the power of telecommuting, there will be other energy reductions: gas, office heating/cooling, and pushing that damn turnstile!

In similarly misleading news: (1)

recursiv (324497) | more than 7 years ago | (#18045760)

Total XBox 360 power consumption has gone up inf% since 2000.

What about weak SFFs? (0)

Anonymous Coward | more than 7 years ago | (#18045918)

Disclaimer: this is an honest, ignorant question.

What about SoC-based SFF machines doing small things? Like the upcoming P-M-based thing Intel's got. Stick dual GbE ports on it, a PCI-X or PCI-E slot on a riser, and you might have something you could fit 2-4 of in a 1U space, as an alternative to virtualization and proprietary blade systems. Aside from Intel and AMD just not focusing on such things, what really gets in the way of this type of solution, and for what reason might it be inferior?

How about output? (1)

phorm (591458) | more than 7 years ago | (#18046022)

OK, so the input has increased by 2x. In terms of output, how do current servers compare to five years ago? If the output is only 1.5x and the power consumption doubles, that sucks. If you're getting 5-10x the output, then perhaps it's not really such a big deal. Through refining and better engineering, most things can be made smaller, faster, and more efficient over time, but there is still a point where efficiency and output diverge.

Strange units (1)

mgscheue (21096) | more than 7 years ago | (#18046314)

"The 2005 estimate shows that servers and associated equipment burned through 5 million kW of power"

Watts are a measure of power, which they correctly state, but it's odd to say that something "burned through 5 million kW" since, being power, that's a rate of energy use rather than an amount of energy (which should be in kilowatt-hours or joules).
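To spell out the units nit in Python: a kilowatt is a rate, and energy is that rate times a duration, assuming (hypothetically) the 5 million kW figure was sustained for all of 2005:

power_kw = 5_000_000        # the article's figure -- a rate, in kW
hours_in_year = 365 * 24    # 8,760 h

energy_kwh = power_kw * hours_in_year
print(energy_kwh)           # ~4.38e10, i.e. ~43.8 billion kWh per year

That kilowatt-hour (or joule) figure is presumably the quantity the article meant to report.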

And say one-third of that is for spam filtering (2, Informative)

justthinkit (954982) | more than 7 years ago | (#18046850)

A sickening thought, actually. Reminds me of when the Sudbury nickel smelter belched out 2% of the world's SO2.

overcapacity, spam, botnets (2, Interesting)

bcrowell (177657) | more than 7 years ago | (#18046862)

Personally, I use an insanely wasteful server because I don't have any choice. 99% of the time its CPU is 99% idle. However:
  • You can't get webhosting with good support and reliability unless you pay for the level of webhosting that gets you your own box.
  • I need my server to be able to stand up to a spike in demand caused by ten thousand spams hitting it in three seconds...
  • ... or 1000 ssh login requests in one minute from a bot searching for weak passwords...
  • ... or a brain-dead bot requesting the same 5 Mb pdf file 10,000 times in one hour, and sucking down 60 Mb worth of partial-content responses.
Similar deal with multi-core CPUs. People are talking about making desktop machines into the equivalent of a 1980 supercomputer, and one of the main justifications seems to be that anti-virus software can run all the time without affecting responsiveness. This is nuts. The internet and its protocols weren't designed for a world infested by Windows machines controlled by malware.