
EPA Sends Data Center Power Study to Congress

CmdrTaco posted more than 6 years ago | from the turns-out-we-use-a-lot-of-juice dept.


BDPrime writes "We've all been hearing ad nauseam about power and cooling issues in the data center. Now the EPA has issued a final report to Congress detailing the problem and what might be done to fix it. Most likely what will happen is the EPA will add servers and data centers to its Energy Star program. If you don't feel like reading the entire 133-page report, the 14-page executive summary is a little easier to get through."


first fish! (0)

Asshat_Nazi (946431) | more than 6 years ago | (#20130959)

have a nice warm cup of frothy piss...

HOW THE HELL IS THIS OFFTOPIC? (0)

Anonymous Coward | more than 6 years ago | (#20132081)

He mentioned a perfectly fuel efficient method to cool servers!

Summery (3, Funny)

Average_Joe_Sixpack (534373) | more than 6 years ago | (#20130989)

If you don't feel like reading the entire 133-page report, the 14-page executive summary is a little easier to get through.

Still too long. Can anyone reduce it to a single phrase or word? Thanks in advance

Re:Summery (1, Insightful)

Anonymous Coward | more than 6 years ago | (#20131049)

These forecasts indicate that unless energy efficiency is improved beyond current trends, the federal government's electricity cost for servers and data centers could be nearly $740 million annually by 2011, with a peak load of approximately 1.2 GW.
It then goes on to describe three scenarios that decrease this to various extents but require work and preparation.

Essentially, we're going to end up building 10 more power plants in the next 4 years because we're so fucking stupid that we can't take simple measures in our current data centers to make them even a little bit more efficient. If you ask me, energy is just too cheap. Put a cap on each facility's energy use and price everything over it exponentially. Then you'll see them start to listen to you.

Re:Summery (1)

gallwapa (909389) | more than 6 years ago | (#20133459)

Unfortunately, virtualization could probably help out here: I have a feeling a lot of the servers mentioned (although I admit I haven't RTFA) run Windows... I've seen gov't policies that say 1 app per Windows server because they [Windows apps] "don't play nicely" together. Wasn't there an article recently on this, containerization or whatever they called it?

Re:Summery (1)

OnlineAlias (828288) | more than 6 years ago | (#20134685)


Virtualization has put the super smack down on our datacenter. From 250 physical servers to 20...how's that for power savings?

Great scott! (4, Interesting)

Bacon Bits (926911) | more than 6 years ago | (#20131061)

Snipped from page 5:

These forecasts indicate that unless energy efficiency is improved beyond current trends, the federal government's electricity cost for servers and data centers could be nearly $740 million annually by 2011, with a peak load of approximately 1.2 GW.

Re:Great scott! (4, Funny)

nharmon (97591) | more than 6 years ago | (#20131249)

That amount of power can be easily generated with one DeLorean. I'm going back to sleep...

Re:Great scott! (1)

that IT girl (864406) | more than 6 years ago | (#20132681)

Don't you mean one bolt of lightning? :)

Re:Great scott! (1)

achbed (97139) | more than 6 years ago | (#20133363)

Either that or a few banana peels and a couple of cans o' beer.... meet Mr Fusion!

Unless... (0)

Anonymous Coward | more than 6 years ago | (#20132691)

You're speaking French. Then you'll need 2.21 [wikipedia.org].

Re:Great scott! (3, Insightful)

Frank T. Lofaro Jr. (142215) | more than 6 years ago | (#20131331)

$740 million? That's like 4.2 days of the Iraq war!
($177M/day for Iraq: http://www.usatoday.com/news/politicselections/nation/president/2004-08-26-iraq-war-clock_x.htm [usatoday.com])

That sounds like a big number, and is for most of us, but not for the Federal government. About 29 cents more in taxes off each paycheck (assuming 100 M taxpayers, and paychecks every 2 weeks).
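A quick check of that arithmetic, using the comment's own assumptions:

    annual_cost = 740e6          # projected federal electricity cost, from the report
    taxpayers = 100e6            # the comment's assumption
    paychecks_per_year = 26      # biweekly paychecks

    per_paycheck = annual_cost / (taxpayers * paychecks_per_year)
    print(f"${per_paycheck:.2f} per paycheck")   # ~$0.28, i.e. roughly the 29 cents above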

There are much bigger fish to fry.

Also, there is only so far one can cut energy use, and thus that cost, while still getting the business of the government done. And the improvements in efficiency will require hardware, software, and personnel, all of which have their own costs. Eventually you will hit a point where there is no longer a return on investment to make it worthwhile.

Re:Great scott! (1)

Burz (138833) | more than 6 years ago | (#20132095)

Notice that the report was issued by the EPA, and what seems like a trifling amount of money for the federal gov't could represent a very significant impact to the environment. On page 8 of the summary it projects the CO2 emissions that could be avoided under the different scenarios.

Re:Great scott! (0)

Anonymous Coward | more than 6 years ago | (#20131841)

So what's the problem? Everyone knows that by 2011 plutonium will be available at every corner drug store!

Our government, now powered by lightning (1)

GoNINzo (32266) | more than 6 years ago | (#20131979)

1.21 gigawatts? 1.21 gigawatts? Great Scott!

The only power source capable of generating 1.21 gigawatts of electricity is a bolt of lightning.

(Just reinforcing the reference. heh)

Re:Summery (1)

tgatliff (311583) | more than 6 years ago | (#20131091)

Maybe in a sentence: "Individual server power usage is embarrassing and it should be more efficient"...

So what happens now? Now we wait for a congressional committee meeting broadcast on C-SPAN where politicians can grandstand and talk about how it needs to change... Fortunately most politicians will treat this, as is typical, as a "black box", so they will not touch on technical details, but rather just complain... Maybe talk about a special "colo server" tax... Maybe throw in some global warming comments... Nothing more than this ever occurs...

What will change? Well, considering that Intel and AMD are already moving towards cooler chips, this will be fixed through time...

Re:Summery (1)

NeoTerra (986979) | more than 6 years ago | (#20131823)

Reduced to a single word...

Whirrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr.....

That help you out?

Re:Summery (0)

Anonymous Coward | more than 6 years ago | (#20131967)

SH*T!

Hot loads (1)

wsanders (114993) | more than 6 years ago | (#20132241)

Google for that and see what you get.

Grampa Simpson: (3, Funny)

obergfellja (947995) | more than 6 years ago | (#20130995)

"... EPA!!! EPA!!!"

Re:Grampa Simpson: (3, Funny)

Apocalypse111 (597674) | more than 6 years ago | (#20131063)

I have two buttons on my screen. One of these buttons will supply humorous moderations to your post. The other will release the hounds.

Plead your case.

Re:Grampa Simpson: (2, Funny)

obergfellja (947995) | more than 6 years ago | (#20131077)

Please open up and look into your heart...

Re:Grampa Simpson: (2, Funny)

TooMuchToDo (882796) | more than 6 years ago | (#20131837)

Mr Apocalypse, I ask you to look into your heart....

*runs from the hounds*

Mandatory Madonna reference (2, Funny)

Ancient_Hacker (751168) | more than 6 years ago | (#20131041)

Having any Govt investigate efficiency is about as practical as the Madonna Commission On Chastity and Modesty. Computers are doing just fine at reducing their power consumption by several percent a year without the govt's "help".

Re:Mandatory Madonna reference (5, Funny)

Nezer (92629) | more than 6 years ago | (#20131231)

Having any Govt investigate efficiency is about as practical as the Madonna Commission On Chastity and Modesty.


Which Madonna?

Re:Mandatory Madonna reference (0)

Anonymous Coward | more than 6 years ago | (#20131983)

Which Madonna?

One is the most famous mythical slut in history, the other a lousy pop singer.

Re:Mandatory Madonna reference (1)

Phreakiture (547094) | more than 6 years ago | (#20133491)

Which Madonna?

Madonna Ciccone [wikipedia.org] , I'm pretty sure.

Solution: A Giant Dome (-1, Offtopic)

Nova Express (100383) | more than 6 years ago | (#20131043)

Just put a giant dome over the data center. Prevents any waste heat from escaping out into the environment.

Now if the EPA could just address our nation's growing Spider Pig problem...

wow (3, Informative)

thatskinnyguy (1129515) | more than 6 years ago | (#20131051)

In 2006, U.S. data centers consumed an estimated 61 billion kilowatt-hours (kWh) of energy, which accounted for about 1.5% of the total electricity consumed in the U.S. that year.


Is that it? Seems like small potatoes to me.

Re:wow (4, Interesting)

jandrese (485) | more than 6 years ago | (#20131173)

1.5% of the total electricity used in the US per year is a huge number. It's like when politicians talk about something really expensive and say "oh, it's only 1% of our GDP" to make it sound not so bad; that works on everyone except people who know just how enormous the GDP of this country is.

More importantly, this could probably be reduced considerably without major disruptions or reduction in quality of service by just embracing higher efficiency components in our datacenter equipment (especially servers).

Re:wow (4, Informative)

Bacon Bits (926911) | more than 6 years ago | (#20131277)

It's an estimated 11,000,000 servers, in everything from 2-server closets to thousand-server enterprise centers. These 11 million systems consume more power than all the TV sets in the US combined, and there are more TV sets in the US today than people.

Or let's do it this way. Hoover Dam at peak output produces 2 Gigawatts of power per hour. 11 million servers consume 61 billion kWh annually. It takes Hoover Dam 30,000 hours (about 3.5 years) to produce that much power. So you need four Hoover Dams just to power all the data centers in the US.
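A quick sanity check of that arithmetic in Python, using the comment's own figures and assuming the dam could run at peak output year-round (the assumption the "Units" reply below pokes at):

    annual_consumption_kwh = 61e9    # US data centers, 2006 (EPA estimate)
    hoover_peak_kw = 2e6             # roughly 2 GW peak capacity
    hours_per_year = 8760

    annual_output_kwh = hoover_peak_kw * hours_per_year   # ~17.5 billion kWh if run flat out
    print(annual_consumption_kwh / annual_output_kwh)     # ~3.5, i.e. about four dams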

Re:wow (2, Informative)

miller60 (554835) | more than 6 years ago | (#20131493)

Actually, dams are serving as magnets for data center development, since hydro power is cheaper than other sources and provides the public relations advantage of being "greener" than coal or nuke power. That's why more than 2 million square feet of data center space [datacenterknowledge.com] is being planned in and around Quincy, Washington, a farm town of 5,000. Meanwhile, in northern NY state, HSBC is locating a $1 billion data center project [datacenterknowledge.com] in Cambria (another farm town of 5,000), where it will use hydro power from the Niagara river.

Re:wow (0)

Anonymous Coward | more than 6 years ago | (#20131567)

That number would change if they could just breach the coolant/cpu barrier in Megatron!

Re:wow (1)

Himring (646324) | more than 6 years ago | (#20131771)

That doesn't include the energy needed to support the nation of WoW, now numbering over 9 million....

Units -arghhhh! (2, Informative)

IvyKing (732111) | more than 6 years ago | (#20132067)

Or let's do it this way. Hoover Dam at peak output produces 2 Gigawatts of power per hour.

What you meant to say is "Hoover Dam at peak output produces 2 Gigawatts." What makes more sense is saying 48 million kWh per day, or a bit over 17 billion kWh per year, assuming there is enough water behind the dam to allow continuous peak output, which is certainly not the case this year.

Re:Units -arghhhh! (1)

Bacon Bits (926911) | more than 6 years ago | (#20132221)

I'm sorry, is my math wrong? Or are you just irritated by the fact that I didn't show my work?

Re:wow (3, Funny)

wcspxyx (120207) | more than 6 years ago | (#20132615)

Can you please state that in units us Slashdotters can understand? Like how many Libraries of Congress would we have to burn to get that much energy?

Re:wow (1)

Shishak (12540) | more than 6 years ago | (#20133671)

Um, if the Hoover Dam produces 2 gigawatts, then over an hour it would produce 2 gigawatt-hours. If the government servers consume 1.2 gigawatts, then over an hour they would consume 1.2 gigawatt-hours (GWh). So 1 Hoover Dam can support all the servers. Still an ass load of power, no doubt, but you don't need 4 Hoover Dams. And that's the 2011 projection, not today's; they are expecting some growth over the next 4 years.

cogeneration (3, Funny)

MonorailCat (1104823) | more than 6 years ago | (#20131101)

Move all the data centers to Minnesota or Canada and use them to heat people's houses.

Or better yet! DatacenterBurgerKing with CPU-broiled whoppers.

Re:cogeneration (0)

Anonymous Coward | more than 6 years ago | (#20131135)

I know from experience that Minnesota (and probably most of Canada) gets extremely hot in the summer for about a month. Right now there have been weeks of 90-100 degree weather.

So many people are so clueless about the qualities of a continental climate.

Re:cogeneration (1)

j_sp_r (656354) | more than 6 years ago | (#20131481)

That sucks, alcohol evaporates at 78.37 degrees!

Re:cogeneration (1)

pecosdave (536896) | more than 6 years ago | (#20132743)

This Texan laughs at you!

Back in 1994, when I still lived in Pecos, it got up to 128F one summer. My dad tells me of 132 when he was in school in the '70s. 116 was a typical high; the lack of an "official" weather station means that the town gets credit for whatever Kermit, Texas has for a temp (quite a few miles off and quite a few degrees cooler).

Re:cogeneration (0)

Anonymous Coward | more than 6 years ago | (#20133769)

Typical Texan BS. The official record high for Texas is 120 degrees F, on Aug. 12, 1936, according to the NCDC [noaa.gov] .

Re:cogeneration (1)

pecosdave (536896) | more than 6 years ago | (#20134355)

Would you please point me to the official weather station in the vicinity of Pecos, Texas, which has different terrain and elevation than the official one I spoke of? Not having an official place of measurement doesn't mean it doesn't get hotter. Damn, there were times I wished we could have locked onto the current "official" temperature and lowered it to that. Tell you what: why don't you go move there for a year or two; if you survive the gangs who hate yankee carpetbaggers worse than they hate the average local gringo, then you can contradict me.

Re:cogeneration (1)

lpangelrob (714473) | more than 6 years ago | (#20131281)

I'll take a Mushroom and Swiss Intel with chips, please. Oh, and I'll also have floating point errors on the side.

Re:cogeneration (3, Interesting)

misleb (129952) | more than 6 years ago | (#20131389)

Ya know, I always wondered why most places weren't more efficient about the cooling of their datacenters... particularly in the winter. Like it'll be 20 degrees F outside and they're STILL running A/C for the computers. WTF? Just vent a small amount of the outside air into the datacenter and you're done. Or better yet, just blow in the air from the offices and send them warm, data center heated air.

Another question: why do we vent the exhaust from our refrigerators into the house during the summer? It just seems like there's a lot you could do to save energy by moving what would otherwise be waste heat to places where it can either be used or at least not cause a larger cooling problem.

Guessing (4, Interesting)

iknownuttin (1099999) | more than 6 years ago | (#20131473)

Like it'll be 20 degrees F outside and they're STILL running A/C for the computers.

Climate controlled. There's this element among building planners who think any outside air is bad(TM). That's why, even in small buildings where you don't have to worry about pressure differentials blowing windows out like you do in skyscrapers, you can't open a frick'n window in the Fall or Spring, when the air smells wonderful and there's this perfect chill in the air that just stimulates the brain.

I'm drenched in sweat here in Hotlanta (it's 82F and 66% humidity and climbing to 94) and I really miss New England's Spring and Fall.

Re:Guessing (1)

misleb (129952) | more than 6 years ago | (#20133079)

I'm drenched in sweat here in Hotlanta (it's 82F and 66% humidity and climbing to 94) and I really miss New England's Spring and Fall.


Haha, it is almost chilly here today in Portland. Well, cool, anyway. Portland summers are the mildest I've ever experienced in the lower 48. Though I imagine Seattle is similar.

-matthew

Re:cogeneration (0)

Anonymous Coward | more than 6 years ago | (#20131543)

Datacenter cooling is more involved than just heat. Humidity, pollution, and other things, as well as physical access, must be controlled. This cannot be done by "cracking a window" or venting raw air into the server room. As far as heating the rest of the office, a few DCs I have worked at vent and retrieve air from the plenum (the space between the false ceiling and the floor above), which actually does have a positive effect on the rest of the building's temperature. Of course, by positive I mean raising the temperature, which is not desirable in the summer months of A/C. But it is not terribly practical to plug the plenum passages once a year.

Re:cogeneration (2, Insightful)

hcdejong (561314) | more than 6 years ago | (#20131995)

Humidity, pollution and other things as well as physical access must be controlled.

Pollution is easily taken care of with a filter. Controlling physical access is trivial. Humidity may be a bit more involved, but then again, you're heating the incoming air which reduces its relative humidity. Condensation isn't likely. If it does turn out to be a problem, use a heat exchanger and preheat the incoming air using the exhaust air.

But it is not terribly practical to plug the plenum passages once a year.

So? Install a valve. The savings should be enough to cover the cost of some extra ducting.
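On the condensation point: heating air at constant moisture content drops its relative humidity sharply. A rough sketch using the Magnus approximation for saturation vapor pressure (the inlet and outlet temperatures are made up for illustration):

    import math

    def p_sat_hpa(t_c):
        # Magnus approximation, valid roughly from -45 C to 60 C
        return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

    def rh_after_heating(rh_in, t_in_c, t_out_c):
        # Moisture content is fixed; only the temperature changes
        return rh_in * p_sat_hpa(t_in_c) / p_sat_hpa(t_out_c)

    print(rh_after_heating(80, 0, 20))   # ~21% RH: freezing winter air heated to 20 C is very dry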

Um.. this is happening (1)

jsailor (255868) | more than 6 years ago | (#20131933)

The concept of "free cooling" is gaining significant momentum in the data center space. I don't have any free or public information, but rest assured that leveraging winter air and other techniques is being looked at very hard. Of course, this is not altruism or "green" thinking. It's our old friend, financial greed: reduction in capital expenditures for chiller plants and reduction in utility bills.

Data Center Jacuzzis (1)

miller60 (554835) | more than 6 years ago | (#20131731)

Gervase Markham from the Mozilla Foundation suggests using excess heat to power data center jacuzzis [mozillazine.org] .

Re:cogeneration (1)

Mspangler (770054) | more than 6 years ago | (#20132325)

Scientific American had an article about containerized data centers: basically a server center built into a standard 20-foot shipping container. I was reading it while at the pool where my daughter was taking swimming lessons, and the thought did occur to me that hooking up the cooling system to the pool would be a great idea.

And of course in winter, all you would need around here is to open the doors, but since there is a school only 75 yards from the pool, it would not be hard to run the water over there.

It's so hard to get people at any level to spend capital, but they'll flush operating expenses down the drain without a thought.

great news for Sun (2, Informative)

toby (759) | more than 6 years ago | (#20131205)

...whose servers are among [sun.com] the most power-efficient [sun.com] available, and even more so with Niagara 2. [sun.com]

Disclaimer: I own a tiny bit of Sun stock. (But I bought it because I believe in them, not vice versa!)

Re:great news for Sun (1)

fm6 (162816) | more than 6 years ago | (#20132491)

Niagara shares a problem with many other energy-saving technologies: the money you save by buying less power is swamped by the extra cost of the hardware.

s/problem/irrelevancy/ (1)

toby (759) | more than 6 years ago | (#20132513)

Get over it.

Re:s/problem/irrelevancy/ (1)

fm6 (162816) | more than 6 years ago | (#20133647)

To tech nerds, irrelevancy has never been a problem.

Simple Solution (4, Insightful)

evilviper (135110) | more than 6 years ago | (#20131257)

I've long been dumbfounded by the way datacenters charge. They seemingly all charge a hell of a lot for physical space, and then almost completely ignore power requirements. This seems incredibly strange, since datacenter operating costs are pretty much tied directly to power consumption (monthly electricity fees, UPSes, electrical generators, cooling, etc.), and only incidentally to physical space.

Further, the cost to handle each extra watt is multiplied thanks to cooling, power back-up, wiring, etc., while increasing the physical size of the building, constructing more datacenters, etc. is just a flat (linear) cost, and mostly just a one-time expenditure at that.

This strange arrangement is what has led us here. It's not the natural evolution of technology to cram as much power consumption into as tiny a box as possible. It's an artificial need, created by the idiotic distribution of fees common to datacenters.

If a few large datacenters declared their fees as a small $$$ value for each unit of space, plus a few dollars per watt of power consumption, you'd see the problem naturally fix itself through normal economic forces. As soon as watts are the defining factor, companies won't pay more for a cramped 1U server rather than an (inexpensive) 2U or 3U server. You will also see companies happy to pay more for lower-powered server hardware, as having them directly bear the energy cost will make buying efficient servers a significant savings to them.
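A minimal sketch of the fee structure proposed above, with made-up rates, showing how the incentive flips once watts dominate the bill:

    def monthly_fee(rack_units, watts, space_rate=5.00, power_rate=0.25):
        # space_rate in $/U/month, power_rate in $/W/month; both hypothetical
        return rack_units * space_rate + watts * power_rate

    print(monthly_fee(1, 400))   # hot, cramped 1U box: $105.00
    print(monthly_fee(2, 250))   # cooler 2U box:        $72.50

Under space-only pricing the 1U box wins; under watt-based pricing the cooler 2U box is cheaper despite taking twice the space.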

Re:Simple Solution (0)

Anonymous Coward | more than 6 years ago | (#20131343)

It probably would cost too much to bother reporting on. What are you going to do, install watt meters on each server? I know you can calculate the theoretical power usage, but it won't always be correct (though it is doable). The cost of raised floor is significantly higher than the tiny amount you pay for power (although they buy a lot of it). I think more efficient cooling would be a good thing (maybe use the heat from the servers to heat the rest of the building; heat exchangers, anyone?). You could also earth-shelter a building, and add geothermal A/C (requires trenching or drilling). But none of this crap will really happen, because a data center will usually be located in a leased building with a large square footage, in a densely developed place (with lots of bandwidth from multiple providers) and talented individuals to work the technology...

Re:Simple Solution (3, Insightful)

Nezer (92629) | more than 6 years ago | (#20131633)

It probably would cost too much to bother reporting on...

Because when you run a multi-million dollar data center, you clearly can't afford to install a few-hundred-dollar device in each customer's rack, especially if it's a major part of how you bill your customer.

Look, the power companies do exactly what the parent poster suggests. Imagine if power companies charged a flat rate each month based on the square footage of your house. There would be no incentive (unless you're a save-the-planet hippie type, which isn't a bad thing) to turn up the setting on the air conditioner (or turn it off altogether); you might as well keep incandescent lights running 24/7 along with the giant plasma TV. This is essentially how data centers operate today. There is no motivation to have energy-efficient servers unless you're the one that owns the data center and pays the power bill. Today the best a data center owner can do is invest in more efficient cooling systems, and that's about it.

Re:Simple Solution (1)

Renraku (518261) | more than 6 years ago | (#20131601)

People good with numbers will usually take the usable floor space of a data center and put a watts per square foot estimate with it based on average or projected power consumption. If you say every square foot costs an average of $5 in power usage, you can figure that in with maintenance per square foot, cooling per square foot, etc., etc.

Combine these all into a neat little sum on someone's bill.

Try to think of data center "floor space" as the main stage. Everything is built around maintaining and supplying that stage. Thus, all costs will be paid for in that one room. You can bet your ass that electricity, cooling, maintenance, supplies, services, etc., are all figured into that little bill.

Re:Simple Solution (2, Interesting)

evilviper (135110) | more than 6 years ago | (#20132961)

People good with numbers will usually take the usable floor space of a data center and put a watts per square foot estimate with it based on average or projected power consumption.

Of course the (average) price of electricity is figured into it. That is the PROBLEM.

It is a (self-perpetuating) prisoner's dilemma. The more power consumption you can squeeze into the smallest space, the better deal you get. Since it's all averaged out, those using more power than average are being subsidized by those who are not. It's basically stupid to invest in power-saving tech, since your hosting bill won't be any cheaper. However, this has gradually, yet significantly, raised hosting costs for all.

It's a terrible system that has single-handedly led to the wholly unnatural market for cramped and massively hot 1U servers.

Re:Simple Solution (1)

jsailor (255868) | more than 6 years ago | (#20132107)

Very few providers, if any, charge solely by square footage anymore.
Like shipping packages, where the fees are a combination of volume, weight, distance, etc., data center pricing is typically based on a combination of space, power, power density, circuits, contract length, contiguous space, etc.

Re:Simple Solution (3, Informative)

fm6 (162816) | more than 6 years ago | (#20132207)

If a few large datacenters declared their fees as a small $$ value for each unit of space, plus a few dollars per watt of power consumption, you'd see the problem naturally fix itself through normal economic forces
How on earth do you track individual power consumption? Putting a meter on each system is hardly practical. I suppose you could get away with one on each rack, but many customers (the vast majority in the one data center I worked in) don't rent whole racks.

Re:Simple Solution (1)

Wesley Felter (138342) | more than 6 years ago | (#20132451)

They don't need to track power consumption; they track capacity. So if you pay for a 30A circuit they assume that you will use all of it all the time.

Re:Simple Solution (1)

mosch (204) | more than 6 years ago | (#20132845)

How on earth do you track individual power consumption? Putting a meter on each system is hardly practical. I suppose you could get away with one on each rack, but many customers (the vast majority in the one data center I worked in) don't rent whole racks.

PDUs that can track per-outlet power distribution, and spew the data over serial or SNMP, are widely available and widely deployed.

The problem is also solved for larger (per-rack) situations.
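For instance, polling such a PDU over SNMP might look like this with pysnmp. The hostname and OID below are placeholders; the real per-outlet OID is vendor-specific, so check your PDU's MIB:

    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    OUTLET_POWER_OID = "1.3.6.1.4.1.99999.1.2.3"   # placeholder, not a real OID

    error_indication, error_status, error_index, var_binds = next(
        getCmd(SnmpEngine(),
               CommunityData("public", mpModel=1),            # SNMP v2c
               UdpTransportTarget(("pdu.example.net", 161)),  # placeholder host
               ContextData(),
               ObjectType(ObjectIdentity(OUTLET_POWER_OID))))

    if not error_indication and not error_status:
        for var_bind in var_binds:
            print(" = ".join(x.prettyPrint() for x in var_bind))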

Re:Simple Solution (1)

evilviper (135110) | more than 6 years ago | (#20133107)

Putting a meter on each system is hardly practical.

I can't see any reason why not. An induction coil costs a few cents, and you could easily feed a rack's worth into a single cheap meter.

But more to the point, I really wasn't suggesting live monitoring. Just have them select from a range of power levels and charge them as appropriate.

Re:Simple Solution (2, Insightful)

ICLKennyG (899257) | more than 6 years ago | (#20132217)

This is already happening, to a functional point. Navisite, the host we use here at this company, charges by the square foot, but you only get so many watts per square foot. We have (2) 19" racks about half full of hardware, with a total physical footprint of under 9 square feet; we could even get it under 5. However, due to the power density, we have 100 square feet of space that we rent. Because we use hyper-dense blade servers for the management efficiency, we fill a rack's worth of power with approximately a single blade chassis. So while we aren't physically using 100 square feet, we have to have that space blocked out because we are drawing that much power off of their infrastructure, and so we pay for it. Same concept, just a different implementation.

I would say that 1% of the nation's power for all of this computing isn't that much when you consider how incredibly tied in we are, and the savings in power created by the use of those computers. Simple example: before the electronic age, if you wanted to buy something over a specific value, you would comparison shop, likely driving all over town to find the lowest price. Now you just hop online, find who has the lowest price, and go pick it up, or better yet have it delivered, in which case many trips are "car pooled" into one more efficient trip.

Power is a concern for computing, but we need to quit being Chicken Little about this problem.

Re:Simple Solution (1)

guruevi (827432) | more than 6 years ago | (#20133091)

Having worked in a datacenter, and having even set one up, let me make it simple for you:

The cost of 1U space = ((power + people + space + loan + hardware + software) / avg. used U's by customers) * profit rate

Somebody can simply do that using an Excel sheet, and the customer will know that his server costs $1000/year.
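That spreadsheet formula really is a one-liner; a sketch of the same calculation with made-up annual figures:

    def cost_per_u(power, people, space, loan, hardware, software,
                   avg_used_u, profit_rate=1.3):
        # All costs annual; a profit_rate of 1.3 means a 30% margin
        return (power + people + space + loan + hardware + software) / avg_used_u * profit_rate

    # Hypothetical facility: $770k/year in total costs, 1000 U rented on average
    print(cost_per_u(200e3, 300e3, 100e3, 100e3, 50e3, 20e3, avg_used_u=1000))   # ~$1001/year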

The way you propose would increase that cost with development time, read-out infrastructure, and the extra support to handle those things. On top of that, the customer would get a random bill every month that they can't foresee, and then you have all those customers who will complain: "but my processor only uses 50W max, and you're billing my server at 100W", etc., etc.

It's much simpler to calculate it in the first place and advertise that your server will cost a certain amount than to say your rent is $100/year plus a variable number of $'s for whatever you use.

Re:Simple Solution (1)

evilviper (135110) | more than 6 years ago | (#20133349)

Somebody can simply do that using an Excel sheet, and the customer will know that his server costs $1000/year.

You have an interesting point, at least. It is simpler billing, but charging by size is the worst possible thing you can do. It has led to many problems over time, a few of which I've already mentioned.

A thought experiment:
What if you were to price based purely on number of servers, using average server size?
What if you were to price based on the WEIGHT of the server instead of U size?
What if you base the cost on power consumption, and average the U size instead?
etc.

Re:Simple Solution (1, Informative)

Anonymous Coward | more than 6 years ago | (#20133663)

Well, I'm no expert, but I happen to work for the US Govt, and I happen to be aware of two major data centers we're standing up.

I can tell you the above statement "They seemingly all charge a hell of a lot for physical space, and then almost completely ignore power requirements." is completely out of line with what we pay for our use of Tier 5 data centers.

The two data centers we use (http://www.data393.com/ in Denver, http://www.heraklesdata.com/ [heraklesdata.com] in Sacramento) charge us a pretty penny for power usage.

Hell we pay THOUSANDS of dollars to get them to RUN power, then we pay THOUSANDS more for the juice. (And this is the site that uses overhead rails with modular power.)

Believe me... As an IT employee of the Feds, I'm WELL aware of the power costs, and I'm constantly trying to penny pinch to save YOU and ME money. I may have the infamy of working for the "MAN", but I do try and make the best decisions with the money I spend.

(On the flip side, I'm posting on slashdot while you're paying my salary... I guess I'll have to work an extra few minutes today to avoid the hypocrite mod.)

Re:Simple Solution (1)

Doug Neal (195160) | more than 6 years ago | (#20134217)

This is definitely happening, at least in the UK anyway. It seems to be quite a recent thing; I suspect that power companies have been jacking up the prices for datacentres. As little as a couple of years ago, if you wanted to get some colocation, the cost was all about how much bandwidth you were using. I've been trying to get quotes for colocation recently, and the message I've been getting from almost every company I've spoken to is that bandwidth and rack space are relatively abundant, but power consumption is what really determines the cost. The latest APC PDUs will tell you how much power you're using, possibly even on a per-port basis, so it's not hard to work it out and bill for it.

As you said, market forces will start improving the power efficiency of equipment. This can only be a good thing IMHO.

Congress will act (4, Funny)

MobyDisk (75490) | more than 6 years ago | (#20131269)

No doubt our congress will act swiftly by moving daylight savings time to conserve power.

Re:Congress will act (1)

RobBebop (947356) | more than 6 years ago | (#20131739)

I wish you weren't joking....

Virtualization? (2, Insightful)

tji (74570) | more than 6 years ago | (#20131309)

I just grabbed the executive summary version, and didn't see any mention of virtualization..

To me, this seems like one of the more important aspects of power efficiency. Individual server efficiency is important, but the gains from higher utilization could be even more significant. Adding another core to a hypervisor will always be more efficient than adding a new system (CPU, Power Supply, disks, video, etc..). The energy efficient hardware can also be applied to the hypervisor hosts. Build efficient servers, and use as few of them as practical.

Many data centers are already greatly decreasing their server count using virtualization. This should be part of any data center energy efficiency discussion.
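A back-of-the-envelope sketch of what consolidation buys, using the 250-to-20 ratio reported earlier in the thread; the wattages and PUE are made up for illustration:

    physical_servers, watts_each = 250, 300   # before: many lightly loaded boxes
    hosts, watts_per_host = 20, 600           # after: fewer, beefier hypervisor hosts
    pue = 2.0                                 # facility overhead (cooling, UPS, ...)

    before_kw = physical_servers * watts_each * pue / 1000
    after_kw = hosts * watts_per_host * pue / 1000
    print(f"{before_kw:.0f} kW -> {after_kw:.0f} kW")   # 150 kW -> 24 kW, about 84% less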

Re:Virtualization? (0)

Anonymous Coward | more than 6 years ago | (#20131425)

Yep. See this recent IBM Press release [informationweek.com] . From the release,

IBM on Wednesday said it has consolidated 3,900 computer servers in six locations worldwide onto about 30 refrigerator-sized mainframes running Linux, a move that the tech giant claims reduces computer-devoted floor space by 85% and will cut costs by $250 million.


Re:Virtualization? (1)

thegameiam (671961) | more than 6 years ago | (#20131653)

Virtualization is great for some things, but security concerns are substantial - PCI compliance generally means that all of the guests on a host have to be at the same security level. Also, some of the virtual environments don't handle IPv6 properly (and a few other things). These aren't showstoppers, but they can reduce some of the benefit.

Re:Virtualization? (1)

TooMuchToDo (882796) | more than 6 years ago | (#20131865)

Also, in most current virtualization environments, performance under heavy load on the guest has been shown to suffer by 10-40% (depending on which virtualization product you're using). Not quite ready for prime time yet.

Re:Virtualization? (2, Informative)

necro81 (917438) | more than 6 years ago | (#20131991)

Table ES-1 in the executive summary suggests server consolidation at various levels (moderate, aggressive, etc.). Server consolidation can be done in a number of ways, with virtualization being one of the most effective and popular.

Re:Virtualization? (2, Informative)

1sockchuck (826398) | more than 6 years ago | (#20134101)

The report addresses virtualization only indirectly when it refers to electric utilities offering incentive programs. PG&E offers financial incentives to encourage the use of virtualization in data center consolidations [datacenterknowledge.com] , with qualifying customers able to earn a rebate of up to $4 million per project site. Other utilities are looking at adapting similar incentives based on virtualization.

I'm not sure EPA is the right party to be advocating virtualization. The EnergyStar ratings and utility-level programs are more up their alley.

location (1)

SolusSD (680489) | more than 6 years ago | (#20131415)

With a lot of these massive datacenters residing in sunny California, you'd think they could offset a large chunk of their power needs with solar panels covering the roofs, like the FedEx hub in Oakland.

Re:location (1)

Dan Ost (415913) | more than 6 years ago | (#20132335)

Wouldn't it make more sense to simply build data centers in cooler climes?

Re:location (1)

SolusSD (680489) | more than 6 years ago | (#20134077)

A cool climate with some sunshine would be best, I guess. You could leave the windows open *and* benefit from solar power to help run the servers. :)

Get rid of the AC DC power supplies and replace.... (1)

Joe The Dragon (967727) | more than 6 years ago | (#20131463)

Get rid of the AC-DC power supplies and replace them with bigger ones that power more than one system; this will also work better with backup power.

Re:Get rid of the AC DC power supplies and replace. (1)

Chris Snook (872473) | more than 6 years ago | (#20133231)

-48VDC is standard for some kinds of telecom equipment, so there's plenty of server gear that will work in -48VDC data centers, and it's very efficient. Unfortunately, using this niche gear requires very large economies of scale, on the order of tens of millions of dollars, to be cost-effective.

For mere mortals, blade servers are a better compromise. When you have 4 power supplies per 10 servers, instead of 20, you can afford to invest in more efficient equipment. It's still not as efficient as the rectifiers used in large telecom data centers, but it's a big improvement, and it doesn't take that 1000x scale to be cost-effective.

Re:Get rid of the AC DC power supplies and replace. (1)

Joe The Dragon (967727) | more than 6 years ago | (#20133563)

Also, a DC setup will give off less heat, leading to less need for A/C.

Server "sleep" mode? (1)

GIL_Dude (850471) | more than 6 years ago | (#20131491)

OK, so I am a desktop/notebook guy. So this stuff may already exist for servers - but:

1) With multiple front-end servers behind NLB, make the NLB smart enough to put some servers to sleep when their processing isn't needed, and wake-on-LAN those servers (or the equivalent) when they are needed again (see the sketch below).

2) Do servers do "speedstep" like desktops/notebooks, where the processors and other components drop to lower power modes when they are not being fully utilized? If not, they should enable that.
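The wake-up half of (1) already exists as a standard: a Wake-on-LAN "magic packet" is just six 0xFF bytes followed by the target MAC address repeated 16 times, sent over UDP broadcast. A minimal sketch (the MAC is hypothetical):

    import socket

    def wake_on_lan(mac, broadcast="255.255.255.255", port=9):
        # Magic packet: 6 x 0xFF, then the MAC repeated 16 times
        payload = bytes.fromhex("FF" * 6 + mac.replace(":", "") * 16)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(payload, (broadcast, port))

    wake_on_lan("00:11:22:33:44:55")   # a parked front-end server, hypothetically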

Re:Server "sleep" mode? (1)

techpawn (969834) | more than 6 years ago | (#20131517)

Most servers run 24/7, so sleep mode would do more harm than good.

Re:Server "sleep" mode? (1)

tomstdenis (446163) | more than 6 years ago | (#20131715)

Most servers are also overpowered for the workload they perform. Just getting people to stop running 3 GHz processors when a 1 GHz Sempron would do would be a nice start.

Re:Server "sleep" mode? (1)

techpawn (969834) | more than 6 years ago | (#20131895)

Replacing old boxes with blade servers is a nice touch; in most cases they use far less energy than the previous box. And yes, if you have a file server connected to a SAN whose only real job is running the OS, there's no need for dual quad-cores. But in the case of, say, a SQL or IIS/web server, you need both horsepower and 24/7 response times, so a sleep mode wouldn't help you there.

Re:Server "sleep" mode? (0)

Anonymous Coward | more than 6 years ago | (#20131699)

1) In certain situations this is practiced. IIRC there are some IPVS tools to support this.
2) Most servers I have worked on are set for this. The only problem is that most servers I work on have a load average of 3-5 per CPU, so the CPU never slows down, because it is being used heavily and constantly.

Re:Server "sleep" mode? (1)

TooMuchToDo (882796) | more than 6 years ago | (#20131893)

AMD processors "speedstep" (it's why we purchased AMD boxes for our hosting operation). We've saved a huge amount on power (our datacenter does metered power, not flat rate power) due to this.

Re:Server "sleep" mode? (0)

Anonymous Coward | more than 6 years ago | (#20132339)

Intel also does this. Put a clamp meter on your server line at 3:00 AM and retest at 10 AM. :D

Cisco's Take (0)

Anonymous Coward | more than 6 years ago | (#20131519)

I listened to this podcast just the other day. I found it very interesting to hear it from the network side of things.

http://www.bladewatch.com/2007/07/25/cisco-and-the-green-data-center/ [bladewatch.com]

For the most part, vendors are already working on reducing the power load. That's what the Low Voltage CPUs are all about. Spend more now to get a CPU that will use less power and run a little slower. Besides, do you really need a quad-core 3 GHz CPU for that file-server?

Federal Guidelines for Clock Speed Limits (3, Funny)

infonography (566403) | more than 6 years ago | (#20131903)

55 MHz, that's the law; exceed it and you're looking at a speeding ticket.

Re:Federal Guidelines for Clock Speed Limits (2, Funny)

archen (447353) | more than 6 years ago | (#20132947)

I didn't see that the turbo button was pushed, officer! Honest!

DC power distro (1)

conspirator57 (1123519) | more than 6 years ago | (#20132357)

http://www.eweek.com/article2/0,1895,2000867,00.asp [eweek.com]

It also reduces a major cost and greenness problem: all those little redundant AC/DC power supplies in rackmount machines. Further, it allows you to move the heat generated by the power conversion to another nearby location, reducing the CFM requirements for your cooling system.

Re:DC power distro (1)

bigtrike (904535) | more than 6 years ago | (#20133359)

Yup... you replace all your AC->DC power supplies with DC->DC power supplies in the servers. It seems like the real benefit of the setup you linked to is that it runs 380V to the servers, which most likely allows for more efficient power supplies. You would probably see a lot of these benefits by switching to a higher AC voltage as well, but you wouldn't want to run 5VDC and 12VDC through the whole data center, not without running copper wires the size of small tree trunks.
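The tree-trunk problem is just I^2*R loss: halving the voltage quadruples the loss for the same load. A rough illustration with a made-up 0.05-ohm distribution run and a 10 kW load:

    def line_loss_watts(load_w, volts, resistance_ohms=0.05):
        # resistance_ohms is a hypothetical end-to-end run resistance
        current = load_w / volts
        return current ** 2 * resistance_ohms

    for v in (12, 48, 380):
        print(f"{v:>3} V: {line_loss_watts(10_000, v):,.0f} W lost")
    #  12 V: 34,722 W lost (more than the load itself, hence the tree trunks)
    #  48 V:  2,170 W lost
    # 380 V:     35 W lost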

Who's on first? (1)

pongo000 (97357) | more than 6 years ago | (#20133129)

We've all been hearing ad nauseam about power and cooling issues in the data center.

Whose data center? Mine? Yours? The EPA's?

Please don't butcher the English language like that. Throwing random articles around is a sign of laziness (similar in magnitude to "They said...").

Brought to you by SIAA (Society against the Indiscriminate Abuse of Articles)

Energy Star (1)

sunking2 (521698) | more than 6 years ago | (#20134143)

Does this mean an Energy Star computer would be Sales Tax Free in Connecticut like appliances are?