
The Risks and Rewards of Warmer Data Centers

samzenpus posted more than 4 years ago | from the not-so-chilly dept.

Power

1sockchuck writes "The risks and rewards of raising the temperature in the data center were debated last week in several new studies based on real-world testing in Silicon Valley facilities. The verdict: companies can indeed save big money on power costs by running warmer. Cisco Systems expects to save $2 million a year by raising the temperature in its San Jose research labs. But nudge the thermostat too high, and the energy savings can evaporate in a flurry of server fan activity. The new studies added some practical guidance on a trend that has become a hot topic as companies focus on rising power bills in the data center."


170 comments

Quick solution (2)

Geoffrey.landis (926948) | more than 4 years ago | (#29834425)

Locate the server farm in Antarctica!

Re:Quick solution (1)

Necroloth (1512791) | more than 4 years ago | (#29834467)

That'll teach those penguin lovers!

Re:Quick solution (3, Funny)

orsty3001 (1377575) | more than 4 years ago | (#29834473)

If rubbing frozen dirt in my crotch is wrong, I don't want to be right.

Re:Quick solution (0)

Interoperable (1651953) | more than 4 years ago | (#29834755)

You loose your energy savings to having to pay employees away from home pay and "-60C, are you fucking kidding me?" pay. That said...I'd work in Antarctica if I had the chance :-)

Re:Quick solution (1)

fatalwall (873645) | more than 4 years ago | (#29835497)

Wouldn't you just lose the savings to heating? Computers do have a minimum temperature that they will work at...

btw not to be a grammar nazi but its one 'o' not two for lose

Re:Quick solution (1)

petermgreen (876956) | more than 4 years ago | (#29835629)

Just make the building well insulated and then have controlled fans to bring in just enough outside air to keep the temperature where you want it.

Re:Quick solution (1)

Interoperable (1651953) | more than 4 years ago | (#29837097)

The "rubbing frozen dirt in my crotch" post get modded "Funny" and I get called out "loose". Not fair ;-)

Re:Quick solution (1)

petermgreen (876956) | more than 4 years ago | (#29835661)

Antarctica would indeed not be a good choice, but afaict there are places with temperatures low enough that you could use outside air to cool stuff year round while not being so low as to cause major logistical problems.

Re:Quick solution (2, Funny)

WhatAmIDoingHere (742870) | more than 4 years ago | (#29835993)

"You loose your energy savings.."

So all you have to do is tighten those savings and you'll be fine.

Re:Quick solution (1)

Interoperable (1651953) | more than 4 years ago | (#29837077)

Pffff... /. needs an speling Nazi mod cattegory :-)

Re:Quick solution (1)

GargamelSpaceman (992546) | more than 4 years ago | (#29834785)

It seems to me that computers produce X BTUs of energy that must be taken out of the server room. They will produce this energy regardless of the temperature in the server room. So... with great insulation around the room, the temperature INSIDE the room should not matter much with regards to the cost of keeping it cold. I think you'd ideally want a temperature where the fans never come on at all. How about making the server room a large dewar flask, filling it with liquid nitrogen, and running the servers in that? Why should it cost any more to maintain the room at 0 degrees than it would to maintain the room at 100 degrees. I would expect quite the opposite ( with great insulation AROUND the room. )

Re:Quick solution (1)

snowraver1 (1052510) | more than 4 years ago | (#29834873)

Good thoughts, but I think that cooling equipment works better when the internal temperature is higher. That way the coolant can collect more heat in the evaporation phase, which can then be dumped outside from the condenser. So, by keeping the temperature warmer, the return coolant has more heat in it, but the coolant still evaporates at the same temperature, so you get a larger delta. Yes, the compressors would have to work a little harder, but overall, apparently it is a net savings in energy.

Re:Quick solution (1)

Mr. Freeman (933986) | more than 4 years ago | (#29836869)

"That way the coolant can collect more heat in the evaporation phase"
WHAT?!?

The heat of vaporization doesn't change based on temperature. What are you talking about?

Re:Quick solution (3, Insightful)

autora (1085805) | more than 4 years ago | (#29834907)

I see you've really thought this one through... A warehouse full of servers that need regular maintenance filled with liquid nitrogen is sure to lower costs.

Re:Quick solution (4, Informative)

jschen (1249578) | more than 4 years ago | (#29835059)

It is true that if you are producing X BTUs of heat inside the room, then to maintain temperature, you have to pump that much heat out. However, the efficiency of this heat transfer depends on the temperature difference between the inside and the outside. To the extent you want to force air (or any other heat transfer medium) that is already colder than outside to dump energy into air (or other medium) that is warmer, that will cost you energy.

Also, too cold, and you will invite condensation. In your hypothetical scenario, you'd need to run some pretty powerful air conditioning to prevent condensation from forming everywhere.
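
To put a rough number on that efficiency argument: the ideal (Carnot) coefficient of performance of a chiller rises as the room setpoint approaches the outside temperature, which is exactly why a warmer room is cheaper to hold. The sketch below is purely illustrative (the 30C outdoor figure is assumed); real chillers fall far short of the Carnot bound, but they follow the same trend.

    # Illustrative only: ideal (Carnot) coefficient of performance for a
    # chiller moving heat from a cold room to warmer outdoor air.
    # COP = T_cold / (T_hot - T_cold), with temperatures in kelvin.

    def carnot_cop(room_c: float, outside_c: float) -> float:
        """Upper bound on heat moved per unit of work at the given temperatures."""
        t_cold = room_c + 273.15
        t_hot = outside_c + 273.15
        return t_cold / (t_hot - t_cold)

    outside = 30.0  # assumed outdoor temperature, deg C
    for room in (0.0, 18.0, 27.0):
        print(f"room {room:5.1f} C -> ideal COP {carnot_cop(room, outside):6.1f}")
    # The warmer the room setpoint, the higher the COP, i.e. less work per
    # BTU removed; chilling the room to 0 C is the expensive end of the curve.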

Re:Quick solution (3, Informative)

Yetihehe (971185) | more than 4 years ago | (#29836769)

Condensation happens on surfaces colder than surrounding air. If you have computers which are warmer than your cooling air, it would not be a problem.

Re:Quick solution (2, Interesting)

lobiusmoop (305328) | more than 4 years ago | (#29835767)

Data centers would be much more efficient if blade servers had modular water cooling instead of fans. Water is much better at transferring heat than air. Then you could just remove all the fans from the data center and add a network of water pipes (alongside the spaghetti of network and power cabling) around the data center. Then just pump cold water in and dispose of the hot water (pretty cheap to do). It should be reasonably safe too - the water should only be near low-voltage systems (voltage stepdown should be happening at a single point in an efficient data center, not at every rack).

Re:Quick solution (1)

Mr. Freeman (933986) | more than 4 years ago | (#29836939)

I thought the reasons for water cooling systems not being placed in data centers were:
1) A failure causing coolant leakage could potentially destroy tens of servers.
2) Maintenance of these systems is quite expensive (mold and such growing in the lines that needs to be periodically cleaned out.)
3) Failure of a main pump could bring down the entire data center (although I assume there would be redundant systems in place)

Re:Quick solution (3, Informative)

billcopc (196330) | more than 4 years ago | (#29837201)

You mean like Crays used to have?

The problems with water are numerous: leaks, evaporation, rust/corrosion, dead/weak pumps, fungus/algae, even just the weight of all that water can cause big problems and complicate room layouts.

Air is easy. A fan is a simple device: it either spins, or it doesn't. A compressor is also rather simple. Having fewer failure modes in a system makes it easier to monitor and maintain.

You also can't just "dispose of the hot water". It's not like you can leave the cold faucet open, and piss the hot water out as waste. Water cooling systems are closed loops. You cool your own water via radiators, which themselves are either passively or actively cooled with fans and peltiers. You could recirculate the hot water through the building and recycle the heat, but for most datacenters you'd still have a huge thermal surplus that needs to be dissipated. Heat doesn't just vanish because you have water; water only lets you move it faster.
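
For a sense of scale on that last point, here is a quick back-of-envelope using Q = m_dot * c_p * deltaT; the 100 kW load and the 10 C supply/return split are assumptions, not figures from any particular data center.

    # How much water does it take to carry away a 100 kW heat load?
    heat_load_w = 100_000.0   # assumed total IT heat load, W
    c_p = 4186.0              # specific heat of water, J/(kg*K)
    delta_t = 10.0            # assumed supply/return temperature difference, K
    m_dot = heat_load_w / (c_p * delta_t)  # mass flow, kg/s
    print(f"{m_dot:.1f} kg/s of water, roughly {m_dot * 60:.0f} litres per minute")
    # ~2.4 kg/s (~140 L/min) -- and every joule it carries still has to be
    # rejected somewhere outside the loop, which is the point above.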

Re:Quick solution (1)

Mr. Freeman (933986) | more than 4 years ago | (#29836841)

"Why should it cost any more to maintain the room at 0 degrees than it would to maintain the room at 100 degrees. I would expect quite the opposite ( with great insulation AROUND the room. )"
Yeah, therein lies the problem. This "great insulation":
A) Doesn't exist.
B) Is horrendously expensive.

Yes, in an ideal environment this makes sense, but we're not working in one. You have energy leaking in from the outside. In addition to that, there's no device that can move energy ideally. There are inefficiencies in everything: heaters, fans, air conditioning, etc. This causes further energy "losses".

It reminds me of a joke. A farmer goes to a physicist and says "I want to know, if a chicken lays an egg on a roof, which way will it roll. Do you have an equation for this?" The physicist thinks on this for a while and then says "I do, but it only works for perfectly spherical chickens in a vacuum".

We have thermodynamic equations to model things like server rooms, but you have two options:
1) Make assumptions about things to simplify the equations.
2) Take into account everything and make your equations so complicated that they become practically unsolvable

Move to Canada (2)

Midnight Thunder (17205) | more than 4 years ago | (#29834877)

I know it was meant as a joke, but moving to colder climates may not be such a bad idea. Moving to a northern country such as Canada or Norway, you would benefit from the colder outside temperature, in the winter, to keep the servers cool and then any heat produced could be funnelled to keeping nearby buildings warm. The real challenge will be keeping any humidity out, but considering how dry the air during the winters can get there, it may not be much of an issue.

All this said and done, trying to work out the sweet spot between not cooling a room to save energy and not having the server fans turn on is important. I would be curious to know whether any solutions exist that allow the system temperature monitors to be linked into a central system, which is then linked to the room's climate control system.

Re:Move to Canada (2, Interesting)

asdf7890 (1518587) | more than 4 years ago | (#29835613)

I know it was meant as a joke, but moving to colder climates may not be such a bad idea. Moving to a northern country such as Canada or Norway, you would benefit from the colder outside temperature, in the winter, to keep the servers cool and then any heat produced could be funnelled to keeping nearby buildings warm.

There has been a fair bit of talk about building so-called "green" DCs in Iceland, where the lower overall temperatures reduce the need for cooling (meaning less energy used, lowering operational costs) and there is good potential for powering the things mainly with power obtained from geothermal sources.

There was also a study (I think it came out of Google) suggesting that load balancing over an international network, like Google's App Engine or similar, be arranged so that when there is enough slack to make a difference, more load is passed to the DCs that are experiencing more wintery conditions than the others. It makes sense for applications where the extra latency of the server perhaps being on the other side of the world some of the time isn't going to make much difference to the users.

Re:Move to Canada (1)

smellsofbikes (890263) | more than 4 years ago | (#29836349)

That was my first thought a year ago, when Iceland was going bankrupt: Google should buy the whole place. They have nearly free power because of all the hydroelectric, the ambient temperature is low, they have gobs of smart engineering and IT people looking for work, and Icelandic women are really hot.

Re:Move to Canada (2, Funny)

speculatrix (678524) | more than 4 years ago | (#29836391)

when Iceland was going bankrupt: Google should buy the whole place....Icelandic women are really hot

Ah, that's why you never see Icelandic women working in data centres, they overload the air-con!!!

Re:Quick solution (1)

N Monkey (313423) | more than 4 years ago | (#29836021)

Locate the server farm in Antarctica!

Perhaps not quite Antarctica, but according to the BBC's Click program [bbc.co.uk] Iceland is bidding for server business based on the low temperatures and lots of cheap geothermal power.

Re:Quick solution (1)

xaxa (988988) | more than 4 years ago | (#29836457)

Very high altitude, very cold, very low humidity -- you regularly lose hard drives from head crashes.

Possible strategy (3, Interesting)

Nerdposeur (910128) | more than 4 years ago | (#29834507)

1. Get a thermostat you can control with a computer
2. Give the computer inputs of temperature and energy use, and output of heating/cooling
3. Write a program to minimize energy use (genetic algorithm?)
4. Profit!!

Possible problem: do we need to factor in some increased wear & tear on the machines for higher temperatures? That would complicate things.
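
As a rough illustration of steps 1-3, here is a minimal control-loop sketch. The read_room_temp, read_power_draw and set_cooling stubs are hypothetical placeholders rather than any real thermostat API, and the greedy hill-climb stands in for whatever optimizer (genetic or otherwise) you would actually use.

    import time

    # Hypothetical I/O stubs -- in a real deployment these would talk to a
    # networked thermostat and a power meter.
    def read_room_temp() -> float:
        return 25.0        # placeholder reading, deg C
    def read_power_draw() -> float:
        return 50_000.0    # placeholder total draw, W (chillers + servers)
    def set_cooling(setpoint_c: float) -> None:
        pass               # placeholder actuator

    SETPOINT = 25.0        # starting room setpoint, deg C (assumed)
    STEP = 0.5             # how far to nudge the setpoint each iteration
    MAX_SAFE_TEMP = 30.0   # back off if the room gets hotter than this

    def control_loop(interval_s: int = 300) -> None:
        """Greedy hill-climb: raise the setpoint while total power keeps falling."""
        setpoint, last_power = SETPOINT, float("inf")
        while True:
            set_cooling(setpoint)
            time.sleep(interval_s)         # let the room settle
            power = read_power_draw()
            if power < last_power and read_room_temp() < MAX_SAFE_TEMP:
                setpoint += STEP           # still saving energy: go warmer
            else:
                setpoint -= STEP           # fan activity ate the savings: back off
            last_power = power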

Re:Possible strategy (2, Funny)

jeffmeden (135043) | more than 4 years ago | (#29834655)

Careful with that, there are numerous patents to that effect. You wouldn't want to be suggesting IP theft, now, would you?

Re:Possible strategy (1)

dissy (172727) | more than 4 years ago | (#29835105)

Careful with that, there are numerous patents to that effect. You wouldn't want to be suggesting IP theft, now, would you?

Of course not! We don't steal IP here. In fact, that sheet of paper with their IP on it (the patent) will forever remain safely tucked away on file at the patent office, safe from all thieves.

We are, however, suggesting to ignore the fact that the patent exists, and use that knowledge anyway.

Even if you want to be anti-capitalist and follow patent law, it is easy enough to use only the methods provided by IBM's expired patents and thus not run a fowl of any laws.

Re:Possible strategy (1, Funny)

Anonymous Coward | more than 4 years ago | (#29835329)

True, but try to run a swine of the law and see what happens.

Re:Possible strategy (1)

L4t3r4lu5 (1216702) | more than 4 years ago | (#29834773)

Erm... Surely you just replace a room thermostat with a CPU temp probe and the biiiiiiig chillers on the back wall with smaller chillers feeding directly into the cases?

Feedback loop to turn on the chillers above a certain temp and... Bob's your mother's brother.

Re:Possible strategy (-1, Troll)

Anonymous Coward | more than 4 years ago | (#29834903)

what are you some Irish cunt?

Re:Possible strategy (1)

L4t3r4lu5 (1216702) | more than 4 years ago | (#29835005)

No, but I'm visiting Ireland next week! I'll think of you while I'm enjoying a decent beer and looking at some Irish... Well, you know what I mean.

Re:Possible strategy (0)

Anonymous Coward | more than 4 years ago | (#29835409)

blarney stones?

Re:Possible strategy (1)

Jurily (900488) | more than 4 years ago | (#29834813)

Possible problem: do we need to factor in some increased wear & tear on the machines for higher temperatures? That would complicate things.

And the increased burnout rate of your sysadmins. But who cares about them, right?

Re:Possible strategy (5, Funny)

Monkeedude1212 (1560403) | more than 4 years ago | (#29834827)

Sadly, in an effort to save money, we hired some developers with little to no experience, and zero credentials. Turns out the program they wrote to control the thermostat eats up so many compute cycles that it visibly raises the temperature of whatever machine it's running on. So we ran it in the server room, because that's where temperature is most important. However, by the time it would adjust the temperature, the room would rise 1 degree. Then it would have to redo its analysis and adjustments.

Long story short, the building burned down and I'm now unemployed.

Re:Possible strategy (3, Funny)

JustinRLynn (831164) | more than 4 years ago | (#29836755)

Wow, I never thought I'd observe a thermal cascade reaction outside of a chemistry lab or a nuclear power plant. Thanks slashdot!

Re:Possible strategy (3, Interesting)

Linker3000 (626634) | more than 4 years ago | (#29834843)

Interestingly enough, I recently submitted an 'Ask Slashdot' (pending) about this, as my IT room is also the building's server room (just one rack and 5 servers). We normally just keep the windows open during the day and turn on the aircon when we close up for the night, but sometimes we forget and the room's a bit warm when we come in the next day! We could just leave the aircon on all the time, but that's not very eco-friendly.

I was asking for advice on USB/LAN-based temp sensors and also USB/LAN-based learning IR transmitters so we could have some code that sensed temperature and then signalled to the aircon to turn on by mimicking the remote control. Google turns up a wide range of kit from bareboard projects to 'professional' HVAC temperature modules costing stupid money so I was wondering if anyone had some practical experience of marrying the two requirements (temp sensor and IR transmitter) with sensibly-priced, off-the-shelf (in the UK) kit.

Anyone?

Re:Possible strategy (1)

WuphonsReach (684551) | more than 4 years ago | (#29835615)

http://www.itwatchdogs.com/products_mon.shtml [itwatchdogs.com]

Basically, you're looking at $300-$1000 in hardware, but it can interface with Nagios.

If we ever move our servers to the basement, I'll be setting these up to monitor for flooding or temperature issues.

Re:Possible strategy (1)

Linker3000 (626634) | more than 4 years ago | (#29835837)

Cheers - my starting point for pricing was a USB-based IR transmitter that only costs 67 UKP

http://www.redrat.co.uk/products/index.html [redrat.co.uk]

Anyone used this for a similar temp control project?

Re:Possible strategy (1)

herring0 (1286926) | more than 4 years ago | (#29837215)

We used the Sensatronics EM1, which is connected to the network, and monitor it with several things. The EM1 interface is very simple, and one of the monitors is just a cron job that scrapes the output from the web interface and will shut down some of our more sensitive equipment if it gets too hot.

They also have a bevy of interfaces from commercial products and the couple of monitoring/notification systems we tested were all able to communicate with the EM1 without any problems.

The total cost for the EM1 and several temp and temp+humidity probes was less than $700 USD. If you don't care about multiple probes you could probably get it for under $500 USD.

http://www.sensatronics.com/ [sensatronics.com]
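
For anyone wanting to roll the cron-scrape approach themselves, a minimal sketch might look like the following. The probe URL, the page format being parsed and the SSH-based shutdown are all assumptions for illustration, not the EM1's actual interface.

    #!/usr/bin/env python3
    """Cron-driven temperature check; adjust the URL and regex for whatever
    your monitoring box actually serves."""
    import re
    import subprocess
    import sys
    import urllib.request

    PROBE_URL = "http://192.168.1.50/status"   # hypothetical probe address
    SHUTDOWN_AT_C = 35.0
    HOSTS_TO_STOP = ["db01.example.com"]       # most heat-sensitive boxes first

    def read_temp() -> float:
        html = urllib.request.urlopen(PROBE_URL, timeout=10).read().decode()
        match = re.search(r"Temperature:\s*([\d.]+)", html)  # assumed page format
        if not match:
            sys.exit("could not parse temperature from probe page")
        return float(match.group(1))

    if __name__ == "__main__":
        if read_temp() >= SHUTDOWN_AT_C:
            for host in HOSTS_TO_STOP:
                # assumes passwordless SSH from the monitoring box
                subprocess.run(["ssh", host, "sudo", "poweroff"], check=False)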

Re:Possible strategy (0)

Anonymous Coward | more than 4 years ago | (#29835061)

1. Get a thermostat you can control with a computer 2. Give the computer inputs of temperature and energy use, and output of heating/cooling 3. Write a program to minimize energy use (genetic algorithm?) 4. Profit!!

Possible problem: do we need to factor in some increased wear & tear on the machines for higher temperatures? That would complicate things.

Eh, don't bother factoring in the wear and tear. See, then the genetic algorithm will find that the most energy efficient setting is AC off... then, when the servers all overheat and die, that's even LESS energy used!

What about HDDs? (0)

headhot (137860) | more than 4 years ago | (#29834529)

Google did a study that said the MTBF for HDDs decreases significantly with each warmer degree of temperature.

Re:What about HDDs? (5, Informative)

DomNF15 (1529309) | more than 4 years ago | (#29834597)

No they didn't - what they did do is figure out that increased temperature is not correlated to higher failure rates - the failure rates don't magically decrease as it gets hotter.

Here's the link for your review: http://hardware.slashdot.org/story/07/02/18/0420247/Google-Releases-Paper-on-Disk-Reliability [slashdot.org]

Re:What about HDDs? (1)

Kashgarinn (1036758) | more than 4 years ago | (#29836361)

I'm pretty sure if they did a check between vibration and failure rates, they'd get correlation.

I'll also put out there a hypothesis that a rack with no moving parts will have lower failure rates compared to a rack with moving parts.

Wouldn't it be fun to be a head engineer at one of the bigger companies and be able to test it out :)

Re:What about HDDs? (2, Funny)

Mr. Freeman (933986) | more than 4 years ago | (#29837107)

"Wouldn't it be fun to be a head engineer at one of the bigger companies and be able to test it out :)"
Oh really?

Let's see your proposal, your test criteria, your plan.
Let's see your budget... cut it in half
Now for risk analysis, what if you're right and the servers all fail sooner than expected (i.e. sooner than budgeted)?
Spend 3 weeks filling out red tape
Spend 2 weeks waiting.

OK, you can run your study. Set up two racks in a closet and take measurements every day for a year.

Now write up the review.

Alright, thanks for your study, but our lawyers have advised us that it wasn't peer reviewed and published in a respected compsci journal and therefore we can't do anything with it, or the insurance wouldn't cover us and we'd be liable for deaths resulting from servers or something.

File in circular file or far back of filing cabinet never to be seen again until you're clearing out your office because they had to let you go because server replacement costs were too high to keep you on the payroll.

Re:What about HDDs? (2, Insightful)

wsanders (114993) | more than 4 years ago | (#29836851)

I am a little skeptical, since most hard drive failures I've had have been right after an air conditioning outage. The Google paper uses temperatures obtained from SMART, which are usually 10 to 15C higher than the ambient temperature in the room, and the tail of their sample falls off rapidly over 40C. What would the SMART temperature be if the ambient temperature was 40 or so? Probably 60 or above. Their graphs don't go that high.

But we're talking about raising the temperature of a data center only 2 or 3 deg. Meat lockers are not helpful. Moral of the story? Maybe spend your cooling bucks on your storage, then let the rest of your systems eat their exhaust. I have some new Juniper routers with no moving parts inside except fans - the yellow alarm doesn't kick off until 70C and the machine doesn't shut down until 85C.
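
If you want to compare SMART-reported drive temperatures against the room's ambient reading, something along these lines works on Linux with smartmontools installed; the attribute name and raw-value layout vary by drive vendor, so treat the parsing as a sketch rather than a universal solution.

    import subprocess
    from typing import Optional

    def drive_temp_c(device: str) -> Optional[int]:
        """Read the SMART-reported temperature for one drive via smartctl.
        Needs smartmontools and root; only looks for the common
        'Temperature_Celsius' attribute, which not every drive exposes."""
        out = subprocess.run(
            ["smartctl", "-A", device], capture_output=True, text=True
        ).stdout
        for line in out.splitlines():
            if "Temperature_Celsius" in line:
                return int(line.split()[9])  # RAW_VALUE column on most drives
        return None

    if __name__ == "__main__":
        for dev in ("/dev/sda", "/dev/sdb"):
            print(dev, "SMART temp:", drive_temp_c(dev))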

Re:What about HDDs? (1)

Bandman (86149) | more than 4 years ago | (#29834613)

Until what point? You can't consistently say "increase the temperature to decrease the MTBF".

You'll end up with molten slag.

Re:What about HDDs? (0)

Anonymous Coward | more than 4 years ago | (#29834675)

If you slag the drives, MTBF = 0.

Hint: when you're talking about time between failures, decreasing is the bad derivative.

Re:What about HDDs? (1, Informative)

jeffmeden (135043) | more than 4 years ago | (#29834677)

Until what point? You can't consistently say "increase the temperature to decrease the MTBF".

You'll end up with molten slag.

Yes, you can. MTBF = mean time before/between failure. To decrease, reduce, lower, however you want to say it, it is going to fail SOONER meaning it is getting LESS reliable. That was the point, hotter temps = less reliability. Same goes for just about any physical/chemical process (fans, batteries, hard drive motors, etc.)

Re:What about HDDs? (1)

MrNemesis (587188) | more than 4 years ago | (#29834823)

Except the google study didn't display any evidence of this happening - there was no correlation between higher temperatures and higher failure rates on mechanical drives.

http://hardware.slashdot.org/article.pl?sid=07/02/18/0420247 [slashdot.org]
http://www.engadget.com/2007/02/18/massive-google-hard-drive-survey-turns-up-very-interesting-thing/ [engadget.com]
http://labs.google.com/papers/disk_failures.pdf [google.com]

Even if it were, it'd be easy to remedy - boot all your servers off SSD and keep them in a "hot" room. Keep your SANs-full-o'-spinning-rust in a "cold" room. You've just saved a fortune in air con despite being unable to convince your CTO that heat isn't as big a killer as many people claim it to be.

We had a power failure at one of our data centres, due to a combination of a stupid JCB driver and IBM's ineptitude (not keeping the diesel tanks full). Power for the servers was restored about six hours before the air-con was back up and running, and most of our equipment got cooked (ambient temp ranging from 35 to 40 degrees depending on which part of the data centre you stood in) - we demanded IBM guarantee us, for the next 6 months, a 3hr turnaround on any parts that died due to heat failure. 18 months later and our hard drive failure graph is the same as it ever was.

Shoddy components on hardware are another matter I guess, but we've never had any hardware die due to a single faulty component apart from the occasional RAID card. Expect, as "hot" DCs become more common, that the heat thresholds on lots of enterprise equipment will increase... for a price, of course ;)

Re:What about HDDs? (1)

jeffmeden (135043) | more than 4 years ago | (#29835085)

After giving that paper a closer look (the best link is this one, btw, the engadget link is dead: http://research.google.com/archive/disk_failures.pdf [google.com] ), the failure rate went up with cold AND hot temperatures. How they got the disk temps that cold is beyond me, but their hot end seemed a little optimistic, since I have seen desktops in comfortably air conditioned rooms running disk temps of 50C or more, and I have a strong set of anecdotal evidence that these are the disks that fail most often.

Re:What about HDDs? (0)

Anonymous Coward | more than 4 years ago | (#29835463)

Ah yes, our good friend the anecdote: the singular form of data.

Re:What about HDDs? (1)

petermgreen (876956) | more than 4 years ago | (#29835881)

The Google survey appears to show that drives are happiest around 35-40 Celsius, with failure rates increasing on both sides of that band.

Of course there are a couple of issues with that data

Firstly, the data comes from the drives' built-in sensors, so if a particular brand of drive has both an abnormal failure rate and an abnormal reported running temperature (either due to producing a different amount of heat or due to a bad sensor) it would skew the results.

The second problem is they simply don't have much data outside the range 25-45 Celsius.

Still I don't think it's a huge issue as long as you don't try to run your datacenter insanely hot.

The Risks and Rewards of 'Warner' Data Centers (0)

Anonymous Coward | more than 4 years ago | (#29834543)

I read that as "The Risks and Rewards of Warner Data Centers", see the previous news item, "Time Warner Cable Modems Expose Users"

I'm getting old.

sysadmin (0)

Anonymous Coward | more than 4 years ago | (#29834553)

So, this is all good and well, but for us simple sysadmins who run just a few servers in a closet room, what does it mean?

In my case, I have 8 servers and one 12k BTU AC, currently set at 22 degrees Celsius. Is this in line with the recommendations?
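
A back-of-envelope check, where the per-server draw is a guess rather than a measurement:

    # Is 12,000 BTU/hr of cooling enough for 8 servers?
    servers = 8
    watts_each = 350            # assumed average draw per box
    btu_per_watt_hour = 3.412   # 1 W of dissipation ~= 3.412 BTU/hr of heat

    heat_btu_hr = servers * watts_each * btu_per_watt_hour
    print(f"heat load ~{heat_btu_hr:.0f} BTU/hr vs 12,000 BTU/hr of cooling")
    # ~9,600 BTU/hr, so the 12k unit has some headroom -- but not much, and
    # none at all if the boxes run closer to 500 W under load.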

Re:sysadmin (0)

Anonymous Coward | more than 4 years ago | (#29834641)

> in my case, I have 8 servers and 1 12k btu AC, currently set at 22 degrees celcius. is this in line with the recommendations?

Use a larger case.

A little bit unclear (1)

sunking2 (521698) | more than 4 years ago | (#29834581)

Sure, the fans kick in and you aren't saving as much, but are you still saving? I suspect you still are; there is a reason you are told to run ceiling fans in your house even with the AC on.

The thermal modeling for all this isn't that difficult. You can get power consumption, fan speeds, temps, etc. and feed them into a pretty accurate plant model that should be able to adjust temperature on the fly for optimal efficiency. Or I guess we can hire a company to form a bunch of committees to do a bunch of studies and come up with a bunch of papers that state the obvious.
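
For what it's worth, even a toy lumped "plant" model shows the shape of the problem; every constant in the sketch below is made up purely for illustration.

    # One thermal mass, a constant IT load, losses through the envelope, and a
    # chiller with simple on/off hysteresis control.
    def simulate(hours: float = 8.0, dt_s: float = 60.0) -> None:
        T = 24.0              # room temperature, deg C
        T_out = 30.0          # outside temperature
        P_it = 50_000.0       # IT heat load, W
        UA = 500.0            # envelope conductance, W per deg C of difference
        C = 5e7               # lumped thermal capacity, J per deg C
        Q_chiller = 60_000.0  # heat removed when the chiller runs, W
        setpoint, hysteresis = 25.0, 0.5
        chiller_on = False

        t = 0.0
        while t < hours * 3600:
            if T > setpoint + hysteresis:
                chiller_on = True
            elif T < setpoint - hysteresis:
                chiller_on = False
            q_cool = Q_chiller if chiller_on else 0.0
            T += (P_it - UA * (T - T_out) - q_cool) * dt_s / C
            t += dt_s
        print(f"temperature after {hours} h: {T:.2f} C, chiller on: {chiller_on}")

    simulate()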

Re:A little bit unclear (2, Insightful)

mea37 (1201159) | more than 4 years ago | (#29834913)

"Sure, the fans kick in and you aren't saving as much, but are you still saving? I suspect you still are, there is a reason you are told to run ceiling fans in your house even with the AC on."

If only someone would do a study based on real-world testing, we could be sure... Oh, wait...

There are several differences between ceiling fans and server fans. You can't use one to make predictions about the other. "Using one large fan to increase airflow in a room is a more efficient way for people to feel cooler than using AC to actually drop the temp a few extra degrees", but this does not imply that "running a bunch of little fans to individually increase heat sink efficiency in each of a number of computers would be more efficient than just keeping the room cool enough for those heat sinks to do their job in the first place".

Re:A little bit unclear (4, Insightful)

greed (112493) | more than 4 years ago | (#29835167)

For starters, people sweat and computers do not. So, airflow helps cool people by increasing evaporation, in addition to direct thermal transfer. Even when you think you aren't sweating, your skin is still moist and evaporative cooling still works.

Unless someone invents a CPU swamp cooler, that's just not happening on a computer. You do need airflow to keep the hot air from remaining close to the hot component (this can be convection or forced), but you don't get that extra... let's call it "wind chill" effect that humans feel.

Ok, just how much warmer? (0)

RyanSpade (820527) | more than 4 years ago | (#29834605)

I'll bite... We all know that a couple of degrees can save a good bit over a year's time. (Somewhere around 5 to 10% IIRC) Will a couple of degrees make that much of a difference? Likely not in the general lifespan of the equipment. Will A LOT of degrees make a difference? I'm willing to bet so.

we can just use geeks as a heatsink... (0)

Anonymous Coward | more than 4 years ago | (#29834635)

oh wait, you mean the coal, chemical, oil, and nuclear companies already have heated the rivers up and killed a bunch of fish? damnit.

maybe we can use people as a heatsink? 'help wanted, must love drinking water and pee-ing'

The internet's not free? (1)

commodore64_love (1445365) | more than 4 years ago | (#29834661)

I thought the internet was free (or so people keep telling me). You mean it actually costs these companies money to maintain the connections??? Wow. I guess my $15/month bill actually serves a purpose after all.

UNITS? (1, Interesting)

RiotingPacifist (1228016) | more than 4 years ago | (#29834701)

80 whats? Obviously they mean 80F (running at 80K, 80C or 80R would be insane), but you should always specify units (especially if you're using some backwards units like Fahrenheit!)

Re:UNITS? (2, Funny)

friedo (112163) | more than 4 years ago | (#29835013)

Fahrenheit backwards? That shit was metric before the Metric System even existed.

To wit:

0F is about as cold as it gets, and 100F is about as hot as it gets.

See? Metric.

Re:UNITS? (1)

ubercam (1025540) | more than 4 years ago | (#29835251)

lol not around here [google.ca] my friend.

Have you ever gone outside when it's -40 (C or F, it's the same)? The air is so cold that it hurts to breathe, but I love it. There is nothing like it. The humidity from your breath sticks to your eyelashes and they freeze together and you have to pick the ice off so you can open your eyes. It's amazing human beings even live here.

Re:UNITS? (1)

Abstrackt (609015) | more than 4 years ago | (#29836959)

As a fellow Winnipegger, all I can say is this: at least it's not Saskatchewan. =p

Re:UNITS? (1)

ubercam (1025540) | more than 4 years ago | (#29837193)

So true! At least we have something mildly interesting to look at, like a tree once in a while, maybe a lake.

Re:UNITS? (2, Informative)

tepples (727027) | more than 4 years ago | (#29835373)

Fahrenheit backwards? That shit was metric before the Metric System even existed.

To wit:

0F is about as cold as it gets, and 100F is about as hot as it gets.

You're right for the 40th parallel or so. But there are parts of the world that routinely dip below 0 deg F (-18 deg C) and other parts that routinely climb above 100 deg F (38 deg C). Things like that are why SI switched from Fahrenheit and Rankine to Celsius and Kelvin.

Re:UNITS? (1)

jeffmeden (135043) | more than 4 years ago | (#29836861)

Things like that are why SI switched from Fahrenheit and Rankine to Celsius and Kelvin.

Sure, blame climate change. Everyone else does!

Re:UNITS? (3, Interesting)

nedlohs (1335013) | more than 4 years ago | (#29835437)

And yet the temperature here measured in F gets negative every winter. And where I previously lived it got above 100F every summer (and it also does where I am now, but only a day or three each year).

But in both those places a temperature of 0C was the freezing point of water, and 100C the boiling point. Yes, that 100C one isn't so useful in terms of daily temperature; the 0C is, though, since whether water will freeze or not is the main transition point in daily temperature.

Re:UNITS? (0)

Anonymous Coward | more than 4 years ago | (#29836307)

Fahrenheit backwards? That shit was metric before the Metric System even existed.

Indeed. A bit like how horse riding was a metric of how long you could travel in a day, or how fast you could go. Nowadays, in the backwards parts of the world, the whole concept of horse travel has been practically forgotten. Infidels!

Ducted cabinets (2, Interesting)

tom17 (659054) | more than 4 years ago | (#29834707)

So what about having ductwork as another utility that is brought to each individual server? Rather than having thousands of tiny inefficient fans whirring away, you could have a redundant farm of large efficient fans that pull in cool air from outside (cooling only required then in hot climates or summer) and duct it under the floor in individual efficient ducts to each cabinet. Each cabinet would then have integral duct-work that would connect to individual servers. The servers would then have integral duct-work that would route the air to the critical components. There would have to be a similar system of return-air duct-work that would ultimately route back to another redundant farm of large efficient fans that scavenge the heated air and dump it outside.

I realise that this is not something that could be done quickly; it would require co-operation from all major vendors, and then only if it would actually end up being more efficient overall. There would be lots of hurdles to overcome too... efficient ducting (no jagged edges or corners like in domestic HVAC ductwork), no leaks, easy interconnects, space requirements, rerouting away from inactive equipment, etc. You would still need some AC in the room as there is bound to be heat leakage from the duct-work, as well as heat given off from less critical components, but the level of cooling required would be much less if the bulk of the heat was ducted straight outside.

So I know the implementation of something like this would be monumental, requiring redesigning of servers, racks, cabinets and general DC layout. It would probably require standards to be laid out so that any server will work in any cab etc (like current rackmount equipment is fairly universally compatible), but after this conversion, could it be more efficient and pay off in the long run?

Just thinking out loud.

Tom...

Re:Ducted cabinets (0)

Anonymous Coward | more than 4 years ago | (#29834733)

Why do you want to wrap server cabinets with duct tape ?!?

Re:Ducted cabinets (2, Informative)

Cerberus7 (66071) | more than 4 years ago | (#29834777)

THIS. I was going to post the same thing, but you beat me to it! APC makes exactly what you're talking about. They call it "InfraStruXure." Yeah, I know... Anywho, here's a link to their page for this stuff [apc.com] .

Re:Ducted cabinets (1)

jeffmeden (135043) | more than 4 years ago | (#29836961)

Ah, erm, no. That's not what InfraStruXure is. And there is a good reason. What happens when you need to work in the front or back of the cabinet? All of a sudden your cooling mechanism is offline and you have precious few minutes without forced air before your servers roach themselves.

The reason this has never (and probably will never) been done is the amount of form factor standardization required from top to bottom in the vendor lineup. Even if the heavens parted and God himself handed down a standard, it wouldn't be six months before some vendor decided they would one-up the market and come out with something "Better" that breaks it, and then you are back to the basics. Given the huge amount of heterogeneity in a given data center, asking every bit of equipment to conform to the same standard, and to stick to that standard for more than one product release cycle, is something of a pipe dream.

We have replaced.. (1)

UncleWilly (1128141) | more than 4 years ago | (#29834779)

We have replaced Tom's Decaf with DOUBLE ESPRESSO this morning, let's see if he's noticed the difference..

Re:Ducted cabinets (2, Informative)

Linker3000 (626634) | more than 4 years ago | (#29834915)

While I was looking at aircon stuff for our small room, I came across a company that sold floor-to-ceiling panels and door units that allowed you to 'box in' your racks and then divert your aircon into the construction rather than cooling the whole room. Seems like a sensible solution for smaller data centres or IT rooms with 1 or 2 racks in the corner of an otherwise normal office.

Re:Ducted cabinets (1)

LMacG (118321) | more than 4 years ago | (#29835677)

I know just the man to work on this -- Archibald 'Harry' Tuttle. [wikipedia.org]

Re:Ducted cabinets (1)

0100010001010011 (652467) | more than 4 years ago | (#29836625)

Why even have individual cases? It seems to be rare nowadays that a full rack isn't just full of computers. Why not have one massive door and a bunch of naked computers on the racks? Set up the air flow in your building such that one side is high pressure, the other side is low, and blow air across the entire thing.

Cluster load balancing based on temp (2, Interesting)

EmagGeek (574360) | more than 4 years ago | (#29834749)

Well, if you have a large cluster, you can load balance based on CPU temp to maintain a uniform junction temp across the cluster. Then all you need to do is maintain just enough A/C to keep the CPU cooling fans running slow (so there is excess cooling capacity to handle a load spike since the A/C can only change the temp of the room so quickly)

Or, you can just bury your data center in the antarctic ice and melt some polar ice cap directly.
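
A minimal sketch of that temperature-aware placement idea, with made-up node temperatures and an assumed per-job heat bump:

    import heapq

    # Hypothetical snapshot of CPU temperatures per node (in practice pulled
    # from IPMI, lm-sensors or whatever monitoring is already in place).
    node_temps = {"node01": 62.0, "node02": 55.0, "node03": 71.0, "node04": 58.0}

    def pick_targets(jobs: list[str]) -> dict[str, str]:
        """Greedy placement: hand each job to the currently coolest node,
        assuming each job bumps its node's temperature by a fixed amount."""
        HEAT_PER_JOB = 2.0  # assumed deg C per extra job; purely illustrative
        heap = [(temp, node) for node, temp in node_temps.items()]
        heapq.heapify(heap)
        placement = {}
        for job in jobs:
            temp, node = heapq.heappop(heap)
            placement[job] = node
            heapq.heappush(heap, (temp + HEAT_PER_JOB, node))
        return placement

    print(pick_targets(["job-a", "job-b", "job-c", "job-d", "job-e"]))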

One failed attempt (0)

Anonymous Coward | more than 4 years ago | (#29834789)

They did try using 40 year old female virgins as heat sinks but there was too much icing.

Turn fans down? (1, Informative)

RiotingPacifist (1228016) | more than 4 years ago | (#29834849)

I used to have a Pentium 4 Prescott, and the truth is processors can run significantly above spec (hell, the thing would go above the "max temp" just opening notepad). It's already been shown that higher temps don't break HDDs; are the downsides of running the processor a few degrees hotter significant, or can they be ignored?

Re:Turn fans down? (1)

TheRaven64 (641858) | more than 4 years ago | (#29836073)

With the P4, there are significant disadvantages with running it too hot. The CPU contains a thermal sensor and automatically reduces the clock speed when it gets hot to prevent damage. When it's running at 100MHz, it takes a while to get anything done...

Hardware upgrades probably nullify the problems (2, Interesting)

Yvan256 (722131) | more than 4 years ago | (#29834887)

If you save energy by having warmer data centers, but it shortens the MTBF, is it really that big of a deal?

Let's say the hardware is rated for five years. Let's say that running it hotter than the recommended specifications shortens that to three years.

But in three years, new and more efficient hardware will probably replace it anyway because it will require, let's say, 150 watts instead of 200 watts; the new hardware will cost less to run in those lost two years.
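
Back-of-envelope on the electricity alone, using the wattages above and an assumed $0.10/kWh (ignoring cooling overhead, purchase price and residual value):

    old_w, new_w = 200, 150
    price_per_kwh = 0.10        # assumed, USD
    hours_per_year = 24 * 365

    saving = (old_w - new_w) / 1000 * hours_per_year * price_per_kwh
    print(f"direct energy saving: ${saving:.0f} per server per year")
    # ~$44/yr, so roughly $90 of electricity over the two "lost" years --
    # which has to be weighed against the cost of replacing the box earlier.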

Re:Hardware upgrades probably nullify the problems (2, Interesting)

amorsen (7485) | more than 4 years ago | (#29835213)

But in three years, new and more efficient hardware will probably replace it anyway because it will require, let's say, 150 watts instead of 200 watts

That tends to be hard to get actually, at least if we're talking rack-mountable and if you want it from major vendors.

Rather you get something 4 times as powerful which still uses 200W. If you can then virtualize 4 of the old servers onto one of the new, you have won big. If you can't, you haven't won anything.

Longer Study (3, Insightful)

amclay (1356377) | more than 4 years ago | (#29835039)

The studies were not long enough to constitute a very in-depth analysis. It would have to be a multi-month, or even year-long, study to analyze all the effects of raising temperatures.

For example, little was considered with:

1) Mechanical Part wear (increased fan wear, component wear due to heat)

2) Employee discomfort (80 degree server room?)

3) Part failure*

*If existing cooling solutions had issues, it would be a shorter time between the issue and additional problems since you have cut your window by ~15 degrees.

Re:Longer Study (1)

gx5000 (863863) | more than 4 years ago | (#29835273)

This sounds like newbies at play... Keep it cool and it will last. Let it go warm, and hardware sales will wipe out your savings... Then there was the time our power went out and the backup to the building was wiped out by MetroPower. The server room became a sweat lodge with as many fans as we could find. Had we been warm in there we would have lost numerous server towers and Raid Racks... I like my server room served at 69 F / 19 C ... chilled, not baked, TY.

Re:Longer Study (1)

LWATCDR (28044) | more than 4 years ago | (#29835945)

"Had we been warm in there we would have lost numerous server towers and Raid Racks."
Don't your servers have thermal protection?
Don't you have a power down temp and procedure?
I cannot imagine a professional allowing hardware to fry just because they lose AC. You must have a procedure for that situation, because it will happen if you run a data center long enough.

Re:Longer Study (1, Funny)

Anonymous Coward | more than 4 years ago | (#29836493)

Don't you have a power down temp and procedure?

Funny story about a time I was in a similar situation.

With shutdown at 90F, our procedure, made up on the fly when we hit 85F and started getting nervous, was to open the doors to the machine room and carry large cardboard sheets in and out of the room in a circle, walking through it and circulating the air like bees do at the front of a hive on a hot day.

It got the room down to 80F in the hour it took for the big-ass blower fans to get there, and the fans kept it stable in the high 70s for the rest of the day. Zero downtime.

The HVAC was fine; the root cause was that building maintenance shut off the water to the AC units without notice to us, the tenant. Yeah, we chewed them out royally the next day, but the first priority was to keep the machines running.

So in a roundabout way, we had a procedure. It was just pretty absurd. On the other hand, we didn't care how dorky we looked; it worked!

Article... What Article? (0)

Anonymous Coward | more than 4 years ago | (#29835147)

I've RTFA and the article lacked most of the information that was discussed in the summary. It doesn't really explain the many risks of higher temperatures, only the cost savings of raising the temperature.

With modern cooling infrastructure, are cooling costs really that high? At my datacenter, cooling isn't that expensive. The chiller units are expensive to buy, but the price for electricity and chilled water isn't that high.

Don't people know what happens when computer equipment is exposed to high temperatures? Hardware failures increase, hard drives may fail sooner, and fans will be running way faster (TFA mentions that one).

Running a datacenter too cool isn't good, either. Your staff will be freezing and you'll be wasting money on chiller maintenance.

Time to recover (1)

afidel (530433) | more than 4 years ago | (#29835361)

Yes, but if you have the room at the tipping point what does this do to your ability to recover from a fault? I know one reason many datacenters have experienced outages even with redundant systems is that the AC equipment is almost never on UPS and so it takes some time for them to recover after switching to generators. If you are running 10F hotter doesn't that mean you have that much less time for the AC to recover before you start experiencing problems? For a large company with redundant datacenters or in Cisco's case where they are mostly development labs it probably is worth the risk, but for your average small to midsized corporate datacenter it's probably smarter to stay with the tried and true.

Newer tech may reduce temperature issues (1, Informative)

Anonymous Coward | more than 4 years ago | (#29835405)

The use of SSDs in data centers can dramatically impact power usage and temperature management costs:

"The power savings for the SSD-based systems is about 50 percent, and the overall cooling savings are 80 percent, according to the white paper. These savings are significant for a datacenter that spends 40 percent of its budget on power and cooling, and they're bound to make other datacenter operators sit up and take notice." http://arstechnica.com/business/news/2009/10/latest-migrations-show-ssd-is-ready-for-some-datacenters.ars

While MTBF and unit cost are still concerns, the potential savings will likely see more centers moving in this direction.

Another server room horror story (5, Insightful)

BenEnglishAtHome (449670) | more than 4 years ago | (#29835809)

I'm less concerned with the fine-tuning of the environment for servers than I am with getting the basics right. How many bad server room implementations have you seen?

I'm sitting in one. We used to have a half-dozen built-for-the-purpose Liebert units scattered around the periphery of the room. The space was properly designed and the hardware maintained whatever temp and humidity we chose to set. They were expensive to run and maintain but they did their job and did it right.

About seven years ago, the bean-counting powers-that-be pronounced them "too expensive" and had them ripped out. The replacement central system pumps cold air under the raised floor from one central point. Theoretically, it could work. In practice, it was too humid in here the first day.

And the first week, month, and year. We complained. We did simple things to demonstrate to upper management and building management that it was too humid in here, things like storing a box of envelopes in the middle of the room for a week and showing management that they had sealed themselves due to excessive humidity.

We were, in every case, rebuffed.

A few weeks ago, a contractor working on phone lines under the floor complained about the mold. *HE* got listened to. Preliminary studies show both Penicillium (relatively harmless) and black (not so harmless) mold in high concentrations. Lift a floor tile near the air input and there's a nice thick coat of fluffy, fuzzy mold on everything. There's mold behind the sheetrock that sometimes bleeds through when the walls sweat. They brought in dehumidifiers that are pulling more than 30 gallons of water out of the air every day. The incoming air, depending on who's doing the measuring, is at 75% to 90% humidity. According to the first independent tester who came in, "Essentially, it's raining" under our floor at the intake.

And the areas where condensation is *supposed* to happen and drain away? Those areas are bone dry.

IOW, our whole system was designed and installed without our input and over our objections by idiots who had no idea what they were doing.

So, my fellow server room denizens, please keep this in mind - when people (especially management types) show up with studies that support the view that the way the environment is controlled in your server room can be altered to save money, be afraid. Be very afraid. It doesn't matter how good the basic research is or how artfully it could be employed to save money without causing problems; by the time the PHBs get ahold of it, it'll be perverted into an excuse to totally screw things up.

Re:Another server room horror story (2, Funny)

infinite9 (319274) | more than 4 years ago | (#29836811)

About a year ago, I worked on a project in a backwards location that was unfortunately within driving distance of the major city where I live. The rate was good though so I took the job. These people were dumb for a lot of reasons. (it takes a lot for me to call my customers dumb) But the one that really made me laugh was the server rack strategically placed in the server room so that the server room door would smack into it whenever someone came in the room.

Warmer, better, faster... (2, Informative)

pckl300 (1525891) | more than 4 years ago | (#29835861)

I was at a Google presentation on this last night. If I remember correctly, I believe they found the 'ideal' temperature for running server hardware without decreasing lifespan to be about 45 C.

Risk of AC failure (4, Interesting)

Skapare (16644) | more than 4 years ago | (#29836315)

If there is a failure of AC ... that is, either Air Conditioning OR Alternating Current, you can see a rapid rise in temperature. With all the systems powered off, the residual heat inside the equipment, which is much hotter than the room, emerges and raises the room temperature rapidly. And if the equipment is still powered (via UPS when the power fails), the rise is much faster.

In a large data center I once worked at, with 8 mainframes and 1800 servers, power to the entire building failed after several ups and downs in the first minute. The power company was able to tell us within 20 minutes that it looked like a "several hours" outage. We didn't have the UPS capacity for that long, so we started a massive shutdown. Fortunately it was all automated and the last servers finished their current jobs and powered off in another 20 minutes. In that 40 minutes, the server room, normally kept around 17C, was up to a whopping 33C. And even with everything powered off, it peaked at 38C after another 20 minutes. If it weren't so dark in there I think some people would have been starting a sauna.

We had about 40 hard drive failures and 12 power supply failures coming back up that evening. And one of the mainframes had some issues.

keep UPS separate and cooler (2, Insightful)

speculatrix (678524) | more than 4 years ago | (#29836545)

UPS batteries are sealed lead-acid and they definitely benefit from being kept cooler; it's also good to keep them in a separate room, usually close to your main power switching. As far as servers are concerned, I've always been happy with an ambient room temp of about 22 or 23, provided air flow is good so you don't get hot spots, and it makes for a more pleasant working environment (although with remote management I generally don't need to actually work in them for long periods of time).

Be very afraid of a study that ... (1)

Skapare (16644) | more than 4 years ago | (#29836765)

describes temperatures using the Fahrenheit scale.
