
Cooling Challenges an Issue In Rackspace Outage

Zonk posted more than 6 years ago | from the getting-a-touch-warm-in-here dept.

Power 294

miller60 writes "If your data center's cooling system fails, how long do you have before your servers overheat? The shrinking window for recovery from a grid power outage appears to have been an issue in Monday night's downtime for some customers of Rackspace, which has historically been among the most reliable hosting providers. The company's Dallas data center lost power when a traffic accident damaged a nearby power transformer. There were difficulties getting the chillers fully back online (it's not clear if this was equipment issues or subsequent power bumps) and temperatures rose in the data center, forcing Rackspace to take customer servers offline to protect the equipment. A recent study found that a data center running at 5 kilowatts per server cabinet may experience a thermal shutdown in as little as three minutes during a power outage. The short recovery window from cooling outages has been a hot topic in discussions of data center energy efficiency. One strategy being actively debated is raising the temperature set point in the data center, which trims power bills but may create a less forgiving environment in a cooling outage."
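
For a rough sense of where a window that short comes from, here is a back-of-the-envelope sketch with entirely hypothetical numbers; it heats only the air around a cabinet and ignores the thermal mass of the equipment and the room, which is what stretches the real window out to minutes:

    # How fast does a 5 kW cabinet heat its share of room air once cooling stops?
    # All numbers are hypothetical except the physical constants for air.
    HEAT_LOAD_W = 5000.0        # one cabinet at 5 kW
    AIR_VOLUME_M3 = 15.0        # assumed share of room air per cabinet
    AIR_DENSITY = 1.2           # kg/m^3
    AIR_SPECIFIC_HEAT = 1005.0  # J/(kg*K)
    ALLOWED_RISE_C = 15.0       # assumed rise before thermal shutdown

    heat_capacity = AIR_VOLUME_M3 * AIR_DENSITY * AIR_SPECIFIC_HEAT  # J per degree C
    seconds = ALLOWED_RISE_C * heat_capacity / HEAT_LOAD_W
    print(round(seconds))       # ~54 s on air alone; equipment and room mass buy extra minutes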


294 comments


HAIL RED ARMY! (-1, Offtopic)

Anonymous Coward | more than 6 years ago | (#21337851)

ussr is where we are!

Re:HAIL RED ARMY! (-1, Offtopic)

Anonymous Coward | more than 6 years ago | (#21337967)

Hail Red Army Choir!

Kalinka, Kalinka, Kalinka Maya!

This is number 3 (5, Informative)

DuctTape (101304) | more than 6 years ago | (#21337871)

This is actually Rackspace's number 3 outage in the past couple of days. My company was only (!) affected by outages 1 and 2. My boss would have had a fit if number 3 had taken us down for the third time.

Other publications [valleywag.com] have noted it was number 3, too.

DT

Why run data centres in hot states? (1)

EmbeddedJanitor (597831) | more than 6 years ago | (#21338555)

It seems crazy that data centres are run in hot states. Surely Alaska would be better? C'mon Alaska, get the tax breaks right.

Re:Why run data centres in hot states? (4, Interesting)

arth1 (260657) | more than 6 years ago | (#21338989)

(Disregarding your blatant karma whoring by replying to the top post while changing the subject)

There are several good reasons why the servers are located where they are, and not, say, in Alaska.
The main one is the speed of light through fiber: a cable from Houston to Fairbanks would impose a best case of around 28 ms of latency, each way. Multiply by several billion packets.
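
A quick sanity check of that figure (a sketch in Python; the coordinates are rough and real cable routes are longer than the great circle, so actual latency will be higher):

    import math

    def great_circle_km(lat1, lon1, lat2, lon2):
        # Haversine formula on a 6371 km mean-radius sphere
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
        return 2 * 6371 * math.asin(math.sqrt(a))

    distance_km = great_circle_km(29.76, -95.37, 64.84, -147.72)  # Houston -> Fairbanks, approx.
    fiber_km_per_s = 299_792 / 1.47      # light slows to roughly c / 1.47 in glass
    one_way_ms = distance_km / fiber_km_per_s * 1000
    print(round(distance_km), round(one_way_ms))   # ~5,300 km, ~26 ms before routing detours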

This is why hosting near the customer is considered a Good Thing, and why companies like Akamai have made it their business to transparently re-route clients to the closest server.

Back to cooling. A few years ago, I worked for a telephone company, and the local data centre there had a 15 degree C ambient baseline temperature. We had to wear sweaters if working for any length of time in the server hall, but had a secure normal temperature room outside the server hall, with console switches and a couple of ttys for configuration.
The main reason why the temperature was kept so low was to be on the safe side -- even if a fan should burn out in one of the cabinets, opening the cabinet doors would provide adequate (albeit not good) cooling until it could be repaired, without (and this is the important part) taking anything down.
A secondary reason was that the backup power generators were, for security reasons, inside the server hall itself, and during a power outage these would add substantial heat to the equation.

Re:This is number 3 (0)

Anonymous Coward | more than 6 years ago | (#21338841)

Other publications include www.rackspace.com, too:

I think the incident #3 referred to at ValleyWag was a brief, but surely painful, glitch according to some reports, but ultimately a part of the Sunday AM incident.

At least they're not denying a problem, minimizing impact and shrugging off blame. Things break. Things break badly. People make mistakes and fail to deliver. Yet it is the response to these failures that shows the true measure of a company (and a person).

Which only shows (2, Informative)

CaptainPatent (1087643) | more than 6 years ago | (#21337873)

If you want 100% uptime, it's important to have backup power for the cooling as well as for the server systems themselves.
 
Is this really news?

Re:Which only shows (1)

_14k4 (5085) | more than 6 years ago | (#21337959)

Has anyone thought about putting data-centers in upper Canada / arctic regions? Just (honestly) curious.

Re:Which only shows (3, Interesting)

lb746 (721699) | more than 6 years ago | (#21338051)

I actually use a vent duct to suck in cold air from outside during the winter to help cool a server in my house. Originally I was more concerned with random objects/bugs/leaves, so I made it a closed system (like water cooling) to help protect the actual system. It works nicely, but only for about 1/3 or less of the year, when the temperature is cold enough to make a difference. I've always wondered about a larger-scale version of something like this, such as the parent's suggestion of servers in a colder/arctic region.

Re:Which only shows (1)

ByOhTek (1181381) | more than 6 years ago | (#21338057)

Or Siberia.

I was thinking the same thing.

AC is out? Crank open the vents and turn on the fans.

Admittedly it wouldn't work so well in the summer, but spring/winter/fall could be nice.

Re:Which only shows (1)

trolltalk.com (1108067) | more than 6 years ago | (#21338127)

In the winter, if you heat with electricity, you can basically run your computer for free, since its waste heat reduces the amount of heat needed to be generated by resistance heaters.

Re:Which only shows (1)

ByOhTek (1181381) | more than 6 years ago | (#21338189)

An Athlon XP 2500+ with two HDDs and a GeForce 6600GT running 24x7 can give a 12x14 foot room a 10-15 degree temperature boost.

My computer room is quite toasty in the winter...

Re:Which only shows (1)

russ1337 (938915) | more than 6 years ago | (#21338497)

I'm about to move to a colder (and more damp) environment (and an older house with wood floors), and have thought about putting my server and NAS in a hallway cupboard, drawing cool air from under the house and venting it into the hallway.

Will have to see how big a hole I can cut (and repair) in the floor of the cupboard without the new landlord noticing....

Re:Which only shows (2, Interesting)

Ironsides (739422) | more than 6 years ago | (#21338059)

Yes, actually. This was looked into by multiple companies during the late 90's. I'm not sure if any were ever built. I think one of the considerations was weighing the savings of not having to run chillers against the cost of getting fibre and power laid to the facility.

Re:Which only shows (1)

_14k4 (5085) | more than 6 years ago | (#21338101)

Well, being that the planet is (generally speaking) a sphere... you'll only really need, at most, a diameter-length piece of cable, no? :P What else are all of those holes to China I dug as a child good for?

Re:Which only shows (1, Insightful)

Anonymous Coward | more than 6 years ago | (#21338091)

Has anyone thought about putting data-centers in upper Canada / arctic regions? Just (honestly) curious.

There isn't any cheap high-speed fiber up there. Even if there was, the additional lag due to 5000 miles of fiber would be annoying, not to mention shipping & transport costs.

Re:Which only shows (1)

bn-7bc (909819) | more than 6 years ago | (#21338935)

Well, shipping costs I can't comment on (I don't know the rates used in the US/Canada), but the RTT for 5,000 miles will be about 53.6 ms. Here is the calculation. Speed of light: 186,411 miles/second. Distance: 10,000 miles (IIUC the fiber is 5,000 miles long, so for a round trip the signal has to travel 10,000 miles). RTT (propagation delay): 10,000 / 186,411 = 53.6 ms. Note: yes, I know they will not run a single fiber 5,000 miles in one go, and light travels more slowly in fiber than in vacuum, so the RTT will be larger, but this at least gives an indication.

Re:Which only shows (3, Interesting)

blhack (921171) | more than 6 years ago | (#21338111)

I think the problem is availability of power. When you are talking about facilities that consume so much power that, when built, their proximity to a power station is taken into account, you can't just slap one down at the poles and call it good. I would imagine that lack of bandwidth is a MAJOR issue as well.

One field where I think storing servers at the poles would be amazing is supercomputing. Supercomputers don't require the massive amounts of bandwidth that webservers etc. do. You send a cluster a chunk of data for processing, it processes it, and it gets sent back. For really REALLY large datasets (government stuff)... just fill a jet with hard disks and get it to the server center in a few hours.

Re:Which only shows (1)

l-ascorbic (200822) | more than 6 years ago | (#21338463)

Norway is obviously the answer then. Bloody freezing, and loads of hydroelectric power.

Re:Which only shows (1)

Average (648) | more than 6 years ago | (#21338701)

Bandwidth is comparatively cheap to get somewhere. A few redundant loops of fiber... undersea if need be. Fiber does not suffer transmission losses in the way that sending electricity the other way would.

One fairly obvious location for this would be Labrador in Canada. Very well cooled. Absolutely lots of hydroelectric. Churchill Falls is huge. They lose half the energy sending it down to the US, but no one closer needs the power. Several major untapped hydro locations, too. Lots of land for approximately free.

Re:Which only shows (1)

scheme (19778) | more than 6 years ago | (#21338795)

one field where I think storing servers at the poles would be amazing is supercomputing. Supercomputers don't require the massive amounts of bandwidth that webservers etc. do.

Supercomputing absolutely requires massive amounts of bandwidth. In particle physics, detectors at places like the LHC generate 1-5 petabytes of data each year, and this data needs to be sent out to computing centers and processed. Likewise, bioinformatics applications tend to generate lots of data (sequences, proteins, etc.), and this data needs to be shuttled around while it's being processed.

Re:Which only shows (1)

Critical Facilities (850111) | more than 6 years ago | (#21337989)

Exactly! The fact that these Chillers weren't on Emergency Generator Power is rookie mistake #1. All the generator power and UPS power in the world ain't gonna help if your Data Center gets too hot.

Re:Which only shows (4, Informative)

jandrese (485) | more than 6 years ago | (#21338003)

If you want 100% uptime (which is impossible, but you can put enough 9s in your reliability to be close enough), you need to have your data distributed across multiple data centers, geographically separate, and over provisioned enough that the loss of one data center won't cause the others to be overloaded. It's important to keep your geographical separation large because you never know when the entire eastern (or western) seaboard will experience complete power failure or when a major backhaul router will go down/have a line cut. Preferably each data center should get power from multiple sources if they can, and multiple POPs on the internet from each center is almost mandatory.
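
To put rough numbers on "enough 9s", here is a sketch that assumes the sites fail independently and failover is instantaneous, which is exactly what the geographic separation and over-provisioning above are meant to buy:

    MINUTES_PER_YEAR = 365 * 24 * 60
    site_availability = 0.999            # hypothetical: each site alone is 99.9%

    for sites in (1, 2, 3):
        combined = 1 - (1 - site_availability) ** sites
        downtime = (1 - combined) * MINUTES_PER_YEAR
        print(sites, combined, round(downtime, 4))
    # 1 site: ~526 min/year of downtime; 2 sites: ~0.5 min; 3 sites: ~0.0005 min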

Re:Which only shows (3, Interesting)

NickCatal (865805) | more than 6 years ago | (#21338275)

I can't stress this enough. When I talk to people about hosting who rely on 100% availability, I tell them they NEED to go with geographically diverse locations. Even if it is a single backup somewhere, you have to have something.

For example, Chicago's primary datacenter facility is at 350 E. Cermak (right next to McCormick Place), and the primary interconnect facility in that building is Equinix (which has the 5th and now 6th floors). A year or so ago there was a major outage there (which mucked up a good amount of the internet in the midwest) when a power substation caught fire and the Chicago Fire Department had to shut off power to the entire neighborhood. So the backup system started up like it should, with the huge battery rooms powering everything (including the chillers) for a bit while the engineers started up the generators. Only thing is, the circuitry that controls the generators shorted out. So while the generators themselves were working, the UPS was working, and the chillers were working, this one circuit board blew at the WRONG moment. And it isn't as if this circuit had never been used; they test the generators every few weeks.

Long story short, once the UPSes started running out of power, the chillers started cutting out, lights flickered, and for a VERY SHORT period of time the chillers were out before all of the servers were. Within a minute or two it got well over 100 degrees in that datacenter. Thank god the power cut out as quickly as it did.

So yes, Equinix in that case did everything by the book. They had everything set up as you would set it up. It was no big deal. But something went wrong at the worst time for it to go wrong, and all hell broke loose.

It could be worse, your datacenter could be hit by a tornado [nyud.net]

Re:Which only shows (0)

Anonymous Coward | more than 6 years ago | (#21338483)

Thank god the power cut out as quick as it did.

If he had just gone ahead and put out the fire in the substation in the first place he wouldn't have had to bother.

Re:Which only shows (2, Interesting)

Azarael (896715) | more than 6 years ago | (#21338081)

Some data centers also have multiple incoming power lines (which hopefully don't share a single transformer bottleneck). Anyway, I know for sure that at least one data center in Toronto had 100% uptime during the big August 2003 blackout, so it is possible to prevent these problems.

Re:Which only shows (2, Interesting)

afidel (530433) | more than 6 years ago | (#21338137)

It sounds like they DID have backup power for the cooling, but that the switchover to backup power caused some problems. This isn't really all that unusual, because cooling is basically never on UPS power, so the transition to backup power may not go completely smoothly unless everything is set up correctly and tested, and there are few or no unusual circumstances during the switchover. I've seen even well designed systems have problems in the real world. One time we lost one leg of a three-phase power system, so the automatic transfer switch failed to flip over and start up the generator. The UPS realized it wasn't getting good power, so it flipped over to battery power. Luckily the UPS sent out its notification and we were able to manually switch over to generator and get the cooling online, but there is almost no chance of that working if we only had 3 minutes to get things corrected.

Re:Which only shows (1)

Bandman (86149) | more than 6 years ago | (#21338213)

I'm not disagreeing, but if people really want 100% uptime, they're much better off investigating an infrastructure using GSLB or something similar, where a single geographically isolated event won't impact them

Re:Which only shows (1)

beavis88 (25983) | more than 6 years ago | (#21338223)

From a message to Rackspace customers:

"When generator power was established two chillers within the data center failed to start back up"

They had backup power for the chillers - but obviously, something didn't go quite right.

Re:Which only shows (0)

Anonymous Coward | more than 6 years ago | (#21338903)

Part of it is you have to leave A/C offline for at least 3 minutes prior to cycling it back on.

Re:Which only shows (1)

SleptThroughClass (1127287) | more than 6 years ago | (#21338741)

Or instead of depending so much on chillers which draw a lot of power, buffer the cooling by having a large amount of chilled water as part of the system. Design it so if the chillers aren't working, the server room is still somewhat cooled by the pre-chilled water; circulation of water and air requires less power than refrigeration systems. I doubt most sites can store enough ice/chilled water to operate for hours. If that were the case, server rooms could be operating off of cold which was stored during cheaper evening electric rates.

How to estimate the cooling needs? (2, Interesting)

Dynedain (141758) | more than 6 years ago | (#21337899)

Actually this brings up an interesting point of discussion for me at least. Our office is doing a remodel and I'm specifying a small server room (finally!) and the contractors are asking what AC unit(s) we need. Is there a general rule for figuring out how many BTUs of cooling you need for a given wattage of power supplies?

Re:How to estimate the cooling needs? (4, Informative)

CaptainPatent (1087643) | more than 6 years ago | (#21338001)

Is there a general rule for figuring out how many BTUs of cooling you need for a given wattage of power supplies?
I actually found a good article [openxtra.co.uk] about this earlier on and it helped me purchase a window unit for a closet turned server-room. Hope that helps out a bit.

Re:How to estimate the cooling needs? (0)

Anonymous Coward | more than 6 years ago | (#21338983)

Hire a consulting engineering firm that specializes in HVAC systems, or maybe just in data center HVAC design, to figure it out for you. All the dumbass responses I've seen here remind me why I go crazy when the guy in charge of the server room starts telling me how many tons his server room needs.

You guys get it wrong on so many levels it isn't even funny, and then you compound the errors, and you are really screwed. If I followed the advice given here on sizing a unit, it would be: take the rated full-load amperage draw of every piece of equipment, add some random numbers for the number of people in the space, multiply by pi, divide by 1,200 to get tons, and then round up. Just to be clear, you do not use the rated full-load amperage of the equipment to get the cooling load. By doing that you are saying that your computers are just giant electric strip heaters and that all the electricity that could ever possibly be used (not even what is actually used) in the server room ends up as heat in the space (this isn't correct, by the way). The conversions are actually 3.413 BTUH = 1 watt, and you divide BTUH by 12,000 for tonnage.

The real problem, as others have mentioned, is putting your server room on back-up power if you are not going to include your AC. The power densities are too great to get any kind of thermal storage to flywheel you through the outage. In fact the densities are so great that you don't even have an allowable time window to safely power down the machines or implement some sort of alternate cooling plan.

If you want to increase the time your server room has before requiring shutdown in a power outage that takes out the AC, build the server room like a bunker with thick concrete walls, floor, and ceiling. If it is a really big space add some intermediate concrete walls to add additional thermal storage capacity in the central part of the room. Follow this up with keeping the room at an even lower temperature than you already do (hello even higher electric bills). This will give a longer safety window as the walls, ceiling, and floor will actually "store" some thermal capacity and radiate it back into the space when the AC goes down.

Re:How to estimate the cooling needs? (0)

Anonymous Coward | more than 6 years ago | (#21338005)

Power capacity and BTUs are directly related. A quick google will show you how to convert between the two - it's not a bad rule of thumb to ensure that cooling capacity can equal the BTU potential, even during the failure of an AC unit.

Re:How to estimate the cooling needs? (2, Informative)

Critical Facilities (850111) | more than 6 years ago | (#21338015)

Try this. [anver.com]

Re:How to estimate the cooling needs? (0)

Anonymous Coward | more than 6 years ago | (#21338031)

The data sheet for a given piece of gear should indicate its heat output in BTUs along with power requirements, operating temperature, etc. You shouldn't have to calculate BTU output based on a power consumption figure.

Re:How to estimate the cooling needs? (4, Interesting)

trolltalk.com (1108067) | more than 6 years ago | (#21338077)

Believe it or not, but in one of those "life coincidences", pi is a safe approximation. Take the number of watts your equipment, lighting, etc., use, multiply by pi, and that's the # of btus of cooling. Don't forget to include 100 watts per person for body heat.

It'll be 90F degrees outside, and you'll be a cool 66F.

Re:How to estimate the cooling needs? (1)

MrLogic17 (233498) | more than 6 years ago | (#21338669)

I've always heard that 100 watts/person, but have never seen real data to back that up.

According to
http://www.swarthmore.edu/NatSci/es/energy/bodyheat.html [swarthmore.edu]
A person puts out 54 watts. Or at least 1 standard professor unit does.

Re:How to estimate the cooling needs? (1)

DamnStupidElf (649844) | more than 6 years ago | (#21338791)

Don't forget that nerds and geeks often have much more surface area than the standard professor, leading to higher heat loss into the cooler server room.

Re:How to estimate the cooling needs? (2, Insightful)

Bandman (86149) | more than 6 years ago | (#21338889)

I hate to pick, but I think you'll find rotund people actually have a lower surface to mass ratio than thin people.

Re:How to estimate the cooling needs? (2, Funny)

Anonymous Coward | more than 6 years ago | (#21338997)

> Don't forget that nerds and geeks often have much more surface area than
> the standard professor, leading to higher heat loss into the cooler server room.

Not to mention the actual situation during power outages:

When power is lost, all the geeks are going to be really excited and sweating (and heating) much more than normal!

Re:How to estimate the cooling needs? (0)

Anonymous Coward | more than 6 years ago | (#21338895)

I've always heard that 100 watts/person, but have never seen real data to back that up. According to http://www.swarthmore.edu/NatSci/es/energy/bodyheat.html [swarthmore.edu] a person puts out 54 watts. Or at least 1 standard professor unit does.

It's not just body heat that has to be removed, but also humidity.

Re:How to estimate the cooling needs? (1)

FuzzyDaddy (584528) | more than 6 years ago | (#21338953)

To calculate the average IT worker's heat dissipation, take the daily number of Calories in the diet, convert to joules, and divide by the number of seconds in a day.

This assumes no actual work is being done.
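
Worked through with a hypothetical 2,000 kcal/day diet, that lands right around the 100 W/person rule of thumb quoted upthread:

    KCAL_PER_DAY = 2000          # hypothetical daily diet
    JOULES_PER_KCAL = 4184
    SECONDS_PER_DAY = 86400

    watts = KCAL_PER_DAY * JOULES_PER_KCAL / SECONDS_PER_DAY
    print(round(watts))          # ~97 W dissipated, averaged over the day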

Re:How to estimate the cooling needs? (2, Interesting)

Nf1nk (443791) | more than 6 years ago | (#21339007)

Personal energy output is a function of a number of variables, but the most important are the ambient temperature and the movement of air through the room. The 100 watts per person is a conservative estimate based on a roughly 75 F room.

The prof-in-a-box experiment has a large issue that contributes to error. He is breathing through a tube; the heat exchange in your lungs is convective and has too large a magnitude to ignore. If you have doubts about how much heat flows out through breathing, next time you are cold in bed pull the covers up over your head and breathe under the covers. You will find that the bed gets nice and warm in a very short time.

Re:How to estimate the cooling needs? (2, Funny)

JUSTONEMORELATTE (584508) | more than 6 years ago | (#21338919)

Believe it or not, but in one of those "life coincidences", pi is a safe approximation. Take the number of watts your equipment, lighting, etc., use, multiply by pi, and that's the # of btus of cooling. Don't forget to include 100 watts per person for body heat.

It'll be 90F degrees outside, and you'll be a cool 66F.
And if that doesn't work, you can always tell your VP that you were taking your numbers from some guy named TrollTalk on ./
I'm sure he'll understand.

Re:How to estimate the cooling needs? (1, Informative)

fifedrum (611338) | more than 6 years ago | (#21338245)

It's not really this simple, but a decent back-of-the-napkin method is to take the amperage and the voltage and multiply, then multiply again to get BTU/hour, and divide to get tons.

20A x 110V = 2200 VA, which doesn't directly translate to watts, as someone will surely correct me, but for cooling purposes it's not a bad rule of thumb to translate the VA directly to watts, because you'll be including a built-in overhead into which you will surely grow your server space. Then go from watts to BTU/hour.

2200 watts x 3.412 BTU/hr per watt = 7506 BTU/hr

12000 BTU/hr = 1 ton. Do that calculation for all possible hosts in your space, and round up. Then purchase an additional, but portable, cooler for the space. Use that cooler for emergencies, like chilling beer, and if the main chillers break, you'll have nice cold beer to drink while the HVAC guys fix the big units and you wait for your less-essential machines to come up.

Most people will do the calculation and find their datacenter cooling systems are woefully undersized, running 100% whenever the outside air temperature is above 50F...

Re:How to estimate the cooling needs? (1)

xaxa (988988) | more than 6 years ago | (#21338667)

WTF? I thought 'ton' was a mass/weight, but you have BTU/hr which is energy/time, which is power.

Crazy Americans :-p

Re:How to estimate the cooling needs? (1)

Smidge204 (605297) | more than 6 years ago | (#21338303)

1 Watt = 3.41 BTU/hr

So if your server/network setup has 1000 watts of power supply capacity, I would recommend no less than 3410 BTU/hr of cooling capacity, or about 0.3 tons of cooling. This is because power supplies don't usually run at their peak rated capacity, so it's slightly more than you technically need if there were no other sources of heat. Add in 10-25% for additional heat gains. Add 250 watts on top of that for each person who may work in that room for more than an hour at a time.

Final formula I used on the job:

( [1.2 * (max rated wattage for all equipment)] + [250 * (max number of people)] ) * 3.41 = BTU/hr cooling

(12,000 BTU/hr = 1 ton of cooling, as some cooling equipment is rated in tons)

Of course, consult your local P.E. or cooling equipment manufacturer since there may be unique things about your situation.
=Smidge=

Physics (3, Informative)

DFDumont (19326) | more than 6 years ago | (#21338411)

For those of you who either didn't take Physics, or slept through it, Watts and BTU's/hr are both measurements of POWER. Add up all the (input) wattages, and use something like http://www.onlineconversion.com/power.htm/ [onlineconversion.com] to convert. This site also has a conversion to 'tons of refrigeration' on that same page.
Also note - don't EVER use the rated wattage of a power supply, because that's what it can SUPPLY, not what it uses. Instead use the current draw multiplied by the voltage (US: 110 for single phase, 208 for dual phase in most commercial buildings, 220 only in homes or where you know that's the case). This is the 'VA' [volt-amps] unit. Use this number for 'watts' in the conversion to refrigeration needs.
Just FYI - a watt is defined as 'the power developed in a circuit by a current of one ampere flowing through a potential difference of one volt.' See http://www.siliconvalleypower.com/info/?doc=glossary/ [siliconvalleypower.com] , i.e. 1 W = 1 VA. The dirty little secret about power calculations is that there is another factor thrown in, typically about 0.65, called the 'power factor' that UPS and power supply manufacturers use to lower the overall wattage. That's why you always use VA (rather than the reported wattage): in a pinch you can always measure both voltage and amperage (under load).
Basically, do this - take the amperage draws for all the devices in your rack/room/data center, multiply each by the applied voltage for that device (110 or 208), and add all the products together. Then convert that number to tons of refrigeration. This is your minimum required cooling for a lights-out room. If you have people in the room, count 1100 BTU/hr for each person and add that to the requirements (after conversion to whatever unit you're working with). Some HVAC contractors want specifications in BTU/hr and others want them in tons. Don't forget lighting either, if it's not a 'lights out' operation. A 40 W fluorescent bulb is going to dissipate 40 W (as heat). You can use these numbers directly, as they are a measure of the actual heat thrown, not of the power used to light the bulb.
Make sense?

Dennis Dumont
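
The procedure above, as a small sketch in Python (the equipment list, head count and lighting figure are made-up placeholders; the 3.413 BTU/hr per watt, 1100 BTU/hr per person and 12,000 BTU/hr per ton figures are the ones given above):

    # (amps measured under load, supply voltage) for each device -- hypothetical values
    equipment = [(4.0, 208), (4.0, 208), (2.5, 110), (2.5, 110)]
    people = 2
    lighting_watts = 160                 # e.g. four 40 W fluorescent tubes

    va_total = sum(amps * volts for amps, volts in equipment)
    btu_per_hr = va_total * 3.413 + lighting_watts * 3.413 + people * 1100
    tons = btu_per_hr / 12000
    print(round(va_total), round(btu_per_hr), round(tons, 2))
    # ~2,214 VA, ~10,300 BTU/hr, ~0.86 tons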

Re:Physics (2, Informative)

timster (32400) | more than 6 years ago | (#21338987)

The dirty little secret about power calculations is that there is another factor thrown in, typically about 0.65, called the 'power factor' that UPS and power supply manufacturers use to lower the overall wattage.

It's not "thrown in" by the manufacturers. The dirty little secret is simply that you are talking about AC circuits. 1W = 1VA in AC circuits only if the volts and the amps are in phase -- which they aren't.

Take a sine wave -- in AC, that's what your voltage looks like, always changing. If you're powering something purely resistive like an incandescent bulb, your amps follow the same sine wave and 1W=1VA. But inductive loads like power supplies introduce a lag in the current, so that the amps aren't in phase with the volts. As a result, you cannot naively multiply the RMS volts by the RMS amps to get the average wattage -- you have to take the integral of volts times amps through the curve. And for part of that curve, the voltage and the current flow in different directions, which represents negative power (that is, the inductive circuitry is pushing current back across the wire). As a result of this the overall power will always be less than the volt-amps.
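
A numerical sketch of that integral, with made-up RMS values and a lag chosen so the power factor lands near the 0.65 mentioned above:

    import math

    V_RMS, I_RMS = 120.0, 10.0              # hypothetical RMS volts and amps
    LAG = math.radians(49.5)                # current lags voltage; cos(49.5 deg) ~ 0.65

    N = 100_000                             # samples over one full cycle
    real_power = 0.0
    for k in range(N):
        t = k / N
        v = math.sqrt(2) * V_RMS * math.sin(2 * math.pi * t)
        i = math.sqrt(2) * I_RMS * math.sin(2 * math.pi * t - LAG)
        real_power += v * i / N             # average of instantaneous power

    apparent_power = V_RMS * I_RMS          # the naive volt-amp product
    print(round(apparent_power), round(real_power), round(real_power / apparent_power, 2))
    # ~1200 VA, ~780 W, power factor ~0.65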

Re:How to estimate the cooling needs? (0)

Anonymous Coward | more than 6 years ago | (#21338689)

Just don't do like a client of mine. After numerous comments on their need for a locked server room, they 'made one' two years later by placing a lock on a closed office.
When I commented on the need for cooling, they pointed up at the air vent; I then pointed at the single temperature control for the entire floor, outside the office, coincidentally 4 feet from the door. They did not understand my concern (all three of them electrical engineers, I might add).
On my last visit I learned the server room door is mostly kept open, and I saw a gaping 1x1 foot hole cut in the door with a fan dangling in it. Working in there for more than 30 minutes grants 't-shirt permission'. Poor consultants in suits and ties!

Liquid cooling on a datacenter level? (0)

Anonymous Coward | more than 6 years ago | (#21337965)

I wish someone would come up with a failsafe design for liquid-cooled systems that wouldn't leak when something comes off: from fittings that can be yanked off without draining the system, to pipes which can be installed for the long haul and keep their flexibility over years, if not decades, even in UV-prone environments, to some type of monitoring and automatic shutoff of the core and system in case of a detected leak.

After that, some standard way of hooking up machines/blades on a rack so they all can be cooled via a central coolant system.

Voila. Problem solved. It would be trivial to have redundant cooling loops so if one failed, the rest of the data center would still be at an operational temperature.

Someone needs to chuck some R&D money at liquid cooling and get it out of the stone age. As of now, all it takes is one small crack in a hose and the whole machine is killed. Because of this, liquid-cooled PCs pretty much never have a useful life past 2-3 years before the cooling system has some type of critical (and messy) failure. If it's not a coolant leak, it's algae getting into the coolant, or corrosion on the fittings.

Damn dihydrogen monoxide (2, Funny)

wsanders (114993) | more than 6 years ago | (#21338163)

They should ban that stuff. (dhmo.org)

Re:Liquid cooling on a datacenter level? (0)

Anonymous Coward | more than 6 years ago | (#21338195)

It would be nice, but it's Not Possible without some amazing innovation. The biggest problem would be plugging in new equipment and winding up with a large air bubble as the air in the empty equipment is displaced by the fluid. Next up would be disconnecting a machine, taking the fluid inside of it out of the circuit. Locking connectors that automatically seal when released are possible, but then the problem is that you don't know if it's going to work until you've unlocked them, and then if it was broken, you end up with all your coolant on the floor (better to use manual valves). Finally, maintaining pressure throughout the datacenter will be a serious problem, likely requiring a large number of coolant loops to avoid spreading the pump power too thin.

And then your client shows up with custom hardware that expects the inlet and outlet to be on the opposite sides.

Re:Liquid cooling on a datacenter level? (1)

Smidge204 (605297) | more than 6 years ago | (#21338529)

The solution (no pun intended) is to use something other than water for coolant, such as a fluorocarbon liquid (not to be confused with ozone-depleting chlorofluorocarbons...)

LFCs are electrically and chemically inert, allowing for direct submersion cooling. If that is undesirable, a normal "water block" heat sink can be used and leaks are only a mess issue, not a functional one.

"best" would be, IMHO, a heat pipe design inside the server module that brings heat to a contact surface on the back, which then thermally connects to a cooling backplate of some kind. This eliminates any liquid inside the modules themselves and mitigates the leak risk completely.
=Smidge=

It could be done. (1)

Kadin2048 (468275) | more than 6 years ago | (#21338537)

You could do it, it's just probably more expensive than forced-air cooling.

What you probably would want to do is have a closed system that's actually inside the computer. Fill it with some sort of nonconductive/noncorrosive coolant that won't destroy the machine if it leaks (e.g. 3M Fluorinert), then have a cooling block on the back, away from the electronics, where you plug in the chilled water lines. If you don't daisy-chain, and instead run the water intake/exhaust lines from every machine directly to a central pump, and, more important than that, you have it driven by suction on the return side rather than positive pressure on the supply side, you could easily attach and detach machines without leaks. (Since in a datacenter a leak is probably more disastrous than a LoC to one server, suction is preferable to positive pressure.)

You'd disconnect the supply from a machine using a quick-release valve; then wait a second for the suction on the return side to pull the water out of the machine's cooling block and start sucking air. Then you'd disconnect the return side. This obviously means that you'd need a way of separating the air out of the return side before it hits the pump, but that's not exactly a unique engineering problem.

It's all doable, but the problems are the expense and the standardization. There's a major chicken-and-egg problem with equipment: you don't want to build a datacenter that can't use commodity equipment, but hardware manufacturers don't want to build gear that can't go into a standard air-cooled rack. So even though datacenters may be the biggest purchasers of racked servers (I'm not sure of that but I suspect they are, at least of some types), and datacenters might be better served by some sort of cooling besides forced-air, everybody gets the lowest common denominator.

Re:It could be done. (1)

ivan256 (17499) | more than 6 years ago | (#21338879)

In a closed loop, there is no difference between sucking on the return, and pumping material into the supply. Using the pump to assist draining as you describe would require one pump per server, as the coolant is preferentially going to come from the loops that are being backfilled by the pump itself, and not from the disconnected pipe.

Ideally, you would connect the systems with quick-releases that close off on both sides. That would prevent the need for draining the components inside the system *ever*, and it would also eliminate the need to purge the air from the system.

The reason these types of systems aren't used already is the same as the reason most datacenters aren't supplying our servers with DC power to prevent the loss of 20% of the energy before they even start processing data... There aren't any standards, and there is a lot of legacy equipment out there. We need Intel, and Sun, and a bunch of switch manufacturers to get together and come up with a standard for external DC power supplies and cooling connections. There also needs to be a low-cost, small-scale box that can produce what those connections need in a small setting for not much more money than the old way. Then over time we'll see more efficiency in this space.

You could do like my previous Director of IT did.. (1)

apparently (756613) | more than 6 years ago | (#21337979)

When systems start shutting down because the on-board temperature alarms trip, just disable the alarms.

Man, I wish I was making that up.

And the answer is: Liquid Nitrogen (2, Informative)

Bombula (670389) | more than 6 years ago | (#21338065)

Liquid nitrogen is the cooling answer, for sure. Then you're not dependent upon power of any kind at all. The nitrogen dissipates as it warms, just like how a pool stays cool on a hot day by 'sweating' through evaporation, and you just top up the tanks when you run low. It's cheap and it's simple. That's why critical cold storage applications like those in the biomedical industry don't use 'chillers' or refrigerators or anything like that. If you really want to put something on ice and keep it cold, you use liquid nitrogen.

Re:And the answer is: Liquid Nitrogen (0, Flamebait)

trolltalk.com (1108067) | more than 6 years ago | (#21338175)

Just don't do it in a closed room. And don't dip your finger in it to see if its "cool enough".

Re:And the answer is: Liquid Nitrogen (1)

The -e**(i*pi) (1150927) | more than 6 years ago | (#21338839)

You can stick your whole hand in it for about 1-2 seconds; 4 or more seconds may make it turn white, though warm water fixes that. I speak from personal experience with a dewar flask of liquid N2. Be careful, because those giant containers can freeze open and spray all the liquid nitrogen out; true story, and you should have seen the wrench we needed to shut off the valve. I never got to play with the liquid helium because it's too expensive.

Re:And the answer is: Liquid Nitrogen (1)

Radon360 (951529) | more than 6 years ago | (#21338549)

Seems that having an industrial-sized tank of LN2 outside the building for such a purpose might make sense as a rather inexpensive emergency backup cooling system. Diesel generators keep the server farm online while cool N2 gas (after evaporation) keeps the server room cool. Just keep the ventilation system balanced properly so that you don't displace the oxygen in the rest of the building, too.

And that brings up one caveat: you wouldn't have access to the cooled areas without supplied air while such a system is in operation (that's why it would be an emergency backup). Though you could get back in fairly quickly (a matter of minutes) once normal power is restored and the room is ventilated.

Re:And the answer is: Liquid Nitrogen (1)

everphilski (877346) | more than 6 years ago | (#21338591)

Liquid oxygen?

The boiloff is a little bit worse but the stuff is almost as cheap as dirt. A mix of LOX and NOX would be breathable and not risk explosion.

Re:And the answer is: Liquid Nitrogen (0)

Anonymous Coward | more than 6 years ago | (#21338765)

When you're just keeping a few dead cells in a four-foot-thick styrofoam bucket, then sure, liquid nitrogen is great. A server farm isn't just trying to hide from heat, it's making it, by the (bit)bucketload. For the actual anti-BTUs inside a bottle of nitrogen, you're paying a premium for the quality of the cold over a larger quantity of, say, ice. Without doing the actual numbers, ACs must still be more efficient, since they hardly go overboard to bring the temperature way down.
Still, nitrogen would make for a pretty cool looking backup system. Run the boiloff through a steam engine, and you could even get some power out of it!

Re:And the answer is: Liquid Nitrogen (2, Informative)

DerekLyons (302214) | more than 6 years ago | (#21338851)

Liquid nitrogen is the cooling answer, for sure. Then you're not dependent upon power of any kind at all.

Except of course the power needed to create the LN2.
 
 

That's why critical cold storage applications like those in the biomedical industry don't use 'chillers' or refrigerators or anything like that. If you really want to put something on ice and keep it cold, you use liquid nitrogen.

As above - how do you think they prevent the LN2 from evaporating? The LN2 is a buffer against loss of power, but typically they have a pretty serious cryocooler to keep the LN2 there when they do have power.

New cooling strategy needed? (5, Interesting)

MROD (101561) | more than 6 years ago | (#21338099)

I've never understood why data centre designers haven't used a cooling strategy other than re-circulated cooled air. After all, for much of the temperate latitudes, for much of the year, the external ambient temperature is at or below that needed for the data centre, so why not use conditioned external air to cool the equipment and then exhaust it (possibly with a heat exchanger to recover the heat for other uses, such as geothermal storage and use in winter)? (Oh, and have the air-flow fans on the UPS.)

The advantage of this is that even in the worst-case scenario, where the chillers fail totally during mid-summer, there is no run-away, closed-loop, self-reinforcing heat cycle; the data centre temperature will rise, but it would do so more slowly, and the maximum equilibrium temperature would be far lower (and dependent upon the external ambient temperature).

In fact, as part of the design for the cluster room in our new building I've specified such a system, though due to the maximum size of the ducting space available we can only use this for half the heat load.

Re:New cooling strategy needed? (1)

Frosty Piss (770223) | more than 6 years ago | (#21338287)

After all, for much of the temperate latitudes for much of the year the external ambient temperature is at or below that needed for the data centre so why not use conditioned external air to cool the equipment and then exhaust it...
The next major source of "global warming"...

Re:New cooling strategy needed? (1)

The -e**(i*pi) (1150927) | more than 6 years ago | (#21338901)

Air conditioners do the same thing; you didn't think they magically cooled the air, did you?

Re:New cooling strategy needed? (3, Informative)

afidel (530433) | more than 6 years ago | (#21338433)

The problem is humidity; a big part of what an AC system does is keep humidity in an acceptable range. If you were going to try once-through cooling with outside air, you'd spend MORE power during a significant percentage of the year in most climates trying to either humidify or dehumidify the incoming air.

Re:New cooling strategy needed? (1)

silas_moeckel (234313) | more than 6 years ago | (#21338539)

They do. Big data centers use glycol; when it's cool outside, the compressors turn off and they just run the fans. It's a bit more up front, but it has savings in areas where it gets below 45 on a regular basis. Another option is large blocks of ice with coolant running through them, to shift power consumption to the night and reduce the amount of energy required (it's cheaper to make ice at night), but only for smaller facilities; they leave a reserve capacity of x hours and/or go with N+1 setups.

Better yet a new power strategy (0)

Anonymous Coward | more than 6 years ago | (#21338647)

A better idea would be converting servers from AC to DC. The power supply probably generates 25-50% of the heat from a device/server. Wouldn't it make more sense for OEMs to start making devices that use DC directly, place one large transformer outside the datacenter, and run DC circuits to the racks? It might not eliminate the cooling requirements entirely, but even a reduction of 25% might go a long way.

No cooling unit UPS is not too unusual (1)

CambodiaSam (1153015) | more than 6 years ago | (#21338117)

After reading the articles linked from previous posts, it looks like the third outage was related to their cooling units not coming back online after the power outage caused by the semi vs. transformer battle. I know the units in our data center aren't hooked up to the UPS, but are instead wired directly to the generator in case of an outage. I believe this is due to the massive number of additional cells that would be needed to keep up with the wattage requirements. The theory is that if the power goes out, you can live without cooling for the couple of minutes while the generator pumps out its first giant plumes of black diesel smoke and revs up to max capacity. We had a similar unplanned test when the local grid had a brownout. Luckily, our units functioned as designed. I wonder if their earlier issues did more damage to the units than they would have expected...

For good reason: (0)

Anonymous Coward | more than 6 years ago | (#21338905)

Air handlers/chillers require an enormous amount of UPS capacity if they are fed from it, taking up valuable UPS capacity in an age where power availability is critical and it is much better served being reserved for server use, not mechanical loads. Further, putting the air handlers on the UPS introduces an extremely "noisy" electrical signature into an environment where clean power is important.

Wiring the air handlers from the backup/generator power (any serious data center has backup power) gives one a downtime of a minute or two until the generator(s) spool up, which should be well doable temperature-wise.

Beyond that (the land where Mr. Murphy plays his games, like having your one and only backup generator not start), additional precautions should be made available, such as multiple industrial fans in open doors, configured so they suck fresh cool air in and expel hot air out (same principle as cooling your computer case; you remember that fun, right?) for however long is required. This tactic alone has saved my (international, critical) data centers from being shut down on a couple of occasions, by keeping just-cool-enough air circulating long enough for the backup power issues to be solved and the air handlers to be back up and running.

Ironic advertisement (2, Funny)

davidwr (791652) | more than 6 years ago | (#21338171)

Ah, the dangers of context-sensitive advertising.

Ad on the main page [2mdn.net] when this article was at the top of the list.

Does "50% off setup" mean you'll only be set up halfway before they run out of A/C?

I've seen this a few times (1)

petes_PoV (912422) | more than 6 years ago | (#21338179)

If your data center's cooling system fails, how long do you have before your servers overheat?

The first occasion was over a weekend (no one present) in a server room full of VAXes. On the Monday when it was discovered, we just opened a window and everything carried on as usual.

The next time was when an ECL-model Amdahl was replaced by a CMOS IBM. No one downgraded the cooling and it froze up solid. This time the whole shebang was down for a day while the heat exchangers thawed out. It was quite interesting watching the temperature monitors; it took a couple of hours until the temperature rose above the "danger" threshold.

So the answer is: either until you arrive at work (2 days or more), or, sometimes, a bit more heat is a good thing.

Re:I've seen this a few times (1)

bstone (145356) | more than 6 years ago | (#21338445)

This problem has been around since the dawn of data centers. One bank in Chicago with IBM mainframes in the 60's had battery UPS + generators to back up the mainframes, an identical setup to back up the cooling system, plus one more identical backup system to cover failure in either of the other two.

huh huh. (1)

BUTT-H34D (840273) | more than 6 years ago | (#21338309)

Nice rack. Huh huh. Heh heh heh.

Lots of ice (0)

Anonymous Coward | more than 6 years ago | (#21338321)

Having a lot of ice on hand would be a good way to bridge the gap between when the power goes out and when your backup system gets running. Ice is relatively cheap to store once it's created. A company called Ice Bear used to make an air conditioner based on this principle.
http://www.news.com/Ice-powered-air-conditioner-could-cut-costs/2100-1008_3-6101045.html [news.com]

Just make sure your equipment doesn't get wet.

FANATICAL!!!! (0, Troll)

puterTerrorist (1133535) | more than 6 years ago | (#21338335)

I guess they do not have *FANATICAL* cooling systems ...

Funny you mention this (4, Interesting)

Leebert (1694) | more than 6 years ago | (#21338397)

A few weeks ago the A/C dropped out in one of our computer rooms. I like the resulting graph: http://leebert.org/tmp/SCADA_S100_10-3-07.JPG [leebert.org]

Re:Funny you mention this (1)

caluml (551744) | more than 6 years ago | (#21338797)

At 17:00 too - just when you're ready to head home.

Re:Funny you mention this (1)

milgr (726027) | more than 6 years ago | (#21338801)

That graph doesn't look bad. It indicates that the high temperature was 92F.
Where I work, the AC in one of the two main labs goes out from time to time. I have seen thermometers register 120F, and the computer equipment keeps running until someone notices and asks people to shut down equipment that is not currently needed.

One of the labs has exterior windows. Once, when the AC failed in the middle of the winter, they removed a pane of glass to help cool the lab (this kept the temperature in the low 90's with some equipment turned off). More recently they replaced the plate glass windows with sliding windows so it is easier to open them.

Did I mention that our equipment generates a lot of heat? A couple of years ago 1 fully populated frame required 5 tons of cooling. I think that newer equipment generates even more heat.

Well once option is to recycle heat (1)

Timberwolf0122 (872207) | more than 6 years ago | (#21338417)

In a previous /. article (Ancient fridge [slashdot.org]) we learned that a Stirling engine can run off excess heat, so why not power the cooling system with a Stirling engine?

The hot air from the cabinets could be pumped to the Stirling engine; the work extracted would lower the air temperature, and the air could then be pumped back to the rack.

Now, I realize that a Stirling engine might not be able to extract enough energy to cool a rack on an ongoing basis. During normal operation it could run in a supplementary capacity alongside conventional air conditioning, but in a power outage it could well buy the extra time needed to either get the chillers running or shut down the servers.

equipment heating is a bête noire for me (1)

br00tus (528477) | more than 6 years ago | (#21338425)

I have worked for over a decade as a sysadmin and have seen firsthand the correlation between temperature and server failure. I have witnessed two small server rooms melt down due to lack of A/C. It matters to me because I know high temperatures mean a greater likelihood that I will get a phone call in the middle of the night or on a weekend because a drive, processor or whatnot has failed in a machine.

One thing to consider is that if the temperature measured outside a box is high, the temperature at the surface of the processor is much higher. Little fans and heatsinks don't do much by themselves; remember, fans and heatsinks don't remove heat, they just move it elsewhere, and here the heat is being moved into an environment full of other boxes all trying to do the same thing.

In our current data center, run by a respected name, I have measured external temperatures in excess of 100 degrees Fahrenheit on some machines. Machines that run 24/7/365. We have small non-production rooms which have cheap fans that fill up with condensation, and a building staff which is supposed to empty the water when it fills up, but often doesn't.

Sometimes it gets kind of insane - I worked for a Fortune 100 financial company that had tons of money, and had a data center with Sun Enterprise 4000 series servers all over the place - yet the server room was above room temperature, and even more so in certain areas. We had disk and processor/memory board failures all the time, but they never really cared about the room temperature - they spent more time making sure the insides of the fibre optic cables were clean.

I have always brought up my concerns, but management has never really taken them seriously, and then I become overloaded with other work and forget about it as well. The ideal temperature for servers is a few degrees above 0 Celsius, or even below 0 depending on the equipment. Meanwhile, if you find a server room where the temperature is below 20 degrees Celsius, you're lucky. It's just one of those things where it is cheaper and easier for them to waste my time than to fix the problem.

Short-cycling protection (5, Interesting)

Animats (122034) | more than 6 years ago | (#21338447)

Most large refrigeration compressors have "short-cycling protection". The compressor motor is overloaded during startup and needs time to cool, so there's a timer that enforces a minimum time between two compressor starts. 4 minutes is a typical delay for a large unit. If you don't have this delay, compressor motors burn out.

Some fancy short-cycling protection timers have backup power, so the "start to start" time is measured even through power failures. But that's rare. Here's a typical short-cycling timer. [ssac.com] For the ones that don't, like that one, a power failure restarts the timer, so you have to wait out the delay after a power glitch.

The timers with backup power, or even the old style ones with a motor and cam-operated switch, allow a quick restart after a power failure if the compressor was already running. Once. If there's a second power failure, the compressor has to wait out the time delay.

So it's important to ensure that a data center's chillers have time delay units that measure true start-to-start time, or you take a cooling outage of several minutes on any short power drop. And, after a power failure and transfer to emergency generators, don't go back to commercial power until enough time has elapsed for the short-cycling protection timers to time out. This last appears to be where Rackspace failed.

Dealing with sequential power failures is tough. That's what took down that big data center in SF a few months ago.
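
A minimal sketch of that lockout behavior in Python (purely illustrative; real anti-short-cycle timers are simple analog or cam-and-motor devices, but the logic is the same):

    import time

    class ShortCycleGuard:
        """Enforce a minimum interval between compressor starts (4 min is typical)."""
        def __init__(self, min_start_interval_s=240):
            self.min_interval = min_start_interval_s
            self.last_start = None

        def try_start(self):
            now = time.monotonic()
            if self.last_start is not None and now - self.last_start < self.min_interval:
                return False                 # locked out: wait out the timer
            self.last_start = now
            return True                      # OK to energize the compressor

        def on_power_restored(self):
            # A timer with no backup power forgets when the compressor last started,
            # so it restarts the full delay on power-up, causing the several-minute
            # cooling gap described above. A timer with backup power keeps last_start.
            self.last_start = time.monotonic()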

Re:Short-cycling protection (1)

corsec67 (627446) | more than 6 years ago | (#21338709)

Easy way to test that:

Have a 2 year old play with the main power switches...

AC's use Power??? (1)

spiedrazer (555388) | more than 6 years ago | (#21338599)

I fail to see how this could be news to anyone who works with data centers. If you want your datacenter to operate during a power outage, you need a generator with enough capacity for your servers/network and your cooling. If a fancy hosting site with SLAs making uptime guarantees doesn't understand this, I think their customers should start looking elsewhere.

Re:AC's use Power??? (1)

bizitch (546406) | more than 6 years ago | (#21338713)

That's the first thing that came to mind for me as well.

What? No freaking generator? Umm... wtf?

It's not a matter of IF you lose power, just WHEN you lose power.

It will happen. I guarantee it.

Cooling is usually the achilles heel... (0)

Anonymous Coward | more than 6 years ago | (#21338629)

Cooling is usually the achilles heel of many data centers.

It takes so much power to run the air conditioners that many data centers I've been into don't even put them on their backup generators at all. There is no way the air conditioners are on battery backups either, so when the power does go out, they are off for at least the time it takes to start the generator and get it warmed up. (a minute or two at least)

All it takes is a couple minutes for the temperature of an entire data center to rise to a point where it takes hours to get it back down to normal levels. If the power cycles even a couple times you need to start thinking about which servers to turn off. Sometimes companies will put a "minimal" number of air conditioners on the generator, but they often fail to account for the increasing number of servers, so when the power does finally go out, they can't keep up anyways.

When I worked at one of the top-tier hosting providers, we had industrial fans stored in a closet, and when the power went out (a few times per year at least) alarms would go off and the entire support/NOC department would spring into action like a well-oiled machine: dig out the fans, set up extension cords, and start taking the front/back doors off every cabinet to improve cooling and keep the servers from cooking themselves. They usually cooked anyway - it wasn't uncommon for staff to burn themselves on the cases during periods like this. I was always amazed at the temperatures the servers continued to run at, though. I can also recall times when the entire office heated up to 90+ degrees, since that was the only place the fans could blow the heat when you're in an office tower.

This isn't rocket science (0)

Theovon (109752) | more than 6 years ago | (#21338685)

Ok, what do I know about cooling, right? For ages, Intel processors have had a facility to protect against overheating should the CPU fan fall off or whatever. When the temperature gets too high, the CPU is made to sleep for periods of time long enough to keep it cool. The key point is that the system keeps running, just more slowly. Now, why can't data centers employ something like that? Servers that sleep to keep cool, plus ventilation that simply circulates outside air. Server response is slower, but nothing goes down.
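
For what it's worth, the duty-cycle idea amounts to something like this sketch (Python; the thresholds and the read_cpu_temp()/serve_one_request() hooks are made-up placeholders, not a real platform interface):

import time

THROTTLE_TEMP_C = 85.0   # assumed: start inserting idle time above this
RESUME_TEMP_C = 75.0     # assumed: resume full speed below this

def read_cpu_temp():
    """Placeholder: read the die/inlet temperature from the platform."""
    raise NotImplementedError

def serve_one_request():
    """Placeholder: one unit of real work."""
    raise NotImplementedError

def throttled_service_loop():
    # Trade throughput for temperature: the box gets slower, not shut down.
    while True:
        serve_one_request()
        if read_cpu_temp() > THROTTLE_TEMP_C:
            while read_cpu_temp() > RESUME_TEMP_C:
                time.sleep(0.5)   # idle until we cool back down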

Backup Generator? (1)

dbdavids (1188501) | more than 6 years ago | (#21338711)

The real solution here is not really a multi-hour UPS for cooling and power; it's an emergency generator. Have the generator auto-start after 5 minutes and have 10 minutes of UPS time on the equipment. Generally I have found there is no need to put the cooling unit on a UPS.
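
Back-of-the-envelope, that sizing leaves a few minutes to spare (a sketch; the 5- and 10-minute figures are the ones above, the transfer allowance is an assumption):

generator_start_delay_min = 5    # auto-start delay suggested above
ups_runtime_min = 10             # UPS runtime on the IT load
transfer_allowance_min = 2       # assumed time to stabilise and transfer load

margin_min = ups_runtime_min - (generator_start_delay_min + transfer_allowance_min)
print("UPS ride-through margin:", margin_min, "minutes")   # 3 minutes to spare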

Situate stores on top of underground datacentres? (0)

Anonymous Coward | more than 6 years ago | (#21338763)

Just a random thought -

obviously any cooling system has to move large amounts of heat somewhere. Would it be an idea to use multi-storey underground stacks of data centres, and have stores on top that would be heated by the waste heat?

I was about to say housing, but given the outcry over electromagnetic sensitivity and whatever, it might be more palatable if people don't actually sleep above the datacentres. Or you could just wrap them in tin foil.

Highlights Serious Flaw - Neglecting Outside (3, Insightful)

Ron Bennett (14590) | more than 6 years ago | (#21338859)

While many here are discussing UPSes, chillers, set-points, etc., the most serious flaw is being glossed over ... the lack of redundancy outside the data center, such as multiple, diverse power lines coming in...

From the articles, it appears that the Rackspace datacenter doesn't have multiple power lines coming in, and/or they all come in via one feed point.

How else could a car crash quite some distance from the datacenter cause such disruption? Does anyone even plan for such events? I get the feeling most planners don't, since I've seen first-hand many power failures in places where one would expect more redundancy, caused by dumb things like a vehicle hitting a utility pole.

Ron

Maxwell's data center (2, Funny)

techpawn (969834) | more than 6 years ago | (#21338871)

We've summoned a small demon to let in cool air particles and shunt out hot ones. Sure, the weekly sacrifice gets to be a pain after a while, but there's always a pool of willing interns, right?

Cooling answer... (0)

Anonymous Coward | more than 6 years ago | (#21338965)

Your answer to cooling is a little bit of cool air and a whole lot of air flow. Pipe hot air away from the servers and get it out of the room; if you replace it with 70°F air, that is good enough. As a rule of thumb, replace the room's entire volume of air once every 2 hours for one server, subtract 10 minutes for each additional server, and cap the interval at 10 minutes (so by about a dozen servers you're turning the room's air over every 10 minutes, and 20 servers is still every 10 minutes - all the servers beyond that will be fine). To move that much air you should look into massive blowers: nearly an entire wall becomes a "warm" air return (you want that as close to the servers as possible), and cold air is pumped through a false floor (commonly used in server rooms and data centers) up through the racks.
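
Taking that rule of thumb literally, it works out like this (a sketch; the rule is the one above, not an engineering standard):

def air_change_interval_minutes(num_servers):
    # 120 minutes for one server, minus 10 minutes per additional server,
    # never faster than one full air change every 10 minutes.
    return max(10, 120 - 10 * (num_servers - 1))

for n in (1, 6, 12, 20):
    print(n, "servers ->", air_change_interval_minutes(n), "minutes per full air change")
# 1 -> 120, 6 -> 70, 12 -> 10, 20 -> 10 (capped)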

I have been in data centers in Chicago where cardboard was not allowed because it gets sucked against the wall. It is very noisy and I felt like I was in a wind tunnel. But there were several hundred servers, network devices, and blinky light things that I have never seen before in cages that I wasn't allowed in. Air flow is the key to rack cooling. Maybe not to that extreme though.

computers convert 100% electricity to heat (2, Insightful)

mwilliamson (672411) | more than 6 years ago | (#21338967)

Every single watt consumed by a computer is turned into heat and generally released out the back of the case. A computer behaves the same as the coil of nichrome wire used in a laundromat clothes dryer. (I guess a few milliwatts get out of your cold room via ethernet cables and photons on fiber.)
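
Which is why the cooling load can be sized straight off the electrical load; a quick conversion sketch (standard conversion factors, not figures from the article):

def heat_load(watts):
    btu_per_hr = watts * 3.412            # 1 W dissipated = ~3.412 BTU/hr of heat
    cooling_tons = btu_per_hr / 12000.0   # 1 ton of cooling = 12,000 BTU/hr
    return btu_per_hr, cooling_tons

btu, tons = heat_load(5000)               # the 5 kW-per-cabinet figure from the summary
print(round(btu), "BTU/hr, about", round(tons, 2), "tons of cooling per cabinet")
# ~17060 BTU/hr, ~1.42 tons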

5kw? ow. (2, Insightful)

MattW (97290) | more than 6 years ago | (#21339005)

5 kilowatts is a heck of a lot to have on a single rack - assuming you're actually using it. I recently interviewed half a dozen data centers to plan a 20-odd server deployment, and we ended up using 2 cabinets to ensure our heat dissipation was sufficient. Since data centers usually supply 20 amp, 110 or 120 V power, you get 2200-2400 watts available per drop, although it's considered a bad idea to draw more than 15 amps continuously per circuit. We have redundant power supplies in everything, so we keep ourselves at 37.5% of capacity on the drops, and each device is fed from a 20 amp drop coming from a distinct data center PDU. That way, even if one of the data center PDUs implodes, we're still up and running at no more than 75% of capacity.

Almost no data center we spoke to would commit to cooling more than 4800 watts per rack at an absolute maximum, and those were facilities with hot/cold row setups to maximize airflow. But that meant they didn't want to provide more than 2 x 20 amp power drops, plus 2 x 20 for backup, if you agreed to maintain 50% utilization across all 4 drops. And since you'd really want to stay at or below 75% even in the case of failure, you'd only be using 3600 watts. (In the facility we ended up in, we have a total of 6 x 20 amp drops, and we only actually utilize ~4700 watts.)
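
The arithmetic behind those numbers, as a sketch (circuit sizes are the ones above; the 80% continuous-load derating is the usual electrical rule of thumb):

VOLTS = 120
AMPS = 20

nameplate_w = VOLTS * AMPS                 # 2400 W per 20 A / 120 V drop
continuous_limit_w = nameplate_w * 0.80    # stay under ~16 A continuous per circuit
normal_target_w = nameplate_w * 0.375      # 37.5% per drop with redundant feeds...
after_pdu_failure_w = normal_target_w * 2  # ...so losing one PDU leaves the survivor at 75%

print(nameplate_w, continuous_limit_w, normal_target_w, after_pdu_failure_w)
# 2400 W nameplate, 1920 W continuous, 900 W normal draw, 1800 W after a failure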

Ultimately, though, the important thing is that cooling systems should be on generator/battery backup power. Otherwise, as this notes, your battery backup won't be useful.