
Green Grid Argues That Data Centers Can Lose the Chillers

Soulskill posted about 2 years ago | from the have-they-checked-the-lost-and-found dept.


Nerval's Lobster writes "The Green Grid, a nonprofit organization dedicated to making IT infrastructures and data centers more energy-efficient, is making the case that data center operators are operating their facilities in too conservative a fashion. Rather than rely on mechanical chillers, it argues in a new white paper (PDF), data centers can reduce power consumption by running with inlet temperatures above 20 degrees C. Green Grid originally recommended that data center operators build to the ASHRAE A2 specifications: 10 to 35 degrees C (dry-bulb temperature) and between 20 and 80 percent humidity. But the paper also presented data that a range of between 20 and 35 degrees C was acceptable. Data centers have traditionally included chillers, mechanical cooling devices designed to lower the inlet temperature. Cooling the air, according to what the paper called anecdotal evidence, lowered the number of server failures that a data center experienced each year. But chilling the air also added cost, and PUE numbers would go up as a result."
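For reference, PUE (Power Usage Effectiveness) is total facility power divided by IT equipment power, so every kilowatt the chillers draw pushes it further above the ideal of 1.0. A minimal sketch with made-up numbers:

    def pue(it_load_kw, cooling_kw, other_overhead_kw=0.0):
        """Power Usage Effectiveness = total facility power / IT equipment power."""
        return (it_load_kw + cooling_kw + other_overhead_kw) / it_load_kw

    # Hypothetical 1 MW IT load: mechanical chillers vs. mostly free cooling.
    print(pue(1000, cooling_kw=450, other_overhead_kw=100))  # ~1.55 with chillers
    print(pue(1000, cooling_kw=120, other_overhead_kw=100))  # ~1.22 with mostly free cooling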


Translate this to (-1)

Anonymous Coward | about 2 years ago | (#41783905)

Tree huggers telling an IT manager it's OK for his servers to burn up to save a baby seal.

Re:Translate this to (1)

Synerg1y (2169962) | about 2 years ago | (#41784075)

Can the baby seals insulate the data center?

Re:Translate this to (5, Interesting)

icebike (68054) | about 2 years ago | (#41784097)

Tree huggers telling an IT manager it's OK for his servers to burn up to save a baby seal.

Well, Google has already started running their data center much warmer than many data centers of the past, apparently with no ill effect.

It has nothing to do with hugging trees, simply hard-nosed economics. If 5 degrees induces 3 more motherboard failures in X number of months, and you already have the fail-over problem handled, it only takes a few seconds on a handheld calculator to figure out that trees have nothing to do with it.

The rules were written, as the article explains, based on little if any real-world data, designed for equipment that no longer exists, built with technology long since obsolete. It was probably never justified, and even if it was back in the 70s and 80s, it isn't any more.

Google and Amazon and others have carefully measured real-world data taken from bazillions of machines in hundreds of data centers. They know how to do the math.
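The "few seconds on a handheld calculator" described above might look like the sketch below; every number in it (failure counts, board cost, energy prices) is a placeholder assumption, not data from the white paper or from Google.

    # Back-of-envelope: does raising the inlet temperature pay for the extra failures?
    # All figures are illustrative assumptions.
    extra_failures_per_year = 3          # additional motherboard failures from running warmer
    cost_per_failure_usd = 400           # replacement board plus a technician's time
    cooling_energy_saved_kwh = 250_000   # annual chiller energy avoided
    electricity_price_usd_per_kwh = 0.10

    extra_failure_cost = extra_failures_per_year * cost_per_failure_usd
    cooling_savings = cooling_energy_saved_kwh * electricity_price_usd_per_kwh

    print(f"Extra failures cost ${extra_failure_cost:,.0f}/yr, cooling saves ${cooling_savings:,.0f}/yr")
    print("Raise the setpoint" if cooling_savings > extra_failure_cost else "Keep the chillers")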

Re:Translate this to (4, Interesting)

ShanghaiBill (739463) | about 2 years ago | (#41784349)

Well, Google has already started running their data center much warmer than many data centers of the past, apparently with no ill effect.

This is an understatement. Google increased the temp in their data centers after discovering that servers in areas with higher temps had fewer hard errors. So they went with higher temps across the board, saved tons of money on lower utility bills, and have fewer hard errors.

Back in the 1950s, early computers used vacuum tubes, which failed often and were difficult to replace. So data centers were kept very cool. Since then, data centers have continued to be aggressively cooled out of tradition and superstition, with little or no hard data to show that it is necessary or even helpful.

Re:Translate this to (1)

Anonymous Coward | about 2 years ago | (#41785239)

I have been running several passively cooled (heatsink-only) servers for 5+ years at ambient temps that get as high as 85 F during the day while the AC is off and I'm at work. IMO, money is better spent on lower-TDP components. Generally for server CPUs you have a choice of a more expensive, lower-TDP CPU vs. a cheaper, higher-TDP CPU of similar performance. The lower-TDP CPU will use less electricity and generate less heat, which amounts to less cooling, so you theoretically cover the extra cost over time.

Additionally, the lower TDP means you have less likelihood of heat-related failure. In my own limited experience, if you have something that gets so hot that it needs a fan to stay alive, then that moving part is what is going to fail before any solid-state component fails, and thus you will lose more components to fan failures than anything else. Therefore it's better to get a CPU that has a low enough TDP to allow passive cooling. Usually I do have case fans in place, but no single fan is responsible for a single component, so if one fails the others will keep things from dying until it is replaced.

Additionally, you don't have a crisis when the chiller dies and you find that recent maintenance has left the backup in a state where it is not operable, or some such nonsense (I've personally witnessed this kind of thing on more than one occasion). I am a programmer, so I am usually just an observer to these things in production environments.

Low-TDP CPUs + SSDs are the way to go IMO. After accounting for electricity/cooling savings, I believe an SSD is on par with or cheaper than a 15K RPM HDD. Most scenarios will not reach the write limits of the SSD in 5 years (theoretically; time will tell on SSD reliability, as right now anecdotal evidence seems to suggest they have a high rate of DOA, but that may be due to poor QA).
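The payback reasoning in the two paragraphs above can be sanity-checked with a short script; the price premiums, wattages, and the rough rule that each watt of IT load drags along about half a watt of cooling load are all illustrative assumptions.

    # Does a pricier low-TDP CPU (or an SSD vs. a 15K HDD) pay for itself in power + cooling?
    # Every figure here is an assumption chosen for illustration.
    def payback_years(price_premium_usd, watts_saved, usd_per_kwh=0.10, cooling_overhead=0.5):
        # Each watt saved at the component also avoids ~cooling_overhead watts of cooling load.
        effective_watts = watts_saved * (1 + cooling_overhead)
        annual_savings = effective_watts / 1000 * 24 * 365 * usd_per_kwh
        return price_premium_usd / annual_savings

    print(f"Low-TDP CPU: {payback_years(price_premium_usd=120, watts_saved=30):.1f} years to break even")
    print(f"SSD vs 15K HDD: {payback_years(price_premium_usd=80, watts_saved=10):.1f} years to break even")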

*Pretty isolated anecdotal account, but I speculate that a setup that eliminates component-specific fans (i.e., CPU fans) and focuses on lower-TDP, passively cooled components will suffer fewer failures, reduce electricity and cooling costs, and decrease the risks posed by fan failures, chiller failures, heat-related failures, or mechanical failures (HDDs). Fewer failures and fewer risks will save man-hours replacing hardware or planning for those contingencies.*

I really wish there were some flexible heatpipes on the retail market (they do exist, but only when ordered in large quantities), so we could have large heatsink plates on the side of the server chassis and attach the flexible heatpipes to transfer heat from internal components to the external heatsink. Similar to some of the custom-built cases out there which use standard, custom-made heatpipes to create silent PCs, where the entire outside of the case is metal with large fins (usually used in audio studios). Those cases, though, are built for specific components that fit the heatpipe configuration. Flexible heatpipes would fit into current cases/form factors with only the addition of the external heatsink. Additionally, several units could share a large heatsink, which would be more effective, since the area of several smaller separate heatsinks would be wasted given that several servers at any given time could be idle. I don't know how reliable heatpipes are, though. If there was an internal DOA defect (maybe the fluid leaked out), it won't be as obvious without doing testing on them, compared to a fan that simply doesn't spin.

Consolidated redundant power supplies would be another thing that would reduce heat, but these are super expensive and usually specialized to specific brands of servers and chassis. I don't know if it's an economy-of-scale thing, or economy of it's-for-enterprise-and-there-is-no-standard-so-you-won't-find-a-comparable-option-so-we'll-make-as-much-profit-as-we-can-get-away-with. Not that I object to that, but it usually puts them out of reach of all but the very largest enterprises.

Re:Translate this to (1)

adolf (21054) | about 2 years ago | (#41787009)

I have been running several passively cooled (heatsink-only) servers for 5+ years at ambient temps that get as high as 85 F during the day while the AC is off and I'm at work. IMO, money is better spent on lower-TDP components. Generally for server CPUs you have a choice of a more expensive, lower-TDP CPU vs. a cheaper, higher-TDP CPU of similar performance.

My own experience with mostly-passively-cooled modern PCs is that while temperatures within remain low enough that everything continues to work fine on a hot day (if I switch off the window AC when I'm out for the day, things can exceed 110F ambient inside), there are localized failures of capacitors.

Specifically, the hottest caps fail first. Cooler caps fail later, or not at all, or are shown to be visibly in a lesser state of failure.

The hot caps are right next to and/or above the passive heatsink. The cooler caps are a little farther away, and/or lower (in terms of gravity).

In many cases, these capacitors are identical and wired in parallel.

So, electrically, things are exactly the same. The only difference is temperature.

Just throwing that out there. (I try to keep cooling and airflow to a minimum to reduce noise.)

Re:Translate this to (0)

Anonymous Coward | about 2 years ago | (#41787327)

Why not ask the engineers who designed the chips? They have quite a bit of data on how their transistors and wires behave at specified temperatures. It's only the latest (untested) materials that have unknowns. This trial-and-error approach is like playing darts in a dark room: you may hit the target once in a while, but your overall accuracy will suck.

Re:Translate this to (1)

theendlessnow (516149) | about 2 years ago | (#41786053)

Well... yes and no. Can you build servers that can take the heat? Sure. But that's not what most datacenters have. Sure, processors and maybe (and it's a big maybe) memory can take the heat... but in general, those 15K RPM disk drives are not going to like the extra heat. They have enough problems dissipating heat as it is.

So... possible, sure. But it does require some extra work. Your off-the-shelf HP, Dell or IBM? I wouldn't recommend it.

You do lower the lifespan of the equipment by placing it under enormous heat stress... life could be reduced by several years if you assume a 10-year lifespan. If you have a 5-year life cycle, you may have to consider a 4-year life cycle. And even then, I'd avoid the 15K RPM drives and other things that aren't cooled very well (e.g. gobs of memory and even chipsets on some designs). That passive heat sink on your Fibre Channel card and/or 10Gbit Ethernet card... probably not going to cut it anymore... so those are also changes you'd have to make... there are many.

Again, you CAN do it... but design has to be done... it's done with intent, not through random experimentation (unless you have money to burn).

Well Sure! (0, Funny)

Anonymous Coward | about 2 years ago | (#41783927)

Of course they are wasting energy keeping it that cold. I am surprised the servers aren't frozen keeping it below 32 like that!

65 to 70 is plenty at my data center.

Re:Well Sure! (1)

tempest69 (572798) | about 2 years ago | (#41783965)

I think 298 degrees would be more reasonable. No reason to risk superfluidity.

Too hot. (0)

Anonymous Coward | about 2 years ago | (#41783949)

Yeah right, I'm not running a data center at 35 degrees C. People do have to go inside there and they shouldn't have to die from heat stroke. And, it would probably heat up any other rooms/offices it's next to.

Re:Too hot. (1)

oodaloop (1229816) | about 2 years ago | (#41784021)

And, it would probably heat up any other rooms/offices it's next to.

If only there were some unused chillers that could be used to cool the air next to the server room.

Re:Too hot. (2, Informative)

Anonymous Coward | about 2 years ago | (#41784101)

They aren't going to die of heatstroke in 95 degrees. Drama queen much?

Re:Too hot. (0)

Anonymous Coward | about 2 years ago | (#41790009)

Different people have different tolerances to heat. Usually there is only one tech person in the data center so if the person collapses then no one is going to know for hours.

Oversimplification much?

Re:Too hot. (1)

Anonymous Coward | about 2 years ago | (#41784495)

Agreed. I'm not spending two hours moving, upgrading, whatever in a 35C/95F room.

Are building owners really overheating? (1)

Meshach (578918) | about 2 years ago | (#41783975)

If the owners of the building could run cooler I would think they would. Heat is expensive and building owners are cheap; if it is possible to spend less I would think that owners would.

Re:Are building owners really overheating? (2)

bill_mcgonigle (4333) | about 2 years ago | (#41784247)

If the owners of the building could run cooler I would think they would.

Have a look here [datacenterknowledge.com] for more background.

Basically, they're describing four types of data centers. Have you seen the Google data centers with their heat curtains and all that? I surely don't work in any of those types of data centers. Some of the fancier ones around here have hot/cold aisles, but the majority are just machines in racks, sometimes with sides, stuck in a room with A/C. Fortunately it's more split systems than window units these days!

The conventional wisdom was that AC is cheaper than downtime/hardware so they told the building owner what to run the temperature at and they paid for it. Some of those assumptions are now being challenged.

I do dig energy-efficient IT - I focus on this whenever I spec gear - but many people just 'go big', 'go cheap', or 'go IBM' (for various values of 'IBM'). Focusing on operating heat is an after-the-fact approach when you have the opportunity to cut down on heat generation in the first place (freebie: do you put SSDs in front of your big drives to keep them cooler?)

With that said, there's one very good reason to run a cold room: power failures. I typically see places with decent to nice UPS units, but the A/C units are almost never on battery backup, and generators are too rare (even when they're there, they're rarely sized for or connected to the A/C). A data room can get hot in a hurry without A/C, and if you're running at 65, you get to 95 much more slowly than you do when you're running at 82. Yeah, if you're a government contractor you just buy a CAT diesel and go about your day, but for many businesses the monthly cost of A/C is weighed against the purchase of a generator able to sustain those kinds of conditions.

Re:Are building owners really overheating? (2)

DarthBart (640519) | about 2 years ago | (#41785891)

A data room can get hot in a hurry without A/C, and if you're running at 65, you get to 95 much more slowly than you do when you're running at 82.

That really depends on the size of your datacenter and your server load. If you've got a huge room with one rack in the middle, you're good to go. If you've got a 10x10 room with 2 or 3 loaded racks and your chiller goes tits up, you're going to be roasting hardware in a few short minutes. Some quick back-of-the-napkin calculations show that a 10x10x8 room with a single rack pulling all the juice it can from a 20 amp circuit will raise the temperature in the room about 10 degrees every 2 minutes. From 82 to 95 is about 3 minutes, from 65 to 95 is about 6.
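Those napkin numbers can be reproduced approximately from the heat capacity of the air alone; the sketch below ignores the thermal mass of the racks and walls, which is why real rooms warm somewhat more slowly than this worst case.

    # Worst-case temperature rise of a sealed 10x10x8 ft room heated by one ~2 kW rack,
    # counting only the air's thermal mass (equipment and walls would slow this down).
    room_volume_m3 = 10 * 10 * 8 * 0.0283168   # 800 ft^3 in m^3
    air_mass_kg = room_volume_m3 * 1.2          # air density ~1.2 kg/m^3
    heat_capacity_j_per_k = air_mass_kg * 1005  # cp of air ~1005 J/(kg*K)

    rack_power_w = 120 * 20 * 0.8               # 20 A circuit at 120 V, assumed ~80% loaded
    k_per_second = rack_power_w / heat_capacity_j_per_k
    f_per_minute = k_per_second * 60 * 9 / 5

    print(f"~{f_per_minute:.0f} F/min of rise with no cooling at all")
    print(f"82F -> 95F in ~{13 / f_per_minute:.1f} min, 65F -> 95F in ~{30 / f_per_minute:.1f} min")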

Striking a balance... (0)

Anonymous Coward | about 2 years ago | (#41783993)

There is a flip side to the coin: higher inlet temperatures can cause higher leakage current, resulting in lower efficiency, so some extra electricity will be lost to this effect. Also, in such conditions thermal throttling can occur, reducing performance and particularly performance per watt, causing more energy to be required for the same amount of work. Finally, there is the question of longevity: higher temperatures cause component failure ahead of expectations.

A way of getting a similar energy benefit without the risk would be something like what SuperMUC (http://en.wikipedia.org/wiki/SuperMUC) does: run water directly to the components. The problem is that the upfront cost is generally not worth it except in places where energy costs are high enough to recoup it.

Re:Striking a balance... (1)

icebike (68054) | about 2 years ago | (#41784203)

Component failures in a world designed for fail-over are no big deal. So what if your $80 CPU halts 6 additional times per CPU-year? Toss that puppy in the scrap bin and slap in a new one. Commodity components used in massively parallel installations have different economics than the million-dollar central processors used in the 80s.

Many have solved this another (better) way. (1)

Anonymous Coward | about 2 years ago | (#41784017)

We looked for where the fiber map ran over a mountain range and was near a hydroelectric plant. Our data center is cooled without chillers, simply by outside airflow, 6 months of the year, and with only a few hours' use of chillers per day for another 3 months. I know this won't help people running a DC in Guam, but for those who have a choice, location makes a world of difference.

Late to the Party (1)

kgeiger (1339271) | about 2 years ago | (#41784113)

The November 2012 issue of Wired covers "hot" machine rooms in its paean to Google's data centers. Usually, by the time they've picked up a story, it's done.

Does it make a difference? (1)

godrik (1287354) | about 2 years ago | (#41784115)

I am bad at physics, so I might say something stupid. But does it actually make a difference? I feel like the temperature of the hot components is WAY over 20C. So whatever energy they output is what you need to compensate for. In the steady state you need to cool as much as they heat. Isn't that constant whatever temperature the datacenter is run at?

Re:Does it make a difference? (2)

Jiro (131519) | about 2 years ago | (#41784545)

Imagine that you used no cooling at all. The components wouldn't get infinitely hot; they'd get very hot, but the hotter they get the more readily the heat would escape, until they reach some steady state where they're hot enough that the heat escapes fast enough that it doesn't get any hotter.

So technically you're correct--a steady state always means that exactly the same amount of energy is being added and removed at the same time--but using cooling will allow this steady state to exist at lower temperatures where the natural escape of heat isn't so efficient.
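A toy model of that steady state: if heat leaks out roughly in proportion to the inside/outside temperature difference, the room settles wherever leakage balances the IT load, and cooling just moves that balance point down. The load and conductance figures below are arbitrary assumptions.

    # Toy equilibrium model: heat leaks out proportionally to (T_inside - T_outside).
    # With no cooling, the room settles where leakage equals the IT load.
    it_load_w = 20_000            # assumed server heat output
    leak_w_per_degree = 400       # assumed conductance of walls/ducts, W per degree C
    t_outside_c = 25

    t_equilibrium = t_outside_c + it_load_w / leak_w_per_degree
    print(f"Uncooled steady state: ~{t_equilibrium:.0f} C")   # hot, but finite

    # Active cooling lowers the balance point: remove 15 kW mechanically and the
    # passive leakage only has to carry the remaining 5 kW.
    t_with_cooling = t_outside_c + (it_load_w - 15_000) / leak_w_per_degree
    print(f"With 15 kW of cooling: ~{t_with_cooling:.0f} C")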

Re:Does it make a difference? (1)

SJHillman (1966756) | about 2 years ago | (#41784595)

Our server room is typically kept at 74 to 76 degrees. We've had a few close calls over the summer where the ambient temp got above 84 and some of the machines just up and froze or shut down (mostly the older gear... newer stuff does seem to handle heat better). As the room temp rises, the internal temperatures rise too - some processors were reporting temps near the boiling point.

Re:Does it make a difference? (2)

Cramer (69040) | about 2 years ago | (#41785787)

Yes and no. If the room is properly insulated, any heat generated in the room will have to be forcefully removed. At some point, the room will reach equilibrium -- heat will escape at the rate it's generated, but it will be EXTREMELY hot in there by then. Rate of thermal transfer is dependent on the difference in temperature; the larger the difference, the faster energy transfers. Raising the temp of the room will lead to higher equipment temps; until you do it, you won't know if you've made the difference better (wider) or worse (narrower).

The key finding from Google's research was that temperature stability was the most important factor. Fluctuating temperatures are very hard on machines -- esp. hard drives.

I once worked in an office building where management would shut off the HVAC in the evenings and all day on weekends... it would be 100F+ in there by Sunday evening (over 120 on 100-degree days). Then they had tons of heat to dump come Monday morning, and all the while they were destroying every piece of electronics in the building. (Net cooling costs: they saved very little. Add in replacing everything that got damaged, and it cost them money.)

Carnot cycle means hotter exchanges work quicker (0)

Anonymous Coward | about 2 years ago | (#41787909)

Therefore, if you run hotter, cooling that hotter air (or extracting work from it) is more efficient.

Heat your company's hot water tank from the hot air from the server room and you save energy twice.
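To make the Carnot point concrete: an ideal chiller's coefficient of performance is T_cold / (T_hot - T_cold) in kelvin, so lifting heat from a warmer room to the same outdoor temperature takes less work per joule removed. A sketch of the ideal case only (real chillers reach some fraction of it):

    # Ideal (Carnot) coefficient of performance for a chiller rejecting heat outdoors.
    # Real machines reach maybe half of this, but the trend is the same.
    def carnot_cop(t_cold_c, t_hot_c):
        t_cold_k = t_cold_c + 273.15
        t_hot_k = t_hot_c + 273.15
        return t_cold_k / (t_hot_k - t_cold_k)

    outdoor_c = 35
    for room_c in (18, 27):
        print(f"Room at {room_c} C, rejecting to {outdoor_c} C: ideal COP = {carnot_cop(room_c, outdoor_c):.1f}")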

Re:Does it make a difference? (0)

Anonymous Coward | about 2 years ago | (#41786493)

Yes. The efficiency of compressor chillers changes with DeltaT. A LOT.

Re:Does it make a difference? (0)

Anonymous Coward | about 2 years ago | (#41791331)

Yeah, but it also takes energy to circulate the glycol or air, so you can have a situation where below a certain DeltaT your pumps/fans will start to require more additional power than you are saving in the compressor.

Also if there is anything "legacy" in the glycol or air pipes/ducts that was designed for a large DeltaT (think small diameter pipes or too-restrictive heat exchangers), then you will just have more problems with pumping.

I have set up heat pumps to pull heat out of a server room and dump it into the building. Up here in the Great White North this is economical. It works even better if it's a room where power-hungry machines (not just computers) and humans have to coexist by allowing for heat transfer between rooms without a DeltaT. I'm not a computer person, I'm a mechanical engineer though.

What data centers did these guys look at? (4, Informative)

Chris Mattern (191822) | about 2 years ago | (#41784123)

I've been an operator and sysadmin for many years now, and I've seen this experiment done involuntarily a lot of times, in several different data centers. Trust me, even if you accept 35 C, the temperature goes well beyond that in a big hurry when the chillers cut out.

Re:What data centers did these guys look at? (1)

geekoid (135745) | about 2 years ago | (#41784489)

"the temperature goes well beyond that in a big hurry when the chillers cut out."
AND?
alternative:
SO?

If it's below 35 C outside, why wouldn't you just pump that air in (through filters)?

Your situation is most likely a room that is sealed to keep cool air in, so it traps the heat in. If the systems could run at 35 C, you would have windows. Worst case, open some windows and put a fan in.
Computers can run a lot hotter than they could three decades ago.

Re:What data centers did these guys look at? (1)

Cramer (69040) | about 2 years ago | (#41785909)

A) Temperature STABILITY!
B) Humidity.

The room is sealed and managed by precision cooling equipment because we want a precisely controlled, stable environment. As long as the setpoint is within human comfort, the exact point is less important than keeping it at that point! Google's data has shown, *for them*, 80F is the optimal point for hardware longevity. (I've not seen anywhere that it's made a dent in their cooling bill.)

Re:What data centers did these guys look at? (1)

Burdell (228580) | about 2 years ago | (#41786753)

I know some people that have tried to work out filtration systems that can handle the volume of air needed for a moderate size data center (so that outside air could be circulated rather than cooling and recirculating the inside air), and it quickly became as big of an expense as just running the A/C. Most data centers are in cities (because that's where the communications infrastructure, operators, and customers are), and city air is dirty.

Re:What data centers did these guys look at? (0)

Anonymous Coward | about 2 years ago | (#41787313)

and city air is dirty.

In Los Angeles, it's downright filthy.

Re:What data centers did these guys look at? (1)

amorsen (7485) | about 2 years ago | (#41784497)

Trust me, even if you accept 35 C, the temperature goes well beyond that in a big hurry when the chillers cut out.

Only because the chillers going out kills the ventilation at the same time. THAT is unhealthy. Cooling a datacenter through radiation is adventurous.

Re:What data centers did these guys look at? (0)

Anonymous Coward | about 2 years ago | (#41785291)

I think the idea is that you don't use a chiller at all, and just have blowers, with maybe some humidity conditioning. Probably a lot cheaper to have redundant blowers than redundant chillers, and probably a lot more reliable, simply because a blower is much simpler than a chiller.

This was actually suggested in some standards if the outside air temp was below a certain threshold. However, some big-name data center operators didn't want those specifics in the standard, because they wanted the freedom to evaluate whether it was more or less efficient to condition the outside air (i.e. humidity + filtering) than to just recycle and cool the already-conditioned inside air. It was some article on Slashdot a ways back. So, kind of inconclusive on what would be best.

Re:What data centers did these guys look at? (2)

Cramer (69040) | about 2 years ago | (#41786001)

In my experience, blowers/fans fail more often than compressors and pumps. :-) (Blowers run constantly; compressors shouldn't.)

The preference in data center cooling is/has been to use "free cooling" through water/glycol loops when the outside air is cold enough to handle heat rejection on its own. Otherwise, compressors are used to push heat into the same loop. It's becoming more trendy to place data centers in cooler climates where compressors are never needed; then stability can be maintained by precise mixing of inside and outside air. (Some systems bring outside air into the room; others have air-to-air heat exchangers so outside air doesn't enter the room... less work to scrub that air.)
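That free-cooling preference reduces to simple control logic: run the outdoor loop alone when it can meet the supply setpoint, and stage in compressors when it can't. The sketch below is a simplified illustration; the setpoints and thresholds are arbitrary assumptions, not from any standard.

    # Simplified economizer logic: prefer "free cooling" when the outside air (or the
    # glycol loop it feeds) is cold enough; otherwise run the compressors.
    def cooling_mode(outside_c, supply_setpoint_c=24.0, approach_c=3.0):
        if outside_c <= supply_setpoint_c - approach_c:
            return "free-cooling"    # outdoor loop alone can hit the setpoint
        elif outside_c <= supply_setpoint_c + 5.0:
            return "partial"         # economizer assists, compressors trim the rest
        else:
            return "mechanical"      # compressors carry the full load

    for t in (5, 18, 23, 32):
        print(f"Outside {t:>2} C -> {cooling_mode(t)} mode")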

For electronic components, heat == death (3, Informative)

Miamicanes (730264) | about 2 years ago | (#41784145)

Heat is death to computer hardware. Maybe not instantly, but it definitely causes premature failure. Just look at electrolytic capacitors, to name one painfully obvious component that fails with horrifying regularity in modern hardware. Fifteen years ago, capacitors were made with bogus electrolyte and failed prematurely. Some apparently still do, but the bigger problem NOW is that lots of items are built with nominally good electrolytic capacitors that fail within a few months, precisely when their official datasheet says they will. A given electrolytic capacitor might have a design half-life of 3-5 years at temperatures of X degrees, but be expected to have 50/50 odds of failing at any time after 6-9 months when used at temperatures at or exceeding X+20 degrees. Guess what temperature modern hardware (especially cheap hardware with every possible component cost reduced by value engineering) operates at? X+Y, where Y >= 20.
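The rule of thumb behind that claim is the "10-degree rule" found on electrolytic capacitor datasheets: rated life roughly halves for every 10 C above the rated temperature. A quick sketch (the rated-life and temperature figures are typical datasheet values, not measurements of any particular part):

    # Electrolytic capacitor "10-degree rule": life roughly halves for every 10 C
    # above the rated temperature. The rated life below is a typical datasheet number.
    def cap_life_hours(rated_life_h, rated_temp_c, actual_temp_c):
        return rated_life_h * 2 ** ((rated_temp_c - actual_temp_c) / 10.0)

    rated = 2000        # hours at 105 C, a common rating for inexpensive parts
    for hotspot_c in (65, 85, 95):
        years = cap_life_hours(rated, 105, hotspot_c) / (24 * 365)
        print(f"Hotspot {hotspot_c} C: ~{years:.1f} years of continuous operation")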

Heat also does nasty things to semiconductors. A modern integrated circuit often has transistors whose junctions are literally just a few atoms wide (18 is the number I've seen tossed around a lot). In durability terms, ICs from the 1980s were metaphorically constructed from the paper used to make brown paper shopping bags, and 21st-century semiconductors are made from a single layer of 2-ply toilet paper that's also wet, has holes punched into it, and is held under tension. Heat stresses these already-stressed semiconductors out even more, and like electrolytic capacitors, it causes them to begin failing in months rather than years.

Re:For electronic components, heat == death (1)

geekoid (135745) | about 2 years ago | (#41784543)

Define "a lot"? Because I don't see it a lot; I've never read a report that would use the term "a lot".

And you are using 'heat' in the most stupid way. Temperatures over a certain level will cause electronics to wear out faster, or even break. Not 'heat is bad'.
Stupid post.

Re:For electronic components, heat == death (1)

Miamicanes (730264) | about 2 years ago | (#41784589)

The article's central argument is that data centers can be run at higher temperatures. I'm pointing out that if you run your data center at higher temperatures to save on your energy costs, much or all of those savings could end up getting neutralized by premature equipment failure, and the cost of mitigating it.

Re:For electronic components, heat == death (4, Insightful)

ShanghaiBill (739463) | about 2 years ago | (#41785565)

The article's central argument is that data centers can be run at higher temperatures. I'm pointing out that if you run your data center at higher temperatures to save on your energy costs, much or all of those savings could end up getting neutralized by premature equipment failure, and the cost of mitigating it.

Yet when Google analyzed data from 100,000 servers, they found failures were negatively correlated with temperature. As long as they kept the temp in spec, they had fewer hard errors at the high end of the operating temperature range. That is why they run "hot" data centers today.

I'll take Google's hard data over your gut feeling.

Re:For electronic components, heat == death (1)

tlhIngan (30335) | about 2 years ago | (#41804379)

The article's central argument is that data centers can be run at higher temperatures. I'm pointing out that if you run your data center at higher temperatures to save on your energy costs, much or all of those savings could end up getting neutralized by premature equipment failure, and the cost of mitigating it.

Yet when Google analyzed data from 100,000 servers, they found failures were negatively correlated with temperature. As long as they kept the temp in spec, they had fewer hard errors at the high end of the operating temperature range. That is why they run "hot" data centers today.

Well, other than hard drives, I think most servers probably have a life of around 3-odd years before they're replaced with gear under a new support contract, at which point, even if the server was able to last 10 years and you shortened that to 5, it would be replaced before it dies.

Sure that oddball PC you have in the corner that's got a decade's worth of dust on it probably wouldn't have lasted as long, but it's probably good practice to figure out what it did and replace it.

Re:For electronic components, heat == death (0)

Anonymous Coward | about 2 years ago | (#41786349)

Not exactly, it's more accurate to say its central argument is that accepting a higher failure rate is cheaper than the cooling expense.

Far too many "green" articles tend, for me, to deprioritize the value of making things actually work well and reliably. Sort of like the emphasis on ultra-efficient toilets or even composting toilets (ick).

Re:For electronic components, heat == death (1)

StoneyMahoney (1488261) | about 2 years ago | (#41802379)

You completely missed the point and obviously didn't RTFA. The empirical evidence shows that datacentres can be run warmer than they typically are now with an acceptable increase in hardware failures - i.e. bugger all. Increasing the temp in a massive datacentre by 5 degrees C will save a bundle of money/carbon emissions that far more than offsets the cost of replacing an extra component or two a month.

As impressive as your assertions are, they are just that - assertions. Reality disagrees with you.

Silly Environmentalist.. (3, Insightful)

Severus Snape (2376318) | about 2 years ago | (#41784157)

Yes, it's generally in the nature of these companies to spend unneeded money. They hire people whose exact job is to make data centers as efficient as possible. Even to the extent that Facebook and others are open-sourcing their information to try and get others involved in improving data center design. I say "generally" as I'm sure most have seen the story on here recently about Microsoft wasting energy to meet a contract target; that, however, is a totally different kettle of fish.

Re:Silly Environmentalist.. (2)

10101001 10101001 (732688) | about 2 years ago | (#41784547)

Explain to me, again, why Facebook isn't dumping tons of money into a one-time investment into making Linux power management not suck? Or other companies, for that matter? Right, because it's an "accepted fact" that data centers must run at very high capacity all the time and power management efforts would hinder availability. And I presume this is *after* they dumped the money into Linux power management and saw it work out to be a colossal failure? Well, that's possible--they might have never bothered releasing their work. On the other hand, it could just be they haven't bothered because their IT folk know the gospel well enough and aren't given the leeway to experiment on mission critical systems and they don't really have a whole spare data center to play around with.

Of course, there's always Google and the question of why they haven't bothered, but then they may just not care--there's the story of their early development going as far as basically not bothering to find, disconnect, and repair broken servers, since the rerouting software was good enough and the effort to fix servers was more than it was worth (i.e., it was easier to just add another server to the growth list).

In short, the idea that companies in general are always aware of, let alone capable of making, the best choices is silly, given the clear counter-examples. Penny-pincher management will likely eventually push data centers to the breaking point, and then you'll likely only learn how well the hardware handles graceful degradation of performance as it gets hotter, not so much what's viable and economical.

I work on telecom servers (0)

Anonymous Coward | about 2 years ago | (#41784765)

And our customers (the telcos and enterprise) don't care enough about power savings for our management to pay me to work on it.

So our systems run with C-states disabled and no frequency/voltage stepping when idle.

Re:Silly Environmentalist.. (0)

Anonymous Coward | about 2 years ago | (#41789227)

Because Google is running SERVERS, not home PCs. As someone who has worked in hardware: server boards are on almost all the time, but they do have excellent power shutdown/sleep. They respond just fine to Linux power-saving modes. I don't know what you're trying to point out here, but you are just flat wrong.

Re:Silly Environmentalist.. (1)

geekoid (135745) | about 2 years ago | (#41784553)

I like that you assume corporations run everything perfectly and never make a mistake, or continue to do something based on an assumption.

It would be adorable if it wasn't so damn stupid.

Cui bono? (3, Insightful)

J'raxis (248192) | about 2 years ago | (#41784827)

The board of directors [wikipedia.org] of the "Green Grid" is composed almost entirely of companies that would benefit if data centers had to buy more computing hardware more frequently, rather than continuing to pay for cooling equipment.

Re:Cui bono? (1)

PPH (736903) | about 2 years ago | (#41785281)

More green in their pockets.

Re:Cui bono? (0)

Anonymous Coward | about 2 years ago | (#41785497)

That's an ad hominem; there's no particular reason to believe they're giving bad advice in this regard, as having equipment fail prematurely would make it seem as if they're selling shoddy equipment. And if failures start occurring more frequently after the switch, I'm sure they'd be the ones blamed as well.

Re:Cui bono? (0)

Anonymous Coward | about 2 years ago | (#41786227)

Do you really think that greed has to make sense? People let their greed overrule their logic all the time.

A Couple Degrees Warmer - Electronics Like Cold (1)

jollyrgr3 (1025506) | about 2 years ago | (#41785103)

You can go a couple degrees warmer than in the "old days" (ten years ago). Things like bearings in fans and drives will fail. Capacitors will fail. Data centers produce LOTS of heat. I don't believe that the coin counters figured in the staff to replace the failed parts or the extra staff and time needed when manual procedures are used due to a downed system.

Re:A Couple Degrees Warmer - Electronics Like Cold (0)

Anonymous Coward | about 2 years ago | (#41787209)

"My superstitions trump Google's data." --jollygrr3

Ignorant Article (1)

detain (687995) | about 2 years ago | (#41785877)

Computers crash/fail when overheating, and in a datacenter that can happen very fast. You absolutely must keep the temperatures from getting too hot. Some datacenters can get away with minimal cooling. Some datacenters need chillers and tons of money invested in keeping things at a low enough temperature that computers won't randomly lock up on you from the heat. There must be some datacenters that have too much cooling, but to say that datacenters in general don't need chillers demonstrates a lack of understanding of what a datacenter is: they are not all the same size, nor is the hardware in them the same, or all generating the same predictable temperatures.