
Open source data center? (3, Insightful)

El_Muerte_TDS (592157) | about 3 years ago | (#35843462)

So, where can I download the blueprint of it?

Those sure are (1)

Anonymous Coward | about 3 years ago | (#35843488)

some hardcore great systems

Not quite. (4, Interesting)

Anonymous Coward | about 3 years ago | (#35843538)

No, it's not the most energy-efficient in the world. The numbers they published were only from a VERY limited timespan during the coldest part of the year when energy needed for cooling would be drastically lowered.

If they published a full-year figure, I can guarantee you it wouldn't be nearly as good as the published one.

Re:Not quite. (1)

Anonymous Coward | about 3 years ago | (#35843790)

Agreed. And what about being environmentally friendly? There sure are a lot of filters that would need to be replaced on a regular basis; I know this because I'm a refrigeration/air-conditioning technician. I'd like to learn more about how 'environmentally efficient' they are, not just energy-efficient.

Re:Not quite. (1)

aztracker1 (702135) | about 3 years ago | (#35845046)

I think this would depend entirely on the materials used for the air filters and the source(s) of electricity used... I don't consider outright disposal, or landfilling, of paper products harmful to the environment... trees are a renewable resource.

Re:Not quite. (1)

Midnight Thunder (17205) | about 3 years ago | (#35847404)

You might well be right, but wouldn't that be all the more reason for more environmentally friendly designs to be made public? It would be cool to see companies out-competing each other over improved environmental designs for data centers.

BTW, can you point to some existing data centers that are likely to out-compete it already? I would be curious to see their designs.

Re:Not quite. (0)

Anonymous Coward | about 3 years ago | (#35847942)

You are one of those assholes who work for Google and feel threatened because everything your company does is a proprietary piece of shit, including Android 3.0 and all your secret datacenters. right? Well, fuck you and fuck Google as well.

Re:Not quite. (1)

QuantumRiff (120817) | about 3 years ago | (#35849570)

Prineville, OR is high-altitude desert, so even when it gets warm in the summer, the nights are usually cool. There is a very short timespan when it is hot both night and day in that part of the state.

Open Source (1)

The O Rly Factor (1977536) | about 3 years ago | (#35843552)

Seems like it is becoming the next big buzzword for MBAs to throw around. "Yeah Bill, our new data and commerce center is leveraging the open source capabilities of the cloud to make sure our crowd sourced ROI brings back the best managed results we can get with today's scalability and reliability of "Echs Eight Six" platform development systems...at least that's what this whitepaper in front of me says."

Re:Open Source (1)

marcello_dl (667940) | about 3 years ago | (#35843592)

I don't think the decision to open up the plans is buzzword compliance, though. They probably have more practical reasons like getting feedback.

Re:Open Source (3, Insightful)

MoonBuggy (611105) | about 3 years ago | (#35843618)

One of the reasons this will probably work well is that Facebook has the advantage of not being in the data centre business per se. They use data centres, sure, but they don't sell them, which means they have nothing to fear from competitors copying their good ideas. Major hosting companies would quite likely be more reluctant to say "here's our great new idea, any ideas on how to improve it?", because they have much more to lose if someone comes along, works out an improvement, and then implements it for themselves rather than passing it back to the community.

Re:Open Source (1)

fuzzyfuzzyfungus (1223518) | about 3 years ago | (#35843716)

It's pretty much a classic application of the major business case for "open source" anything (the second, typically less significant, one being to assure a customer or customers that you aren't locking them in, if they are large enough, or the business competitive enough, to demand that): commoditizing your complements.

When a hardware vendor gets all "open source", that is usually a sign that they want a cheap OS and/or some flavor of middleware that can sit between their hardware and their consulting services or genuinely competitively distinguished software (see IBM's use of Linux, or Oracle's, at least before they purchased Sun).

In this case, Facebook's competitive advantage is some combination of their software platform and the network effects of their existing userbase and giant pile-o'-user data. Any requirement that gets in the way of shoving ads and Zynga games down users' gullets is just a cost center, and thus a perfect area to try to share R&D costs in (and possibly put the screws to some of the weaker datacenter operators who are trying to go it alone on high-efficiency designs).

Now (just to avoid misinterpretation), the fact that there is a perfectly pragmatic logic behind this doesn't make its value as 'open source' any lower (as long as it isn't some sort of tricky quasi-open thing, which is tried from time to time, though not obviously so here). In fact, that pragmatic utility is one of 'open source's greatest allies in getting more stuff to be available under those terms; but it is a purely pragmatic thing.

Re:Open Source (1)

Glendale2x (210533) | about 3 years ago | (#35845694)

They also have a fixed hardware platform; not so with a colocation datacenter, where the operator needs to accommodate a wide mixture of equipment from unrelated vendors that customers bring in.

One thing I found interesting that seems to be popular with new facilities like this one is omitting the clean-agent fire suppression systems that used to be all the rage. Specifically, it says:

4.10 Fire Alarm and Protection System
  Pre-action fire sprinkler system uses nitrogen gas in lieu of compressed air to eliminate pipe corrosion.
  Online nitrogen generator.
  A VESDA air sampling system is provided for early detection for fire/smoke detection.

For those who don't know, a pre-action system is a water system except that the pipes are charged with compressed air (or, in this case, nitrogen) with an interlock system that requires smoke detection followed by loss of pressurization before releasing water. The benefit is preventing accidental discharges; if someone breaks a head without the smoke detection, the loss of pressure will be seen as a trouble event and water will not be released into the pipes. If you get a smoke detect event and *then* a head fuses, water is released. It's all still a fancy water system, though.
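The two-step interlock described above is easy to get wrong in prose, so here is a minimal sketch of the logic in Python (class and method names are illustrative, not from any real fire-panel API):

```python
class PreActionSystem:
    """Toy model of a pre-action sprinkler interlock: water is released
    only after BOTH a smoke detection AND a loss of pipe pressure."""

    def __init__(self):
        self.smoke_detected = False
        self.pipe_pressurized = True   # pipes charged with nitrogen
        self.water_released = False
        self.trouble_alarm = False

    def on_smoke_detect(self):
        self.smoke_detected = True
        self._evaluate()

    def on_pressure_loss(self):        # e.g. a head fuses or is broken
        self.pipe_pressurized = False
        self._evaluate()

    def _evaluate(self):
        if not self.pipe_pressurized and not self.smoke_detected:
            # Broken head with no smoke: trouble event, no discharge.
            self.trouble_alarm = True
        elif not self.pipe_pressurized and self.smoke_detected:
            # Both conditions met: charge the pipes with water.
            self.water_released = True
```

An accidental head break alone only raises a trouble alarm; smoke detection followed by a fused head releases water.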

Re:Open Source (1)

zero0ne (1309517) | about 3 years ago | (#35845744)

So on top of destroying hardware, most of which is not replaceable under warranty (I doubt Dell's 4-hour business service accommodates water damage), they are trying to put out electrical fires with water. I think we can assume the majority of fires in a data center will be electrical.

Re:Open Source (1)

qubezz (520511) | about 3 years ago | (#35846176)

They are pumping tons of air through the facility. Lowering O2 levels with a nitrogen or halon system would be pretty ineffective. A chemical bottle extinguisher is pictured in one photo; manually extinguishing local fires (like if an AC panel goes boom) would probably be expected. The sprinkler system is probably code-mandated to keep the whole facility from going up in flames in a disaster.

Re:Open Source (1)

bill_mcgonigle (4333) | about 3 years ago | (#35848042)

One thing I found interesting that seems to be popular with new facilities like this one is omitting the clean agent fire suppression systems that used to be all the rage.

New data architectures make this possible. Facebook can lose a room full of equipment and not lose any significant data. It's probably cheaper to replace a room full of commodity servers than to maintain halon systems everywhere.

If I recall their replication correctly, if a sprinkler system took out a room full of servers, the data layer would increase redundancy of that data automatically (not really knowing why the servers went offline).



PUE tricks (4, Interesting)

Anonymous Coward | about 3 years ago | (#35843612)

Several large data center operators are trying to win this "most efficient" title and putting lots of innovation and resources into it. However, you need to be very careful when comparing the outcomes. First, notice the actual claim: "most efficient". The Facebook data center consumes water to reduce energy needs. This can be a very dangerous practice if followed on a large scale. Consider the recent annexation of a US government site in Utah in order to get priority water service. http://www.datacenterknowledge.com/archives/2011/03/09/annexation-boosts-cooling-for-nsa-data-center/

So far, anyone attempting to lay claim to the "most efficient" title has moved things out of the cooling column, sometimes into the computer load column (most notably fans), sometimes over to water consumption. Yes, you get a better efficiency rating if you consume more power in non-cooling areas.
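For context, PUE (power usage effectiveness) is simply total facility power divided by IT equipment power, which is why this column-shuffling works; a toy illustration with made-up numbers:

```python
def pue(it_load_kw, overhead_kw):
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return (it_load_kw + overhead_kw) / it_load_kw

# The same 1,300 kW facility, with 100 kW of server fans booked two ways:
as_cooling = pue(it_load_kw=1000, overhead_kw=300)  # fans counted as overhead
as_it_load = pue(it_load_kw=1100, overhead_kw=200)  # fans counted as IT load
```

Identical total draw, but the second accounting choice reports a noticeably lower (better) PUE.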

The quote that came up when this was discussed is so true:
The trouble with the rat-race is that even if you win, you're still a rat. -- Lily Tomlin

Re:PUE tricks (1)

jonbryce (703250) | about 3 years ago | (#35843750)

Water can still be used, for example, to irrigate crops after it has been used for cooling. You can use non-drinking water such as seawater for cooling, though you might want to keep that a bit further away from the computers than you would pure water, perhaps as a secondary heat exchange. And there are parts of the world with plenty of water, such as Canada, Scotland and Scandinavia.

Re:PUE tricks (3, Informative)

CastrTroy (595695) | about 3 years ago | (#35844268)

Yeah, living in Canada, I always laugh when they talk about water conservation. Not that we should waste it, but we have much bigger environmental problems to worry about. If there were anywhere near a shortage, it wouldn't cost only a couple of bucks for a cubic meter (1000 L). Most of the problems with water in this world are a distribution problem, not a supply problem. And water is quite expensive to transport. It's not like you can dehydrate the water to bring the weight down.

Re:PUE tricks (1)

fuzzyfuzzyfungus (1223518) | about 3 years ago | (#35844984)

Actually, if anybody is interested, I have a large supply of dehydrated Water Ready to Drink(WRD) packets available. These things are the perfect complement to MREs, and great in all sorts of emergency situations.

Re:PUE tricks (1)

aztracker1 (702135) | about 3 years ago | (#35845110)

I live in Arizona, and I still don't get the argument... if you are using well water, or are otherwise isolated, sure... but in general water evaporates, becomes clouds, and rains/snows back down again... Used for cooling, like in nuclear plants, it isn't tainted, lost or otherwise removed from the planet, just evaporated. Now, split for hydrogen production, that's a slightly different story.

Re:PUE tricks (1)

Midnight Thunder (17205) | about 3 years ago | (#35847460)

The problem is the water doesn't necessarily come down in the same place. This means whatever you are taking out of the ground is not necessarily being replaced at the same rate. This is even more true when the number of people exploiting it increases. There are plenty of cases where the level of the water table has dropped from over-exploitation.

Re:PUE tricks (0)

Anonymous Coward | about 3 years ago | (#35846296)

It's not conceptually hard to bring the weight down.

Step 1. Remove the oxygen atoms.
Step 2. Stuff the hydrogen in a zeppelin.
Step 3. Float it to where you want it reconstituted.
Step 4. Light a cigar in celebration of your feat.
Step 5. ???
Step 6. Oh, the humanity!

Re:PUE tricks (1)

Cramer (69040) | more than 2 years ago | (#35886528)

Not the way they're doing it. Evaporative cooling vaporizes the water. Once it's vapor, it's hard to drink, irrigate, etc. with it. You'll have to wait for it to condense back into liquid (a la rain, snow, sleet, etc.) before it's "usable" again. As others point out, that's very rarely a closed system -- the rain comes down hundreds of miles away.

The way many (some?) office buildings are cooled, on the other hand, does not vaporize the water. It takes water from the muni supply, runs it through a coil, and returns it (a bit hotter) to the muni supply. That works rather well for an office building. The heat load in a datacenter is way too high for that -- it takes too high a volume and would vent much higher temp water than most cities will accept.
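The scale of the water loss is easy to estimate from the latent heat of vaporization of water (about 2.26 MJ/kg); a back-of-the-envelope sketch that assumes, as an idealization, that evaporation does all of the heat rejection:

```python
LATENT_HEAT_J_PER_KG = 2.26e6  # latent heat of vaporization of water, J/kg

def water_kg_per_hour(heat_load_w):
    """Water evaporated per hour (kg) to reject a given heat load (watts),
    assuming all heat leaves via evaporation (a best-case idealization)."""
    return heat_load_w * 3600 / LATENT_HEAT_J_PER_KG
```

A 1 MW IT load works out to roughly 1,600 kg (about 1,600 liters) of water per hour under this idealization; real systems evaporate more.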

Re:PUE tricks (3, Insightful)

fuzzyfuzzyfungus (1223518) | about 3 years ago | (#35843784)

Depending on how local power is generated, the power consumption column may or may not be hiding a substantial amount of water use as well (the giant cooling towers, billowing clouds of steam that weren't successfully recovered, are not there for show, nor are many sorts of power plants sited next to bodies of water because operators like paying more for picturesque real estate with flood risks). Some flavors of mining are pretty nasty about using, and/or filling with delightsome heavy metals and such, water resources as well.

I don't mean to detract from your eminently valid point that there are a lot of accounting shell games, in addition to actual engineering, going on when being "efficient"; but it is really necessary to decompose all the columns in order to figure out what is hiding under each of the shells (and how nasty each something is).

Swamp coolers are a great way (in non-humid areas) to reduce A/C costs and increase clean freshwater use. Whether or not that is a better choice than using more energy strongly depends on how your friendly local energy producer is producing. Odds are that they are consuming cooling water (or, for hydropower, water with potential energy); but it isn't always clear how much.

Regardless, though, what you Really, Really, Really want to avoid is situations where archaic, weak, nonsensical, and/or outright corrupt regulatory environments allow people to shove major costs under somebody else's rug. Why do they grow crops in the California desert? Because the 'market price', such as it is, of water sucked from surrounding states is virtually zero. Why are there 20-odd water-bottling operations in Florida, a state barely above sea level and with minimal water resources? Because the cost of a license to pump alarming amounts (you guys weren't using those everglades for anything, right?) of water is basically zero (unless you are a resident, of course; they face water shortages. Try incorporating next time, sucker). Similar arguments could be made that energy users in a number of locales are paying absurdly low rates for the Appalachian coal regions being turned into a lunar theme park, among other possibilities.

Playing around with 'efficiency' numbers is a silly game; but largely harmless PR puffery. Making resource tradeoffs that are sensible simply because they allow you to shove major costs onto other people at no cost to yourself is all kinds of serious.

Re:PUE tricks (0)

Anonymous Coward | about 3 years ago | (#35845314)

"The Facebook data center consumes water to reduce energy needs.". This is true, but producing energy consumes a surprising amount of water (http://pubs.usgs.gov/circ/2004/circ1268/htdocs/table13.html) . Therefore, if you can consume water to reduce your electrical consumption, the net effect can be less water consumption (http://www.nrel.gov/docs/fy04osti/33905.pdf).

Jets of cooling water (0)

Anonymous Coward | about 3 years ago | (#35843786)

Jets of cooling water, can anyone explain?

wtf. [flickr.com]

I know ICs are pretty much water-resistant by the way they are designed... but wouldn't humidity be a bad thing in a data center?

Re:Jets of cooling water (0)

Anonymous Coward | about 3 years ago | (#35844418)

You want *some* humidity in your air to prevent static buildup and subsequent discharge.

In this case though they're using water to cool the air.

Coal fired power is not a good idea (2)

mauriceh (3721) | about 3 years ago | (#35843798)

This is a site in a location where most of the power is from coal fired generators.
Totally lacking in foresight.
What happens when coal generation is banned in a few years?

Re:Coal fired power is not a good idea (1)

the linux geek (799780) | about 3 years ago | (#35843874)

Nobody is voting to ban coal power production any time in the next thirty years, due to the annoying fact that it would result in a total collapse of the United States economy.

Re:Coal fired power is not a good idea (1)

CastrTroy (595695) | about 3 years ago | (#35844272)

Ontario recently made a decision to shut down all the coal power plants. They are phasing them out. There are better sources of electricity out there.

Re:Coal fired power is not a good idea (1)

aztracker1 (702135) | about 3 years ago | (#35845140)

I live in the southwestern U.S. so most of the power here is hydro, solar, or nuclear... I really don't get the use of coal, which is terribly inefficient for power generation.

Re:Coal fired power is not a good idea (0)

Anonymous Coward | more than 2 years ago | (#35877316)

Except they've kept delaying the shutdown date every year for a while now. Then they will transition to burning renewables (see: wood!).

Ontario will need to build additional nuclear plants to accommodate future growth, but not many (maybe the 4 reactors that were planned, but all bids were rejected due to costs). A new project is planned now.

These additional plants are required to meet Ontario's future needs without the use of fossil fuels, especially as electricity becomes more useful for transportation (electric cars).

Re:Coal fired power is not a good idea (1)

Eric(b0mb)Dennis (629047) | about 3 years ago | (#35845074)

I seriously doubt anyone is going to ban coal power production!

I'd wager a good 70% of us could go look at our energy bill right now and see that just wouldn't be possible.

Personally, I get about 88% of my energy from coal.. and that's in the California valley.

Ewww, commodity (2)

the linux geek (799780) | about 3 years ago | (#35843860)

I have to wonder how much power that would use if it ran on mainframes or large UNIX servers rather than unreliable and relatively slow clusters of small machines. It's strange that none of the "new generation" websites are choosing to go with bigger systems, despite the fact that they tend to do better on both raw performance and performance-per-watt.

Re:Ewww, commodity (0)

Anonymous Coward | about 3 years ago | (#35844180)

you forget this thing called PRICE

Re:Ewww, commodity (1)

Anonymous Coward | about 3 years ago | (#35844248)

I think it's a problem of culture. With "agile" methods they're pushing code all the time and using their distributed infrastructure to test it in real time. So they deploy to 10-20 machines; if it's OK, then to 100-200, etc... With just a few huge optimized servers, one broken server would affect too many users at once, so I guess web companies are comfortable with their commodity servers.

I work for such a web company, and every attempt to introduce more reliable systems is met with mixed feelings... they've grown up in a culture of commodity PCs and think everything is equally unreliable, so why spend more money on big iron if it's going to fail at the same rate? That's how they think, and in their mind it doesn't make sense.

Re:Ewww, commodity (3, Insightful)

turbidostato (878842) | about 3 years ago | (#35845058)

"they've grown in a culture of commodity PCs and think everything is equally unreliable, so why spend more money on big-iron if it's going to fail at the same rate?"

I don't think that's exactly their point.

The point is more "why spend more money on big iron if it's going to eventually fail anyway?" If it's going to fail eventually, you'll have to program around the failure mode, but once you properly program around system failure, why go with the more expensive equipment? Go with the cheaper one and allow it to fail more frequently, since it really doesn't matter now.

Re:Ewww, commodity (0)

Anonymous Coward | more than 2 years ago | (#35875734)

You're missing the GP's point that big iron is more efficient. So, yah, it costs more and is more reliable, and more efficient.

Re:Ewww, commodity (1)

turbidostato (878842) | more than 2 years ago | (#35922778)

"You're missing the GP's point that big iron is more efficient. So, yah, it costs more and is more reliable, and more efficient."

Only no, that's not his point. My previous grandparent post says nothing about efficiency, just reliability.

Yes, his grandparent (the linux geek) does talk about efficiency, only his claim is unsupported by any fact.

Yes, a big box should be more energy efficient, but is it? Even more, is it to the point of being economically savvy?

Big vendor-provided boxes are not just "bigger" but bigger, for the most part, in unneeded ways (I don't need big backplanes or expensive RAID controllers if I don't even think of attaching hard disks; the same goes for redundant or hot-pluggable power units, CPU, RAM, etc. if my system is tolerant to a node's failure). And then there's the software issue: "usual" programs, especially open source ones, are not really so deeply tested under really high loads, so why should I be the one hitting nasty bugs when I can simply add a few more machines and keep them under "usual" loads where they just go humming along OK?

You see, in the end Facebook opted for home-grown boxes but, even then, they went the "big bunch of commodity machines" route: less cost, less testing, less risk if something goes nuts. All of this must be balanced against those untested efficiency claims for bigger boxes.

Re:Ewww, commodity (1)

SuperQ (431) | about 3 years ago | (#35845090)

But the thing is, even with big iron you still need to plan for downtimes and maintenance. For the scale of some of the "big" sites out there you still end up having to build software that can tolerate failure. Planned or unplanned. All of the utility of the big iron goes away when you still have to plan to fail over.

Re:Ewww, commodity (1)

SuperQ (431) | about 3 years ago | (#35845064)

I've done these numbers before. How about this:

IBM Power 795 (4.0 GHz, 256-core) 1TB ram - specint rate 2006 = 11,200 - $2m
AMD Opteron 6176 dual socket (2.3Ghz, 24 core) 128GB ram - specint rate 2006 = 400 - $8100

So you need about 30 AMD machines to get the same speed. That's about $250k including rack and networking. Right off the bat you're talking about 1/8 the cost of the IBM Power system.

As for performance/watt, the AMD machines need about 600W each. A rack plus switch is probably going to need 20kW to run.

Of course I'm having a very hard time trying to find the power requirements for a fully loaded 795.
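The parent's arithmetic, spelled out (the prices and SPECint_rate figures are the poster's quoted numbers, not independently verified):

```python
import math

# Figures as quoted in the parent comment:
ibm_specint, ibm_cost = 11_200, 2_000_000   # IBM Power 795, 256-core
amd_specint, amd_cost = 400, 8_100          # dual-socket Opteron 6176

machines_needed = math.ceil(ibm_specint / amd_specint)  # boxes to match throughput
cluster_cost = machines_needed * amd_cost               # servers alone, before rack/network
cost_ratio = ibm_cost / 250_000                         # vs. ~$250k padded cluster estimate
```

This gives 28 machines at about $227k in servers, which the poster rounds up to "about 30" and $250k with rack and networking, hence the roughly 8x cost gap.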

Re:Ewww, commodity (1)

the linux geek (799780) | about 3 years ago | (#35845218)

Except that you have to deal with the garbage of running a cluster, having a higher cost of licensing, and a higher rate of failure.

Re:Ewww, commodity (1)

zero0ne (1309517) | about 3 years ago | (#35845808)

Licensing? Licensing for WHAT? Do you think Facebook is running Windows on these machines? Maybe they're running Oracle Linux?

Clearly, if we are trying to save money by running commodity hardware, we are going to load up Server 2008 R2 on every box with some MSSQL. I think you need to go reread all the posts above yours...
- rate of failure doesn't matter if it's commodity hardware
- licensing clearly won't matter if you are using open-source software
- clustering? Easier to use BigTable, Cassandra, Hadoop, etc. (yes, this may be clustering to you, but it sure as hell isn't "garbage" clustering)

Re:Ewww, commodity (0)

Anonymous Coward | more than 2 years ago | (#35875652)

Except that you have to deal with the garbage of running a cluster, having a higher cost of licensing, and a higher rate of failure.

Have you considered going into comedy? The thought of a mainframe-head whining about the "cost of licensing" is pretty damn funny...

In any case, the real issue is that mainframe systems are designed for an *entirely* different sort of requirements - ultra-high availability, predictable and short response times, etc. Facebook really couldn't give a shit if a few of their users get intermittent errors, timeouts, etc - it's the nature of the web, and "OMG LOL WTF" isn't exactly banking-class data.

There's also the small matter of the existing architecture using *way* more memory (mostly for memcached) than even a z-Series box can hold: 28 TB (http://www.facebook.com/note.php?note_id=39391378919) at one mention back in 2008, and I can only suspect it's significantly bigger now. Yes, there's probably a better mainframe-centric way to deal with the database load issues, but that almost certainly means a massive rewrite of the code...

Re:Ewww, commodity (0)

Anonymous Coward | about 3 years ago | (#35846004)

If all these websites are going for commodity systems, and presuming they've put some effort into considering the options, perhaps your fact is wrong?

The google? (0)

Anonymous Coward | about 3 years ago | (#35843872)

Seriously? "The google"? Was the summary written by an 80-year-old?

Re:The google? (0)

Anonymous Coward | about 3 years ago | (#35844164)

Learn to grammar. It is "The Google datacenter" transformed into "The Google and other datacenters".

Seems... Wasteful (0)

Anonymous Coward | about 3 years ago | (#35844102)

Looking at their cooling infrastructure, it looks like a lot of energy is wasted during the heat transfer.

why did Rackspace send me there? (0)

countertrolling (1585477) | about 3 years ago | (#35845272)

I promised them a free plug...

So, Don't forget.. Rackspace is the world’s biggest web hosting company. Buy from Rackspace.. That's Rackspace, okay? Rackspace...


photo tour (2)

perryizgr8 (1370173) | about 3 years ago | (#35845474)

with an iPhone!
How hard would it have been to take a proper camera? The photos are almost unlookable!

Re:photo tour (1)

cthulhu11 (842924) | about 3 years ago | (#35852324)

It takes an act of God to get a proper camera into my company's DCs; I suspect it's like that other places, too.

Most efficient in the world? (0)

Anonymous Coward | about 3 years ago | (#35846048)

I'm struggling to find numbers to back up "most energy efficient in the world", and am unsurprised, especially as Google -- known to be one of the industry leaders in this -- never publishes its datacenter efficiencies. Could someone enlighten me? All I see is a claim on opencompute.org that it's "one of the most efficient in the world".

Pretty ugly (1)

McTickles (1812316) | about 3 years ago | (#35846338)

That is one of the ugliest data centers I have ever seen. Even datacenters consisting of messes of cables and rack units piled up look more... hmm... welcoming? friendly? nice? than that.

I guess what makes this datacenter particularly ugly is the people behind it and what goes on in it. Zuckerberg and his zombie factory, aka Facebook, and all the scheming going on selling the souls of said zombies to others.

No, this is truly a disgusting data centre.

I don't really understand (1)

Asaf.Zamir (1053470) | about 3 years ago | (#35846720)

Why do they have all the water jets, filters, air temperature changers and so on?

Is it good for the servers or something?

Re:I don't really understand (0)

Anonymous Coward | about 3 years ago | (#35846914)

It's stupidity at its worst.

Facebook needs to just worry about its silly little servers and leave the true physical infrastructure to engineering professionals.

This is what happens when you get a bunch of open-source software geeks trying to build something physical.

In this particular case, they just don't understand HVAC. The humidity in that locale is well above average. In fact, the charts show morning humidity in the city is 90% year-round. It only drops to 40% for 3 months on summer afternoons.

Yeah, your little spray jets aren't going to be doing much except creating a mold problem.

Re:I don't really understand (0)

Anonymous Coward | more than 2 years ago | (#35876966)

Humidity is a problem in most datacenters. CRAC units remove water from the air when cooling. You generally want humidity 35%+ or else you can start running into random weirdness/reboots/failures on the servers due to static electricity.

Re:I don't really understand (0)

Anonymous Coward | more than 2 years ago | (#35877424)

Why do they have all the water jets, filters, air temperature changers and so on?

Is it good for the servers or something?

The Answer:

Are they on Qwest? (1)

drinkypoo (153816) | about 3 years ago | (#35847122)

Qwest came into existence through a clever deal to purchase right-of-way along railroad tracks, and it's mentioned that the DC is located where it is because of this access. Qwest is notable for being the only LD provider that didn't instantly cave when asked to install equipment to permit the federal government to listen in on all calls.

How much less power would they consume? (2)

srodden (949473) | about 3 years ago | (#35850454)

How much less power would they consume if they didn't plug in the silly blue lights on all the servers? Per machine it can't be too much, but across a data centre that size it must add up to several hundred watts. It's not like they need to whip out an epeen at a LAN day.

Re:How much less power would they consume? (1)

SuperQ (431) | more than 2 years ago | (#35861484)

A blue LED like the ones used is probably 5-10 mW max. A not-so-bright 5 mW LED * 10k machines = 50 W used. 50 W is probably 0.001% of the power used by a cluster of that size.
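The estimate above, spelled out (the 5 mW per-LED figure and the implied ~5 MW cluster are the poster's assumptions, not measured values):

```python
led_watts = 0.005                 # one dim 5 mW indicator LED, per the poster
machines = 10_000
total_led_watts = led_watts * machines       # 50 W across the whole fleet

cluster_watts = 5_000_000                    # assumed ~5 MW cluster draw
fraction = total_led_watts / cluster_watts   # 0.001% of cluster power
```

Even multiplying by a few LEDs per machine, as a sibling comment suggests, only moves this into the hundreds of watts, which is still noise at this scale.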

Re:How much less power would they consume? (1)

srodden (949473) | more than 2 years ago | (#35862074)

I see what you're saying but looking at the number of lights there, I think it's more than one LED per machine.

Re:How much less power would they consume? (0)

Anonymous Coward | more than 2 years ago | (#35876766)

And how much power would they save running a pure water cooled system?? Yes, it would be A LOT more, but they don't do that. Take a guess why not.

Now, extrapolate that to understand why your point is stupid.
