Startup's Submerged Servers Could Cut Cooling Costs

timothy posted more than 4 years ago | from the alliteration-alternation dept.

Hardware 147

1sockchuck writes "Are data center operators ready to abandon hot and cold aisles and submerge their servers? An Austin startup says its liquid cooling enclosure can cool high-density server installations for a fraction of the cost of air cooling in traditional data centers. Submersion cooling isn't new; liquid immersion dates back to the use of Fluorinert in the Cray 2, though here the coolant is mineral oil. The new startup, Green Revolution Cooling, says its first installation will be at the Texas Advanced Computing Center (also home to the Ranger supercomputer). The company launched at SC09 along with a competing liquid cooling play, the Iceotope cooling bags."

Until... (0)

Anonymous Coward | more than 4 years ago | (#31527544)

someone urinates in the cooling liquid, that is.

Re:Until... (1)

fatherjoecode (1725040) | more than 4 years ago | (#31528032)

someone urinates in the cooling liquid, that is.

Just keep Tyler Durden out of the computer room and you'll be fine.

Or (4, Insightful)

sabs (255763) | more than 4 years ago | (#31527626)

Until you have to try and RMA that CPU :)

Re:Or (3, Insightful)

Z00L00K (682162) | more than 4 years ago | (#31527992)

Don't forget the problems you run into when the server decides to spring a leak. Old servers and old cars would have the same level of sludge and oil puddles below them.

And the weight of the servers will be higher too.

Re:Or (0)

Anonymous Coward | more than 4 years ago | (#31528474)

And the weight of the servers will be higher too.

What a verbose way of saying "And the servers will be heavier too."

Re:Or (1, Funny)

Anonymous Coward | more than 4 years ago | (#31529300)

And the weight of the servers will be higher too.

What a verbose way of saying "And the servers will be heavier too."

Interestingly enough, I find that you have chosen such a long-winded method of communication for expressing your thoughts regarding the excessive wordiness of the father's father post (the grandfather) that I cannot help but notice how impaired your message is by your excessive use of words.

TL;DR would have worked.

Re:Or (0)

Anonymous Coward | more than 4 years ago | (#31529464)

Well, if your company is located in a building on an upper floor and the load bearing capacity of the floor is already stressed by the density of standard server racks, it is an important concern.

Re:Or (1)

ipquickly (1562169) | more than 4 years ago | (#31530818)

Sprinkler test in 3..2..1...

how much does it cost? (5, Interesting)

alen (225700) | more than 4 years ago | (#31527630)

The new Xeon 5600s run at less power than previous CPUs, and SSDs also run a lot cooler. How much does this liquid cooling enclosure cost, and what is the performance compared to just upgrading your hardware?

HP is going to ship their Xeon 5600 servers starting on the 29th.

Re:how much does it cost? (4, Funny)

Anonymous Coward | more than 4 years ago | (#31528496)

Thanks for conveniently letting us know that HP's new server, based on the Xeon 5600, is shipping soon. I'll be sure to look out for that HEWLETT PACKARD server coming soon, with a Xeon 5600. On the 29th. I'll be looking for it.

Re:how much does it cost? (1)

GreyWolf3000 (468618) | more than 4 years ago | (#31528856)

I prefer to buy my servers from Dell, and if you can't take it, then I'll see you on July 12, 8 o'clock 9 pm central on pay per view! at the Royal Rumble in Las Vegas!

As someone who HAS built & run oil immersed.. (5, Informative)

GuyFawkes (729054) | more than 4 years ago | (#31527696)

..computers, allow me to label this a "fad"

The idea is funky, but to get good cooling you want convection (every joule of pump energy from a circulating pump gets transferred into the oil as yet more heat), which means deep tanks, which means, in the server environment, goodbye high density.

The ONLY thing that has changed since I was doing this is the affordability of SSDs, which means that now it is practical to immerse the whole computer, mass storage and all. That makes things a lot simpler and cheaper, and means you really can be JUST oil cooled, rather than mostly oil cooled with air-cooled HDs as the exception.

TOP TIP from an old hand.

If you are going to oil cool by immersion, buy the latest top-quality hardware, because once immersed it stays there; you'll only pull it once to see why it sucks.

BIGGEST mistake experimenters make is using old hardware, because you always end up playing with it, making a mess, ahh fsckit..

Nota bene: if you are building one of these in anger, make allowances for the significant increase in weight that the oil adds.

HTH etc

Forgot to say why I oil cooled. (1)

GuyFawkes (729054) | more than 4 years ago | (#31527808)

It was in order to build a totally silent computer. The cooling aspect worked OK, nothing spectacular (not if you lay out the case properly, buy fans with decent blade profiles and proper bearings, and decent aftermarket heatsinks), but the total silence was beautiful... even ATX PSUs make a noise; you only notice when you immerse *everything*.

Re:Forgot to say why I oil cooled. (2, Interesting)

HungryHobo (1314109) | more than 4 years ago | (#31528540)

I'm also curious: is there any kind of fire hazard doing this on a large scale?

There isn't a lot to burn in a normal computer (at least not burn really well), but could a short circuit near a leak lead to an inferno in an oil cooled data centre?

Or is the oil treated in some way to make it less likely to burn?

Re:Forgot to say why I oil cooled. (3, Informative)

MoralHazard (447833) | more than 4 years ago | (#31529568)

Educate thyself: http://en.wikipedia.org/wiki/Mineral_oil#Mechanical.2C_electrical_and_industrial

Just because something CAN burn doesn't make it dangerous to have around potential sources of electrical arcing. Hydrocarbon petroleum products present no real fire/explosion danger unless the substance is warmer than its flash point, the lowest temperature at which the liquid gives off enough vapor to form an ignitable mixture in air. Below the flash point, oil is only about as flammable as plastic. The evaporated fumes mixed into the air are the ignition danger, not the liquid itself.

This is because ongoing hydrocarbon combustion requires steady supplies of freely-mixing HC and oxygen. Sustaining the reaction requires the input of a tremendous volume of oxygen (compared to the liquid fuel volume, anyway), and the oxygen has to get rapidly mixed with the HC. That mixing can't happen quickly enough with the liquid HC. That's why the flash point is such an important consideration: the gaseous HC fumes mix quite well and quickly with atmospheric oxygen, creating nice conditions for sustained combustion (a fire).

This is even true of gasoline (flash point = -40F). If you pour gasoline into a pail in the middle of a bad Antarctic winter, and you throw a match into the pail, the gasoline will just extinguish the match like a bucket of water.

Of course, if you mix liquid HC with liquid oxygen, or any other eager oxidizers, all bets are off. That shit will explode at cryogenic temperatures if you just look at it funny. (That's how rocket engines work.)

Re:Forgot to say why I oil cooled. (1)

lukas84 (912874) | more than 4 years ago | (#31529594)

Not all oil burns well at atmospheric pressure, or at all for that matter.

Re:As someone who HAS built & run oil immersed (0)

Anonymous Coward | more than 4 years ago | (#31527928)

The place where oil immersion makes sense is in the data center, especially a large one where the surrounding buildings already have a chilled water loop for cooling. All you need is a heat exchanger to turn hot oil into cold, and cold water into warm. You don't have to turn over whole buildings full of air.

Re:As someone who HAS built & run oil immersed (4, Insightful)

eln (21727) | more than 4 years ago | (#31528118)

There are other ways to make data center cooling more efficient, such as hot aisle containment and individual rack-top coolers blowing cold air directly in front of the racks. There's no reason a modern data center needs to move entire buildings full of air anymore, even without liquid cooling.

Oil immersion may or may not be more efficient, but it doesn't seem like it would scale well. In a large data center where some hardware component is failing on a daily basis, because you have tens of thousands of servers, keeping all that oil contained within the enclosures would be a major challenge. During maintenance, that stuff is going to be getting all over everything, including the tech, who can easily spread it all over anything he touches before he gets around to cleaning up. You'd need a cleaning crew out on the floor constantly.

Re:As someone who HAS built & run oil immersed (1)

turbidostato (878842) | more than 4 years ago | (#31528428)

"modern data center [...] Oil immersion may or may not be more efficient, but it doesn't seem like it would scale well. In a large data center where some hardware component is failing on a daily basis, because you have tens of thousands of servers"

In a modern, large datacenter you don't repair each failing component; you just route your computing load around it.

Re:As someone who HAS built & run oil immersed (3, Funny)

blueZ3 (744446) | more than 4 years ago | (#31528784)

Cue tech in scuba gear swimming down through the oil to change a power supply.

Re:As someone who HAS built & run oil immersed (1)

Sulphur (1548251) | more than 4 years ago | (#31529232)

Cue tech in scuba gear swimming down through the oil to change a power supply.

Would this be a good application for a robot?

Re:As someone who HAS built & run oil immersed (1)

lastchance_000 (847415) | more than 4 years ago | (#31530386)

Sounds like a job for Mike Rowe.

Re:As someone who HAS built & run oil immersed (1, Insightful)

Anonymous Coward | more than 4 years ago | (#31530892)

You are making the assumption that individual servers or even racks must be changed out regularly. Considering how many datacenters are no longer space constrained, but rather power constrained by the number of kW per square foot, or cooling constrained due to local regulations or power consumption, other approaches are valid.

The opposite conclusion is a containerized datacenter/rack cluster using oil immersion as the internal primary coolant, hooked into a datacenter-fed cold-water heat exchanger mounted at the end of the container. With that you would nominally design for a specific rated gigaflops/Gbps for the container as a whole. At first you have more than that, but as devices fail, you fail in place. When the container's performance drops below its rating, you swap out the whole container.

Considering the depreciation and rated lifecycle/lifetime of servers, this is not an unreasonable proposition. Say the expected rated lifetime is 3 years, and containers are swapped with the container manufacturer as part of a trade-in/leased-pool financing plan. When a container is brought in for maintenance, the equipment and personnel necessary to deal with oil immersion are available. The manufacturer can refurbish/replace the servers within and return the container to the lease pool, or, if that isn't cost effective, drain the bastard, sell off the remaining servers, and recycle the container, or simply sell the old container whole as a "below rated" or EoL product.

Re:As someone who HAS built & run oil immersed (1)

Rich0 (548339) | more than 4 years ago | (#31528068)

Why not use a water heat exchanger outside the case to cool the oil (while keeping water away from system components, and getting full contact with the entire system)? The water could then go into a loop to cool it. Other coolants could also be used, although water is great from a heat capacity standpoint.

Since the water doesn't touch anything important, it can be dumped into a cooling tower/etc.

To cool one system I doubt it is worth all the trouble, but for a datacenter I bet you could make it very efficient. It is a lot easier to run pipes of water than sufficient ductwork for A/C.

Component replacements could be a pain, unless the rack made it really easy to drain a given case.

Re:As someone who HAS built & run oil immersed (1)

Sir_Lewk (967686) | more than 4 years ago | (#31528548)

How do you build a server 'in anger'?

Re:As someone who HAS built & run oil immersed (1)

Bob-taro (996889) | more than 4 years ago | (#31528902)

...you want convection (every joule of pump energy from a circulating pump gets transferred into the oil at yet more heat) which means deep tanks which means, to the server environment, goodbye high density.

Really? You could say the same about air moved by a fan (that the fan's energy contributes to the overall heat). I'm no expert in this area, but I've seen liquid cooled PCs and the only big component is the radiator. I would think you could pack liquid cooled components more densely than air cooled, and you could put the radiator in another room.

Re:As someone who HAS built & run oil immersed (1)

dasdrewid (653176) | more than 4 years ago | (#31529890)

Just curious, and you seem like the guy to ask: has anyone done full-center immersion? With the proliferation of shipping-container rack systems, would it be possible to seal the entire container into one giant unit with a manhole on the top, then drop in a diver with either tanks or a line and let them do maintenance without worries of spillage? You'd be able to keep the same density as is currently used, since you could use the normal maintenance space for convection currents and the normal A/C units as heat exchangers. If depth specifically is an issue, you could always move the racks to the floor, since a diver wouldn't require a floor to walk on...

Like I said, I'm just curious. I don't really know much about any of this.

Re:As someone who HAS built & run oil immersed (0)

Anonymous Coward | more than 4 years ago | (#31530180)

You can submerge traditional platter-type hard drives too! We have an enclosure to package the drives before they get submerged.

We've also tried high density. Works great! Actually, this is the golden solution for high-density markets. Ever tried to put 12 physical CPUs on one motherboard? Wouldn't happen without liquid. It just so happens it's cheap with our technology.

Our technology raises a lot of questions. We were at last year's Supercomputing conference (10,000 attendees) and had tons of questions. I would like to believe we had answers...

-Christiaan Best (GRC employee)

Re:As someone who HAS built & run oil immersed (1)

spire3661 (1038968) | more than 4 years ago | (#31530594)

Are SSDs submersible?

Maintenance Access? (4, Interesting)

Daniel_Staal (609844) | more than 4 years ago | (#31527712)

How much harder does it make standard maintenance, like moving cables, swapping hard drives, and changing components?

One of the advantages of a standard rack to me is that all of that is fairly easy and simple, so you can fix things quickly when something goes wrong.

Re:Maintenance Access? (1)

DigiShaman (671371) | more than 4 years ago | (#31527932)

May I recommend some long surgical gloves? Having a box of disposables nearby might become common practice in these types of data centers.

Re:Maintenance Access? (1)

Rich0 (548339) | more than 4 years ago | (#31528156)

Agreed, although if this became standard and built into racks then maybe each server would just have a button next to it that pumped out the coolant quickly. Hot-swaps probably wouldn't work inside the case itself, since you'd have to remove the coolant to perform this task.

Alternatively, you could perform a hot swap immersed in oil if you did it quickly - the oil probably couldn't be circulated with the case open but it would at least be there. I'm not sure that this would actually buy you much though, as oil has low heat capacity. Without pumps running the cooling might not be better than air cooling without fans. So, maybe hot-swaps would be totally out of the question unless the case could be designed to allow oil to flow without being under pressure.

Re:Maintenance Access? (4, Funny)

ArsonSmith (13997) | more than 4 years ago | (#31528506)

scuba gear and lessons for all sys admins!! All datacenters could just be a giant pool of swirling oil.

Re:Maintenance Access? (0)

Anonymous Coward | more than 4 years ago | (#31528566)

All datacenters could just be a giant pool of swirling oil.

So what you're saying is that most sysadmins would feel right at home...

Re:Maintenance Access? (1)

Kompressor (595513) | more than 4 years ago | (#31528980)

If the external cooling for the oil failed, you might end up with some mighty crispy techs...

Just in case, have them roll in breading before going in; then you could at least salvage the meat :-D

Mmm... Country Fried Tech...

Re:Maintenance Access? (2, Funny)

Hoi Polloi (522990) | more than 4 years ago | (#31528986)

Do we get old timey shirts with our name on them too?

"I see, ahh, your problem here maam. Your server rack is down a few pints. I'll top it off and put it on the lift and check the pump too."

Re:Maintenance Access? (1)

Wolfraider (1065360) | more than 4 years ago | (#31530048)

I want to go see the datacenter at GoDaddy if they can get all those girls to wear bikinis in the server room. Mmm, a room full of oily, scantily clad girls. That's where I want to work.

Re:Maintenance Access? (0)

Anonymous Coward | more than 4 years ago | (#31530944)

that's actually not a horrible idea.

Re:Maintenance Access? (0)

Anonymous Coward | more than 4 years ago | (#31528562)

Liquid cooled servers are awesome! Especially when you use kerosene as the liquid!

Ease of Service (0)

Anonymous Coward | more than 4 years ago | (#31527762)

Here's the ease of service video.

http://www.youtube.com/watch?v=-q0sTFX1DFM

Re:Ease of Service (3, Insightful)

eln (21727) | more than 4 years ago | (#31527926)

In any kind of a large data center environment the whole floor is going to be covered in that shit in short order. I can just imagine the fun of dealing with workman's comp claims every other week because someone slipped on liquid coolant on the floor and injured themselves. Even with high quality components, if you have 30,000 servers in a big room, you're going to have someone out there fiddling with one or more of them on a daily basis, and keeping things clean when they're all fully immersed like that would be next to impossible, especially if you're dealing with oil.

Re:Ease of Service (3, Insightful)

Grishnakh (216268) | more than 4 years ago | (#31528570)

With 30,000 servers in a big room, you do NOT want anyone "fiddling" with them at all. They need to be removed from the room and taken someplace else to be "fiddled" on.

Here's an idea. It would require a chassis redesign, but it would remove most of the maintenance problems.

Make a special case for each system, which has no fans (since they're only useful for air-cooled systems), and has some type of pump for circulating the cooling oil. In this circulation loop is a heat exchanger, one built into each chassis. The backside of the chassis has two quick-connect connectors for connection to a cooling water supply. These are the type of connectors that close when they're unplugged. Such connectors are both on the water supply, and on the chassis. This way, when a server malfunctions, all the tech has to do is unplug it and pull it out of the rack. The water connectors will disengage, so only a few drops of water will spill (which will evaporate quickly). All the cooling oil will be contained within the server chassis.

The server can then be taken to a designated maintenance area where the oil can be drained and the server operated on, and then refilled with oil and plugged back into the server rack.

Re:Ease of Service (1)

TubeSteak (669689) | more than 4 years ago | (#31529576)

You really don't want to take an entire rack offline just to fix one server.

Re:Ease of Service (1)

Grishnakh (216268) | more than 4 years ago | (#31529650)

Who said anything about taking an entire rack offline? My idea was to make each server (with multiple servers per rack, obviously) self-contained with its own water connection, and easily removable without disturbing the other servers in the rack.

Re:Ease of Service (0)

Anonymous Coward | more than 4 years ago | (#31528628)

I'll agree with your contention. After seeing a Fluorinert-cooled 40kW radar set about the size of a 1-foot cube back in 1983, you know it can be done. BUT! It's a far cry from fault-tolerant, triple-redundant military hardware to lowest-cost commodity servers.

You'll have to come up with a way to pull the rack out of the bath without having to go swimming, and to work on the rack without getting gunky (for example, an automatic wash-off when it comes out of the bath).

Making it work will be a far cry from "I can run this board in a fish tank filled with Crisco! D000d!"

Re:Ease of Service (0)

Anonymous Coward | more than 4 years ago | (#31529422)

The fluid doesn't look particularly nasty or viscous. I think some rubber mats and a couple of drains would go a long way. Oh, and plenty of gloves, and an employer-pays-for-clothes policy.

Submerged data center (1)

wjousts (1529427) | more than 4 years ago | (#31528012)

Was I the only one who read the headline and immediately thought of some kind of underwater data center? That would have been cool!

Re:Submerged data center (1)

oodaloop (1229816) | more than 4 years ago | (#31528292)

No, I thought they meant it was submerged as well, as in using the earth's water and soil as a heatsink. Sort of like those geothermal heating/cooling units some houses have. The deep water is always around 67 deg F, so it warms in the winter and cools in the summer. Massively more efficient than conventional oil heat and electric AC. For all the attention Al Gore received for Global Warming, it was President Bush who has one of these at his Crawford ranch.

Anyway, this is much less interesting. Oh well.

Re:Submerged data center (1)

compro01 (777531) | more than 4 years ago | (#31528554)

Nope, you're not the only one. I had a vision of sysadmins in SCUBA gear doing hardware swaps.

Re:Submerged data center (0)

Anonymous Coward | more than 4 years ago | (#31529794)

Yeah, I was thinking this guy [slashdot.org] had finally found his niche.

Oh yuck. (1)

istartedi (132515) | more than 4 years ago | (#31528040)

You'll obviously need to be scaling before you invest in a system that involves a big vat full of oil.

Also, what does the fire marshal think of a big vat full of oil? Hazardous disposal? Oh boy... some company goes BK, and they leave behind a big vat full of oil and outdated electronics.

I didn't dig deep enough to see if they are actively pumping the oil or not. If they are, they're not doing it right. Any system that really cuts cooling costs should be using an LTD engine to transform the heat into useful work.

Of course, you still need to reject the heat someplace. At one place, it was my understanding that they had a helluva time trying to explain to some manager why they had to cut a hole in the building to let heat out of the server room. It's the same basic thermodynamics of "what happens if you leave the refrigerator door open". The room just gets hotter.

So. You'll have to have some kind of oil-air heat exchanger someplace. The hole for an oil line coming out of the server room is smaller... but it's an oil line. Back to the hazard factor...

Don't get me wrong. I understand why they used oil in things like Crays. The rate of heat exchange between the electronics and the oil is evidently better. It's the same reason why 50 degree water gives you hypothermia in 10 minutes and 50 degree air doesn't.

So. That leads us to the questions: Is your overall system efficiency going to be better in some way by running hotter? Does that savings offset the cost of the oil system?

Plainly, a commodity Intel server box doesn't run hot enough to require oil for effective heat transfer, unless you overclock it. If you can get twice the effective computing power in a room with fire-hot overclocked servers and the fancy oil cooler, ok maybe it's worth it?

Note: I don't lay any claim to be an expert in this field. These are just the kind of questions I think a generally intelligent person should ask. If somebody who really knows this stuff can *politely* rebut, then great.

Re:Oh yuck. (2, Insightful)

Grishnakh (216268) | more than 4 years ago | (#31528724)

You don't need oil-air heat exchangers, oil vats, or anything of the kind. What you need is chilled WATER, which is already generated by cooling plants. Run this water to each server using simple pipes and a large pump for the whole facility, and then put an oil/water heat exchanger inside each chassis, along with a pump to circulate the oil.

Is the efficiency going to be better? Maybe, maybe not, who cares. What's different is that cooling is much easier with 3/8" pipes of water rather than worrying about ductwork and A/C units. This will also allow you to have much, much higher server density than with air cooling; fluid is a much better (and denser) conductor of heat than air. Instead of wasting a lot of space on fans and ductwork and other places for air to flow, you only have to worry about some little pipes. Floor space is expensive in a facility like this.

And if you keep the cooling oil contained within the servers, you won't have to worry about any mess.
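
For a rough sense of the plumbing this implies, here is a minimal back-of-the-envelope sketch using Q = m_dot * cp * dT; the 10 kW rack load, 250 W per server, and 10 K temperature rises are illustrative assumptions, not figures from the article:

    # Sizing a chilled-water loop feeding per-chassis oil/water heat exchangers.
    # All loads and temperature rises below are assumptions for illustration.

    CP_WATER = 4186.0    # J/(kg*K)
    CP_OIL   = 1900.0    # J/(kg*K), typical for mineral oil
    RHO_WATER = 1000.0   # kg/m^3
    RHO_OIL   = 850.0    # kg/m^3

    def flow_for_load(q_watts, cp, rho, delta_t):
        """Return (mass flow in kg/s, volumetric flow in L/s) to carry q_watts."""
        m_dot = q_watts / (cp * delta_t)
        return m_dot, m_dot / rho * 1000.0

    rack_load = 10_000.0     # W, assumed 10 kW rack
    server_load = 250.0      # W, assumed single server

    m_w, v_w = flow_for_load(rack_load, CP_WATER, RHO_WATER, delta_t=10.0)
    m_o, v_o = flow_for_load(server_load, CP_OIL, RHO_OIL, delta_t=10.0)

    print(f"Water to the rack: {m_w:.2f} kg/s (~{v_w:.2f} L/s) at a 10 K rise")
    print(f"Oil in one chassis: {m_o:.3f} kg/s (~{v_o*60:.1f} L/min) at a 10 K rise")

On those assumptions, roughly a quarter of a litre of water per second handles a 10 kW rack, which is indeed small-pipe territory rather than ductwork.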

LOL (0)

Anonymous Coward | more than 4 years ago | (#31528760)

If somebody who really knows this stuff can *politely* rebut, then great.

Politely? You must be new here. XD

Re:Oh yuck. (1)

ubercam (1025540) | more than 4 years ago | (#31528864)

My initial thoughts were "Why on earth would you use the engine from an LTD [wikipedia.org] ?"

My ambiguous Wikipedia search revealed that you were in fact referring to a Stirling engine (aka. a low temperature difference engine).

Re:Oh yuck. (1)

Dilligent (1616247) | more than 4 years ago | (#31529390)

So. That leads us to the questions: Is your overall system efficiency going to be better in some way by running hotter?

As someone who has taken a class in electronics, I can assure you that the efficiency of electronic equipment drops as temperature rises, because leakage currents increase. This may even lead to thermal runaway.
Running hot is also pretty bad as far as reliability goes.

Re:Oh yuck. (1)

camperdave (969942) | more than 4 years ago | (#31529642)

The reason they use oil, or some fluorocarbon, is that it doesn't conduct electricity like water does. However, just because they have oil in the servers does not mean that they will be pumping oil out of the server room, or even out of the server itself, to cool it. One way you could do it is to oil cool each server in a rack using a rack-mounted supply, then use a water system to cool that rack-mounted supply. This is the way Iceotope does it.

How long before we re-invent the mainframe? (1)

strangeattraction (1058568) | more than 4 years ago | (#31528052)

I'm starting a pool. How much longer before the mainframe is re-invented to power cloud computing? I'm taking 1.5 years. Any other bets?

Re:How long before we re-invent the mainframe? (1)

zero0ne (1309517) | more than 4 years ago | (#31528244)

2.5, but it will be a mainframe powered by GPUs.

Re:How long before we re-invent the mainframe? (1)

WinterSolstice (223271) | more than 4 years ago | (#31528742)

Sort of like installing little Linux LPARs and such. Very amusing.

Mainframes are still the very best power/performance out there... and probably always will be :)

Re:How long before we re-invent the mainframe? (1)

Grishnakh (216268) | more than 4 years ago | (#31528802)

If you think about it, a "server farm" really isn't that different from a "mainframe"; it's a whole bunch of CPUs working in parallel, all packed into one room. The only real difference is that most server farms are implemented with separate OSes on each system, instead of a single OS for the whole thing, which is good for redundancy and partitioning but not so great for efficiency. It'd be a lot simpler and more efficient if we just had one big OS for the whole system, with different users using different user accounts, which is exactly what multi-user operating systems were designed for. Unfortunately, no one seems to have figured out yet how to make a truly reliable and fault-immune multi-user OS, so we use separate systems, virtualization, etc. to contain the damage when faults (either hardware or software) happen and crash the entire OS.

Re:How long before we re-invent the mainframe? (1)

ebuck (585470) | more than 4 years ago | (#31528972)

Well, if you're starting a pool, throw in a cloud of servers and you'll be the pioneer.

Come to think of it, I'll refrain from betting on this one; when you're so poised to control the outcome, odds are I'll lose.

Submerged hard disks? (3, Interesting)

JPerler (442850) | more than 4 years ago | (#31528270)

Hard disks aren't sealed; there's always (at least, on the dozens of disks I've taken apart) a little felt-pad or sticker-covered vent on them. I figured it was for equalisation or something crazy, but I'm not positive.

Given that hard disks aren't sealed, wouldn't they fill with fluid? And even assuming they'd still function with a liquid screwing up the head mechanism (modern disks' heads float above the platter surface on a cushion of air), wouldn't the increased viscosity slow down seeks?

Re:Submerged hard disks? (1)

idontgno (624372) | more than 4 years ago | (#31528638)

Solid state disks.

Essentially, if it has moving parts, it probably stays in air, and uses either conventional air cooling or contact non-submergence liquid cooling.

Re:Submerged hard disks? (3, Informative)

mnmoore (50459) | more than 4 years ago | (#31528756)

In the embedded video, they indicate that hard disks need to be wrapped in some material the vendor apparently provides, presumably for just this reason. Not sure how well the wrapping transfers heat.

Re:Submerged hard disks? (2, Interesting)

Grishnakh (216268) | more than 4 years ago | (#31528874)

No, the fluid would completely ruin the hard drive, because drives aren't designed for that.

There are two ways around this problem that I see:
1) Use SSDs instead of mechanical platter HDs.
2) Use regular HDs, but do not submerge them in the cooling oil. Instead, put them in some type of aluminum enclosure which conducts the heat to the cooling oil but keeps it from contacting the HD itself, sort of like what water-cooling enthusiasts do for their hard drives today.

And yes, I believe you're correct about equalization; the disks have filters to keep contaminants out and the air inside clean, but they're not designed to be pressurized, so they have to equalize with the ambient air pressure.

Re:Submerged hard disks? (1)

ebuck (585470) | more than 4 years ago | (#31529012)

Such issues could easily be solved by submerging only the compute nodes (which connect back to an external SAN for storage), by encasing hard drives in airtight containers that make heat-conductive contact with the drive body, or by using newer SSDs to remove the need for an air cushion between your non-existent head and your non-existent platter.

I wonder what kind of oil (0)

Anonymous Coward | more than 4 years ago | (#31528330)

the humans are immersed in, in the datacenter in the movie The Matrix. Good solution too when the "components" fail. Just flush them out the drain ;-)

Same Thermal Output (2, Insightful)

TheNinjaroach (878876) | more than 4 years ago | (#31528352)

Won't these servers bathed in oil still have the same thermal output? I don't understand why it would be cheaper to cool oil than it would be to cool air or any other medium.

Re:Same Thermal Output (0)

Anonymous Coward | more than 4 years ago | (#31528684)

Because air is a poor thermal agent.

Anyway oil sounds fun until something goes wrong and you fry the whole potato bag. I trust coolants these days are not flammable anymore, so that piece of the show is gone.

Re:Same Thermal Output (1)

newcastlejon (1483695) | more than 4 years ago | (#31528794)

Mineral oil has a thermal conductivity 5 times greater than air, and is much easier to pump around. I expect the difference in cp is similar but wikipedia doesn't list a value for oil.

Re:Same Thermal Output (0)

Anonymous Coward | more than 4 years ago | (#31528922)

Easier to pump around than air? I think not. The density of oil is higher, and therefore you're pushing a larger mass. Even if it is 5x more effective at carrying the heat away, it's definitely more than 5 times denser (and therefore harder to pump), ergo it's even 'harder' to pump per unit of heat removal too.

Re:Same Thermal Output (1)

h4rr4r (612664) | more than 4 years ago | (#31529084)

You do not pump the oil. You set up a convection current via your layout. If you are pumping the oil, you are doing it wrong.

Re:Same Thermal Output (1)

newcastlejon (1483695) | more than 4 years ago | (#31529440)

You can easily pump oil with a positive displacement pump. It's quieter, too - our 10-ton hydraulic press makes less noise than my home PC (with only one SilenX fan in it).

Density isn't really an issue, because for all the extra mass you're pumping, you're taking away something on the order of 5 times the heat per unit mass of coolant (air/oil) that you pump. Aerofoils aren't my area, so I'll have to ask someone else to comment on the efficiency of a bladed fan vs. a PD pump, i.e. the energy needed to move 1kg of air vs. 1kg of oil, ignoring the fact that you need to move less than half the mass of oil for similar Q-values.

WRT the sibling post, I should say that I don't see how completely immersing the system and relying on convection is a much better solution than something similar to a water cooling setup supplemented with forced air for the cooler components (e.g. HDDs). But I suppose with the RAM, northbridge (are they still called that?) and all the rest, you might as well go the whole hog.
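
To put rough numbers on the 1 kg of air vs. 1 kg of oil question, here is a minimal sketch comparing ideal moving power for the same heat load; the 1 kW load, 10 K rise, and the assumed fan and pump pressure rises are illustrative guesses, not measured values:

    # Ideal moving power for air vs. mineral oil carrying the same heat.
    # Heat carried: Q = m_dot * cp * dT; ideal fan/pump power = volumetric
    # flow * pressure rise (efficiencies and losses ignored).
    # The pressure-rise figures are assumptions for illustration only.

    FLUIDS = {
        # name: (cp in J/(kg*K), density in kg/m^3, assumed pressure rise in Pa)
        "air":         (1005.0, 1.2,   100.0),     # ~100 Pa fan static pressure
        "mineral oil": (1900.0, 850.0, 10_000.0),  # ~10 kPa loop pressure drop
    }

    Q = 1000.0   # W of heat to remove (one modest server)
    DT = 10.0    # K temperature rise across the load

    for name, (cp, rho, dp) in FLUIDS.items():
        m_dot = Q / (cp * DT)   # kg/s of coolant required
        v_dot = m_dot / rho     # m^3/s of coolant required
        power = v_dot * dp      # W of ideal moving power
        print(f"{name:11s}: {m_dot:.4f} kg/s, {v_dot*1000:.3f} L/s, ~{power:.2f} W to move")

Under those hand-picked pressure assumptions the oil needs roughly a thousandth of the volumetric flow and an order of magnitude less ideal moving power; real fan and pump efficiencies would shift the numbers but not the direction.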

Re:Same Thermal Output (1)

cynyr (703126) | more than 4 years ago | (#31530676)

But look at the differences in energy per unit mass. Fans, while good at moving air, lose a lot as soon as there is some sort of obstruction in the way; getting a fan to deliver more than a few (1-3) inches of water worth of pressure is quite a bit of work. So while oil may be harder to move, you need to move less of it, or it can move more slowly. You are correct that you need to move a specific thermal mass to cool a heat source.

Re:Same Thermal Output (2, Interesting)

eh2o (471262) | more than 4 years ago | (#31528854)

Air actually has a very high thermal resistance so one needs to use forced circulation to actually transport moderate amounts of heat. Running all those fans uses more energy. In fact in any closed room, running a fan may cause objects immediately in front of the fan to be cooled, but overall the room is heating up from the power use.

Oil has a very low thermal resistance naturally so one can use ordinary convection instead (up to some point).

A less messy solution would be for servers to be made with integrated metal heat-pipes that conduct the waste energy to the case. Then a special type of rack would carry the heat away through the mounting rails.

Re:Same Thermal Output (2, Interesting)

AlejoHausner (1047558) | more than 4 years ago | (#31528878)

The company's website [grcooling.com] claims that it's easier to cool oil than to cool air. Their argument is that conventional air cooling requires 45 degree F air to keep components at 105 degree F, whereas the higher heat capacity of the oil lets it come out of the racks at 105F. The oil is hotter than ambient air (at least where I live), so it should be easier to remove its heat (through a heat exchanger) than to chill warm exhaust air back to 45F (through a refrigeration unit). Of course most components can run hotter than 105F, and that only strengthens their argument.

Alejo
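
The thermodynamic floor behind that argument can be sketched in a few lines: if the coolant leaving the racks is colder than outdoor air, you must run a heat pump and Carnot sets the minimum work, whereas coolant hotter than ambient can reject its heat through a plain heat exchanger. The 100 kW load and 95 F outdoor temperature below are assumptions for illustration, not figures from the company:

    # Minimum (Carnot) work to reject Q from a cold coolant to hot outdoor air:
    # W_min = Q * (T_hot - T_cold) / T_cold. If the coolant is already hotter
    # than outdoor air, the thermodynamic minimum is zero (only fans/pumps).

    def f_to_k(t_f):
        return (t_f - 32.0) * 5.0 / 9.0 + 273.15

    Q = 100_000.0                  # W of IT heat to reject (assumed 100 kW room)
    t_outdoor = f_to_k(95.0)       # hot-day ambient, assumed
    t_supply_air = f_to_k(45.0)    # chilled supply air from the comparison above
    t_oil_return = f_to_k(105.0)   # oil leaving the racks, per GRC's claim

    w_min = Q * (t_outdoor - t_supply_air) / t_supply_air
    print(f"45F supply air: at least {w_min/1000:.1f} kW of compressor work (Carnot floor)")
    print(f"105F oil return ({t_oil_return:.0f} K): 0 kW thermodynamic minimum; "
          f"heat already flows toward the {t_outdoor:.0f} K ambient")

Real chillers run at a multiple of the Carnot floor, and both paths still pay for fans and pumps, but it shows why dumping 105F oil to ambient is the easier problem.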

Re:Same Thermal Output (2, Interesting)

Grishnakh (216268) | more than 4 years ago | (#31528932)

It's not cheaper to cool oil. However, it's easier, because you can use oil-to-water heat exchangers, and cool the whole server farm with a chilled water plant (like A/C, but only chills water and never uses it to cool air). The benefit of this is that you don't have to worry about airflow, ductwork, and the like, and you can pack servers much more densely into a space than with air cooling. Since floor space in a facility like this is expensive, this saves money. It might also be more efficient to use chilled water in pipes to cool the servers directly rather than chilling air and blowing that around a big building.

Do this for free to be Green. (1)

bill_mcgonigle (4333) | more than 4 years ago | (#31530476)

It might also be more efficient to use chilled water in pipes to cool the servers directly rather than chilling air and blowing that around a big building.

Especially when it's free. I used to work at a medical center with a big data center. Cold city water was run first to the data center, heat-pumped to a cold-air Liebert, and then the slightly-warmer water was piped on to all the places where cold water is used. A degree or two warmer is quite fine at the tap.

Smart downtown-city data centers would work a deal with the city to do the same thing and stop paying for coal-generated electricity. Maybe the next crop of data centers should be built next to the water treatment plants.

This kind of "green" will be of the "backs" kind.

Re:Do this for free to be Green. (1)

Grishnakh (216268) | more than 4 years ago | (#31530544)

Won't work here in Phoenix. Here, in the summertime, there is no "cold" water faucet in your home; there's only "warm" and "hot". Many times, the "warm" faucet is just as hot as the "hot" one.

Of course, I don't know what kind of idiot would locate a datacenter in Phoenix anyway. Except maybe Paypal.

Re:Do this for free to be Green. (1)

bill_mcgonigle (4333) | more than 4 years ago | (#31530716)

Heh, that's funny. Fortunately fiber optics run to cold places pretty well.

Re:Same Thermal Output (0)

Anonymous Coward | more than 4 years ago | (#31529218)

Because liquids are much more efficient at transferring and carrying heat so you don't need to expend as much energy turning huge fan blades to blow large volumes of air past a heat sink versus a small pump providing a gentle current.

The test (1)

fulldecent (598482) | more than 4 years ago | (#31528434)

Oh..... there's something Google didn't think of and try.

"Green Revolution"!!! (1)

oldhack (1037484) | more than 4 years ago | (#31528508)

Astute move. They're named "Green Revolution Cooling". Everyone knows you can't go wrong when you go "green".

Fanless low power servers are the future (4, Interesting)

colordev (1764040) | more than 4 years ago | (#31528612)

A server with this [newegg.com] Intel Atom equipped mobo draws something like 25-35W under full load. And the performance of these D510 dual core processors is comparable [cpubenchmark.net] to better Pentium 4 processors.

Re:Fanless low power servers are the future (1)

h4rr4r (612664) | more than 4 years ago | (#31528948)

So the future is going to be slow, really really slow?

We keep quad socket quad Xeon boards at very high usage all the time. These things are not going to cut it.

Re:Fanless low power servers are the future (1)

MoralHazard (447833) | more than 4 years ago | (#31529684)

You don't need bigger and bigger individual machines, if you have fast enough IO and your software engineers know WTF they are doing. There are alternative parallel algorithms for practically any problem you'd naively solve in a highly serial way. Given the right programming skill set, we could run just about any web app you care to imagine on a farm of SheevaPlugs (http://en.wikipedia.org/wiki/SheevaPlug). Kind of cute, don't you think?

Why do you think places like Google and the big quant-heavy finance firms have had such hard-ons for functional programming over the last couple of years? FP lends itself toward parallel problem solving in a real big way, and most of the Big Brains in charge of Big Computing have all come to the same conclusion: in another 10 years, you'll be irrelevant unless your primary business logic is efficiently and elegantly parallel.
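
As a toy illustration of that decomposition argument (the names and data here are made up, not from any real codebase): many aggregations that look serial split into an embarrassingly parallel map step plus a cheap reduce step, which is what lets a farm of small cores stand in for one big box.

    # Parallel map + reduce sketch: count words across many log lines using a
    # pool of worker processes, then combine the per-chunk partial results.
    from multiprocessing import Pool

    def partial_word_count(chunk_of_lines):
        """Map step: count words in one slice of the input, independently."""
        return sum(len(line.split()) for line in chunk_of_lines)

    if __name__ == "__main__":
        lines = [f"request {i} handled in {i % 7} ms" for i in range(100_000)]
        n_workers = 4
        chunks = [lines[i::n_workers] for i in range(n_workers)]  # round-robin split

        with Pool(n_workers) as pool:
            partials = pool.map(partial_word_count, chunks)  # runs on separate cores

        print("total words:", sum(partials))                 # reduce step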

Re:Fanless low power servers are the future (0)

Anonymous Coward | more than 4 years ago | (#31530220)

There are alternative parallel algorithms for practically any problem you'd naively solve in a highly serial way.

What's the parallel way to do a CRC-32?
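
For what it's worth, CRC-32 is a classic example of a serial-looking computation that does parallelize: because CRC is linear over GF(2), the CRC of a concatenation can be derived from the CRCs of the pieces, so the pieces can be hashed on separate cores and merged afterwards. Below is a sketch of the merge step, a straightforward Python translation of the approach used by zlib's crc32_combine() (the sample data is made up):

    import zlib

    def gf2_matrix_times(mat, vec):
        # Multiply a 32x32 GF(2) matrix (list of 32 row masks) by a 32-bit vector.
        total = 0
        i = 0
        while vec:
            if vec & 1:
                total ^= mat[i]
            vec >>= 1
            i += 1
        return total

    def gf2_matrix_square(square, mat):
        for n in range(32):
            square[n] = gf2_matrix_times(mat, mat[n])

    def crc32_combine(crc1, crc2, len2):
        """Return CRC-32 of A+B given crc32(A), crc32(B), and len(B)."""
        if len2 <= 0:
            return crc1
        even = [0] * 32
        odd = [0] * 32
        odd[0] = 0xEDB88320           # reflected CRC-32 polynomial
        row = 1
        for n in range(1, 32):        # operator for one zero bit
            odd[n] = row
            row <<= 1
        gf2_matrix_square(even, odd)  # operator for two zero bits
        gf2_matrix_square(odd, even)  # operator for four zero bits
        while True:                   # advance crc1 over len2 zero bytes
            gf2_matrix_square(even, odd)
            if len2 & 1:
                crc1 = gf2_matrix_times(even, crc1)
            len2 >>= 1
            if len2 == 0:
                break
            gf2_matrix_square(odd, even)
            if len2 & 1:
                crc1 = gf2_matrix_times(odd, crc1)
            len2 >>= 1
            if len2 == 0:
                break
        return (crc1 ^ crc2) & 0xFFFFFFFF

    data = b"Submersion cooling using mineral oil isn't new."
    a, b = data[:20], data[20:]
    # crc32(a) and crc32(b) could be computed on different cores in parallel.
    assert crc32_combine(zlib.crc32(a), zlib.crc32(b), len(b)) == zlib.crc32(data)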

Re:Fanless low power servers are the future (0)

Anonymous Coward | more than 4 years ago | (#31529584)

It's all about performance/power.

Also, in a good server the CPU should consume as big a share as possible of the total power consumed. That is, there should be as little "overhead" power consumption from other parts such as the power supply.

A huge 1kW supply with 70% efficiency is better than a tiny 10W supply with 65% efficiency.

Re:Fanless low power servers are the future (1)

Running Pinata (1166015) | more than 4 years ago | (#31529690)

A server with this [wikipedia.org] draws 7w under full load.

Re:Fanless low power servers are the future (0)

Anonymous Coward | more than 4 years ago | (#31530924)

Now imagine a server rack version. Multiple chips, with multiple cores each, plus a crapload of RAM (a few watts per package). In the rack pictured in the article, that'd be maybe 160 cores per rack. You'll still need to cool it. Even just saying it's 35W * 40 = 1400 watts. That's as much as an electric range or a big microwave.

Canada Anyone? (0)

Anonymous Coward | more than 4 years ago | (#31528766)

Can anyone tell me why we cannot simply move many of these servers to northern Canada? Canada has great fibre optic infrastructure, and the average temperature 8 months out of the year is well below a nominal temperature for CPUs. Blow the cold air in and push the warm air into administrative buildings. Cheap and green.

Re:Canada Anyone? (0)

Anonymous Coward | more than 4 years ago | (#31529006)

communist!

Re:Canada Anyone? (1)

h4rr4r (612664) | more than 4 years ago | (#31529124)

Ping times go way up?

Re:Canada Anyone? (1)

MoralHazard (447833) | more than 4 years ago | (#31529896)

Because the technology business is staffed with armies of amateurs who don't understand how to properly implement "lights-out management" at their datacenters. They somehow feel safer, warmer, and fuzzier because they can physically drive to their servers at 3am to press a reboot switch, or pop a CD-ROM into a tray.

Those of us who know better invest in per-port IP-KVM switching with virtual USB media support, plus remote power control. We can hard power cycle a crashed server from the beach, using MidpSSH on a BlackBerry.

If you're really slick, you can even wire a server's mainboard "clear CMOS" jumper to a remote-control relay card (or a USB bit whacker on another host), and you can clear the BIOS settings remotely, if necessary. That's overkill for most organizations, but it's awfully nice to know that you could just leave a stack of spare parts for the local hands-on monkeys, and never visit the datacenter again.

Personally, I hate datacenters. The over-dried air plays hell with the sinuses, and you have to suck down a liter of water per hour to keep your lips from getting chapped. And then the damn minimum-wage security guards act like they're doing you a huge favor when you have to get buzzed through the ManTrap every 30 minutes to piss and chug another Evian. And the noise leaves my ears ringing for hours after a visit. Fuck datacenters--if during an interview I get the feeling that the boss expects regular site visits, I get the hell out of there.

Mainframe (4, Interesting)

snspdaarf (1314399) | more than 4 years ago | (#31529196)

I seem to remember mainframes using distilled water for cooling decades ago. Not being a member of the correct priesthood, I was not allowed in the mainframe room, so I don't know how it was set up then. I have seen how oil-filled systems work, and I would hate to work on one. Nasty mess.

Re:Mainframe (1, Informative)

Anonymous Coward | more than 4 years ago | (#31530030)

Ah yes, the good old days...

As I remember it, there were a couple of levels of coolant used to cool off a mainframe. Some mystery liquid was pumped around through tubes that flowed by the chips needing cooling; it had all the necessary qualities, including being non-conductive in case of a leak. Then that liquid was pumped through a heat exchanger, where the heat got transferred to distilled water, which was then pumped to some cooling unit (up on the roof in our case).
I still remember when the level of distilled water got low: you'd open up the mainframe enclosure, take your gallon of distilled water from the grocery store, and pour it in. Very, very weird.

The idea is interesting, but data centers are also about density. These horizontal enclosures seem to waste a lot of room above the rack. And one rack must easily weigh a couple of tons. Imagine having to drain one of those things... I really don't see this taking off... C

How about using the building incoming water supply (1)

mark_osmd (812581) | more than 4 years ago | (#31529446)

I was wondering if it would cut cooling costs to use the building's main incoming water service as a heat sink. The part of the supply that feeds the water heater could be used to soak up heat from the servers, then passed on to the water heater, which would then have an easier job. Using only the feed to the water heater would avoid problems with warm tap water, though in some cases that wouldn't matter (do most people care if the cold taps in the bathroom produce warm water?). If you could use the whole cold supply, it would be a bigger heat sink.
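
A quick sanity check on how much server heat a building's hot-water draw could actually soak up (the daily draw and allowed preheat below are assumptions, not data for any real building):

    # Feasibility check: preheat the water-heater feed with server waste heat.
    CP_WATER = 4186.0          # J/(kg*K)
    daily_draw_litres = 2000   # assumed hot-water consumption for the building
    preheat_kelvin = 10.0      # assumed temperature rise added by the servers

    energy_per_day = daily_draw_litres * CP_WATER * preheat_kelvin  # J (1 L ~ 1 kg)
    average_power = energy_per_day / 86_400                         # W over 24 h

    print(f"Heat absorbed: {energy_per_day/1e6:.1f} MJ/day, ~{average_power:.0f} W continuous")

On those assumptions the hot-water feed absorbs only about a kilowatt of continuous load: fine for a closet of servers, but a real server room would still need the full cold supply or another heat sink.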

Re:How about using the building incoming water sup (1)

Shadyman (939863) | more than 4 years ago | (#31530354)

I've been pondering that for a while personally; I just don't see how to create an effective heat pipe from the processor or machine to the water pipe.

Might as well be... (0)

Anonymous Coward | more than 4 years ago | (#31529620)

These days if your company is underwater your servers might as well be too.

More details on Cray 2 cooling (from one who was there) (2, Interesting)

epiphyte(3) (771115) | more than 4 years ago | (#31530728)

The Cray 2 had a three-stage cooling system: the Fluorinert was pumped through a heat exchanger and dumped its heat into chilled water, which was either provided by the site's existing HVAC infrastructure or (more likely, since the dissipation was in the megawatt range) by a dedicated freon-based water chiller. The 5th-generation Cray Inc (as opposed to CCC) machine also used immersion cooling in a similar vein. Many other Cray machines (YMP, C90 and so on) used the same three-stage cooling system, but the modules weren't immersed in the Fluorinert; rather, the coolant flowed through channels in a thermally conductive plate sandwiched between the two boards of each processor or memory module. This wasn't a means of cooling the boards more cheaply; this was ECL logic... in those days it was the only way you could deliver the required power and have the thing not literally melt.