
Cooling Bags Could Cut Server Cooling Costs By 93%

timothy posted more than 4 years ago | from the or-other-exact-number dept.

Data Storage 135

judgecorp writes "UK company Iceotope has launched liquid-cooling technology which it says surpasses what can be done with water or air-cooling and can cut data centre cooling costs by up to 93 percent. Announced at Supercomputing 2009 in Portland, Oregon, the 'modular Liquid-Immersion Cooled Server' technology wraps each server in a cool-bag-like device, which cools components inside a server, rather than cooling the whole data centre, or even a traditional 'hot aisle.' Earlier this year, IBM predicted that in ten years all data centre servers might be water-cooled." Adds reader 1sockchuck, "The Hot Aisle has additional photos and diagrams of the new system."


Yes, but how much does it cost? (2, Insightful)

captaindomon (870655) | more than 4 years ago | (#30129996)

That's really nifty, and I'm sure it works ok and everything, but... how much does it cost?

Re:Yes, but how much does it cost? (3, Funny)

jgtg32a (1173373) | more than 4 years ago | (#30130378)

About 7% as much as whatever you are using today

Re:Yes, but how much does it cost? (3, Informative)

jaggeh (1485669) | more than 4 years ago | (#30130498)

That's really nifty, and I'm sure it works ok and everything, but... how much does it cost?

Figures cited by Iceotope show that the average air-cooled data centre with around 1000 servers costs around $788,400 (£469,446) to cool over three years. The Iceotope system claims to eliminate the need for CRAC units and chillers by connecting the servers in the synthetic cool bags to a channel of warm water that transfers the heat outside the facility. This so-called “end to end liquid” cooling means that a data centre, fully equipped with Iceotope-cooled servers, could cut cooling costs to just $52,560 - a 93 percent reduction, the company states.

Taking the above figures into account, as long as the cost to install is under the $200k figure, there's an incentive to switch.
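For anyone who wants to sanity-check that break-even reasoning, here is a rough back-of-the-envelope sketch in Python using only the figures quoted above; the three-year window comes from the quoted Iceotope numbers, while the install cost is a hypothetical placeholder, not a vendor quote.

```python
# Rough break-even sketch using the cooling-cost figures quoted above.
# The install cost below is a hypothetical placeholder, not a vendor quote.
air_cooled_3yr = 788_400      # USD, air-cooled cooling cost over three years
iceotope_3yr = 52_560         # USD, claimed cooling cost over the same period

savings_3yr = air_cooled_3yr - iceotope_3yr
print(f"Three-year cooling savings: ${savings_3yr:,}")         # $735,840
print(f"Average savings per year:   ${savings_3yr / 3:,.0f}")  # ~$245,280

install_cost = 200_000        # hypothetical up-front cost of switching
payback_years = install_cost / (savings_3yr / 3)
print(f"Payback period at that install cost: {payback_years:.1f} years")  # ~0.8
```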

Re:Yes, but how much does it cost? (1)

rhyno46 (654622) | more than 4 years ago | (#30130616)

It's cheap. Only 93% of whatever you are paying now.

Re:Yes, but how much does it cost? (4, Insightful)

Smidge204 (605297) | more than 4 years ago | (#30130664)

The idea that the mainboard components are sealed inside a liquid-filled compartment seems like a major point against the system. Extra proprietary vendor lock-in components mean extra costs of owning and operating, which probably offset any savings from cooling... if any.

I'm skeptical that it will significantly reduce cooling costs (Compared to, say, a chilled cabinet system) because the total cooling load stays the same. If you're generating a billion BTUs of heat you still need to remove a billion BTUs of heat. Any savings will only be from the higher energy densities water allows versus air and maybe initial installation.

Plus, based on their exploded view, there are no fewer than three heat exchanges before the heat even gets out of the cabinet: chip to liquid (via heat sink), submersion liquid to module liquid, module liquid to system liquid. Each time you go through an exchange, your temperature gradient goes up.

What they need is a system that is compatible with commodity components, to leverage low-cost hardware against lower-cost cooling. Why not fit water blocks directly to existing mainboard layouts and circulate chilled water from the main loop directly through them via manifolds and a pump at each rack? You can still enclose the mainboard and cooling block in a sealed, insulated compartment to eliminate condensation problems, but not being submerged means you can actually repair/upgrade the modules.
=Smidge=
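To make the "each exchange raises the gradient" point above concrete, here is a small illustrative sketch in Python; the thermal-resistance and power values are invented round numbers for illustration, not figures from Iceotope's design.

```python
# Illustrative only: heat-transfer stages in series add their thermal
# resistances, so the chip runs hotter for the same facility water temperature.
# All resistance values (K/W) below are made-up round numbers.
stages = {
    "chip -> heat sink -> submersion liquid": 0.10,
    "submersion liquid -> module liquid":     0.05,
    "module liquid -> system water":          0.05,
}

power_w = 100.0          # heat dissipated by the chip, watts (assumed)
water_temp_c = 30.0      # facility water temperature, Celsius (assumed)

chip_temp = water_temp_c + power_w * sum(stages.values())
print(f"Chip temperature with three stages: {chip_temp:.0f} C")    # 50 C

# Drop one intermediate stage and the same chip runs cooler:
chip_temp_two = water_temp_c + power_w * (0.10 + 0.05)
print(f"Chip temperature with two stages:   {chip_temp_two:.0f} C")  # 45 C
```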

Re:Yes, but how much does it cost? (3, Interesting)

fuzzyfuzzyfungus (1223518) | more than 4 years ago | (#30130966)

Their demo, at least, seems to be aimed at blades, so the inability to just slap any old motherboard into the system would not be a significant change.

As for water blocks, I suspect that all the various minor chips in the system would be problematic. Even if your cooling of the CPU, or even the top 3-5 chips by thermal output, is perfect, there are loads of other components that will heat up and die without airflow: CPU voltage regulators, northbridge, RAM, RAID controllers, ethernet, etc. You can't waterblock them all (at least not without a serious redesign that makes using commodity components impossible, or a plumbing scheme that would make Escher wince). Either you go with a hybrid waterblock/conventional air cooling system, which gives you the vices of both, or you have to go with the fluid bath as in this setup.

Re:Yes, but how much does it cost? (1)

Smidge204 (605297) | more than 4 years ago | (#30131320)

Well, what I had in mind is a flat plate (say, aluminum) with water channels in it. On this plate there are two or three protrusions, matching the main chip locations that need cooling, that are milled to physically contact the chips just like discrete heat sinks would.

You attach the mainboard to this plate like you would attach it to the inside of a normal computer case, only backwards, i.e. the screws go through from the back side instead of the component side. This puts the components very close to, if not directly contacting, the cooling body.

Put the whole thing in an air/moisture tight enclosure (maybe with some desiccant to dehumidify?) to keep out moisture and dust.

The whole "sandwich" assembly could still be a blade form factor. It's basically exactly the same setup as proposed in the article except the water block makes physical contact with the board components rather than using a fluid intermediary.
=Smidge=

Re:Yes, but how much does it cost? (1)

Alastor187 (593341) | more than 4 years ago | (#30133008)

Well, what I had in mind is a flat plate (say, aluminum) with water channels in it. On this plate there are two or three protrusions, matching the main chip locations that need cooling, that are milled to physically contact the chips just like discrete heat sinks would.

Similar to what you have described, conduction cooling plates have been used extensively in the military embedded electronics industry. You take an aluminum/copper block of metal and machine the face so that it "touches" multiple components on the circuit card. It doesn't actually touch the components, because of the mechanical tolerances; generally putty or a compliant gap pad is used to interface between the heatsink and the package.

This results in a very robust heatsink. It can conduct a significant amount of heat from the card and also stiffens the assembly, making the card more resistant to vibration and shock.

However, the solution tends to be very specific. The heatsink typically will only work for that specific card, and if any components are changed or moved the heatsink needs to be modified. This can be very costly. Of course, a more generic design could be made to fit multiple cards, but as it becomes more generic the benefits of this approach are diminished.

The notable thing about his approach is that once you get the heat off the card and to the cold-wall there are many options for removing the heat from there. For example the cold-wall could actually be a large heatsink fin-array, and that heatsink could be cooled by either air or water.

Re:Yes, but how much does it cost? (1)

Korin43 (881732) | more than 4 years ago | (#30131788)

No, see, all you need is 10,000 gallons of mineral oil, a waterproof server room, and a couple of rebreathers.

Re:Yes, but how much does it cost? (2, Funny)

rainmaestro (996549) | more than 4 years ago | (#30133142)

And mutated sea bass patrolling the aisles to maintain security?

Re:Yes, but how much does it cost? (1)

dindi (78034) | more than 4 years ago | (#30132768)

I believe that directly cooling components via liquid is way more effective than pushing some air around.

Think of air-cooled (loud and ineffective) vehicles compared to modern liquid-cooled vehicles, which circulate liquid inside the engine (not the combustion chamber, of course)...

I agree about the extra cost for the technology; however, you could still use the same components if you, for example, submerge things in oil, which does not harm components and does not conduct electricity.

Ugh. (2, Interesting)

Pojut (1027544) | more than 4 years ago | (#30130006)

For some reason, the filters at work won't let me view the article. Does it happen to mention how much the upfront cost for these bags is?

Re:Ugh. (2, Insightful)

BadAnalogyGuy (945258) | more than 4 years ago | (#30130028)

Like the unpriced bottle of wine at Applebees. If you have to ask...

Re:Ugh. (1)

Pojut (1027544) | more than 4 years ago | (#30130124)

It was more of a curiosity thing :-)

I'm wondering about the upfront costs vs. money saved over time after the initial investment.

Re:Ugh. (0)

Anonymous Coward | more than 4 years ago | (#30130934)

If you're at applebees at all, let alone thinking about ordering wine there, you really need to re-evaluate your priorities. Consider moving "eating out at a place that serves marginally edible food, rather than applebees" to a spot higher up on the list.

Re:Ugh. (1)

Pojut (1027544) | more than 4 years ago | (#30131016)

Applebees has its purpose...the Monterey Grilled Chicken that the location around the corner from my house serves is amazing (although the rest of their menu is indeed questionable.)

Re:Ugh. (1)

Mister Whirly (964219) | more than 4 years ago | (#30132584)

It's pretty hard to screw up a chicken breast sandwich. You can over/undercook the chicken, but short of that it is going to turn out okay.

So Applebees does have a purpose - keeping all the lowest common denominator non-foodies out of good restaurants I like. Making a simple sandwich and having it taste ok is nothing I would give a restaurant bonus points for, but I would certainly take points away if they can't even do that right. (I also knew a few people who worked at various Applebees when I was in culinary school, and from what I saw and heard about, Applebees will hire ANYONE as a line cook.)

Re:Ugh. (1)

Pojut (1027544) | more than 4 years ago | (#30132706)

Agreed, but the sauce they have on top of it is pretty tasty.

As far as hiring anyone as a line cook, that doesn't bother me nearly as much as my own cooking does...so no biggie :-P

Re:Ugh. (1)

Mister Whirly (964219) | more than 4 years ago | (#30132820)

Yes, I did fail to take into account one's own cooking abilities compared to whoever is working as a line cook at Applebees. It is very possible that it may still be a step up with some people. That said I also encourage my friends who aren't great cooks to take some basic cooking classes. A few have really benefited and are no longer afraid to try things out at home now. And one of my friends is now actually enrolled in a professional culinary school after taking a few simple "basics" type cooking classes I suggested.

Re:Ugh. (1)

Pojut (1027544) | more than 4 years ago | (#30133162)

I'm the black sheep in my family when it comes to cooking...most of them are great at it, but if it doesn't come in a box I'm pretty much useless.

On the flip side, I am the fastest typer, so I at least have that going for me :-)

Re:Ugh. (1)

cromar (1103585) | more than 4 years ago | (#30131110)

Don't order wine at Applebee's!!

Re:Ugh. (1)

FlopEJoe (784551) | more than 4 years ago | (#30131426)

What is "bottle of wine at Applebees" in Library of Congresses? thx.

Re:Ugh. (1)

camperdave (969942) | more than 4 years ago | (#30130606)

No mention of cost in the articles I skimmed; however, no mention of cool bags either. Actually, I'm more reminded of Pelican cases [thepelicanstore.com] than cool bags [made-in-jiangsu.com]. What they're doing is immersing a motherboard in an inert synthetic liquid, and sealing that in one half of a hard shell. They're running coolant water through the other half of the hard shell through a distribution unit in the rack. All of the coolant water runs through a heat exchanger, which is connected to the building's water cooling system.

So: sealed liquid-immersed motherboard -> sealed rack coolant flow -> building's water supply. No air cooling, just liquid to liquid to liquid, and the liquids are isolated from each other via heat exchangers.

Re:Ugh. (1)

Pojut (1027544) | more than 4 years ago | (#30130990)

Sounds good to me.

I'm still waiting for the day when it is feasible (physically and economically) to lay down small pipes for coolant directly onto a PCB or between PCB layers. That will bring along the true cooling revolution!

Re:Ugh. (1)

nschubach (922175) | more than 4 years ago | (#30132982)

You don't even need to do that... just make a motherboard with a plate behind it that shares the same mounting holes and has a gap for water. Seal it, fill it with water, and you have the same thing. If you want to put in transfer "ports" for CPU cooling blocks, you can. A motherboard manufacturer could do that now and include a CPU and chipset block with some standard nozzles for connecting GPU block hoses. Drop in a small pump and external heat sink and they could sell it to gamers and server builders today.

Re:Ugh. (1)

Cajun Hell (725246) | more than 4 years ago | (#30131524)

Weird that your filters are malfunctioning. But anyway, these cool new bags are only currently available through barter, in exchange for 2 kiddie porn magazines plus one copy of michaelangelo virus.

Re:Ugh. (1)

Pojut (1027544) | more than 4 years ago | (#30131768)

Not really, things are rather draconian around here... pharmaceutical call center. Oddly enough, Slashdot has always been accessible... it's likely because of someone in IT, lulz.

Excess Heat (1)

smitty777 (1612557) | more than 4 years ago | (#30130060)

TFA mentions using the excess heat to heat the building. I wonder how feasible it would be to actually recycle the heat to generate more power? Anyone have an idea on how much heat could be generated by your typical server farm?

Re:Excess Heat (3, Insightful)

von_rick (944421) | more than 4 years ago | (#30130088)

In winter you'd get quite a few kilowatt hours worth of heating if you route the dissipated heat properly.

Re:Excess Heat (1)

jgtg32a (1173373) | more than 4 years ago | (#30130404)

See my sig, also it keeps the study warm

Re:Excess Heat (1)

afidel (530433) | more than 4 years ago | (#30130356)

Not much at all, delta-t is too low to get any real efficiency.

Re:Excess Heat (0)

Anonymous Coward | more than 4 years ago | (#30130626)

IBM Zurich has developed technology along with a study to support its use in which waste server farm heat is transferred to water and the heated water then piped to the neighboring town to heat homes, which already use hot water heating. To make this process efficient, however, you need to maximize heat transfer to the fluid. When you have a system that emits heat in a nonuniform manner, the efficiency of transferring the heat to the fluid gets worse if you allow the heat to mix and become uniform before doing the transfer. That is, by the time the heat has made it to the outside of the server case, many sources of heat have already mixed together, reducing the ability to transfer this heat.

On the other hand, if you can bring the liquid very close to the actual sources of heat generation, the transfer can be much more efficient. The ideal situation is to use microchannels in the processor casing itself, with more channels or "spray nozzles" located over the parts of the chip that dissipate the most heat and fewer over the rest of the chip. The goal is for the chip to become uniform in temperature because you're pulling heat off at a rate proportional to how much is generated. This maximizes the amount of energy that is ultimately transferred to the fluid and available to heat something else.

In the IBM Zurich study, they noted that this scenario makes the most sense in cold climates where homes rely on hot water heating a large fraction of the year. One way to look at it is that the homes already rely on energy being turned directly into heat in order to generate hot water. The water-cooled servers merely replace a "dumb" source of heat with a source that happens to perform computing in the process but which can be almost as efficient in turning the original source of electrical energy into hot water.
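As a rough sense of scale for the heat-to-water transfer described above, here is an illustrative sketch; the heat load and temperature rise are assumed values, not figures from the IBM Zurich study.

```python
# How much water flow does it take to carry away a given heat load?
# Energy balance: Q = mdot * c_p * delta_T, rearranged for mdot.
# The heat load and delta_T below are assumptions for illustration.
heat_load_kw = 500.0   # waste heat from a hypothetical server room, kW
c_p = 4.186            # specific heat of water, kJ/(kg*K)
delta_t = 20.0         # temperature rise across the loop, K (e.g. 40 C -> 60 C)

mdot = heat_load_kw / (c_p * delta_t)   # kg/s of water needed
litres_per_min = mdot * 60              # ~1 kg of water per litre
print(f"Required flow: {mdot:.1f} kg/s (~{litres_per_min:.0f} L/min)")
# ~6.0 kg/s, i.e. roughly 360 L/min for this assumed load and delta-T.
```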

Re:Excess Heat (1)

Smidge204 (605297) | more than 4 years ago | (#30130724)

Very little, since you're dealing with very low quality heat. The hottest temp in your system is going to be the hardware itself (unless you're expending energy to pump it - then what's the point of trying to generate power from it?)

So if your max hardware temp is, say, 38C (100F) that's not good enough to generate any appreciable power from.

On the other hand, you probably will be pumping the heat to chill the system, and the rejected heat temp may be quite a bit higher - maybe as high as 75C. You can use that to heat your building's occupied spaces.
=Smidge=
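For a rough sense of why that heat is "low quality", the Carnot limit can be sketched with the temperatures mentioned above; the ambient sink temperature is an assumption for illustration.

```python
# Carnot upper bound on converting waste heat back into work:
# eta_max = 1 - T_cold / T_hot, using absolute temperatures.
t_hot = 38 + 273.15    # hardware temperature from the comment above, in kelvin
t_cold = 25 + 273.15   # assumed ambient heat-sink temperature, in kelvin

eta_max = 1 - t_cold / t_hot
print(f"Theoretical maximum conversion efficiency: {eta_max:.1%}")  # ~4.2%

# Even at the 75 C rejected-heat temperature mentioned above:
eta_75 = 1 - t_cold / (75 + 273.15)
print(f"At a 75 C source temperature: {eta_75:.1%}")                # ~14.4%
```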

Do I get at least a pair of rubber gloves? (5, Insightful)

Itninja (937614) | more than 4 years ago | (#30130086)

Seriously. What do we do when a RAM module or a backplane fails? Will a simple hardware swap become a task for those trained in hazmat handling? I do not want to be on the help desk when someone calls and says "Help! The servers are leaking!"

Re:Do I get at least a pair of rubber gloves? (3, Insightful)

dintlu (1171159) | more than 4 years ago | (#30130344)

You pull that server out of the farm and let other servers pick up the slack while you make repairs.

It's hype, based on the assumption that every server on the planet will be virtualized by 2019, and that the separation of hardware from the software that runs on it will allow IT departments ample time to offload work into "the cloud" while they swap out RAM.

Either that or it's made for large datacenters with multiple redundancies and enormous cooling costs. :)

Re:Do I get at least a pair of rubber gloves? (1)

robathome (34756) | more than 4 years ago | (#30130582)

Don't want to reduce your smug, but we're doing just this - restart services from the failed component, service the failed resource on a non-critical timeframe. The small shop with a half-dozen server boxes doesn't give a damn about cooling costs or this level of service, for the most part. If they do, they're likely going to someone else to satisfy that requirement, not doing it in house.

I've got a stack of servers in my datacenter that are allocatable on demand. Any unused server blade is a potential spare. If a production blade tips over with a CPU fault, memory error, or similar crash, its personality (FC WWNs, MACs, boot and data volumes, etc.) is moved to another blade and powered on through an automated process. Since the OS and apps live on the SAN, both VMs and dedicated server hardware can be abstracted away from the actual services they provide.

This is a product my company's selling to the market at large right now, and that I designed. Any of our IaaS customers can take advantage of the redundancy and fault tolerance built into the system. Even the six-server small IT shop.

Even then, a small IT organization can easily virtualize and provide some level of HA services in hypervisor clusters now. It's just not that hard anymore. Take the handful of servers you're running on now, replace them with an equal number of nodes in a VMM cluster, and go to town. Any of those systems fails, shift the load to the other nodes and effect repairs.

Re:Do I get at least a pair of rubber gloves? (4, Funny)

ByOhTek (1181381) | more than 4 years ago | (#30130636)

What, they won't? Oh man, this virtualization thing is brilliant.

So you virtualize a box, so that, if there's a hardware failure, the box can be brought back up on another machine, with minimal downtime! Also, you can run multiple systems on a box saving money!

We virtualized all our servers around here, went from about 200 servers to 8 machines, each with 16 CPU cores. It went well. So we decided to repeat the process. We then had 4 machines, each taking two VM hosts! It was great, more savings, more vodka for my drawer... So I thought, how could I make this even better...?

That's right, I put all four of THOSE VM hosts on a 486 in the back room that doesn't even need special cooling. Let me tell you, in terms of Vodka, this virtualization thing has been *quite* productive.

How could it not be all pervasive by 2019? I'm sure everyone will be virtualizing all of their VM hosts on VM hosts running on 486s by 2019!

Re:Do I get at least a pair of rubber gloves? (0)

Anonymous Coward | more than 4 years ago | (#30130846)

I went a step further... I virtualized the host box. Boom!

Re:Do I get at least a pair of rubber gloves? (0)

Anonymous Coward | more than 4 years ago | (#30132122)

My silly boy, why it's VM Hosts all the way down!

Re:Do I get at least a pair of rubber gloves? (1)

IICV (652597) | more than 4 years ago | (#30133998)

I did you one better - I have my Windows VM running on a Linux VM, and the Linux VM is running on the Windows VM. It was kind of tricky to set up at first, but now I don't even need hardware! There's just a spinning matrix of computation in the basement. My dog is afraid to go in there.

Re:Do I get at least a pair of rubber gloves? (1)

the_lesser_gatsby (449262) | more than 4 years ago | (#30132848)

That's happening right now. I'm seeing the same performance from a virtualized 4-CPU 16GB machine as a real one. (Didn't use to be like that).

Re:Do I get at least a pair of rubber gloves? (1)

perdera (1175261) | more than 4 years ago | (#30130460)

Yeah, I'll keep my FRUs, thanks.

Re:Do I get at least a pair of rubber gloves? (1)

interploy (1387145) | more than 4 years ago | (#30130746)

TFA states it's an inert liquid, so hazmat need not be involved. Actually, it sounds an awful lot like an earlier story [slashdot.org] concerning a full-immersion prototype desktop PC.

A few questions (4, Interesting)

Reason58 (775044) | more than 4 years ago | (#30130094)

Won't this cause accessibility issues for the administrators who have to support these servers? Additionally, Google's evidence supports the idea that warmer temperatures are better for the life of some components, such as hard drives. Last, this may work well for traditional servers, but I fail to see how this can be made to support a large SAN array or something similar.

Re:A few questions (1)

afidel (530433) | more than 4 years ago | (#30130494)

Google and just about everyone else is going to the model where you never touch the server after install. Also, their evidence shows that too-cool temperatures negatively affect HDD life; that's quite different from saying warmer temperatures are better. It was also the area of the study with the fewest datapoints, so the evidence might not be fully accurate.

Re:A few questions (1)

turtleshadow (180842) | more than 4 years ago | (#30132232)

In the data centers I've been in, the SAN array and tape library are in a totally different area, level, or building than the computing farm. This is because of security and accessibility for the librarians and vendors of long-term storage. You use fibre or some other technology to connect the two areas.

With enough tapes and disk it means a tech or librarian is always walking around handling media, and I'd rather they not touch my server cabinet inadvertently. Being one company, we don't cage intra-department unless it's mission critical.

The drives that contain the code to start the system, or fast local space, could very easily be insulated in some other part of the cabinet. The proposed system is geared to big systems which don't require one CPU and one disk to start individual CPUs. The blade is configured with a channel to bootstrap from a disk or disk image somewhere else in the complex. Anyhow, with an 8-32GB MicroSD you can put that chip into an external USB port and configure a boot from that.

The equilibrium of the system is the most important thing. Large swings of temperature and humidity kill rotating media and robotic tape libraries. These occur when service doors are opened for substantial periods of time for a "hot swap component" removal or extended repairs which involve a cool-down of the mechanical parts.

My most pressing admin question is: how does the telemetry come in from the complex to warn me of a heat/pump/flow failure? Is it easy to use, is it secured (i.e. no one can snmp/telnet to a dumb pump and shut it off), and is it accountable, with a robust logging system I can integrate into my business?

Super cool! (1)

stakovahflow (1660677) | more than 4 years ago | (#30130106)

Super cool! ^_^ If they made those for laptops, I'd be all over it. My wife likes to use her HP as a lap warmer, with a blanket... But there I go thinking again... --Stak

great idea (1)

mikey177 (1426171) | more than 4 years ago | (#30130112)

We all know what happens when you mix water and server rooms: http://www.youtube.com/watch?v=1M_QTBENR1Q [youtube.com]. Better call up Noah.

Re:great idea (1)

Reason58 (775044) | more than 4 years ago | (#30130170)

We all know what happens when you mix water and server rooms: http://www.youtube.com/watch?v=1M_QTBENR1Q [youtube.com]. Better call up Noah.

"The Iceotope approach takes liquid – in the form of an inert synthetic coolant, rather than water – directly down to the component level," the company said.

Re:great idea (1)

darthwader (130012) | more than 4 years ago | (#30132814)

Actually, this technology would make the data center better protected from a flood. Since each blade is sealed in its own bubble of coolant, if the entire rack is underwater because of a flood, the blades would be protected. Maybe some of the external components like the cooling pumps might be damaged, but most of the contents of the rack would be fine.

I'm not saying they could continue to operate through the flood, but after the water is gone and the mess cleaned up, you replace the UPS and fix the external things which are damaged, and you could get going again without having to actually replace the computers which are in the rack.

Coming back full circle (1)

hwyhobo (1420503) | more than 4 years ago | (#30130128)

Grandma would be proud of her cold compress technology.

Re:Coming back full circle (3, Funny)

Digestromath (1190577) | more than 4 years ago | (#30132482)

Next up, cooling servers with a bag of frozen peas?

New resume requirements... (1)

Firemouth (1360899) | more than 4 years ago | (#30130354)

Interviewer: "Well, Mr. Robinson, while your resume is quite impressive, you just don't have everything we're looking for to fill the opening on our server maintenance team."

Mr. Robinson: "What do you mean? I have a Masters in Computer Science, A+, MCSE, CCNA, CISSP, and 23 years of relevant experience. What am I missing??"

Interviewer: "You see, we're running that new server cooling technology you might have seen on Slashdot. I didn't see anything about being SCUBA certified on your resume."

Re:New resume requirements... (1)

Antique Geekmeister (740220) | more than 4 years ago | (#30130586)

Oh, that's listed under hobbies, specifically "cave diving".

Quick Release (4, Informative)

srealm (157581) | more than 4 years ago | (#30130408)

The problem with all this is you need a good piping and plumbing system in place, complete with quick-release valves, to ensure you can disconnect or connect hardware without having to do a whole bunch of piping and water routing in the process. Part of the beauty of racks is you just slide in the computer, screw it in, plug in the plugs at the back, and you're done.

I'm not saying it's impossible, but just building a new case, or blade, or whatever isn't going to do it - you need a new rack system with built-in pipes and pumps, and probably a data center with even more plumbing, with outlets at the appropriate places to supply each rack with water. This is no small task when trying to retrofit an existing data center.

Not to mention that you have to make sure you have enough pressure to ensure each server is supplied water from the 'source'; you cannot just daisy-chain computers, because the water would get hotter and hotter the further down the chain you go. This means a dual piping system (one for 'cool or room temperature' water and one for 'hot' water). And it means adjusting the pressure to each rack depending on how many computers are in it and such.

The issues of water cooling a data center go WAY beyond the case, which is why nobody has really done it yet - sure, the cost savings are potentially huge, but it's a LOT more complicated than sticking a bunch of servers with fans in racks that can move around and such, and then turning on the A/C. And there is a lot less room for error (as someone else mentioned, what if a leak occurs, or a plumbing joint fails, or whatever? Hell, if a pump fails you could be out a whole rack!).
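To put a rough number on the "water gets hotter down the chain" point, here is an illustrative sketch; the per-server power and flow rate are assumed values, not figures from the article.

```python
# Why daisy-chaining servers on one coolant line is a problem: each server
# raises the water temperature, so the last one in line sees much warmer water.
# Server power, flow rate and inlet temperature are assumed illustrative values.
c_p = 4186.0            # specific heat of water, J/(kg*K)
flow_kg_s = 0.05        # coolant flow through the chain, kg/s (~3 L/min)
server_power_w = 300.0  # heat dumped into the loop by each server, W
inlet_temp_c = 20.0     # supply water temperature, Celsius

temp = inlet_temp_c
for n in range(1, 11):  # ten servers in series on one line
    temp += server_power_w / (flow_kg_s * c_p)
    print(f"Water temperature after server {n:2d}: {temp:.1f} C")
# Each server adds ~1.4 C here; by server 10 the water is ~14 C warmer,
# which is why parallel supply/return manifolds (dual piping) are used instead.
```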

Re:Quick Release (2, Interesting)

FooAtWFU (699187) | more than 4 years ago | (#30130450)

If you have enough space for a spare rack, and you have a sufficiently virtualized infrastructure, you could just swap in the spare and do rack-at-a-time maintenance. If you're really saving 93% on cooling that could be worth it. (Maybe leave your SAN boxes and other less-failable components on an old air-cooled setup.)

Re:Quick Release (1)

sexconker (1179573) | more than 4 years ago | (#30131188)

Not to mention the very simple fact that when something goes wrong with the servers, you have a team of guys ready to fix it in no time flat.

When something goes wrong with the plumbing no one can touch it unless they're a licensed plumber. He'll take a few days to get there and a few days to do the job, AND he'll charge you more than you paid your server guys in the same time frame.

Re:Quick Release (1)

TheGreatDonkey (779189) | more than 4 years ago | (#30133948)

I agree. Water cooling of a data center has a history. The only thing I see here is that they are attempting to bring the water to each blade in a small, scalable, "standardized" manner.

I worked for a large investment company some time ago, and we had an "older" data center that was originally designed to house mainframes and used a pool to hold water for cooling. A side benefit of the pool was that employees could use it for swimming, and the water was at quite an agreeable temperature. The benefit here (besides the kosher swimming) is that component failure impact can be minimized, and cross-contamination much more controlled. It was converted over the years to support the servers of today, and as of about 7-8 years ago they were replacing some of the main pumps and extending its life. The nice thing in the updated design was that standard commodity x86 HP servers were being used in the room, requiring no fancy server hardware redesigns.

Re:Quick Release (1)

jbengt (874751) | more than 4 years ago | (#30134186)

The issues of water cooling a data center go WAY beyond the case, which is why nobody has really done it yet . . .

They've only been doing direct water cooling of data center computers since the 1950s. Though the last time I worked on one was in the 1980s, and it was mainframes, not PCs/blades.

Water cooling on that size is no small feat... (3, Interesting)

wandazulu (265281) | more than 4 years ago | (#30130512)

The ES/9000 that I had contact with was a series of cabinets that were all water-cooled from the outside in...it was a maze of copper pipes all around the edges and back and looked like a fridge. When you opened a cabinet, you could feel a blast of cold air hit you.

It was no trivial feat to do this: they had to install a separate water tank and some generators (I remember one of the operations guys pointing to a Detroit Diesel generator outside in the alley and saying it was just for the computer's water system), and they moved a bathroom (the only water they wanted around the computer was the special chilled stuff). I can distinctly remember seeing the manuals(!)... 3-inch thick binders with the IBM logo on them, and all they were for was the planning and maintenance of the water system.

No wonder it took almost a year to install the machine.

Re:Water cooling on that size is no small feat... (1)

tuomoks (246421) | more than 4 years ago | (#30131510)

It's not trivial, as you say, but once done (correctly!) it can be very flexible. I "managed" (as a systems programmer who had to accept all the designs) a data center growing from one water-cooled system to several mainframes, and installing the "next" system only took two days, everything included. Yes, we had extra space/capacity - the capacity plans had 5-10 year estimates (a big fight, but it paid back later!). Also did that for a couple of customers later on.

Liquid (water or other, metals, etc.) cooling is more efficient than air can ever be. For small systems air may be enough, but for any serious power, the nearer the heat source you can get with good heat transfer, the better and the cheaper it gets. It's just physics.

Yes, there were "design" manuals from IBM, Hitachi, Matsushita, Amdahl, etc. - haven't seen those in years.

One problem which came up - you can be too efficient and start getting "over freeze", even though we used the heat for other things. We had one incident when everything started freezing up even while pumping the cooling to some huge buildings, garages, a big print shop, etc. I just trusted the engineering calculations too much - be aware!

Water is a hassle (4, Informative)

BlueParrot (965239) | more than 4 years ago | (#30130536)

I work with particle accelerators that draw enough power that we don't have much choice but to use water cooling, and even though we have major radiation sources, high voltage running across the entire place, liquid helium cooled magnets, high power klystrons that feed microwaves to the accelerator cavities etc... the only thing that typically requires me to place an emergency call during a night shift is still water leaks.

Water is just that much of a hassle around electronics. Even an absolutely minor leak can raise the humidity in a place you really don't want humidity, it evaporates and then condenses on the colder parts of the system where even a single drop can cause a short circuit and fry some piece of equipment. After it absorbs dirt and dust from the surroundings it starts attacking most materials corrosively, which may not be noticed at first but gives sudden unexpected problems after a few years. If you don't keep the cooling system itself in perfect condition valves and taps will start corroding and you get blockages. Maintenance is a pain because you have to power everything down if you want to move just 1 pipe etc...

I just don't see why you would go through the hassle with water cooling unless you actually have to, and quite frankly if your servers draw enough power to force you to use water for cooling then you're doing something weird.

Re:Water is a hassle (1)

jpmorgan (517966) | more than 4 years ago | (#30131120)

I think that's the advantage of this system. You are never going to avoid leaks, but since computers are immersion cooled and in their own sealed boxes, they are no longer sensitive to environmental issues. At that point, leaks become an annoyance instead of an emergency.

Re:Water is a hassle (0)

Anonymous Coward | more than 4 years ago | (#30131130)

I just don't see why you would go through the hassle with water cooling unless you actually have to, and quite frankly if your servers draw enough power to force you to use water for cooling then you're doing something weird.

...like eating the instrument data of a really huge physics experiment? [wikipedia.org]

Re:Water is a hassle (0)

Anonymous Coward | more than 4 years ago | (#30131370)

You may want to call the folks behind DRM. I've read elsewhere on Slashdot that they've been working on technology to make water not wet. It may come in very handy for your application.

Re:Water is a hassle (2, Insightful)

tuomoks (246421) | more than 4 years ago | (#30131852)

Water (and liquid coolants, even metals) can be a hassle if not designed for correctly. I have had my experiences with water-cooled systems, but mainly with "over efficiency" - well, and one burst which shouldn't have happened (LOL).

One thing I have learned (from my son): in cars, replacing everything with military and/or airplane-grade fittings, valves, tubes, etc. makes life much easier. Not much more expensive, but it pays back very fast. If I had known that (much) earlier, instead of accepting engineering ("good enough") / accounting ("cheap enough"), my life would have been easier - but maybe it's a learning process?

Prior Concepts (1)

Demonantis (1340557) | more than 4 years ago | (#30130574)

Reminds me of the Sapphire fire suppression system [ansul.com], just applied all the time. Or the sealed mineral oil boxes people seem to put computers in. The system could be huge if they apply it right and it actually realizes a 93% reduction in energy cost (I have my doubts). The largest issue I have heard of is that it is tricky, but not impossible, to move the heat away from the components once they heat up the liquid.

Cray-2 (3, Insightful)

fahrbot-bot (874524) | more than 4 years ago | (#30130592)

"The Iceotope approach takes liquid - in the form of an inert synthetic coolant, rather than water - directly down to the component level," ... "It does this by immersing the entire contents of each server in a "bath" of coolant within a sealed compartment, creating a cooling module."

Hmm... The Cray-2 [wikipedia.org] was cooled via complete immersion in Fluorinert [wikipedia.org] way back in circa 1988. I was an admin on one (Ya, I'm old). So, this is a bit different, but certainly not ground-breaking.

Re:Cray-2 (1)

Areyoukiddingme (1289470) | more than 4 years ago | (#30130870)

Oh it's groundbreaking. It's unique. I know, because they have patents on it. 1988 doesn't exist. It's all in your head. They invented something new and innovative so they patented it! Duh.

*cough*

Too early yet for that much sarcasm?

Re:Cray-2 (3, Interesting)

jcaren (862362) | more than 4 years ago | (#30131004)

The Cray's full-immersion coolant model hit a big problem: the Coanda effect.

This is where the layer of fluid near the actual component flows much slower than the bulk flow - in layers, slowing down exponentially as they get closer to the stationary components.

For air this is not too much of a problem - only a very fine layer of stationary air sits over the components, and it does not affect cooling. But with liquids the effect is both noticeable and severely impacts coolant flow over hot surfaces - with some then-"next gen" Cray chips actually boiling the fluid. As today's chips run much hotter and generate a lot more heat than those Cray chips, I can see this being a major problem today...

Cray's fix for this was to move from full fluid immersion to immersion in droplets "injected" using a car fuel injector. This got everywhere and evaporated, taking the heat away from components.

Rumor has it that during development, engineers bought fuel injectors for a wide range of cars; the ones for a certain Porsche worked best, so they bought the entire mid-west stock of fuel injectors for that car and used them...

I remember staff at Cray giving away Porsche-style sunglasses with "Cray" written on them instead of "Porsche", and when I enquired why, the above was the tale I was told by the sales staff.

Whether true or not is something else - the Cray sales staff in those days had a seriously odd sense of humor...

Re:Cray-2 (1)

hey (83763) | more than 4 years ago | (#30131020)

Everything *trickles* from Supercomputers/Mainframes eventually.

Cold mineral oil. (1)

ground.zero.612 (1563557) | more than 4 years ago | (#30130634)

Sixteen years ago, at the end of my high school career, I was very into overclocking (had multiple Celeron 300As). With Peltier cooling I was able to run a 300MHz CPU at 450MHz with rock-solid stability (ran things like Prime95 24 hours a day for weeks). People were starting to experiment with liquid cooling commodity white-box computers.

One of the more interesting applications I saw was an old styrofoam cooler converted into a PC case. All components were submerged in a bath of cold mineral oil. I remember thinking that the data centers of the future would require SCUBA-certified technicians in dry suits to swim down to the racks and swap out the broken module. Maybe I was thinking too grand, and this would be more feasible as submerged modules in aquarium-like tanks instead of racks.

Re:Cold mineral oil. (1)

krystar (608153) | more than 4 years ago | (#30130816)

Yea but then u'd have to worry about technicians swimming along and going "hey...did it just get really warm over here?"

Re:Cold mineral oil. (1)

zippthorne (748122) | more than 4 years ago | (#30132516)

How did you get a 300 mhz CPU in 1993?

Re:Cold mineral oil. (1)

osu-neko (2604) | more than 4 years ago | (#30133016)

How did you get a 300 mhz CPU in 1993?

He lives in a temporal anomaly. It's been sixteen years for him since the Celeron 300A first became available in 1999, so some localized effect is causing time to run a bit faster there.

Doesn't look practical (1)

YesIAmAScript (886271) | more than 4 years ago | (#30130642)

Look at the cross section photo. This dispenses completely with convection (air flow) and instead designs the system for direct physical contact from the heat sink to the components. Then the water flows behind the heat sink to take the heat away from that.

The problem is that this means you have to make a heat sink with varying-height "fingers" on it to meet every component that produces heat (which is all of them), which means every time you change a component you have to redo the heat sink. And of course if you change the motherboard you also have to. With components available from multiple sources (second sourcing) and specs changing mid-model for cost reduction, you can expect the profile of the heat sink to change frequently during the life of a model. And of course, you probably need to put heat sink goop on a lot of components, which might create enough surface tension that you'd have trouble getting it apart to service it.

Although this is workable, it seems unlikely it would ever be cost-effective. It'd probably be smarter to have certain major (heat-producing) components cooled by direct contact and a plenum for the rest that uses convection to get heat to a radiator-like assembly on the heat sink (except it isn't radiating here, it's absorbing heat).

I think water cooling is likely for servers in the future. Even end-to-end water heat exchange to the atmosphere like this proposes, instead of transferring the heat to the room air and then taking it out with air handlers might be the future. But I'm not sure these guys have the right strategy at the bottom level.

Re:Doesn't look practical (1)

Destined Soul (1240672) | more than 4 years ago | (#30130906)

Actually, it looks fine after some initial glances. They put up a video on youtube here [youtube.com] where the interior is liquid filled for direct contact with all of the components, then a secondary liquid system (it seems) outside uses the plating case as a heat exchanger and takes it away to the central lines out of the cooling system.

Re:Doesn't look practical (1)

YesIAmAScript (886271) | more than 4 years ago | (#30132670)

That's noticeably worse. Component manufacturers haven't tested their components for extended immersion in liquid, even relatively inert ones. This would drive the cost of the device through the roof.

In addition, I know people like to think of heat transfer as radiation or conduction, but convection is the biggest factor, even in a liquid cooled system, this is why the liquid circulates instead of just sits there. And in this case, the liquid is going to just sit there, the area on the motherboard side of the heat sink has no circulation. You really need the liquid to get close to the components to take up heat and far away to release it, instead of trying to conduct the heat through the liquid.

Re:Doesn't look practical (1)

hey (83763) | more than 4 years ago | (#30131050)

That was my thought too. I can see heat sinks with liquid pipes in them in the future, plus regular air cooling, i.e. a hybrid solution.

Re:Doesn't look practical (0)

Anonymous Coward | more than 4 years ago | (#30131962)

You totally missed the point... The entire board is submerged in a liquid, so every single component down to the smallest part on the board is surrounded 100% by a "heat sink". The liquid removes heat from the components, the water removes the heat from the special liquid, and the hot water can be used to heat the building, etc. There is no air-flow because there is no air. The height or layout of the motherboards could vary drastically and if the right liquid is used, you shouldn't even need special parts...

weight? (4, Interesting)

Clover_Kicker (20761) | more than 4 years ago | (#30130676)

How much does a rack full of water-cooled blades weigh?

Never thought I'd see the UPS become the lightest thing in the server room.

Re:weight? (0)

Anonymous Coward | more than 4 years ago | (#30132492)

How much does a rack full of water-cooled blades weigh?

Less than a rack full of lead batteries.

Hmmm, so what happens when internals break? (1)

darkmayo (251580) | more than 4 years ago | (#30130694)

With all those layers it doesn't seem that sliding one of these out and quickly swapping some RAM or any other part is going to happen.

As well, do these Iceotope guys actually make server hardware, or just the cooling specs? Who do they get their guts from, or are they just advertising and hoping that guys like HP, IBM or Sun (well, maybe not Sun) decide to design their next generation of servers with this in mind?

I'd like to see how easy replacement is. It doesn't look like there is a lot of room for other bits either. I only saw a 1U model, but do these guys have the same gear for larger, more beefy servers? How about blades?

Lastly, how much does one of these things weigh?

Re:Hmmm, so what happens when internals break? (1)

Verypc (1088231) | more than 4 years ago | (#30131618)

If the internals die, you just replace the blade and send the old one back.

The motherboard in the video appears to be a Supermicro Twin 1U.

It looks like a blade, not a 1U.

I'd estimate it's going to be just over 1,000 kg for a full cabinet.

Cray XT5 "Jaguar" (1, Informative)

Anonymous Coward | more than 4 years ago | (#30130794)

The #1 on the top 500 supercomputer list [cray.com] is using water cooling as well (in combination with phase-change cooling). Watercooling whole racks can be done. The only difference from TFA is that it also adds immersion cooling [pugetsystems.com]. Immersion cooling has been found to be superior for cooling, but comes with (obvious) considerable maintenance problems. The video [cray.com] for this machine shows more or less standard water cooling blocks on the processors, along with various plumbing that keeps the machine chilled.

night time freezing of liquid would save more (2, Interesting)

Locutus (9039) | more than 4 years ago | (#30130796)

The technique of using cheaper off-peak energy to freeze liquid and then using that liquid for daytime cooling loads is already used in a very few places. Combine that technique with the direct server cooling mentioned in the article and... wait a minute... they are already claiming a 93% cooling cost cut? Either there is huge waste now or they're already expecting to use off-peak energy. But then again, maybe the remaining 7% is still large enough to merit further savings.

Direct cooling makes far more sense than cooling rooms like I keep seeing around now.

LoB

What about the benefits to Joe User? (1)

butabozuhi (1036396) | more than 4 years ago | (#30130802)

There are probably great economies of scale for datacenters, but what about Joe User? The article wasn't clear whether "included in the manufacturing process" would include consumer-level systems. Just thinking that cost savings for datacenters are great, but I'd be really interested if it helped out the regular consumer (not to mention what kind of operational issues this might bring up).

Almost... (1)

hatemonger (1671340) | more than 4 years ago | (#30130810)

There's a joke somewhere about your server being so ugly you have to put a bag over it before you go inside, but I can't quite work it. Help?

Re:Almost... (1)

Knx (743893) | more than 4 years ago | (#30131264)

Hey! I think there's also a joke somewhere about the bag being perforated, bringing a new sense to a "system overflow" but I can't quite work it out either...

Maybe we could create a group on Facebook and have fun with our not-quite-working-slashdot-jokes?

I thought we'd finally learned... (1)

pla (258480) | more than 4 years ago | (#30130888)

Yet another way to increase the density of server farms... Useful if you must grow your servers in Manhattan, a waste of money otherwise.

Among the many great things the internet has brought us (*cough*porn*cough*), "location-independence" ranks pretty high up there. Your servers don't need to all fit in one cargo container that runs so hot it requires LN cooling. For all it matters, you could put them in a single line of half-racks on a mountain ridge, cooled naturally by the wind (with some care to keep them rain-free, of course).

I thought we'd learned our lesson in that regard when tests last year by MS and Intel (not to mention Google's truly inspiring data center designs) showed a substantial payoff by letting servers run hotter and less densely packed. Silly me.

OK so then how do you explain this? (1)

kaizendojo (956951) | more than 4 years ago | (#30131200)

Source for excerpt below [datacenterknowledge.com]

"Intel set up a proof-of-concept using 900 production servers in a 1,000 square foot trailer in New Mexico, which it divided into two equal sections using low-cost direct-expansion (DX) air conditioning equipment. Recirculated air was used to cool servers in one half of the facility, while the other used air-side economization, expelling all hot waste air outside the data center, and drawing in exterior air to cool the servers. It ran the experiment over a 10-month period, from October 2007 to August 2008.

The temperature of the outside air ranged between 64 and 92 degrees, and Intel made no attempt to control humidity, and applied only minimal filtering for particulates, using "a standard household air filter that removed only large particles from the incoming air but permitted fine dust to pass through." As a result, humidity in the data center ranged from 4 percent to more than 90 percent, and the servers became covered with a fine layer of dust.

Despite the dust and variation in humidity and temperature, the failure rate in the test area using air-side economizers was 4.46 percent, not much different from the 3.83 percent failure rate in Intel's main data center at the site over the same period. Interestingly, the trailer compartment with recirculated DX cooling had the lowest failure rate at just 2.45 percent, even lower than Intel's main data center."


And although the failure rate was similar, the electricity bills were night and day. So I'm not buying into this unless you're running a HUGE data warehousing op with more transactions than WalMart...

Server standardization... (1)

HockeyPuck (141947) | more than 4 years ago | (#30131294)

The problem with this is that it requires server manufacturers to standardize their designs. There was talk a few years ago about standardizing Bladeservers. I don't see this happening as there's too much control in the bladecenter chassis, switch interfaces, management abilities etc. Plus why would IBM want to sell an empty chassis and then let the customer fill it with HP C-Class blades?

Even racks themselves from IBM/HP/Dell/EMC/netapp/Sun aren't standardized, other than they are 19" wide. This is why if you mix vendors in the same rack you've got to adjust the depth of the rails.

As for going out and buying third party cabinets (APC for example), some of these have complex ductwork associated with them which makes them take up more than one tile of width.

These guys probably want one of two things: either IBM/HP/Dell license their technology, or someone buys the company. Also, last I checked, there's not a large amount of room in my servers.

Costs of water (1)

stimpleton (732392) | more than 4 years ago | (#30131678)

"Earlier this year, IBM predicted that in ten years all data centre servers might be water-cooled."

The costs of cooling air will be replaced by the costs of obtaining water. This system will not be for "water challenged areas".....Californy, etc.

Re:Costs of water (1)

wiedzmin (1269816) | more than 4 years ago | (#30132128)

It doesn't have to be running water - although it would definitely be cheaper to just pick up cold water from the pipeline and dump the hot water into the drain, chances are - systems like that will operate on a closed loop, which would mean virtually one-time costs of obtaining water. However I am a lot more inclined to believe that data centers will become external-air cooled, with which Intel is experimenting right now, rather than liquid-cooled... especially in colder climates.

Not for the real world (1)

wiedzmin (1269816) | more than 4 years ago | (#30132064)

In all honesty, this being a cool concept and all, it would not work in the real world because a) it cannot be retrofitted to existing systems and b) it requires the use of proprietary, unknown hardware. How many large companies are going to switch from tried and trusted server providers (like HP, IBM, Dell and, as of late, Cisco) in favor of something that, well, looks nifty? Their only shot at this not becoming vaporware is to try to sell the technology to a major server manufacturer, and even then I doubt it will work - imagine all the effort it would require to retrofit your existing data center for liquid cooling... liquids and server rooms don't go well together.

Yay! Water and electricity! (1)

arctic19 (1578959) | more than 4 years ago | (#30132114)

What could possibly go wrong?

Data Centre/Center? (0)

Anonymous Coward | more than 4 years ago | (#30132152)

Ok, so they are British and they spell 'center' with the 'er' the other way around. Why don't they spell server as 'servre'?

Re:Data Centre/Center? (2, Informative)

osu-neko (2604) | more than 4 years ago | (#30133220)

Ok, so they are British and they spell 'center' with the 'er' the other way around. Why don't they spell server as 'servre'?

First of all, it would be 'serveur', not 'servre'. And its use is too new to be one of those words in which the French spelling is retained from the days of the Normans. Incidentally, even in America we fail to reverse the 'er/re' in some words - consider 'acre', 'massacre', and 'mediocre' - so we're not exactly consistent either...

I'm no expert... (1)

Kleppy (1671116) | more than 4 years ago | (#30132958)

...but for my short time on this earth I've seen water cooling as the superior cooling method, so much so that I ran it myself. Sony even put it in their system [pcworld.com] once, but it was more of a passive system than a pump/coolant system. A big name using it right out of the box. I don't know why leaks would be that big of an issue, as this isn't a high-pressure water system; being a closed loop, it is going to be a very low-pressure system unless you are trying to blow water through it as fast as you can. If it moves too fast, it will create a layer of stagnant coolant just off the surfaces and degrade cooling. Low (pressure) and slow (moving) should yield the best cooling. No need to move 2000 L/h unless you are using one pump for many heat sources to maintain flow, but I wouldn't put that many devices on one pump.