
Small Startup Prevails In Server Cooling 'Chill Off'

samzenpus posted more than 3 years ago | from the cool-as-ice dept.


miller60 writes "A small startup has shown exceptional energy efficiency in a data center 'chill off' comparing server cooling technologies. Clustered Systems posted the best numbers in the 18-month vendor evaluation sponsored by the Silicon Valley Leadership Group. The Menlo Park, Calif., company built a prototype server that uses no fans and cools processors with a cold plate whose embedded tubing is filled with liquid coolant. The testing accidentally highlighted the opportunity for additional energy savings when the Clustered Systems unit continued to operate during a cooling failure that raised the chiller plant water temperature from 44 to 78 degrees F."

45 comments

"Silicon Valley Leadership Group" (4, Informative)

Animats (122034) | more than 3 years ago | (#33926278)

The "Silicon Valley Leadership Group" is kind of a joke. It used to be the "Silicon Valley Manufacturing Group", the lobby for the semiconductor industry, but after most of the semiconductor plants closed, it lost focus.

Re:"Silicon Valley Leadership Group" (1, Interesting)

Anonymous Coward | more than 3 years ago | (#33930990)

Why is it a joke? It used to be called Silicon Valley Manufacturing Group because it represented manufacturers. Now those same manufacturers are primarily designers who offshore their manufacturing, so they changed the name of the lobbying group. The group still represents a number of very powerful and successful firms, I fail to see how they are "a joke".

Imagine... (0, Redundant)

ArAgost (853804) | more than 3 years ago | (#33926294)

...a Beowulf cluster of these!

Re:Imagine... (4, Interesting)

CarpetShark (865376) | more than 3 years ago | (#33926634)

I think the appropriate expression is "Imagine having a truckload of shares in these guys, when google buys them out."

Re:Imagine... (-1, Offtopic)

Anonymous Coward | more than 3 years ago | (#33926904)

That does seem to be the going ethic in business these days. What ever happened to "I'm going to make a better way and out-compete the established players"? Is it any wonder the MAFIAA thinks they can have government-mandated control of their market niche?

Re:Imagine... (1)

AmericanInKiev (453362) | more than 3 years ago | (#33927044)

Really? Is heat conduction by means of conductive materials and convection by moving liquids patentable? I've got this round thing that reduces friction!

Re:Imagine... (1)

dr2chase (653338) | more than 3 years ago | (#33927934)

Come on, did you Watch TF Video? Heat pipes are not novel, but this is not just heat pipes. Engineering it all to fit into a standard rack, with the modifications to bring the heat up to the top of each separate unit, plus the "compliant heat conducting cover surface": that's interesting.

Re:Imagine... (-1, Redundant)

ArAgost (853804) | more than 3 years ago | (#33926648)

“Slashdot, where the second comment to a post will be modded as redundant”

Kinda late to the party (4, Informative)

Ancient_Hacker (751168) | more than 3 years ago | (#33926310)

Seymour Cray's 6600 was cooled with liquid-filled cold plates... in 1962. That's, er, 48 years?

Re:Kinda late to the party (0)

Anonymous Coward | more than 3 years ago | (#33926580)

Yeah, but could you power an entire CLOUD with a Cray 6600? NO!

Re:Kinda late to the party (4, Interesting)

roguer (760556) | more than 3 years ago | (#33927102)

Seymour ran refrigerant (Fluorinert?) through cold plates on all his designs (and their descendants) up through the Cray-2. I am told that he used to call himself "the best refrigerator repair man in the industry". His downfall came when he abandoned cold plates for the full refrigerant immersion that gave the Cray-2 its distinctive "aquarium" look. Unfortunately, in later designs he had to run refrigerant across the immersed boards so fast that it actually caused friction corrosion.

But, yeah, you have a point. Coldplates are old hat in the supercomputing industry. BTW, RISC is too. We used to joke that it stood for "Really Invented by Seymour Cray".

Ambient noise (5, Interesting)

rcw-home (122017) | more than 3 years ago | (#33926362)

I'm pretty impressed by how quiet their demo rack is. It would be a challenge to get a good audio recording of a conversation right next to a full rack of air-cooled 1U servers; it's frustrating even to use a cell phone in most server rooms, just because of the fan noise. 1U systems are the worst, simply because the form factor requires a large number of tiny fans running at high speed.

Even if there's some serious impracticalities with their approach, eliminating that fan noise is a huge selling point.

Re:Ambient noise (0)

Anonymous Coward | more than 3 years ago | (#33926480)

I second this, 1U are so loud!!! I have one in my loft, lol

Re:Ambient noise (1)

Hylandr (813770) | more than 3 years ago | (#33926540)

That's why home servers should be on laptops or Xboxes.

- Dan.

Re:Ambient noise (1)

temojen (678985) | more than 3 years ago | (#33926680)

Or desktops with heatpipe-based heatsinks... Tiny laptop fans can get pretty loud. With a much bigger dissipation surface, an overclocking heatsink on a low-power CPU can spend most of its time with the fan off.

Re:Ambient noise (1)

Hylandr (813770) | more than 3 years ago | (#33928580)

True, but the only time I really get the fan going is with ffmpeg or games. And since I don't play games on the server it's not really a big deal. Home server use is really minuscule and it comes with its own UPS. :)

- Dan.

Re:Ambient noise (1)

houstonbofh (602064) | more than 3 years ago | (#33928852)

Or towers with large, slow-moving fans and lots of air space... Cheaper for the performance that way too.

Re:Ambient noise (1)

arivanov (12034) | more than 3 years ago | (#33929700)

A well-designed home or branch office server has no problem being QUIETER than a laptop or an Xbox/PS3-type device.

Size matters and servers are no exception. In a small case you need 3-4 times the revs on the fans to get the same airflow. A good example is the old version of the Antec Sonata: because of its size it can put the disks perpendicular to the disk cooling fan. As a result you get 24-27C average disk temperatures with case fans at sub-1700 RPM. To get the same temperatures in a 1U case, or a laptop for that matter, you need fans running at 5000+ RPM. This is also why Mini-ITX failed as a proposition: the smaller cases required more cooling, which defeated most of the gains from making the kit smaller and cooler.

Also, fans (especially on power supplies) are not created equal; some are more equal than others. There is a reason why your average OEM PSU made by Foxconn costs 15 quid and an Akasa costs 40, and that reason is a 10dB+ difference in noise and a 10%+ difference in power efficiency. Ditto for CPU fans, case fans, etc. It costs around £30-40 extra per server to make it really quiet, which is less than the premium you pay for a laptop or a modded console.

As for the article itself: generating leccy back from the cooling makes little sense. Laws of thermodynamics: the gains from that are less than the gains from making the cooling more efficient.
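A quick back-of-the-envelope check of the thermodynamics point above. The temperatures here are illustrative assumptions (warm rack coolant versus an ambient heat sink), not figures from the thread:

```python
# Carnot-limit check on recovering electricity from data-center
# waste heat. All temperatures are illustrative assumptions.
def carnot_efficiency(t_hot_c: float, t_cold_c: float) -> float:
    """Maximum fraction of heat convertible to work between two reservoirs."""
    t_hot = t_hot_c + 273.15   # convert Celsius to kelvin
    t_cold = t_cold_c + 273.15
    return 1.0 - t_cold / t_hot

# Coolant leaving the rack at ~35 C, ambient heat sink at ~20 C:
eta = carnot_efficiency(35.0, 20.0)
print(f"Carnot limit: {eta:.1%}")  # only a few percent, before real losses
```

Even at the Carnot limit only a few percent of the waste heat could come back as work, and any real heat engine would capture far less, so effort spent on more efficient cooling pays off much faster.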

Medieval units (0)

Anonymous Coward | more than 3 years ago | (#33926374)

"44 to 78 degrees F" -- except that I have yet to see a temperature sensor program that shows the output in Fahrenheit.

Re:Medieval units (0)

Anonymous Coward | more than 3 years ago | (#33926416)

try cpuid hwmonitor.

Re:Medieval units (0)

Anonymous Coward | more than 3 years ago | (#33926460)

try the goatse [goatse.fr] thermal monitoring solution enterprise edition

Captcha: colons ...

Water Cooling (0)

Anonymous Coward | more than 3 years ago | (#33926466)

So they used water cooling...this is innovative how?

Another slashvertisement (0)

Anonymous Coward | more than 3 years ago | (#33926482)

See subject.

Cost (5, Interesting)

markbao (1923284) | more than 3 years ago | (#33926498)

The world is no stranger to liquid-cooling in computers, but this is pretty impressive. Does anyone have any numbers on how much traditional cooling costs compared to estimated costs from this company?

Brief description of the technology (4, Informative)

bananaendian (928499) | more than 3 years ago | (#33926594)

The video shows a full-size rack with 36 standard 1U rack servers installed in it.

On each server they have installed milled metal blocks on all the components to bring them in contact with the upper cover of the server which has a metal foil interface to complete the fit for maximum heat conduction.

The actual coolant is circulated in the rack in cold plates or shelves installed between the servers. Coolant is exchanged from the top of the racks into the piping that takes it to the heat exchanger outside.

Comment: with this kind of system, cooling is a function of the coolant temperature and flow. With the metal blocks, interfaces and surface areas I could see, it is nothing special to be able to cool the components down to very low temperatures. The engineer talks of 450 W dissipation per server, with 150 W previously going to the fans alone, so getting 300 W of heat out of there isn't a problem with a cold plate that size. Military avionics use these a lot: conduction-cooled cPCI and other standard cards. No need for liquid flow, even; just use the aircraft structure as a cold plate. Those custom milled metal interfaces are expensive to make, but it's still a lot cheaper than anything really MILSPEC, and there are no issues with vibration on this one. This would be called modified COTS.
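The ~300 W figure above is easy to sanity-check with Fourier's law for conduction through a flat slab. The material and geometry values below are assumptions for illustration, not Clustered Systems' actual design:

```python
# Back-of-the-envelope conduction through a cold-plate interface.
# Material and geometry values are illustrative assumptions.
def conducted_watts(k_w_per_mk: float, area_m2: float,
                    thickness_m: float, delta_t_k: float) -> float:
    """Fourier's law for a flat slab: Q = k * A * dT / t."""
    return k_w_per_mk * area_m2 * delta_t_k / thickness_m

# Aluminium (k ~ 205 W/m-K), 40 cm x 50 cm contact area,
# 5 mm conduction path, 1 K drop across the interface:
q = conducted_watts(205.0, 0.40 * 0.50, 0.005, 1.0)
print(f"{q:.0f} W")  # thousands of watts, far above the ~300 W needed
```

With a 1U-lid-sized contact area, even a single kelvin of temperature drop moves an order of magnitude more heat than one server dissipates, which is why a well-fitted cold plate makes the job look easy.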

Re:Brief description of the technology (1, Interesting)

Anonymous Coward | more than 3 years ago | (#33926788)

actually most of them on modern aircraft are cooled with jet fuel.

Re:Brief description of the technology (2, Interesting)

bananaendian (928499) | more than 3 years ago | (#33926888)

actually most of them on modern aircraft are cooled with jet fuel.

Wrong. You are confusing this with jet engine components, which are cooled with the fuel. Avionics racks are sometimes cooled with bleed air from the engine (no, not the exhaust, the other side).

Re:Brief description of the technology (2, Interesting)

shougyin (1920460) | more than 3 years ago | (#33927174)

Yes, but only for fixed-wing aircraft; most helicopters have their own separate, filtered air intake. However, the new designs are still causing heating problems. I think the engineers' only real concern is that a component won't overheat and fry itself, because the air outside the avionics closet isn't filtered very well. I would still like to see any of this come into military aviation, but I doubt that will happen for a long, long time.

Re:Brief description of the technology (4, Interesting)

Anonymous Coward | more than 3 years ago | (#33927820)

No, I'm not. I actually fly some of them, and our avionics are cooled by cold plates with jet fuel running through them. Think about aircraft built this century, not last century, with very dense avionics.

Re:Brief description of the technology (0)

Anonymous Coward | more than 3 years ago | (#33931582)

Only a few specific aircraft like the SR-71, and maybe some of the newest fighter aircraft.

Everything else is still cooled with air - LOTS of air.

(I work at an avionics systems integrator.)

Re:Brief description of the technology (0, Funny)

Anonymous Coward | more than 3 years ago | (#33926792)

Get out of your mom's basement. You don't work for the military. In fact, you don't work.

Re:Brief description of the technology (1)

bananaendian (928499) | more than 3 years ago | (#33926864)

Get out of your mom's basement. You don't work for the military. In fact, you don't work.

Actually, you are only right about the mom's basement ;P (as we live together with our extended families)

Re:Brief description of the technology (1)

dbIII (701233) | more than 3 years ago | (#33928076)

Just use aircraft structure as a cold plate.

In that situation you have a nice cold surface on the other side of the conductor, which is a bit hard to arrange in a server room without some nice cold liquid flowing about. Most people forget that conduction is the easiest way to move heat around, until you hit other constraints.

False inferences. (3, Interesting)

Richy_T (111409) | more than 3 years ago | (#33926628)

The testing accidentally highlighted the opportunity for additional energy savings, when the Clustered Systems unit continued to operate during a cooling failure that raised the chiller plant water temperature from 44 to 78 degrees F.

Nonsense. I've seen equipment continue to function during AC failures at very high temperatures. It's how much the lifetime of the equipment is reduced during those failures that's the real test; it's not unusual to see higher rates of hard drive failure months after the event.

That's still a lot of conduction and not that hot (3, Informative)

dbIII (701233) | more than 3 years ago | (#33928192)

If the water temperature is up to 25C, the component temperatures may still be relatively low, since the system is probably overdesigned for 6C anyway.
Also, say the CPU temperature is 40C and the water temperature is as high as 25C; that's still a 15C temperature difference across the conductive path to move a lot of heat and stop the CPU from getting much hotter.
78F/25C is still slightly colder than the air at the back of my air-cooled server racks anyway, and I expect to run most of that gear until it is obsolete.
Your point about remaining life lost to overheating is valid (thermal fatigue, plain expansion of drive bearings, etc.), but 25C isn't very hot as long as there is good conduction to where the fluid is and the fluid keeps moving.
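The headroom argument above can be made concrete. Using the roughly 300 W per server and the 15C difference mentioned in this thread, the whole water-to-CPU path only needs a modest thermal conductance:

```python
# Required end-to-end thermal conductance to hold a CPU at 40 C
# with 25 C water while moving ~300 W (figures from the thread).
def required_conductance(q_watts: float, t_cpu_c: float, t_water_c: float) -> float:
    """Conductance G = Q / dT, in W/K."""
    return q_watts / (t_cpu_c - t_water_c)

g = required_conductance(300.0, 40.0, 25.0)
print(f"{g:.0f} W/K")  # 20 W/K, i.e. only 0.05 K/W end to end
```

0.05 K/W across a whole cold-plate stack is demanding but achievable with good metal-to-metal interfaces, which is consistent with the servers riding out the warm-water event.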

Re:That's still a lot of conduction and not that h (2, Insightful)

chromatix (231528) | more than 3 years ago | (#33929728)

I wouldn't call it "potential for extra efficiency" so much as "robustness in the face of hardware failures". If the cooling remains adequate when a pump or a fan fails or a blockage occurs in an air path, then the increased coolant temperature provides a signal for admins to react to, and the servers don't suffer any downtime.

Which is a very good thing for a cooling system. How many stories are there about overheated and dead servers due to an aircon unit failure?

Re:False inferences. (0)

Anonymous Coward | more than 3 years ago | (#33935384)

Actually, we can cool effectively (i.e. CPUs lid/core temperatures still well within operating envelope) with 30 to 35C coolant. We've demonstrated this with 130W Xeons running Prime95.

Phil from Clustered
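Phil's claim is easy to sanity-check with a simple thermal-resistance model. The coolant-to-lid resistance below is an assumed figure for illustration, not a Clustered Systems specification:

```python
# Sanity check: a 130 W Xeon on 30-35 C coolant, per the comment above.
# The coolant-to-lid thermal resistance is an assumed figure.
def lid_temp_c(coolant_c: float, power_w: float, r_k_per_w: float) -> float:
    """Lid temperature = coolant temperature + thermal resistance * power."""
    return coolant_c + r_k_per_w * power_w

# Assume ~0.2 K/W from coolant to CPU lid through the cold plate:
t = lid_temp_c(35.0, 130.0, 0.2)
print(f"{t:.1f} C")  # 61.0 C, inside a typical Tcase envelope
```

Even at the warm end of the stated coolant range, a plausible cold-plate resistance leaves the lid comfortably below typical case-temperature limits, which matches the Prime95 result Phil describes.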

solving the wrong problem (0)

Anonymous Coward | more than 3 years ago | (#33929836)

if you want to save energy and have CPUs that won't burn you, use ARM processors! seriously, think about it: their processing power is rather high (the new line will be 2GHz+) and they are really low power. you wouldn't even need heatsinks!

the "industry" is retarded.
