Liquid Blade Brings Immersion Cooling To Blade Servers

timothy posted more than 4 years ago | from the you-said-blade-twice dept.

Hardware

1sockchuck writes "In the past year we've seen several new cooling systems that submerge rack-mount servers. Now liquid immersion cooling is coming to blade servers. Liquid-cooled PC specialist Hardcore Computer has entered the data center market with Liquid Blade, which features two Intel 5600 Xeon processors with an S5500HV server board in a chassis filled with dielectric fluid. Hardcore, which is marketing the product for render farms, says it eliminates the need for rack-level fans and room-level air conditioning. In recent months Iceotope and Green Revolution Cooling have each introduced liquid cooling for rack-mount servers."


I want these at my data center... (5, Funny)

tnok85 (1434319) | more than 4 years ago | (#32145510)

Although it's good we don't have them. I'd probably get fired when they find a rack of production servers running at 4.6GHz.

Re:I want these at my data center... (3, Funny)

Anonymous Coward | more than 4 years ago | (#32145598)

Hell, I'd get promoted if I did that... maybe you need to get your boss's job?

Or you would get a raise. (1)

jetole (1242490) | more than 4 years ago | (#32146330)

This sounds awesome and I want one, but I'm not gonna put it in my budget until Sun (Oracle), IBM, HP, or Dell are selling it.

Re:Or you would get a raise. (2, Insightful)

davester666 (731373) | more than 4 years ago | (#32147218)

So you have to pay twice as much?

Re:Or you would get a raise. (2, Interesting)

jetole (1242490) | more than 4 years ago | (#32147288)

Well, I didn't say I would buy it from them, but then again I might. The fact is I won't buy it until I know the big dogs are supporting it. Economically speaking, by the time it's adopted by trusted firms, it's reasonable to assume the cost of the technology itself will have dropped to the point where even the firms that charge more for it will probably be cheaper than what it costs now as a new product, since prices are almost always higher when a new technology has just been released and hasn't been widely adopted yet.

my server is leaking (4, Funny)

vacarul (1624873) | more than 4 years ago | (#32145562)

Finally, some good news for Joe the Plumber.

Re:my server is leaking (3, Funny)

Joce640k (829181) | more than 4 years ago | (#32145714)

Do you need a snorkel and flippers to swap a network cable?

Re:my server is leaking (0)

Anonymous Coward | more than 4 years ago | (#32148164)

Snorkel.... hahahaha

Upholding Moore (4, Interesting)

MacGyver2210 (1053110) | more than 4 years ago | (#32145606)

Do we really NEED liquid cooled servers in datacenters? Is this just our feeble attempt to validate Moore's Law despite diminishing returns on smaller process size and core multiplication...?

What the hell am I talking about? Of COURSE we need them!

Re:Upholding Moore (3, Insightful)

TubeSteak (669689) | more than 4 years ago | (#32147344)

Do we really NEED liquid cooled servers in datacenters? Is this just our feeble attempt to validate Moore's Law despite diminishing returns on smaller process size and core multiplication...?

Yes. No.
The massive densities you can achieve with liquid-to-liquid cooling allow for much smaller data centers (or much more performance in existing data centers).

Just being able to build a smaller data center can mean you've recouped the liquid cooling investment, even before factoring in the savings for increased cooling efficiency/watt, no AC, and no cooling fans.
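
A rough sketch of that argument in Python, using entirely hypothetical figures (none of the rack-density or load numbers below come from the article or the vendor); it only illustrates how fewer, denser racks translate into a smaller building:

    # Hypothetical deployment: how many racks do you need for the same IT load?
    # All figures are made up for illustration; only the arithmetic is the point.
    total_it_load_kw = 2000       # assumed total IT load
    air_rack_kw = 8               # assumed per-rack limit for an air-cooled room
    immersion_rack_kw = 25        # assumed per-rack limit with immersion cooling

    racks_air = total_it_load_kw / air_rack_kw              # 250 racks
    racks_immersion = total_it_load_kw / immersion_rack_kw  # 80 racks

    print(f"air-cooled racks needed: {racks_air:.0f}")
    print(f"immersion racks needed:  {racks_immersion:.0f}")
    # Fewer racks means less white space and a smaller building, before you
    # even count the energy saved by removing CRAC units and server fans.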

serviceability (5, Interesting)

arabagast (462679) | more than 4 years ago | (#32145646)

How hard is it to, say, change a disk in one of the submerged nodes? Or fix a loose Ethernet cable? If the nodes are separated into compartments, and you could isolate and drain one while servicing it, this would be really nice indeed.

Re:serviceability (5, Funny)

Kratisto (1080113) | more than 4 years ago | (#32145670)

The IT department is required to have SCUBA certification for regular maintenance.

Re:serviceability (1)

EdIII (1114411) | more than 4 years ago | (#32148622)

The IT department is required to have SCUBA certification for regular maintenance.

It sounds neat until you hear about how all the hair off Bob's ass accumulated and blocked the exhaust port and you need to get in there right now and remove the hair before the server farm explodes.

In other words.. a suicide mission.

Re:serviceability (1)

karnal (22275) | more than 4 years ago | (#32149474)

I didn't know you could SCUBA dive naked.....

Re:serviceability (1)

batje14 (1018044) | more than 4 years ago | (#32145726)

Finally time to get your Paddy!

Re:serviceability (2, Informative)

obarthelemy (160321) | more than 4 years ago | (#32146026)

That's PADI: http://www.padi.com/scuba/ [padi.com]

Unless you were thinking about rice... hot water, rice... server farms finally deserve their names!

Re:serviceability (1, Interesting)

Anonymous Coward | more than 4 years ago | (#32145804)

You could connect disks with eSATA?

Re:serviceability (1)

VinylPusher (856712) | more than 4 years ago | (#32146272)

I very much doubt an eSATA connection would be considered anywhere near robust enough for a server environment, especially one so specialised that the host company has decided an immersed rack/blade is a viable option.

I'd guess SSD and network attached storage.

Re:serviceability (5, Funny)

BiggerIsBetter (682164) | more than 4 years ago | (#32145822)

How hard is it to, say, change a disk in one of the submerged nodes? Or fix a loose Ethernet cable? If the nodes are separated into compartments, and you could isolate and drain one while servicing it, this would be really nice indeed.

Did you see the movie Sunshine [imdb.com] ? You'll have to immerse yourself in the coolant, possibly freezing and/or bleeding to death after getting your leg stuck in the rack. It had better be an important upgrade.

Re:serviceability (0)

Anonymous Coward | more than 4 years ago | (#32147044)

I've seen it, it wasn't that bad. You'll be saving earth every time you freeze to death in one of those racks.

Re:serviceability (4, Interesting)

GreatBunzinni (642500) | more than 4 years ago | (#32145898)

That's exactly what got my attention. In the article, the CTO of Hardcore Computer is quoted as saying that "Our Core Coolant is 1,350 times better than air, by volume." I don't know how that works out in energy spending compared with air, but even if it has a linear relationship with the energy cost of cooling, I doubt the hypothetical energy savings can come out net positive once you consider the additional cost of meddling with the hardware, whether for maintenance or for upgrades. After all, this slashvertisement is oh so keen on lauding the qualitative and subjective advantages of this toy, but it doesn't even come near presenting the costs of making your company profoundly dependent on (if not held hostage by) Hardcore Computer for support, for maintenance of both the hardware and the cooling rig, and for upgrades.
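
For what it's worth, the "1,350 times better than air, by volume" quote is at least plausible if it refers to volumetric heat capacity rather than to overall cooling cost. A quick sanity check in Python with textbook values, assuming a generic mineral-oil-like fluid (the vendor's actual coolant properties are not given here):

    # Plausibility check of "1,350x better than air, by volume", read as
    # volumetric heat capacity. Fluid values are generic mineral-oil-like
    # numbers, not the vendor's actual coolant.
    air_density, air_cp = 1.2, 1005.0      # kg/m^3 and J/(kg*K) at room temperature
    oil_density, oil_cp = 850.0, 1900.0    # typical light mineral oil

    air_vol_cp = air_density * air_cp      # ~1.2 kJ/(m^3*K)
    oil_vol_cp = oil_density * oil_cp      # ~1.6 MJ/(m^3*K)

    print(f"ratio: {oil_vol_cp / air_vol_cp:.0f}x")   # comes out around 1300-1400x
    # Heat capacity per unit volume says nothing about pump energy, maintenance,
    # or vendor lock-in, which is the parent's actual concern.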

Re:serviceability (2, Interesting)

obarthelemy (160321) | more than 4 years ago | (#32146046)

It's mainly bullshit. My guess is he's talking about the ability of the liquid to retain heat, which is nice and all, but you've got to get that heat in, and then out. Basically, you're adding one more motor (the thing still needs a fan somewhere, plus the water pump), lots of tubing, liquid... I wouldn't use that in my home PC, let alone in a server room.

Re:serviceability (1)

Iamthecheese (1264298) | more than 4 years ago | (#32146328)

But...but.. toy! toy fun! want!

Re:serviceability (1)

snero3 (610114) | more than 4 years ago | (#32204580)

After all, this slashvertisement is oh so keen on lauding the qualitative and subjective advantages of this toy, but it doesn't even come near presenting the costs of making your company profoundly dependent on (if not held hostage by) Hardcore Computer for support, for maintenance of both the hardware and the cooling rig, and for upgrades.

How is this different from choosing a blade solution from another OEM? That is why OEMs love blades: you are generally locked into their hardware for the next 4-7 years, because the chassis lasts that long.

Re:serviceability (2, Informative)

SpaghettiPattern (609814) | more than 4 years ago | (#32145908)

How hard is it to, say, change a disk in one of the submerged nodes? Or fix a loose Ethernet cable? If the nodes are separated into compartments, and you could isolate and drain one while servicing it, this would be really nice indeed.

Have you ever been in a huge data centre? There are mostly systems not needing any human interaction for years, and in fact people need a map and an index to find a given system. Human intervention (e.g. for failing hardware) is usually cast into procedures whereby an "operator" does the work needed. Disk storage is usually separated from application servers. Etc.

Huge data centres analyse the hardware they use and usually set up a palette of system types to select from. (Anything outside of "the palette" is therefore usually possible only at significantly higher cost.)

I take it these data centres will be capable of analysing the pros and cons of using immersion cooling. Higher server density, and thus lower real estate requirements, adds up to significant savings for one.

Re:serviceability (5, Informative)

drinkypoo (153816) | more than 4 years ago | (#32146162)

We're talking about blade servers. They're not submerged nodes; they're submerged blades. Storage happens on a SAN. What fucking year is it, anyway? In this design (big fat picture in the TFA, you lazy, reactionary fuck) each blade is sealed into its own unit which can be pulled separately. So it's even more of a non-issue. You just want something to complain about, when there is nothing to complain about. Thanks for helping make slashdot grate.

Re:serviceability (1)

weezer44 (1807722) | more than 4 years ago | (#32147826)

We're talking about blade servers. They're not submerged nodes; they're submerged blades. Storage happens on a SAN. What fucking year is it, anyway? In this design (big fat picture in the TFA, you lazy, reactionary fuck) each blade is sealed into its own unit which can be pulled separately. So it's even more of a non-issue. You just want something to complain about, when there is nothing to complain about. Thanks for helping make slashdot grate.

Wow, nice people skills. Name calling is so cool.

Re:serviceability (1)

drinkypoo (153816) | more than 4 years ago | (#32148002)

Wow, nice people skills. Name calling is so cool.

I'm not here to 'keep it real, dawg'. I'm here to say what I have to say in a manner that is pleasing to me, and slay any trolls I meet in the process. Dawn take you all, and be stone to you!

Re:serviceability (1)

Hurricane78 (562437) | more than 4 years ago | (#32149254)

Um, and what did you expect from a troll who calls himself “drinkypoo” (and also does that in RL)?

He is an elaborate troll, though, as he has several sock-puppet accounts that regularly get mod points, which he uses for the real trolling: moderating things in his (trollish) favor.
Even trolls hate him, as you can see when he clashes with other trolls. He’s a subhuman of the worst kind. And I don’t mean this as an insult, but as an objective measurement.

Re:serviceability (2, Insightful)

EdIII (1114411) | more than 4 years ago | (#32148724)

I would not characterize what he said as reactionary. He does have a valid question, which is: how easy is this to service? You're right that the hard drive is a non-issue, since you would want to use a SAN, but hard drives generate heat too, so why would we not want it for that as well?

Not everybody uses Blades. I looked into it and I found it costly and proprietary compared to other solutions that could provide even greater density.

Even if we did create a completely sealed 1U server case, we would still need to hook up intake and outlet ports. One of those will bust eventually, and we are left with a pretty damn big mess on the floor and a server with no way to cool itself. What does it mean if a leak in one server eventually causes an entire rack to empty out? These are valid questions even for these liquid-cooled Blades.

Complaining to just complain is stupid. However, I have yet to see a liquid cooling solution for data centers that really addresses all of these issues and provides contingencies for malfunctions.

Eventually you will need to remove a module and service it. How easy is it to service? How messy is it going to be? How reliably can you seal it back up? Will there be tests you can run under pressure to check proper seal before putting it back into production?

Under normal operation are there any safety valves that can detect loss of pressure and isolate a module and shut it down? What redundancy can be provided on coolant loops?

You see, there really are a lot of valid questions about how this is going to work, what else we are not thinking about, normal operations, etc. I hardly think we could call that reactionary.

Re:serviceability (1)

drinkypoo (153816) | more than 4 years ago | (#32149324)

I would not characterize what he said as reactionary. He does have a valid question, which is how easy is this to service?

One which would have been answered by loading RTFA (I know, I know) and looking at the big fat picture at its head, and seeing that blades are in enclosed modules containing coolant.

You're right that the hard drive is a non-issue, since you would want to use a SAN, but hard drives generate heat too, so why would we not want it for that as well?

You can't immerse hard drives and you can't create immersible hard drives without creating whole new classes of issues. Conner tried making non-vented disks back in the early days of ATA and look where it got them. You could have a pressure-regulating mechanism like the nitrogen bladder in typical shock absorbers (what a terrible name) but that's usually the first thing to fail. A seal failure also means goodbye to your data. Liquid cooling would need to be integrated into drives' frames and covers, which would then mean you'd only have to worry about problems with the coolant connectors and lines if applicable (you could use an all-manifold design.)

You see, there really are a lot of valid questions about how this is going to work, what else we are not thinking about, normal operations, etc. I hardly think we could call that reactionary.

We talk about some liquid cooling solution every month on average (made that up) and in fact there have been a couple of recent discussions. These ideas were discussed to death without useful progress then, but I guess we could do it again now...

Re:serviceability (1)

EdIII (1114411) | more than 4 years ago | (#32149398)

I would not characterize what he said as reactionary. He does have a valid question, which is how easy is this to service?

One which would have been answered by loading RTFA (I know, I know) and looking at the big fat picture at its head, and seeing that blades are in enclosed modules containing coolant.

I don't think that answers the question at all, unless you are saying that the Blade itself is completely sealed and non-serviceable. I don't know if that is true; I would imagine some parts on a Blade are serviceable. If so, how easy is it to take one out, open it up, service it, seal it, and refill it?

Re:serviceability (1)

lena_10326 (1100441) | more than 4 years ago | (#32152456)

I don't see why the topic of liquid cooling hard drives isn't on the table. SAN or no SAN, hard drives generate heat regardless of where they are housed. The individual drives can be liquid cooled even if not directly submerged in liquid.

http://www.frozencpu.com/products/7500/ex-blc-487/Koolance_HD-60_Hard_Drive_Liquid_Cooling_Block.html [frozencpu.com]

Re:serviceability (1)

EdIII (1114411) | more than 4 years ago | (#32160118)

Hard drives cannot be directly submerged in liquid. However, I feel it should be possible to design a high-density liquid-cooled SAN. As you can see in the Koolance product, even a block a few mm wide should be enough to do it. So a block combined with a SATA backplane holding many hard drives close together should work out just fine.

Re:serviceability (1)

randyleepublic (1286320) | more than 4 years ago | (#32151858)

The workstations are suboptimal. Ludicrously overpriced as well.

Re:serviceability (0)

Anonymous Coward | more than 4 years ago | (#32147806)

How hard is it to, say, change a disk in one of the submerged nodes? Or fix a loose Ethernet cable? If the nodes are separated into compartments, and you could isolate and drain one while servicing it, this would be really nice indeed.

http://www.datacenterknowledge.com/archives/2010/03/17/submerged-servers-green-revolution-cooling/

Video of a RAM upgrade.

Re:serviceability (1)

LoRdTAW (99712) | more than 4 years ago | (#32150254)

RTFA.

And I like how they are using an oil like mineral oil. Mineral oil is a great insulator; it's used in large transformers for cooling and insulating the windings. It is also used in high-voltage oil circuit breakers for arc quenching.

I once thought of using it to submerge PC hardware for cooling, but the idea of oil all over my components was not very attractive. If I had to swap something out or upgrade, I would have to move the PC to the bathtub or the kitchen to remove the component and clean the oil off. Granted, once you get a setup going it shouldn't be much of a problem, because you aren't going to have to open it all the time. Cleaning, however, would have to be done with a solvent, which does not sound appealing.

I like the idea, but for data centers, where components fail every so often, it is a bit impractical. That, or certain components could be left dry or in separate modules that can be swapped without leaks.

Liquid blades? Sounds familiar... (1)

capo_dei_capi (1794030) | more than 4 years ago | (#32145660)

Mafia rap [wikipedia.org] + enterprise computing = Samir [geekadelphia.com] ?

Re:Liquid blades? Sounds familiar... (1)

sourcerror (1718066) | more than 4 years ago | (#32145760)

My guess would have been Terminator 2.

Re:Liquid blades? Sounds familiar... (1)

brackishboy (1432215) | more than 4 years ago | (#32150144)

Knives and stabbing weapons. /arnie

Additional certification required (0)

Anonymous Coward | more than 4 years ago | (#32145672)

Last line of a job ad:

Having a diving certificate is definitely an asset.

Slashvertisement or informative... (-1, Offtopic)

SpzToid (869795) | more than 4 years ago | (#32145694)

I'm gonna rate this as informative, and really interesting, and even genuinely cool as well; as in global warming cool.

Oh wait, I don't have mod points when I post. C'est la vie. I love useful information anyway. Now, I am contemplating some upgrades, cost savings, etc.

And if the dielectric fluid is non-compressible... (2, Funny)

voodoo cheesecake (1071228) | more than 4 years ago | (#32145732)

I'd like to be under the sea in an octopus's data garden in the shade...

Re:And if the dielectric fluid is non-compressible (1)

Cryacin (657549) | more than 4 years ago | (#32146020)

Go where it's wetter, you know that it's better!

Re:And if the dielectric fluid is non-compressible (1)

longhairedgnome (610579) | more than 4 years ago | (#32150168)

There'll be no accusations,

Just friendly crustaceans

The blind leading the ignorant. (2, Insightful)

GuyFawkes (729054) | more than 4 years ago | (#32145954)

Immersion liquid cooling is something I have done in the past, and that is all well and good, it is after all HOBBY level tech.

For commercial-level tech it isn't even a joke: imagine opening the bonnet/hood of your new 2010 car and finding a big tub full of water with the engine immersed in it.

Internal combustion engines have had closed circuit internal liquid cooling circuits for decades, and frankly computers and electronics have had closed circuit internal liquid cooling circuits for decades too.

Think backplane technology and hollow main boards: the liquid coolant flows through the hollow PCB and mates at either side with the "backplane".

All the advantages of liquid cooling, and almost none of the disadvantages of liquid cooling.

Air cooling has one great advantage, "leaks" don't matter. Provided you have sufficient mass flow you can leak air all over the place.

Older internal combustion engines didn't even have forced circulation in their closed-loop liquid coolant systems; they used thermal syphon, much like the space between the racks.

The salient fact here is that you have to design in the cooling circuit at the engine block / PCB mechanical design stage; until and unless you do that, you are going to be dealing with some god-awful Heath Robinson kludge, like fitting an old "stationary engine" (google it) into a 2010 Dodge rolling chassis.

Instead of a 50 buck case containing a 100 buck mobo, you end up with a 100 buck case and a 200 buck mobo for closed circuit air cooling, or a 200 buck case and a 400 buck mobo for closed circuit liquid cooling, and these prices are for large volume manufacture with full economies of scale.

Now go back to your Dodge dealership and take in two 2010 rolling chassis for the annual service, one running a bog-standard Cummins, the other a kludged-up stationary engine, and ask the mechanics which one will be more expensive to service.

Closed-circuit liquid-cooled electronics are not new; the technique is routinely used in avionics, which of course means that you can pack 200 watts of thermal rejection (a modern desktop computer) into a package the size of an iPhone and run it flat out 24/7.

But it costs.

Unless you are somewhere like Hong Kong, the cost of land per acre is cheap, air is free, leaks don't matter, and the coolant doesn't cause shorts.

The only other advantage of liquid coolant is it is much quieter, but even so, you can cure that problem by making everything bigger to accommodate much larger passive heatsinks.

http://hackedgadgets.com/wp-content/_hs2.JPG [hackedgadgets.com] for example; this stuff is extruded and bought by the metre, and it doesn't have a failure mode.

Re:The blind leading the ignorant. (0)

Anonymous Coward | more than 4 years ago | (#32146066)

You call liquid cooling hobby level tech? It's been used in supercomputers for a long time. My guess is you are both.

Re:The blind leading the ignorant. (0)

Anonymous Coward | more than 4 years ago | (#32146178)

And what are you? He specifically talks about "Immersion liquid cooling..." and afaik supercomputers use the regular kind.

Re:The blind leading the ignorant. (1)

VinylPusher (856712) | more than 4 years ago | (#32146304)

The regular kind? There's at least one supercomputing architecture which uses liquid squirted from nozzles onto CPUs, whereupon it evaporates. Very efficient cooling.

I'm a bit out of date perhaps (I read about this a few years back), but what is the regular kind to which you refer?

Re:The blind leading the ignorant. (0)

Anonymous Coward | more than 4 years ago | (#32146454)

Non-immersion kind. You have tubes with liquid going to water blocks which are affixed to all the components that get hot. They eventually run to a radiator somewhere, which, on a supercomputer, is probably external to the machine itself and could be integrated with the facility's HVAC system. Simpler systems will use water (though possibly mixed with some chemicals to prevent corrosion) while more complex ones can use things like liquid nitrogen.

Re:The blind leading the ignorant. (1)

GuyFawkes (729054) | more than 4 years ago | (#32146392)

Here are two real world examples.

1/ the traditional electric kettle, normally a 3 kW element immersed in a few pints of water, or the traditional hot water cistern, a 6 kW element immersed in a few tens of gallons of water.

The above works on open circuit (immersion) and thermosyphon / convection.

2/ the traditional electric showed, normally a 8 kW element in a unit a tad smaller than a soda can.

The above is closed circuit and forced flow.

Simply changing from immersion and convection to closed circuit and forced flow UTTERLY changes the device.
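
Some rough numbers behind that comparison (the element ratings are the parent's; the specific heat of water is standard, and the shower flow rate is an assumed typical value):

    # Kettle vs. electric shower: same physics, very different regime.
    c_water = 4186.0          # J/(kg*K), specific heat of water

    # Kettle: 3 kW into a static ~1.5 kg of water (immersion + convection).
    kettle_power, kettle_mass = 3000.0, 1.5
    print(f"kettle warms its water at ~{kettle_power / (kettle_mass * c_water):.2f} K/s")

    # Shower: 8 kW into water flowing at an assumed ~0.1 kg/s through a tiny unit.
    shower_power, shower_flow = 8000.0, 0.1
    print(f"shower lifts the passing water by ~{shower_power / (shower_flow * c_water):.0f} K")
    # Forced flow lets a soda-can-sized unit reject more power than the kettle,
    # which is the point about closed-circuit, forced-flow cooling.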

Re:The blind leading the ignorant. (1)

GuyFawkes (729054) | more than 4 years ago | (#32146418)

damn, that should be "the traditional electric SHOWER"

Re:The blind leading the ignorant. (3, Informative)

Zemplar (764598) | more than 4 years ago | (#32146466)

Immersion liquid cooling is ... all well and good, it is after all HOBBY level tech.

Really? Cray started doing this back in 1985 [wikipedia.org] , so I wouldn't call it "HOBBY level tech."

Overheating means waste of energy (0)

Anonymous Coward | more than 4 years ago | (#32145988)

Switch to ARM, guys

http://www.greenm3.com/2009/09/arm-pushes-performance-per-watt-announces-2ghz-multicore-designs.html

Re:Overheating means waste of energy (1)

Pinky's Brain (1158667) | more than 4 years ago | (#32146318)

"1.5GBytes of DDR 2 RAM"

HAHAHAHAHA ... no.

Re:Overheating means waste of energy (1)

gilboad (986599) | more than 4 years ago | (#32147364)

*Sigh*

Let's assume, for a second, that I have a system that scales linearly from 12 to 48 cores (2S Xeon 56xx/Opteron 2xxx to 4S Opteron 6xxx) and requires ~4GB per core (48-192GB).
Now, let's assume that I'm willing to switch to ARM.
1. I will need a board with at least 24-96 of the soon-to-be-released 2.0 GHz dual-core ARM Cortex-A9 CPUs.
2. This board will also need an unbelievably wide (and extremely complicated) crossbar, as I can no longer simply put in 4 independent buses (one per CPU) and connect the sockets via a high-speed inter-CPU bus as AMD and Intel currently do.
3. Let alone the fact that this board will also require huge amounts of L3 cache / snoop cache, as the built-in per-CPU cache will have a horrible cache hit/miss ratio.
4. ... And all this, of course, assuming the ARM can:
    a. Address more than 64GB of RAM. (As I recall, ARM was limited to 36-bit.)
    b. Work in an MP configuration.
5. Let's assume that someone actually managed to solve all of the above; this still doesn't solve one issue: this machine will have abysmal single-thread performance. Even if my application scales nicely to 96 threads (and most applications don't), I will still have code paths that are core-speed dependent, and those code paths will be dog slow on this machine.

In short, ARM currently doesn't come even close to replacing a cheap 2S Xeon/Opteron server, let alone a super-high-end 4S/8S server.

- Gilboa
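
A bit of back-of-envelope arithmetic behind points 1 and 4a (the workload figures are the parent's own scenario; the 36-bit limit is taken at face value from the parent's recollection rather than from an ARM datasheet):

    # Quick arithmetic for the parent's scenario: 48 cores, ~4 GB per core,
    # built from dual-core ARM parts with (per the parent) 36-bit addressing.
    cores_needed = 48
    ram_per_core_gb = 4
    cores_per_arm_chip = 2

    arm_chips_on_board = cores_needed // cores_per_arm_chip    # 24 sockets minimum
    ram_wanted_gb = cores_needed * ram_per_core_gb             # 192 GB
    addressable_gb = 2**36 // 2**30                            # 64 GB with 36-bit physical addresses

    print(f"ARM chips on one board: {arm_chips_on_board}")
    print(f"RAM the workload wants: {ram_wanted_gb} GB")
    print(f"RAM a 36-bit part can address: {addressable_gb} GB")
    # 192 GB > 64 GB: even before the crossbar and cache problems, the memory
    # ceiling alone rules this configuration out.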

Re:Overheating means waste of energy (0)

Anonymous Coward | more than 4 years ago | (#32149444)

There's something we professionals have that you seem not to know about. It's called "having the right tool for the job". Sometimes you need zillions of slow cores, sometimes you want a few stinkin' fast cores. It's all about having the right tool for the job.

Doing the work that I do, a 96-core ARM machine is exactly what we want. What we're doing runs its absolute best if it can run a hundred threads in parallel. It would also be a great architecture for web servers, but not much else that I can think of. Just AI and web sites.

What's wrong with tube liquid cooling? (1)

PhrstBrn (751463) | more than 4 years ago | (#32146296)

I don't know why you'd have to immerse the entire blade in coolant.

Couldn't you just use the tube liquid cooling found in many enthusiast machines? You could have one pump/heat exchanger per blade enclosure, with a custom valve/fitting connecting each blade to the enclosure to pass the coolant around. I'm sure a clever engineer could easily come up with a design for redundant heat exchangers/pumps in the enclosure, even hot-swappable ones.

You'd get the benefits of liquid cooling (low noise, lower temperatures) with less or no mess. I'd imagine it would be cheaper than this method as well.

Liquid Blade? (1)

Yvan256 (722131) | more than 4 years ago | (#32146518)

Is that similar to a Light Saber?

Re:Liquid Blade? (1)

garompeta (1068578) | more than 4 years ago | (#32147718)

Maybe the nemesis of Solid Snake?

Re:Liquid Blade? (1)

An ominous Cow art (320322) | more than 4 years ago | (#32157630)

It doesn't *have* to run Ubuntu.

Renderfarms will not buy this (1, Informative)

Anonymous Coward | more than 4 years ago | (#32146530)

> marketing the product for render farms

I used to have root on a renderfarm with a few thousand cores, and this is exactly the type of fiddly tech you want to avoid when you scale out. Many renderfarms are located at multiple sites, mainly due to legacy but also for convenience and resilience. This means the majority of a renderfarm may be co-located where land and electricity are cheap, relying on high-bandwidth connections to offices where transport and talent are plentiful. I know of two renderfarms which are split in this manner.

It means that really big renderfarms are co-located with shared cooling facilities. This makes fancy cooling methods very risky and unnecessarily expensive.

Re:Renderfarms will not buy this (1)

mysidia (191772) | more than 4 years ago | (#32148484)

Well, it might be useful for new render farms. They could save money by not getting heavy cooling in the first place, eh?

However, the technology is radical and has yet to be proven. I don't see any business adopting or trying this until it has been proven -- or until it becomes inexpensive to try.

It's not worth paying twice as much for a server for some fancy immersive cooling technology if it hasn't yet been proven in large deployments.

I think it's more of an immediate option for small-scale deployments, like the ones you could use Windows-based software in.

liquid heatsinks? (1)

drfireman (101623) | more than 4 years ago | (#32146828)

Does anyone make a liquid CPU heatsink, something to slap over your CPU (and seal)? Seems like that would be a nice innovation if it could be kept sealed.

Re:liquid heatsinks? (2, Informative)

petermgreen (876956) | more than 4 years ago | (#32147632)

I doubt sealing to the board would be very practical; it would be very hard to get a good enough seal there, and most of the heat comes out of a CPU through direct conduction to the heatsink anyway, so I don't see a whole lot of point in immersing the CPU itself.

What you can get easily are "waterblocks", which attach in place of a regular heatsink and take the heat from the CPU in the regular manner but are designed to transfer it to piped water rather than to the air.

This is a great step (1)

houghi (78078) | more than 4 years ago | (#32147078)

This is a great step in the correct direction. First we had swords with bronze blades, then iron blades. Now we have liquid blades.
The next step should be the plasma blade and the ultimate goal will be the lightsaber.

Capacitive couplings (0)

Anonymous Coward | more than 4 years ago | (#32147414)

I wonder what sort of effect this 'liquid dielectric' has on the hardware itself. Lots of problems associated with high-frequency networks are capacitive couplings. That is to say, there are virtual capacitors between copper traces on the circuit boards themselves simply because they are so close together. Any first-year EE should know that as the frequency of a signal increases, any capacitive elements inside the circuit behave more and more like short circuits (tada! filters!). It seems to me that we're removing air as the parasitic capacitor's dielectric and replacing it with something else, probably something that has a permittivity much higher than air's (~1).
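
To put an illustrative number on that concern: a parallel-plate estimate with a made-up trace geometry and an assumed mineral-oil-like relative permittivity of about 2.2 (the coolant's actual permittivity isn't stated anywhere here):

    # Parallel-plate estimate C = eps_r * eps_0 * A / d of trace-to-trace coupling.
    # The geometry and the fluid's relative permittivity are assumptions for
    # illustration, not measured values.
    import math

    eps_0 = 8.854e-12          # F/m, vacuum permittivity
    area = 10e-3 * 0.2e-3      # m^2: 10 mm of trace, 0.2 mm facing width (hypothetical)
    gap = 0.2e-3               # m: 0.2 mm spacing between traces (hypothetical)
    freq = 1e9                 # Hz, a 1 GHz signal component

    for name, eps_r in [("air", 1.0006), ("dielectric fluid", 2.2)]:
        c = eps_r * eps_0 * area / gap
        z = 1.0 / (2 * math.pi * freq * c)
        print(f"{name:16s} C = {c * 1e15:6.1f} fF, |Z| at 1 GHz = {z:6.0f} ohm")
    # Roughly doubling the coupling capacitance halves the coupling impedance,
    # so traces that were marginal in air get noticeably chattier in fluid.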

Past is prologue (1)

stox (131684) | more than 4 years ago | (#32147592)

Cray Supercomputers were doing this a quarter of a century ago. Fluorinert to the rescue!

this will cost a mint with real deployment (0)

YesIAmAScript (886271) | more than 4 years ago | (#32147630)

Immersion cooling makes sense from a lot of perspectives. However, there is one enormous problem: chips aren't rated to work when immersed. You will have to get the companies that make the chips to specify that the packages the chips come in are designed for, and safe to use in, liquid. And they're going to charge you a lot of money for that. Frankly, it'll make the device non-cost-effective.

You can just omit this step, throw something together with air-rated chips, and hope it works anyway. But you don't really want to depend on such a system for mission-critical services.

Remember these rules when using liquid cooling (1)

deadline (14171) | more than 4 years ago | (#32147772)

  1. All liquids will, at some point, leak or spill
  2. When you are convinced that immersive liquid cooling is the future, read rule #1

For special and expensive systems such ideas may be useful, but it is hard to beat air for commodity systems because it is free, there is plenty of it, it cannot spill, it does not put excess load on the floor, it is inert (to electronics), and it can be easily recycled.

Re:Remember these rules when using liquid cooling (0)

Anonymous Coward | more than 4 years ago | (#32147796)

That's what I'm thinking, too. At least the spillage part. I hadn't thought about load on the floor and building structure, but I wonder what the percentage increase is for the amount of water used per rack.

Does this company provide a warranty that covers the servers on the bottom rows of your rack when the one on the top row in the middle starts leaking?

Re:Remember these rules when using liquid cooling (1)

dziban303 (540095) | more than 4 years ago | (#32148010)

Does this company provide a warranty that covers the servers on the bottom rows of your rack when the one on the top row in the middle starts leaking?

No, but they throw in a free Rubbermaid bucket and a 5-pack of ShamWows with every rack purchased.

Re:Remember these rules when using liquid cooling (1)

mysidia (191772) | more than 4 years ago | (#32148402)

The fluid is dielectric and therefore non-conductive and 'safe': it won't short out the other servers.

However, it could create a mess. If you have one full-immersion blade, you'll want them all that way.

The CPU and all the other components are immersed in the fluid. The primary danger behind 'leaking' is that the tank runs low and the server or pump starts overheating due to lack of fluid.

Re:Remember these rules when using liquid cooling (1)

mysidia (191772) | more than 4 years ago | (#32148446)

The fluid used in immersive cooling systems IS dielectric (an insulator) and inert to electronics.

Otherwise, it would make no sense to immerse the blade's components such as CPU and memory in it during normal operation.

This is not a water cooling solution, or a water block / partial immersion solution, where a dangerous liquid might be used, but kept isolated from the components by using tubing, a cooling block, and a pump.

In an immersion solution like the one discussed, all components of the server, except possibly hard drives and some sensitive parts, are immersed in the fluid, and probably some simple agitators are present somewhere to ensure the fluid moves across the CPU -- but to a large extent, convection will do that.

If they use SSD drives, everything might be immersed.

It is not a given that it will ever leak; that would take shoddy materials in the case itself, or a broken case.

Again, this is not the sort of solution where you have fragile tubes.

Hopefully (if they are smart), the blade slots are on the top of the chassis, and blades insert downwards to be immersed in the fluid.

hm.. perhaps a cheaper way to reduce vibrations? (1)

mysidia (191772) | more than 4 years ago | (#32148380)

Than re-fitting all your racks with some advanced carbon-fiber material. Surely cooling with a pumped liquid generates less vibration in the air than having a bunch of high-speed fans in your case.

Maybe this will result in better hard drive performance :)

I wonder (1)

atisss (1661313) | more than 4 years ago | (#32148704)

Water is dielectric by itself; electrical conduction comes from all the salts mixed into it. Perhaps they are just selling distilled water? :)

Anyway, you would have to change the liquid from time to time, because dust or something else might get into it and dissolve, making it conductive.

Liquid blader T1000 ... (1)

sourcerror (1718066) | more than 4 years ago | (#32148732)

... killing machine comes in handy both at home and at the office. Call now, and order directly from the creator, the Cyberdyne corporation.

Good luck with that! (1)

Hurricane78 (562437) | more than 4 years ago | (#32149222)

That “dielectric fluid” is really nasty stuff. To give you an idea of how nasty it is, here is an example:

Say you have attached your mouse to the computer, and sealed everything off with hot glue and everything. Now you fill the thing with that oil.
Then that stuff creeps through the connector, through the inside of the cable (between the wire and the plastic), up to your mouse, out of your mouse, over all of your table and everything, down to your floor and through the whole room. Unless you stop it first.
Now add dust, and your desk reminds you of something that’s been tarred and feathered.

Nasty nasty stuff. And hey, the heat of course doesn’t magically disappear. So you still need to cool the oil or the surrounding air somehow, and keep the oil moving. Or else you are much more prone to heat accumulation.

So all in all, it simply isn’t worth the effort. By the way: What’s so bad about fans anyway?
