Supercomputing IBM Hardware Technology

IBM Water-Cools 3D Multi-Core Chip Stacks

An anonymous reader writes "Water cooling will enable multi-core processors to be stacked into 3D cubes, according to IBM's Zurich Research Laboratory, which is demonstrating three-dimensional chip stacks. By stacking memory chips between processor cores, IBM plans to multiply interconnections by 100 times while reducing their feature size tenfold. To cool the stack at a rate of 180 watts per layer, water flows down 50-micron channels between the stacked chips. Earlier this year, the same group described a copper-plate water-cooling method for IBM's Hydro-Cluster supercomputer. The Zurich team predicts high-end IBM multicore computers will migrate from the copper-plate method to 3D chip stacks in five to ten years." Reader Lilith's Heart-shape adds a link to the BBC's article on these internally cooled chips.
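A rough back-of-the-envelope sketch in Python of what 180 watts per layer implies for coolant flow. The 20 K allowable coolant temperature rise is an assumed figure for illustration, not from the article:

```python
# Estimate the water flow needed to carry 180 W away from one layer,
# using Q = m_dot * c * dT. The 20 K coolant temperature rise is assumed.
SPECIFIC_HEAT_WATER = 4186.0  # J/(kg*K)
DENSITY_WATER = 1000.0        # kg/m^3

def required_mass_flow(heat_watts: float, delta_t_kelvin: float) -> float:
    """Mass flow in kg/s needed to absorb heat_watts at a given temp rise."""
    return heat_watts / (SPECIFIC_HEAT_WATER * delta_t_kelvin)

m_dot = required_mass_flow(180.0, 20.0)          # kg/s per layer
ml_per_min = m_dot / DENSITY_WATER * 1e6 * 60.0  # m^3/s -> mL/min
print(f"{m_dot * 1000:.2f} g/s, about {ml_per_min:.0f} mL/min per layer")
# ~2.15 g/s, roughly 130 mL/min: a trickle in absolute terms, but it has
# to be forced through 50-micron channels, which is the hard part.
```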

Comments Filter:
  • by krog ( 25663 ) on Friday June 06, 2008 @10:44AM (#23681985) Homepage
    But they're really gonna rev up performance once they move to 4-cornered time cubes [timecube.com].
    • More like this [superliminal.com].
  • by CowboyNealOption ( 1262194 ) on Friday June 06, 2008 @10:44AM (#23681991) Journal
    can it run Vista??
    • Re: (Score:2, Funny)

      by Shinmizu ( 725298 )
      No, nothing can run Vista. God wrote it specifically as a challenge to everyone that asked that stupid question, "Can God write an OS that even he can't run?"

      He's still working on that one rock problem, though...
      • by rts008 ( 812749 )
        God is a Microsoft code-monkey?
        Hmmm....no wonder religions are so wacky and bug-ridden. It all makes sense now!
        Maybe the next service pack will patch that pesky 'fundamentalists try to take over all system resources and processes' bug.
  • my favorite (Score:2, Funny)

    by Anonymous Coward
    mmm cool ranch centrinos
  • This reminds me of the water-cooled computer from http://www.imdb.com/title/tt0448134/ [imdb.com] . It seems like a pretty cool idea; I don't know why it hasn't been used before.
  • Water cooling is great for the bleeding-edge enthusiast, but it's hardly an option for workaday computer users. Laptops certainly could stand to use some better heat dissipation, and if water cooling through 50-micron tubes is possible here, how long until it is cost-effective and compact enough to be offered to those of us who aren't interested in hardware for its own sake, the average computer users?

    And is stacking the chips better than laying them flat and in a strip (like Pentium M)?
    • Re: (Score:3, Informative)

      by wattrlz ( 1162603 )

      ... And is stacking the chips better than laying them flat and in a strip (like Pentium M)?

      Sure. The interconnects could be shorter and thus impose much less lag. Core one wouldn't need to go through core two to talk to core three, etc.

    • Re: (Score:2, Informative)

      by nategoose ( 1004564 )
      Laying them out flat is better for cooling because it has more surface area, but the cube can be faster since the maximum distance between any two points within it is reduced from what it would be if the same chip area were laid out flat. This is why it NEEDS water cooling.
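      A toy illustration of the distance argument in Python, using idealized geometry. The die area and layer thickness are made-up numbers, and routing overhead is ignored:

      ```python
      import math

      def max_distance_mm(total_area_mm2: float, layers: int,
                          layer_thickness_mm: float = 0.05) -> float:
          """Corner-to-corner distance when the same silicon is split into layers."""
          side = math.sqrt(total_area_mm2 / layers)  # footprint shrinks per layer
          height = layers * layer_thickness_mm       # stack grows, but stays thin
          return math.sqrt(2 * side**2 + height**2)

      AREA = 400.0  # mm^2, roughly a large server die
      for n in (1, 4, 8):
          print(f"{n} layer(s): {max_distance_mm(AREA, n):.1f} mm")
      # 1 layer: 28.3 mm, 4 layers: 14.1 mm, 8 layers: 10.0 mm.
      # Worst-case wire length falls roughly as 1/sqrt(layers), while the
      # heat stays in the same footprint -- hence the water cooling.
      ```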
    • When I was working with some related stuff, the idea was that each die would have a bunch of holes in it, that would allow you to stack the dies and run coolant through the holes, while simultaneously acting as interconnects -- so ideally you could stack your L2 cache, or maybe the entire system memory, right on top of the CPU and have the whole address bus right there. Your memory would be closer than the far side of the processor die so your memory access time would be infinitesimal. Then you stack THAT
    • Re: (Score:3, Insightful)

      by colmore ( 56499 )
      How is this insightful? Water cooling may never be feasible for you, unless you count the datacenters you use for networked applications (like... reading Slashdot) or the large numerical processors that enable the science and engineering behind the crap you use every day.

      Water cooling wasn't invented by overclockers. Cray used it in many of their production systems in the 70s and 80s and its use with CPUs goes further back than that.

      The stack of chips is to increase the connectivity between the mul
  • Electrolysis (Score:5, Interesting)

    by mrbluze ( 1034940 ) on Friday June 06, 2008 @10:45AM (#23682011) Journal

    To cool the stack at a rate of 180 watts per layer, water flows down 50-micron channels between the stacked chips.
    I wonder what reactivity of water with the surrounding surfaces will do to the life of the chip. AFAIK pretty much anything that uses water has an inherent limitation to its life, owing to the presence of superoxide radicals and free hydrogen ions.
    • Re: (Score:3, Funny)

      superoxide radicals
      Sounds like a sweet name for a band.
      • by jd ( 1658 )
        Prior generations have already tried for better punk through chemistry.
    • If the inside of the system is all made of one material, couldn't you just put in deionized water and hope for the best? Copper, silver, and silicon are pretty water-resistant when there isn't anything in there with them to catalyze the reaction.

      • by chthon ( 580889 )

        I once saw a demonstration (mid-'80s, I think), at an exhibition, of a water-purifying system. The demo consisted of a tank of water with a playing television in it. The back was removed to demonstrate that all the components were effectively submerged under water.

        • Yeah, DI water doesn't conduct electricity very well at all. It also has a pretty high dielectric constant of about 80. Unfortunately it becomes a pretty good conductor once enough ions leak into it. Was the purifying system on while the demo was running?
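        A quick sketch in Python of just how little ultrapure DI water conducts. The 50-micron channel width is the article's figure; the channel length, the 1 V bias, and the square cross-section are my assumptions:

        ```python
        # Leakage through a column of ultrapure deionized water.
        # 18.2 Mohm*cm is the standard resistivity figure for ultrapure water.
        RESISTIVITY_OHM_CM = 18.2e6

        def channel_resistance_ohms(length_cm: float, width_um: float) -> float:
            """R = rho * L / A for an assumed square channel cross-section."""
            area_cm2 = (width_um * 1e-4) ** 2  # microns -> cm, then squared
            return RESISTIVITY_OHM_CM * length_cm / area_cm2

        r = channel_resistance_ohms(length_cm=1.0, width_um=50.0)
        print(f"resistance ~ {r:.1e} ohms")         # ~7.3e11 ohms
        print(f"leakage at 1 V ~ {1.0 / r:.1e} A")  # ~1.4e-12 A, i.e. picoamps
        # Negligible -- until ions leach into the loop and resistivity collapses,
        # which is exactly the parent's caveat about keeping the water clean.
        ```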

    • Re: (Score:2, Insightful)

      by rahunzi ( 968682 )
      What about FREEZING???? You limit the chip to an environment where water is liquid. Also, the size of water molecules is finite while chip sizes keep decreasing; water will not... I guess a PLATE or MANIFOLD would work and simplify connectivity... this is also SCALABLE into some supercooling/superconductivity scheme.
      • Is freezing a big problem with the computers you're using?
        • by Jeremi ( 14640 )
          Is freezing a big problem with the computers you're using?


          I'm running Windows ME, so yes.

      • What about FREEZING????

        You can only hope. If the heat block attached to the CPU is full of frozen water, it'll end up absorbing a lot of heat before the ice melts to liquid. Water has a very high specific heat.
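        For fun, a rough Python estimate of the thermal buffer that frozen block would buy against the article's 180 W layer. The 10 g of ice and the temperature figures are invented for illustration:

        ```python
        # Energy absorbed warming 10 g of ice from -10 C, melting it, then
        # heating the meltwater to 40 C; all masses and temps are made up.
        C_ICE = 2.09      # J/(g*K)
        C_WATER = 4.186   # J/(g*K)
        L_FUSION = 334.0  # J/g, latent heat of fusion

        mass_g = 10.0
        warm_ice = mass_g * C_ICE * 10.0      # -10 C -> 0 C
        melt = mass_g * L_FUSION              # phase change at 0 C
        warm_water = mass_g * C_WATER * 40.0  # 0 C -> 40 C
        total_j = warm_ice + melt + warm_water
        print(f"{total_j:.0f} J buys {total_j / 180.0:.0f} s at 180 W")
        # ~5200 J, i.e. about half a minute, and most of it comes from the
        # melt: it's really the latent heat of fusion, not the specific
        # heat, doing the heavy lifting here.
        ```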
    • Perhaps they will utilize a sacrificial anode (another great name for a band...)
    • As I said in another post, the stuff I was working with -- which might be entirely different than this -- didn't have water actually in physical contact with the face of the silicon. The die had holes in it, rather than just pads for the bond-out wires. The holes were attached to short pieces of tubing (somehow, by some magic) that served as interconnects between the die layers, so the water was only touching the inside of the stacked-up tube segments. The heat was being drawn from the die where it touch
    • IIRC, the silicon channels are passivated with a silicon dioxide layer (think glass). That should act as a barrier to any chemical reactions.
  • by iamdrscience ( 541136 ) on Friday June 06, 2008 @10:47AM (#23682037) Homepage
    How can IBM be this stupid? You can't cool a stack of chips with water, they'll just get soggy. I know it's hard to be patient, but if your chips are too hot to eat, you're better off just waiting for them to cool down.
    • Insightful?

      Moderators are on crack this morning, again.

      • Re: (Score:3, Insightful)

        It makes sense: although modding a funny post as "funny" may be more accurate, modding it as "insightful" instead is definitely funnier, so it's a much more appropriate moderation. You see, the moderator is making a joke about the joke. I believe this is called "metamoderation" -- if you have an account, you may have noticed Slashdot encouraging you to metamoderate from time to time.
      • Moderating "funny" raises the post's score but not the poster's karma, providing an opportunity for karma loss.

        If you think "funny" is worth karma, then you mod "insightful" or "interesting" or whatever, depending on how quirky you're feeling that day. That way the user doesn't ever lose karma for being funny, unless they're at the karma kap, where in practice it no longer matters: if you got there, you can probably get there again just fine, and you can afford to lose five points of karma. (It's when there's a modding w

  • 180 Watts per layer (Score:2, Informative)

    by javilon ( 99157 )
    Sounds like too much, with typical numbers around 60 watts per processor these days.
    • These are not your average Centrino procs; this is server hardware, probably running at close to 100% load. It's gonna get hot :P
    • Sounds like too much, with typical numbers around 60 watts per processor these days.
      Yes but it's 3D ! Ergo the 180.
    • I think a "layer" consists of more than a core: "IBM plans to stack memory chips between processor cores to multiply interconnections by 100 times while reducing their feature size tenfold." The article doesn't really say, but there could be a 4-core Power6 chip and 2 gigs of RAM in a layer for all I know.
    • by rts008 ( 812749 )
      These are server chips, so IBM is trying to engineer a partial solution to the Slashdot Effect [wikipedia.org], bane of servers worldwide.

      Besides, this is not IBM's first rodeo. I imagine that they might have put just a little bit of thought into this.
  • 3D CPU structure (Score:3, Interesting)

    by Lord Lode ( 1290856 ) on Friday June 06, 2008 @10:51AM (#23682087)
    I always liked the idea of a 3D CPU with all the cores and memory interwoven in a way that gives the optimal short paths for its purposes. A LOT of memory could be right next to the CPU. It would be fast even without clocking it very high, so it wouldn't even have to consume that many watts per layer. It's a crazy amount of watts per layer mentioned in the article, btw...
  • by kcbanner ( 929309 ) * on Friday June 06, 2008 @10:51AM (#23682095) Homepage Journal
    Right now, if the pump is off or the flow isn't flowing, the processor is none the wiser and happily starts up. I've seen my Core2Duo hit 100C when my pump died; my only warning was the computer shutting off when it hit the temp cap. There needs to be some sort of control system that is actually linked to the processor, so that it won't start if the flow of water through the block (or now, the CPU itself) is below a certain rate. Most people who use watercooling know what they are doing, so this usually isn't an issue; it would just be nice to know the server rack won't melt itself when someone blows the pump breaker.
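    Something like the interlock the parent wants, sketched in Python. The sensor function, threshold, and timing are all hypothetical, purely to show the shape of the logic; in a real server this would live in the BMC or power-sequencing firmware, not the OS:

    ```python
    import time

    FLOW_MIN_ML_PER_MIN = 100.0  # hypothetical minimum safe flow rate
    SETTLE_SECONDS = 2.0         # give the pump a moment to spin up

    def read_flow_sensor() -> float:
        """Stub for a platform-specific flow-meter driver; returns mL/min."""
        raise NotImplementedError("depends on the actual sensor hardware")

    def safe_to_release_reset() -> bool:
        """Hold the CPU in reset until coolant is demonstrably moving."""
        deadline = time.monotonic() + SETTLE_SECONDS
        while time.monotonic() < deadline:
            if read_flow_sensor() >= FLOW_MIN_ML_PER_MIN:
                return True
            time.sleep(0.1)
        return False  # firmware would log the fault and keep the rails off
    ```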
    • With the rapid self-destruction inherent in this system, I would like to think the engineers have addressed this issue.
    • Or, just make the circulation system extremely reliable. You had a broken pump, and I know I hate fans, since they always break, and what good is a video card with a broken fan? I seem to recall some systems where the absorbed heat is used to boil the water, which drives it through cooling fins. This seems great: 1) no mechanical parts to fail, 2) the change of state absorbs lots of energy, 3) no additional energy is used to power a pump or fan.
    • by Hatta ( 162192 )
      So, your pump died and then your CPU shut itself off when it got too hot. Isn't that how it's supposed to work?

      I don't understand what you're asking for. Why should the processor care whether the pump is running, as long as it's still cool? If it's too hot, it shuts itself off. If the chip gets damaged, the temp cap was too high.
      • Yea, but for this application, where the water is actually flowing through the chip, it's pretty critical that it keep flowing... I mean, yea, the shutdown will happen, but it's kinda like how some UPSes work... they tell the computer, "ok, I'm gonna run out of juice soon, better do a clean shutdown"...
  • We just finished removing all the water-cooling tubing that the old mainframes used... But hey, don't tell anyone that water-cooling big computers isn't a new idea :-).
  • Risky (Score:2, Funny)

    by Tribbin ( 565963 )
    If the water gets to 100C, it will boil and leave the processor in an insulating bubble.
  • by kiehlster ( 844523 ) on Friday June 06, 2008 @11:02AM (#23682239) Homepage
    I can see it now, "IBM struck with class-action lawsuit after several incidents of computers being left out in the cold of winter cause the processors to explode due to the natural properties of water expanding into ice. Other incidents with water contamination in liquid nitrogen-cooled 3-D processors have resulted in a similar lawsuit."
  • I am not an engineer, but I've been kicking this concept around in my head for a while, short paths FTW. I always thought of thermoelectric cooling solutions, water-through-the-chip... wow
    • BTW, I was thinking of 16 exabytes of RAM for each processor core, on the same chip, so the Bus only feeds the peripherals.
        Like I said, IANAE.
  • Funny, I thought this thing had already existed for a while at IBM. They called it Multi-Chip Module-Vertical (MCM-V) at the time. But maybe just the cooling had to be redone for those power-hungry modern cores.
  • by mad zambian ( 816201 ) on Friday June 06, 2008 @11:08AM (#23682321)
    IBM and water cooling of chips is not really new. I remember reading about some research they did back in the '80s, when they etched microchannels on the back of processor chips and forced water through them. IIRC, they reckoned they could eventually dissipate almost 1 kW per square centimeter.
    You want to drive bipolar chips fast, you apply more power. And end up with a piece of silicon dissipating way more heat per unit area than an electric fire. Mind you, so do Athlons.
  • by Anonymous Coward
    If you like to read more information on multicore processors, go to http://www.multicoreinfo.com/ [multicoreinfo.com] .

  • plumbing always leaks eventually - what a mess - my system melted down, and there's coolant all over the cpu -- blech. :-P

  • Does anyone remember the good old days when metal-gate CMOS represented a power-efficient process? We have gone from CMOS devices consuming milliwatts and microwatts to processors with 125 W+ total power dissipation. This announcement is talking about 180 watts per layer!

    How long will it be before my computer heats my house while I browse the internet? When does the first combined datacenter and heating cogeneration system get installed?

    • Re: (Score:3, Interesting)

      by Yetihehe ( 971185 )

      How long will it be before my computer heats my house while I browse the internet? When does the first combined datacenter and heating cogeneration system get installed?
      About two months ago. http://www.ecofriend.org/entry/ibm-manages-to-warm-pool-water-with-its-heat-emissions/ [ecofriend.org]
    • Sure, generating less heat in the first place is a good idea. But there will still be data centers where a lot of processing happens. So you can either have a low-density datacenter in a huge air-conditioned facility, or a smaller high-density setup where the waste heat is collected more efficiently.

      For home PC's, I think power consumption has hit the ceiling already. Power isn't getting any cheaper, and "green" is trendy.

      • For home PC's, I think power consumption has hit the ceiling already

        Correct. PC builders are limited to what a single wall plug can provide: 110 V AC at ~10 A, about 1,100 watts, is all you're getting.

    • Re: (Score:2, Informative)

      by NameIsDavid ( 945872 )
      In those good old days, CMOS was efficient because a CMOS gate draws very little power when it is not switching. Leakage current could be very small back when power supplies were 5 V, since transistor threshold voltages could be high enough to keep leakage tiny. The power drawn during switching was the main component, and it was relatively small because clock speeds were low. Now both static and dynamic power are high, and even roughly equal, in modern chips. High clock speeds mean high dynamic power
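      The parent's point in one formula: the standard first-order model for CMOS switching power is P = alpha * C * V^2 * f. A sketch in Python with illustrative numbers, not taken from any real chip:

      ```python
      def dynamic_power_watts(alpha: float, c_farads: float,
                              v_volts: float, f_hertz: float) -> float:
          """First-order CMOS switching power: P = alpha * C * V^2 * f."""
          return alpha * c_farads * v_volts**2 * f_hertz

      # "Good old days": 5 V supply, a few MHz, modest switched capacitance.
      old = dynamic_power_watts(alpha=0.1, c_farads=1e-9, v_volts=5.0, f_hertz=4e6)
      # Modern chip: ~1.1 V, but far more capacitance and a multi-GHz clock.
      new = dynamic_power_watts(alpha=0.1, c_farads=300e-9, v_volts=1.1, f_hertz=3e9)
      print(f"old ~ {old * 1000:.0f} mW, new ~ {new:.0f} W")
      # old ~ 10 mW, new ~ 109 W: V^2 dropped ~20x, but C*f grew by five
      # orders of magnitude, and static leakage now piles on top of this.
      ```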
    • Re: (Score:3, Interesting)

      by necro81 ( 917438 )
      CMOS is still a whole lot more power-efficient than the TTL logic (i.e., bipolar junction transistors) that it replaced. Ideally, a CMOS transistor only requires power when switching states, whereas a BJT burns power continuously. Per transistor, they are a much better way to go.

      The problem with high total power dissipation is the result of several interrelated trends, all of which can be related to Moore's Law. More transistors got crammed onto a single chip (a linear increase in power dissipation
      • by necro81 ( 917438 )

        Thinner gate oxides permitted greater leakage currents.
        Sorry, I meant "thinner gate oxides permitted faster switching at lower voltage, but at the cost of greater leakage current."
  • Isn't this just miniaturizing what Cray used to do? They had processor/system boards in Fluorinert; IBM has processor cores in water.
    • yes, kind of. but we all know the ability to miniaturize things is in no way novel or interesting, especially when it significantly improves capabilities.
  • Unless the water is very, very pure, moving water will have effects. In all systems that use moving water, some amount of material is removed by impurities over time. For most systems, the damage is microscopic, so it doesn't matter much, and periodic maintenance and replacement of parts is enough to combat it. But at the scale of these chips (microns), any damage may be serious.
  • Deionized water is available, cheap and not a problem if there's a spill. No messing with hazardous materials or stringent environmental restrictions. That makes good sense.

    "Welcome to Jiffy-stop. It's time for a power-flush and fill for your supercomputer. That'll be $19.95 with the coupon from Sunday's paper."
  • Why use water? That's harder and slower to move when it gets hot. Here's my idea. Stack boards of processors and then leave two opposite sides of the rack open and put one of those massive movie hurricane wind fans on one side and turn it on. I mean sure it draws some serious power but with a constant 80 MPH wind, your chips will stay damn cool! Plus, no need to worry about dust :D
