
The Not-So-Cool Future

CowboyNeal posted more than 9 years ago | from the things-heating-up dept.

Technology 155

markmcb writes "Researchers at Purdue University and several other universities are looking to start work on a major problem standing in the way of future chip design: heat. The team is proposing a new center to consolidate efforts in finding solutions to a problem expected to become a reality within the next 15 years, as future chips are projected to produce around 10 times as much heat as today's chips. The new center would work to develop circuits that consume less electricity and couple them with micro cooling devices."


Timeline (0, Redundant)

kushboy (233801) | more than 9 years ago | (#12255506)

I remember reading Timeline, and they were talking about the limit of chips. No point in investing since they'll just get so small that they'll burn themselves up.

Re:Timeline (0)

Anonymous Coward | more than 9 years ago | (#12255622)

In Soviet Russia CPUs heat you!

Re:Timeline (1)

Armadni General (869957) | more than 9 years ago | (#12256113)

I must be in Soviet Russia, then, because my computer keeps my feet toasty warm.

Re:Timeline (0)

Anonymous Coward | more than 9 years ago | (#12255960)

Because if Crichton says it, it must be true.

But think about the,,, (5, Funny)

Deltaspectre (796409) | more than 9 years ago | (#12255508)

Think about the people up in northern Canada, who need that precious heat! Unless this is some evil conspiracy to kill them off?

Re:[OT]But think about the,,, (1)

Ghoser777 (113623) | more than 9 years ago | (#12255764)

Sig response: Actually, yes mine is - thanks for asking!

Re:But think about the,,, (0)

Anonymous Coward | more than 9 years ago | (#12256021)

It is alright; they will keep warm clubbing baby seals to death.

Nothing new (5, Insightful)

koreaman (835838) | more than 9 years ago | (#12255510)

What this boils down to is "researchers are looking at ways to make cooler chips." Well, duh, haven't they always?

Nothing new-Lost and not found. (0)

Anonymous Coward | more than 9 years ago | (#12255535)

As long as there's "work", there will always be losses (usually in the form of heat, but...)

Photonic chips? (4, Insightful)

Mysticalfruit (533341) | more than 9 years ago | (#12255525)

I thought the future of processors was going to be photonic processors. I'm not sure whether these will produce any heat or not.

Re:Photonic chips? (1)

LiENUS (207736) | more than 9 years ago | (#12255593)

Yes, ultimately they will produce heat. When an electron is excited by a photon it moves to a higher energy orbit; when the electron falls back to its original orbit, it gives off that energy as infrared.

Re:Photonic chips? (1)

gnuman99 (746007) | more than 9 years ago | (#12255648)

No. When it falls back down, it will most likely give back the same photon, unless it goes through more than one transition to get back to the ground state.

Heat is caused by friction, not electron energy state transitions! There is no energy "loss" as heat in electron state transitions.

Re:Photonic chips? (1)

Detritus (11846) | more than 9 years ago | (#12256389)

That's why I always keep a can of electron grease in my toolbox. It helps to prevent lasers from overheating and seizing.

Re:Photonic chips? (2, Insightful)

Have Blue (616) | more than 9 years ago | (#12255635)

Everything that performs work produces heat. This is what we mean by "nothing can be 100% efficient".

Re:Photonic chips? (2, Informative)

marcosdumay (620877) | more than 9 years ago | (#12256322)

The technologies we have now for photonics produce an incredible amount of heat (if you use millions of switches). They can't compete with CMOS. And there is no theoretical limitation in either field that makes one more attractive than the other for low power consumption.

Re:Photonic chips? (1)

renoX (11677) | more than 9 years ago | (#12256379)

Well, apart from the obvious thermodynamic laws, which imply that they must produce some heat, I think that photonic processors will produce a lot of heat.

Think about it a little: photons do not interact directly, so you need some matter to mediate the interactions, and photon-matter interactions will definitely generate heat, possibly a lot of heat. Many useful interactions are "second order" effects, i.e. the change of transparency of the matter is a 'byproduct', which means the light must be very intense to induce the change, which in turn means photonic processors won't be useful for many years.

One place where they may be useful is as a 'Fourier transform' coprocessor or as an interconnect bus, but since the main processor is unchanged, this means electrical/optical conversions, which release quite a lot of heat.

Not Cooling (5, Interesting)

LordoftheFrings (570171) | more than 9 years ago | (#12255543)

I think that the solution to the heat problem will not come from better and more powerful cooling solutions, but rather from radically changing how chips are designed and manufactured. The article doesn't contradict this, but I just want to emphasize it. Having some liquid nitrogen cooling unit is not the optimal solution, or even a good one.

diamond cooling (3, Informative)

myukew (823565) | more than 9 years ago | (#12255558)

they should look for ways to mass-produce cheap diamonds.
Diamonds are about five times better at conducting heat than copper and could thus be used for passive cooling.

Re:diamond cooling (1)

AaronLawrence (600990) | more than 9 years ago | (#12255582)

Diamonds would not be any better for passive cooling than aluminium (or copper). The rate at which they can transfer heat to the air has nothing to do with how well they conduct heat internally.

Re:diamond cooling (1)

LiENUS (207736) | more than 9 years ago | (#12255612)

I thought diamonds weren't any better at conducting heat, and if anything are worse; they just didn't burn up when heated as quickly as silicone, making them a good replacement for silicone in the processor itself.

Re:diamond cooling (1)

myukew (823565) | more than 9 years ago | (#12255651)

Diamonds may conduct heat, but not electricity, so they aren't usable at all for chip manufacturing.
FYI, silicone is in breasts. Silicon is in chips.

Re:diamond cooling (1)

LiENUS (207736) | more than 9 years ago | (#12255747)

You don't put straight diamond into the chips... you dope it with copper, just as with silicon.

Re:diamond cooling (2, Informative)

LiquidCoooled (634315) | more than 9 years ago | (#12255754)

Actually, diamond is looking better and better for use as a replacement for silicon.

see here [geek.com] for more info.
(This was reported extensively at the time)

Re:diamond cooling (4, Informative)

kebes (861706) | more than 9 years ago | (#12255765)

Actually many researchers are in fact seriously pursuing using diamond as a future replacement for silicon. Both diamond and silicon are *very bad* conductors in their pure state. Both have to be doped (with phosphorus, boron, etc.) to become p-type or n-type semiconductors, which makes them useful as a substrate for microprocessors (note that when doped they are semiconductors, not conductors... your microchip would just short out if the entire wafer was made of a metal/conductor).

Diamond's superior thermal, optical, and chemical-resistance properties make it attractive for future microprocessors... but unfortunately it is more difficult to make it work as a semiconductor, which is why silicon has always been the substrate of choice.

It's very interesting research, and we'll see where it goes. For more info, this C&E News article is good, [acs.org] or check here, [ferret.com.au] or here [geek.com] and there's a bit here. [wikipedia.org]

Re:diamond cooling (1)

drinkypoo (153816) | more than 9 years ago | (#12256321)

but unfortunately it is more difficult to make it work as a semiconductor, which is why silicon has always been the substrate of choice.

Not to mention that sand is cheap, but DeBeers has been artificially raising the prices of diamonds for ages, and they have usually been expensive and/or difficult to manufacture.

Re:diamond cooling (1)

kebes (861706) | more than 9 years ago | (#12256521)

You're right, diamond is more expensive. But let me add:

Most real proposals for using diamond in microprocessors suggest using synthetic diamond, not natural diamond. You can use CVD (chemical vapor deposition) to make good quality artificial diamonds. Currently, growing CVD-diamond is expensive, but then again, taking sand and purifying it into a huge cylinder of single-crystal silicon is also not cheap. If synthetic diamond research continues, it could prove to be competitive with Si.

The cost of DeBeers natural diamonds is inflated based on the rarity of natural diamonds (and successful marketing), not based on superior performance. Synthetic diamonds are in fact much better for industrial uses (like bits for high-performance oil drills) because they are cheaper and you can tune the manufacturing to optimize for the important figures-of-merit.

Re:diamond cooling (0)

Anonymous Coward | more than 9 years ago | (#12255767)

Though diamond isn't as good a semi-conductor material as silicon due to a wider band-gap (>3eV compared to silicon's 1.12eV), it can be successfully n- and p-doped to form p-n junctions and FETs with (iirc) enough gain for logic applications.

See here [iop.org] for further detail.

You're right in saying that zero-temperature, un-doped diamond is an insulator, but then so is zero-temperature un-doped silicon.

Re:diamond cooling (2, Funny)

Cheap Imitation (575717) | more than 9 years ago | (#12255741)

The ultimate way to propose to that geek girl you love... a diamond engagement heatsink!

Thermal conductivity. (0)

Anonymous Coward | more than 9 years ago | (#12255826)

Thermal conductivity chart. [hypertextbook.com]
Please note the plane-parallel thermal conductivity of graphite, [electronics-cooling.com] which greatly exceeds diamond's.
If you don't want to peruse the linked material... thermal conductivity (W/m·K): copper = 401, diamond = 895, graphite = 1950.
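
To put those numbers in perspective, here's a rough Python sketch of one-dimensional conduction through a heat-spreader slab (Fourier's law), using the conductivities quoted above; the slab geometry and the 40 K temperature drop are assumed figures for illustration only:

conductivity = {"copper": 401, "diamond": 895, "graphite (in-plane)": 1950}  # W/(m*K), from the chart above

A = 1e-4   # slab area, m^2 (1 cm^2, assumed)
L = 3e-3   # slab thickness, m (3 mm, assumed)
dT = 40.0  # temperature drop across the slab, K (assumed)

for material, k in conductivity.items():
    Q = k * A * dT / L  # watts conducted through the slab (Fourier's law)
    print(f"{material:>20}: {Q:6.0f} W")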

Re:diamond cooling (1)

Tablizer (95088) | more than 9 years ago | (#12256469)

they should look for ways to mass produce cheap diamonds. Diamonds are about five times better at heat conducting as...

If you tried to do that, DeBeers would Jimmy Hoffa you faster than the oil companies did to that guy who invented the 150 mpg engine.

1kW?! (3, Insightful)

AaronLawrence (600990) | more than 9 years ago | (#12255562)

("ten times as much heat as today's processors")
I don't think that 1kW processors will be practical. Nobody is going to want to pay to run that, and nobody will want a heater running in their room all the time either.

I'd say that they should be looking to limit it to not much more than current figures (100W) - maybe 200W if we are generous. After that it gets silly.

Re:1kW?! (0)

Anonymous Coward | more than 9 years ago | (#12255686)

Maybe all that heat could be turned into electricity...

Re:1kW?! (1)

Mahou (873114) | more than 9 years ago | (#12255726)

Yeah, but processors will get much smaller. If you remember from your school days: what gives off more heat energy, one candle or a fire (a fire as in, you know, a campfire; I'm not saying candles don't have fire), if they're burning at the same temperature? So even though they get hotter, it won't be a heater.

Re:1kW?! (1)

gnuman99 (746007) | more than 9 years ago | (#12255729)

I would not buy a processor with a rating of 100W. 80W is crazy, but beyond 100W the fan gets noisy as hell.

100W * 5c/kWh -> ~$45/year to power it (yeah, low power prices in Canada thanks to tons of hydro :). If you raise it to 20c/kWh, you are paying about $180/year to power your 100W processor... Double that? 10x that? Not me.
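
For what it's worth, here's a quick Python sketch of that arithmetic; the wattages and the two electricity rates are the figures quoted above, and everything else is plain unit conversion:

def annual_cost(watts, rate_per_kwh):
    # energy used per year in kWh, times the price per kWh
    kwh_per_year = watts / 1000 * 24 * 365
    return kwh_per_year * rate_per_kwh

for watts in (100, 200, 1000):
    for rate in (0.05, 0.20):  # $/kWh, the two rates quoted above
        print(f"{watts:5d} W at ${rate:.2f}/kWh -> ${annual_cost(watts, rate):8.2f}/year")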

Re:1kW?! (0)

Anonymous Coward | more than 9 years ago | (#12255763)

No doubt the eager young /. scamp meant to say a tenfold increase in heat density (which is the important thing)

Re:1kW?! (3, Informative)

kebes (861706) | more than 9 years ago | (#12255866)

FTA:
Current chips generate about 50-100 watts of heat per square centimeter.
"But in the future, say 15 to 20 years from now, the heat generation will likely be much more than that, especially in so-called hot spots, where several kilowatts of heat per square centimeter may be generated over very small regions of the chip..."


Let's not confuse power with power density. When the article says "10 times the heat" they mean kW/cm^2, not kW. Chips of the future will generate a few kW/cm^2 of heat in their hottest spots, but they will still be supplied from conventional 200W power supplies that run off of normal 120V power lines. It's the dissipation of so much heat in such a small area that is the issue, not the raw amount of energy being consumed.

So, again, it's not that the processor will draw 1 kW of power (it may draw considerably less), but rather that its hottest spots will need to dissipate ~1 kW/cm^2 (i.e.: 1000 joules of heat per second per square centimeter).
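
A small Python sketch of that distinction; the package power, die area, and 1 mm^2 hot-spot size are assumed numbers, picked only to show how a modest total power can still mean on the order of 1 kW/cm^2 locally:

package_power_w = 100.0   # total chip power (assumed)
die_area_cm2 = 1.2        # overall die size (assumed)
hotspot_power_w = 10.0    # power concentrated in one small region (assumed)
hotspot_area_cm2 = 0.01   # 1 mm^2 hot spot (assumed)

print(f"average flux over the die: {package_power_w / die_area_cm2:7.1f} W/cm^2")
print(f"flux over the hot spot   : {hotspot_power_w / hotspot_area_cm2:7.1f} W/cm^2")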

Re:1kW?! (1)

Detritus (11846) | more than 9 years ago | (#12256489)

1+ kW processors used to be common, back when processors were built from hundreds, or thousands, of chips. Cooling wasn't that difficult. You just needed a source of chilled air at positive pressure. The power density was low, so all you needed was a steady flow of air over the IC packages.

Breeze (4, Funny)

MikeD83 (529104) | more than 9 years ago | (#12255563)

"Meanwhile, the cloud of electrons would be alternatively attracted to and repelled by adjacent electrodes. Alternating the voltages on the electrodes creates a cooling breeze because the moving cloud stirs the air."

Amazing, Purdue is developing the same technology used in such high tech devices as the Ionic Breeze air purifier. [as-seen-on...tore-1.com]

Re:Breeze (1)

alfrin (858861) | more than 9 years ago | (#12255941)

great, now we are going to have to worry about chips polluting us [healthdiaries.com]

Hot and bothered! (3, Interesting)

3770 (560838) | more than 9 years ago | (#12255583)

Not that I claim to have a solution to the problem of overheating processors. But the power consumption of computers is starting to bother me.

I used to want the fastest computer around. But a few things have changed I guess.

First of all computers are starting to be fast enough for most needs.

Secondly, the way I use computers has changed with always on Internet. I never turn my computer off because I want to be able to quickly look something up on the web.

I also have a server that is running 24/7. Most of the time it is idling, but even when it is working I don't need it to be a speed demon.

So it is starting to be really important for me that a computer doesn't use a lot of power. I don't know if it affects my electric bill in a noticeable way, but it feels wrong.

Re:Hot and bothered! (2, Interesting)

Hadlock (143607) | more than 9 years ago | (#12256026)

So it is starting to be really important for me that a computer doesn't use a lot of power. I don't know if it affects my electric bill in a noticeable way, but it feels wrong.


Well, a quick Google says it's about five cents per kWh... assuming your server spins down the disk drives when idling, and your monitor turns off when not in use, you're probably averaging 200 watts. That comes out to about $6.72/month in electricity, or $80 per year.

If you're looking for power savings, an old laptop with an external hard drive only consumes about 15W at idle... or about $6 per year. For what you spend in two years running your "server" you could have a decent laptop + ginormous 120 gig external drive as your server, and look things up "instantly" from your bedside.

what about parallel (1)

myukew (823565) | more than 9 years ago | (#12255594)

As other Slashdot articles have proposed, future PCs (probably) won't be much more powerful than today's, but rather, like back in the mainframe days, will depend on some supercomputer selling its processing power.
Obviously such a mainframe can use massively parallel processing techniques where cooling is less of an issue.

Re:what about parallel (1)

Zo0ok (209803) | more than 9 years ago | (#12255658)

Yeah! If we have long-distance low-ping connections as in The Matrix!

For editing media - maybe...
For playing games - NO

What other high-performance jobs are PCs supposed to perform? Hi-speed decompression of tar-balls?

Re:what about parallel (1)

myukew (823565) | more than 9 years ago | (#12255707)

one has to assume ultra-fast gigabit internet for this to work, of course...

Re:what about parallel (1)

Short Circuit (52384) | more than 9 years ago | (#12256279)

Viewing Slashdot in a tabbed-browsing session. All those CPU-intensive Vonage ads bring my 750MHz Duron to a crawl.

Screw this (2, Funny)

Timesprout (579035) | more than 9 years ago | (#12255597)

We need to start working on the next generation of gerbil powered chips asap!!

Alliances (3, Informative)

Brainix (748988) | more than 9 years ago | (#12255605)

The alliance proposed in the article, to me, seems similar to the AIM Alliance [wikipedia.org] of the early 90s. Several companies united in a common goal. I've heard the AIM Alliance failed because competitors united in a common goal remain competitors, and as such tend not to fully disclose "trade secrets," even to further the common goal. If this proposed alliance takes off, I fear it will suffer the same fate as the AIM Alliance.

But can you make a cluster of them...? (3, Insightful)

ites (600337) | more than 9 years ago | (#12255606)

Not a joke.

The future is multi-core / multi-CPU boards where scaling comes from adding more pieces, not making them individually faster.

Yes, chips will always get faster and hopefully cooler, but it's no longer the key to performance.

Re:But can you make a cluster of them...? (1, Informative)

Anonymous Coward | more than 9 years ago | (#12255665)

Unfortunately, with a multi-core/multi-CPU system, you will probably use more power, and you will produce an enormous amount of heat within the case (although not all on one die). That heat then has to be removed from the inside of the case one way or another, so it still wouldn't solve the problem.

Re:But can you make a cluster of them...? (1)

drinkypoo (153816) | more than 9 years ago | (#12256306)

Monolithic cores with higher speeds are faster than multiple processors for some types of problems, and not for others. Having two processors doesn't make your system twice as fast unless you normally spend an inordinate amount of time context switching. While I have been eagerly awaiting the introduction of multiprocessing into the home market - make no mistake, this IS the first time any significant effort is being made to sell multiple cores to consumers - I'd still rather have a few very fast processors than a whole mess of slow ones for most purposes. Connection Machines are cool, but I can only imagine how hard it would be to parallelize (for example) a first-person shooter to that level and really utilize the whole machine.

15 years? Try two years ago. (0)

Anonymous Coward | more than 9 years ago | (#12255615)

I realized heat had gotten way out of control years ago when I got a spanking new Duron 800 and hooked it up without a heat sink... so much for that CPU.
Compare that to my trusty 400 MHz K6-2. The fan died the other day and all it did was reboot.
Now let's not even begin to discuss the P4. The heat problem is not years off; it's today, and it is very serious already. In fact we're a few years into it already.

Re:15 years? Try two years ago. (0)

Anonymous Coward | more than 9 years ago | (#12255812)

Your P4 will continue to run just fine without a heatsink due to clock throttling and a thermal diode that reacts fast enough to prevent the chip going up in smoke before it even notices (as opposed to Durons etc.)

There was a THG article where they did this very thing in fact.

hardware DRM (2, Interesting)

GoatPigSheep (525460) | more than 9 years ago | (#12255617)

When I think of future problems that will happen to hardware, Hardware DRM comes to mind.

Re:hardware DRM (1)

Bullfish (858648) | more than 9 years ago | (#12255757)

Two words: bios hack

A strange question, but... (0)

Anonymous Coward | more than 9 years ago | (#12255631)

Where does the heat really come from in chips? What I mean is - wires in my house have current running through them, but they don't need cooling, so why does a chip on a much smaller scale?

Why are CPUs so different from a lot of electronics out there? Is it the component count, the tiny size, or the close proximity of the components that makes them hot?

And another few points: why are they in such large packages and why aren't the pins (metal that passes closer to the core than anything else) designed to aid cooling?

Re:A strange question, but... (3, Interesting)

myukew (823565) | more than 9 years ago | (#12255752)

It's the size.
Compare the typical light bulb with the typical wire running through your house: the light bulb gets hot because of its thin wire.

Re:A strange question, but... (0)

Anonymous Coward | more than 9 years ago | (#12255889)

In complementary logic, each gate flip draws some parasitic current, and each transistor has an off-state leakage current to boot. You can't do much about these things, and the design of the transistor dictates how much current you will draw for a fixed voltage. Hence you need to supply enough current to your chip to meet the demand of each transistor, otherwise they'll all stop working (because the supply voltage will drop). Multiply by 19 million transistors and there's your problem -- you need to dissipate 19 million times some small amount of power in a space the size of a P4 die (just over a square centimeter).

The heat spreader passes about as close to the core as you can get. But you're in a temperature regime where the substrate and ceramic that encase the package aren't that much worse heat conductors anyway and there's a limit to the sensible size of a package and hence a limit to the size of the pins you can make.

Ultimately the heat would have to pass through the bond-pads of the device, and I can't see Intel wanting to double the size of the bond pads considering the associated silicon cost.
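
As a rough back-of-the-envelope sketch of that budget, here's some Python; the activity factor, per-transistor capacitance, voltage, frequency, and leakage current are all assumed, order-of-magnitude values, not real P4 figures:

transistors = 19e6   # transistor count mentioned above
alpha = 0.15         # fraction of transistors switching each cycle (assumed)
c_gate = 2e-15       # switched capacitance per transistor, farads (assumed)
v = 1.4              # supply voltage, volts (assumed)
f = 3e9              # clock frequency, Hz (assumed)
i_leak = 200e-9      # off-state leakage per transistor, amps (assumed)

p_dynamic = alpha * transistors * c_gate * v**2 * f   # switching power
p_leakage = transistors * i_leak * v                  # static leakage power

print(f"dynamic: {p_dynamic:4.1f} W, leakage: {p_leakage:4.1f} W, "
      f"total: {p_dynamic + p_leakage:4.1f} W over roughly 1 cm^2 of die")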

Do something about the noise first. (1)

qualico (731143) | more than 9 years ago | (#12255640)

Working on the latest generation of computers, it's no surprise that the cheaper/generic fans are very noisy, trying to spin faster to compensate for the greater cooling requirements.
An efficient and inexpensive cooling solution would be more desirable, IMHO.

Has anyone else experienced the "jet engine" noise coming from newer systems?

I guess if you make a chip with less need for cooling, we'll solve this puzzle too; however, that may be the more expensive road to the solution of fan noise, no?

Re:Do something about the noise first. (1)

myukew (823565) | more than 9 years ago | (#12255694)

I think not.
Heavy research has gone into silencing aircraft turbines, and it's not as easy as one might think. I guess it's the same with the fans cooling your CPU. Unless you want to pay $100 per fan, with special widgets to reduce the noise, you won't get very silent fans.
IMHO it's much easier to reduce the overall heat output of a system than to develop silent fans. As a plus, less heat means less power consumption, and nobody wants to pay those bills.

Re:Do something about the noise first. (1)

qualico (731143) | more than 9 years ago | (#12255762)

Yeah, that's true.
It would be great if they made a chip that needed only passive cooling, instead of using any fans.

Re:Do something about the noise first. (0)

Anonymous Coward | more than 9 years ago | (#12256266)

Check out the Antec "silent" cases. They make silent fans by making the fans larger, so they can displace the same amount of air at lower RPMs. The noisiest component of my Antec Aria system is the CPU fan.

Re:Do something about the noise first. (1)

Short Circuit (52384) | more than 9 years ago | (#12256316)

My 750MHz Duron doesn't give me any trouble with noise. But it's not exactly a "newer" system.

The 24-port switch I picked up recently easily drowns it out. :-(

heat has already been MOBO issue (4, Interesting)

KarmaOverDogma (681451) | more than 9 years ago | (#12255642)

Especially for those of us with newer motherboards who want a completely silent system with as few fans as possible.

First it was CPUs with cooling via big/slow/no fans and big heatsinks, then PSUs and GPUs, and now MOBOs. My current custom box (now 14 months old) was built to be silent, and I had a hard time settling on a motherboard that was state of the art, stable, and still used a passive heatsink to cool the board chipset fan-free. I finally settled on an Asus P4P800.

I can definitely believe heat becoming even more of an issue. For those of us who want power/performance and quiet at the same time, this will become even more of a challenge as time goes on. I for one hope not to rely on expensive and/or complicated cooling devices, like Peltier units, water pumps, and the like. I hope the focus is on efficient chips that only clock up/power up as they need to, like the Pentium M.

my 2 cents.

Fans (0)

Anonymous Coward | more than 9 years ago | (#12255650)

No guys please stop this research...I really prefer a cpu fan, a northbridge fan, a graphics card fan, and two case fans that sound like a freaking 747. Why don't they just design a case where one whole side is a freaking fan?

10 times more heat? (3, Funny)

kennycoder (788223) | more than 9 years ago | (#12255674)

Whoa, that's cool; now it means no more petrol is needed.

If I take out my CPU cooler it reaches about 100°C. Now let's see, 100 x 10 = 1000°C in only 15 years of the chip industry. If we manage to put this heat to work, let's say we can have 'PC + hairdryer' packages or 'PC + free home-heating' winter offers or even 'PC - burn-a-pizza' boxes. Think about it, it's only good news.
Funny, -1

Let me get this straight (2, Funny)

MrP- (45616) | more than 9 years ago | (#12255683)

So... in 15 years, my PC will be 470C? (1,166F)

And I thought my room got hot when my PC was ~45C.

I mean, I have to leave my windows open during the winter snow when it's in single digits just so I don't get all sweaty.

I think I'll need to find a new hobby in 15 years.

Re:Let me get this straight (1)

myukew (823565) | more than 9 years ago | (#12255733)

I think you need to get a bigger room. The 300W or so your computer uses would hardly be enough to heat a toilet...

Re:Let me get this straight (1)

Short Circuit (52384) | more than 9 years ago | (#12256335)

Put three 100W light bulbs in a large cardboard box, and stand by with a fire extinguisher.

Re:Let me get this straight (0)

Anonymous Coward | more than 9 years ago | (#12255968)

I think your 470C number is too high.

If your chip currently runs at ~45C, and your room temperature is ~25C, then it's running at 20C above RT. Ten times that means that in the future it will run at 200C above ambient, which is 225C (which is still a lot!). (And actually, the temperature would be even a bit lower, since the higher the temperature gradient, the faster heat dissipation occurs, which reduces the gradient more efficiently... so 10X heat production generally means less than 10X final temperature excess.)

Of course, the whole point is that 225C is way too much. The 225C assumes that the heat production is 10X higher, but the cooling solution has not changed. The point of all this research is to find cooling solutions that can keep up with these heat dissipation requirements, so that your processor of the future will still run at ~50C even though it's generating tons of heat.
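
The same estimate in a few lines of Python, using a lumped thermal-resistance model; the 0.4 K/W junction-to-ambient figure is an assumed value, chosen so the 50 W case lands at roughly today's ~45C:

t_ambient = 25.0   # room temperature, deg C
r_theta = 0.4      # junction-to-ambient thermal resistance, K/W (assumed)

for power_w in (50, 500):   # today's chip vs. a hypothetical 10x-hotter one
    t_die = t_ambient + power_w * r_theta   # steady-state die temperature
    print(f"{power_w:4d} W -> die at {t_die:5.1f} C ({power_w * r_theta:5.1f} C above ambient)")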


Why is heat reclamation not worth it? (2, Interesting)

EbNo (174259) | more than 9 years ago | (#12255697)

I'd like to hear from some engineering types about why we can't use the excess heat from CPUs to do useful work. I know virtually all large-scale methods of generating electricity involve generating large amounts of heat through some process (nuclear reactions, burning coal or oil, etc), using it to create a hot gas, which turns a turbine, generating electricity.

I also have some vague handwaving idea that there are processes for generating electricity that have to do with harnessing temperature differentials, but I really don't know what I'm talking about.

Anyway, why can't we have little gas turbine generators (or some other method) in our machines that reclaim some of this lost energy, instead of wasting it? Seems like the aggregate energy amounts would be pretty large.

Re:Why is heat reclamation not worth it? (1)

argent (18001) | more than 9 years ago | (#12255780)

Google on "Stirling engines".

Google on "Thermodynamics, laws of" while you're about it.

A Carnot Engine as Heat Sink? (0)

Anonymous Coward | more than 9 years ago | (#12255838)

Sure, run it by Intel.

It's called cogeneration. The big problem is the source of cold. You want a large temperature differential, so you either need something really cold or have to be willing to let your CPU get really hot, hot enough to incandesce. The latter would let you replace your LEDs with light from the CPU. You could then replace your incandescent light bulbs with your surplus LEDs.

Re:Why is heat reclamation not worth it? (0)

Anonymous Coward | more than 9 years ago | (#12255903)

something like this [wikipedia.org] ?

Anyone more clued-in about thermocouples want to comment on using them as a general way to make waste heat do useful work?

Re:Why is heat reclamation not worth it? (1)

chubaca (593891) | more than 9 years ago | (#12256022)

Okay, according to theory it is possible to use that heat. But it would be economically unsound.

The most efficient heat transfer can be achieved by convection, using materials (fluids) with high heat absorption (such as water) and moving said fluid (now hot) to the power generator. The required devices would be at least the size of your desktop. And they would be expensive too.

And, as with all thermal and mechanical processes, they are not 100% efficient (2nd law of thermodynamics), neither on the CPU side nor on the turbine side. So your generator would also dissipate heat (and it is noisy too).

As for the methods using temperature differentials, I am no expert, but IIRC they are not very efficient, or at least are slow (they need large masses of matter to be practical because the rate of electricity generation over time is low). So they could not transfer heat fast enough for your CPU to cool off.

Re:Why is heat reclamation not worth it? (2, Informative)

kebes (861706) | more than 9 years ago | (#12256066)

In principle, yes, any temperature gradient can be harnessed to do some amount of useful work. Thermodynamics certainly allows this (without perfect 100% conversion, obviously).

AFAIK, it really is an engineering issue. Converting a temperature gradient to electricity works great when you have huge temperature gradients (like in nuclear reactors, coal plants, steam engine, etc.), but is not so useful in a computer tower. Firstly, the whole point of putting fins on a chip is to spread the heat out quickly, so that it doesn't build up and make the chip too hot (i.e. melt it and stuff). So for our chips to work, we can't run them any hotter than 60C (or maybe 100C or whatever). The gradient between 60C and room temperature, over a few centimeters, is not that great (imagine putting a paddle wheel above your CPU, and letting the current of up-flowing air turn it... now imagine how much useful work that puny paddle wheel is really going to do). If you actually built a device to extract that energy, it wouldn't be worth it. It would take a 1000 years (or whatever) of running it before the electricity savings would offset the cost of having built that little device.

So even though in principle you're right, in practice (from an engineering perspective) there's no economic advantage to doing this.

Another fun fact is that it takes about ~7 years of using a solar panel before the energy savings offset the production cost. So solar panels that burn out before this mark are actually *worse* for the environment than getting electricity from coal (or wherever)... (because producing a solar panel also pollutes the environment). Solar power is only going to be viable if panels are either 1. cheaper, 2. longer-lasting, or 3. more efficient than they are now (all of the above would be great).

Lastly, thermodynamics guarantees that in the winter, in a cold place, it's impossible to waste electricity (if you have a thermostated heating system). Basically any inefficiency in your home (be it from your vacuum cleaner or computer) ends up as heat, which makes the house warmer, and makes the thermostat's job a little easier. In the summer, however, it really is wasted energy.
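
For anyone curious about the theoretical ceiling, here's a minimal Python sketch of the Carnot bound on recovering that waste heat; the die and room temperatures are roughly the ones discussed above, and the 100 W heat figure is assumed:

t_hot = 60 + 273.15    # die temperature in kelvin (~60 C, from the discussion above)
t_cold = 22 + 273.15   # room temperature in kelvin (~22 C)
waste_heat_w = 100.0   # heat coming off the chip (assumed)

eta_max = 1 - t_cold / t_hot   # Carnot limit on heat-to-work conversion

print(f"Carnot limit: {eta_max:.1%}")
print(f"Best-case recoverable power from {waste_heat_w:.0f} W of heat: {eta_max * waste_heat_w:.1f} W")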

It's getting hot in here... (1)

Bananatree3 (872975) | more than 9 years ago | (#12255698)

Take off all your... um... inefficient circuits! It's getting hot in here, take off all your inefficient circuits!

Energy (1)

ValiantSoul (801152) | more than 9 years ago | (#12255717)

Using energy creates heat. If they use less energy there is less heat. I think they should ignore the direct problem and fix the indirect problem.

w00t (2, Funny)

zionwillnotfall (876558) | more than 9 years ago | (#12255787)

w00t, no more heaters! now we just need a new way to cool my house...

Mini Lightning next to the CPU??? (1)

qualico (731143) | more than 9 years ago | (#12255796)

"The microscopic cloud of ionized air then leads to an imbalance of charge in the micro-atmosphere, and lightning results. "

Using lightning to cool a CPU?
Doesn't EMF pose a problem here?

Guess you could shield, but that's counterproductive, isn't it?

Re:Mini Lightning next to the CPU??? (0)

Anonymous Coward | more than 9 years ago | (#12255947)

You don't mean EMF (electro-motive force), you mean ESD (electro-static discharge). And TFA does mention problems with discharge.

If you ask me, it sounds like a daft idea: the pins on modern CMOS are protected by voltage clamp circuits that mean external ESD is much less of a problem (which is why you can handle RAM without blowing it up). But cooling would require convection over the die face, with no protection circuits to help.

Re:Mini Lightning next to the CPU??? (1)

qualico (731143) | more than 9 years ago | (#12256137)

I could have this all backwards since it's not my area of expertise.
I was thinking more along the lines of Electric and Magnetic Fields (EMF).

Shielding [fms-corp.com]

The lightning will induce electrical currents and interference in everything around it.

Here is a good chuckle:
Home project [asilo.com]
(The Windows95 screen shot)

Interesting sidebar:
A new electric producer [theverylas...ternet.com]

not exactly (1)

CaptnMArk (9003) | more than 9 years ago | (#12255797)

>problem that is expected to become a reality within the next 15 years as future chips are expected to produce around 10 times as much heat as today's chips.

This is bullshit. I am never even considering buying a >>100W CPU for my desktop, certainly not 1000W.

I'd rather see fewer fans in my machine, not more.

Looking into heat/area is more reasonable as area will decrease for a while still.

Re:not exactly (0)

Anonymous Coward | more than 9 years ago | (#12256092)

I think your computer will still run off of a 100W power supply... as explained in this comment. [slashdot.org]

Missing an option? (2, Interesting)

andreMA (643885) | more than 9 years ago | (#12255802)

It sounds like (RTFA? who, me?) they're focussing on either reducing the amount of heat generated or finding ways to dispose of it more efficiently. Important, sure... but what about developing more heat-tolerant processors? If things ran reliably at 600C, you'd have an easier time moving x amount of waste heat away to the ambient (room-temp) environment, no? Proportional to the 4th power of the temperature difference, no?

Or perhaps I'm grossly physics-impaired.

Re:Missing an option? (1)

NeoSkandranon (515696) | more than 9 years ago | (#12256138)

Dumping all that extra heat into the environment isn't really an option after a certain point. No one wants computers which will raise the ambient temperature 15 or 20 degrees in a bedroom (AMD and P4 jokes aside)

Re:Missing an option? (1)

Meumeu (848638) | more than 9 years ago | (#12256286)

No, convection and conduction are proportional to the temperature difference; radiation is proportional to the difference of the 4th powers of the temperatures. But I wouldn't rely only on radiation to cool my CPU...
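
A quick Python sketch comparing the two mechanisms for a small heat-sink surface; the emissivity, surface area, and convection coefficient are assumed values, so only the trend matters:

sigma = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2*K^4)
eps = 0.9         # surface emissivity (assumed)
area = 0.05       # heat-sink surface area, m^2 (assumed)
h = 25.0          # forced-convection coefficient, W/(m^2*K) (assumed)
t_amb = 298.0     # ambient temperature, K (~25 C)

for t_surface in (333.0, 873.0):   # ~60 C and ~600 C surface temperatures
    q_rad = eps * sigma * area * (t_surface**4 - t_amb**4)   # Stefan-Boltzmann law
    q_conv = h * area * (t_surface - t_amb)                  # Newton's law of cooling
    print(f"T = {t_surface - 273.15:5.1f} C: radiation {q_rad:7.1f} W, convection {q_conv:7.1f} W")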

Expect to see Asynchronous Processors instead (0)

Anonymous Coward | more than 9 years ago | (#12255829)

Begging forgiveness in advance for possibly over-generalising:-

The alarming heat output of modern processors is to a significant degree caused by the fact that they have a multi-GHz clock which must run at all times even when the chip is idle. Asynchronous chips have no clock - which means when they are idle, they do not generate anywhere near as much heat. And as a general rule, RISC processors are more efficient than CISC processors running at the same clock speed, so I predict these factors in combination will push the industry towards asynchronous RISC designs in preference to trying to run current-model processors at insane temperatures. Googling for asynchronous processors will provide a good variety of info on this interesting subject.
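
A tiny Python sketch of why losing the always-running clock helps; the switched capacitance, voltage, frequency, and activity factors are assumed, order-of-magnitude values:

c_switched = 15e-9   # total switched capacitance of the chip, farads (assumed)
v = 1.4              # supply voltage, volts (assumed)
f = 3e9              # equivalent clock rate, Hz (assumed)

def dynamic_power(activity):
    # activity = fraction of that capacitance actually toggling each cycle
    return activity * c_switched * v**2 * f

print(f"busy, clocked (activity ~1.0)              : {dynamic_power(1.0):6.1f} W")
print(f"idle, clocked (clock tree still toggling)  : {dynamic_power(0.3):6.1f} W")
print(f"idle, asynchronous/clockless (~0.01)       : {dynamic_power(0.01):6.2f} W")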

Re:Expect to see Asynchronous Processors instead (2, Insightful)

dfghjk (711126) | more than 9 years ago | (#12256378)

"And as a general rule, RISC processors are more efficient than CISC processors running at the same clock speed"

Where did that "general rule" come from? It's nonsense.

ARE WE GETTING DUMBER? (0)

Anonymous Coward | more than 9 years ago | (#12255878)

Are we getting dumber or something? Or, more likely, is this just academic masturbation?

The solution has been around for a long time. I feel like I should keep it a secret and patent it (again) for this particular purpose. But you know what? I don't give a sh*t, 'cause the whole patent system stinks to high heaven too. And I'd just get abused by some asshole money man in the end anyway. Heh, can you tell I'm a bright but very poor man living in a sea of capitalist pig-sharks? No, I'm not cynical ;-)

Anyway, the solution is.... *drum roll please*...

THE STIRLING ENGINE!

Idiots.

Is this a feature of x86? (1)

mister_jpeg (46354) | more than 9 years ago | (#12255942)

What would it take to replace x86 with another chip like Crusoe or MIPS and make it better for desktop PCs?

optical chips are the answer!!!!! (1)

the_2nd_coming (444906) | more than 9 years ago | (#12255963)

hello!!! work on that.

Patents with funding money. (1)

qualico (731143) | more than 9 years ago | (#12255972)

"Mechanical engineers at Purdue have filed patents for ... "

"The patents arose from a research project funded in part by the National Science Foundation."

The idea of getting the NSF to fund (in part) the research that will later lead to the mechanical engineers getting the patent would be a great way to make money at the expense of others.

Should not the patent rights be shared among those who funded the project?

Re:Patents with funding money. (1)

Short Circuit (52384) | more than 9 years ago | (#12256393)

Patents are better than, say, making the developed process a trade-secret. When you get a patent, the process is out in the open, for everyone to see. People can license it in order to use it, and, after a while, a license is no longer required.

A trade secret, on the other hand, need never be released.

Various solutions (2, Insightful)

jd (1658) | more than 9 years ago | (#12256102)

One "obvious" solution to the chip heating problem would be the following:


  • Have a thin layer of some liquid like Fluorinert over the chip surface. It just has to conduct heat well, but not electricity.
  • Put a Peltier device in contact with the top of the liquid. Peltiers are metal, which is why you want the electrically insulating layer.
  • Have the top layer of the Peltier device double as a cold-plate.


This would let you get all the benefits of existing tried-and-tested cooling methods, but would eliminate the bugbears of the chip's casing being an insulator and the possibility of condensation screwing everything up.


A variant on this would be to have the chip stand upright, so that you could have a cooling system on both sides. The pins would need to be on the sides of the chip, then, not on the base.


A second option would be to look at where the heat is coming from. A lot of heat is going to be produced through resistance and the bulk of chips still use aluminum (which has a relatively high resistance) for the interconnects. Copper interconnects would run cooler, and (if anyone can figure out how to do it) silver would be best of all.


A third option is to look at the layout of the chips. I'm not sure exactly how memory chips are organized, but it would seem that the more interleaving you have, the lower the concentration of heat at any given point, so the cooler the chip will run. Similarly for processors, it would seem that the more spaced out a set of identical processing elements are, the better.


A fourth option is to double the width of the inputs to the chips (e.g. you'd be looking at 128-bit processors) and to allow instructions to work on vectors or matrices. The idea here is that some of the problem is in the overheads of fetching and farming out the work. If you reduce the overheads, by transferring work in bulk, you should reduce the heat generated.

DVDs not DVD's (0)

Anonymous Coward | more than 9 years ago | (#12256451)

Oh for the love of god, where do you people learn this?

Human Brains (1)

Rupy (782781) | more than 9 years ago | (#12256490)

Human brains, being as powerful processors as they are, don't run as hot... therefore a PC chip doesn't _need_ to either, surely?

RE: Human Brains (1)

BuddyJesus (835123) | more than 9 years ago | (#12256566)

The difference is that while today's x86 processors run at full clock speed almost all the time, the human brain does no such thing.
You are never using 100% of your brain all the time; the usage depends on how much you need. Thus, your head never overheats.

its good to get off to an early start (1)

Enrique1218 (603187) | more than 9 years ago | (#12256638)

because I want the G8 to go into the PowerBook first when it's released. I'm tired of this whole G5 fiasco.