
Copper-Graphene Nanocomposite Cools Electronics Faster & Cheaper

Soulskill posted more than 2 years ago | from the miles-davis-of-nanocomposites dept.

Hardware 56

samazon writes "North Carolina State University researcher Jag Kasichainula has developed a 'heat spreader' to cool electronics more efficiently using a copper-graphene composite, which is attached using an indium-graphene interface film. According to Kasichainula, the technique will cool 25% faster than pure copper and will cost less to produce than the copper plate heat spreaders currently used by most electronics (abstract). Better performance at a lower cost? Let's hope so."


56 comments


better performance, lower cost (2, Funny)

Anonymous Coward | more than 2 years ago | (#39636239)

your mom's got better performance at a lower cost. thanks! I'll be here all week.

Graphene - the super material (1)

cmdr_klarg (629569) | more than 2 years ago | (#39636327)

What's next? Leaping tall buildings in a single bound?

Re:Graphene - the super material (1)

Anonymous Coward | more than 2 years ago | (#39636807)

No, probably soon to be named super carcinogen responsible for the death of thousands.

Odd, hardware as "vaporware" (4, Interesting)

pla (258480) | more than 2 years ago | (#39636365)

First, this sounds great - Cheaper and better (plus the "Now with Graphene(tm)" factor), what's not to like?

That said, we've heard about dozens of better ways to cool chips, from chips where the heat sink passes through the die, to silicon with fluid channels, to built-in peltiers, to microturbines, etc.

These all have the potential to dramatically improve cooling while reducing the cost to do so... And they all have the same glaring flaw - Where do I buy one?

Re:Odd, hardware as "vaporware" (1)

ArcherB (796902) | more than 2 years ago | (#39636511)

First, this sounds great - Cheaper and better (plus the "Now with Graphene(tm)" factor), what's not to like?

Don't forget the cool buzzword "Nanocomposite". You can't go wrong with that kind of synergy!

Re:Odd, hardware as "vaporware" (2)

davester666 (731373) | more than 2 years ago | (#39639859)

Don't forget, the patent fee will more than make up the difference of any reduction of manufacturing cost.

Re:Odd, hardware as "vaporware" (1)

Bensam123 (1340765) | more than 2 years ago | (#39636553)

All of the above would involve Intel or AMD building them into the chip, which they have no plans of doing... So I'm guessing nowhere. This can be made by existing heatsink manufacturers... which means I'd give it a year or two and you'll start seeing them.

I'm not sure about the interface film, but that may be made into a TIM pad, or you might start seeing graphene thermal compounds. Either way this is quite a bit different from the other examples you listed.

Re:Odd, hardware as "vaporware" (1)

Amouth (879122) | more than 2 years ago | (#39636733)

Agreed - I want this mixed with the extremely impressive heat sink/fan design where the majority of the heat sink is spun as the blades, rather than a fan forcing air onto a surface area..

http://www.newscientist.com/blogs/onepercent/2011/07/new-heat-sink-could-slash-us-e.html [newscientist.com]

http://prod.sandia.gov/techlib/access-control.cgi/2010/100258.pdf [sandia.gov]

Which should be an extremely cheap design to license, if not free, as it was published by a government agency. It's more than 2 years old and requires no new tech to be built, just a difference in how we machine and build parts. Why can't we buy these yet? When could we see these made out of this new fun composite compound? Do I need to just give up and go down to the local machine shop and make it myself?

Re:Odd, hardware as "vaporware" (1)

xeromist (443780) | more than 2 years ago | (#39636823)

Well, we may actually see some progress. Supposedly Sandia had a demo day in November where they invited "Potential licensees and commercialization partners" [fbo.gov]

Re:Odd, hardware as "vaporware" (1)

Amouth (879122) | more than 2 years ago | (#39637133)

Thanks for the update, I'd not heard about that. I've been tempted to go to the local tech shop, try to build a rather large one, and give it a run on a small AC unit to see how well it works.. If their numbers work out it should prove to be a very good tech, and should be cheap, but we all know that won't happen.

Re:Odd, hardware as "vaporware" (1)

Nethemas the Great (909900) | more than 2 years ago | (#39637875)

I don't think they would be able to scale production for common use anyway. IIRC indium is in short supply, with the overwhelming majority of it coming from China.

Cheaper, in an elemental sense. (1)

pushing-robot (1037830) | more than 2 years ago | (#39636387)

25% faster than pure copper and will cost less to produce

...in much the same way that diamonds, being composed of carbon, cost less than copper.

Re:Cheaper, in an elemental sense. (2, Informative)

Anonymous Coward | more than 2 years ago | (#39636521)

Industrial diamond is not terribly expensive. It can be had for about 2 grand per kilo. Admittedly this is a good deal more than copper, but this graphene composite may be cheaper to manufacture than diamond.

The most important question (1)

Iniamyen (2440798) | more than 2 years ago | (#39636411)

How much will I be able to overclock my videocard with this technology?

Re:The most important question (0)

Anonymous Coward | more than 2 years ago | (#39636841)

As with all graphene techno-hype, the answer is simply "yes"

A bit more seriously, lab tests will need to be scaled up and mass-produced. Some efficiency factors will differ in the mass-produced version compared to the lab-crafted one. The lab case implies ~25% higher thermal conductivity than copper alone, so assume that equal structures and fans will give you 20% higher tolerance for waste heat.
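The back-of-envelope above can be made concrete with Fourier's law for conduction through a flat plate. A minimal sketch; the geometry and wattage are illustrative assumptions, and the 500 W/(m*K) figure simply applies the article's claimed 25% gain to bulk copper:

```python
def spreader_delta_t(q_watts: float, k: float, area_m2: float, thickness_m: float) -> float:
    """Temperature drop across a flat heat spreader (1-D Fourier's law)."""
    return q_watts * thickness_m / (k * area_m2)

Q = 100.0      # W, a mid-range CPU (illustrative)
AREA = 4e-4    # m^2: a 2 cm x 2 cm contact patch
THICK = 2e-3   # m: a 2 mm plate

dt_copper = spreader_delta_t(Q, 400.0, AREA, THICK)     # bulk copper, ~400 W/(m*K)
dt_composite = spreader_delta_t(Q, 500.0, AREA, THICK)  # +25% conductivity claim
```

At the same heat load, the composite drops the temperature difference across the spreader from about 1.25 K to 1.0 K in this toy geometry; the real win depends on the whole stack, not just the plate.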

Interesting. (3, Interesting)

jd (1658) | more than 2 years ago | (#39636457)

The biggest obstacle to higher clock speeds has been getting rid of the heat (which is why supercooled processors can be overclocked to 7 GHz). This could potentially lead to adding another GHz to clock speeds of domestic computers, perhaps 2 per node for top-end supercomputers. That's valuable, for although multicores are good, there just aren't that many decent parallel programmers out there. I (and a few others) find parallel programming easy but the vast majority of coders in the world got into the field as a way to get rich quick and aren't adept at anything beyond Visual Basic or the most trivial aspects of Java.

Badly-coded programs won't run better on multi-way chips, but can be forced to run faster on faster chips, so the only way to compensate for the lack of skill is to crank up the clock, which is only possible if you can avoid the chip cooking itself.

Re:Interesting. (3, Interesting)

ZankerH (1401751) | more than 2 years ago | (#39636497)

No, the biggest obstacle to higher clock speeds is the speed of light. At 1 GHz, information can only propagate around 30cm per processor cycle. If die sizes remain around 1 cm square, it's physically impossible to go above 30 GHz (give or take).

Re:Interesting. (0)

Anonymous Coward | more than 2 years ago | (#39636563)

Which means that another order of magnitude could be gained from adequate heat dissipation...

Re:Interesting. (1)

jd (1658) | more than 2 years ago | (#39637279)

That's a terminal barrier for synchronous chips, but it's not an obstacle at the puny 3GHz speeds we're currently operating with (especially as overclockers have already established the same chips are capable of 7GHz without issues).

By the time we get to 30 GHz, we may well be working with 3D chips. Furthermore, you don't need a standardized clock for asynchronous chips and async CPUs already exist. (There's even a program to help you design them listed on Freshmeat.)

Re:Interesting. (0)

Anonymous Coward | more than 2 years ago | (#39639911)

Could you find the program?

Re:Interesting. (1)

marcosdumay (620877) | more than 2 years ago | (#39637425)

Take a look at pipelines and all the other processor design techniques developed after the '80s.

Re:Interesting. (1)

gstrickler (920733) | more than 2 years ago | (#39637471)

Two corrections:

1. Speed of light in a wire is at best 0.7c, typically 0.5c-0.65c depending upon the material. So, cut your propagation distances accordingly.

2. As another poster suggested, that's only a limit for fully synchronous designs. Async (clockless) and semi-synchronous (partially asych with some clocking) designs are limited by switching times and feature density (which is related to both the speed of light and the gate size, but it's less rigidly limited than in synchronous designs).

Re:Interesting. (0)

Anonymous Coward | more than 2 years ago | (#39637541)

Speed of light in a wire is at best 0.7c, typically 0.5c-0.65c depending upon the material. So, cut your propagation distances accordingly.

actually, light doesn't travel in wires

consider "light" as "electromagnetic radiation" (1)

Chirs (87576) | more than 2 years ago | (#39638231)

and you'll find it does indeed travel in wires.

In fact, it travels at

    v = 1 / sqrt(εμ)

where

ε - the electrical permittivity of the material
μ - the magnetic permeability of the material
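Plugging numbers into that formula recovers both the vacuum speed of light and the slower propagation in a dielectric. A small sketch; the relative permittivity of 4.3 is a typical FR-4-style PCB value, used here only as an illustration:

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
MU0 = 1.25663706212e-6   # vacuum permeability, H/m

def phase_velocity(eps_r: float, mu_r: float = 1.0) -> float:
    """v = 1/sqrt(eps * mu) for an electromagnetic signal in a material."""
    return 1.0 / math.sqrt(EPS0 * eps_r * MU0 * mu_r)

c = phase_velocity(1.0)      # vacuum: recovers the speed of light
v_pcb = phase_velocity(4.3)  # FR-4-like dielectric: roughly 0.48c
```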

Re:Interesting. (1)

ZankerH (1401751) | more than 2 years ago | (#39641349)

1. Speed of light in a wire is at best 0.7c, typically 0.5c-0.65c depending upon the material. So, cut your propagation distances accordingly.

That can be fixed by replacing electronic processors with purely optical electronics.

Re:Interesting. (1)

gstrickler (920733) | more than 2 years ago | (#39643993)

The speed of light in an optical fiber is also about 0.7c. So, you can only improve on this if you have optical electronics that use air/gas/partial-vacuum tubes or channels as the transmission medium. The speed of light in air is >0.999c and will be similar for other gases at or below atmospheric pressure.

Re:Interesting. (1)

ewieling (90662) | more than 2 years ago | (#39638721)

I would be happy with 15Ghz.

Re:Interesting. (1)

Hentes (2461350) | more than 2 years ago | (#39636623)

It's not just about coding skill; some algorithms simply can't be parallelised.

Re:Interesting. (1)

jd (1658) | more than 2 years ago | (#39637009)

That is perfectly true, but many algorithms can be (and aren't). Even when algorithms have to be serialized, those algorithms generally form a small part of the overall program. (If you were to draw out a timing diagram for a program after the fashion of critical path analysis, you'd see lots of bits of work that don't need to be done sequentially. There isn't a serial list of dependencies, in the general case, for a complete program from start to end.)

Even knowing where things can be done in parallel isn't enough. There are single-threaded webservers that are faster than Apache's multithreaded model because of the overheads incurred by Apache's approach and the communications overheads of some of the decisions. (A serial server simulating parallel activity should NEVER be faster than a natively parallel server, because simulation itself involves significant overheads.) Simply adding threads to a program isn't enough - they have to be the right threads, communicating in the right way (Amdahl's Law).

You also need to look beyond the algorithm itself. In general, a program will have input, output, housekeeping and other activities. I'm sure you remember from SE that I/O should be independent of the algorithm. By making those distinct event-driven threads, you're not stuffing your program with conditional branches and that will make your program faster, smaller and more reliable.
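The "right threads, communicating in the right way" point can be quantified with Amdahl's Law: any serial fraction caps the speedup no matter how many cores you add. A minimal sketch:

```python
def amdahl_speedup(parallel_fraction: float, workers: int) -> float:
    """Upper bound on speedup when only part of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / workers)

# Even a 95%-parallel program tops out well below the core count:
eight_cores = amdahl_speedup(0.95, 8)    # ~5.9x, not 8x
limit = amdahl_speedup(0.95, 1_000_000)  # ~20x, no matter how many cores
```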

Re:Interesting. (0)

Anonymous Coward | more than 2 years ago | (#39637313)

Highly concurrent, single-threaded servers can absolutely outperform threaded servers. It all depends on the overhead of threads, which can be quite considerable. A concurrent web server might only need 100 bytes for each connection, whereas a threaded server might need a megabyte per connection (usually it's 8 MB per stack, actually, on Linux with C; Java can do better with segmented stacks, but there's other overhead to consider, especially with GC).

Even if there's more than a single CPU on the box, it's still feasible (and not uncommon) for the single-threaded server to beat out the threaded server. And that's because of memory bandwidth. If memory bandwidth is the limiting factor with a single CPU, then upping the CPU count isn't going to improve throughput--and may degrade--unless the extra CPUs' local cache can alleviate some of the pain; though the massively multi-core CPUs of today share most of their cache.

I can write single-threaded concurrent Lua scripts, leveraging epoll/kqueue - plus a suite of custom C networking code I've written over the years - which can destroy the equivalent Java programs on the low and medium end of the spectrum. For demanding stuff I just run the same scripts on multiple threads or processes and do simple IPC if they need to cooperate; again, destroying everything else.

Basically, "threading" your code is pointless. It's a half measure. Either stick to simple, single-threaded code which is blazing fast. Or get serious about scaling, which means cooperating not just across cores but across the data center; in which case, your design may still be single-threaded at the lowest levels. And that's significantly more complicated than spending several days debugging your cool new lock-free list implementation.
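The single-threaded, event-driven pattern the poster describes can be sketched in a few lines with Python's `selectors` module, which wraps epoll/kqueue under the hood. This is a toy echo handler over a local socketpair, standing in for real client connections; a real server would also register a listening socket:

```python
import selectors
import socket

# A socketpair stands in for one client connection.
client, conn = socket.socketpair()
client.setblocking(False)
conn.setblocking(False)

sel = selectors.DefaultSelector()  # epoll on Linux, kqueue on BSD/macOS

def handle(sock: socket.socket) -> None:
    data = sock.recv(1024)
    if data:
        sock.sendall(data.upper())  # the "work": echo, upper-cased

sel.register(conn, selectors.EVENT_READ, handle)

client.sendall(b"ping")
for key, _events in sel.select(timeout=1.0):
    key.data(key.fileobj)  # dispatch the stored callback: no threads, no locks

reply = client.recv(1024)  # b'PING'
```

One loop, one stack, and per-connection state is just whatever you register with the selector, which is where the tiny memory footprint per connection comes from.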

Re:Interesting. (1)

Bengie (1121981) | more than 2 years ago | (#39638131)

"For demanding stuff I just run the same scripts on multiple threads or processes and do simple IPC if they need to cooperate"

Or you can do the exact same thing within a single application and instead of the overhead of IPC, you can communicate within the same protected memory space, much faster.

If something as naturally parallel as a web server is slower multi-threaded, it is HORRIBLY programmed or is an old single-threaded server with a few threading tweaks.

disagree somewhat (1)

Chirs (87576) | more than 2 years ago | (#39638279)

Personally I think that single-threaded asynchronous software gives the highest performance. However, there are cases where you want to do things that don't have async APIs. In these cases you need some way of blocking in a synchronous API while letting the server do other work--and epoll/kqueue isn't always an option. In these cases threading can give slightly higher performance than separate processes.

In most cases though I prefer to use separate processes with explicit message-passing. It's easier to debug than using threads, and there is much less chance for one thing going crazy to cascade through and destroy everything else.

Re:Interesting. (1)

jd (1658) | more than 2 years ago | (#39639641)

Ultimately, if you are switching between tasks, you have the CPU and memory overhead of a task-switching mechanism plus the latency overhead. It makes no difference whether the mechanism is in the OS, the program or a tea cosy. If you have such a mechanism and it is already running and you are already paying the price for it, then provided the mechanism is implemented efficiently it will be cheaper to use what you're already paying for than to re-implement it in yet another layer.

Memory bandwidth is indeed a significant issue, but it's surely more expensive for task-switching (where you're pulling state data off a queue and pushing it back on) versus true parallel operation (where each task maintains its own state independently and the state is always resident). I'd frankly prefer a different architecture in the computer such that bottlenecks between the cores and memory were reduced through a better structure rather than a dependency on minuscule caches, but we have what we have. I'd also prefer network cards to be able to do DMA and bypass the kernel (since that would automatically halve the bus bandwidth consumed by network operations) - and some can, but it's not very common.

Destroying Java is easy, it uses lousy models. However, for any concurrent Lua program, I could write a parallel Occam program that could slaughter the Lua code.

Re:Interesting. (1)

geekoid (135745) | more than 2 years ago | (#39636707)

No. Barring the upper limit, and excluding theoretical quantum devices, the other bar keeping speeds stagnant is the fabs.
Fabricating at that level of density without leakage is very hard.
Fabs are very expensive to build.
Add to that, the consumer need for faster clocks has tapered off, so it's not worth the expense of massive retooling.

When they can get the metal contamination well below 1 part per billion in the fabs, and create a process to minimize wafer breakage for wafers being cut so precisely, then we may see a doubling of clock speed 2 more times. Then that will be it.

Re:Interesting. (1)

jd (1658) | more than 2 years ago | (#39637167)

For doubling, you're correct. But I'm talking about a 25-33% increase in clock speeds, not a 100-200% increase. And there's a far worse increase in leakage dropping from 35nm to 22nm than going from 3GHz to 4GHz (the proof of which is that you CAN run a Core2Duo at 7GHz reliably - which would be absolutely impossible if leakage was causing significant errors at that speed).

Fabrication AS IT STANDS is capable of making a 7GHz chip - we know this because the chips they produce can be run at that speed. The problem is that you've got to have the chip sitting in a bath of liquid nitrogen to run that fast. This isn't practical for home computers (yet) and for laptops could pose certain reproductive health problems if the piping broke.

If, however, you could produce superior cooling with existing chip designs, then that isn't an issue. You can ramp up the clock speed such that you generate the same concentrations of heat after the additional cooling is applied. True, the speedup isn't linear (P = I²R), but there is some.

The consumer need for faster clocks I've already addressed -- the programmers simply don't exist to make use of the technology as implemented, so you've got to adapt the technology to make use of the programmers as they are.
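The "speedup isn't linear" point is usually framed with the first-order CMOS switching-power model, P ≈ C·V²·f: frequency alone scales power linearly, but a clock bump that also requires a voltage bump scales it much faster. A sketch with an illustrative capacitance, not a real chip's figure:

```python
def dynamic_power(c_eff: float, volts: float, freq_hz: float) -> float:
    """First-order CMOS switching power: P = C * V^2 * f."""
    return c_eff * volts ** 2 * freq_hz

C_EFF = 1e-9  # effective switched capacitance in farads (illustrative)

p_3ghz = dynamic_power(C_EFF, 1.0, 3e9)
p_4ghz = dynamic_power(C_EFF, 1.0, 4e9)      # +33% clock at fixed voltage: +33% power
p_4ghz_hot = dynamic_power(C_EFF, 1.2, 4e9)  # same clock bump needing 1.2x voltage: +92% power
```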

Re:Interesting. (3, Insightful)

Calos (2281322) | more than 2 years ago | (#39638349)

Fabricating at that level of density without leakage is very hard.

It's impossible. There is always leakage. Yes, as you scale, the leakage does grow, both empirically and as a signal/noise problem. But there are ways to minimize this. This has been foreseen for some time, and a lot of research goes into ways to mitigate it.

Despite all the improvements made sub-surface - that is, how the semiconductor itself is altered - to allow scaling and improve efficiency, and how the tools and methods to make the devices have improved... the industry really hasn't had any radical changes in many years. It has been all planar designs that date back to the 70's. Sure, the materials have improved, and it's not entirely silicon any more... but still planar, and subject to some fundamental limits of the planar design and the substrate choice.

That's why Intel is pushing into 3D designs. Do some reading on FinFETs, and the benefits of them, especially with respect to leakage and control. And that can still be silicon based, and doesn't push at all into heterogeneous semiconductor systems.

  Fabs are very expensive to build.
Add to that, the consumer need for faster clocks has tapered off, so it's not worth the expense of massive retooling.

Oh [intel.com], really [xbitlabs.com]? Why are the industry giants doing it, then? Smaller die - this improves speed and potential clock, can improve power efficiency, and means more die/wafer or more advanced designs. Ability to do different etches, deposit different films, etc., to improve device characteristics.

Clockspeed isn't everything anyway, or we'd still be using the Pentium 4 chips that were pushing 4 GHz from the manufacturer, and not the 2 GHz-range Core 2/iX chips. Smart design can trump clockspeed. (I use Intel as an example here because they had the more recent significant architecture change which illustrates this point very well.) We could, y'know, go back to making Pentiums... with current manufacturing technology, we might make them, what, 1/8 the size? Could probably clock them at several GHz.

When they can get the metal contamination well below 1 part per billion in the fabs, and create a process to minimize wafer breakage for wafers being cut so precisely, then we may see a doubling of clock speed 2 more times. Then that will be it.

What makes you think metal contaminants and wafer breakage are the limiting factors to clockspeed scaling? And where do you get "a doubling of clock speed 2 more times" from? What are you considering the base clockspeed that you are multiplying? Seems like you're pulling it out of your ass.

Think about it. We're doing 3 GHz+ already. Doubling that puts us in the 6-8 GHz range. Doubling again puts us in the 12-16 GHz range. That's what people above are claiming as the fundamental limit in a synchronous chip, set by the propagation of a signal in a metal.

The speed of light in metal is in no way the limiting factor in clockspeed. That would be the case for a single wire in isolation. There are other effects, namely capacitive coupling, in a chip where you are wiring up billions of transistors, which are much more limiting. And we're talking wires of non-negligible resistance here - if you want to put a bunch of small transistors close together, you need to be able to make really thin metal wires to make the right connections. Assuming metal is the only interconnect, of course, and completely ignoring all the research into optical interconnects...
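The resistance-and-coupling argument is usually captured with the first-order distributed-RC (Elmore) delay model: wire delay grows with the square of length, which is why it, not time-of-flight, dominates on-chip. A sketch with made-up per-unit-length values, not a real process node:

```python
def elmore_wire_delay(r_per_m: float, c_per_m: float, length_m: float) -> float:
    """First-order distributed-RC (Elmore) delay of an unrepeated wire:
    t ~ 0.5 * r * c * L^2 -- quadratic in length, unlike time-of-flight."""
    return 0.5 * r_per_m * c_per_m * length_m ** 2

# Illustrative per-unit-length values for a thin interconnect:
R_PER_M = 5e7    # ohm/m
C_PER_M = 2e-10  # F/m

d_1mm = elmore_wire_delay(R_PER_M, C_PER_M, 1e-3)
d_2mm = elmore_wire_delay(R_PER_M, C_PER_M, 2e-3)  # doubling length quadruples delay
```

The quadratic scaling is also why designers break long wires into repeater-buffered segments, trading transistors for linear rather than quadratic delay.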

Re:Interesting. (0)

Anonymous Coward | more than 2 years ago | (#39637123)

That's valuable, for although multicores are good, there just aren't that many decent parallel programmers out there. I (and a few others) find parallel programming easy but the vast majority of coders in the world got into the field as a way to get rich quick and aren't adept at anything beyond Visual Basic or the most trivial aspects of Java.

So... is being an insufferable arrogant prick a parallelizable problem?

Re:Interesting. (1)

jd (1658) | more than 2 years ago | (#39637297)

How should I know, I'm not you.

Re:Interesting. (0)

Anonymous Coward | more than 2 years ago | (#39639565)

Apparently then, it's a unique problem.

Scala: parallel collections, functional + DSL (1)

MCRocker (461060) | more than 2 years ago | (#39637293)

although multicores are good, there just aren't that many decent parallel programmers out there. I (and a few others) find parallel programming easy

That's why languages like Scala [scala-lang.org] are so appealing.

Sure, there's no silver bullet to automagically solve all parallel programming problems, but languages like Scala have features like Parallel Collections libraries [scala-lang.org] , functional programming and Parallel Domain Specific Languages [scala-lang.org] that can abstract enough of the problems of parallel programming away that journeyman programmers have a decent chance of being able to work effectively with multiple cores [lampwww.epfl.ch] .
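The parallel-collections idea - write the operation once, let the library fan it out while preserving element order - has analogues outside Scala. A minimal Python sketch using `concurrent.futures` as the stand-in (an analogy, not Scala's actual API):

```python
from concurrent.futures import ThreadPoolExecutor

def work(x: int) -> int:
    return x * x  # stand-in for real per-element computation

data = list(range(10))

sequential = [work(x) for x in data]

# Same operation fanned out over a pool; map() preserves element order,
# so the parallel result is a drop-in replacement for the sequential one.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(work, data))
```

Caveat: Python threads share the GIL, so this only speeds up work that releases it (I/O, native extensions); for CPU-bound work, `ProcessPoolExecutor` is the drop-in alternative.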

Re:Interesting. (1)

StikyPad (445176) | more than 2 years ago | (#39637413)

I (and a few others) find parallel programming easy

Of course you do, which is why you know that not all tasks lend themselves to parallelization, and that even tasks which can be parallelized generally have a point after which the overhead adds more work than time saved. Parallel processing is a great tool to have, but that doesn't make it the right tool for every job.

Re:Interesting. (1)

Bengie (1121981) | more than 2 years ago | (#39638329)

I (and a few others) find parallel programming easy

I think parallel programming is easy and I don't understand all the trouble people have with it. The biggest issue is debugging, but as long as you have clear and concise entry and exit points in your code, it is just a matter of time to track down the issue. Unit testing + modular code = win

Graphene (4, Insightful)

geekoid (135745) | more than 2 years ago | (#39636565)

it's the new plastic.

Re:Graphene (1)

elsurexiste (1758620) | more than 2 years ago | (#39640147)

At least you can see plastic. We've been talking about graphene for years and I still haven't even heard of a product that uses it.


Imagine... (1)

Ken_g6 (775014) | more than 2 years ago | (#39636701)

Imagine a spinning [slashdot.org] copper-graphene heat sink!

Seamicro (1)

patfla (967983) | more than 2 years ago | (#39636741)

Upon learning of this, I thought it a clever idea for a next step in addressing the heat issue - at the level of rack servers; data centers; etc.

http://www.seamicro.com/ [seamicro.com]

Not your One Ring that Rules Them All but some problems (most) need to be attacked in pieces.

Silver (1)

tmosley (996283) | more than 2 years ago | (#39636871)

That puts it higher than silver, but not as high as diamond, and quite far below pure graphene.

Why not just use pure graphene, which has a thermal conductivity about 10x as high as copper/silver?
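For orientation, here are the materials this thread keeps comparing, side by side. The numbers are textbook room-temperature ballparks (in-plane graphene figures in particular vary widely by sample), and the composite entry simply applies the article's claimed 25% gain to bulk copper:

```python
# Approximate room-temperature thermal conductivities, W/(m*K).
K = {
    "copper": 400,
    "silver": 429,
    "claimed_composite": 500,   # copper + the article's claimed 25%
    "diamond": 2200,
    "graphene_in_plane": 4000,  # in-plane only; cross-plane is far lower
}

relative_to_copper = {name: k / K["copper"] for name, k in K.items()}
```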

Re:Silver (1)

marcosdumay (620877) | more than 2 years ago | (#39637459)

Probably because their graphene is polycrystalline (if one can call them crystals) and this composite is actually better at conducting heat from one crystal to the next than pure graphene. But I'm not sure, as I didn't read the scientific article.

Anyway, comparing the heat conductivity of this polycrystalline material with monocrystalline graphene is useless.

Re:Silver (1)

tmosley (996283) | more than 2 years ago | (#39638141)

Thanks, that makes sense. Graphene only conducts electricity within the plane, so it would make sense that that would be the case for heat as well.

Meth Addicts (1)

Anonymous Coward | more than 2 years ago | (#39636887)

Phew. I will have fewer meth addicts breaking into my computers and trying to scrap the copper from my CPU heatsinks.

Jagannadham Kasichainula (0)

Anonymous Coward | more than 2 years ago | (#39637339)

Got to love a guy who uses his first and last name interchangeably.

Graphite is already proven better than Copper (1)

Ion Berkley (35404) | more than 2 years ago | (#39639259)

Graphite heat straps are already common practice in Space and Aerospace roles. You think your overclocked gaming machine/room heater has problems? Try dissipating heat in a vacuum when there's nothing to convect.
http://www.techapps.com/thermal-straps.html [techapps.com]

Number 1 cooling method: environment (1)

wye43 (769759) | more than 2 years ago | (#39642411)

After spending thousands of euros on many various cooling systems across the years, I can tell you which one is the most effective:
The good old home air conditioning.

Perhaps reducing power consumption may beat the environment as the number 1 factor. We don't need more and more sophisticated cooling systems; we need less power consumption and a good environment.

This isn't exactly new (1)

Khyber (864651) | more than 2 years ago | (#39645001)

Carbon nanocomposite heatsinks have been in my LED panels for a couple of years. I've got an Al-C composite that pushes roughly 560 W/(m·K).
