
IBM Heralds 3-D Chip Breakthrough

kdawson posted about 7 years ago | from the more-moore dept.


David Kesmodel from WSJ writes to let us know about an IBM breakthrough: a practical three-dimensional semiconductor chip that can be stacked on top of another electronic device in a vertical configuration. Chip makers have worked for years to develop ways to connect one type of chip to another vertically to reduce size and power use. The IBM technique of "through-silicon vias" offers a thousand-fold reduction in connector length and a hundred-fold increase in connector density. The new chips may appear in cellphones and other communication devices as soon as next year. PhysOrg has more details.


99 comments

I for one (2, Insightful)

Mipoti Gusundar (1028156) | about 7 years ago | (#18702491)

Someone please to be taging story "Skynet"! I indeed am for one welcomiming outr new multiply layered siliconical overloads!

I hate big corporations! (-1, Offtopic)

Anonymous Coward | about 7 years ago | (#18702539)

I hate big corporations for no reason! Google was ok, until it went public. Then I irrationally started to hate them!

I'm enlightened!

More information (4, Informative)

karvind (833059) | about 7 years ago | (#18702541)

As the article says, they have been working on this for a long time, and they have published a few details before.

http://www.research.ibm.com/journal/rd/504/topol.html [ibm.com]

http://domino.watson.ibm.com/comm/pr.nsf/pages/news.20021111_3d_ic.html [ibm.com]

Re:More information (2, Funny)

Anonymous Coward | about 7 years ago | (#18704339)

This will fail miserably. It will be hell to repair these things when the middle component breaks.

Re:More information (0)

Anonymous Coward | about 7 years ago | (#18704941)

Umm. . . how exactly does one repair a 45nm semiconductor on a *2-D* layout? Not exactly something you use a hammer & screwdriver for. . .

Re:More information (0)

Anonymous Coward | about 7 years ago | (#18705843)

Why not? Young Albert Einstein, while growing up in Tazmania, split a beer atom with a hammer and chisel. And from that day on beer has had bubbles.

Re:More information (1)

Nefarious Wheel (628136) | about 7 years ago | (#18713961)

That's Tasmania, thank you very much. Albert also invented surfing and rock & roll, the latter event related to his girlfriend Marie Curie.

Very nice, but... (2, Interesting)

Rosco P. Coltrane (209368) | about 7 years ago | (#18702575)

Chip manufacturers had better define some kind of common norm for the Vcc, Vss, GND, bus, etc. pins on similar devices (ICs, RAM chips and such); otherwise it's back to square one with a circuit board that has to pick up the lines and reroute them to other components, and the advantage of this technology would be zilch.

custom integration (2, Insightful)

Gary W. Longsine (124661) | about 7 years ago | (#18702703)

It's likely that we'll see custom integration before standards like that settle out. When cell phone vendors crank out tens of millions of a given model, the economy of scale can be achieved reasonably. It won't be much different than the custom IC work that already happens in some devices like this. (The iPhone is a well known example).

Re:Very nice, but... (3, Interesting)

stevesliva (648202) | about 7 years ago | (#18702733)

otherwise it's back to square one with a circuit board that has to pick up the lines and reroute them to other components, and the advantage of this technology would be zilch.
There is no implied change in the chip packaging that is the interface to a circuit board. There are already plenty of packages that have two chip dice side-by-side. This will just stack the dice on top of each other within the package.

Re:Very nice, but... (2, Informative)

MightyYar (622222) | about 7 years ago | (#18702903)

They already do that, too. Stacked die are not new - this is simply a way to connect them without using a wire bonder or flip-chip. One of the traditional problems in wirebonder-less solutions is that you then have to match up the die with the substrate - this means that a simple silicon die shrink also requires a substrate re-design.

I think that this sounds like a relatively expensive process, but it should enable a thinner profile than flip-chip or wirebonding.

Re:Very nice, but... (3, Informative)

MightyYar (622222) | about 7 years ago | (#18703007)

Die shrinks happen way too quickly to establish standards. Most manufacturers don't even try to match up substrates with chips; they just use a wire bonder. Only packages with specialized requirements keep the substrate and chip matched up so that they can use flip-chip or some other interconnect process... inkjet heads still use TAB bonding, for instance.

Re:Very nice, but... (0)

Anonymous Coward | about 7 years ago | (#18707143)

You expect every semiconductor company in the world to put aside their differences, to coordinate the pinouts of ICs for some hobbyists?!
Just be glad they aren't all round, bwahaha!

I wonder how they will cool this? (4, Interesting)

LiquidCoooled (634315) | about 7 years ago | (#18702579)

Surely they need to cool the components in the middle of the stack? Unless they decide to leave some of the holes open, anything in the middle is going to overheat.

I always imagined this kind of tech running on some kind of multi layered wire fence with plenty of room for cooling.

Incidentally, didn't Hitachi beat them to the whole 3d element thing?
http://www.hitachigst.com/hdd/research/recording_head/pr/PerpendicularAnimation.html [hitachigst.com]

Re:I wonder how they will cool this? (3, Interesting)

s-gen (890660) | about 7 years ago | (#18702767)

It looks like they might be planning to pump liquid between the layers:

http://www.zurich.ibm.com/st/cooling/integrated.html [ibm.com]

Re:I wonder how they will cool this? (1)

Kythe (4779) | about 7 years ago | (#18703069)

Actually, I would think the tungsten interconnects themselves offer a thermal path currently unavailable with single-chip designs. Some of the interconnects could even be formed explicitly for that purpose.

Re:I wonder how they will cool this? (1)

timholman (71886) | about 7 years ago | (#18702813)

As a former IBMer, I know that one of IBM's biggest strengths in IC research has always been packaging technology. If they are confident enough to announce this as a breakthrough, then you can safely assume they've figured out how to tackle the thermal issues and keep the chip cool.

cooling? Like the 3GHz PowerPC? (1)

Gary W. Longsine (124661) | about 7 years ago | (#18702865)

Although I tend to agree with your statement, there is at least one well known example of a snafu in that area.

Re:I wonder how they will cool this? (2, Insightful)

Zantetsuken (935350) | about 7 years ago | (#18702941)

DISCLAIMER: Of course I didn't RTFA - cmon man, this is /. v2.0...

From the summary saying how it would mostly see use in cellphones and the like, I would think it would operate at low enough speeds/voltages to be able to get by with passive cooling...

Re:I wonder how they will cool this? (4, Informative)

SQL Error (16383) | about 7 years ago | (#18703415)

You wouldn't be able to stack multiple desktop CPUs, because it would generate too much heat. But you could stack a CPU on top of its own level 2 cache instead of next to it, making for shorter wires and a faster chip. Or stack a GPU on top of DRAM, so that you could have a 2048-bit bus instead of 256-bit.

Then they just rely on the upper layer to conduct enough heat to keep the low layers cool.
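A rough sense of what the 2048-bit bus above would buy, sketched in Python; the 1 GHz clock is purely illustrative and not from the article:

```python
def bandwidth_gbps(bus_bits, clock_hz, transfers_per_clock=1):
    """Peak memory bandwidth in GB/s for a given bus width and clock."""
    return bus_bits / 8 * clock_hz * transfers_per_clock / 1e9

# Hypothetical 1 GHz memory clock, single data rate per clock.
narrow = bandwidth_gbps(256, 1e9)   # conventional 256-bit bus -> 32 GB/s
wide = bandwidth_gbps(2048, 1e9)    # stacked-DRAM 2048-bit bus -> 256 GB/s
assert wide == 8 * narrow  # 8x the wires, 8x the peak bandwidth
```

Widening the bus scales peak bandwidth linearly, which is exactly why stacking DRAM directly on the GPU is attractive: thousands of short vias are cheap where thousands of package pins are not.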

Re:I wonder how they will cool this? (0)

Anonymous Coward | about 7 years ago | (#18714833)

So we'll have a laptop with a 3GHz G5 any day now, huh?

Re:I wonder how they will cool this? (5, Informative)

duncanFrance (140184) | about 7 years ago | (#18703703)

There are some thermal advantages to this sort of interconnect. Since it keeps the wirelength short it means the drivers don't have to be so powerful. Hence a fair amount less heat will be generated. Driving any amount of capacitance at GHz speeds wastes shed-loads of power.

Average power dissipated = V*V * f * C

So reducing V obviously makes a big difference (hence partly why operating voltages of ICs decrease with frequency), but getting C down will help also.
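The parent's formula can be sketched numerically; the capacitance, voltage, and frequency values below are invented for illustration, not taken from any real chip:

```python
def dynamic_power(c_farads, v_volts, f_hertz):
    """Average dynamic switching power: P = C * V^2 * f."""
    return c_farads * v_volts ** 2 * f_hertz

# Illustrative values: 1 nF of switched capacitance at 1.2 V and 1 GHz.
p_long = dynamic_power(1e-9, 1.2, 1e9)      # long off-package wires, ~1.44 W
p_short = dynamic_power(0.1e-9, 1.2, 1e9)   # 10x less C via short vias, ~0.144 W

# V enters squared, so dropping the supply voltage helps even more:
p_low_v = dynamic_power(0.1e-9, 0.9, 1e9)   # ~0.081 W
```

Cutting C linearly cuts power linearly; cutting V cuts it quadratically, which is why both levers matter.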

Re:I wonder how they will cool this? (2)

zmotula (663798) | about 7 years ago | (#18704587)

Surely they need to cool the components in the middle of the stack?
Unless they decide to leave some of the holes open then anything in the middle is going to overheat?

This is where the Menger Sponge [wikipedia.org] comes in...

Re:I wonder how they will cool this? (1)

spankey51 (804888) | about 7 years ago | (#18705839)

I used to ponder that question frequently, and found that Neal Stephenson's "The Diamond Age" had a "cool" solution to the problem. Basically, imagine a cube with holes drilled through it in a 3-D grid pattern, then just run water/coolant through it. Or use the extra surface area throughout the holes as a vaporizer medium at the bottom of a heatpipe [wikipedia.org] and move the heat to a heatsink bank somewhere above the processor.

Re:I wonder how they will cool this? (1, Interesting)

Anonymous Coward | about 7 years ago | (#18707671)

The most obvious cooling solution would be to create a chip/layer which is a pass-through connector as well as a heat pipe. The processor is then a sandwich alternating between computing layers and cooling layers. There would then be some parallel stack connecting these heat pipes together and to the primary cooling system (e.g. heat-sink/fan).

How about cooling via HCFC compounds? (1)

wilec (606904) | about 7 years ago | (#18729487)

Unless things move to optical or biological computing, I would think that future cooling of CPUs/GPUs etc. might very well be by immersion in a high-dielectric fluid or vapor such as HCFC compounds. For instance, where I work we have several Trane chillers whose motors (about 400 hp, 480 V, 3-phase) are cooled by running them in the same evaporator-side refrigerant fluid/vapor as the compressor vane assembly.

So if one placed the CPU, or for that matter the whole dang mother assembly, in a hermetically sealed vessel, one could simply dump the heat via an external condenser. I am not even sure a compressor would be required: if one immersed the entire assembly in fluid and the external casing had sufficient surface area to dissipate the heat, the whole thing might work via convection currents in the fluid, or with the assistance of a small pump for circulation. Of course, a full or even cascaded refrigeration cycle with compressor, condenser, and expansion device might be worth it to get lower operating temperatures for higher-end systems. Something with the capacity of a small window AC unit should be adequate for a pretty serious system, and not really all that expensive a solution.

Another approach might be to use solid-state cooling devices with their cold sides bonded to one side of each chip, and build the chips into a cubical or other appropriate geometric assembly with heat-sink sides on the exterior surfaces, then simply fan-cool those. Condensation might be a problem here, though, unless the cold sides were all well sealed.

Wabi-Sabi
Matthew

I would love.. (-1, Offtopic)

Anonymous Coward | about 7 years ago | (#18702613)

...a beowulf cluster of these.

Heat (1)

chris mazuc (8017) | about 7 years ago | (#18702641)

So they figured out how to make them, but wouldn't you start running into problems with heat retention in the middle of the chip? Or are they still thin enough at this point that this isn't really an issue? The article doesn't mention it at all.

Re:Heat (2, Interesting)

Jake73 (306340) | about 7 years ago | (#18702977)

Heat is certainly a concern. However, vertical stacking also helps address the issue of disparate technologies. For example, you may have two ICs that are manufactured with, say CMOS and bipolar technologies that together won't generate enough heat to be a concern, but because they are different technologies, need to be separated and therefore take up more space.

On the other hand, it would be neat to see them put heatsinks between each individual chip. They could still drill and insert the tungsten vias through the heatsink. The heatsinks would probably need to be pretty advanced, though, to move the heat to the fringes. Maybe a circulating fluid or something.

Re:Heat (1)

LeDopore (898286) | about 7 years ago | (#18702993)

You have a good point. A chip twice as thick takes twice as long for heat to diffuse through it, so even if the new chips have the same power/area of today's chips they will operate at twice the temperature increase above ambient temperature.
If they merely sandwich two processors together, you'll have twice the heat generated with half the thermal conductance, so these chips would run at four times today's temperature rise above ambient.
Aside: do you think we should start asking for chips designed to run at low temperatures? Two of the biggest heat losses on modern chips are conduction losses (which decrease with temperature) and MOSFET conduction while the gate is neither completely on nor off. A fundamental switching-range limit comes from thermal noise: for about a 30 mV range (really kB*T/e), thermal noise is enough to keep the gate between on and off, so electrons can get through, but they dissipate a lot of heat doing it.
What if we just turned down T to the boiling point of liquid Nitrogen? Then switching losses would be drastically reduced, operating voltages could be lower and wire interconnects could be of lower resistance.
I'm just a physicist; are there any chip designers out there who know the reasons why big data center chips aren't designed at 100 K? Is it that the market share is too small to justify optimizing a new chip?
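The kB*T/e figure above is easy to check from the physical constants; this is just the thermal-voltage arithmetic, not a design claim:

```python
K_B = 1.380649e-23    # Boltzmann constant, J/K
Q_E = 1.602176634e-19  # elementary charge, C

def thermal_voltage_mv(t_kelvin):
    """Thermal voltage kT/e in millivolts."""
    return 1000 * K_B * t_kelvin / Q_E

room = thermal_voltage_mv(300)      # ~25.9 mV at room temperature
cold = thermal_voltage_mv(77)       # ~6.6 mV at liquid-nitrogen temperature
# Cooling to 77 K shrinks the thermal-noise scale by roughly 4x,
# which is why lower supply voltages become plausible at cryo temps.
```

So the "30 mV range" quoted above is a round-number stand-in for ~26 mV at 300 K.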

Re:Heat (2, Informative)

MightyYar (622222) | about 7 years ago | (#18703235)

I don't know much about the cooling issues, but I know that they back-grind the chips to make them thinner. For instance, if they are replacing a memory package that used to consist of 1 chip with 3 stacked chips, they will grind the 3 stacked chips so that they are no taller overall than the 1 chip. Typical silicon thicknesses used to be 14-20 mils (355 - 500 microns). Now we are seeing as thin as 3 mils (75 microns), with folks at trade shows demonstrating even thinner.

Don't ask me why people still use mils in the packaging industry... they just do. It makes for some weird units, like g/mil^2. Yuck.
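For anyone untangling the units in the parent: 1 mil = 0.001 inch = 25.4 microns, so the quoted figures convert as below (a trivial helper, not from any packaging spec):

```python
MICRONS_PER_MIL = 25.4  # 1 mil = 0.001 inch; 1 inch = 25.4 mm

def mils_to_microns(mils):
    """Convert packaging-industry mils to microns."""
    return mils * MICRONS_PER_MIL

thick_low = mils_to_microns(14)   # ~355.6 um, low end of the old range
thick_high = mils_to_microns(20)  # ~508 um, high end of the old range
thin = mils_to_microns(3)         # ~76.2 um, a modern thinned die
```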

A better way (1)

stacey7165 (1081097) | about 7 years ago | (#18708409)

Personally, this article puzzled me. I remember reading just about a month ago about a WAY cooler (pardon the pun) technology they had developed for chipsets. It is based on fiber optics instead and carries far more data, at an incredibly small cost. I distinctly remember the article saying that they could divide all of Manhattan into two camps of 4 million each, and one camp could have all 4 million people call the other 4 million, using either a single chip or the same energy as a light bulb. Now THAT'S cool. This seems complicated and prone to meltdowns. Speaking as a user who has had 4 hard drives implode in 5 years due to heat (and sometimes a bad fan), this is just scary.

What?????? (5, Funny)

Xinef Jyinaer (1044268) | about 7 years ago | (#18702671)

The chips didn't exist as 3-D objects prior to this? In fact, wouldn't a chip that only exists in two dimensions be much more difficult to make?

Re:What?????? (1)

JuanCarlosII (1086993) | about 7 years ago | (#18702739)

"-1 Pedantic" Think of the space you could save though...

Re:What?????? (1)

Xinef Jyinaer (1044268) | about 7 years ago | (#18702925)

"+1 Pedantic" opposed to "-1 Pedantic" (At least I don't consider it a negative thing). Being able to make 2-D objects would save a lot of space.

Re:What?????? (1)

Lijemo (740145) | about 7 years ago | (#18704841)

"Being able to make 2-D objects would save a lot of space."

Yes, but heat dissipation would only be able to take place along the 1-D perimeter. There would be no way to cool the center of the chip unless it was very small or highly heat-conductive.

Re:What?????? (1)

phasm42 (588479) | about 7 years ago | (#18702757)

First line of TFA (emphasis mine)

The IBM breakthrough enables the move from horizontal 2-D chip layouts to 3-D chip stacking

Re:What?????? (2, Informative)

stevesliva (648202) | about 7 years ago | (#18702839)

The chips didn't exist as 3-D objects prior to this? Infact, wouldn't a chip that only exists in two dimensions be much more difficult to make?
One layer of silicon substrate, followed by many layers of polysilicon and wires and insulator. There is as of yet no practical way to fabricate two transistors on top of each other on a wafer. It's always the transistor on the bottom, wiring on top. The transistors themselves form only a 2D array (but yes, they are 3D devices). Sounds like this technique bores holes through the silicon substrate to make contact with another wafer below, so you could conceivably have transistors above and below.

Re:What?????? (0)

Anonymous Coward | about 7 years ago | (#18706389)

I totally agree, wake me up when they build a 4-D one! Geez, that's challenging!!

Re:What?????? (2, Funny)

Tiles (993306) | about 7 years ago | (#18711629)

Why stick with three dimensions? Let's just skip three, and go straight to five! And add a moisturizing strip!

They told us not to ask where they got it. (4, Funny)

illegalcortex (1007791) | about 7 years ago | (#18702729)

It was scary stuff, radically advanced. It was shattered... didn't work. But it gave us ideas. It took us in new directions... things we would never have thought of. All this work is based on it.

Re:They told us not to ask where they got it. (1, Troll)

pla (258480) | about 7 years ago | (#18702935)

It was scary stuff, radically advanced

July 1947: A "weather balloon" crashes in Roswell, NM.

December 1947: Bell Labs' Bardeen, Brattain, and Shockley "invent" the transistor, using boron-doped silicon, which Bell Labs didn't have the equipment to produce at that time!

Spoooooooky. ;-)

Re:They told us not to ask where they got it. (0)

Anonymous Coward | about 7 years ago | (#18703307)

Ummm, wasn't it germanium-doped? And hadn't they been working on it since the war, using Lilienfeld's earlier work as guidance?

Spoooooky!

Re:They told us not to ask where they got it. (2, Insightful)

Jeremi (14640) | about 7 years ago | (#18706159)

Spoooooooky. ;-)


You think you're scared now, just wait until the alien patent lawyers show up ;^)

New Operating System Required For 3d chips (5, Funny)

crea5e (590098) | about 7 years ago | (#18702763)

LEG-OS. 64-block architecture. Also themeable for Star Wars and Lord of the Rings fanboys.

Well (4, Informative)

ShooterNeo (555040) | about 7 years ago | (#18702771)

This is it. Maybe. Possibly major problems with heat dissipation. However, there are some massive advantages :

1. One tradeoff IC designers always face is that the fastest, lowest-latency access is always to on-die components. On-die memory (cache) is almost ALWAYS faster, coprocessor interconnects (like for dual core) are far quicker, etc. With any given level of state of the art, you can get a much higher clock signal over itsy-bitty paths on silicon from one side of the chip to the other than by going out to big, clunky, extremely long wires.

2. The tradeoff is that a bigger chip radically reduces yields: the chance of a defect causing a chip to be bad goes up with the square of the number of gates.

3. This technology allows one to use multiple dies, and to interconnect them later. There's just one problem.

HEAT DISSIPATION. A 3-D chip will of course have its heating per square centimeter multiplied by the number of layers. The obvious solution, internal heatpipes, has not yet been shown to be manufacturable.

Hence TFA mentioning use in devices such as cell phones, where bleeding edge high wattage performance is not a factor.

Re:Well (2, Informative)

drinkypoo (153816) | about 7 years ago | (#18702891)

HEAT DISSIPATION. A 3-D chip will of course have its heating per square centimeter multiplied by the number of layers. The obvious solution, internal heatpipes, has not yet been shown to be manufacturable. Hence TFA mentioning use in devices such as cell phones, where bleeding edge high wattage performance is not a factor.

It's useful in other spaces, too. If you have a massively parallelizable task, then you could use this technology to have a stack of CPUs in less space on the board, which would reduce the cost of the system. You could run at low clock rates with huge numbers of processes and/or threads.

Re:Well (1)

Hoi Polloi (522990) | about 7 years ago | (#18703231)

Plus you could keep using the standard ATX motherboard dimensions without a sprawling CPU taking over the board's real estate.

Re:Well (1)

ConceptJunkie (24823) | about 7 years ago | (#18708913)

You do realize that 99% of a CPU is just the packaging, and it's only that big because it's the only reasonable way to get that many pins that are large enough not to snap off during socket insertion (or sublimate directly into the atmosphere ;-)).

Re:Well (1)

zippthorne (748122) | about 7 years ago | (#18709579)

I have an exposed Pentium chip lying around somewhere; it was a promotional thing I got from MIT once. They really are quite big.

Re:Well (1)

ConceptJunkie (24823) | about 7 years ago | (#18713907)

Perhaps they were. I have an exposed AMD Sempron (or whatever the non-64 bit one was called), it's less than 1cm square.

Re:Well (1, Interesting)

Anonymous Coward | about 7 years ago | (#18703347)

"HEAT DISSIPATION. A 3d chip will of course have it's heating per square centimeter multiplied by the number of layers. The obvious solution, internal heatpipes, has not yet been shown to be manufacturable"

Sure they are. Every metal trace is an internal heatpipe. It doesn't have to be some crazy fluid-filled micro-cavity, if that's what you were thinking. 3-D circuits have been around for quite some time now. Several labs will fabricate your circuit in 3-D. It's not consumer-production ready, but it exists and it works. The general consensus in the semiconductor industry is that it's not worth the cost (as of the last conference I went to, the 23rd VMIC, FYI).

Also, pretty much every cell phone already has stacked chips in it. They just wirebond the stacks; no through-chip vias yet that I know of. And these chips work, so in a lot of cases heat dissipation isn't really a problem anyway.

Re:Well (2, Insightful)

Jeff DeMaagd (2015) | about 7 years ago | (#18704105)

How much power is lost due to the interconnects right now? What fraction of power can be saved by almost eliminating the long wires?

Re:Well (1)

Mike1024 (184871) | about 7 years ago | (#18705345)

The tradeoff is that a bigger chip radically reduces yields : the chance of a defect causing a chip to be bad goes up with the square of the number of gates.

Isn't it directly proportional?

Doubling the number of gates doubles the chip area. Doubling the chip area halves the number of chips per wafer. Assuming a constant number of chip-killing defects per wafer (say 5), halving the number of chips per wafer means you have twice the percentage of dead chips (i.e. 5 dead per 50 chips (= 10%) instead of 5 dead per 100 chips (= 5%))

So doubling the number of gates doubles the percentage of bad chips rather than quadrupling it, i.e. a linear (not squared) relationship.
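The arithmetic above can be sanity-checked with a toy model: a fixed number of chip-killing defects per wafer, each assumed to land on a distinct chip. The "5 defects" figure comes from the example above; everything else is illustrative:

```python
def dead_fraction(defects_per_wafer, chips_per_wafer):
    """Fraction of chips killed, assuming each defect hits a distinct
    chip (a reasonable approximation when defects << chips)."""
    return defects_per_wafer / chips_per_wafer

# 5 chip-killing defects per wafer, as in the example above.
assert dead_fraction(5, 100) == 0.05  # baseline die area: 5% dead
assert dead_fraction(5, 50) == 0.1    # double the area: 10% dead -- linear, not squared
```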

There's just one problem. HEAT DISSIPATION. A 3-D chip will of course have its heating per square centimeter multiplied by the number of layers.

If I was designing a multilayer microprocessor I'd make sure the least coolable (i.e. central) areas were made with low-leakage-current processes and didn't switch too often, while the more coolable outer areas could be less efficient. In other words, I'd put my big stacks of cache in the middle of the layer sandwich and put my actual processor cores on the top and bottom.

Re:Well (1)

ShooterNeo (555040) | about 7 years ago | (#18705579)

Err, yes, you're absolutely right. I was thinking software for some reason and meant to write linear. (Software can be squared because every tiny part of software can in theory interact with every other part; hence the reason for compartmentalization of code, to reduce the number of possible interactions.) And putting cache in the middle just makes sense for other reasons. One core on top, one on bottom, cache in the middle. Works great. The only improvement in cooling for 3-4 layers like this (hot ICs on the outer edges, cache in the middle) would be bottom cooling.

Re:Well (1)

Mike1024 (184871) | about 7 years ago | (#18707985)

I was thinking software for some reason, meant to write linear.

To add to the confusion, some chips are traditionally measured in one dimension only (optical sensor sizes are based on old camera-film formats, which are measured along the diagonal), in which case the relationship *is* squared. And what's more, optical sensors are an area where people are more likely to need to know die size!

One core on top, one on bottom, cache in the middle.

I'm speculating a bit here, but in the modern world of 4-megabyte on-die caches, I'd think the millions of gates of on-die cache would take up several times more area than the actual processor cores. This being the case, you'd think splitting the cache across several layers would mean you could get more chips per wafer, not fewer! Of course, thinking about it, four times the number of layers means four times as many defects.

Here's some trivia I learned recently: Some manufacturers put redundant/backup hardware in their chips, not to improve reliability but to improve yield - so if one row on your RAM chip has a defect, a backup row is used instead. Patching the defects improves yields, and no-one notices the difference.
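That spare-row trick can be sketched as a binomial yield calculation; the row count and per-row defect probability below are invented for illustration:

```python
from math import comb

def chip_yield(rows, p_defect, spare_rows):
    """Probability a RAM chip is shippable when up to `spare_rows`
    defective rows can be remapped to backups; rows are assumed to
    fail independently with probability p_defect."""
    return sum(
        comb(rows, k) * p_defect**k * (1 - p_defect)**(rows - k)
        for k in range(spare_rows + 1)
    )

base = chip_yield(1024, 0.001, 0)   # no spares: ~36% yield
fixed = chip_yield(1024, 0.001, 1)  # one spare row: ~73% yield
```

Even a single spare row roughly doubles yield at these (made-up) numbers, which is why the trick is so common.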

What is the new trick here? (1)

gogojcp (957158) | about 7 years ago | (#18702861)

Tungsten vias (in SiO2) are nothing new. Thinned wafers are nothing new. It sounds like the new aspect is vias through the Si (still no big deal, since the bottom side of a wafer is impractical to build a circuit on), and the one thing they don't mention is how they bond and stack multiple thinned wafers. That is the difficult part, and that is the part totally left out of the article.

I wonder how they do that?

Re:What is the new trick here? (1)

MightyYar (622222) | about 7 years ago | (#18703345)

They probably dice the wafers individually and then stack them using technology that was developed for flip-chip. I doubt that they bond the wafers before dicing, but I only skimmed the articles.

For some reason. . . (0)

Anonymous Coward | about 7 years ago | (#18705175)

this post makes me hungry.

IBM has a tradition of inventing cool stuff (1)

Qbertino (265505) | about 7 years ago | (#18703063)

IBM has a nice track record of cool things it introduced to the world: HDDs, open-standard components, etc.
This could be another one of those cool things that help shape the next few decades of technology.

Re:IBM has a tradition of inventing cool stuff (0)

Anonymous Coward | about 7 years ago | (#18703487)

Maybe so but I wouldn't go so far as to say IBM invented it. They may have produced a production version but there are already several professional organizations dedicated to 3-D semiconductors. IMIC runs the VLSI multilevel interconnect conference for one. They're up to their 24th now....and also (shameless plug due to AC post) I've been working on a 3-D chip program for the past 3 years...

Re:IBM has a tradition of inventing cool stuff (1)

jedidiah (1196) | about 7 years ago | (#18704093)

Well, they have been working on this for a DECADE.

It's hard to say how much they have contributed to the current state of the art that everyone else seems to be working with.

No more planar graphs! (1)

Avidiax (827422) | about 7 years ago | (#18703079)

The biggest advantage here is that you no longer need a planar graph as your circuit diagram (meaning, a graph where no two edges cross). The most obvious application for this that I can think of is a neural-net chip, but all sorts of other designs that would require a non-planar layout are opened up. Cool!

Re:No more planar graphs! (0)

Anonymous Coward | about 7 years ago | (#18703725)

I think you nailed it. An often overlooked problem by x86 enthusiasts is that the layout of the chip is severely limited by not being able to cross over data paths easily. Without even increasing the transistor count or changing the process size, this technology will allow for better routing, thus shorter interconnects, thus faster clock speeds.

Re:No more planar graphs! (3, Informative)

RuleBritannia (458617) | about 7 years ago | (#18703791)

You appear to be under the misapprehension that VLSI designs are planar graphs. The place-and-route tools used to move from RTL to GDSII layouts assume (depending upon the manufacturing process) anywhere between 4 and 20 metal layers.

The technology described in the article is exciting but not novel; academia has been exploring memory hierarchies, hardware dynamic thread scheduling, and introspective debug solutions for some years.

For reference, last year's ASPLOS ('06) conference includes 2 papers with disruptive 3D stacking technologies.

PICOSERVER: USING 3D STACKING TECHNOLOGY TO ENABLE A COMPACT ENERGY EFFICIENT CHIP MULTIPROCESSOR.
A joint paper between Univ. of Mich. and ARM which shows how 3D stacking of DRAM dies (which have different process requirements than logic) on top of logic can radically reduce power, increase memory bandwidth, and save area (since L2 cache becomes unnecessary).

INTROSPECTIVE 3D CHIPS.
A UC Santa Barbara group shows that 3D stacking allows the inclusion of a host of dynamic debug features which allow monitoring of the processor pipeline, without adding cost to the production version of the chip.

So: not just cool but super cool. Fundamental challenges remain, though. Chiefly, can we achieve reliable interconnects across thousands of die-to-die vias (with the implication that if you bugger one up, both dies are useless)? Secondly, can we develop better wafer-level testing so we don't end up going through the expensive stacking process with duff dies? Thirdly, better tools for modelling heat dissipation in such stacks are needed if they are going to be reliable in everyday use.

Kind Regards

Re:No more planar graphs! (0)

Anonymous Coward | about 7 years ago | (#18704749)

Sorry, conventional microchips have already been able to have multiple metal layers (up to ~7 or 8), this hasn't been a constraint on chip design for a long time.

heat dissipation (1)

YoYofella (184938) | about 7 years ago | (#18703081)

I thought the problem limiting current chip density is heat dissipation due to leakage current, rather than the number of devices we can squeeze into a die. BTW, please help with a Google Analytics study (STATS252 [stanford.edu]).

Nice, but... (0, Troll)

Yaa 101 (664725) | about 7 years ago | (#18703127)

If it got DRM then I say who cares?

Re:Nice, but... (1)

danpsmith (922127) | about 7 years ago | (#18706205)

If it got DRM then I say who cares?

Let's have a hand everyone, for Slashdot's living, breathing stereotype. That is, if it isn't just some kind of machine that posts about DRM and software patents, even where not appropriate.

Seriously though, this is one of those moments where I'm glad someone is doing some serious research and the industry won't stagnate anytime soon.

Tower? (1)

RedMage (136286) | about 7 years ago | (#18703165)

that can be stacked on top of another electronic device in a vertical configuration.


Brings new meaning to the term "Tower configuration"!

RM

I've seen this before... (3, Funny)

Yvan256 (722131) | about 7 years ago | (#18703213)

To further increase R&D of this new 3D chip technology, IBM will be launching a new company called Cyberdyne Systems Corporation.

IBM/AMD versus Intel Death Match! (1)

Nom du Keyboard (633989) | about 7 years ago | (#18704989)

Is it time now for the IBM/AMD versus Intel Death Match? (Yes, no, haha.) Intel has a pile of chip improvements. IBM, AMD's main partner, has a pile of their own. Who will win? While Intel has Penryn at 45nm, could AMD counter with a Barcelona that stacks its cache right on top of its processors? Now that's something I'm waiting to see. Either way, I should win!

From the more-Moore dept.? (1)

treeves (963993) | about 7 years ago | (#18706795)

Interesting: "More Moore" is the name of a real project in Europe, but it has nothing to do with 3D interconnects/chip stacking; rather, it is about EUV (extreme ultraviolet) lithography to print smaller features.

Too hot? Wear a sweater. (3, Insightful)

epine (68316) | about 7 years ago | (#18710291)


Quite funny to perfect this now, with thermal considerations already dominating chip design costs. A nice little bit of space saving if it pans out for the super-compact, low-power cellphone market. For any other application, pretty much worthless. It might have some applications at the high end, to increase supercomputing bandwidth for systems where half the cost is the cooling system. After the planet runs out of refinable bauxite, some prime locations with fat connections to the hydro grid would become available for server centers based on this technology.

Skynet is waiting..... (0)

Anonymous Coward | about 7 years ago | (#18710455)

Anyone remember the chip in Terminator 2? As in the 3D chip they were developing out of Arnie's CPU from the first movie? Cue synthesizer music.....

Fold? Don't you mean Time? (1)

cjensen2k (1073560) | about 7 years ago | (#18732439)

A fold is essentially a doubling. Think about it: fold a sheet of paper; how many layers are there?

It is basically counting in binary (5 folds = 5 bits = 32 times).

I wish more journalists got this straight.

Same goes for "magnitude": you're lucky if anyone knows what that means.