Slashdot: News for Nerds

IBM, 3M Team To Glue Together Silicon "Bricks"

samzenpus posted more than 2 years ago | from the stack-your-chips dept.


coondoggie writes "IBM and 3M today said they will jointly develop a new line of adhesives they hope will let them make it possible to build commercial microprocessors composed of layers of up to 100 separate chips. Such stacking would allow for higher-powered servers and more advanced consumer electronics applications, the companies stated. Processors could be tightly packed with memory and networking, for example, into a 'brick' of silicon that would create a computer chip 1,000 times faster than today's fastest microprocessor enabling more powerful smartphones, tablets, computers and gaming devices."


81 comments

Lego, huh? (1)

Kittenman (971447) | more than 2 years ago | (#37334272)

Hmmm

Try Skynet (0)

Anonymous Coward | more than 2 years ago | (#37334320)

remember the CPU chip for the Terminators? it was more brick-like than chip-like...

hmmmm

Re:Try Skynet (1)

Sulphur (1548251) | more than 2 years ago | (#37334458)

remember the CPU chip for the Terminators? it was more brick-like than chip-like...

hmmmm

The Itanium I ?

Re:Try Skynet (1)

justforgetme (1814588) | more than 2 years ago | (#37337648)

I remember a time when cellphones looked like bricks.
If in the future our processors will look like bricks what will our cellphones look like?

So... (3, Insightful)

Ibiwan (763664) | more than 2 years ago | (#37334288)

Stacking these things is all well and good, but at what point do heat considerations become a primary concern? Lately I haven't gotten the impression that volume of ICs is our biggest bottleneck.

Heat already a consideration in stack ... (1)

perpenso (1613749) | more than 2 years ago | (#37334316)

Stacking these things is all well and good, but at what point do heat considerations become a primary concern? Lately I haven't gotten the impression that volume of ICs is our biggest bottleneck.

The article indicates that heat is already a primary concern. 3M's role in the endeavor is to develop adhesives with good thermal conductivity.

Re:Heat already a consideration in stack ... (1)

Nethemas the Great (909900) | more than 2 years ago | (#37334476)

If they're finally starting to build up then they will probably need to start looking at ways to transport heat out the sides as well as the top of the die. I wonder if they'll be able to reduce the power requirements though by creating shorter "vertical" paths to the next plane instead of making so many longer cross-die journeys.

Three-dimensional integrated circuit (2)

perpenso (1613749) | more than 2 years ago | (#37334616)

A reference for some readers: http://en.wikipedia.org/wiki/Three-dimensional_integrated_circuit [wikipedia.org]

"Power – Keeping a signal on-chip can reduce its power consumption by 10-100 times. Shorter wires also reduce power consumption by producing less parasitic capacitance. Reducing the power budget leads to less heat generation, extended battery life, and lower cost of operation."

"Heat – Heat building up within the stack must be dissipated. This is an inevitable issue as electrical proximity correlates with thermal proximity. Specific thermal hotspots must be more carefully managed."
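The "Power" point can be made concrete with the standard dynamic-power formula, P ≈ α·C·V²·f: power scales linearly with switched capacitance, so a short stack via with a fraction of the parasitic capacitance of an off-chip trace burns a correspondingly smaller fraction of the power. The capacitance values below are illustrative assumptions, not figures from the article:

```python
def dynamic_power(alpha, c_farads, v_volts, f_hz):
    """Dynamic switching power: P = alpha * C * V^2 * f
    (alpha = activity factor, C = switched capacitance, V = supply, f = clock)."""
    return alpha * c_farads * v_volts ** 2 * f_hz

# Hypothetical numbers: a long off-chip trace vs. a short 3D-stack via.
off_chip = dynamic_power(alpha=0.1, c_farads=10e-12, v_volts=1.0, f_hz=1e9)   # 10 pF trace
on_stack = dynamic_power(alpha=0.1, c_farads=0.1e-12, v_volts=1.0, f_hz=1e9)  # 0.1 pF via

print(off_chip / on_stack)  # power ratio tracks the capacitance ratio: 100x
```

Since V and f are held equal here, the ratio reduces to the capacitance ratio, consistent with the "10-100 times" range quoted above.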

Re:So... (1)

MoonRobot (2456308) | more than 2 years ago | (#37334376)

Stacking these things is all well and good, but at what point do heat considerations become a primary concern? Lately I haven't gotten the impression that volume of ICs is our biggest bottleneck.

Eventually the 'bricks' will end up with shapes that have the best heat-transfer properties, e.g. a 'ring'-shaped CPU dipped in cooling liquid.

Re:So... (1)

jamiesan (715069) | more than 2 years ago | (#37339182)

Then they can just peel the top one off and stick it to the monitor if it gets too hot right?

Been there, done that... (1)

PaulBu (473180) | more than 2 years ago | (#37334648)

1) The "standard" solution is to interleave copper fins between chiplets to take heat out -- and yes, it has been a major problem for 3D integration.

2) Of course it's Intel and 3M, but do not think that this is new at all -- at my previous place of employment (6 years ago) we were working with these guys: http://www.irvine-sensors.com/r_and_d.html#Neo-Stack [irvine-sensors.com] -- and they had had this technology for quite some time before that.

Interesting tidbit I've heard from their CTO (I think): if you take a full-height rack of electronics, the total active volume of all the transistors and metal wires on all the chips inside is about 1 cm^3...

But yes, taking heat out is a problem.

Paul B.

Re:Been there, done that... (0)

Anonymous Coward | more than 2 years ago | (#37335798)

Interesting tidbit..

Very tiddillating! Are their offerings competidively priced? Presumably there's enough latidude for improvement that IBM & 3M can create something novel, even if they weren't the first to think of stidching chips together in a 3d pattern like that. Heat would be a factor, but wouldn't there be a multidude of other problems too? Like interference? Or could that be solved in the same stroke by making the structure partidive?

The word is titbit. Tit.

Re:Been there, done that... (1)

e4g4 (533831) | more than 2 years ago | (#37336020)

The word is titbit. Tit

The internet [google.com] says that both [google.com] are acceptable.

Re:Been there, done that... (1)

tehcyder (746570) | more than 2 years ago | (#37338194)

The word is titbit. Tit

The internet [google.com] says that both [google.com] are acceptable.

Titbit's a lot funnier though, unless you're one of those people who says UR-a-nus instead of Ur-A-nus

Re:Been there, done that... (1)

angel'o'sphere (80593) | more than 2 years ago | (#37338886)

I always wonder why especially the American "English speakers" like it so much to pronounce foreign words completely wrong.

After all it can't be so hard to figure out how Uranus is meant to be spoken. Hint: in the word "look" the double o is spoken like a European U, so Uranus is spoken OO-ranoos (the second oo/u at the end of the word is shorter than the starting one.)

I really wonder if the old gods are gone because everyone speaks/pronounces their names wrong ;D

Zeus is not spoken/pronounced Zoos ... but I leave it to you to figure out how he is addressed properly.

Interference? (1)

PaulBu (473180) | more than 2 years ago | (#37336302)

Copper plates between chips should take care of (most) of it as well!

Interference? Yes, if you are attempting a multi-GHz design, you better take care of your impedances and groundplanes, granted... You would have to do it in any case though.

Of course Intel and 3M can come up with something slick, but just making heat-transferring glue might (or might not!) be a deal-breaking situation. I just wanted to point this /. crowd to some prior art that I happened to know about, and that is out in the open. I bet someone liked the possibility, sorry if it were not you personally! ;)

As to "titbit" -- yes, English is not my first language, but it is, indeed, tidbit [reference.com] ... Sorry that you are still so fascinated with "tits" (you know, meaty appendages attached to the chests of someone of the opposite sex, with "nipples" at the end!) ;) Just kidding, but I think you were wrong...

Paul B.

Re:Been there, done that... (1)

narcc (412956) | more than 2 years ago | (#37336492)

The word is titbit. Tit.

Titbit is a variant of tidbit, not the other way around.

Re:Been there, done that... (1)

StripedCow (776465) | more than 2 years ago | (#37337598)

And how about making a 3D memory chip? If you have one for backup purposes, you don't need the speed, and thus will not dissipate that much heat.

Re:Been there, done that... (0)

Anonymous Coward | more than 2 years ago | (#37337284)

2) Of course it's Intel and 3M,

Excuse me, back from holidays. Did I miss some news, like Intel buying IBM or the other way around?

Re:So... (1)

kheldan (1460303) | more than 2 years ago | (#37334918)

Ironically enough, I'm reading an old science fiction trilogy from the '80s featuring a race of aliens whose technology is remarkably similar to this. They embedded heat-pipes in the layers to conduct heat away. Why can't they do something similar here?

Re:So... (0)

Anonymous Coward | more than 2 years ago | (#37335132)

Why is this ironic?

Re:So... (1)

tehcyder (746570) | more than 2 years ago | (#37338358)

Why is this ironic?

because it's like sunshine on a rainy day?

Re:So... (1)

ProfMobius (1313701) | more than 2 years ago | (#37337180)

What is the name of the book ?

Re:So... (0)

Anonymous Coward | more than 2 years ago | (#37337604)

micro channel heat exchangers are being studied, but the problem is corrosion and contamination, and also the problem of producing thermal gradients across the silicon stack.

Re:So... (1)

badkarmadayaccount (1346167) | more than 2 years ago | (#37337370)

How about we stack RAM chips - lower latency would improve throughput without much heat.

Re:So... (1)

mwvdlee (775178) | more than 2 years ago | (#37338366)

I think part of that problem can be solved simply by making pathways a lot shorter.
The volume of ICs isn't so much the bottleneck as it is the result of another bottleneck: the inability to cross pathways in a single layer.

Re:So... (1)

ultranova (717540) | more than 2 years ago | (#37339580)

Maybe the stack could simply include channels for cooling liquid?

Re:So... (1)

tlhIngan (30335) | more than 2 years ago | (#37340486)

Stacking these things is all well and good, but at what point do heat considerations become a primary concern? Lately I haven't gotten the impression that volume of ICs is our biggest bottleneck.

If you're using low-power efficient processors like ARMs, heat isn't really a huge issue - even going full tilt a typical SoC would draw 2.5W or so tops. Basically, passive cooling without a heatsink is more than adequate.

Stacking the memory and flash on top of the chip (commonly done today with multi-chip packaging and package-on-package) doesn't make the whole unit all that much hotter either.

The primary reason is that SoCs have one huge limitation - the number of pins that can be on a package (basically I/O bound). You want a smaller cellphone, it requires a smaller chip package, which means the pins (balls) have to be tightly packed. The problem is the PCB can only have the pads so close together - and there has to be room for vias and such which further dictate how dense the package can be.

So stacking vertically eliminates the need for so many pins to be brought out to the PCB (the I/O pad density on a chip can be much higher, since the wire bond machine can make very fine connections with optical feedback), and less space is needed for support chips on the PCB, making for smaller devices.

Incidentally, memory chips (flash and RAM) are silicon-bound - their limits are dictated by the size of the silicon they can use, rather than the number of I/O pads.

With this, the ability for other devices to use more power-hungry chips (e.g., x86 processors) that require more support components is increased - so maybe that embedded x86 system can take less space and be more reliable if we can bond the CPU to the chipset, and leave the remaining I/O pads for stuff like PCI Express buses and the like. Since it's embedded, we can toss in tons of RAM (going up) as well, making for more compact embedded x86 boards.
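The I/O-bound argument above can be sketched with a back-of-the-envelope ball count: a full-grid BGA holds roughly (edge/pitch + 1)² balls, so shrinking the package at a fixed board-routable pitch cuts available I/O quadratically. The package sizes and pitch below are illustrative assumptions, not real part numbers:

```python
def max_balls(package_mm, pitch_mm):
    """Rough upper bound on balls in a full-grid BGA: (edge/pitch + 1)^2."""
    per_side = int(round(package_mm / pitch_mm)) + 1
    return per_side ** 2

# Hypothetical packages at an assumed 0.4 mm board-routable pitch:
print(max_balls(14, 0.4))  # 14 mm package: 36 x 36 = 1296 balls
print(max_balls(8, 0.4))   # 8 mm phone-sized package: 21 x 21 = 441 balls
```

Halving the edge length costs you roughly three quarters of the available balls, which is why pushing interconnect into the stack instead of out to the board is so attractive.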

Cooling... (1)

fuzzyfuzzyfungus (1223518) | more than 2 years ago | (#37334292)

That is going to be fun to cool...

I wouldn't be surprised if there are some specialty niche application guys who are just drooling at the prospect of vastly increased silicon area without more board space or interconnect hassle; but anybody who is cranking the clock, the power handling, or both, is going to find the utility of the layers at the center a bit dubious.

Re:Cooling... (2)

drinkypoo (153816) | more than 2 years ago | (#37335050)

Didn't we JUST see an article about unimolecular pumps? If they put some grooves in the silicon layers they can just use on-die liquid cooling. ;)

Great (1)

MoonRobot (2456308) | more than 2 years ago | (#37334308)

100 more ways to brick a machine.

Re:Great (1)

black soap (2201626) | more than 2 years ago | (#37340856)

It'll be bricked from the factory, if it is working right.

Not the only one (0)

greywire (78262) | more than 2 years ago | (#37334328)

I'm sure I'm one of thousands of folks thinking that how to glue chips together must be the least concern, and how to dissipate heat the highest?

The only thing I can think of that makes the adhesive important is how well it holds up under heat, so maybe that's why it's hard to do?

I imagine such a "brick" of silicon would probably have to have active cooling built into it, such as etched-in heat pipes or even some kind of micro fluid cooling system. That's where the interesting stuff is happening.

I mean, really.. glue.. how exciting is that?

Re:Not the only one (1)

Surt (22457) | more than 2 years ago | (#37334406)

The announcement says that the glue itself will be heat dissipating. That's pretty much the focus of the whole project.

Re:Not the only one (1)

greywire (78262) | more than 2 years ago | (#37335732)

And, what, you expected me to READ the article first? Pshaw! This isn't the slashdot I know and love if you actually expect me to read the article and be informed first...

Re:Not the only one (1)

Surt (22457) | more than 2 years ago | (#37336094)

Not at all! After all, what purpose do the comments serve if not to clarify the summary for all who skipped reading the article?

Re:Not the only one (2)

sehlat (180760) | more than 2 years ago | (#37334408)

I mean, really.. glue.. how exciting is that?

No product on any scale from chips to countries is good unless the infrastructure supports it properly. Glue is "infrastructure," not sexy, but utterly vital. A glue that permits building a Borg Cube in microchip form will permit the firm with the technology to say "Resistance is futile." and mean it.

Re:Not the only one (1)

garyebickford (222422) | more than 2 years ago | (#37339478)

Good point. As I recall, the fundamental advantages that Seymour Cray introduced with the Cray supercomputers were primarily about infrastructure: cooling the boards and chips, and using interconnect wiring that was all the 'exact' right length for the high-speed signals. Since the wires were the same length, the propagation delay between boards could be accommodated in the logic on each board, so the individual processors could act in parallel, increasing the overall clock speed.

The machines had different cooling methods. In the Cray-1 [wikipedia.org], two circuit boards were layered back-to-back against a copper sheet attached to a freon plumbing system. In the Cray-2 [wikipedia.org], the entire module was immersed in a bath of Fluorinert. (I remember seeing a picture of the first prototype of a new Cray processor inside a fish tank filled with Fluorinert; the Fluorinert evidently had color patterns that resulted from stimulation by the high-frequency signals.)

Re:Not the only one (1)

perpenso (1613749) | more than 2 years ago | (#37334410)

I'm sure I'm one of thousands of folks thinking that how to glue together chips must be the least concern, and how to dissipate heat must be the highest? The only thing I can think of that makes the adhesive important would be how well it holds up under heat, so maybe thats why its hard to do?

You are correct that thousands are thinking about heat dissipation; in particular, the folks at 3M working on this project are thinking about that. The article indicates that their primary role is to develop adhesives with the necessary heat dissipation.

Re:Not the only one (0)

Anonymous Coward | more than 2 years ago | (#37334438)

Jesus F'ing! RTFA - Glue that can conduct heat away!

Moore's Law (1)

kelemvor4 (1980226) | more than 2 years ago | (#37334334)

Three possibilities:

1) Moore's Law broken

2) This won't see the light of day for a LONG time

C) They are exaggerating.

Re:Moore's Law (1)

Surt (22457) | more than 2 years ago | (#37334396)

It's #2. This isn't something they've built, it's something they're aiming to build. Perhaps they have good reason to believe they'll succeed, but that doesn't change the fact that this is a vaporware initiative announcement.

Re:Moore's Law (1)

ArsonSmith (13997) | more than 2 years ago | (#37334480)

Or Moore's law will continue due exactly to this, doubling every 18 months until they hit the 1000x improvement, at which point they will have to look to other tech approaches to continue keeping up with Moore's law.
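As a quick sanity check on that scenario: reaching a 1,000x improvement at one doubling every 18 months takes log2(1000) ≈ 10 doublings, i.e. roughly 15 years:

```python
import math

doublings = math.log2(1000)   # ~9.97 doublings to reach 1000x
years = doublings * 18 / 12   # at 18 months per doubling
print(doublings, years)       # ~9.97 doublings, ~14.9 years
```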

4 more years (1)

alexander_686 (957440) | more than 2 years ago | (#37335480)

If I understand correctly, Moore's law should hold out for another 4 years - that is, we have mapped out the technology to get chips down to 11 nanometers - it's just a matter of implementing that technology - which is no small feat. After that - what?

3d chips - by gluing chips on top of each other
3d chips with different strata
quantum bits
quantum tunneling to replace current gates
etc.

If Moore's law is to continue, some new rabbits are going to have to be pulled out of hats. Maybe this?

Re:4 more years (1)

Surt (22457) | more than 2 years ago | (#37336122)

Moore's law is ending in most of our lifetimes. When, exactly, is somewhat unclear, but the horizon is less than 40 years, at which point our single-atom transistors will be so numerous they will occupy the volume of our houses.

Re:4 more years (1)

macshit (157376) | more than 2 years ago | (#37337374)

Moore's law is ending in most of our lifetimes. When, exactly, is somewhat unclear, but the horizon is less than 40 years, at which point our single-atom transistors will be so numerous they will occupy the volume of our houses.

Ah, but they've got a plan for that too — they'll just start making houses bigger!

(what, you thought McMansions were just a silly affectation?!)

Re:4 more years (1)

ultranova (717540) | more than 2 years ago | (#37340212)

Moore's law is ending in most of our lifetimes. When, exactly, is somewhat unclear, but the horizon is less than 40 years, at which point our single-atom transistors will be so numerous they will occupy the volume of our houses.

You are assuming that we keep on building computers from transistors. But there are alternatives; for example, consider a mechanical adder made using benzene rings as gears. An adder made from molecular transistors would likely be far bigger.

Also, you are thinking of transistor-analogues as transistors-equivalents. But consider an optical transistor-analogue where the transparency or opaqueness can be controlled on a per-frequency basis. For all intents and purposes such a device would function like several separate transistors "stacked" together.

Re:4 more years (1)

Surt (22457) | more than 2 years ago | (#37344906)

If we're not dealing with transistors, it's also the end of Moore's law, which is pretty explicitly about transistor density.
http://en.wikipedia.org/wiki/Moore's_law [wikipedia.org]

Re:4 more years (1)

ultranova (717540) | more than 2 years ago | (#37346268)

If we're not dealing with transistors, it's also the end of Moore's law, which is pretty explicitly about transistor density.

Hard drives are usually considered to fall under Moore's law as well, despite the growth of their capacity having little to do with increasing transistor density. Also, the increasing density of main memory is because of - or at least requires - increasing density of capacitors. And finally, Moore's law is generally used in the context of increasing device capacity, not transistor density.

So, it does seem a bit like pointless pedantry to say that Moore's law is broken if we move away from transistors.

BENDER BRICKS ?? (0)

Anonymous Coward | more than 2 years ago | (#37334362)

She's a brick house
Mighty might just lettin it all hang out
She's a brick house
The lady's stacked and that's a fact
Ain't holding nothing back

She's a brick house
She's the one, the only one
Who's built like a amazon
We're together everybody knows
And here's how the story goes

She knows she got everything
A woman needs to get a man yeah
How can she lose with what she use
36-24-36 what a winning hand !!

The clothes she wears, the sexy ways
Make an old man wish for younger days
She knows she's built and knows how to please
Sure enough to knock a man to his knees

Shake it down, shake it down now !!

Don't forget the batteries! (1)

shri (17709) | more than 2 years ago | (#37334364)

>> enabling more powerful smartphones, tablets, computers and gaming devices.

So we can expect any battery-powered device to last for 4 mins, compared to the 4 hours one can expect from a dual-core mobile phone?

Re:Don't forget the batteries! (1)

Ibiwan (763664) | more than 2 years ago | (#37334392)

Sorry, did you just say you only expect four hours from your phone? Out of curiosity, what brand do you have? (So I know to run the heck away from it)

Embedded (0)

Anonymous Coward | more than 2 years ago | (#37334372)

These chips already exist, and have since the start of ICs. It's called embedded chips... take a core (or 8), put it on an XBAR bus with a bunch of peripherals, flash and SRAM, and you've got yourself a minicomputer... that has ADCs... and is the size of your thumbnail

You are repeating Khan's failure ... (1)

perpenso (1613749) | more than 2 years ago | (#37334490)

You are repeating Khan's failure, you are thinking in 2D, not 3D. Oh wait, Khan's failure occurs in the future, so I guess you are not repeating it. ;-)

On a more serious note, we are simply repeating historical urban development. When land was plentiful, we tended to build out horizontally rather than vertically; I guess the building technology and materials also contributed to this (as they apparently do in semiconductors). However, when land started to become a scarce resource, we started to build vertically. At some point, if we want to keep that IC at the size of a thumbnail, we will need to go vertical as well.

Re:You are repeating Khan's failure ... (1)

Jeremi (14640) | more than 2 years ago | (#37340126)

However when land started to become a scarce resource then we started to build vertically

This isn't quite right -- land is still plentiful in most countries. (take a flight across the country and look out the window to see all the empty space available!)

What's not plentiful is land that's conveniently close to the existing infrastructure goodies. There are big business advantages to having an office in Manhattan, as opposed to Alaska, for example. I imagine it's the same on-chip.

Re:You are repeating Khan's failure ... (1)

perpenso (1613749) | more than 2 years ago | (#37340734)

I completely agree. I was referring to land in a desired area. Village becomes town becomes city, and then at some tipping point of size things start to go vertical. I had New York, London and Paris in mind.

In the chip case is that tipping point something around fingernail size?

Vaporware? (1)

jcohen (131471) | more than 2 years ago | (#37334384)

Isn't this story a little vaporwarish? The companies "hope to develop" these new techniques and materials. There's no mention of an underlying discovery which the two companies might help each other commercialize. There's just this idea -- "Gee, wouldn't it be cool if we could do this? Let's look into it!" Is this actually news yet?

Re:Vaporware? (1)

Surt (22457) | more than 2 years ago | (#37334434)

It's completely vaporware, except for the fact that the two companies involved have the kind of reputation that suggests that if they think this is a good thing to invest in, they already have scientists on staff staking their careers on telling them they will reach the destination.

Re:Vaporware? (1)

angel'o'sphere (80593) | more than 2 years ago | (#37339332)

You should learn to read Slashdot properly.

A few articles below the one we are both reading and posting in is this one: http://hardware.slashdot.org/story/11/09/07/2028255/Single-Chip-DIMM-To-Replace-Big-Sticks-of-RAM [slashdot.org]

And those guys basically do the same that IBM and 3M want to do, but with memory chips.

So how much vapour?

benefits (2)

currently_awake (1248758) | more than 2 years ago | (#37334474)

Advantages:

Speed: total execution time is based on the distance the signal must travel, and vertical stacking shortens that distance.

Space: having half your motherboard used up for RAM limits what you can do. If you ever want to see TB USB sticks you need this. Board space in a cellphone is very limited; with this you can multiply the number of chips on the board by 10/20/30 depending on how thin the slices are.

Cooling: you can etch channels on the backside before you glue, to run cooling oil through.
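The speed point rests on propagation delay. As a rough illustration (assuming a rule-of-thumb velocity factor of ~0.5c for on-chip/PCB signals), a signal covers only about 50 mm in one 3 GHz clock cycle, so shaving millimetres off a path by stacking is not negligible:

```python
C_MM_PER_NS = 299.792458  # speed of light in vacuum, mm/ns

def mm_per_cycle(freq_ghz, velocity_factor=0.5):
    """Distance a signal travels in one clock cycle.
    velocity_factor ~0.5 is an assumed propagation speed, not a measured one."""
    period_ns = 1.0 / freq_ghz
    return C_MM_PER_NS * velocity_factor * period_ns

print(mm_per_cycle(3.0))  # ~50 mm per cycle at 3 GHz
```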

CPU cores devoted to DRM (2)

tepples (727027) | more than 2 years ago | (#37335016)

Board space in a cellphone is very limited, with this you can multiply the number of chips on the board by 10/20/30 depending on how thin the slices are.

But how many of these chips will be used for adding functionality, as opposed to adding measures to restrict the owner of a phone from making full use of the functionality? Case in point: the PlayStation 3 and PlayStation Vita have multicore CPUs and dedicate one core to DRM, and the Wii has an extra CPU (nicknamed "starlet") on the northbridge, again devoted to DRM.

Re:CPU cores devoted to DRM (1)

Splab (574204) | more than 2 years ago | (#37338818)

Well from your fine examples we can conclude one chip will be used for DRM. How hard was that?

Re:CPU cores devoted to DRM (1)

josath (460165) | more than 2 years ago | (#37342320)

Hmm, you've got a good point. We should definitely discourage any sort of technical advances, since those advances might be used for DRM!

Re:benefits (1)

bgat (123664) | more than 2 years ago | (#37336416)

A related benefit is that since they can assume that the signals never leave the chip stack, busses can be simpler and more fragile -- and faster.

Even a stack of only two wafers is of huge benefit. Today's package-on-package chips (which are two wafers surrounded by two complete external packages including BGA pads) allow mobile phones to put the memory right on top of the CPU, which reduces chip count and space. Assembly at the die level instead takes that a substantial step forward, by getting rid of the unnecessary pads that exist solely to provide interconnects between those chips. The resulting package footprint is half the size, or better.

SSD capacity (1)

dutchwhizzman (817898) | more than 2 years ago | (#37336488)

Your argument about TB USB sticks is right. Imagine SSD drives and portable media players that can actually hold a significant amount of your media collection.

Looking forward to silicon brick oven pizza. (1)

Kaz Kylheku (1484) | more than 2 years ago | (#37334494)

These things will generate some heat, no doubt.

I've been saying this (0)

Anonymous Coward | more than 2 years ago | (#37334532)

for 20 years.

The fact that today's chip processing is all 2D, and was bound to change to 3D at some point, comes as no surprise to some, I'm sure.

Finally, the Scotch Processor (1)

kawabago (551139) | more than 2 years ago | (#37334586)

Use it to secure your Christmas presents!

Precedent (0)

Anonymous Coward | more than 2 years ago | (#37334622)

A stack of Pentiums, attached with 3M Scotch 665 double sided adhesive tape.

Always too hot, never too cold. (1)

Commontwist (2452418) | more than 2 years ago | (#37334736)

Find some super-thermally conductive material, punch holes through the new bricks in several places (planning ahead of time to avoid stuff like, oh, circuits), place or thread the material into the holes and do a quick compress to ensure it fills up the hole and touches the entire length. Then connect the outside part of the conductor to whatever cheaper heat sink you want. Even if the inner conductor material is expensive at least it transfers the heat outside and away from the inner core.

It would be even neater if the interfacing between the chips was incorporated into the sides of the holes somehow.

Re:Always too hot, never too cold. (1)

KiwiCanuck (1075767) | more than 2 years ago | (#37340020)

Unfortunately, this would take too long and thus cost too much. Space is at a premium. You can't put a massive 1mm-sized hole in a wafer with 22nm features. We are talking about a 450mm (18-inch) diameter wafer that is about 900um thick. KOH etches Si at 1um/min, so it'd take 450 minutes (7.5 hours) to etch through a wafer. Even if you limited each die to 1 hole, the wafer would be too fragile to survive the etch.
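An aside on the arithmetic: etching straight through 900 um at 1 um/min would take 900 minutes single-sided; the 450-minute figure matches etching from both sides at once, which is presumably what the poster assumed. A sketch of both cases:

```python
def koh_etch_minutes(thickness_um, rate_um_per_min=1.0, both_sides=False):
    """Time for KOH to etch through a wafer at the given rate.
    both_sides=True assumes simultaneous front- and back-side etching."""
    effective_depth = thickness_um / (2 if both_sides else 1)
    return effective_depth / rate_um_per_min

print(koh_etch_minutes(900))                   # single-sided: 900 min
print(koh_etch_minutes(900, both_sides=True))  # both sides: 450 min (7.5 h)
```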

Re:Always too hot, never too cold. (1)

Commontwist (2452418) | more than 2 years ago | (#37345906)

Being able to find out how is what research is for. Current chips shown to someone 40 years ago would cause this sort of reaction -> @_@

Yield, not Power (1)

slashfoxi (610738) | more than 2 years ago | (#37334886)

This article is a bit deceptive. IBM is not trying to create a package with 1000 high-end, high-power CPUs in it. Clearly, this would require 1000 times the thermal capacity in the cooling system, not to mention a 40kW power supply to drive it and a pair of 40kA copper rails to bring all that current (at 1V) into and out of the package. This is not happening. The issue IBM is looking at is silicon defects. If you make a single MIPS processor per die, then you can get 10,000 of them on a wafer. If that wafer suffers 100 random defects, then you still have 9,900 good die for 99% yield. However, if you try to make the 64-core processors that are fashionable today, then you only get 156 units on your wafer, and the same 100 defects leave you with only 56 prime die, for 36% yield, which is shit. IBM's big idea seems to be to manufacture multi-core processors assembled from a multiplicity of known-good die. They aim to build 64-core CPUs by stacking tiny single-core CPUs, not the 64,000-core CPUs that I pictured when reading this article.
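The yield numbers above follow from the simplest possible defect model: each random defect lands on a distinct die and kills it, with no clustering. (Real fabs use Poisson or negative-binomial yield models, so treat this as a sketch of the poster's arithmetic, not fab practice.)

```python
def simple_yield(dies_per_wafer, defects):
    """Naive yield model: each defect kills one distinct die."""
    good = max(dies_per_wafer - defects, 0)
    return good, good / dies_per_wafer

print(simple_yield(10_000, 100))  # (9900, 0.99): small die, 99% yield
print(simple_yield(156, 100))     # (56, ~0.359): big 64-core die, ~36% yield
```

The asymmetry is the whole point: the same 100 defects are a rounding error for small die and a catastrophe for large ones, which is why assembling big processors from known-good small die is attractive.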

Re:Yield, not Power (1)

slashfoxi (610738) | more than 2 years ago | (#37335026)

Another thing I wanted to mention is that IBM makes its money on flip-chip packages. Flip-chip is technically superior to wire-bond, but does not allow you to stack, which is desirable for mobile devices (regulator on DRAM on CPU is typical in a baseband package for your phone). What IBM really needs to come up with is a superior, proprietary stackable package so they can start making money on mobile.

Re:Yield, not Power (0)

Anonymous Coward | more than 2 years ago | (#37337188)

Normal procedure will be to build a 66-core CPU and advertise it as 64; that still allows for one or two defects. However, such big CPUs will not have a uniform array of cores, but rather a more NUMA-like architecture (it already happens in many 4- and 8-core CPUs; they are actually 2x2 or 2x4 constructions, which can be inferred from inter-core communication speed, cache speed, etc.). So you would like to add 2 redundant cores, but you actually have 8 clusters of 8 cores each - where would you put them?

Of course some people could live with 7 clusters of 8 cores and one cluster of only 7; it would not even be a big problem for most systems (OSes will deal with it, and most task-based schedulers will deal with it, but a few programming languages, libraries, etc. will need to accommodate not only non-uniform memory and core latency, but also an irregular core count per cluster, socket, etc.). I do not think we can easily solve this with hardware tricks. Maybe we could do it for 64 cores, but what about 256? What about a 4096-core system that will have failing elements every few minutes? This needs many new ways of programming (at least at the OS and library level).
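The 66-advertised-as-64 trick is core-level redundancy. Assuming (purely for illustration) that each core fails independently with some fixed probability, the chance a die still has 64 working cores is a binomial tail, and two spare cores improve it dramatically:

```python
from math import comb

def p_at_least(n, k, p_fail):
    """Probability that at least k of n cores work, failures independent."""
    p_ok = 1.0 - p_fail
    return sum(comb(n, i) * p_ok**i * p_fail**(n - i) for i in range(k, n + 1))

# Assumed 2% per-core defect rate -- an illustrative number, not fab data:
print(p_at_least(64, 64, 0.02))  # no spares: ~0.274
print(p_at_least(66, 64, 0.02))  # two spare cores: ~0.854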

1000 times faster... for no net improvement (0)

Anonymous Coward | more than 2 years ago | (#37335014)

"1,000 times faster than today's fastest microprocessor enabling more powerful smartphones, tablets, computers and gaming devices."

AKA

"1,000 times faster than today's fastest microprocessor enabling the devices to run even more layers of encapsulation, virtual machine languages, and scripting crap at the same speed our computers operate today".

I for one, welcome our Pringles overlords. (0)

Anonymous Coward | more than 2 years ago | (#37336534)

I for one, welcome our Pringles overlords.

MCPs or POPs? (1)

unixisc (2429386) | more than 2 years ago | (#37336842)

Are they talking about multiple die in the same package (multi-chip packages) or multiple layers of package-on-package? Current MCP technology already allows 5 or so layers in a 1.4mm-tall chip, while package-on-package would have more difficulty keeping within any size constraints, though at least testing would be less expensive. But for servers, why don't they just have an optimal 4-core processor and then use, say, 32 of them to get the desired result? Something tells me this is being over-engineered, and is likely to be more expensive than it needs to be.

Stacking processors doesn't increase their "speed" (0)

Anonymous Coward | more than 2 years ago | (#37337714)

Since when does clock rate depend on the number of cores?

Last time I heard, the clock rate remains the same no matter how many chips are stacked together. At best, multiple cores can achieve a higher data-processing throughput, but even that's highly dependent on software parallelization.

Pft (1)

sgt scrub (869860) | more than 2 years ago | (#37339992)

Glue?!? Everyone knows if you want it done right you use duct tape.

Computronium v0.3? (1)

bmcraec (1957382) | more than 2 years ago | (#37343612)

HAL in your pocket, doing double duty as a pocket-warmer heat-source for those long ice-skating parties. Awesome.