
The Transistor Wars

Soulskill posted more than 2 years ago | from the not-so-long-ago-in-a-galaxy-that-looks-pretty-familiar dept.

AMD 120

An anonymous reader writes "This article has an interesting round-up of how chipmakers are handling the dwindling returns of pursuing Moore's Law. Intel's about four years ahead of the rest of the semiconductor industry with its new 3D transistors. But not everyone's convinced 3D is the answer. 'There's a simple reason everyone's contemplating a redesign: The smaller you make a CMOS transistor, the more current it leaks when it's switched off. This leakage arises from the device's geometry. A standard CMOS transistor has four parts: a source, a drain, a channel that connects the two, and a gate on top to control the channel. When the gate is turned on, it creates a conductive path that allows electrons or holes to move from the source to the drain. When the gate is switched off, this conductive path is supposed to disappear. But as engineers have shrunk the distance between the source and drain, the gate's control over the transistor channel has gotten weaker. Current sneaks through the part of the channel that's farthest from the gate and also through the underlying silicon substrate. The only way to cut down on leaks is to find a way to remove all that excess silicon.'"
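As a rough numerical illustration of why weaker gate control means more off-state leakage (a minimal sketch; the subthreshold-swing and threshold-voltage numbers below are assumed, illustrative values, not figures from the article):

    # Subthreshold leakage is commonly modeled as I_off ~ I0 * 10^(-Vth / S),
    # where S is the subthreshold swing (mV per decade of current). The ideal at
    # room temperature is ~60 mV/dec; weaker gate control over the channel pushes
    # S higher, so a transistor with the same threshold voltage leaks more.
    def off_current(i0_amps, vth_mv, swing_mv_per_decade):
        return i0_amps * 10 ** (-vth_mv / swing_mv_per_decade)

    well_controlled = off_current(1e-6, 300, 70)     # decent gate control
    poorly_controlled = off_current(1e-6, 300, 100)  # short channel, weak control

    print(poorly_controlled / well_controlled)       # roughly 20x more leakage

Multiply that per-transistor difference by a billion or so transistors and it is clear why the gate geometry itself is being redesigned.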

120 comments

Leaking silicone... (5, Funny)

mevets (322601) | more than 2 years ago | (#38030586)

I thought that saline was the new medium of choice; especially after all those messy lawsuits in the 90s.

Re:Leaking silicone... (4, Informative)

treeves (963993) | more than 2 years ago | (#38030748)

That silent 'e' adds hydrogen and oxygen to silicon, leaving you with an insulator instead of a semiconductor.

Re:Leaking silicone... (1)

Anonymous Coward | more than 2 years ago | (#38031304)

Why don't they just add hydrogen and oxygen to the silicon when they need it to stop leaks, and take it away when they need it to conduct electricity? Some people are so dumb.

Re:Leaking silicone... (1)

Khyber (864651) | more than 2 years ago | (#38032776)

This actually sounds interesting, as silicone itself can withstand very high temperatures. On the other hand, the insulative capability (low thermal conductivity) means issues in handling heat dissipation.

Re:Leaking silicone... (1, Funny)

f()rK()_Bomb (612162) | more than 2 years ago | (#38032842)

Who pronounces it with a silent e?

Re:Leaking silicone... (1)

RockDoctor (15477) | more than 2 years ago | (#38033726)

Who pronounces it with a silent e?

People who have no understanding that chemistry is more important than the pristine contents of their spell checker.

Re:Leaking silicone... (1)

PDF (2433640) | more than 2 years ago | (#38035468)

Who pronounces it with a silent e?

I've never heard it pronounced without a silent e. How would you pronounce it without a silent e? Sil-ih-co-neh? Sil-ih-co-nee?

Re:Leaking silicone... (1)

f()rK()_Bomb (612162) | more than 2 years ago | (#38035506)

Like cone, as in ice cream cone, not con, as in convict. The e sound is overlaid with the n sound.

Re:Leaking silicone... (1)

PDF (2433640) | more than 2 years ago | (#38035536)

I obviously have much to learn about pronunciation, as I have always understood that the word "cone" has a silent e and is one syllable long.

Re:Leaking silicone... (1)

f()rK()_Bomb (612162) | more than 2 years ago | (#38035708)

You could say it's silent, I suppose, but it still changes the pronunciation, so it's not truly silent.

Re:Leaking silicone, INTEL has reached a Brickwall (-1, Troll)

Anonymous Coward | more than 2 years ago | (#38030832)

The only way Intel can compete against other processors is by rushing to the next node. So while IBM can make a 16-core, 64-bit, 32-thread processor that uses under 60 watts at 45nm, Intel rushes to the next node, makes a new processor on the new node that's slower than the older processor, and banks a whole bunch of money. Then all those people who bought the first generation realize how slow their processors are compared to the second generation on the same node. Intel has been doing this same shit since the 486DX days. Intel's plan is to sell you a 4-core processor running at 1.2GHz with a 3M L2, and it's supposed to be the greatest because it's new. After that it'll be 16 cores running at 800MHz, then 32 cores running at 400MHz. Intel has hit the brick wall as far as die shrinks are concerned; they'll just keep adding slower cores, calling them the newest and greatest, and everyone will buy them. Intel processors are still slow heaters.

Re:Leaking silicone, INTEL has reached a Brickwall (0)

Anonymous Coward | more than 2 years ago | (#38031922)

That's why Intel trumps AMD for performance per MHz AND performance per watt?

in the future (1)

Anonymous Coward | more than 2 years ago | (#38030626)

we use laser interference to calculate. no atoms = no leaks.

Re:in the future (5, Interesting)

Anonymous Coward | more than 2 years ago | (#38032520)

I think I know what you're talking about, and I'll elaborate on it.

Laser interferometry, scanned across an optical ROM using piezoelectric lenses. In essence it makes an optical processor achievable without requiring optical switches.

Such a thing has been done before, using other ROM technologies (usually for digital signal processing). This one has the advantage of being limited in speed only by how fast the piezoelectric lenses can be aimed, and every pseudo instruction takes only one clock cycle. Because of that it can also make CISC instructions outperform RISC (at the expense of a significantly larger ROM).

Granted, most piezoelectric crystals oscillate considerably below 1GHz, but that limitation comes from having to conduct a usable amount of electricity for the purpose of making an electronic oscillator. I'm not sure what the ceiling is on how fast a piezoelectric lens could be aimed, but at any rate aiming a laser is a different bottleneck than the heat dissipation of transistors, and in my opinion it's a much easier problem to work around.

Re:in the future (0)

Anonymous Coward | more than 2 years ago | (#38035812)

Right, in the future (where I'm from) this is what we use. When you say optical ROM, is it something 2D? We just use 3D laser interference; I don't know the details. By volume it's the biggest component of any computer.

Next up : Compiler wars! (-1, Offtopic)

Weezul (52464) | more than 2 years ago | (#38030656)

LLVM has taken every small profiling advantage the HotSpot JVM found and backported it to C.

We could even see functional languages with layers of profiling metacode producing self modifying code that runs blazingly fast.

Re:Next up : Compiler wars! (2)

jbolden (176878) | more than 2 years ago | (#38030682)

We could even see functional languages with layers of profiling metacode producing self modifying code that runs blazingly fast.

Could you expand on that? How? For example how do you avoid the quadratic memory issues in lazy languages?

Re:Next up : Compiler wars! (3, Informative)

epine (68316) | more than 2 years ago | (#38031460)

We could even see functional languages with layers of profiling metacode producing self modifying code that runs blazingly fast.

The computer capable of that level of introspection and inference would snort at your silly fashion bias toward functional languages. The main calling card of functional languages is to offset weakness in human cognition. The human brain struggles to convert a functional specification into an optimal state machine without dropping a stitch. Kasparov and others complain about computer chess precisely because a well-tested adversary never drops a stitch, or so rarely that chess programmers have dozens of other things to worry about first.

You do realize that the primary virtue of a functional language is purity in the specification domain and that it offers no fundamental advance in the execution domain?

There are two halves to the specification domain: algorithmic correctness, and shaping the performance/resource envelope. Prolog is a fairly reasonable specification language for algorithmic correctness, but almost completely useless at shaping the performance/resource envelope. Ever seen a smart phone OS programmed in Prolog?

There needs to be a word for this particular cognitive bias. This is the cognitive bias that if there's enough food on the planet, no one should starve, neglecting only the distribution challenge modulo politics, history, culture, and human nature.

In a world dominated by programming languages optimized for algorithmic correctness, all our problems will miraculously go away, because all those potent algorithms will sort out the performance/resource envelope without further input of blood, sweat, and tears. Nice vision.

That day will arrive when I specify the desired solution as a shortest path and the computer responds, "no can do, but would you settle for nearly as good almost all of the time under modest stochasticity assumptions in the underlying graph in near real-time to the largest feasible problem size as practically bounded by performance bounds elsewhere in the application feature set as they presently exist for the targeted user base?"

And I will go, could you break that down into smaller pieces? I'm out of practice thinking that hard.

On the transistor topic, it's kind of stupid to neglect the power distribution tree. Idle execution units don't leak if the master valve is slammed shut. In future we can have a much larger set of execution units optimized for different tasks, and only use the one that's needed for a heavy lifting loop.

You're already seeing the shift to dark silicon with the introduction of the ARM A5 as a companion dog to a bigger OOO furnace. One or the other CPU is shut off completely at any given time. Hard to leak power that never arrives.

Re:Next up : Compiler wars! (-1)

Anonymous Coward | more than 2 years ago | (#38033310)

I don't know what any of this means, but you've got my vote!

Re:Next up : Compiler wars! (0)

Anonymous Coward | more than 2 years ago | (#38035782)

Is there some kind of "who can post the most blathering nonsense" contest going on? I mean I get how you'd think that's funny, but it seems boring to me.

bad for amd? (0)

AvitarX (172628) | more than 2 years ago | (#38030674)

Didn't amd bet on extra transistors for bulldozer?

Or is it next Gen architecture when it matters?

Re:bad for amd? (4, Informative)

hairyfeet (841228) | more than 2 years ago | (#38033334)

Bulldozer looks to be like Phenom I, where they try something new and then need to work the bugs out before getting a decent chip. The smart thing is they are focused more on mobile at the moment and have slowed production on the desktop, because the Phenom II and Athlon II still have some life left in the low to mid range, while those Brazos APUs give better bang per watt than the mobile Athlons.

But if you look at their roadmap, Bulldozer is basically a testbed for their next big jump, which ought to be pretty sweet if they can crank it out like they do the Brazos chips. With Bulldozer you have two integer units but only one FP unit shared between them; doing it this way seriously cuts power while supposedly still getting around 80% of what you'd get from a true dual core. It works pretty well on Brazos, but they haven't gotten it down well enough to ramp the clocks, which they need for Bulldozer.

But the sweet thing, which ought to give them a hell of a nice boost and is why the two-integer/one-FP design makes sense and fits in with TFA on different designs, is the new GPU design coming down the pipe from ATI that will be in the future APUs. These new GPU/APU units will switch from VLIW to vector, which should give roughly half of native FP speed (currently GPUs are less than a quarter as fast as CPUs at double-precision FP, last I checked), close to native on the second generation, and native as the ultimate goal.

What makes this sweet is that with the number of stream processors they'll be able to squeeze in there, you'll basically have a super FP unit, so when you aren't doing graphics work the GPU stays busy as a GPGPU. They'll be able to accelerate far more programs with this super FP unit than they can with current tech, which should again give it a pretty nice kick in the pants.

All I know is if they can keep it close to Brazos in terms of price and power usage, I'm all for it. My EEE gives me 40% more battery life than my Athlon II based Wind did, while staying cool and playing HD video smooth as butter. If they can keep the heat down like they have with Brazos it ought to be a pretty damned sweet chip; they just need to work out the kinks with Bulldozer so they can kick up the clocks on the desktop version.

Small 3D transistors (4, Informative)

Animats (122034) | more than 2 years ago | (#38030768)

3D transistors aren't all that new; high power devices have been 3D for decades. Making 3D transistors this small is new. I wonder how long the lifetime is. The smaller the device gets, the worse the electromigration problem gets. The number of atoms per gate is getting rather small.

Note that this is different from making 3D chips. That's about making an entire IC, then laying down another substrate and making another IC on top of it. Or, in some cases, mechanically stacking the chips with vertical interconnects going through the substrate. The density improves, but the fab cost goes up, the yield goes down, and getting heat out becomes tougher. We'll see that for memory devices, but it may not be a win for CPUs.

Re:Small 3D transistors (4, Informative)

fyngyrz (762201) | more than 2 years ago | (#38030866)

3d integration should become practical when 3d cooling (channels? pipes? something else?) can also be easily integrated into the silicon. Once we can get the heat out, there's no particular reason that 3d can't *really* mean "3d integration", instead of "stack dies." I don't see any reason why this wouldn't come to pass. Even so, at the current geometries, we're approaching true high-performance systems on a chip.

Larger chips provide for more interconnects (more edge space) but at some point, that'll be overkill because the system will be all in there, and only I/O will need to be brought out. We're seeing it (in a kind of feeble way) with some of the microcontrollers, but I rather expect (ok, hope) that this will be how computers are supplied, or at least, one way they are supplied.

Re:Small 3D transistors (4, Informative)

lexman098 (1983842) | more than 2 years ago | (#38031240)

Larger chips provide for more interconnects (more edge space) but at some point, that'll be overkill because the system will be all in there, and only I/O will need to be brought out.

There's a little more to it than that. Larger chips draw a large amount of power (very suddenly), which means the number of pins used just for VDD + GND/VSS goes way up. That's especially true since more of the analog circuitry (notoriously sensitive to rail noise) would have to be integrated into the same chip within the same process. That's just power, and depending on the application there's a lot more to consider for your pinout. You can really never have enough pins. That rule of thumb isn't going anywhere soon.

Re:Small 3D transistors (1)

jimmydevice (699057) | more than 2 years ago | (#38033198)

If all the pins were power and optical links were used to communicate, that would reduce pin count and increase bandwidth. But of course this has all been examined ad infinitum and has problems. Experts care to expound?

Re:Small 3D transistors (1)

HappyPsycho (1724746) | more than 2 years ago | (#38034476)

Would it be possible to have the chip start up layer by layer? I'm using a rack of equipment as my real-world example; staggering startup effectively reduces the maximum amount of current needed at any point in time. Of course, I'm making the assumption that the startup current exceeds the maximum running current under full load.

Re:Small 3D transistors (1)

lexman098 (1983842) | more than 2 years ago | (#38035642)

No this isn't start-up related. Chips have a lot of variation in "switching-activity" or the need to suddenly charge and discharge internal nodes. This is during normal operation, and you do what you can to throw in as much "decoupling capacitance" as possible (to insulate other circuitry from rail noise and relax current draw requirements) but it takes up space and can only do so much.

Re:Small 3D transistors (5, Interesting)

phaserbanks (1977290) | more than 2 years ago | (#38031468)

The more 3D features you pattern onto a wafer, the more mechanical stress you create. This is especially true when you integrate features with different materials and different coefficients of thermal expansion. Such features can increase the warpage and bow of the wafer to such a point that the fabrication equipment can no longer handle the wafer. It becomes like trying to feed a potato chip into a CD changer.

The larger the wafer, the worse this problem becomes, and today they're running very large 12" wafers that are quite sensitive to mechanical stress. Also, the SOI wafers are more prone to warpage than single crystal silicon.

So, the *real* 3D integration you're talking about is very difficult.

Re:Small 3D transistors (1)

fyngyrz (762201) | more than 2 years ago | (#38032418)

That's why my assertions were predicated upon cooling.

Re:Small 3D transistors (1)

phaserbanks (1977290) | more than 2 years ago | (#38032750)

You were talking about integrated microchannels for cooling, right? Or through hole vias patterned into the die? That's what I'm talking about. Digging holes in the crystal and/or depositing metal both cause the wafer to warp.

Re:Small 3D transistors (1)

fyngyrz (762201) | more than 2 years ago | (#38033744)

I was just handwaving; I've seen cooling strategies (in macro) that range from fan driven air to full-immersion oil baths to pumped, pressurized materials that run through heat sinks. At micro, I really don't know what the solution is (otherwise I'd be at the patent office), I'm just surmising that as it is a physical engineering problem that directly stands in the way of technology (and huge amounts of earning potential), someone is quite likely to solve it.

It also seems very unlikely to me that we've reached a dead end on creating 3D integration. A temporary pause while we work the problem is considerably easier to accept. Perhaps digging holes can be replaced with growing them, stress-free. Or something else we've not even thought of yet. Perhaps silicon isn't the material we'll end up using. Perhaps it'll all be optical and we're not even in the ballpark yet. All I know for sure is that so far, betting on leaps in technology has been an excellent bet since the very first transistor arose out of Gordon Teal's lab work. Heck, it was even true of vacuum tubes.

Re:Small 3D transistors (1)

blind biker (1066130) | more than 2 years ago | (#38034344)

Well, actually, nowadays chip-level (rather than wafer-level) stacking/integration is becoming ever more prevalent. There are various types of micro-manipulators, and even self-aligning techniques, mostly developed by Japanese scholars, that make this task possible.

Re:Small 3D transistors (2)

sensei moreh (868829) | more than 2 years ago | (#38031672)

3d integration should become practical when 3d cooling (channels? pipes? something else?)

The obvious answer is diamond. Semiconduction and high thermal conductivity.

Re:Small 3D transistors (1)

rtb61 (674572) | more than 2 years ago | (#38032398)

Better to add in the third dimension to the calculations themselves, switch from binary '0,1' to trinary '+1,0,-1'. New problems to solve but a definite leap in 'calculation' density.

Re:Small 3D transistors (0)

Anonymous Coward | more than 2 years ago | (#38035102)

Better to add in the third dimension to the calculations themselves, switch from binary '0,1' to trinary '+1,0,-1'. New problems to solve but a definite leap in 'calculation' density.

The Russians already did that 50 years ago.

Re:Small 3D transistors (1)

phaserbanks (1977290) | more than 2 years ago | (#38032426)

When diamond becomes as cheap and plentiful as silicon... There's already a lot of research into using diamond for high-voltage power semiconductor devices.

Re:Small 3D transistors (0)

Anonymous Coward | more than 2 years ago | (#38032672)

Someone bought DeBeers a few weeks ago. Maybe they read your post back then.

Re:Small 3D transistors (0)

Anonymous Coward | more than 2 years ago | (#38031900)

"when 3d cooling (channels? pipes? something else?)"

Series of tubes, I think.

Re:Small 3D transistors (1)

fyngyrz (762201) | more than 2 years ago | (#38032428)

Alaska has patented that, won't work. Besides, you can see Russia from there. From the bridge. To nowhere.

Re:Small 3D transistors (4, Interesting)

Animats (122034) | more than 2 years ago | (#38033080)

3d integration should become practical when 3d cooling (channels? pipes? something else?) can also be easily integrated into the silicon.

That's being tried by IBM. [electroiq.com] But it's probably not going to be useful for portable and mobile devices. IBM is looking at it for high-density server farms.

Re:Small 3D transistors (3, Informative)

tlhIngan (30335) | more than 2 years ago | (#38033172)

Larger chips provide for more interconnects (more edge space) but at some point, that'll be overkill because the system will be all in there, and only I/O will need to be brought out. We're seeing it (in a kind of feeble way) with some of the microcontrollers, but I rather expect (ok, hope) that this will be how computers are supplied, or at least, one way they are supplied.

Larger chips are also more expensive. A silicon wafer costs anywhere from $1000-3000 each. Each wafer has a fixed area, and the larger the chip, the fewer of them per wafer. Additionally, a larger chip means there's more of a chance that an imperfection in the wafer will destroy the entire chip, leading to lowered yields. Lowered yields mean the base price of each chip goes up, as there are fewer chips to pay for the entire batch.

There are two kinds of chips - silicon-limited, and I/O limited. Memory devices (both volatile (DRAM) and non (Flash)) are silicon-limited - they are as big as economically possible (more area == more capacity after all) juggling yields and such to reach a usable price point.

CPUs are I/O limited - they are actually very small devices, the only thing keeping them back is the number of I/O pins. And it's not the actual silicon itself - it's the physical package that connects to the PCB. The most popular packages are BGA, but even those have specifications on ball size and ball spacing. Put the balls too close together and too small, and the cost of the base PCB holding the chip goes up significantly as the PCB has to be made to tighter tolerances.

Even so - we're talking about a thousand pins still in the latest high end Intel and AMD parts. This is doable as the PCB chip carrier can be made very specially (it only holds the chip, after all, and doesn't have to hold the rest of the circuits for the device) - basically it's a breakout board.
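To make the die-size/yield tradeoff above concrete, here is a minimal back-of-the-envelope sketch; the defect density, wafer area, and wafer cost below are assumed, illustrative numbers, and the exponential yield model is a common simplification rather than any fab's actual model:

    import math

    # Rough sketch (assumed numbers): cost per *good* die as die area grows,
    # using the simple exponential yield model, yield = exp(-defect_density * area).
    WAFER_COST = 2000.0     # dollars, mid-range of the $1000-3000 quoted above
    WAFER_AREA = 70000.0    # mm^2, roughly a 300mm wafer
    DEFECT_DENSITY = 0.002  # defects per mm^2 (assumed)

    def cost_per_good_die(die_area_mm2):
        dies_per_wafer = WAFER_AREA / die_area_mm2           # ignores edge loss
        yield_fraction = math.exp(-DEFECT_DENSITY * die_area_mm2)
        return WAFER_COST / (dies_per_wafer * yield_fraction)

    for area in (50, 100, 200, 400):
        print(f"{area} mm^2: ${cost_per_good_die(area):.2f} per good die")

Doubling the die area more than doubles the cost per good die, since you get fewer candidates per wafer and a smaller fraction of them work.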

Re:Small 3D transistors (1)

flyingfsck (986395) | more than 2 years ago | (#38033230)

Cooling pipes and 3D parts are nothing new, really. High-power thyristor switches have used cooling pipes for decades. There are fully electronic systems in the power industry that can do DC-to-AC conversion and many other neat things in the megawatt range.

Re:Small 3D transistors (1)

Alwin Henseler (640539) | more than 2 years ago | (#38030936)

Note that this is different from making 3D chips. That's about making an entire IC, then laying down another substrate and making another IC on top of it. Or, in some cases, mechanically stacking the chips with vertical interconnects going through the substrate. The density improves, but the fab cost goes up, the yield goes down, and getting heat out becomes tougher. We'll see that for memory devices, but it may not be a win for CPUs.

Well, you could use it not only to pack ever more transistors onto the same area, but also to pack the same number of transistors onto a smaller area. For example, not 100 million transistors on a 10*10mm die, but 100 million transistors on a 3*3mm die, stacked 11 layers high (assuming the cubic space per transistor is the same).

Sure, that would be more difficult to produce and thermally less optimal, but it would also enable shorter interconnects within an IC. Shorter interconnects -> higher clock speeds? Less interconnect -> more space for logic. So the net result might just make it worthwhile.

Re:Small 3D transistors (0)

Anonymous Coward | more than 2 years ago | (#38030968)

Electromigration is not a problem of transistors but of interconnects, and more generally your metal wiring. You're probably thinking of hot carriers. Also, laying down another substrate on an IC is ridiculous (reliability). And lastly, you cannot compare a FinFET (the 3D FET) to other transistors. Period.

Intel's 3g gate transistors stop all current (4, Insightful)

electrosoccertux (874415) | more than 2 years ago | (#38030812)

Current sneaks through the part of the channel that's farthest from the gate and also through the underlying silicon substrate

big "huh" at this article excerpt, the point of Intel's 3d gate transistors is it allows for a fully depleted region of silicon in the channel. IE, the gate is so close to the silicon, NO electrons exist in the channel when it is off. The only leakage current you can have then through the channel is quantum tunneling, and that's basically nil; bringing the total current consumption of the transistor down by a factor of 10. Ho hum silly slashdot summary, get off my lawn!

Re:Intel's 3g gate transistors stop all - Nope (0)

Anonymous Coward | more than 2 years ago | (#38031122)

Capacitance and quantum tunneling are the reasons for leakage current. Instead of fully addressing the known issues, management forces smaller dies without understanding what happens as one gets closer to the quantum realm.

Re:Intel's 3g gate transistors stop all current (4, Informative)

mkiwi (585287) | more than 2 years ago | (#38031144)

I think the point they're trying to make is that there's some sort of depletion going on in the channel, which causes a very small, but not insignificant, amount of current to flow from drain to source through the transition region in the substrate. From the standpoint that electrons are sitting on the upper part of the Si substrate underneath the channel, the summary makes sense. They want to remove the excess Si so that depletion-mode current is more tightly controlled.

Re:Intel's 3g gate transistors stop all current (4, Interesting)

Calos (2281322) | more than 2 years ago | (#38031256)

That part of the summary was probably meant to address traditional planar transistor designs, where it is roughly accurate. It is one of the reasons why Intel has been pursuing 3D transistors - more gate control over the channel and no bulk leakage.

Another approach is to use a buried oxide layer, so that the transistors simply don't have a bulk substrate, and the channel is thin enough to allow better gate control. This approach will help the leakage, but 3D gets you faster transistors, too, because there is more area the gate directly controls to form an inversion layer to conduct current. The upside of this method is that if we can fabricate the wafers, the rest of the processing is mostly the same (though those wafers will be expensive). 3D requires a lot more work, but apparently Intel has that figured out.

Re:Intel's 3g gate transistors stop all current (5, Informative)

Technician (215283) | more than 2 years ago | (#38032468)

SOI limits the depth of the conductive channel by placing a film on an insulator. If the insulator is a low-k dielectric, the capacitance is reduced, helping the speed. The 3D transistor, on the other hand, has a vertical fin of semiconductor created by etching away the surrounding material. This places the flat film of semiconductor on edge; then a wrap-around gate applies the e-field on both sides and the top, essentially surrounding the doped semiconductor path on 3 of 4 sides. This places all of the channel in close proximity to the gate voltage, so a smaller voltage can pinch off the channel. SOI still has a gate on only one side (the top) of the semiconductor channel.

If you don't understand the tech, a photo is worth many words. A photo can be seen here.
http://www.pcmag.com/article2/0,2817,2384909,00.asp#fbid=2uqV-rrPnOE [pcmag.com]
Most people do not understand the photo. The center lattice structure contains 12 transistors. It has 6 parallel N-channel devices in series with 6 parallel P-channel devices. The semiconductor is the shorter fins under the higher fins. There are 6 of these fins with 2 transistors each, configured in complementary pairs as a basic inverter. The 5 bars on top are the source on the ends, the drain in the center, and the two gates in between. The gate wraps the channel under it between the source and drain of each transistor. This is considerably different from SOI technology.

Re:Intel's 3g gate transistors stop all current (1)

Calos (2281322) | more than 2 years ago | (#38034720)

Exactly, but more detail than I decided to go into :)

I'd probably just link to the Ars or TechReport article instead, though those may go over the head of people that have no education in this stuff.

Re:Intel's 3g gate transistors stop all current (3, Interesting)

Kjella (173770) | more than 2 years ago | (#38032028)

This has been one of their major bullet points: the next round of processors will improve power consumption a lot. So if Intel's not on the right path, I don't know who is. AMD's Bulldozer certainly is not. Of course, sooner or later this is going to come to a halt; silicon atoms are roughly 0.235nm apart, so 22/0.235 = 93.6 atoms. The roadmap [blogspot.com] puts us at 8nm = 34 atoms in 6 years. Just extrapolating, in 2023 it'll be 12 atoms, in 2029 4.5 atoms, and in 2035 1.6 atoms. That's not going to happen; at the latest, in the 2020s we will hit a brick wall and Moore's "law" will be dead. We'll hit some level of energy efficiency and most likely stay there.
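For anyone who wants to follow the arithmetic, here is a minimal sketch that reproduces that extrapolation; it simply assumes the same 0.235nm Si-Si spacing and keeps shrinking by the same 22nm-to-8nm factor every six years (an assumption for illustration, not a roadmap):

    # Sketch of the extrapolation above (assumed numbers): feature size divided
    # by 22/8 every 6 years, expressed in multiples of the ~0.235nm Si-Si spacing.
    SI_SPACING_NM = 0.235
    shrink_per_step = 22.0 / 8.0   # 22nm -> 8nm over one six-year step

    node_nm, year = 22.0, 2011
    while year <= 2035:
        print(f"{year}: {node_nm:5.2f} nm ~ {node_nm / SI_SPACING_NM:5.1f} atoms")
        node_nm /= shrink_per_step
        year += 6

Which lands at roughly 94, 34, 12, 4.5, and 1.6 atoms for 2011 through 2035, matching the figures above.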

Re:Intel's 3g gate transistors stop all current (1)

electrosoccertux (874415) | more than 2 years ago | (#38034570)

The silicon lattice constant is more like 4.5 angstroms or something, I believe, not 2.35... and apparently that puts the wall at about 5nm, "with tweaking maybe 4nm". Maybe we can play with carbon nanotubes then, but we'll hit a wall regardless; we can't go much smaller... It will be interesting to see what happens to Intel's stock and employment. If I had to guess, they will just lay everybody off except the actual fabrication workers and continue selling the chips...

Re:Intel's 3g gate transistors stop all current (0)

Anonymous Coward | more than 2 years ago | (#38032172)

For this process node a FinFET (3d gate) may work, but as the gate geometry gets smaller, quantum tunneling will cause a leakage current and even a small leakage current times a couple of billion transistors amounts to serious current loss and unwanted heating.

Re:Intel's 3g gate transistors stop all current (1)

Technician (215283) | more than 2 years ago | (#38032400)

The other advantage a wrap-around gate provides is the ability to pinch off the channel at LOWER voltage. This is essential for low-power, high-speed transistors. The overall improvement is lower leakage at lower voltage, lower current, and thus lower power at high speed.
This moves a 90-watt part to a 9-watt part at about the same speed, or a much lower-wattage part at slower speeds. This is essential to bring desktop features to Ultrabooks and other low-profile devices with relatively small batteries. Long battery life and high performance are important in mobile devices. Low power draw and high performance are important in server farms as a cost-cutting item. Again, this is why this breakthrough is such a big deal.

It'd be nice if ... (1)

fsckmnky (2505008) | more than 2 years ago | (#38030816)

Commodity-priced mobos were available that didn't have all the lamer gamer gee-whizardry on them, but instead came with PCI interconnect.

i.e., a standardized, cheap blade/cluster platform.

silicon-on-insulator (SOI) (2)

popeyethesailorman (735746) | more than 2 years ago | (#38031066)

That's one reason they invented SOI. Other advantages include higher speeds (due to less capacitance), higher operating temperatures, latch-up immunity, and radiation hardness.

Re:silicon-on-insulator (SOI) (4, Informative)

UnknownSoldier (67820) | more than 2 years ago | (#38031752)

You are not going to address Moore's Law with silicon -- the problem with silicon is that it has an effective 4 to 5 GHz barrier -- which is its dirty little secret that no one wants to talk about. The army had 100 GHz chips 20 years ago -- guess what, they weren't using silicon, but a germanium compound.

  The only "real" solution is to start looking at other materials.

Re:silicon-on-insulator (SOI) (3, Informative)

rev0lt (1950662) | more than 2 years ago | (#38032084)

The hysteresis of the material also applies to copper, aluminium, gold, and other conductive metals used in manufacturing the circuits.
Germanium has been used in semiconductors longer than silicon, and it is still widely used today. One of the emerging alternatives to silicon is a germanium-silicon alloy that has been gaining traction for some years now, so this is nothing new.

Re:silicon-on-insulator (SOI) (1)

Anonymous Coward | more than 2 years ago | (#38032106)

Moore's Law only makes sense when applied to bulk CMOS. No other semiconductor technology has the momentum to challenge bulk CMOS. No other material has an easily made complementary transistor pair. You can get a III/V or II/VI compound to operate with majority carriers at THz frequencies, but it is difficult to find a replacement for the easily created p and n enhancement devices that can be made in bulk CMOS. If SOI, GaAs, InGaAs, SiGe, GaN, SiC, nanowire FETs, or any other "exotic" material had a native oxide that enabled complementary devices, could operate at higher frequencies, and consumed less power, folks would have abandoned bulk CMOS long ago.

Re:silicon-on-insulator (SOI) (0)

Anonymous Coward | more than 2 years ago | (#38032552)

Moore's law has nothing to do with clock speed; it has to do with the number of transistors in a given space for a given price. I would have thought the whole MHz myth would be dead by now...

Re:silicon-on-insulator (SOI) (1)

Anonymous Coward | more than 2 years ago | (#38032966)

Err... the frequency you're talking about is the transition frequency of the process. Modern CMOS processes have f_t's in the 100+ GHz range. Just because the transistors themselves are that fast, doesn't mean you can design digital systems that clock that fast. It's trivial to design an extremely high speed processor, but it's just not economical since the power consumption goes up as operating frequency squared. The limit to processors today is the ability to dissipate this heat. Of course, new materials would be able to reduce leakage currents or decrease the capacitances that have to be charged during switching events. But the germanium transistors you're talking about from 20 years ago were just BJT's which are unsuitable for dense digital circuits.

Re:silicon-on-insulator (SOI) (0)

Anonymous Coward | more than 2 years ago | (#38034004)

The power in CMOS devices is P ~ CFV^2. Power goes up linearly with frequency, but with the square of the voltage. This is why core voltage has been decreasing with each technology node. Unfortunately, due to clock switching noise, it is very difficult to go much below one volt. Also, the I/O has generally been held at 3V3, and you would still have to switch that at high frequencies unless you parallel the signals (which is done in many cases).
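A minimal numeric sketch of that P ~ CFV^2 relation (the capacitance, frequency, and voltage values below are made-up, illustrative numbers):

    # Dynamic switching power, P = C * f * V^2 (illustrative, assumed values).
    def dynamic_power(c_farads, f_hz, v_volts):
        return c_farads * f_hz * v_volts ** 2

    base = dynamic_power(1e-9, 3e9, 1.2)         # 1 nF switched at 3 GHz, 1.2 V
    half_freq = dynamic_power(1e-9, 1.5e9, 1.2)  # halving f -> half the power
    lower_v = dynamic_power(1e-9, 3e9, 0.9)      # 0.9 V -> (0.9/1.2)^2 ~ 56% of it

    print(base, half_freq / base, lower_v / base)

The squared dependence on voltage is why each process node tries to shave the core voltage, as the parent comment says.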

New physical design. (3, Interesting)

Commontwist (2452418) | more than 2 years ago | (#38031078)

In the whole CPU mounting in desktop PCs the heatsink/fan combos are massive beasts on top of the chip itself. The 'chip' is mostly a heat sink itself with larger connectors to connect up to the motherboard. Compared to the actual chip the connection and heat dissipation materials are huge.

Has anyone tried to create a silicon cube composed of layer upon layer of CPUs that is of low enough speed that heat isn't a problem, especially if you coat it with aluminum? How many CPUs could one fit in such a cube the size of a modern heat sink and how much parallel processing power do you think one could get out of it? Would it be able to stand up to a modern CPU? Given at least a few dozen CPUs could be fit into such a thing the parallel processing should be impressive, I'd think.

Of course, I'm no computer engineer which is why I'm posting this to see how good/bad this idea is.

Re:New physical design. (0)

mikael (484) | more than 2 years ago | (#38031176)

Still never understood why CPUs have to be installed in a motherboard-mounted socket rather than on their own board connector like a GPU. Or why all the connectors have to be on the motherboard and not on a separate board.

I'm guessing that trying to create a silicon cube based on multiple layers would increase the chances of defects, reducing the yield of functional dies. Even if you did get two successful slices, there's always the chance something would get trapped in between.

Maybe you could create a heat sink from a combination of a piezo-electric material and another material with high heat expansion/contraction coefficient, so that rapid heating/cooling would create current.

Re:New physical design. (0)

Anonymous Coward | more than 2 years ago | (#38031352)

CPU on a daughter card has been done before, notably in the P3 generation. It's less than optimal for many reasons. You limit the number of pins by using a daughter board. There are bus-length issues. Heat dissipation is worse since your PCB is smaller (a large amount of heat is dissipated directly into the PCB). And that's just a start.

Re:New physical design. (1)

kesuki (321456) | more than 2 years ago | (#38031368)

They tried that with the slot-based Pentiums; they were a hassle. Then more recently they went to the ball grid array to avoid the oft-broken pins...

Re:New physical design. (1)

Taty'sEyes (2373326) | more than 2 years ago | (#38031620)

Just a little FYI. The P3 card design was to stop AMD from being pin compatible with Intel.

Re:New physical design. (0)

Anonymous Coward | more than 2 years ago | (#38032020)

It was an attempt to slow AMD down (which worked), because if all the motherboard manufacturers adopted Intel's slots then AMD would have to re-architect their processor to also fit in a slot. But I do remember when you could buy an AmGen processor that would fit into a Pentium socket.

Re:New physical design. (1)

Commontwist (2452418) | more than 2 years ago | (#38031406)

Makes me wonder what one could do if you tossed traditional 2-D design (Most CPUs do have layers but very much 2D for all that) and went for a more 3D design like in the human brain. Create a miniaturized silicon 'matrix' of semi-conductor connections in a similar way to the human brain, for example.

Re:New physical design. (4, Informative)

slew (2918) | more than 2 years ago | (#38031780)

Makes me wonder what one could do if you tossed traditional 2-D design (Most CPUs do have layers but very much 2D for all that) and went for a more 3D design like in the human brain. Create a miniaturized silicon 'matrix' of semi-conductor connections in a similar way to the human brain, for example.

The 2 main problems with 3D are currently fabrication density (defect issues, stress, strain, etc.) and how to get rid of all that heat. In your brain, that is solved by self-assembly, redundancy, low usage, and a circulatory system. The current computing model of a usable CPU (runs an OS, does IEEE floating point arithmetic, does branching/looping) is probably too complex to solve this problem the same way in the foreseeable future. Of course, if we change the definition of what a usable CPU is, then perhaps this would be more feasible.

On the other hand, there is some progress being made on bump-stacked or through-substrate via (TSV) assemblies (sometimes done for DRAM & flash for cellphones) and even some limited silicon devices with two layers of active devices per die (instead of the current one layer).

Stacked silicon die are promising, but there is currently a large overhead for the mechanical connection between die, so the density isn't very good. Also there's the problem of differing thermal expansion coefficients between the die that cause mechanical instabilities (which currently has to be solved by just putting in even more interconnect area overhead/margin).

The 2-layered devices are usually not made by stacking two active layers on the same wafer (because it's currently hard to grow a new thick uniform layer of silicon on top of existing circuitry); they are made by patterning one side, sticking a new clean wafer on top, flipping the stack over, shaving off the new top (which used to be the bottom of the original patterned wafer), and then doing a new pattern on the newly shaven surface. As you might imagine, this isn't currently very scalable to more layers, as defects will eventually dominate the system.

Neither technique is currently very good for getting the heat out.

People are working on this, and some limited stuff has made its way out of the lab and into production, but none of the 3D stuff is currently much better than just doing a standard planar chip for most typical CPU projects right now. It's just a niche...

Re:New physical design. (0)

Anonymous Coward | more than 2 years ago | (#38035494)

You get the floating point arithmetic for free in the brain - the time between a particular neuron firing is inversely proportional to the strength of a particular input. Stronger signals mean shorter pulse times. More precision is gained by having more neurons for that input.

Re:New physical design. (3, Interesting)

mikael (484) | more than 2 years ago | (#38033998)

Human brains are more like a supercomputer architecture. The outer layers (gray matter) do all the calculations while the inner layers (white matter) do the connections. There are diffusion-based MRI images that show how all the interconnects go.

For heating, you have the arterial blood supply, while for cooling, you have the venous blood supply, which draws the heat out. For pressure equalisation, there's the circle of Willis, a ring of arteries. Even the flow of blood and nutrients isn't a simple pumping process; it's regulated by a neural system of its own: http://www.brain-aneurysm.com/ba1.html [brain-aneurysm.com]

Re:New physical design. (0)

Anonymous Coward | more than 2 years ago | (#38032008)

You can buy motherboards with the CPU soldered on. Check out all the VIA motherboards with their Nano/C3/C7 CPUs
And all those Atom boards
All the netbooks
And a bunch of laptops (although I think most still use sockets)

Re:New physical design. (1)

SuricouRaven (1897204) | more than 2 years ago | (#38033224)

Soldered on is the usual way to connect an Atom. As the Atom is an embedded processor, it's not expected to be upgradeable.

Re:New physical design. (1)

sjames (1099) | more than 2 years ago | (#38036324)

The socket is so a single SKU of motherboard can be fitted with a variety of CPUs of different speeds after manufacture. In the past, it was common to upgrade the CPU after the fact, but these days by the time you want to upgrade, the new CPUs want a new socket anyway. In the embedded world, the CPU is typically soldered on like all the other chips. The connections all tend to be on the mainboard rather than a daughter board mainly due to the number of connections needed, especially when the memory controller is built in to the CPU.

The whole advantage of stacking the silicon rather than making discrete chips is to take advantage of fast connections between layers. That's defeated if the thing must be slowed down too much to reduce heat. Your point is also valid: the stacked chip would tend to have poor yields.

Re:New physical design. (0)

Anonymous Coward | more than 2 years ago | (#38031294)

Well, humans can barely handle the data produced by machines already. Besides which, if the Matrix is real, why would the machines let us build machines fast enough to beat them?

Re:New physical design. (2)

ChrisMaple (607946) | more than 2 years ago | (#38031632)

Removing heat is the big problem. Every time you double the number of active layers,
  • you double the heat power
  • you double the distance the heat has to travel, thus doubling the heat rise of the innermost layer

Combining the two effects, the innermost layer rises in temperature by a factor of 4, which means that speed has to be reduced by a factor of 4 to get back down to the temperature of the previous iteration. Thus twice as many layers means 2/4 times the processing power.

There's a gain to be had through shorter interconnects, because that means lower capacitance and lower speed-of-light delays, which in turn means more speed and lower power. However, that aspect of improvement quickly hits the point of diminishing returns. Modern ICs with 6 or so layers of interconnect metal can be surprisingly thick, so a signal leaving one plane for processing on the next gets a possibly significant delay.
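A minimal sketch of that argument in code; the simplified thermal model here (power and thermal path each scaling linearly with layer count, and per-layer power scaling linearly with speed) is an assumption for illustration:

    # Rough model of the parent's argument: temperature rise ~ total power *
    # thermal resistance, and both scale with the number of stacked layers.
    # Per-layer speed is then throttled to hold the innermost layer at a fixed
    # temperature, assuming power scales linearly with speed.
    def relative_throughput(layers):
        temp_rise_factor = layers * layers        # layers x power, layers x path
        per_layer_speed = 1.0 / temp_rise_factor  # slow down to keep temp fixed
        return layers * per_layer_speed           # total work = layers * speed

    for n in (1, 2, 4, 8):
        print(n, relative_throughput(n))          # 1.0, 0.5, 0.25, 0.125

Under those assumptions, every doubling of layers halves the total throughput, which is the 2/4 figure above.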

Re:New physical design. (1)

Commontwist (2452418) | more than 2 years ago | (#38031896)

You might also be able to 'layer' aluminum 'channels' inside the cube to get the heat out, or perhaps use a contained liquid heat-distribution system? Layers of aluminum separating the CPUs?

Assuming you could cool the cube down through layers of cooling material as thick as the CPU layers, how powerful would the cube be?

Fabless (5, Interesting)

drhank1980 (1225872) | more than 2 years ago | (#38031204)

Where I think AMD really fell behind was that they were not able to afford the kind of R&D on the manufacturing side that Intel does for each new process. AMD basically gave up and is now in the same boat as the rest of the "fabless" companies, 100% dependent on what TSMC or Global Foundries can produce. This is always going to put you at a competitive disadvantage at the very high end. While Intel is working on pushing down to 22nm FinFET for the "old" architecture, people in the design group are without a doubt working on 16nm and getting sample silicon at that node so they can tune their designs for what the transistors will really look like. When you go fabless you get to figure this out with poor yields while in "manufacturing" at the foundry. Maybe at 130-65nm this wasn't such a big deal, but when you need to make your design work with double- or triple-patterned 193nm immersion lithography, just figuring out some design rules is no simple task.

Also, does anyone know if there is more than one vendor in the world that can make fully depleted SOI of the quality needed for 32nm-28nm on a 300mm wafer? Last I knew, this was a major reason behind Intel pushing FinFET instead of fully depleted SOI.

Re:Fabless (1)

nsaspook (20301) | more than 2 years ago | (#38031980)

Where I think AMD really fell behind was they were not able to afford the kind of R&D on the manufacturing side that Intel does for each new process.

This is why it's so good now to be at least a few process generations behind Intel. If you're on the rusty edge of fab technology, you get Intel's old equipment for pennies on the dollar, and it's been well maintained, unlike used equipment from Asia.

Re:Fabless (2)

blahplusplus (757119) | more than 2 years ago | (#38033158)

"Where I think AMD really fell behind was they were not able to afford the kind of R&D on the manufacturing side"

I believe it is more a matter of market failure and problems related to anti-trust: Intel's advanced manufacturing actually hinders chip design advancement by letting it monopolize production facilities. Look at the underhanded tactics Intel used during the Athlon era when AMD was ahead. In my opinion we have a special case of market failure. The resources now required to make chips at smaller and smaller geometries have pushed costs sky high, and Intel can weather it because it has been a monopoly for so long and has the largest market share. This is why most other chip companies are fabless - they would need something on the order of sustained government investment to compete with Intel, because the barriers to entry are now too high, producing market failure and limiting competition between chip designers. I'm sure there are lots of great chip designers out there, but without the manufacturing facilities to back them up I'm sure we're wasting a lot of great talent.

Re:Fabless (1)

sjames (1099) | more than 2 years ago | (#38036360)

Why do people think AMD is at such a disadvantage at the high end? They're neck and neck with Intel there and tend to produce less heat. It's the high-end desktop where Intel has an advantage. In low-power embedded, AMD has the advantage.

The only place Intel has a clear win is in benchmarks compiled with the Intel compiler without disabling the AMD-crippling function (I'm NOT making that up!).

Not the only way. (2)

queazocotal (915608) | more than 2 years ago | (#38031280)

An increasingly common way to get power savings is to divide the chip into oodles and oodles of blocks.
These blocks are rapidly turned off and on as they are needed.
This is what lets your phone last a week on battery while staying logged into WiFi and 3G.

Re:Not the only way. (0)

Anonymous Coward | more than 2 years ago | (#38031360)

Which phone is that? Not the LG Optimus 2X, I can tell you that!

Isn't IBM the one who's 4 years ahead? (1, Flamebait)

gentryx (759438) | more than 2 years ago | (#38031364)

I'm mainly referring to embedded DRAM, which they use for gigantic on-chip caches. Neither Intel nor AMD has this; they have to use SRAM for their caches, which consumes much more space on the die. Sure, IBM is targeting different customers with their pricing strategy, but just imagine having an 8-core POWER7 chip with 32MB of on-chip L3 cache in your PC. That thing would just blow any x86 CPU away. Partly because of the gigantic cooler required, I admit.

Re:Isn't IBM the one who's 4 years ahead? (0)

Anonymous Coward | more than 2 years ago | (#38032072)

SRAM is faster and more power-efficient than DRAM.
It doesn't require a capacitor, a controller, or refreshing.
It just uses 6 transistors instead of 1.
When it takes 50ns to read from DRAM, it becomes a major bottleneck (even with 13ns low-latency DRAM you've only got 80,000 random reads per second).

Re:Isn't IBM the one who's 4 years ahead? (1)

fnj (64210) | more than 2 years ago | (#38032258)

Huh - wouldn't 13 MICROseconds be 76.9 thousand reads/s and 13 ns be 76.9 MILLION reads/s?
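For anyone checking the arithmetic in the two comments above, reads per second is simply the reciprocal of the access latency (assuming fully serialized accesses with no pipelining or banking), which bears out the correction:

    # reads/s = 1 / latency, assuming one access at a time (no pipelining/banking).
    for label, latency_s in [("50 ns", 50e-9), ("13 ns", 13e-9), ("13 us", 13e-6)]:
        print(f"{label}: {1.0 / latency_s:,.0f} random reads/s")
    # 50 ns -> 20,000,000; 13 ns -> ~76,900,000; 13 us -> ~76,900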

Re:Isn't IBM the one who's 4 years ahead? (1)

Anonymous Coward | more than 2 years ago | (#38032128)

Is this clever troll bait, or do you simply not understand that DRAM is significantly slower to access?

Yes, IBM has embedded DRAM as their L3 cache, but Intel and AMD essentially have embedded SRAM as their L3 cache in the range of 4-8MB, with DRAM (DDR) as L4. So what's your point, sir? That IBM cannot afford to put large SRAM on their process, so they found a way to get (slower and more power-hungry) DRAM closer to the cores as a stop-gap?

Besides, a POWER7 @ 3.8GHz has a TDP of ~200W, while an Intel Sandy Bridge server part @ 3.1GHz with 20MB of L3 SRAM cache has a TDP of ~150W. It would be interesting to compare benchmarks...

Re:Isn't IBM the one who's 4 years ahead? (2, Interesting)

Anonymous Coward | more than 2 years ago | (#38033302)

IBM's eDRAM solution is very expensive. It is not that Intel doesn't know how to make it. They must have evaluated it but finally figured that it is a big yield issue and not worth the cost. SRAM pretty much uses the standard logic manufacturing process with straightforward customization. Including DRAM means a lot of extra process steps. And the name of the game, in the consumer space, is to reduce cost. POWER7 isn't available on any low-end system. And high-end Xeon chips have started to eat POWER7's lunch since they are much cheaper solutions.


Fin Field Effect? (1)

MacGyver2210 (1053110) | more than 2 years ago | (#38032610)

I thought Intel's new FinFET transistor structure was going to be the new standard? They had excellent results without significant retooling or adjustment to the manufacturing process. They just built the transistors upward instead of across, at an ever-smaller scale.

Xilinx are the leaders in transistor count at 6.8B (5, Informative)

axonis (640949) | more than 2 years ago | (#38032648)

The device below looks like the current transistor-count king to me, way beyond the scope of the discussions on Moore's Law here; sometimes you need to think outside the mainstream box. It's more than double Intel's best [wikipedia.org]:


Virtex-7 2000T FPGA Device First to Use 2.5-D IC Stacked Silicon Interconnect Technology to Deliver More than Moore and 6.8 Billion Transistors, 2X the Size of Competing Devices SAN-JOSE, Calif., Oct. 25, 2011-- Xilinx, Inc. [design-reuse.com](Nasdaq: XLNX) today announced first shipments of its Virtex®-7 2000T Field Programmable Gate Array (FPGA), the world's highest-capacity programmable logic device built using 6.8 billion transistors, providing customers access to an unprecedented 2 million logic cells, equivalent to 20 million ASIC gates, for system integration, ASIC replacement, and ASIC prototyping and emulation. This capacity is made possible by Xilinx's Stacked Silicon Interconnect technology, the first application of 2.5-D IC stacking that gives customers twice the capacity of competing devices and leaping ahead of what Moore's Law could otherwise offer in a monolithic 28-nanometer (nm) FPGA.