Will 7nm and 5nm CPU Process Tech Really Happen?

Soulskill posted about 3 months ago | from the less-is-more dept.

Hardware 142

An anonymous reader writes "This article provides a technical look at the challenges in scaling chip production ever downward in the semiconductor industry. Chips based on a 22nm process are running in consumer devices around the world, and 14nm development is well underway. But as we approach 10nm, 7nm, and 5nm, the low-hanging fruit disappears, and several fundamental components need huge technological advancement to be built. Quoting: "In the near term, the leading-edge chip roadmap looks clear. Chips based on today's finFETs and planar FDSOI technologies will scale to 10nm. Then, the gate starts losing control over the channel at 7nm, prompting the need for a new transistor architecture. ... The industry faces some manufacturing challenges beyond 10nm. The biggest hurdle is lithography. To reduce patterning costs, Imec's CMOS partners hope to insert extreme ultraviolet (EUV) lithography by 7nm. But EUV has missed several market windows and remains delayed, due to issues with the power source. ... By 7nm, the industry may require both EUV and multiple patterning. 'At 7nm, we need layers down to a pitch of about 21nm,' said Adam Brand, senior director of the Transistor Technology Group at Applied Materials. 'That's already below the pitch of EUV by itself. To do a layer like the fin at 21nm, it's going to take EUV plus double patterning right out of the gate. So clearly, the future of the industry is a combination of these technologies.'"
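For a rough sense of why a 21nm pitch is "already below the pitch of EUV by itself," here is a back-of-the-envelope sketch in C using the Rayleigh resolution criterion (half-pitch ≈ k1·λ/NA). The NA and k1 values are illustrative assumptions for a single-exposure EUV scanner, not figures from the article.

#include <stdio.h>

int main(void) {
    /* Rayleigh criterion: achievable half-pitch ~= k1 * lambda / NA */
    double lambda_nm = 13.5;  /* EUV wavelength, nm */
    double na        = 0.33;  /* assumed numerical aperture of a first-generation EUV scanner */
    double k1        = 0.35;  /* assumed practical process factor for a single exposure */

    double half_pitch = k1 * lambda_nm / na;  /* minimum half-pitch, nm */
    double pitch      = 2.0 * half_pitch;     /* minimum full pitch, nm */

    printf("single-exposure EUV minimum pitch ~ %.1f nm\n", pitch);
    /* ~28-29 nm, well above the 21 nm fin pitch quoted for 7nm, hence the need
       to combine EUV with double patterning (each exposure relaxed to ~42 nm pitch). */
    return 0;
}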

142 comments

Car analogy? (4, Funny)

sinij (911942) | about 3 months ago | (#47281593)

Could someone explain to me why further refinement of fabrication process is the only way to progress? With a car analogy?

Re:Car analogy? (2)

50000BTU_barbecue (588132) | about 3 months ago | (#47281625)

Drivers are getting fatter and fatter, and the only way to get the car to move at the same speed is by continually improving the car... to end up at the same speed as before.

Re:Car analogy? (1)

Noah Haders (3621429) | about 3 months ago | (#47281645)

fuel cell cars are always on the cusp of commercialization, but remain 10 years out due to some technical hurdles. They've been 10 years out for decades.

Re:Car analogy? (1)

infogulch (1838658) | about 3 months ago | (#47282273)

So fuel cell cars are memristors? That actually sounds about right.

Re:Car analogy? (1)

peragrin (659227) | about 3 months ago | (#47283749)

Ah, but HP is already testing and designing products with memristors. Of course, progress is going slowly because HP sucks at bringing products to market.

Re:Car analogy? (1)

serviscope_minor (664417) | about 3 months ago | (#47283055)

fuel cell cars are always on the cusp of commercialization, but remain 10 years

Cars, maybe. But fuel cell buses are a regular sight round these parts:

http://en.wikipedia.org/wiki/L... [wikipedia.org]

Re:Car analogy? (1)

Opportunist (166417) | about 3 months ago | (#47282073)

In a rather odd way it's incredibly fitting.

Re:Car analogy? (0)

Anonymous Coward | about 3 months ago | (#47282713)

Yes, fat people do fit oddly in the seats of cars, planes, etc. that are made for normal-width people.

Re:Car analogy? (0)

Anonymous Coward | about 3 months ago | (#47283677)

Fat's not thunny.

Re:Car analogy? (1)

dkman (863999) | about 3 months ago | (#47284045)

Or aim it downhill

Re:Car analogy? (0)

Anonymous Coward | about 3 months ago | (#47281709)

Could someone explain to me why further refinement of fabrication process is the only way to progress? With a car analogy?

There's a limit to how big the engine compartment can be, so you need to keep squeezing more stuff into that limited space to improve.

Re:Car analogy? (5, Insightful)

spire3661 (1038968) | about 3 months ago | (#47281791)

Let's extend this. You can only bore out the cylinders so much before you have to start looking at a new design or your cylinder walls will be too thin.

Re:Car analogy? (1)

Zeromous (668365) | about 3 months ago | (#47281857)

If you take anything away from the car analogy, let it be this.

Re:Car analogy? (1)

schlachter (862210) | about 3 months ago | (#47282839)

You have to know when to push past those barriers...until you have a single cylinder engine with more displacement than before!

Re:Car analogy? (1)

Anonymous Coward | about 3 months ago | (#47281807)

Why do we need a bigger engine? Oh, to tow the larger and larger trailer of crap that programmers *I mean drivers* keep trying to tow with the vehicle.

Re:Car analogy? (1)

Sockatume (732728) | about 3 months ago | (#47281779)

Everyone wants faster, cheaper, and lighter cars, but you cannae break the laws o' physics, captain.

Re:Car analogy? (3, Interesting)

Zak3056 (69287) | about 3 months ago | (#47282231)

Everyone wants faster, cheaper, and lighter cars, but you cannae break the laws o' physics, captain.

That doesn't sound like breaking the laws of physics: making the car lighter will make it faster, as well as (assuming you avoid exotic materials) making it cheaper.

Re:Car analogy? (4, Insightful)

drinkypoo (153816) | about 3 months ago | (#47282957)

That doesn't sound like breaking the laws of physics: making the car lighter will make it faster, as well as (assuming you avoid exotic materials) making it cheaper.

It's not breaking the laws of physics, but it is ignoring the current state of materials technology. You have to build a lot of cars before you can get the cost of building an aluminum body down to the same as the cost of building a steel body, and carbon fiber (the only other credible alternative today) is always more expensive.

Also, they forgot "stronger". Cars which have a more rigid body not only handle better but they're actually more comfortable, because the suspension can be designed around a more rigid, predictable body. Getting all four of those things in the same package is the real challenge.

Re:Car analogy? (1)

Anonymous Coward | about 3 months ago | (#47281837)

As you get smaller channels, they start to interfere with each other, much like shrinking the lane size on an interstate would cause similar problems.

Re:Car analogy? (0)

Anonymous Coward | about 3 months ago | (#47281883)

If cars progressed the way chips did, your car would get 1,000,000 MPG, drive 1,000,000,000 MPH, weigh 1 mg, and cost a nickel. I made those numbers up but the bottom line is that continuously making transistors smaller means you're always getting a lot more for your money.

Re:Car analogy? (2)

alen (225700) | about 3 months ago | (#47282017)

same with cars

30 years ago you had an AM radio, you needed a V8 to get 190hp, and dozens of features we take for granted today would have been thought of as belonging only on ultra-luxury cars. It's not like we had navigation, Bluetooth, and lots of other gizmos in cars. A lot of the safety features and the new features people want suck up gas as well.

Re:Car analogy? (2)

drinkypoo (153816) | about 3 months ago | (#47283003)

a lot of the safety features and the new features people want suck up gas as well

Safety, yes. But none of the new features people want weigh anything notable. Indeed, most of them come free with the size reduction associated with modernization. If you replace a bunch of relays with some logic and a couple of relays then you can also add automation along with it and the whole thing actually weighs less. Even the lightest cars today have ABS, traction control, and yaw control, and virtually no cars [in the USA] are not offered with AC. Even modern adaptive suspension requires very little additional equipment, if it uses ferrofluids that is.

In fact, virtually all the weight increase in modern cars that doesn't relate directly to crash safety has to do with asphalt, added for sound deadening. That's the primary difference between (for example) a Toyota and a Lexus. Sometimes the Lexus has a fancier powerplant than you can get in a Toyota, but then the same sort of thing will make it there within a few years. I'm just picking on them because I like to; it's the same with all uprated marques, whether it's Infiniti to Nissan or even Mercury to Ford. More asphalt, same car.

Re:Car analogy? (2)

Opportunist (166417) | about 3 months ago | (#47282151)

Yeah, but brakes would break occasionally for no reason, or it would just not start for no good reason; you could only drive on roads that the car makers approved and only transport goods that were approved to be transported by this specific kind of car; you'd have to get a new car every other year because you would not get any service for your old one anymore; people could easily hotwire your cars and drive away with them; and everyone would tell you that whatever goes wrong with it, it's only YOUR fault, not the manufacturers'.

Re:Car analogy? (0)

Anonymous Coward | about 3 months ago | (#47283703)

Yeah, but brakes would break occasionally for no reason, or it would just not start for no good reason; you could only drive on roads that the car makers approved and only transport goods that were approved to be transported by this specific kind of car; you'd have to get a new car every other year because you would not get any service for your old one anymore; people could easily hotwire your cars and drive away with them; and everyone would tell you that whatever goes wrong with it, it's only YOUR fault, not the manufacturers'.

Sounds like a Microsoft Car(tm)

Re:Car analogy? (5, Insightful)

DahGhostfacedFiddlah (470393) | about 3 months ago | (#47281981)

We're trying to make smaller and smaller cars out of silicon, because then we can fit more cars onto parking lots. The number of cars we can fit onto a parking lot has been doubling approximately every 18 months for the past half-century, but we appear to be approaching some hard physical limits for the actual size of cars. In addition to the limits imposed by the size of the cars themselves (below a certain size, cars start interacting at a quantum level with the other cars around them), there are also challenges inherent in manufacturing cars at such a tiny scale. There is some new car-making technology on the horizon that may resolve these issues by using higher-frequency car-making lasers in our car foundries. But top researchers still have technical hurdles to pass before they can manufacture cars that are smaller than 7nm.

Re:Car analogy? (0)

Anonymous Coward | about 3 months ago | (#47282013)

We've reached the limit of how fast cars can travel, so the only way to speed up transportation is to make the roads shorter...

Re:Car analogy? (1)

ifiwereasculptor (1870574) | about 3 months ago | (#47282603)

Could someone explain to me why further refinement of fabrication process is the only way to progress? With a car analogy?

Easy enough. Take a car X driven by a driver Y. One driver can drive one car, so X = Y. If you make the car 50% smaller, then you'll have 2X = Y. If each car has a top speed of V, then the same driver Y can achieve 2V by driving those two smaller cars at once.

For a sense of scale (0)

Anonymous Coward | about 3 months ago | (#47281623)

Silicon atoms are 0.2nm wide. We're getting into "why aren't you just directly pushing the atoms around with atomic force microscopy?" territory.
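A quick sketch of that scale, in C. The 0.235nm nearest-neighbor spacing of silicon is a textbook value; treating the node names as literal feature sizes is a simplification.

#include <stdio.h>

int main(void) {
    /* Nearest-neighbor Si-Si spacing in the diamond lattice is ~0.235 nm */
    double si_spacing_nm = 0.235;
    double nodes_nm[] = { 22, 14, 10, 7, 5 };

    for (int i = 0; i < 5; i++)
        printf("%2.0f nm feature ~ %4.0f silicon atoms across\n",
               nodes_nm[i], nodes_nm[i] / si_spacing_nm);
    /* 5 nm / 0.235 nm ~ 21 atoms: small enough that adding or removing
       a handful of atoms measurably changes the device. */
    return 0;
}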

Re:For a sense of scale (1)

SuricouRaven (1897204) | about 3 months ago | (#47281717)

IBM could build a chip that way if they wanted to. It just wouldn't be cost-effective; it would take decades of very delicate work to make a single processor that way.

Re:For a sense of scale (5, Informative)

mc6809e (214243) | about 3 months ago | (#47281897)

We're already at the point where 22nm components are more expensive per transistor than those at 28nm. [eetimes.com]

Previous shrinks lowered the cost of each transistor. It doesn't look like it's going to happen after 28nm.

Re:For a sense of scale (1)

SuricouRaven (1897204) | about 3 months ago | (#47282259)

There are other advantages to shrinking components. Higher clock rates become possible. The power consumption is also lessened, if you can offset the leakage issue somehow.
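A minimal sketch of the power side of that claim, using the classic P = α·C·V²·f switching-power relation. All the numbers are assumptions for illustration; leakage, which the parent says must be offset, is deliberately not modeled.

#include <stdio.h>

/* Classic dynamic (switching) power: P = alpha * C * V^2 * f.
   Numbers below are illustrative, not measured values for any real node. */
int main(void) {
    double alpha = 0.1;      /* activity factor (fraction of gates switching per cycle) */
    double c     = 1.0e-9;   /* total switched capacitance, farads (assumed) */
    double v     = 1.0;      /* supply voltage, volts */
    double f     = 3.0e9;    /* clock frequency, Hz */

    double p_old = alpha * c * v * v * f;

    /* A shrink that cuts switched capacitance ~30% and supply voltage ~10% */
    double p_new = alpha * (0.7 * c) * (0.9 * v) * (0.9 * v) * f;

    printf("dynamic power before: %.2f W, after shrink: %.2f W (%.0f%% lower)\n",
           p_old, p_new, 100.0 * (1.0 - p_new / p_old));
    /* Leakage (static) power is not modeled here; as the parent notes,
       that is the part that gets worse as features shrink. */
    return 0;
}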

Re:For a sense of scale (3, Informative)

ifiwereasculptor (1870574) | about 3 months ago | (#47282709)

Kind of. Heat dissipation starts being a bigger problem and thermally limits clock speed. Look at overclocking Sandy Bridge vs. Ivy Bridge chips.

Re:For a sense of scale (4, Informative)

mc6809e (214243) | about 3 months ago | (#47283349)

There are other advantages to shrinking components. Higher clock rates become possible.

You'd think so, but the problem is global interconnect. Not gates. It was all the way back at the 250nm node when interconnect and gate delay were about the same.

At the 28nm node, wire delay is responsible for something like 80% of the time it takes for signals to work their way through a circuit.

And in some cases inverters are actually used to help signals propagate more quickly down long wires. In other words, long wires are so slow compared to gates that adding gates can speed things up!
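A back-of-the-envelope sketch of that repeater effect, using the standard distributed-RC wire-delay approximation. The per-unit resistance, capacitance, and repeater delay below are assumptions for illustration, not data for any particular node.

#include <stdio.h>

/* Distributed RC wire delay grows quadratically with length:
   t_wire ~ 0.5 * r * c * L^2 (r, c per unit length).
   Splitting the wire into N segments with a repeater (inverter) after each
   makes it ~ N*t_rep + 0.5*r*c*L^2/N, i.e. roughly linear in L.
   Per-unit values below are assumed for illustration only. */
int main(void) {
    double r = 2.0e5;      /* wire resistance per meter, ohm/m (assumed) */
    double c = 2.0e-10;    /* wire capacitance per meter, F/m (assumed) */
    double L = 5.0e-3;     /* a 5 mm cross-chip wire */
    double t_rep = 10e-12; /* delay of one repeater, 10 ps (assumed) */

    double t_plain = 0.5 * r * c * L * L;
    printf("unrepeated 5 mm wire: %.0f ps\n", t_plain * 1e12);

    for (int n = 2; n <= 8; n *= 2) {
        double t = n * t_rep + 0.5 * r * c * L * L / n;
        printf("%d repeaters: %.0f ps\n", n, t * 1e12);
    }
    /* Adding gates (repeaters) makes the path faster, which is the point the
       parent comment makes about interconnect-dominated delay. */
    return 0;
}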

Re:For a sense of scale (1)

Z00L00K (682162) | about 3 months ago | (#47282801)

So unless we come up with a novel technology to build with a higher density, we are at the end of the road for that.

Maybe it's time to focus instead on other ways to improve performance. It may of course mean that the current architectural dogmas have to be abandoned.

Re:For a sense of scale (1)

Anonymous Coward | about 3 months ago | (#47284295)

>We're already at the point where 22nm components are more expensive per transistor than those at 28nm. [eetimes.com]

Only if you have a crappy GF fab.
If you invest in a decent finFET-capable process on sufficiently large wafers, your margins will improve at smaller geometries.

Re:For a sense of scale (4, Insightful)

hairyfeet (841228) | about 3 months ago | (#47284347)

And this little tidbit I'm sure has CPU OEMs scared....they passed "good enough" on their designs and went so far into "insanely overpowered" that consumers really have no reason to buy before the previous unit dies.

Take what I'm typing on as an example: it's an HP Pro 3000 which, since it came with Vista (which I of course upgraded to Win 7; putting 32-bit Vista on a PC with 4GB? WTH HP?), I would date to around 07-08. It has a Pentium Dual at 2.7GHz, 4GB of RAM, and a 500GB HDD... how many home users are actually gonna be able to max this out? I pound the shit out of this machine, downloading drivers and burning discs and yanking data off of memory cards, often at the same time, and it just purrs, so why buy a new one? Now we are seeing the same thing with ARM: my dad recently picked up a tablet I recommended which has 4 cores, 1GB of RAM, and 8GB of onboard storage. Final cost? $140 shipped. The odds that he will be able to max it out? Pretty much zero. This thing has enough power that it can easily drive his widescreen TV over HDMI, surf, chat, and it gets great battery life... what motivation does he have to buy a new one?

Let's face it, x86 systems have become like washers and dryers: no need to get a new one before the old one dies. Hell, this is even true for gamers; my gaming PC at home is fricking 5 years old now, which is ancient history in the PC world, yet with a hexacore, 8GB of RAM, and 3TB of HDD space the only thing I've had to do since buying it is upgrade my GPU. That's it, that is all I've had to do, and I'm playing Bioshock Infinite and Far Cry 3 and anything else I want to play with plenty of bling and decent framerates. We are seeing x86 play out on fast forward with ARM, now going up to octocore because MHz bumps are getting harder to do without blowing the power budget; there is just no reason to buy before the current one dies, which I'm sure is scarier to Intel and TSMC than trying to hit 14nm.

Re:For a sense of scale (2)

Old97 (1341297) | about 3 months ago | (#47281993)

Then IBM would saddle it with some really complex, bloated, crappy middleware called "WebSphere Atomic Appliance for Business". It would be more expensive and run slower than a no-name Intel based blade running Linux and an open source framework. You'd need their professional services to manage it for you.

Re:For a sense of scale (1)

necro81 (917438) | about 3 months ago | (#47282047)

Silicon atoms are 0.2nm wide. We're getting into "why aren't you just directly pushing the atoms around with atomic force microscopy?" territory.

You probably could. However, for a processor with 10^9 transistors and perhaps a dozen layers, it gets pretty time-consuming to build it by pushing atoms around one at a time.

Re:For a sense of scale (1)

wisnoskij (1206448) | about 3 months ago | (#47282205)

Well, I think he is saying that's pretty much what we are already getting to. When you are printing a 10nm wire into the silicon chip, you are not very far from doing it atom by atom as the wire is only like 50 atoms wide.

Re:For a sense of scale (2)

necro81 (917438) | about 3 months ago | (#47282233)

When you are printing a 10nm wire into the silicon chip, you are not very far from doing it atom by atom as the wire is only like 50 atoms wide.

Perhaps, but at least with lithography you can do it across the entire wafer (or die) area in a single go. That's batch processing all the transistors at once, rather than serially processing them with AFM.

e-beam lithography? (4, Informative)

by (1706743) (1706744) | about 3 months ago | (#47281639)

Clearly e-beam has some serious issues (throughput, to name one...), but progress is being made on that front. For instance, http://www.mapperlithography.c... [mapperlithography.com] ( http://nl.wikipedia.org/wiki/M... [wikipedia.org] -- though it appears there's only a Dutch entry...).

The Betteridge Exception (0)

Anonymous Coward | about 3 months ago | (#47281723)

The answer is yes.

Re:The Betteridge Exception (1)

LordLimecat (1103839) | about 3 months ago | (#47282009)

Betteridge's first exception:

Any headline whose question contains thinly veiled skepticism will instead be best answered with a "yes".

Re:The Betteridge Exception (1)

timeOday (582209) | about 3 months ago | (#47282521)

This particular headline is an example of incredible restraint. It would be well justified as: "Is Moore's Law Dead?"

Certainly it is on its deathbed at least.

Re:The Betteridge Exception (1)

ArcadeMan (2766669) | about 3 months ago | (#47282763)

If somebody took care of that Moore guy, his laws wouldn't apply anymore.

Re:The Betteridge Exception (1)

TechyImmigrant (175943) | about 3 months ago | (#47282809)

14nm -> 7nm.

2:1 Looks good to me.

Down at 2nm I think we're going to be worrying about whether the gate has an odd or even number of atoms across its width.

Technical (1)

FrozenToothbrush (3466403) | about 3 months ago | (#47281847)

This seems highly technical, which is great. I would say at best these issues are 5 years out. Plus, stacking processors and making them larger is always an option. The margins on processors can be slim at the low end and many-fold at the top. The manufacturers will have to learn to live on leaner margins all round.

Re:Technical (2)

Guspaz (556486) | about 3 months ago | (#47284101)

The problem with stacking is the thermal/power situation. Specifically, how much power can a processor use before it's impractical to power and cool it? And when you have two or more processor dies stacked on top of each other, the heatsink is only going to contact the topmost one. How do you remove that heat from the bottom one?

I suspect the answers to those questions are: it's not practical to use much more power than we use in high-end desktop chips today (150-200W is probably the limit of practicality), and I recall some interesting stuff from IBM years ago where they were building vertical cooling channels into CPU dies to handle stacking, so that the heat could be moved from the lower dies up to where it could be removed.

Perhaps the approach could be to go with CPU designs that optimize for power consumption rather than performance (but are still more efficient, consuming less power per unit of work), and then stack a bunch of them.
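A rough sketch of the thermal argument, assuming the heat from every layer still has to leave through the top die's face. Die area, per-die power, and the cooling budget are all assumed numbers.

#include <stdio.h>

/* Rough sketch of the thermal argument against naive die stacking.
   All numbers are assumed for illustration. */
int main(void) {
    double die_area_mm2 = 150.0;  /* typical desktop CPU die (assumed) */
    double die_power_w  = 100.0;  /* per-die power (assumed) */
    double budget_w     = 200.0;  /* practical package cooling limit (assumed) */

    for (int layers = 1; layers <= 4; layers++) {
        double total_w = layers * die_power_w;
        double w_per_mm2 = total_w / die_area_mm2; /* heat still exits through one face */
        printf("%d layer(s): %3.0f W total, %.2f W/mm^2 through the top die%s\n",
               layers, total_w, w_per_mm2,
               total_w > budget_w ? "  <-- over the cooling budget" : "");
    }
    /* Only the top die touches the heatsink, so every added layer raises the
       heat flux that has to pass through it, hence IBM-style through-die
       cooling channels, or stacking low-power cores instead of big ones. */
    return 0;
}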

Not exactly news (0)

Anonymous Coward | about 3 months ago | (#47281871)

By the time awareness filters down to the semi-technical press like /., it's pretty much old news. Lithography has been running into bigger and bigger challenges and that has been behind architectural changes like multi-core systems, the re-emergence of specialized co-processors (e.g. GPGPU, FPGA ), and, most recently, embedding of FPGAs on Xeons [slashdot.org] from Intel. There's been some speculative talk about directly providing some configurable logic (basically a FPGA) merged into the processor to allow creation of custom instructions on the fly.

The future is hardware; learn a HDL today.

Re:Not exactly news (1)

phozz bare (720522) | about 3 months ago | (#47282143)

The future is hardware; learn a HDL today.

You're correct here, but I'd like to mention that recent advancements in HLS (High Level Synthesis) allow regular software programmers to write C code that is compiled directly to hardware logic. There are some new rules to learn, things don't always work as expected, and debugging is completely different from debugging software, but my point is that it's definitely possible to write major logic blocks in C without writing a line of VHDL code. So not everyone will necessarily need to learn an HDL to be a part of this change.
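As a sketch of what "C with new rules" tends to look like: a fixed-size kernel with static bounds and no dynamic memory. The pragma is written in Xilinx Vivado/Vitis HLS style as an assumption; other HLS tools use different directives.

#include <stdio.h>

#define N 64

/* HLS-friendly C: fixed trip count, static arrays, no malloc or recursion.
   An HLS tool can turn this loop into a pipelined multiply-accumulate datapath. */
int dot_product(const int a[N], const int b[N]) {
    int acc = 0;
    for (int i = 0; i < N; i++) {
#pragma HLS PIPELINE II=1   /* Vivado/Vitis-HLS-style request for one MAC per clock; tool-specific */
        acc += a[i] * b[i];
    }
    return acc;
}

/* Plain-C testbench: the same source doubles as a software model. */
int main(void) {
    int a[N], b[N];
    for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2; }
    printf("dot product = %d\n", dot_product(a, b)); /* 2 * (0+1+...+63) = 4032 */
    return 0;
}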

Re:Not exactly news (0)

Anonymous Coward | about 3 months ago | (#47282729)

HLS isn't magic; you can't just feed app code from J. Random Programmer into it. The C code has to be structured with constraints similar to an HDL's (as you allude to in your post), with full awareness of the constraints and available resources of the target FPGA, and the results themselves are mediocre compared to directly doing the same thing in an HDL. Might as well use the HDL straight up, IMO.

Same story (1)

JeffOwl (2858633) | about 3 months ago | (#47281891)

Last time it was that leakage would prevent us from breaking 65nm. Before that it was that lithography wouldn't get us below 120nm. Something will happen, like it always does.

Re:Same story (3, Insightful)

rahvin112 (446269) | about 3 months ago | (#47282529)

There is a limit we'll hit eventually; we're approaching circuits that are only a single-digit number of atoms wide. No matter what, we'll never get a circuit smaller than a single atom. Don't get me wrong, I don't think 10nm is going to be the problem, but somewhere around widths of a single-digit number of atoms we're going to run out of options to make them smaller.

Re:Same story (2)

drinkypoo (153816) | about 3 months ago | (#47283053)

There is a limit we'll hit eventually; we're approaching circuits that are only a single-digit number of atoms wide. No matter what, we'll never get a circuit smaller than a single atom.

We'll go optical, and we'll use photons...

Re:Same story (0)

Anonymous Coward | about 3 months ago | (#47282857)

Last time it was that leakage would prevent us from breaking 65nm.

Hello, McFly? It's mitigated a bit, but the leakage is still getting worse and worse the smaller we go. Why the hell do you think clocks have been topping out at 3.x GHz or so since the Pentium IV? Why do you think people are looking into dark silicon?

Leakage doesn't "prevent" anything. It's just another problem among the other problems that are rapidly piling up as feature size keeps shrinking.

what was the excuse for 90nm again? (1)

alen (225700) | about 3 months ago | (#47281979)

I remember in the '90s everyone swore it was impossible to go under 90nm and that 1GHz was the maximum speed you could get.

Re:what was the excuse for 90nm again? (0)

Anonymous Coward | about 3 months ago | (#47282267)

And I remember when the big breakthrough was "sub-micron" design. For a while I had to keep translating between engineers and marketers: 0.65 microns is 650 nm, etc. I left CAE at about the time that rumors of geometries below 100 nm started to surface. And now we're looking at sub-10nm!

Re:what was the excuse for 90nm again? (2)

TechyImmigrant (175943) | about 3 months ago | (#47282823)

I read an article in the Apple ][ days claiming that going beyond 16MHz was impossible, given track-to-track inductance.

Re:what was the excuse for 90nm again? (3, Informative)

Bryan Ischo (893) | about 3 months ago | (#47284483)

I remember the 90's too and I don't remember any of that.

The race to 1 GHz happened in heady, optimistic days, and I don't recall anyone thinking that once we got there, it would all be over.

So I call bullshit on your post.

Low Hanging Fruit (4, Funny)

necro81 (917438) | about 3 months ago | (#47282015)

I am amused by this bit in the summary:

But as we approach 10nm, 7nm, and 5nm, the low-hanging fruit disappears

I'd say the low-hanging fruit disappeared a few decades ago. Continuing down the feature-size curve has for many years required a whole slew of ever-more-complicated tricks and techniques.

That said: yes, going forward is going to be increasingly difficult. You will eventually reach an insurmountable limit that no amount of trickery or technology will be able to overcome. I predict it'll be somewhere around the 0 nm process node.

Re:Low Hanging Fruit (0)

Anonymous Coward | about 3 months ago | (#47282355)

The atoms in a silicon lattice are spaced more than half a nm apart, so I'd say you're right. Even at 5 nm, you can literally use your fingers to count the number of atoms across the width of the feature.

Re:Low Hanging Fruit (0)

Anonymous Coward | about 3 months ago | (#47282459)

I can't wait to see a "pm" abbreviation, "nm" is getting old.

Re:Low Hanging Fruit (1)

ssam (2723487) | about 3 months ago | (#47282627)

how many transistors can you etch onto the side of a silicon atom?

Re:Low Hanging Fruit (0)

Anonymous Coward | about 3 months ago | (#47282733)

can't tell if really stupid, or trying to be funny.

Re:Low Hanging Fruit (0)

Anonymous Coward | about 3 months ago | (#47283307)

Obviously we need to go deeper.

We must create our hardware using quarks.

Re:Low Hanging Fruit (1)

Anonymous Coward | about 3 months ago | (#47283451)

You're asking the wrong question. The better one: How can we get a single silicon atom to behave like a full logic gate?

Re:Low Hanging Fruit (0)

Anonymous Coward | about 3 months ago | (#47282633)

Unlikely; the silicon atom itself has a diameter of around 220pm.

Re:Low Hanging Fruit (0)

Anonymous Coward | about 3 months ago | (#47284353)

My i7 is 22,000 pm!

Re:Low Hanging Fruit (1)

l0ungeb0y (442022) | about 3 months ago | (#47282925)

Came in here to say the same thing -- as if even getting to the nanometer level was somehow "low hanging fruit", never mind the current ability to fabricate at 14nm. Welcome to the brave new world of the gadget generation who take tech for granted yet are completely ignorant of the fundamentals behind it.

Re:Low Hanging Fruit (0)

Anonymous Coward | about 3 months ago | (#47282939)

How does this all play in relation to HP's memristor?

Re:Low Hanging Fruit (2)

mr_mischief (456295) | about 3 months ago | (#47283321)

The part of HP's work that applies here isn't the memristor. That's a low-cost SRAM (as opposed to DRAM). HP does have something to say about electron leakage, though. Their photonic interconnects use photons rather than electrons, hence the name.

Re:Low Hanging Fruit (1)

Anonymous Coward | about 3 months ago | (#47283695)

The memristor is (IMO) a more ambitious goal.

Currently, we have three passive circuit elements: the resistor, the capacitor, and the inductor.

For the resistor, you have a linear relation between R, I, and V.

In the capacitor, you use the rate of change of V to understand its behavior.

In the inductor, you use the rate of change of I.

The memristor tries to use the change of R to give you a passive element to use in conjunction with the above. Where resistance (or C or L) was merely a constant before, now you have something which can track the change of its impedance and thus can act as memory.

I think the memristor could shrink down the size of ICs if it became reliable. However, to hold a simple bit steady, you now need at least 4 transistors. The memristor can (in theory) accomplish this same memory feature with a single, tiny component. However, proving your mechanism is reliable (that you can put millions of these in a tiny package and be sure they will act independently of their environment) requires a lot of R&D.
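A small numerical sketch of that idea, based on the linear ion-drift memristor model HP published: resistance is a blend of Ron and Roff weighted by the doped width w, and w moves with the charge that has flowed. All parameter values and the coarse time step are illustrative assumptions, not measurements of a real device.

#include <stdio.h>

/* Sketch of the linear ion-drift memristor model: a film of thickness D has a
   doped region of width w; resistance blends Ron and Roff by w/D, and w moves
   in proportion to the current that has flowed, so resistance acts as memory.
   All parameter values here are assumptions for illustration only. */
int main(void) {
    double Ron = 100.0, Roff = 16000.0; /* on/off resistances, ohms */
    double D  = 10e-9;                  /* film thickness, m */
    double uv = 1e-14;                  /* ion mobility, m^2/(V*s) (assumed) */
    double w  = 0.1 * D;                /* initial doped width */
    double V  = 1.0, dt = 0.1;          /* bias, volts; coarse time step, s (assumed) */

    for (int step = 0; step <= 5; step++) {
        double M = Ron * (w / D) + Roff * (1.0 - w / D); /* instantaneous memristance */
        printf("step %d: M = %6.0f ohm\n", step, M);
        double i = V / M;                 /* current through the device */
        w += uv * (Ron / D) * i * dt;     /* state update: dw/dt = uv*Ron/D * i(t) */
        if (w > D) w = D;
        if (w < 0.0) w = 0.0;
    }
    /* The printed resistance falls step by step: the device "remembers" the
       charge that has passed through it. */
    return 0;
}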

Will it last with 10yrs of continuous use? (4, Insightful)

mrflash818 (226638) | about 3 months ago | (#47282211)

I worry about the reliability with tinier and tinier CPU feature size. ...how will those CPUs be doing, reliability-wise, 10yrs later?

When I buy something 'expensive', I expect it to last at least 10yrs, and CPUs are kinda expensive, to me.

(I still have an Athlon Thunderbird 700MHz Debian workstation that I use, for example, and it's still reliable.)

Re:Will it last with 10yrs of continuous use? (1)

TechyImmigrant (175943) | about 3 months ago | (#47282775)

Yes.

If you ever get a job designing chips, you will find that RV has become an important part of the design flow.

Re:Will it last with 10yrs of continuous use? (0)

Anonymous Coward | about 3 months ago | (#47283283)

I really don't think recreational vehicles belong in the design flow for computer chips. I'd like to talk to your manager.

Re:Will it last with 10yrs of continuous use? (1)

TechyImmigrant (175943) | about 3 months ago | (#47283461)

You don't? You'll never get a job around here then.

Re:Will it last with 10yrs of continuous use? (-1)

Anonymous Coward | about 3 months ago | (#47282845)

a 700mhz workstation? really nigga? you're better off replacing that shit with a raspberry pi, wth is wrong with you.

Re:Will it last with 10yrs of continuous use? (1)

drinkypoo (153816) | about 3 months ago | (#47283029)

(I still have an Athlon Thunderbird 700MHz Debian workstation that I use, for example, and it's still reliable.)

And I have an Athlon Thunderbird 700 MHz Debian system that I retired years ago and replaced with a Pogoplug. It's no slower and it draws over an order of magnitude less power; IIRC 7W peak before USB devices. You can get one for twenty bucks brand new with SATA and USB3, and install Debian on that. It'll probably pay for itself in relatively short order, what with modern utility rates.

If you want another Athlon 700 though, you can have mine. I need to get rid of it. I need the shelf space for something else.

Re:Will it last with 10yrs of continuous use? (1)

jellomizer (103300) | about 3 months ago | (#47283533)

In my decades of experience I do not ever remember a case where the CPU was the cause of a system failure.

Hard drive failures, GPU, modem, network card, monitor, keyboard, mouse, power supply. But the CPU always seems to keep kicking. Granted, I have had a CPU fan die, but I tend to replace that rather quickly after failure.

But I also don't do stupid things like overclocking.

Re:Will it last with 10yrs of continuous use? (-1)

Anonymous Coward | about 3 months ago | (#47284141)

Overclocking doesn't damage a CPU. Increasing the voltage applied to it does. You ignorant little faggot.

Re:Will it last with 10yrs of continuous use? (0)

Anonymous Coward | about 3 months ago | (#47283665)

I worry about the reliability with tinier and tinier CPU feature size. ...how will those CPUs be doing, reliability-wise, 10yrs later?

When I buy something 'expensive', I expect it to last at least 10yrs, and CPUs are kinda expensive, to me.

(I still have an Athlon Thunderbird 700MHz Debian workstation that I use, for example, and it's still reliable.)

In my last 20 years of computing... I have never seen any system last more than 5 years, because at that point it is so far behind 'modern' systems in terms of computing power and the software it can run that it is nearly useless to me.

extreme ultraviolet (EUV) lithography? (0)

Anonymous Coward | about 3 months ago | (#47282295)

That's just not good enough. Let's cut to the chase and use straight-up gamma rays. Then we can complain that atoms are too fat.

Why are we worried about size? (1)

Mitchell Lafferty (3701091) | about 3 months ago | (#47282357)

Why don't we use smaller architecture in larger dies, so that we have higher densities and higher speeds? Also, wouldn't that allow room for more cores and cache?

Re:Why are we worried about size? (1)

sexconker (1179573) | about 3 months ago | (#47282433)

Why don't we use smaller architecture in larger dies, so that we have higher densities and higher speeds? Also, wouldn't that allow room for more cores and cache?

Because that doesn't lower costs and increase margins.
With this last shrink we saw pretty much no gain (and in some cases losses) in cost efficiency, so with further shrinks they may have to wake the fuck up and start working on upping clock speeds, giving us a larger die with an entire butt of cores and cache, etc.

Re:Why are we worried about size? (1)

TechyImmigrant (175943) | about 3 months ago | (#47282761)

> entire butt of cores and cache
I checked my copy of Measure for Measure, but 'butt' doesn't appear anywhere as a unit.

BTU (energy) and buito (mass) are both close.
Bucket (volume) is semantically close, I suspect.

Re:Why are we worried about size? (1)

Anonymous Coward | about 3 months ago | (#47283145)

>I checked my copy of Measure for Measure, but 'butt' doesn't appear anywhere as a unit.

A butt is precisely 1/2 of a tun (sic).

(Yes, really. See http://en.wikipedia.org/wiki/Butt_(unit)#butt )

Re:Why are we worried about size? (1)

TechyImmigrant (175943) | about 3 months ago | (#47283333)

Thank you.

I guess it's not a sufficiently well regulated unit to make it into the engineering references.

Re:Why are we worried about size? (0)

Anonymous Coward | about 3 months ago | (#47282945)

With this last shrink we saw pretty much no gain (and in some cases losses) in cost efficiency, so with further shrinks they may have to wake the fuck up and start working on upping clock speeds, giving us a larger die with an entire butt of cores and cache, etc.

Can't up clock speed any more; the current leakage problem (and therefore heat) is already as bad as can be tolerated.
Can't make larger dies; they're already as large as is economically feasible.

tl;dr: we're screwed

Re:Why are we worried about size? (1)

sexconker (1179573) | about 3 months ago | (#47283435)

Dies are tiny.
Fuck power and heat. I have a desktop.

Re:Why are we worried about size? (0)

Anonymous Coward | about 3 months ago | (#47282579)

Why don't we use smaller architecture in larger dies, so that we have higher densities and higher speeds? Also, wouldn't that allow room for more cores and cache?

Heat?

This affects our entire industry (1)

default luser (529332) | about 3 months ago | (#47282597)

Because whatever you do in the computing world, you are affected by processing power and cost. Growth in these areas drives both new hardware and the new software to go with it, and any hit to growth will mean a loss of jobs.

Software (what most of us here create) usually gets created for one of two reasons:

1. Software is created because nobody is filling a need. Companies may build their own version if they want to compete, or a company may contract a customized version if they can see increased efficiency or just have a process they want to stick to. There used to be a lot of unfulfilled need out there, but this demand has largely been sated in the 21st century.

2. Software is created because a company desires increased performance/new features (the basic need is filled; this is a WANT). Once a new processor/feature becomes available, you either wedge it into existing code or, if it's a massive enough improvement, you create entirely new software enabled by the new level of performance-per-dollar.

Without continued growth, the industry is in danger of cratering because there's only so much processor architecture optimization you can do in the same process node, and the same goes for optimized libraries on the software side. In addition, brand-new industries enabled by cost reductions (e.g. the digital FMV explosion in the 1990s, or the movement to track your every move in the 2000s) will no longer be so common, and that will again force people to look elsewhere for employment.

Software engineers won't disappear, but they will be culled. The industry has not had to deal with that yet in its entire history, so it will be painful. I'm hoping they can hold this off for as long as possible!

Re:This affects our entire industry (1)

TechyImmigrant (175943) | about 3 months ago | (#47282695)

>Software (what most of us here create)

Really? A lot of us create hardware. We have an existential interest in the answer to TFA.

Re:This affects our entire industry (1)

default luser (529332) | about 3 months ago | (#47282859)

I figured the hardware effect was fairly obvious :D

I concentrated on the software side effects because more readers here work on that end.

Re:This affects our entire industry (1)

PRMan (959735) | about 3 months ago | (#47284157)

There are always new things to do and never enough people to do them. I for one will be surprised if developers have a culling in the next 20 years. There are too many other jobs to eliminate first.

CMOS scaling limited by process variation (4, Interesting)

Theovon (109752) | about 3 months ago | (#47283195)

There are a number of factors that affect the value of technology scaling. One major one is the increase in power density due to the end of supply and threshold voltage scaling. But one factor that some people miss is process variation (random dopant fluctuation, gate length and wire width variability, etc.).

Using some data from ITRS and some of my own extrapolations from historical data, I tried to work out when process variation alone would make further scaling ineffective. Basically, when you scale down, you get a speed and power advantage (per gate), but process variation makes circuit delay less predictable, so we have to add a guard band. At what point will the decrease in average delay become equal to the increase in guard band?

It turns out to be at exactly 5nm. The “disappointing” aspect of this (for me) is that 5nm was already believed to be the end of CMOS scaling before I did the calculation. :)

Incidentally, if you multiply out the guard bands already applied for process variation, supply voltage variation, aging, and temperature variation, we find that for an Ivy Bridge processor, about 70% of the energy going in is “wasted” on guard bands. In other words, if we could eliminate those safety margins, the processor would use 1/3.5 as much energy for the same performance or run 2.5 times faster in the same power envelope. Of course, we can’t eliminate all of them, but some factors, like temperature, change so slowly that you can shrink the safety margin by making it dynamic.
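A sketch of that multiplication in C. The per-source margins are assumptions chosen only so that the product lands near the ~3.5x combined factor described above.

#include <stdio.h>

/* The ~70% figure follows from multiplying the individual guard bands together.
   The per-source factors below are assumed, not measured for any real chip. */
int main(void) {
    double margins[] = { 1.50,   /* process variation (assumed) */
                         1.25,   /* supply-voltage droop (assumed) */
                         1.20,   /* aging / wear-out (assumed) */
                         1.55 }; /* worst-case temperature (assumed) */
    double total = 1.0;
    for (int i = 0; i < 4; i++)
        total *= margins[i];

    printf("combined energy margin: %.2fx\n", total);
    printf("energy 'wasted' on margins: %.0f%%\n", 100.0 * (1.0 - 1.0 / total));
    /* ~3.5x combined margin -> ~70% of energy spent covering worst cases
       that are rarely all present at once. */
    return 0;
}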

Guard bands sound like (0)

Anonymous Coward | about 3 months ago | (#47283603)

Guard bands sound like sloppy design.

Re:Guard bands sound like (1)

radarskiy (2874255) | about 3 months ago | (#47284159)

Guard bands are a rational engineering tradeoff, when confronted with the physical laws of random fluctuations on one hand and developing entirely new computational models on the other.

When a difference of one dopant atom creates a measurable change in device characteristics, you have to accept that it's past the point where just spending money can tighten up the tolerances. Sometimes it's just faster to overdesign the part than to re-invent mathematics, physics, and chemistry simultaneously.

Not even that small (0)

Anonymous Coward | about 3 months ago | (#47283379)

Given how long it's taken TSMC to get past the 28nm node, I'd be surprised if we even make it into the teens. The main problem seems to be heat dissipation. Fabricating the chips is a solvable problem that I think we will be able to overcome quite readily. Making these chips stable, viable, faster than what we have now, and cheaper or at least the same price, is an entirely different proposition altogether.

EUV not going to happen (2)

edxwelch (600979) | about 3 months ago | (#47283669)

An interesting article here describes the horrendously difficult challenges that face EUV:
https://www.semiwiki.com/forum... [semiwiki.com]

Memristance (2)

Suiggy (1544213) | about 3 months ago | (#47284085)

The problem is that memristance effects begin to manifest below 5nm.

Thus, start using memristors to build IMP-FALSE logic circuits.
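For anyone wondering what IMP-FALSE logic buys you: material implication plus the constant FALSE is functionally complete. A small sketch in C that just checks the identities (it does not model memristor electronics).

#include <stdio.h>

/* Material implication (IMP) plus the constant FALSE is functionally complete,
   which is why memristive IMP gates can in principle build any Boolean circuit.
   This only verifies the logical identities on all inputs. */
static int IMP(int p, int q) { return (!p) || q; }  /* p -> q */
static int FALSE0(void)      { return 0; }

int main(void) {
    for (int p = 0; p <= 1; p++) {
        for (int q = 0; q <= 1; q++) {
            int not_p   = IMP(p, FALSE0());          /* NOT p = p IMP 0          */
            int nand_pq = IMP(p, IMP(q, FALSE0()));  /* NAND  = p IMP (q IMP 0)  */
            printf("p=%d q=%d  NOT p=%d  NAND=%d\n", p, q, not_p, nand_pq);
        }
    }
    /* NAND alone is universal, so IMP + FALSE gives you everything. */
    return 0;
}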
