How Much Smaller Can Chips Go?

Soulskill posted more than 3 years ago | from the don't-call-me-tiny dept.

Intel

nk497 writes "To see one of the 32nm transistors on an Intel chip, you would need to enlarge the processor to beyond the size of a house. Such extreme scales have led some to wonder how much smaller Intel can take things and how long Moore's law will hold out. While Intel has overcome issues such as leaky gates, it faces new challenges. For the 22nm process, Intel faces the problem of 'dark silicon,' where the chip doesn't have enough power available to take advantage of all those transistors. Using the power budget of a 45nm chip, if the processor remains the same size, only a quarter of the silicon is exploitable at 22nm, and only a tenth is usable at 11nm. There's also the issue of manufacturing. Today's chips are printed using deep ultraviolet lithography, but it's almost reached the point where it's physically impossible to print lines any thinner. Diffraction means the lines become blurred and fuzzy as the manufacturing processes become smaller, potentially causing transistors to fail. By the time 16nm chips arrive, manufacturers will have to move to extreme ultraviolet lithography — which Intel has spent 13 years and hundreds of millions trying to develop, without success."
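
To put rough numbers on the 'dark silicon' claim, here is a back-of-the-envelope sketch in C (not from the article): it assumes transistor density grows with the inverse square of the feature size while per-transistor switching power stops shrinking, the post-Dennard worst case. The printed fractions land near the 'quarter' figure for 22nm; that the article says a 'tenth' rather than ~6% at 11nm suggests some residual per-transistor power scaling.

    /*
     * Back-of-the-envelope sketch (not from the article): estimate how much of a
     * die can be powered at once if transistor density keeps scaling as the
     * inverse square of the feature size while per-transistor switching power
     * stops scaling (the post-Dennard worst case).
     */
    #include <stdio.h>

    int main(void) {
        const double nodes[] = {45.0, 22.0, 11.0};   /* process nodes in nm */
        const double base = 45.0;                    /* power budget fixed at the 45nm level */

        for (int i = 0; i < 3; i++) {
            double density = (base / nodes[i]) * (base / nodes[i]); /* relative transistor count */
            double usable  = 1.0 / density;   /* worst case: per-transistor power stays constant */
            printf("%4.0fnm: %5.1fx transistors, usable fraction (worst case) = %.2f\n",
                   nodes[i], density, usable);
        }
        /* Prints ~0.24 at 22nm and ~0.06 at 11nm; the article's "a quarter" and
           "a tenth" suggest some, but not full, per-transistor power scaling. */
        return 0;
    }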

362 comments

FP (-1, Offtopic)

Anonymous Coward | more than 3 years ago | (#33242056)

first post

Don't make them smaller (5, Funny)

AhabTheArab (798575) | more than 3 years ago | (#33242062)

Make them bigger. More space to put stuff on them then, anyway. Tostitos Restaurant Style tortilla chips can fit much more guacamole and salsa on them than their bite-size chips. Bigger is better when it comes to chips.

Re:Don't make them smaller (1)

PatrickThomson (712694) | more than 3 years ago | (#33242082)

Distant parts of the chip then have a communication lag, but yes, this will really help. Certainly much less lag than communicating with something outside the die.

Re:Don't make them smaller (2, Insightful)

Anonymous Coward | more than 3 years ago | (#33242104)

It's not about communication lag, it's about cost. Price goes up with die area.

Plan the dark areas around the defects (2, Interesting)

grahamsz (150076) | more than 3 years ago | (#33242446)

Larger dies generally cost more because it's more likely that they'll have a defect. I haven't done any chip design since college (and even then it was really entry-level stuff), but if you could break the chip down into 10 different subcomponents that need to be spaced out, you could put 100 of those components on the chip, and then after manufacture you could select the blocks that perform best and are defect-free, spacing your choices accordingly.

I'm pretty sure chip makers already do something like this.
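
A rough way to quantify that redundancy idea, using the common Poisson yield model; the defect density and block area below are made-up illustrative numbers, and the block counts (10 needed, 100 placed) follow the example above.

    /*
     * Hypothetical numbers, just to illustrate the idea: a die that needs 10
     * good blocks. Design A is monolithic (exactly 10 blocks, all must work);
     * design B carries 100 copies and picks any 10 defect-free ones.
     * Per-block yield follows the simple Poisson model exp(-D0 * A_block).
     */
    #include <stdio.h>
    #include <math.h>

    /* P(X >= k) for X ~ Binomial(n, p), computed term by term */
    static double binom_tail(int n, int k, double p) {
        double term = pow(1.0 - p, n);  /* P(X = 0) */
        double sum  = 0.0;
        for (int x = 0; x <= n; x++) {
            if (x >= k) sum += term;
            /* ratio of successive binomial pmf terms */
            term *= (double)(n - x) / (double)(x + 1) * p / (1.0 - p);
        }
        return sum;
    }

    int main(void) {
        double d0 = 1.0;        /* defects per cm^2 (made up) */
        double a_block = 0.1;   /* block area in cm^2 (made up) */
        double p = exp(-d0 * a_block);              /* single-block yield ~0.90 */

        double monolithic = pow(p, 10);             /* all 10 blocks must be good */
        double redundant  = binom_tail(100, 10, p); /* any 10 of 100 good */

        printf("block yield %.3f, monolithic die yield %.3f, 100-block redundant design %.6f\n",
               p, monolithic, redundant);
        /* Redundancy makes the die yield essentially 1.0, at the cost of 10x the area. */
        return 0;
    }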

GPUs work kind of like this (3, Informative)

Sycraft-fu (314770) | more than 3 years ago | (#33242918)

Since they are so parallel they are made as a bunch of blocks. A modern GPU might be, say, 16 blocks, each with a certain number of shaders, ROPs, TMUs, and so on. When they are ready, they get tested. If a unit fails, it can be burned off the chip or disabled in firmware, and the unit can be sold as a lesser card. So the top card has all 16 blocks, the step down has 15 or 14 or something. Helps deal with cases where there's a defect but overall the thing works.

CPU caches also work like that (2, Informative)

imgod2u (812837) | more than 3 years ago | (#33243318)

Actually, it's pretty common practice to put spare arrays and spare cells in the design that aren't connected in the metal layers. When a chip is found defective, the upper metal layers can be cut and fused to form new connections and use the spare cells/arrays instead of the ones that failed by use of a focused ion beam.

But that still adds time and cost. Decreasing die area is pretty much always preferable. Also, larger dies mean even more of the chip's metal interconnects have to be devoted to power distribution.

Re:Don't make them smaller (0)

Anonymous Coward | more than 3 years ago | (#33242730)

Communication plays a massive part in modern processors. The bus line lengths cause sync issues we never had to worry about a few years ago. Faster clock ticks also cause issues with changing state. When signals are fast, the high/low voltages stop appearing as on/off and start to exhibit themselves as ramps. Add this to communication sync, and you have a real problem. This is why they don't make faster and faster processors like in years gone by; properties of physics get in the way of the desired engineering. It's far easier to put lots of cores onto a die than to make one screaming CPU.
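
For a feel of why edges turn into ramps, here is a small sketch using the usual 10-90% rise-time rule of thumb (about 2.2 RC); the resistance and capacitance are assumed values for a long, unrepeated on-chip wire, not figures for any real process.

    /*
     * Rough illustration: a wire's RC time constant sets the 10-90% rise time
     * (~2.2*R*C). The R and C values below are made up for a long, unrepeated
     * on-chip wire; real designs insert repeaters.
     */
    #include <stdio.h>

    int main(void) {
        double r = 1e3;      /* ohms   (assumed lumped wire resistance) */
        double c = 1e-12;    /* farads (assumed lumped wire capacitance) */
        double f = 3e9;      /* 3 GHz clock */

        double rise = 2.2 * r * c;     /* 10-90% rise time, seconds */
        double period = 1.0 / f;

        printf("rise time %.2f ns vs clock period %.2f ns\n",
               rise * 1e9, period * 1e9);
        /* 2.2 ns vs 0.33 ns: the receiver sees a ramp, not a clean level, which
           is why long buses need repeaters, pipelining, or slower clocks. */
        return 0;
    }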

Re:Don't make them smaller (4, Interesting)

ibwolf (126465) | more than 3 years ago | (#33242268)

Distant parts of the chip then have a communication lag, but yes, this will really help. Certainly much less lag than communicating with something outside the die.

Wouldn't that suggest that three-dimensional chips are the logical next step? Granted, heat dissipation would become more difficult, not to mention the fact that the production process would be an order of magnitude more complicated.

Re:Don't make them smaller (5, Informative)

TheDarAve (513675) | more than 3 years ago | (#33242354)

This is also why Intel has been investing so much into in-silicon optical interconnects. They can go 3D if they can separate the wafers far enough to put a heat pipe in between and still pass data.

Re:Don't make them smaller (0)

Anonymous Coward | more than 3 years ago | (#33242366)

Wouldn't that suggest that three-dimensional chips are the logical next step? Granted, heat dissipation would become more difficult, not to mention the fact that the production process would be an order of magnitude more complicated.

IBM has been working on that [arstechnica.com]. That story is two years old.

Re:Don't make them smaller (2, Insightful)

Xacid (560407) | more than 3 years ago | (#33242520)

Built-in Peltiers to draw the heat out of the center, perhaps?

Re:Don't make them smaller (1)

mlts (1038732) | more than 3 years ago | (#33242582)

I'd like to see more work with peltiers, but IIRC, they take a lot of energy to do their job of moving heat to one side, something that CPUs are already tight on.

Re:Don't make them smaller (4, Interesting)

rimcrazy (146022) | more than 3 years ago | (#33243092)

Making 3D chips is the holy grail of semiconductor processing, but it is still beyond reach. They've not been able to lay down a single-crystal second layer to make your stacked chip. They have tried using amorphous silicon, but the devices are nowhere near as good, so there is no point.

We are already seeing the upshot of all of this, as next year's machines are not necessarily 2x the performance at the same cost. I really think that money would be better spent helping all of you coders out there in creating a language/compiler programming paradigm that can use 12 threads efficiently for something beyond rendering GTA. I certainly don't have the answer, and given that that problem has not been solved yet, neither does anybody else at this time.

It's a very, very hard problem. It is going to be interesting here in the next few years. If nothing changes, you're going to have to become accustomed to the fact that next year's PC is going to cost you MORE, not less, and that's really going to suck.

Re:Don't make them smaller (1)

RulerOf (975607) | more than 3 years ago | (#33243252)

money would be better spent helping all of you coders out there in creating a language/compiler programming paradigm that can use 12 threads efficiently for something beyond rendering GTA.

From what I've heard, the number of cores you throw at GTA doesn't matter, it still runs like crap. ;)

Re:Don't make them smaller (0)

Anonymous Coward | more than 3 years ago | (#33243198)

Wouldn't that suggest that three dimensional chips be the logical next step.

No. It would suggest that massively parallel processors are the next logical step. You don't need every part of the chip communicating with every other part of the chip. In fact, probably more than 99% of the connectivity in modern architectures isn't being utilized. The smart design is the GPU design. It is time for a paradigm shift. We need to stop focusing on speed and start focusing on parallel processing. I am not talking 16 cores, or even 512 cores. I am talking 65K or 1M cores. Imagine the power of a 1M core machine. It would almost certainly begin to rival the power of the human brain.

Re:Don't make them smaller (0)

Anonymous Coward | more than 3 years ago | (#33242108)

I very nearly gave an EE response to why smaller and closer is better before realizing your statement was an obvious joke. My brain isn't working today.

Re:Don't make them smaller (0)

mrsteveman1 (1010381) | more than 3 years ago | (#33242410)

Have some guacamole, you'll feel better.

Oh no.... (1)

AnonymousClown (1788472) | more than 3 years ago | (#33242568)

First the car analogy. Then the pizza analogy. Now, the taco chip analogy.

At some point, we're going to see an argument that starts out with "It's like a Nazi eating a Tostito with ..."

At least a Tostito is a chip.

Re:Don't make them smaller (1)

Andrewkov (140579) | more than 3 years ago | (#33243030)

Yep, come on, people: work smarter, not smaller.

The Atoms (5, Interesting)

Ironhandx (1762146) | more than 3 years ago | (#33242080)

They're going to hit atomic-scale transistors fairly soon from what I can see as well. The manufacturing process for those is probably prohibitively expensive, but that is as small as they can go (according to our current knowledge of the universe, at least).

I can't imagine Intel has all of its eggs in one basket on Extreme Ultraviolet Lithography, though. Something that's been in development for even 5 years and doesn't show any concrete signs of success should at least have alternatives developed for it. After 5 years, if you still can't say for certain whether it's ever going to work, you definitely need to start looking in different directions.

Re:The Atoms (4, Funny)

Lunix Nutcase (1092239) | more than 3 years ago | (#33242140)

Something that's been in development for even 5 years and doesn't show any concrete signs of success should at least have alternatives developed for it.

You haven't followed much of the history of Itanium's development have you?

Re:The Atoms (1)

Ironhandx (1762146) | more than 3 years ago | (#33242308)

No, I really haven't. I tend not to pay much attention to things that are released more than 2 years after their original announced release date.

Though, I have to point out I didn't advocate terminating a project after 5 years of zero results (a la Itanium), just looking in additional directions and not keeping all the eggs in the questionable basket.

Re:The Atoms (1)

Lunix Nutcase (1092239) | more than 3 years ago | (#33242384)

You seem to miss the point. You imagine that Intel doesn't put all of its eggs in one basket. The development of Itanium disproves that notion, as they had no other real alternatives being developed at the same time.

Re:The Atoms (1)

grahamsz (150076) | more than 3 years ago | (#33242464)

Yeah, it's really hurt them. They've been wildly unprofitable since then.

Re:The Atoms (1)

Lunix Nutcase (1092239) | more than 3 years ago | (#33243326)

Because I made the claim that it either hurt Intel or made them unprofitable? Oh wait...

Re:The Atoms (0)

Anonymous Coward | more than 3 years ago | (#33242456)

It's people like you that caused Duke Nukem Forever's cancellation. They realized there were just too many of you stupid people who would not play the game just because it was delayed for around a decade.

Re:The Atoms (1)

mrsteveman1 (1010381) | more than 3 years ago | (#33242454)

You haven't followed much of the history of Itanium's development have you?

I saw the movie though, Leo dies at the end.

or WIMAX (0)

Anonymous Coward | more than 3 years ago | (#33242740)

or WiMAX

I'd say you haven't (4, Interesting)

Sycraft-fu (314770) | more than 3 years ago | (#33243028)

For one, Itanium is still going strong in high-end servers. It is a tiny market, but Itanium sells well (no, I don't know why).

However in terms of the desktop, you might notice something: When AMD came out with an x64 chip and everyone, most importantly Microsoft, decided they liked it and started developing for it, Intel had one out in a hurry. This doesn't just happen. You don't design a chip in a couple of months; it takes a long, long time. What this means is Intel had been hedging their bets. They developed an x64 chip (they have a license for anything AMD makes for x86, just as AMD has a license for anything they make) should things go that way. Things did go that way, and Intel ran with it.

Ran with it well, I might add, since now the top performing x64 chips are all Intel.

They aren't a stupid company, and if you think they are I'd question your judgment.

Re:The Atoms (1)

bill_mcgonigle (4333) | more than 3 years ago | (#33242822)

They're going to hit atomic scale transistors fairly soon from what I can see as well

Yeah, there was an article here in the spring on atomic computing, where I did a little math on it. I was surprised, but it worked out that in roughly a decade Moore's Law would get down to atomic transistors if reducing the part size was the method employed.

I had always presumed before that it would never run out, but it's going to have to zig sideways if that's going to be true.

Google recently bought that company working on packet-switched CPUs - I guess I really don't care at all about transistor count, just performance - alternatives to the superscalar approach would be fine too.

Re:The Atoms (1)

Griffon26 (709915) | more than 3 years ago | (#33243002)

The article calls it "a (...) partnership to develop EUV". That's not just product development.
5 years is not a long time at all if you include the research period (and I don't mean general research in the area, but focused research/feasibility studies).

Re:The Atoms (1)

alen (225700) | more than 3 years ago | (#33243096)

I remember reading a long time ago that 90nm or 65nm would be impossible due to physics and science.

Why do they need to? (4, Funny)

Revotron (1115029) | more than 3 years ago | (#33242116)

Why does Intel need to push the envelope that hard and that fast just to create a product that will, in the end, have extremely low yield and extremely high cost?

Just so they can adhere to some ancient "law" proposed by one of their founders? It's time to let go of Moore's Law. It's outdated and doesn't scale well... just like the x86 architecture! *ba-dum, chhh*

Re:Why do they need to? (4, Interesting)

mlts (1038732) | more than 3 years ago | (#33242212)

At the extreme, maybe it might be time for a new CPU architecture? Intel has been doing so much stuff behind the scenes to keep the x86 architecture going that it may be time to just bite the bullet and move to something that doesn't require as much translation.

Itanium comes to mind here because it offers a dizzying number of registers, both FPU and CPU, available to programs. To boot, it can emulate x86/amd64 instructions.

Virtual machine technology is coming along rapidly. Why not combine a hardware hypervisor and other technology so we can transition to a CPU architecture that was designed in the past 10-20 years?

Re:Why do they need to? (2, Insightful)

Andy Dodd (701) | more than 3 years ago | (#33242282)

The problem is that x86 has become so entrenched in the market that even its creator can't kill it off.

You even cited a perfect example of their last (failed) attempt to do so (Itanic).

Re:Why do they need to? (1, Insightful)

mlts (1038732) | more than 3 years ago | (#33242438)

Very true, but it eventually needs to be done. You can only get so big with a jet engine that is strapped onto a biplane. The underlying architecture needs to change sooner or later. As things improve, maybe we will get to a point where we have CPUs with enough horsepower to run emulated amd64 or x86 instructions at a decent speed. The benefits of doing this will be many. First, in assembly language, we will save a lot of instructions because programs will have enough registers to do their work in place, rather than keep shuttling data to and from RAM to complete a calculation. Having fewer accesses to and from RAM will speed up tasks immensely because register access is so much faster. Take a calculation that adds up a bunch of numbers: the numbers can be loaded into separate registers, added, and the result dropped back into RAM. With x86, it would take a lot of loads and stores to do the same thing.
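
As a concrete version of that summation example, here is plain C with four independent accumulators. Whether they actually stay in registers for the whole loop depends on the compiler and the target ISA, which is exactly the point: a register-rich ISA can keep them all resident, while a register-starved one spills some of them to memory every iteration.

    #include <stdio.h>
    #include <stddef.h>

    /* Sum an array using four independent partial sums. On a register-rich
       target the compiler can keep s0..s3 in registers and touch RAM only for
       the loads of a[] and the final result; the C itself is ordinary and
       portable. */
    long sum_array(const long *a, size_t n) {
        long s0 = 0, s1 = 0, s2 = 0, s3 = 0;   /* four accumulators */
        size_t i = 0;

        for (; i + 4 <= n; i += 4) {           /* unrolled by 4 */
            s0 += a[i];
            s1 += a[i + 1];
            s2 += a[i + 2];
            s3 += a[i + 3];
        }
        for (; i < n; i++)                     /* leftover elements */
            s0 += a[i];

        return s0 + s1 + s2 + s3;              /* single result written back */
    }

    int main(void) {
        long data[] = {1, 2, 3, 4, 5, 6, 7, 8, 9};
        printf("%ld\n", sum_array(data, 9));   /* prints 45 */
        return 0;
    }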

Re:Why do they need to? (3, Informative)

Anonymous Coward | more than 3 years ago | (#33243406)

We already have this. All current x86s have a decode unit to convert the x86 instructions to micro-ops in the native RISC instruction set.

Re:Why do they need to? (1)

Joce640k (829181) | more than 3 years ago | (#33243128)

Itanium failed because it used a VLIW architecture - great for specialized processing tasks on big machines, but for general-purpose computing (i.e. what 99.9% of people do) it wasn't much faster than x86.

Are computers really 'too slow' now? It seems to me that an x64 desktop at 3GHz is fast enough for just about anything a normal person would do. The only "normal task" I can think of that's too slow at the moment is decoding x264 video on netbooks and they're better off with a little hardware decoder tacked on than a mega-CPU upgrade.

Games are more constrained by RAM and GPU than CPU at the moment. RAM and GPUs are catching up fast and game logic is a good target for parallel processing - more cores is the way to go (by that I mean "will be cheaper/easier than making the existing cores faster").

For almost everything else, more cores and better software would give much more of a boost than making individual cores faster.

At the end of the day the deciding factor will be simple economics: If a process/factory costs $X then Intel has to sell $Y chips to justify it.

Expensive chips are going to be an increasingly hard sell. I can get a decent quad core for $100 now and soon it will be more like $50.

If only 0.005% of people are buying the $1000 chips then the factories aren't going to be built.

Re:Why do they need to? (1)

Lunix Nutcase (1092239) | more than 3 years ago | (#33242310)

Itanium comes to mind here because it offers a dizzying amount of registers, both FPU and CPU available to programs.

And it's been such a smashing success in comparison to x86, right?

Re:Why do they need to? (4, Insightful)

mlts (1038732) | more than 3 years ago | (#33242484)

x86 and amd64 have an installed base. Itanium doesn't. This doesn't mean x86 is any better than Itanium, any more than Britney Spears is better than $YOUR_FAVORITE_BAND just because Britney has sold far more albums.

Intel has done an astounding job at keeping the x86 architecture going. However, there is only so much lipstick you can put on a 40-year-old pig.

Re:Why do they need to? (0)

the_fat_kid (1094399) | more than 3 years ago | (#33242810)

"However, there is only so much lipstick you can put on a 40 year old pig."

Hey, you insensitive clod, that's my wife!

Re:Why do they need to? (1, Troll)

pitchpipe (708843) | more than 3 years ago | (#33243296)

"However, there is only so much lipstick you can put on a 40-year-old pig."

Hey, you insensitive clod, that's my wife!

Sarah Palin is your wife!?

Re:Why do they need to? (0)

Anonymous Coward | more than 3 years ago | (#33243082)

>However, there is only so much lipstick you can put on a 40-year-old pig.

The same might be said of UNIX. Or of more recent clones of said 40-year-old porcine.

Re:Why do they need to? (1)

ibwolf (126465) | more than 3 years ago | (#33242314)

You are right in that a new architecture could offer improved performance; however, it is a one-shot deal. Once you've rolled out the new architecture, there will be a short period while everything catches up, and then you are right back to cramming more on the die.

Re:Why do they need to? (1)

psbrogna (611644) | more than 3 years ago | (#33243188)

The point of a new architecture would be for it NOT to be a one-shot deal: it would give you ample room for evolution before hitting physical limitations, at least for a few "generations". The problem with stepping sideways is the risk. You don't have to look too far on /. to find other examples of civilization being irrationally tied to a legacy it's unwilling to walk away from, even if by doing so it accepts mediocre technology.

Ummm... it's called x64. (0, Troll)

Joce640k (829181) | more than 3 years ago | (#33242570)

Intel and AMD have both been producing it for a number of years now.

Re:Ummm... it's called x64. (0)

Anonymous Coward | more than 3 years ago | (#33243012)

x64 is just a set of extensions to x86. Its actual architecture name is x86_64. It is still fundamentally x86, just with a few modifications to add on 64-bit support.

Re:Why do they need to? (1)

Twinbee (767046) | more than 3 years ago | (#33243018)

And if we did, are we talking about 2x speed returns very roughly, or even up to 20x?

Would it really help though? (1)

Sycraft-fu (314770) | more than 3 years ago | (#33243152)

It seems to be almost an article of faith with geeks that if only we didn't have that nasty x86 we could have so much better chips. However, the thing is, there ARE non-x86 chips out there. Intel and AMD may love it; others don't. You can find other architectures. So then, where's the amazing chip that kicks the crap out of Intel's chips? I mean something that is faster, uses the same or less power, and costs less to produce (it can be sold for more, but the fab costs have to be less). Where is the amazing chip that uses a bunch less silicon but does the same work?

You can't find one, and there's a reason for that. The instruction set of the chip is just not such a big deal these days. For a lot of reasons you can have many kinds of instruction sets, and it doesn't really increase the complexity of the chip by a lot, nor mess with performance.

I'd love to be shown that I'm wrong about this. I'd love to see a desktop CPU (remember, that's what we are talking about here, not embedded stuff) that can outperform an i7 with less silicon and less power, but I haven't, and I don't think I will.

Re:Why do they need to? (3, Informative)

imgod2u (812837) | more than 3 years ago | (#33243376)

Because nowadays the ISA has really very little impact on resulting performance. The total die space devoted to translating x86 instructions on a modern Nehalem is tiny compared to the rest of the chip. The only time the ISA decode logic matters is for very low-power chips (smartphones). This is part of the reason why ARM is so far ahead of Intel's x86 offerings in that area.

Modern x86, with SSE and x86-64, is actually not that bad of an ISA and there aren't too many ugly workarounds necessary anymore that justify a big push to change.

Re:Why do they need to? (0)

Anonymous Coward | more than 3 years ago | (#33242296)

Less power and more speed can still be a money maker to offset the research and production costs.

Re:Why do they need to? (1)

timeOday (582209) | more than 3 years ago | (#33242318)

Yeah, progress is such a hassle.

Re:Why do they need to? (1)

maxume (22995) | more than 3 years ago | (#33242372)

Each time they shrink the die size, they reduce the amount of silicon in the chip, lowering the cost of the silicon in the chip and thus the cost of the chip.

If they had not successfully increased yield on past technology generations, do you think chips today would still be cheaper and faster?
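
The cost argument can be sketched with the standard cost-per-good-die relation and a simple Poisson yield model; every number below (wafer cost, defect density, die areas) is made up for illustration.

    /*
     * Sketch of why shrinking cuts cost: cost per good die ~ wafer cost /
     * (dies per wafer * yield), with yield from the simple Poisson model
     * exp(-D0 * A). All numbers below are made up for illustration.
     */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double wafer_cost = 5000.0;                /* dollars per 300mm wafer (assumed) */
        double wafer_area = 3.14159 * 15.0 * 15.0; /* cm^2 for a 300mm wafer */
        double d0 = 0.5;                           /* defects per cm^2 (assumed) */
        double areas[] = {2.0, 1.0};               /* die area before and after a shrink, cm^2 */

        for (int i = 0; i < 2; i++) {
            double dies  = wafer_area / areas[i];  /* ignores edge losses */
            double yield = exp(-d0 * areas[i]);
            printf("die %.1f cm^2: ~%.0f dies/wafer, yield %.2f, cost per good die $%.2f\n",
                   areas[i], dies, yield, wafer_cost / (dies * yield));
        }
        /* Halving the die area more than halves the cost per good die, because
           both dies-per-wafer and yield improve at once. */
        return 0;
    }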

Re:Why do they need to? (2, Insightful)

T-Bone-T (1048702) | more than 3 years ago | (#33242418)

Moore's Law describes increases in computing power; it does not prescribe them.

Depends on your definition of "chip" (0)

cormander (1273812) | more than 3 years ago | (#33242146)

Referencing science fiction: Star Trek's Voyager was the first ship to utilize bio-neural computer technology. I imagine that the cells in the gel packs are smaller than any chip that the Enterprise-D had. I would consider the cells in bio-neural computer technology to be "chips," and they exist in our brains. We just don't know yet how to harness them. So yes, smaller computer chips are possible.

Re:Depends on your definition of "chip" (0)

Anonymous Coward | more than 3 years ago | (#33242634)

Returning from lah-lah trekkie fuckwit land for a moment... the cells of your brain are not smaller than current processors.

Maybe we will start seeing more cores? (1, Interesting)

mlts (1038732) | more than 3 years ago | (#33242150)

I have a feeling that once doing smaller and smaller lines becomes prohibitive, we will see a return to either revving up the clock speed (if possible) or adding more cores per die. Maybe even adding more discrete CPUs, so a future motherboard may have multiple CPUs on it, similar to how mid- to upper-range PCs ended up with multiple procs around 2000-2001.

There are always more ways to keep Moore's Law going if one avenue gets nearly exhausted.

Re:Maybe we will start seeing more cores? (5, Funny)

Anonymous Coward | more than 3 years ago | (#33242398)

You have an uncanny ability to predict the present!

Re:Maybe we will start seeing more cores? (0)

Anonymous Coward | more than 3 years ago | (#33242540)

>There are always more ways

You are extremely short-sighted. Growth is exponential only at the very beginning of natural processes. Every process reaches a plateau; that's simply how the universe works.

Re:Maybe we will start seeing more cores? (4, Insightful)

phantomfive (622387) | more than 3 years ago | (#33242600)

It has always been about making it smaller. Clock speed was able to increase because the chips got smaller. We were able to add more cores per die because the chips got smaller. Moore's law is about size: it doesn't say computers will get faster, it says they will get smaller.

What we are able to do with the smaller chips is what's changed. Raising the clock speed worked for years, and that is the best option, but because of physical problems, in the latest generations we weren't able to do that. So the next best thing is to add cores. Now the article is suggesting we may not even be able to do that anymore.

I will tell you I've been reading articles like this for as long as I've known what a computer was, so if you're a betting man, you would do well to bet against this type of article every time you read it. But in theory it has to end somewhere, unless we learn how to make subatomic particles, which presumably is outside the reach of the research budget at Intel.

Re:Maybe we will start seeing more cores? (3, Insightful)

Abcd1234 (188840) | more than 3 years ago | (#33242984)

Well done, you've just described... today!

And today, we already know the problem with this approach: most everyday problems aren't easily parallelizable. Yes, there are specific areas where the problems are sometimes embarrassingly parallel (some scientific/number crunching applications, graphics rendering, etc), but generally speaking, your average software problem is unfortunately very serial. As such, those multiple cores don't provide much benefit for any single task. So if you want to execute one of these problems faster, the only thing you can do is ramp up the clock rate.
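
Amdahl's Law puts numbers on that: if only a fraction p of a task can run in parallel, n cores give a speedup of 1 / ((1 - p) + p/n). A quick sketch with assumed parallel fractions:

    /*
     * Amdahl's Law sketch: speedup on n cores when only a fraction p of the
     * work is parallelizable. The fractions below are made up to show how
     * quickly extra cores stop helping for mostly serial problems.
     */
    #include <stdio.h>

    static double amdahl(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main(void) {
        double fractions[] = {0.50, 0.90, 0.99};  /* parallelizable share of the work */
        int cores[] = {2, 4, 12, 1024};

        for (int i = 0; i < 3; i++) {
            for (int j = 0; j < 4; j++)
                printf("p=%.2f, %4d cores: %5.1fx   ", fractions[i], cores[j],
                       amdahl(fractions[i], cores[j]));
            printf("\n");
        }
        /* With p=0.50, even 1024 cores top out just under 2x, which is the
           "very serial" problem described above. */
        return 0;
    }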

Clock speed is a no-go (1, Informative)

Anonymous Coward | more than 3 years ago | (#33243236)

With greater clock speed come greater heat dissipation needs (most heat is generated during switching); they have basically hit this wall already, hence the multi-core direction everyone is taking (can't go faster, so let's just go the same speed, but in parallel).
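
The arithmetic behind that wall is the usual dynamic-power relation, roughly P ~ C * V^2 * f, with higher frequency usually demanding higher voltage. The voltage/frequency pairs below are assumed round numbers, not measurements of any real part:

    /*
     * Why "just clock it higher" stopped working: dynamic switching power
     * scales roughly as C * V^2 * f, and higher f usually needs higher V.
     * The figures below are illustrative, not measured.
     */
    #include <stdio.h>

    int main(void) {
        double c = 1.0;                      /* switched capacitance, arbitrary units */
        double base_v = 1.0, base_f = 3.0;   /* 3 GHz at 1.0 V (assumed) */
        double fast_v = 1.3, fast_f = 4.5;   /* 4.5 GHz needing 1.3 V (assumed) */

        double p_base = c * base_v * base_v * base_f;
        double p_fast = c * fast_v * fast_v * fast_f;
        double p_dual = 2.0 * p_base;        /* two cores at the base operating point */

        printf("1 core @4.5GHz: %.2fx power, 2 cores @3GHz: %.2fx power (vs 1 core @3GHz)\n",
               p_fast / p_base, p_dual / p_base);
        /* ~2.5x power for 1.5x frequency vs 2.0x power for (ideally) 2x throughput:
           the arithmetic behind the move to multi-core. */
        return 0;
    }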

Re:Maybe we will start seeing more cores? (0)

Anonymous Coward | more than 3 years ago | (#33243490)

There is no financial incentive to create scalable parallel processors. They would essentially be putting themselves out of business.

I miss the pressure AMD used to put on Intel (1)

xxxJonBoyxxx (565205) | more than 3 years ago | (#33242154)

I miss the pressure AMD used to put on Intel. When Intel had an agile competitor often leaping ahead of it, chip speeds shot up like a rocket - seems like they've been resting on their laurels lately...

Re:I miss the pressure AMD used to put on Intel (1)

localman57 (1340533) | more than 3 years ago | (#33242244)

Really? I got a Core i5-750 in January, and I have been happier with it for the money than any chip I've ever had.

Re:I miss the pressure AMD used to put on Intel (5, Insightful)

Revotron (1115029) | more than 3 years ago | (#33242270)

The latest revision of my Phenom II X4 disagrees with you. The Phenom II series is absolutely steamrolling over every other Intel product in its price range.

Hint: Notice I said "in its price range." Because not everyone prefers spending $1300 on a CPU that's marginally better than one at $600. It seems like Intel has stepped away from the "chip speed" game and stepped right into "ludicrously expensive".

Re:I miss the pressure AMD used to put on Intel (2, Interesting)

Lunix Nutcase (1092239) | more than 3 years ago | (#33242356)

The only Intel chips that are $1000+ are those that are either a few months old and/or are of the "Extreme" series. The core i7-860s and 930s are under 300 bucks and pretty much the entire core i5 line is at 200 or less.

Re:I miss the pressure AMD used to put on Intel (1)

PitaBred (632671) | more than 3 years ago | (#33242548)

The problem is that the Intel motherboards are more expensive, and they lock you into your chip "class". You can't upgrade to an i7 from an i5 in some cases.

Re:I miss the pressure AMD used to put on Intel (1)

amazeofdeath (1102843) | more than 3 years ago | (#33242628)

The price difference is negligible between AMD and Intel boards, unless you are entering the race to the bottom, where AMD rules. You also can't upgrade from an AM2 to an AM3 CPU on an AM2 board. The talk about upgrading is meaningless in a broader sense too: why would you buy something suboptimal just so that you can upgrade it later? It's false economy; get the best you can afford now, and a whole new rig with whole new tech a few years later.

Re:I miss the pressure AMD used to put on Intel (4, Informative)

Rockoon (1252108) | more than 3 years ago | (#33242922)

What are you talking about? AM2 boards support AM3 chips.

You also present a false dichotomy, because upgrading isn't ONLY about buying suboptimal hardware and then upgrading it later. Anyone who purchased bleeding-edge AM2 gear when it was introduced can get a BIOS update and then socket an AM3 Phenom II chip. They still only have DDR2, but amazingly Phenom IIs support both DDR2 on AM2 and DDR3 on AM3.

So that guy who purchased a dual-core AM2 Phenom when they were cutting edge can now socket a hexa-core AM3 Phenom II.

It's amazing what designing for the future gives your customers. Intel users have only rarely had the chance to substantially upgrade CPUs.

Re:I miss the pressure AMD used to put on Intel (1)

amazeofdeath (1102843) | more than 3 years ago | (#33243102)

You probably mean AM2+ boards. Not all AM2 boards support AM3 CPUs; feel free to check the manufacturers' sites.

As for the false dichotomy part, you set up another one in your example, too. In the last few years (the AM2 and AM3 era), the quad cores haven't been too expensive compared to the dual cores. Your example user made the wrong choice when buying the dual core in the first place; the combined price of the dual- and hexa-core CPUs would have given him/her a nice time in multithreaded apps for the whole duration.

Re:I miss the pressure AMD used to put on Intel (0)

Anonymous Coward | more than 3 years ago | (#33243228)

You also can't upgrade from an AM2 to an AM3 CPU on an AM2 board.

You can -- so long as the board BIOS is properly updated, which many respectable board manufacturers do. For example, my AM2 (not AM2+) board from 2006, which I bought with an Athlon X2 3600+, had no problem with last year's Phenom II X3 720. You just need to upgrade to the latest BIOS and make sure it supports the AM3 CPU you are upgrading to.

The board vendor's homepage should have a CPU compatibility list for all boards, including which BIOS version you need for each supported CPU.

Re:I miss the pressure AMD used to put on Intel (1)

amazeofdeath (1102843) | more than 3 years ago | (#33243418)

Sorry, like I later mentioned, it's board-specific. Care to give the board model?

Re:I miss the pressure AMD used to put on Intel (1)

amazeofdeath (1102843) | more than 3 years ago | (#33242546)

So you haven't really done any research there? Intel's i5 750 and 760 "steamroll" all the Phenom II X4 CPUs in the price range. Don't trust me, trust benchmarks.

Re:I miss the pressure AMD used to put on Intel (1)

Rockoon (1252108) | more than 3 years ago | (#33242970)

So you haven't really done any research there? Intel's i5 750 and 760 "steamroll" all the Phenom II X4 CPUs in the price range. Don't trust me, trust benchmarks.

Phenom II X6 chips with Turbo Core in the same price range would like to have a word with you about your cherry-picking old X4 chips.

Re:I miss the pressure AMD used to put on Intel (1)

amazeofdeath (1102843) | more than 3 years ago | (#33243178)

In what uses? X6 CPUs don't really deliver compared to i5, except in uses where you can really blast out all the cores, like vid encoding with certain programs.

And the OP especially was claiming that his/her *Phenom II X4* beats everything Intel has to offer in its price range, which is blatantly false. LTR.

Re:I miss the pressure AMD used to put on Intel (0)

Anonymous Coward | more than 3 years ago | (#33243300)

I don't know about this... My last two Phenom II purchases were X4 940s at $108 shipped. What at ~$110 from Intel steamrolls this?

Re:I miss the pressure AMD used to put on Intel (1)

amazeofdeath (1102843) | more than 3 years ago | (#33243392)

So, how do you define "price range"? Is that the exact price?

Re:I miss the pressure AMD used to put on Intel (0)

Anonymous Coward | more than 3 years ago | (#33242552)

Depends on the task. Some friends and I make videos, and I encode the results to AVCHD using x264. At that task, the Core i7 is 50% faster than a Phenom II at the same clock speed. If it weren't for the video encoding I'd definitely buy AMD, but when I bought my current PC in January the Core i7 860 was an obvious choice, since I got a lot of extra performance for a little extra money, and today it's as fast as an equally clocked Phenom II X6.

Didn't we learn about unbreakable limits (0)

Anonymous Coward | more than 3 years ago | (#33242162)

When all those people died because steam engines could go faster than 25mph. And no aircraft has ever broken the sound barrier.

All these so-called rules and laws are meant to be broken.

And the fact that Intel can't make something work after 13 years means nothing. They will surely make up for that with all the profits they make in the discrete graphics business.

This question (3, Interesting)

bigspring (1791856) | more than 3 years ago | (#33242184)

I think there has been a major article asking this question every six months for the last decade. Then: surprise, surprise, there's a new tech development that improves the technology. We've been "almost at the physical limit" for transistor size since the birth of the computer; why will it be any different this time?

Re:This question (4, Insightful)

localman57 (1340533) | more than 3 years ago | (#33242300)

why will it be any different this time?

Because sooner or later, it has to be. You reach a breaking point where the new technology is sufficiently different from the old that they don't represent the same device anymore. I think you'd have to be crazy to think that we're approaching the peak of our ability to solve computational problems, but I don't think it's unreasonable to think that we're approaching the limit of what we can do with this technology (transistors).

Re:This question (0)

Anonymous Coward | more than 3 years ago | (#33242636)

Ah, you must be an economist. You see, here in the real world, we're bound by the laws of physics. The laws of physics do not bend to Moore's Law; it's the other way around.

Re:This question (1)

MozeeToby (1163751) | more than 3 years ago | (#33242972)

Eventually there's a theoretical limit, a limit that can't be exceeded without violating the laws of physics, specifically quantum mechanics. Once your transistors get close enough together, the probability of an electron tunneling from one side to the other gets high enough that it isn't possible to distinguish between your on and off states. We are rapidly approaching that limit even if all the manufacturing issues can be overcome (I believe it's somewhere around 5nm, but I could be wrong).
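
For an order-of-magnitude feel, here is the standard rectangular-barrier tunneling estimate, T ~ exp(-2 d sqrt(2 m E_b) / hbar); the 1 eV barrier height is an assumed round number rather than a real device parameter:

    /*
     * Order-of-magnitude sketch of the tunneling limit, using the standard
     * rectangular-barrier estimate T ~ exp(-2*d*sqrt(2*m*E_b)/hbar).
     * The 1 eV barrier height is an assumed round number.
     */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        const double hbar = 1.0546e-34;       /* J*s */
        const double m_e  = 9.109e-31;        /* electron mass, kg */
        const double e_b  = 1.0 * 1.602e-19;  /* barrier height: 1 eV in joules */
        double widths_nm[] = {5.0, 2.0, 1.0, 0.5};

        double kappa = sqrt(2.0 * m_e * e_b) / hbar;  /* decay constant, 1/m */
        for (int i = 0; i < 4; i++) {
            double d = widths_nm[i] * 1e-9;
            printf("barrier %.1f nm: tunneling probability ~ %.1e\n",
                   widths_nm[i], exp(-2.0 * kappa * d));
        }
        /* The probability climbs by orders of magnitude per nanometre removed,
           which is why leakage explodes as barriers approach a few nm. */
        return 0;
    }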

Planck's Law (4, Funny)

cosm (1072588) | more than 3 years ago | (#33242208)

Well I can say with absolute certainty that they will not go below the Planck length.

Re:Planck's Law (0)

Anonymous Coward | more than 3 years ago | (#33242284)

That's what you say now. Great hubris, our scientists have!

Re:Planck's Law (1)

imgod2u (812837) | more than 3 years ago | (#33243204)

*For classical computation

Go? (-1, Troll)

Anonymous Coward | more than 3 years ago | (#33242224)

Well since they can not walk or move in any shape or form they are not going to go anywhere (stupid US-Americans).
Mhhh, now that I think about it. Maybe with Salt and Vinegar and inside one of your fatsos, that may work!

Quantum Computing (0)

Anonymous Coward | more than 3 years ago | (#33242302)

Perhaps as we get closer to these physical limits of classical computing we'll start to see more and more money invested in quantum computing research.

Re:Quantum Computing (2, Insightful)

psbrogna (611644) | more than 3 years ago | (#33243338)

I'd settle for less bloatware. Back in the day, amazing things were done with extremely limited CPU resources by programming closer to the wire. Now we have orders of magnitude more resources, but most programming is done at a very high level with numerous layers of inefficiency, which negates, possibly more than negates, the benefits of increased CPU resources. Yes, yes, I wax a little "in my day / uphill both ways, etc.", but do the benefits of high-level programming and efficient use of resources have to be mutually exclusive?

Nothing new here, move along... (1)

axafg00b (398439) | more than 3 years ago | (#33242654)

Actually, the issue of decreasing linewidths has been a major concern ever since UV lithography came into play. The progress is really amazing. It was a big deal in the late '80s to get under a micron; now there is consistent production at 32nm. There have been research programs investigating X-ray lithography and electron-beam lithography, but I don't think any of these have panned out for mass production. Now, another concern is electron leakage at these tinier linewidths. Sure, high-k materials help, but there is still some loss.

Re:Nothing new here, move along... (1)

EmagGeek (574360) | more than 3 years ago | (#33242714)

There was a lot of electron beam lithography research going on when I was in grad school. One of the major hurdles was that every time someone slammed a door somewhere else in the building, it would vibrate the fixtures enough to ruin the device, despite the best efforts to shield the fixture (and the building, and the room, and the stand, and everything else) from vibration.

This is what reversible computing is for, right? (2, Interesting)

TimFreeman (466789) | more than 3 years ago | (#33242680)

The article mentions "dark transistors", which are transistors on the chip that can't be powered because you can't get enough power onto the chip. This is the problem that reversible [theregister.co.uk] computing [wikipedia.org] was supposed to solve.

Re:This is what reversible computing is for, right (2, Insightful)

imgod2u (812837) | more than 3 years ago | (#33243264)

People have been proposing circuits for regenerative switching (mainly for clocking) for a long, long time. The problem has always been that if you add an inductance to your circuit to store and feed back the energy, you significantly decrease how fast you can switch.

Also, you think transistors are difficult to build in small sizes? Try building tiny inductors.

3D chips to keep the scale going (1)

RichMan (8097) | more than 3 years ago | (#33242694)

Current technology is based on a single planar layer of silicon substrate. A chip is built with a metal interconnect on top, but the base layers are essentially a 2D structure. We are already post-processing things with through-silicon vias to stack substrates into a single package. That increases density from the package perspective.
Increasing sophistication in stacking will keep Moore's Law going for another decade (as long as you consider Moore's Law to be referencing density in 2D).

"Extreme Ultraviolet" (1)

markbark (174009) | more than 3 years ago | (#33242900)

because "X-rays" is such an UGLY word....

Obviously, one transistor per atom (1)

Surt (22457) | more than 3 years ago | (#33243308)

That's how small they can go. Beyond that, increasing the functional density of our CPUs will get really challenging.

Sci Fi Story about that (1)

ch-chuck (9622) | more than 3 years ago | (#33243316)

Sort of off topic, but there was a science fiction story about this scientist who created a potion that could make him smaller, and he just kept shrinking and shrinking, and all the different worlds he went through each time, atoms turned into solar systems, and he just kept going down, down, down into infinite smallness. The story is here [blogspot.com].

Better software (5, Insightful)

Andy_w715 (612829) | more than 3 years ago | (#33243368)

How about writing better software? Stuff that doesn't require 24 cores and 64GB of RAM.