
Why Intel Leads the World In Semiconductor Manufacturing

Soulskill posted more than 2 years ago | from the nobody-else-seems-to-want-to dept.


MrSeb writes "When Intel launched Ivy Bridge last week, it didn't just release a new CPU — it set a new record. By launching 22nm parts at a time when its competitors (TSMC and GlobalFoundries) are still ramping their own 32/28nm designs, Intel gave notice that it's now running a full process node ahead of the rest of the semiconductor industry. That's an unprecedented gap and a fairly recent development; the company only began pulling away from the rest of the industry in 2006, when it launched 65nm. With the help of Mark Bohr, Senior Intel Fellow and the Director of Process Architecture and Integration, this article explains how Intel has managed to pull so far ahead."


226 comments


What's the mystery? (5, Funny)

Anonymous Coward | more than 2 years ago | (#39865785)

Andy Grove paid billions to get access to Area 51 alien technology back in 1998. What's so hard to understand?

Re:What's the mystery? (4, Funny)

Chrisq (894406) | more than 2 years ago | (#39865857)

Andy Grove paid billions to get access to Area 51 alien technology back in 1998. What's so hard to understand?

Ah, it's that chip from the android that came from the future. What could possibly go wrong?

Re:What's the mystery? (2)

WrongSizeGlass (838941) | more than 2 years ago | (#39866243)

Andy Grove paid billions to get access to Area 51 alien technology back in 1998. What's so hard to understand?

Ah, it's that chip from the android that came from the future. What could possibly go wrong?

It was the chip used by the mother ship in Independence Day that could run the virus from Goldblum's PowerBook. It already had cross-platform virtualization technology and was years ahead of its time.

Maybe there's an Area 52 in Tel Aviv (1)

Taco Cowboy (5327) | more than 2 years ago | (#39865917)

Remember the Pentium M?

Intel had to rely on the Pentium M to pull itself out of that big sinkhole back then.

How come Apple (-1)

Anonymous Coward | more than 2 years ago | (#39865799)

doesn't have any of this advanced stuff Intel has, yet sells so well and makes so much money?

Apple is not a semiconductor company (4, Informative)

tanveer1979 (530624) | more than 2 years ago | (#39865819)

Apple is a product company. It designs its products, and then someone else makes them. Many components, like the processor, are third party, and companies like Apple design a system around them.
After that the design goes to Samsung, and it's manufactured by Samsung. I think Samsung uses TSMC's fabs.

So if Apple wanted to have a 22nm chip it could:
1. Build a fab (invest many billions).
2. Pay TSMC and partner with them on tech (invest some billions).

Return on investment may not justify the cost.

As you go smaller, you do gain an area and cost advantage, but you also run into a lot of issues related to physics. So 28->22nm is not easy, and it's really commendable that Intel has done it.
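To put rough numbers on that area advantage, here is a back-of-the-envelope C sketch (an editorial illustration, not fab data): the 100mm^2 starting die is an assumed example, and the dies-per-wafer formula is the common textbook approximation.

<ecode>
/* Ideal area scaling of a 28nm -> 22nm shrink, and what it does
 * to dies per wafer. The 100 mm^2 die is a made-up example. */
#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Common approximation for usable dies on a round wafer of
 * diameter d_mm, for a roughly square die of area_mm2. */
static double dies_per_wafer(double d_mm, double area_mm2)
{
    double r = d_mm / 2.0;
    return M_PI * r * r / area_mm2 - M_PI * d_mm / sqrt(2.0 * area_mm2);
}

int main(void)
{
    double area_28 = 100.0;                         /* hypothetical die, mm^2 */
    double shrink = (22.0 * 22.0) / (28.0 * 28.0);  /* ideal area factor, ~0.62 */
    double area_22 = area_28 * shrink;

    printf("ideal 28nm->22nm area factor: %.2f\n", shrink);
    printf("dies per 300mm wafer at 28nm: %.0f\n", dies_per_wafer(300.0, area_28));
    printf("dies per 300mm wafer at 22nm: %.0f\n", dies_per_wafer(300.0, area_22));
    return 0;
}
</ecode>

In this idealized model the shrink alone buys roughly 1.6x as many dies per wafer, which is the cost advantage being described; real scaling is always worse than ideal.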

Re:Apple is not a semiconductor company (2)

Intrepid imaginaut (1970940) | more than 2 years ago | (#39865953)

Indeed, there's a reason fabs are seen as national treasures.

Re:Apple is not a semiconductor company (1)

Kjella (173770) | more than 2 years ago | (#39866083)

As you go smaller, you do gain an area and cost advantage, but you also run into a lot of issues related to physics. So 28->22nm is not easy, and it's really commendable that Intel has done it.

There's also the limited competition to TSMC: since nobody but Intel has access to Intel's plants (partnering with FPGA companies doesn't count), the rest of the market "has to" go with them even if they're behind Intel. Their competition is GloFo and UMC, neither of which is impressing much. So what if Intel has 22nm? AMD still has to buy from TSMC. nVidia still has to buy from TSMC. Apple still has to buy from TSMC. They simply don't feel the pressure that their customers do; they sell and make a profit anyway. Meanwhile Intel is shipping a 160mm^2 IB to compete with a 315mm^2 Bulldozer, for a huge cost advantage.

Re:Apple is not a semiconductor company (1)

Anonymous Coward | more than 2 years ago | (#39866227)

I think Samsung uses TSMC's fabs.

Samsung makes its own chips.

Re:Apple is not a semiconductor company (1)

moss45 (2543890) | more than 2 years ago | (#39866273)

Apple is not a semiconductor company

They've bought both Anobit and P.A. Semi; doesn't that make them a semiconductor company?

Apple already co-invests in building fabrication plants through extremely large pre-purchases, and some analysts think Apple is already directly funding Samsung's A6 fab. You are right about Apple being a product company; they don't want to fund these factories. But they are forced to, because that is the current price of staying ahead of the competition in the phone industry.

These fabs aren't an optional investment; they are the cost of doing business.

Re:Apple is not a semiconductor company (2, Informative)

Anonymous Coward | more than 2 years ago | (#39866811)

Anobit and P.A. Semi are fabless companies. They design chips but do not manufacture them.

Re:Apple is not a semiconductor company (5, Informative)

rimcrazy (146022) | more than 2 years ago | (#39866431)

Let me say a few words here, as I worked in the semiconductor industry for over 28 years. So that you fully understand just what it means to build a semiconductor foundry these days, here is a thought experiment I worked through a few years back.

1) You want to build a facility for manufacturing a widget.
2) That facility will cost you between 3 and 5 billion dollars.
3) In order to justify the ROI on that facility, you need to take at least 5% total worldwide market share for that widget.
4) You get to scrap your factory in 3 years.
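A minimal C sketch of the arithmetic behind this thought experiment: the $3-5B cost and 3-year life come from the list above, while the 50% gross margin and the $50B annual market are illustrative assumptions.

<ecode>
/* Rough fab-ROI arithmetic. Only the fab cost and lifetime come
 * from the comment; the margin and market size are assumed. */
#include <stdio.h>

int main(void)
{
    double fab_cost = 4e9;      /* midpoint of the $3-5B range */
    double life_years = 3.0;    /* "you get to scrap it in 3 years" */
    double gross_margin = 0.50; /* assumed */
    double market_size = 50e9;  /* hypothetical annual widget market */

    double depreciation = fab_cost / life_years;         /* ~$1.33B/yr */
    double revenue_needed = depreciation / gross_margin; /* ~$2.67B/yr */
    double share_needed = revenue_needed / market_size;

    printf("yearly revenue just to cover the fab: $%.2fB\n", revenue_needed / 1e9);
    printf("implied market share: %.1f%%\n", share_needed * 100.0);
    return 0;
}
</ecode>

With those assumptions you need roughly 5% of the worldwide market just to cover the fab, which lines up with the figure in point 3.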

My numbers may be a little outdated today, but that only means my cost projections are too low, as is the required market share. From simply an accounting standpoint this is nuts. When I got into the business in the early 70's there were hundreds and hundreds of fabrication facilities. Every start-up had its own fab. Today you can count the premier companies that have fabs on maybe one hand, and the total number of significant players in the semiconductor market with their own fabs on both hands.

Intel deserves very high kudos for what they have accomplished. The risk they take is enormous, but they demonstrate time and time again what a manufacturing powerhouse they really are.

Fab has become a service (1)

tanveer1979 (530624) | more than 2 years ago | (#39866815)

You are right, and you are also right that your numbers are on the low side.
Consider the auto analogy.
If you start an auto company manufacturing common-rail systems, you will most likely buy the tech from either Bosch (likely) or Delphi;
you won't start from scratch.

Same deal here. Designing a chip by itself does not require much investment;
the further down the line you go, the more investment you need.

For example:
1. Algorithm development - chip architecture - basically an algorithm or an idea
2. RTL (actual behavioural model) - now you need simulators and verification engineers to make sure your RTL works
3. You want to create a netlist too? - welcome to synthesis tools ($$++)
4. Place and route - GDSII - even more!
5. Manufacturing - fab - really big cost

Since TSMC has fabs, financial prudence dictates companies use those. It's not an ideal situation: with so few fabs in the market, it's a sort of monopoly. Probably only a handful of semiconductor companies have enough cash to invest in a fab, and even then the results (profits) won't show for many years.

Re:Apple is not a semiconductor company (3, Informative)

Anonymous Coward | more than 2 years ago | (#39866857)

Intel's really in another league.

There's a reason Intel shows yields on a log scale. Hint: it's not because they're low. The rest of the industry is reasonably happy at 50-70% yields (and everyone knows it, since every buyer sees the yields on the chips it's buying). That's why Intel dominates. Getting to smaller feature sizes means they can make smaller die, which means more die per wafer, which means cheaper CPUs. People arguing over the small performance gain are missing the fact that going from 32nm to 22nm means Intel just cut their costs roughly in half. But did you notice the price getting cut in half? Nope.

There's another huge advantage to having your own fab: turnaround time. When I worked at HP and HP had its own fab, turnaround times (from tape-out to parts back) were 2-3 weeks. Later, fabbing a chip at an external fab, it was about 2 months, and you had to pay a lot to get that. Time is extremely valuable, and fabless companies are stuck waiting for chips, because fabs are optimizing their own time, not their customers' - that's how they are paid.
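Back on the yield point: a hedged C sketch using the common Poisson yield model, Y = exp(-A*D0). The defect density and wafer cost below are assumed example values picked to land in the 50-70% range mentioned above, not any fab's real numbers; the die sizes are the ~Bulldozer and ~Ivy Bridge figures quoted earlier in the thread.

<ecode>
/* Why die size drives cost: Poisson yield model Y = exp(-A*D0).
 * D0 and the wafer cost are illustrative assumptions. */
#include <stdio.h>
#include <math.h>

static double yield(double area_cm2, double d0_per_cm2)
{
    return exp(-area_cm2 * d0_per_cm2);
}

int main(void)
{
    double d0 = 0.2;            /* defects per cm^2, assumed */
    double wafer_cost = 5000.0; /* $ per processed wafer, assumed */
    /* ~Bulldozer-sized vs ~Ivy Bridge-sized dies, in cm^2 */
    double dies[2] = { 3.15, 1.60 };

    for (int i = 0; i < 2; i++) {
        double y = yield(dies[i], d0);
        /* ~706 cm^2 on a 300mm wafer, ignoring edge losses */
        double gross = 706.0 / dies[i];
        printf("die %.2f cm^2: yield %.0f%%, cost per good die $%.2f\n",
               dies[i], y * 100.0, wafer_cost / (gross * y));
    }
    return 0;
}
</ecode>

The smaller die wins twice: more candidates fit per wafer and a higher fraction of them work, so the cost per good die drops by well over half.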

Re:Apple is not a semiconductor company (1)

Anonymous Coward | more than 2 years ago | (#39866977)

4) You get to scrap your factory in 3 years.

Why would you scrap it? Just because 16nm comes along doesn't mean 22nm stops being useful. I'm sure you can sell capacity to third parties once something better comes along, if they don't care about getting the latest and greatest. There are chips everywhere (cars, washing machines, stoves) that can happily be produced on an "older" technology without issue.

Whether you scrap it or keep it would simply be another accounting decision. Once you've sunk the cost, I would think that OpEx would be fairly predictable, and so extra cash flow from third parties wouldn't hurt one's bottom line.

commendable maybe (1)

Shavano (2541114) | more than 2 years ago | (#39866803)

It's notable. It'll be commendable if the huge investment they made to pull it off gives them an advantage worth more than the investment. You can't know that until you've gone to mass production and seen what the problems are.

Re:How come Apple (3, Insightful)

Pieroxy (222434) | more than 2 years ago | (#39865885)

Let's see. How come GM doesn't have these tire factories that Firestone and Michelin have? I wonder... GM is making a heck of a lot more money than Firestone, though...

Re:How come Apple (1)

Anonymous Coward | more than 2 years ago | (#39865913)

Not this decade they aren't.

Re:How come Apple (1)

Jeff DeMaagd (2015) | more than 2 years ago | (#39866895)

This decade or the previous decade? The 2000s were pretty sad, but in the 2010s GM has shown a pretty robust recovery.

The example doesn't really fly, though: GM still makes a lot of its own parts; they haven't farmed out the core product.

Re:How come Apple (1)

petermgreen (876956) | more than 2 years ago | (#39866195)

Apple takes technology designed and manufactured elsewhere (though they may make some minor tweaks), combines it with their own software and turns it into slick products that people are prepared to pay a premium for. That requires design and marketing prowess, but not any great technical capability.

Hmmm. (-1)

Anonymous Coward | more than 2 years ago | (#39865805)

Industrial spying worked for Intel. That is why.

Re:Hmmm. (5, Funny)

Anonymous Coward | more than 2 years ago | (#39865883)

Spying... on their competitors who are all years behind them?

You must be pretty high up in the CIA to have thought of such a genius spying scheme.

Re:Hmmm. (0)

Anonymous Coward | more than 2 years ago | (#39865981)

I think the bigger issue is that Intel has large sums of money to spend, and when they lose out they just use it to pay people not to use the competition's superior chips. That seems to be a winning strategy for them. Of course, AMD doing some really stupid things also helped.

Re:Hmmm. (1)

Sable Drakon (831800) | more than 2 years ago | (#39866215)

AMD's been doing stupid crap for years, like the original six-core Phenom II chips and the fact that MSAA is still a feature that shouldn't be used on their graphics cards. It's going to be a sad day when AMD crumbles, but it is coming.

Re:Hmmm. (0)

Anonymous Coward | more than 2 years ago | (#39866965)

There was only one stepping of the six-core Phenom II chips, so what would be the point of comparison?

why? (-1, Troll)

crutchy (1949900) | more than 2 years ago | (#39865821)

"wintel"

why will it eventually be overtaken by samsung?

"winfail"

Too bad their 22nm 3D failed (1, Interesting)

Anonymous Coward | more than 2 years ago | (#39865825)

The shrink from 22 to 32nm is a staggering size change - 33% finer lithography - and it uses their much-hyped 3D transistor technology on top of things. Yet, Ivy Bridge, being just a shrink of the older Sandy Bridge die, shows no improvements over the 32nm version. Traditionally, Intel has always been able to show lower power consumption and more than a tangible performance improvement when just doing a process shrink, but the Ivy Bridge does nothing extra in terms of performance and consumes no less power than its older 32nm sibling - and let's not mention the inefficient heat packaging causing temperatures hotter than the 32nm Sandy Bridge. There's a problem here, Intel.

Re:Too bad their 22nm 3D failed (2)

DarkTempes (822722) | more than 2 years ago | (#39865903)

Is this true or just trolling?

Every benchmark I've seen so far has shown performance increases and power consumption decreases at around the same price.
If your statement is true, then that suggests a lot of review sites out there are spoofing their results, and that's very, very bad.

Sure, if you have a Sandy Bridge chip there isn't a whole lot of reason to upgrade, because for most users anything made in the last 5 years or so can handle everything you'd want to throw at it.

Re:Too bad their 22nm 3D failed (1)

Big Hairy Ian (1155547) | more than 2 years ago | (#39865927)

Is this true or just trolling?

From what I understand it's about 30% more efficient; I don't know about it being any faster, though. I also thought it could power down core's independently.

Re:Too bad their 22nm 3D failed (1)

Anonymous Coward | more than 2 years ago | (#39866163)

From what I understand it's about 30% more efficient; I don't know about it being any faster, though. I also thought it could power down core's independently.

The ability to power down cores (without the apostrophe!) independently is not related to transistor size or transistor design.

Re:Too bad their 22nm 3D failed (3, Informative)

Anonymous Coward | more than 2 years ago | (#39865969)

Most of the complaints are geared towards the overclocking space; it doesn't overclock as far as the old Sandy Bridge.

http://www.overclockers.com/intel-i7-3770k-ivy-bridge-cpu-review/
http://www.overclockers.com/ivy-bridge-temperatures

http://www.anandtech.com/show/5771/the-intel-ivy-bridge-core-i7-3770k-review/
http://www.anandtech.com/show/5763/undervolting-and-overclocking-on-ivy-bridge

etc.

Re:Too bad their 22nm 3D failed (0)

Anonymous Coward | more than 2 years ago | (#39866097)

Actually, the overclocking would've been no worse if they had used thermal solder to dissipate heat. This is a case of Intel cutting corners because they can - AMD is far behind, and by cost-cutting that prevents high overclocking on Ivy Bridge, they'll sell more Haswells next year.

Re:Too bad their 22nm 3D failed (2, Interesting)

macraig (621737) | more than 2 years ago | (#39865971)

It's true, more or less. [slashdot.org]

Re:Too bad their 22nm 3D failed (0)

Anonymous Coward | more than 2 years ago | (#39865925)

Bullcrap. You just have no idea what you're talking about.

The 22nm chip has significant power advantages attributed to the shrink. It also has around 10% better CPU performance at that lower power, so it's significantly better perf/watt. GPU performance is far higher. And it is 75% the die size of Sandy Bridge, with 20% more transistors (mostly gone into large increases in GPU performance).

Re:Too bad their 22nm 3D failed (1)

macraig (621737) | more than 2 years ago | (#39865963)

Re:Too bad their 22nm 3D failed (2)

jkflying (2190798) | more than 2 years ago | (#39865997)

That's just for overclocking. Regular usage is improved.

Re:Too bad their 22nm 3D failed (4, Insightful)

Goragoth (544348) | more than 2 years ago | (#39866001)

And you seem to have missed the part where "running hotter than Sandy Bridge" applies only to overclocking. Yes, IB is a worse overclocker than SB, but under normal conditions Ivy Bridge is faster and uses less power than Sandy Bridge. Remember that overclockers are a tiny portion of the market. Ivy Bridge isn't the amazing revolutionary chip some people were expecting, but it is a successful, evolutionary step forward. Just like most processor generations.

Re:Too bad their 22nm 3D failed (5, Insightful)

Theophany (2519296) | more than 2 years ago | (#39866263)

I think you've completely missed the point.

1. Ivy Bridge is a die shrink, nothing more, nothing less. Everybody who really thought it would be light years ahead of Sandy Bridge in terms of performance was simply deluded. It's a new process node that nobody has yet perfected; that will come with Haswell.

2. All this ire is unfairly directed at Intel. Given that AMD seems to have no idea what it's doing at the moment, Intel can relax and do as they please. If you want to be pissed at anybody, be pissed at AMD for not being anywhere near competitive and not pushing Intel to continuously raise their game.

Since 1998 I've only ever used AMD CPUs in my builds. When I came to build a new rig in February, I simply couldn't justify buying AMD for my CPU again, because they were so far behind and there was zero indication that Bulldozer would rectify that. It's sad watching them busily engage in killing themselves off as a serious desktop CPU manufacturer, leaving Intel to potentially become lazy and overpriced, but that isn't Intel's fault.

Re:Too bad their 22nm 3D failed (5, Informative)

dpilot (134227) | more than 2 years ago | (#39866417)

> but that isn't Intel's fault.

Actually it is, to at least some extent. Go back a few years, when Intel was making misstep after misstep and AMD was coming on gangbusters with K8. At that point, Intel had missed the market so badly that had they been AMD, they would have gone under. They weren't AMD, though; they were Chipzilla. AMD enjoyed a good product cycle with K8, until Intel managed to come back. But they didn't enjoy the great product cycle they should have. Their great product cycle was turned into a merely good product cycle because Chipzilla twisted a few arms and kept K8 out of key opportunities.

The other piece of reality is that Intel combines first-rate process technology with first-rate design capability. (I say "capability" because more than once they've shown themselves to be very capable of taking their eye off the ball, design-wise.)

AMD's biggest problems have always been financing and less-than-best process technology. Bulldozer is a misstep, agreed. But it's not a misstep of the degree of NetBurst or IA-64. Had K8 gotten the success it deserved, AMD would have been better able to properly fund their design shop. That wouldn't have helped their process problems, however.

The simple fact is that the way things are today, Intel can afford to screw up badly and recover. None of their competitors can.

Re:Too bad their 22nm 3D failed (1)

macraig (621737) | more than 2 years ago | (#39866475)

Whose ire exactly are we talking about here? It's not mine. I've used mostly AMD processors myself for the last 15 years, but that will probably change because of the better power and thermal characteristics of the Sandy Bridge series. My next system from scratch will likely host an Intel CPU.

As for the parent of my original comment, I think he probably read that earlier article BEFORE someone corrected the summary to qualify that it involved overclocking (the comments make clear it wasn't initially so). Someone who posts a comment that is merely incorrect doesn't warrant the post being moderated as Troll. The correct moderation is no moderation at all. Some idiot(s) let his emotional investment in some idea/company distort his response to that post.

Re:Too bad their 22nm 3D failed (0)

Anonymous Coward | more than 2 years ago | (#39866235)

There is nothing on the Ivy Bridge die that is "far increased" in terms of performance. NONE of the almost dozen tests have been able to show anything remarkable. Not in CPU perf, not in GPU perf, not in lowered power consumption. We're talking about loads of tests that show only a measly few watts lower TDP, a measly few percent extra CPU perf, and a measly few extra percent of GPU perf - nothing that even scratches "large increases". Something went wrong here. This isn't what we're used to from Intel when they do a massive shrink like this.

Re:Too bad their 22nm 3D failed (0)

Anonymous Coward | more than 2 years ago | (#39866403)

http://www.anandtech.com/show/5771/the-intel-ivy-bridge-core-i7-3770k-review/

GPU performance is increased by 60% in some cases.

Re:Too bad their 22nm 3D failed (4, Interesting)

SuricouRaven (1897204) | more than 2 years ago | (#39865947)

That's in part because they put the extra die space freed up by the shrink to a new purpose: graphics performance. If you just look at processor performance, Ivy is no better. Benchmark the built-in graphics and it's far ahead.

Of course, anyone who actually needs decent graphics wouldn't be using the on-chip graphics anyway, so I question just how useful this really is.

Re:Too bad their 22nm 3D failed (5, Insightful)

dkf (304284) | more than 2 years ago | (#39866177)

Of course, anyone who actually needs decent graphics wouldn't be using the on-chip graphics anyway, so I question just how useful this really is.

There's a whole world of people who would quite like decent graphics, but who don't want to spring another hundred bucks or two to get something fancy. There's also the mobile market (laptops, tablets, etc.) where fitting an extra graphics card looks more like a liability than a good thing. Overall, it looks to me like a smart area for Intel to pitch their transistor budget at.

Re:Too bad their 22nm 3D failed (0)

Anonymous Coward | more than 2 years ago | (#39866911)

Yes, but how many of these people who don't want to spring $100 on fancy graphics want to spring $200-300 on their CPU?

Re:Too bad their 22nm 3D failed (4, Informative)

Kjella (173770) | more than 2 years ago | (#39865949)

Traditionally, Intel has always been able to show lower power consumption and more than a tangible performance improvement when just doing a process shrink, but the Ivy Bridge does nothing extra in terms of performance and consumes no less power than its older 32nm sibling

Is there any reason the parent is at +4, Interesting and not -1, Troll? Are the AMD fanbois really so desperate that they have to mod up blatant lies? Ivy Bridge uses 25-30W less power at stock speed to deliver marginally better CPU performance than SB and considerably better (but still crappy) GPU performance. The only people who whine are those who want a 4.5+ GHz overclock. Anandtech called it quite possibly the strongest tick [Intel] has ever put forth [anandtech.com], but I guess if you don't like reality you can invent your own.

Re:Too bad their 22nm 3D failed (1)

Anonymous Coward | more than 2 years ago | (#39866363)

Is there any reason the parent is at +4, Interesting and not -1, Troll? Are the AMD fanbois really so desperate that they have to mod up blatant lies?

Looks like the AMD fans are faster than the Intel sockpuppets. Although the latter seem to be more persistent, what with getting paid for it and all ;)

Re:Too bad their 22nm 3D failed (5, Interesting)

QQBoss (2527196) | more than 2 years ago | (#39866069)

The shrink from 22 to 32nm is a staggering size change - 33% finer lithography - and it uses their much-hyped 3D transistor technology on top of things. Yet, Ivy Bridge, being just a shrink of the older Sandy Bridge die, shows no improvements over the 32nm version. Traditionally, Intel has always been able to show lower power consumption and more than a tangible performance improvement when just doing a process shrink, but the Ivy Bridge does nothing extra in terms of performance and consumes no less power than its older 32nm sibling - and let's not mention the inefficient heat packaging causing temperatures hotter than the 32nm Sandy Bridge. There's a problem here, Intel.

While I will accept that you reversed some numbers (the shrink was from 32 to 22, not the other way around) and Intel is using tri-gate transistors, most everything else you describe is just flat-out wrong. Ivy Bridge DOES show lower power consumption at stock voltages (TDPs of 77W vs 95W are a testament to that), and it has higher performance at that lower power consumption (though not by huge amounts, nor was it intended to be). Since it draws less power than Sandy Bridge at the same frequency, it is not having any issues related to thermals and packaging.

Now, if you want to rant about the fact that it doesn't handle overvoltage well for overclocking purposes, that is fine, but that is a separate discussion from stock behavior. What you are seeing now is that Intel (probably extremely wisely for the market they are chasing most heavily [anandtech.com]) has tuned their process node for stock voltages, but this results in very leaky transistors at high voltages. Additionally, while the current packaging can remove heat just fine at stock voltages, when you start leaking too much, the heat builds up too quickly - which certainly is a 22nm node issue and not actually a packaging issue. [tweaktown.com] Quite possibly, though how far in the future I can't begin to guess, they will tweak the process for the Extreme Edition CPUs to make them handle an overclock without leaking so much, but that will take some time learning how they can play with the various knobs to get what they want without destroying what they need.

This leaves me with the feeling that the only problem here is your expectations of a CPU that was manufactured with the intent of taking the mobile market by storm (and they have tuned the process properly for that) when what you want is an overclocking king. Let's see how they tune the process technology for the Extreme Edition (and hopefully copy it into other desktop-bound CPUs) before deciding that they have screwed the pooch on overclocking.
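To illustrate the leakage argument, here is a toy C model: dynamic power scales as C*V^2*f, while subthreshold leakage grows roughly exponentially with voltage. Every constant below is invented purely to show the shape of the curves; none of this is Ivy Bridge data.

<ecode>
/* Toy power model: dynamic power C*V^2*f plus an exponential
 * leakage term. All constants are made-up illustrations. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double c_eff = 10e-9; /* effective switched capacitance, F (assumed) */
    double leak0 = 5.0;   /* leakage at nominal voltage, W (assumed) */
    double v_nom = 1.0;   /* nominal core voltage, V */
    double k = 4.0;       /* leakage sensitivity to voltage (assumed) */

    for (int i = 0; i < 4; i++) {
        double v = 0.90 + 0.15 * i;     /* 0.90 .. 1.35 V */
        double f = 3.5e9 * (v / v_nom); /* crude f ~ V scaling */
        double dynamic = c_eff * v * v * f;
        double leakage = leak0 * exp(k * (v - v_nom));
        printf("V=%.2f: dynamic %5.1f W, leakage %5.1f W\n", v, dynamic, leakage);
    }
    return 0;
}
</ecode>

Dynamic power grows roughly with V^3 here, but the leakage term grows about six-fold over the same voltage range, which is the "very leaky at high voltages" behavior described above.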

Re:Too bad their 22nm 3D failed (2, Insightful)

Anonymous Coward | more than 2 years ago | (#39866071)

Why the hell did this get marked Interesting?

1. If it was "just" a shrink over 32nm and had the same die size and power consumption, it would be no improvement. Instead the die size is quite smaller than 32nm. Past shrinks have kept the die size only slightly smaller, as they either added more cache or other logic in addition to the shrink. This time around things are mostly the same, resulting in a smaller die. This means more dies per wafer and, eventually, lower cost (noting a move to 450mm wafers within the next 5 years).

2. Ivy Bridge does have lower power consumption than Sandy Bridge. Same-speed chips doing the same task can have the Ivy Bridge system using up to 30W less power at peak performance. Where it hasn't really improved is in 'idle' conditions. Why? Because most of the tech to turn off various sections of the CPU was introduced in other process shrinks. As this is generally "just" a shrink, the idle power being roughly the same isn't about a lack of improvement, but rather about less new innovation in existing silicon power-saving techniques. I mean, the chip already slows its overall speed down when idling, and will shut down significant portions of its cache as well.

3. There has only been a performance improvement for various definitions of "performance". In other iterations Intel has tweaked the chip, based on feedback from the chips being in the field. Of course, with Sandy Bridge running so effectively, there hasn't been much call to 'tweak' the design. Still, at the same speed, Ivy Bridge has better "power" performance than Sandy Bridge.

4. I don't think you've seen any benchmarks, to be honest. Graphics has improved significantly, CPU not so much. But it is still an improvement over Sandy Bridge.

5. The packaging problems have only been encountered when people overclock the chip - that is, people running it out of spec. Sure, there's the 1% of us out there who will push the limits, but most people will never see those hotter temperatures. Personally, I go the opposite direction, undervolting to consume less power and in turn produce less noise.

Re:Too bad their 22nm 3D failed (0)

Anonymous Coward | more than 2 years ago | (#39866187)

1. If it was "just" a shrink over 32nm and had the same die size and power consumption, it would be no improvement. Instead the die size is quite smaller than 32nm.

The complete die is quite smaller than a single transistor on it? Now that's an achievement! ;-)

Re:Too bad their 22nm 3D failed (-1)

macraig (621737) | more than 2 years ago | (#39866079)

I'm getting real sick of dogmatic groupthinkish people being allowed to moderate here. The result is bullshit like this post being punished as Troll, when effectively what he's done is reference another recent Slashdot article [slashdot.org]. I had already commented here before the real trolls masquerading as moderators leapt in here, so I can't do my part to correct the small injustice; I hope some proper meta-moderation takes place.

Re:Too bad their 22nm 3D failed (2)

QQBoss (2527196) | more than 2 years ago | (#39866107)

It is being punished as a troll, because it is wrong in annoyingly misleading ways. It probably would not have been punished as a troll if the GP had said "in the area of overclocking, 22nm is a fail because..." By making a sweeping comment that only applies to a very small subset of the market, potentially informative because troll. The mods got it right.

Re:Too bad their 22nm 3D failed (1)

QQBoss (2527196) | more than 2 years ago | (#39866113)

grrrr.... because -> becomes

Re:Too bad their 22nm 3D failed (1)

macraig (621737) | more than 2 years ago | (#39866133)

Fine, then mark the previous article - or the editor who approved it - as Troll, and leave this guy the hell alone for being misled by it.

Re:Too bad their 22nm 3D failed (1)

QQBoss (2527196) | more than 2 years ago | (#39866229)

Given that the previous article you cite clearly calls out that this is an overclocking issue, "When overclocked, Ivy Bridge runs as much as 20C hotter than its Sandy Bridge predecessor at the same speed, despite the fact that the two chips have comparable power consumption", I believe that the correct troll is self-identified.

Re:Too bad their 22nm 3D failed (1)

macraig (621737) | more than 2 years ago | (#39866411)

Just because a post might be incorrect doesn't warrant moderating it as Troll. Just leave it unmoderated.

Re:Too bad their 22nm 3D failed (1)

Calos (2281322) | more than 2 years ago | (#39866233)

Yep, totally not his/her fault for spreading lies and not bothering to read the several articles linked to the previous /. story or the commentary attached to the /. article. S/he deserves being modded up, plus some bonus karma for being such a victim, and not having lies modded down so that they stop sustaining themselves is a necessary sacrifice. The fact that s/he is posting as AC and will not suffer/benefit beyond that one post notwithstanding.

Sorry. I think I need more coffee.

Re:Too bad their 22nm 3D failed (1)

macraig (621737) | more than 2 years ago | (#39866407)

Just because a post might be incorrect doesn't warrant moderating it as Troll. Just leave it unmoderated. Nowhere did I advocate a positive moderation.

Re:Too bad their 22nm 3D failed (1)

Jeff DeMaagd (2015) | more than 2 years ago | (#39866931)

When you've mistaken overclocking for normal operation, I think you've either missed the boat or you're not being honest about your agenda.

IB is hotter when overclocking, but at recommended settings it is cooler and a little faster than SB. If you only care about overclocking, that's fine, but not saying as much is not being honest with us.

Or did you not even bother to understand the article you're referencing?

Intel makes for awesome Linux boxes. (5, Insightful)

MnemonicMan (2596371) | more than 2 years ago | (#39865835)

Intel, with their open-source graphics stack, makes for some of the easiest-to-maintain Linux boxes around. I'm typing this right now on Arch with Intel graphics. Sure, they don't have a lot of "gaming punch", but they are darn stable and just work with Linux.

My desktop right now has Windows and is running a first-generation Core i5 with an AMD Radeon 6870 added in. When that machine gets replaced with another gaming Windows machine in a year or two, I'll be pulling the AMD graphics out of it and running on the i5's integrated Intel graphics. It will be super-low-maintenance in Linux. None of this rebuilding fglrx or nVidia modules every time you upgrade the kernel.

When I go looking for a Linux machine, the very first thing I look to check off is "Intel graphics". Yup, then it's a buy.

Re:Intel makes for awesome Linux boxes. (1)

MnemonicMan (2596371) | more than 2 years ago | (#39865847)

Typing this on my Arch laptop; my desktop is Windows.

Re:Intel makes for awesome Linux boxes. (0, Flamebait)

Anonymous Coward | more than 2 years ago | (#39865853)

Do you work for Intel's marketing department?

Re:Intel makes for awesome Linux boxes. (0)

spyked (1878060) | more than 2 years ago | (#39865879)

That wouldn't make sense. I mean what marketing person would use "Linux machine" and "graphics" in the same sentence?

Re:Intel makes for awesome Linux boxes. (2)

MnemonicMan (2596371) | more than 2 years ago | (#39865887)

Ha, nope. Anyone who's been around the Linux block a few times should confirm that the Intel graphics drivers (xf86-video-intel) are just the shiznit. Try getting fglrx to run with xorg-server 1.12.1 - what Arch ships - and you'll find that the proprietary driver hasn't been updated to support that version yet. You could always use xf86-video-ati or xf86-video-nouveau but, honestly, both of those drivers lack the polish of xf86-video-intel. Intel just works.

Re:Intel makes for awesome Linux boxes. (1)

Malvineous (1459757) | more than 2 years ago | (#39865859)

When I go looking for a Linux machine, the very first thing I look to check off is "Intel graphics". Yup, then it's a buy.

I've decided to do the same with my next PC, after growing tired of the lackluster multimonitor support under nVidia (sure, it's usable, but still very buggy). But what are Intel graphics like with multiple monitors? I currently have four screens connected via two nVidia discrete cards (no onboard graphics), so how could I achieve this with Intel? The fabled DisplayPort daisy-chaining doesn't seem to have materialised, and the one-to-many DP adapters are still stuck at very low resolutions.

Re:Intel makes for awesome Linux boxes. (1)

MnemonicMan (2596371) | more than 2 years ago | (#39865901)

I'm running dual-head using XRandR. It works fine. Honestly, however, I've never used anything more complicated than dual-screen, so I don't want to go off and promise you the moon. Xinerama may be an option for you: before using XRandR I set up with that, and even though it's not supposed to work, Compiz did work in that mode. XRandR, Xinerama, TwinView - you'd think between all that something would work for you.

Re:Intel makes for awesome Linux boxes. (0)

Anonymous Coward | more than 2 years ago | (#39866045)

One of the touted new features of Ivy Bridge is support for triple-head displays, so four screens will probably require an additional graphics card.

Re:Intel makes for awesome Linux boxes. (1)

Malvineous (1459757) | more than 2 years ago | (#39866179)

XRandR, Xinerama, TwinView - you'd think between all that something would work for you.

Thanks for the reply. I'm currently using Xinerama (without TwinView, as some of my screens are in portrait mode) with the "Awesome" window manager, which works fine (apart from nVidia bugs like OpenGL freezing on one monitor until you switch virtual desktops on another monitor), but I was thinking more of the physical connection for four screens. AFAIK Intel doesn't make discrete graphics cards, so it sounds like I'd only be able to go triple-head before I have to pick from one of the closed-source camps ("closed source" referring to officially supported manufacturer drivers).

Re:Intel makes for awesome Linux boxes. (1)

Issarlk (1429361) | more than 2 years ago | (#39865937)

Okayyyy... So you'll use Intel integrated graphics in your next Windows gaming machine that'll run on Linux? That sounds like masochism; do you actually play some games, or are Minesweeper and Tetris all you need to run on your rig?

Re:Intel makes for awesome Linux boxes. (2)

MnemonicMan (2596371) | more than 2 years ago | (#39865991)

No, when I buy my next Windows machine it'll have either AMD or nVidia graphics. My current desktop machine, which has AMD graphics, will have those graphics pulled at that time, which means the machine will then be using its Intel graphics - what is integrated right now on the processor. That machine - the old one, when I get my new one - will be Linux, with Intel graphics. And I don't play games under Linux; that's what Windows is for, which will be the new machine - in a year or two.

Re:Intel makes for awesome Linux boxes. (1)

pr0nbot (313417) | more than 2 years ago | (#39866111)

Just for the record... I've used an Ubuntu box for 5+ years of WoW and now SWTOR, under Wine. The proprietary nVidia drivers are easy to install and work well under Linux, and have coped with my moves from a 7600 to a 9600GT to a 460.

Is it masochism? Yes, a bit, but I've not really got much use for a Windows box aside from gaming, so I thought I'd give it a go.

Re:Intel makes for awesome Linux boxes. (0)

Anonymous Coward | more than 2 years ago | (#39866139)

OMG....GaMiNg!!!!!! I'm off to raid a troll with all my friends!!!!! ("friends")

Re:Intel makes for awesome Linux boxes. (2)

Bert64 (520050) | more than 2 years ago | (#39866103)

There are open-source drivers for Radeon too; they might not perform as well as the closed drivers, but they still outperform most Intel cards while being just as convenient. Plus you have the option of using the closed drivers if you want the extra performance.

Re:Intel makes for awesome Linux boxes. (1)

MnemonicMan (2596371) | more than 2 years ago | (#39866141)

The last time I tried the open-source radeon driver it was with an ATI Xpress 1100 chipset. The driver would randomly garble the screen and there was nothing I could do to fix it. Things have perhaps changed, but I've never had any issues with Intel.

On Arch, at this time, the X.Org server isn't supported yet by the latest AMD proprietary driver - as already stated, because the X server is too new.

Re:Intel makes for awesome Linux boxes. (1)

jawtheshark (198669) | more than 2 years ago | (#39867145)

I still use the ATI Xpress 1100 on one of my laptops. Given that there are no more proprietary drivers for it and the open-source drivers are, ehm, "interesting", using Ubuntu with anything but Unity 2D is impossible. Intel graphics in comparison are lightning fast. The garbling on the Xpress 1100 seems to be gone now, though (Ubuntu 12.04). Keep in mind, the ATI Xpress 1100 can run Half-Life 2 on low settings on Windows XP, so it really ought to manage displaying a desktop...

However, Intel doesn't get a free pass either. I have an Atom-based machine with Intel GMA 3150 and I think it has problems with some OpenGL things. For example, Cheese doesn't work at all, while it works just fine with my work laptop (nVidia-based) and the same webcam. (Again, Ubuntu 12.04.)

Re:Intel makes for awesome Linux boxes. (0)

Anonymous Coward | more than 2 years ago | (#39866259)

I have a MythTV box with integrated AMD HD3200 graphics running Arch. The last time I updated it, the open-source drivers which I was using just wouldn't work. I had to switch to the proprietary ones, which are also buggy, but they do at least work.

As much as I hate to say it, AMD's Linux drivers are buggy pieces of shit, both the proprietary and the open-source ones. Which is a shame, because I would like to support AMD by buying another AMD-based machine, but I don't want the headaches of trying to get working graphics when I update.

And on the other hand, my netbook with Intel graphics, which I have been using for nearly 4 years, hasn't given me the slightest trouble with its graphics.

Re:Intel makes for awesome Linux boxes. (1)

Ed Avis (5917) | more than 2 years ago | (#39866147)

It's a shame there are no add-in PCI Express cards with Intel's graphics hardware (perhaps a couple of low-end Core processors with the CPU part disabled and just the GPU running). I would love to use Intel graphics instead of nVidia, but if you have several monitors it's not possible - unless you can now get a motherboard with two or four video outputs.

Re:Intel makes for awesome Linux boxes. (1)

MnemonicMan (2596371) | more than 2 years ago | (#39866197)

I have a laptop; it has its built-in screen and a VGA port on the side. Here is its xorg.conf file:

XRandR xorg.conf [pastie.org].

I did briefly use a Xinerama configuration, and here that is:

Xinerama xorg.conf [pastie.org].

I'm using Xfce 4.10; with the Xinerama config, Compiz worked fine. Later on someone told me it wasn't supposed to work with Xinerama. Huh, it did... But I went with the XRandR config anyway, because it is much shorter. However, on my login manager screen (SLiM), the XRandR config has both screens as clones of each other, whereas with the Xinerama config they are independent displays.

Re:Intel makes for awesome Linux boxes. (1)

Ed Avis (5917) | more than 2 years ago | (#39866949)

Thanks - yes, a decent laptop can drive an external display as well as the built-in screen. I am thinking more of desktops with two, four or even eight DVI outputs.

Re:Intel makes for awesome Linux boxes. (0)

Anonymous Coward | more than 2 years ago | (#39866341)

If that minuscule amount of convenience is the dominating requirement when choosing a GPU, you either don't actually need a GPU or you get paid for your opinion :D

Re:Intel makes for awesome Linux boxes. (0)

Anonymous Coward | more than 2 years ago | (#39866787)

Then you end up buying 50x as many machines when you build your render farm. Good luck with that. Both ATI and nVidia have had Linux support for several years, with consistently many times the performance. I suppose when all you want is bash and vi, Intel runs great.

Re:Intel makes for awesome Linux boxes. (1)

MnemonicMan (2596371) | more than 2 years ago | (#39867015)

You're positing a straw man. I am not, and never mentioned, "running a render farm." I'm using it for desktop use and programming. I have a full GUI desktop, Xfce 4.10, with Compiz enabled, and it's doing just fine. Flash video plays perfectly in my Chromium browser. Intel is fine.

Process the new Speed (0)

Anonymous Coward | more than 2 years ago | (#39865873)

OK, so they happen to be ahead on the lithography. And it's yielding nice kit, to be sure. But in a sense it's the new megahertz race: it means the design itself isn't as important as it should be. And that's something Intel actually has trouble with, q.v. Itanic.

TI has their own fabs (1, Informative)

Anonymous Coward | more than 2 years ago | (#39865897)

The article is wrong: TI has their own fabs; they don't outsource manufacturing as the article states.

Only partially wrong (2)

tanveer1979 (530624) | more than 2 years ago | (#39866135)

TI closed a lot of fabs. Not all TI-designed stuff gets manufactured in TI fabs; much of it actually goes to TSMC.

Re:Only partially wrong (1)

Anonymous Coward | more than 2 years ago | (#39866613)

Not true.

In order to move a device to a third-party fab, you must prove that the technology does not exist in a TI fab or that TI's fabs are at capacity.

Where are the cores? (0)

Anonymous Coward | more than 2 years ago | (#39865993)

While a 22nm fabrication process is all well and good, why hasn't Intel bothered to release the 80-core, teraflop chip that they touted some 6-7 years ago and said we would have in our computers by 2011? http://techresearch.intel.com/ProjectDetails.aspx?Id=151 [intel.com]

Re:Where are the cores? (1)

garrettg84 (1826802) | more than 2 years ago | (#39866169)

One word: necessity. They simply haven't had to. Besides render/compute farms, we don't really need that kind of power. Most of what might have been pushed towards those massively parallel processors is now being pushed towards the graphics cards - where they DO have more than 80 cores (think CUDA: greater than 512 cores now). Most of the games out there still struggle to use more than a few actual processor threads. Some problems are linear and simply can't have more cores thrown at them for faster work.
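That last point is Amdahl's law. A quick C sketch, with an assumed 90% parallelizable workload, shows why 80 (or 512) cores buy surprisingly little once there is any serial component:

<ecode>
/* Amdahl's law: speedup = 1 / ((1 - p) + p/n) for parallel
 * fraction p on n cores. p = 0.90 is just an example value. */
#include <stdio.h>

static double speedup(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / (double)n);
}

int main(void)
{
    double p = 0.90;                  /* assumed parallel fraction */
    int cores[] = { 4, 16, 80, 512 };

    for (int i = 0; i < 4; i++)
        printf("%3d cores: %4.1fx speedup (asymptotic cap %.0fx)\n",
               cores[i], speedup(p, cores[i]), 1.0 / (1.0 - p));
    return 0;
}
</ecode>

With 10% serial work the speedup is capped at 10x no matter how many cores you add, so an 80-core chip would mostly sit idle on typical desktop workloads.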

Because Intel has the best astroturfers? (0)

Anonymous Coward | more than 2 years ago | (#39866095)

Gotta love the absolutely objective slashvertisements...

Shill here
shill there
everywhere a shill
lalalala...

Competing with ARM... (0, Flamebait)

Bert64 (520050) | more than 2 years ago | (#39866123)

A while ago Intel stated they were intending to keep at least one process shrink ahead of everyone else, because it was the only way they could compete with ARM...

Personally I find this despicable and extremely arrogant: pushing their own inferior architecture and holding everyone back when they could be making ARM chips superior to everyone else's. Recent benchmarks show their latest low-power Atom chips are barely competitive with last year's ARM designs (and wouldn't be competitive at all if built on the same process)...

Re:Competing with ARM... (1)

msgmonkey (599753) | more than 2 years ago | (#39866253)

This is called business: using whatever advantage you have to compete against a competitor. Last time I checked, Intel was a business.

Re:Competing with ARM... (1)

Anonymous Coward | more than 2 years ago | (#39866291)

That's not new: Intel was able to beat the RISCs thanks to its superior manufacturing process.
The benchmarks of Intel's "Indian phone" have shown that Intel is now in the same category as ARM; I expect that Intel's next generation will beat ARM in performance/power, thanks to its better fab process.
Competition is about earning money, not "fighting fair"...

Re:Competing with ARM... (2)

serviscope_minor (664417) | more than 2 years ago | (#39866739)

pushing their own inferior architecture and holding everyone back

Oh brother, not this again.

On the low end, Intel will never beat ARM because of the large, expensive instruction decoder. That applies to the deep embedded stuff.

Cellphone chips aren't low-end any more. They're getting bigger and bigger and bigger. For big processors, Intel does very well, as they have the best OoO scheduling and branch prediction, which keeps the large, expensive ALUs busy, giving a very high IPC.

In 5 years' time, ARM will need similar guts if they want anything but lots of weak cores. At that point, the main advantage they have will be more or less down in the noise.
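The branch prediction point is easy to demonstrate with the classic sorted-versus-unsorted experiment below; it's a rough demonstration rather than a rigorous benchmark, and absolute timings will vary by CPU and compiler.

<ecode>
/* Same loop, same data: only the *predictability* of the branch
 * changes. On most OoO CPUs the sorted run is several times
 * faster because the predictor stops missing. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 20)

static volatile long long sink; /* keeps the sum from being optimized away */

static int cmp_int(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

static double timed_sum(const int *data)
{
    clock_t t0 = clock();
    long long sum = 0;
    for (int pass = 0; pass < 100; pass++)
        for (int i = 0; i < N; i++)
            if (data[i] >= 128)   /* predictable only once data is sorted */
                sum += data[i];
    sink = sum;
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void)
{
    int *data = malloc(N * sizeof *data);
    for (int i = 0; i < N; i++)
        data[i] = rand() % 256;

    printf("random order: %.2fs\n", timed_sum(data));
    qsort(data, N, sizeof *data, cmp_int);
    printf("sorted order: %.2fs\n", timed_sum(data));
    free(data);
    return 0;
}
</ecode>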

good thing Intel also does fab for ARM (0)

ChipMonk (711367) | more than 2 years ago | (#39866159)

ARM-based CPUs are outselling x86 by a fairly hefty margin, thanks to the mobile/embedded market, while the desktop x86 kingdom has been nearly saturated for, well, forever as these things go. And until Intel gets a clue and makes a chipset that renders on-screen for less than 5 times the mAh required by a comparable ARM part, it's going to stay that way.

Based on that, it's only good business sense that Intel brings in ARM business for their fabs.

Re:good thing Intel also does fab for ARM (0)

Anonymous Coward | more than 2 years ago | (#39866309)

Eh, who does Intel fab ARMs for? AFAIK, they only fab their own chips; they had an ARM license (I think they still have it) and made their own ARM lines (StrongARM from DEC, then XScale) back in the day, but they quit in 2006, when ARM's upward progress was seen as threatening x86's planned downward progress to smartphones, MIDs, and UMPCs.

cutting edge is not the market (0)

JasterBobaMereel (1102861) | more than 2 years ago | (#39866441)

Note this is manufacturing, not design; many other companies design chips.

Note this is only cutting-edge chips. How many PCs were bought with these cutting-edge chips... ?

What are people actually buying? Moderate PCs, a few servers, and a lot of phones and tablets... all of which use fab methods less advanced than this.

Yes, Intel is ahead, but the rest of the world is not necessarily buying its products.

Mark Bohr? (1)

seven of five (578993) | more than 2 years ago | (#39866711)

Any relation to Niels [nobelprize.org] and Aage [nobelprize.org]? No wonder Intel's headed for the atomic realm.

Intel is half a node ahead. (0)

Anonymous Coward | more than 2 years ago | (#39866897)

Hmm, let's readjust this article.

So... (1) 28nm products were available in early January; 22nm is available now, in May. (2) 28nm is a half-node behind 22nm. (3) TSMC's transistor density at a given node is actually very high - I believe the density of TSMC's 40nm was comparable to Intel's 32nm, according to diagrams posted at RWT or B3D. (4) 32nm at GlobalFoundries is a year old, and given a typical two-year cadence we are again talking about half a node behind.

However... TSMC still isn't making a lot of 28nm parts, whereas Intel will be making a lot more 22nm.
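For reference, the node and half-node arithmetic above, under ideal scaling (real processes deviate, as point 3 notes), looks like this:

<ecode>
/* Ideal density scaling: density goes with the inverse square of
 * the feature size. Real process densities deviate from this. */
#include <stdio.h>

static double density_ratio(double from_nm, double to_nm)
{
    return (from_nm / to_nm) * (from_nm / to_nm);
}

int main(void)
{
    printf("32nm -> 22nm: %.2fx density (a full node is ~2x)\n",
           density_ratio(32.0, 22.0));
    printf("28nm -> 22nm: %.2fx density (roughly half a node)\n",
           density_ratio(28.0, 22.0));
    return 0;
}
</ecode>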
