
Moore's Law Blowout Sale Is Ending, Says Broadcom CTO

samzenpus posted about 4 months ago | from the paying-the-price dept.

The Almighty Buck 267

itwbennett writes "Broadcom Chairman and CTO Henry Samueli has some bad news for you: Moore's Law isn't making chips cheaper anymore because it now requires complicated manufacturing techniques that are so expensive they cancel out the cost savings. Instead of getting more speed, less power consumption and lower cost with each generation, chip makers now have to choose two out of three, Samueli said. He pointed to new techniques such as High-K Metal Gate and FinFET, which have been used in recent years to achieve new so-called process nodes. The most advanced process node on the market, defined by the size of the features on a chip, is due to reach 14 nanometers next year. At levels like that, chip makers need more than traditional manufacturing techniques to achieve the high density, Samueli said. The more dense chips get, the more expensive it will be to make them, he said."


350mm (18inch) wafer (2)

Taco Cowboy (5327) | about 4 months ago | (#45615521)

I thought Intel, Samsung and TSMC claimed that the upcoming 350mm wafer is going to bring along another round of cost savings.

Are they telling the truth, or are they blowing smoke ?

Re:350mm (18inch) wafer (5, Informative)

CaptBubba (696284) | about 4 months ago | (#45615627)

350mm may bring costs down, but it isn't a process node advancement and won't help cram more transistors per unit area into a chip.

Instead it will just let them process more chips at once in the most time-consuming processing steps, such as deposition and oxide growth. The photolithographic systems, which are the most expensive equipment in the entire fab on a cost-per-wafer-processed-per-hour basis, gain somewhat from less frequent wafer exchanges, but the imaging is still done a few square cm at a time, repeated in a step-and-scan manner a hundred times or more per wafer per step. Larger wafers, however, pose one hell of a problem for maintaining film and etch uniformity, which is extremely important when transistor gate oxides are on the order of a few atoms thick.
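
A rough way to quantify that throughput gain is the usual gross-dies-per-wafer approximation, comparing today's 300mm wafers with the proposed 450mm (18 inch) size; this is a minimal sketch, and the 100 mm^2 die size is an assumed example, so the numbers are illustrative only:

    /* Rough sketch: gross dies per wafer via the common approximation
     * dies ~= pi*(d/2)^2/A - pi*d/sqrt(2*A), where d is the wafer diameter
     * and A is the die area. The 100 mm^2 die is an assumed example. */
    #include <math.h>
    #include <stdio.h>

    static double gross_dies(double wafer_mm, double die_area_mm2)
    {
        const double PI = 3.14159265358979323846;
        double r = wafer_mm / 2.0;
        return PI * r * r / die_area_mm2
             - PI * wafer_mm / sqrt(2.0 * die_area_mm2);
    }

    int main(void)
    {
        double die_area = 100.0;  /* hypothetical 100 mm^2 die */
        printf("300 mm wafer: ~%.0f gross dies\n", gross_dies(300.0, die_area));
        printf("450 mm wafer: ~%.0f gross dies\n", gross_dies(450.0, die_area));
        return 0;
    }

The larger wafer yields roughly 2.3x the dies for the same die size, which is where the per-die cost saving would come from; the transistor density per die is untouched.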

Re:350mm (18inch) wafer (2, Insightful)

Xicor (2738029) | about 4 months ago | (#45615795)

More transistors per unit area on a chip is worthless at the moment. You can have a million cores on a processor, but it will still be slowed down dramatically by the limits of parallelism. Someone needs to find a way to make parallel processing faster.
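
That ceiling is usually put in numbers with Amdahl's law; a minimal sketch, assuming a 5% serial fraction purely for illustration:

    /* Amdahl's law: speedup = 1 / (s + (1 - s)/n), where s is the serial
     * fraction of the work and n is the number of cores. The 5% serial
     * fraction below is an assumption, not a measured figure. */
    #include <stdio.h>

    static double amdahl_speedup(double serial_fraction, double cores)
    {
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores);
    }

    int main(void)
    {
        double s = 0.05;  /* assumed: 5% of the program cannot be parallelized */
        int n;
        for (n = 4; n <= 1048576; n *= 8)
            printf("%7d cores -> %5.1fx speedup\n", n, amdahl_speedup(s, n));
        return 0;
    }

Even with a million cores the speedup saturates around 20x, which is why more cores alone don't fix the problem.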

Re:350mm (18inch) wafer (5, Interesting)

x0ra (1249540) | about 4 months ago | (#45615883)

The problem is not so much in the hardware as in the software nowadays...

Re:350mm (18inch) wafer (5, Interesting)

artor3 (1344997) | about 4 months ago | (#45616163)

This hits the nail on the head. For decades, software developers have been able to play fast and loose, while counting on the ever-faster hardware to make up for bloated, inefficient programs. Those days are ending. Programmers will need to be a lot more disciplined, and really engineer their programs, in order to get as much performance as possible out of the hardware. In a lot of ways, it will be similar to the early days of computing.

Re:350mm (18inch) wafer (2)

AdamHaun (43173) | about 4 months ago | (#45615975)

More transistors per unit area on a chip is worthless at the moment.

That assumes your processor is a fixed size. It isn't. The smaller your die is, the cheaper it is. That's how process improvements make things cheaper.

Re:350mm (18inch) wafer (1)

tepples (727027) | about 4 months ago | (#45616161)

The smaller your die is, the cheaper it is. That's how process improvements make things cheaper.

The point of the featured article is that this is no longer the case, now that smaller processes require far more complex fabrication.

Re:350mm (18inch) wafer (1)

artor3 (1344997) | about 4 months ago | (#45616169)

Only to a certain point, which is what the article is getting at. Eventually the cost of the machines required to make smaller die outweighs the cost savings from having more die per wafer.

Re:350mm (18inch) wafer (1)

drhank1980 (1225872) | about 4 months ago | (#45616037)

There will also be a cost reduction from more efficient use of the ARC (anti-reflective coating), top coats, and photoresist applications on the larger wafers. Coat dispense volumes do not go up significantly with larger wafers in a spin-coat application, so you effectively get more imaging for the same volume of chemical. Seeing as many of the lithography materials are among the most expensive in the process, this benefit can be very significant. Of course, controlling these thicknesses to within a few angstroms across what is essentially a medium pizza will be a major challenge.

Also, I was pretty sure the SEMI standard was for 450mm wafers. It will be interesting to see how many people adopt the new equipment for 450mm production, because the up-front costs will be astronomical.

Re:350mm (18inch) wafer (0)

Anonymous Coward | about 4 months ago | (#45616049)

At least for Intel, cost savings only serve to increase margins. Since AMD gave up the fight, prices and core counts have both leveled off at unacceptable levels given current technology (and using good RISC cores and denser memory technologies like T-RAM, even more ought to fit).

I only hope that someone will be willing to pile ARMv8 cores on a chip for us, and relegate x86 to the legacy niche where it belongs.

Re:350mm (18inch) wafer (1)

currently_awake (1248758) | about 4 months ago | (#45616283)

Smaller geometries mean a cheaper cost per chip. Unfortunately, wafer yields drop with smaller geometries, meaning the benefits of shrinking silicon get eaten by a reduction in the number of chips that work per wafer. If we could shrink the chips without a drop in yield, then Moore's law would continue (for now). The other issue with shrinking the chips is transistor reliability: as we make them smaller, the chips stop working reliably.
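
That yield-versus-shrink trade-off is often sketched with a simple Poisson defect model; the defect densities, wafer cost and die count below are made-up illustrative numbers:

    /* Poisson yield model: Y = exp(-D0 * A), with D0 the defect density
     * (defects per cm^2) and A the die area in cm^2. Cost per *good* die is
     * (wafer cost / gross dies) / yield. All inputs here are assumptions. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double wafer_cost   = 5000.0;  /* assumed wafer cost in dollars */
        double gross_dies   = 600.0;   /* assumed dies per wafer        */
        double die_area_cm2 = 1.0;     /* assumed 100 mm^2 die          */
        double d0;

        for (d0 = 0.1; d0 <= 0.75; d0 += 0.2) {
            double yield = exp(-d0 * die_area_cm2);
            printf("D0=%.1f/cm^2  yield=%3.0f%%  cost per good die=$%.2f\n",
                   d0, 100.0 * yield, wafer_cost / (gross_dies * yield));
        }
        return 0;
    }

If shrinking pushes the effective defect density up faster than the die area comes down, the cost per good die rises, which is exactly the squeeze described above.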

CEOs are FULL OF SHIT (-1)

Anonymous Coward | about 4 months ago | (#45615525)

BULLSHIT. It will continue on as before. He just wants to artificially jack up prices. CEO scum.

Right. (0, Informative)

Anonymous Coward | about 4 months ago | (#45615527)

The smell of bovine feces is palpable.

Herb Sutter wrote about this 8 years ago (1)

Anonymous Coward | about 4 months ago | (#45615555)

There is no free brunch [utexas.edu].

er, LUNCH. Lunch.

Re:Herb Sutter wrote about this 8 years ago (0)

Anonymous Coward | about 4 months ago | (#45616089)

Yet multicore computing took off within the last eight years!

Re:Herb Sutter wrote about this 8 years ago (3, Interesting)

gl4ss (559668) | about 4 months ago | (#45616195)

8 years ago I was rocking a single-core PC with two gigs of memory and a phone with a ~320MHz CPU...

And when did "lower power use" get attached to Moore's law? If that were part of it, the law would have been out the window by 1985, and Athlons would never have existed either.

The pace is engineering, not marketing. It varies (1)

Anonymous Coward | about 4 months ago | (#45615565)

The idea that there's going to be a smooth curve between performance, cost and process size/voltage at all points is pretty stupid really.

A decade long product cycle sounds good to me (4, Insightful)

Anonymous Coward | about 4 months ago | (#45615569)

It used to be that you had to upgrade every 2 years. Now you really have to upgrade every 5 or 7 years. Once every 10 years sounds pretty good to me. As the pace of computer innovation slows, less money has to go toward upgrades. Computers are now more like appliances: you run them down until they physically break.

Of course, if you manufacture computers or work in IT, such a proposition is horrible, as a long product lifecycle means less money coming to you. As a consumer, I like it because I no longer have to shell out hundreds of dollars every other year to keep my computers usable.

Re:A decade long product cycle sounds good to me (4, Interesting)

alexander_686 (957440) | about 4 months ago | (#45615641)

No, you are still upgrading at the same rate. Except now, because more and more stuff is being pushed out onto the web, it is the servers that are being upgraded. So it is transparent to you. Oh, and phones too.

Re:A decade long product cycle sounds good to me (0)

Anonymous Coward | about 4 months ago | (#45615661)

It is astounding how many do not grasp this.

Re:A decade long product cycle sounds good to me (0)

Anonymous Coward | about 4 months ago | (#45616191)

No, you don't grasp it: I DO NOT UPGRADE, THEY DO.
And again, web apps require a marginal amount of power anyway.

Re: A decade long product cycle sounds good to me (0)

Anonymous Coward | about 4 months ago | (#45616159)

You do realize much of the public-facing web is virtualized, don't you? Supply has outpaced demand there as well; that's how we can afford to squish so many things onto fewer physical servers.

Re: A decade long product cycle sounds good to me (0)

Anonymous Coward | about 4 months ago | (#45616433)

Keep dreaming!

The reason we can afford to centralize workloads in the first place is because just like your desktop has more power than it knows what to do with, our servers do as well. Then ON TOP of a server doing the work a thousand clients used to, we virtualize dozens of servers onto one physical machine to keep it from sleeping all the time. A VERY big portion of the Internet is virtualized.

Sorry but demand is not outpacing supply on servers either, and without virtualization they'd grow even slower.

Most performance problems are software related these days, even games. Look at the stupidly high double-digit performance gains NVidia posts in each driver update, for Christ's sake, and that's not counting updates from the game's developer. In a lot of circumstances you probably gain performance faster through software updates than you would by replacing your video card every month.

There is nothing funnier than someone buying a $1000 video card just to run inefficient software faster, like racing a powerful car in the sand.

Re:A decade long product cycle sounds good to me (1)

Kjella (173770) | about 4 months ago | (#45615737)

It used to be that you had to upgrade every 2 years. Now you really have to upgrade every 5 or 7 years. Once every 10 years sounds pretty good to me. (...) As a consumer, I like it because I no longer have to shell out hundreds of dollars every other year to keep my computers usable.

Really, you like products that are just marginally better than before? You wouldn't like it if next year there was a car that could get you to work at twice the speed and half the price? I love that in 2013 I can buy a much better processor for the same amount of (inflation-adjusted) dollars as I could in 2003 or 1993, and ideally I'd like to say the same about 2023 as well. You really think you'd be better off with a 1993-era level of technology and two rebuys because they wore out?

With real income stagnating, you should at least hope that you get more for your dollar in ways that can't be easily compared. "Communication expenses" might be measured in dollars but it doesn't mean an old landline (when that was the only thing) and a smart phone are the same thing. What you get with computers today couldn't be had for any price 20 years ago, but you can now through the progress of technology. Take that away and your life would be very similar to that of your grandparents.

Re:A decade long product cycle sounds good to me (4, Insightful)

hairyfeet (841228) | about 4 months ago | (#45616091)

The problem, for the vast majority? To use a car analogy, it's like using a top fuel funny car to go to the store for milk: they have more power than they can possibly use.

Take my dad, for example; he is the perfect "Joe Average" user. He uses social media, watches videos, uses his bookkeeping software, the kind of everyday tasks the majority do daily. When the price dropped on the Phenom IIs to make way for the FX, I thought "Well, it has been a while since I built him that Phenom I quad, so maybe it's time for an upgrade" and ran a usage monitor for a week to see how hard he was hitting the CPU. What did I find? 35%. That is the average amount of usage that quad was getting. Sure, he'd occasionally get over 50%, but only for a few seconds.

And THAT is why it's really not gonna matter to Joe and Jane Average: their systems already idle more than they run, and the prices are already crazy cheap. I mean, I just got dad a quad-core Android tablet for Xmas... think he'll EVER come up with enough to do to peg all 4 cores enough that an upgrade would help? Not likely. Hell, I was the guy that built a new system every other year with a major overhaul on the odd years. Now? My system is 4 years old and I have zero reason to upgrade to a new one. Why should I? I have a hexacore, 8GB of RAM, 3TB of HDD; the only thing I upgraded was my HD4850 for an HD7750, and even that was about lowering heat, not performance.

Let's face it, Moore's Law made systems several orders of magnitude more powerful than the work the masses can come up with for them to do. Who cares if Moore's Law finally winds down when the systems are so powerful they spend more time idling than anything else?

Exponential growth (1)

flatulus (260854) | about 4 months ago | (#45616093)

Moore's Law is an expression of exponential growth. All we are seeing is the logical conclusion of applying exponential growth expectations to a real-world finite resource (i.e. the fact that atoms have a finite size). See the Wheat and Chessboard problem [wikipedia.org] for reference.

Re:Exponential growth (2)

NixieBunny (859050) | about 4 months ago | (#45616249)

The historical fact that 20% per year die shrinkage was possible for 50 years running just means that atoms are a lot smaller than the first IC features.

It was good while it lasted.
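
Taking the 20%-per-year figure at face value, the arithmetic below shows why the run had to end; the ~10 um starting feature size is an approximation for early-1970s ICs:

    /* 50 years of 20%/year linear shrink: feature size ends up at
     * 0.8^50 ~= 1.4e-5 of where it started. From ~10 um that lands below
     * the size of a silicon atom, so the scaling has to stop. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double start_nm = 10000.0;         /* ~10 um, approximate early-1970s feature size */
        double factor   = pow(0.8, 50.0);  /* 50 years of 20%/year shrink */
        printf("shrink factor: %.2e\n", factor);
        printf("resulting feature size: %.3f nm\n", start_nm * factor);
        printf("(a silicon atom is roughly 0.2 nm across)\n");
        return 0;
    }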

Re:A decade long product cycle sounds good to me (1)

Artifakt (700173) | about 4 months ago | (#45616309)

One of the things that allows the US government to claim the inflation rate is extremely low is that they get to adjust for improved tech capabilities. If Moore's law is finally hitting its end, the extra value of computation on cheaper, newer iron will stop being one of the things that lets them fudge the reporting. The other most major fudge area is those stagnant wages you alluded to, which will have to become where just about all of the lying with statistics will take place in the future. It's interesting you found yourself connecting these same two factors to clarify your point, despite what looks like a completely different subject.

Re:A decade long product cycle sounds good to me (1)

Dahamma (304068) | about 4 months ago | (#45616477)

Just because the increase in transistor count slows down, that doesn't mean *inflation* will increase. It just means technological gains will be slower. You think somehow the average computer is just going to jump to $10,000 because Intel needs to charge $8000 for some absurd chip no one wants? If that were true, the Itanium would have been popular.

Or to use the usual "car analogy" ;) - cars just haven't changed that drastically year over year for many decades - they have had slow, incremental improvements while still keeping them affordable to the masses, and have not in themselves had a major impact on "inflation" (gasoline, on the other hand. . .) Sure, you can go buy a Tesla if you want, but that doesn't change the INFLATION RATE, it's a different product!

Re:A decade long product cycle sounds good to me (3, Insightful)

Opportunist (166417) | about 4 months ago | (#45615865)

Considering the quality of contemporary components, you'll still be upgrading every 2-3 years. Or however long the warranty in your country is.

Just in time too. (4, Insightful)

140Mandak262Jamuna (970587) | about 4 months ago | (#45615581)

Well, we had a good run. 99% of the computing needs of 99% of the people can be met by existing electronics. For most people, network and bandwidth limit their ability to do things, not raw computing power or memory. So Moore's observation (it ain't no law) running out of steam is no big deal. Of course, the tech companies need to transition from selling shiny new things every two years to a more sedate pace of growth.

Re:Just in time too. (5, Insightful)

Anonymous Coward | about 4 months ago | (#45615653)

When people say this, I think that the person is not being imaginative about the future. Sure, we can meet 99% of current computing needs, but what about uses that we have not yet imagined?

Image processing and AI are still pretty piss-poor, and not all of it is bound by network and bandwidth limits. Watch a Roomba crash into the wall as it randomly cleans your room: Dark Ages!

Re:Just in time too. (1)

DigiShaman (671371) | about 4 months ago | (#45615657)

Get rid of the bloat and start coding in ASM. Of course, those developers aren't cheap are they?

Re: Just in time too. (3, Insightful)

VTBlue (600055) | about 4 months ago | (#45615739)

Hold the boat; a return to C or C++ would be a HUGE boost, no need to throw the baby out with the bathwater.

ASM programmers could never build the rich content apps the world relies on today. The code would be ridiculous; think the worst COBOL app times a thousand, for every application used today.

No, moving dynamic languages and compiling them to optimized C, or breaking out high-level critical code into optimized C/C++, is what every major web service is focusing on today. Facebook, for example, is realizing well over 50% gains just by replacing some PHP components with unmanaged code.

Re: Just in time too. (1)

rubycodez (864176) | about 4 months ago | (#45615913)

The most bloated crap on the planet is written in C/C++, by Microsoft.

Putting web-facing services into C is just asking for exploits due to the language's deficiencies (no size or limit checking, etc.). Hell, most malware today propagates via about a dozen common mistakes that "genius" programmers make again and again, because they're so clever they are morons.

Re: Just in time too. (2)

Dahamma (304068) | about 4 months ago | (#45616507)

The most bloated crap on the planet is written in C/C++, by Microsoft.

No, no, it's not. The most bloated crap on the planet can be seen every day on many of your favorite web sites.

Why do in 20,000 lines of C++, linked efficiently into a binary and shared libraries, what you can do in 50,000+ lines of JavaScript, most of which is included without any knowledge of what's in it and just bloats your browser without actually being executed?

Not that Microsoft should be absolved of blame there. WinJS is an abomination.

Re: Just in time too. (0)

Anonymous Coward | about 4 months ago | (#45615931)

No, moving dynamic languages and compiling them to optimized C, or breaking out high-level critical code into optimized C/C++, is what every major web service is focusing on today. Facebook, for example, is realizing well over 50% gains just by replacing some PHP components with unmanaged code.

Well, that's because the website was coded by a script-kiddie creep who originally wanted to create a platform for harassing women and only ever got anywhere because his family was rich.

Re: Just in time too. (0)

Anonymous Coward | about 4 months ago | (#45616251)

High Level Languages are the future!

Re: Just in time too. (1)

gweihir (88907) | about 4 months ago | (#45616387)

On MC68xxx it was possible and was being done. It could also be done on Intel, but that assembler model is so cluelessly complex, the language is a real issue. "Content rich" has nothing to do with it.

As to C, competent people are using it, no need to hold the boat. Just realize that all those that can only do Java are not competent programmers. Also, C coders are highly sought after, see, e.g. http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html [tiobe.com]

Code reviews I have done confirm this: Java programmers make the most clueless mistakes and do the least research when they actually need to code logic themselves. My explanation is that they are so used to just calling libraries that they never learn any real programming. C coders cannot do that, and hence more of them understand time and space complexity, algorithms, and efficiency, and think before coding. That is not to say all C coders are good.

Re:Just in time too. (5, Interesting)

Opportunist (166417) | about 4 months ago | (#45615857)

Erh... no.

As an "old" programmer who happens to know a few languages, ASM for a few different machines among them, I can reassure you that you do NOT want to return to the good ol' days of Assembler hacking. For more than one reason.

The most obvious one is maintenance. I still write ASM for embedded applications where size does matter because you're measuring your available space in Bytes. Not even kBytes. Where it matters that your code takes exactly this or that many cycles, none more, none less. But these are very, very specific routines with a "write once, never touch again" policy in mind. You do not want to be the poor bastard who gets to maintain ASM code. Even less so if it's not your own (which is already anything but trivial). ASM is often a very ugly mess of processor side effects being used for some kind of hack because you simply didn't have the time and/or space to do it "right".

C is probably the closest you should get to the "metal" today. Unless of course you have a VERY good reason to go lower, but I cannot really think of anything that doesn't deal with the OS itself.

Re:Just in time too. (5, Interesting)

JanneM (7445) | about 4 months ago | (#45615917)

As an addendum to the parent (I, too, have a background in ASM programming): you're working at such a low level of detail that any application of non-trivial size becomes extremely difficult to write truly effectively. You just can't keep so many details in mind at once. And when you need to work as a team, not alone, interfacing code becomes a nightmare.

So of course you abstract your assembler code. You define interfaces, develop and use libraries of common application tasks, and just generally structure your code at small and large scales.

But at that point, you are starting to lose the advantage of ASM. A good, modern C compiler is a lot better than you at finding serendipitous optimization points in structured code, and it is not constrained by human memory and understanding, so it doesn't need to structure the final code in a readable (but slower) way.

Small, time-critical sections, fine. Small embedded apps on tiny hardware, no problem. But ASM as a general-purpose application language? That stopped making sense decades ago.
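
As a small illustration of that point, plain structured C like the loop below is something mainstream compilers (gcc or clang at -O2/-O3, for instance) will typically unroll and often vectorize automatically; matching that by hand in assembly is tedious and fragile:

    /* Plain, structured C: sum an array of 32-bit integers. An optimizing
     * compiler will typically unroll this and, where the target supports it,
     * emit SIMD instructions (the "serendipitous optimization" mentioned
     * above) without the source saying anything about it. */
    #include <stddef.h>
    #include <stdint.h>

    int64_t sum_i32(const int32_t *x, size_t n)
    {
        int64_t total = 0;
        for (size_t i = 0; i < n; i++)
            total += x[i];
        return total;
    }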

If there is such a compiler (1)

tepples (727027) | about 4 months ago | (#45616273)

A good, modern C compiler is a lot better than you at finding serendipitous optimization points in structured code

Provided that a developer can find and afford a "good, modern C compiler" targeting a given platform. What's the state of the art in compilers for 6502-based* microcontrollers again? Last I checked, code produced by cc65 was fairly bloated compared to equivalent hand-written assembly language. And I'm told that for years, GCC severely lagged behind $6000-per-seat Green Hills compilers.

* Why 6502? Maybe I'm making an NES game for the competition [nintendoage.com]. Or maybe I need to code a hash table for the storage controller in a Terminator [pagetable.com].

Re:If there is such a compiler (2)

JanneM (7445) | about 4 months ago | (#45616371)

Provided that a developer can find and afford a "good, modern C compiler" targeting a given platform.

The thread is about application development on general-use PCs, which means Intel's compiler, the MS compiler, gcc, and the like on x86 or ARM.

Re:Just in time too. (2)

50000BTU_barbecue (588132) | about 4 months ago | (#45616153)

Oh yes. I just did a small microcontroller project for a customer, and having to re-learn how to compare two 8-bit bytes was enough for me. "Not this crap again!" Move one operand to the accumulator. Subtract from memory. Um, which way around is it again? ACC - memory, or memory - ACC? Do I need to clear the carry bit myself first? Wait, how do I mask that bit again? It's AND for resetting to 0, OR to set to 1? Wait, there's a macro for that already?

10 minutes later, I'm fairly confident I've put the three instructions in the right order... OK, now what is it again? If the numbers are the same, the result is zero. OK, so I test for the ZERO flag. Wait, where is it again? Bank 0? Bank 1? Oh it's mirrored to all banks? OK. Well, I just wanted to test greater than. So I guess that means the carry bit isn't set?

if a>b then LED=ON is a bit simpler. It's only because I had to use the microcontroller that was already there, and it was designed to use as little power as possible to spare the battery. But assembler is a pain, especially if those fine details of binary math are far in the past. I suppose I could spend a week or so re-learning all that... but who'd pay me to do that to get a low-battery warning light?

I'm just happy I didn't have to compare two 12 bit numbers on an 8 bit machine.
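
For contrast with the accumulator-and-carry dance described above, the same check in C is one line; the register name and threshold below are hypothetical placeholders for a generic 8-bit micro, not registers of any particular part:

    /* The comparison described above, written in C: the compiler emits the
     * load/subtract/test-flags sequence. LED_PORT and the threshold are
     * made-up placeholders. */
    #include <stdint.h>

    extern volatile uint8_t LED_PORT;   /* hypothetical output register */
    #define LOW_BATTERY_THRESHOLD 42u   /* made-up 8-bit ADC threshold  */

    void update_battery_led(uint8_t reading)
    {
        if (reading > LOW_BATTERY_THRESHOLD)   /* the "if a>b then LED=ON" case */
            LED_PORT = 1;
        else
            LED_PORT = 0;
    }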

Re: Just in time too. (1)

VTBlue (600055) | about 4 months ago | (#45615685)

While most non-tech people would agree with your 99% statement, if it were left to people with this belief, we'd still be stuck with command-line PCs. Hell, why innovate at all?! The reality is that we are really just scratching the surface of ubiquitous computing. PCs absolutely need to get faster because life has more resolution, more data, and an ever-increasing demand for analysis. I have a perfectly working iBook G4 that can't even browse today's web because it's so media-intensive. Imagine a world where HD content or high-DPI displays are the norm. Imagine the amount of number crunching a future version of Kinect needs to better understand sensor data.

The point is that if you spend a few minutes thinking about all the things it would be nice to be able to do when not at a computer, you will realize that our demand for local compute capacity is not really slowing. The business model may be changing, but we all still want better, faster, cheaper, and more.

It's fucking 2013 and voice recognition still sucks beyond belief in real-world scenarios. We need more compute and storage and bandwidth to address the issues.

Re:Just in time too. (0)

Anonymous Coward | about 4 months ago | (#45615859)

Is that you Bill Gates?

Re:Just in time too. (2)

Dahamma (304068) | about 4 months ago | (#45616489)

Or someone actually comes up with something *new* in computing and changes the game again. For example, quantum computing has seemed like a bit of a pipe dream so far, but a major breakthrough there would kickstart a whole new era of development.

Or maybe it will come from another direction - if we can only improve 2 of (speed, power consumption, cost), what if someone came up with an exponentially improved battery technology? And/or drastically reduced the power consumption for the same cost? Those could easily result in some very interesting new technologies spurring new industries we haven't even thought of yet.

On schedule (4, Interesting)

Animats (122034) | about 4 months ago | (#45615585)

About ten years ago, I went to a talk at Stanford where someone showed that the increasing costs of wafer fabs would make this happen around 2013. We're right on schedule.

Storage can still get cheaper. We can look forward to a few more generations of flash devices. Those don't have to go faster.

Re:On schedule (1)

gweihir (88907) | about 4 months ago | (#45616413)

Indeed. The slow-down has been happening for about a decade now. My personal indicator is that once a year or so, I think about upgrading my CPU. For the last few years, I have not been able to find anything significantly faster. That used to be no problem. I have to admit that I quite like this trend. Maybe we can now start to build better software?

What a Luddite (0)

Anonymous Coward | about 4 months ago | (#45615593)

Technology gets better. All technologies. All the time. Forever. We'll just 3D print our own 64GB flash chips at home, bud, thank you very much.

Re:What a Luddite (1)

gweihir (88907) | about 4 months ago | (#45616429)

Nonsense. A lot of tech is finished and does not get better at all. For a common item, look at the pencil or the hammer. They are finished. They were available in the same quality decades ago. They do not get cheaper. Or take paper. Or take gate logic, foil capacitors or discrete transistors. There are countless other examples.

Moore's law (0)

Anonymous Coward | about 4 months ago | (#45615625)

says nothing about speed, power consumption, or cost.

FFS, it's the first sentence in the damn Wikipedia article!

Re:Moore's law (1)

alexander_686 (957440) | about 4 months ago | (#45615699)

Ummmm... Moore's law mentions cost and speed explicitly. The rule is: every 2 years, twice as many transistors at half the cost. If transistor density doubles, the speed doubles - more or less - which was more true in the '70s than today.

Re:Moore's law (2)

rubycodez (864176) | about 4 months ago | (#45615933)

Wrong; your reading comprehension is abysmal. The number of transistors doubling says NOTHING about density, only the number of transistors, without any mention of the area taken. Speed is not mentioned at all.

Search "Samueli AND insider trading" ... (0)

Anonymous Coward | about 4 months ago | (#45615643)

This character Samueli has an agenda which doesn't tend to involve quaint concepts like honesty or ethical behavior.

Do a search on his past behavior and ask yourself if you really think it is a wise choice to take virtually anything he says at face value.

Software keeping pace? (5, Insightful)

CapOblivious2010 (1731402) | about 4 months ago | (#45615655)

If that's true, we can only hope that the exponential bloating of software stops as well. Software has been eating the free lunch Moore was providing before it got to the users; the sad reality is that the typical end-user hasn't seen much in the way of performance improvements - in some cases, common tasks are even slower now than 10 years ago.

Oh sure, we defend it by claiming that the software is "good enough" (or will be on tomorrow's computers, anyway), and we justify the bloat by claiming that the software is better in so many other areas like maintainability (it's not), re-usability (it's not), adherence to "design patterns" (regardless of whether they help or hurt), or just "newer software technologies" (I'm looking at you, XAML&WPF), as if the old ones were rusting away.

Re:Software keeping pace? (2)

Opportunist (166417) | about 4 months ago | (#45615811)

You know, I'm kinda tempted to see how an ancient version of Windows + Office would run on a contemporary machine.

Provided they do at all, that is...

Re:Software keeping pace? (1)

sjwt (161428) | about 4 months ago | (#45615981)

I tried a few years ago to get Access 97 running on Windows 7, no luck... And so I moved to OpenOffice, then LibreOffice.

Re:Software keeping pace? (4, Insightful)

adri (173121) | about 4 months ago | (#45616055)

Go get Windows 3.1 and Works. Stick it in a vmware VM. Cry at how fast the VM is.

-adrian

Re:Software keeping pace? (4, Insightful)

mcrbids (148650) | about 4 months ago | (#45615843)

Software has been eating the free lunch Moore was providing before it got to the users; the sad reality is that the typical end-user hasn't seen much in the way of performance improvements - in some cases, common tasks are even slower now than 10 years ago.

This point of view is common, even though its odd disparity with reality makes it seem almost anachronistic. Software isn't bloating anywhere near as much as expectations are.

Oh, sure, it's true that much software is slower than its predecessor. Windows 7 is considerably slower than Windows XP on the same hardware, and XP is a dog compared to Windows 95 on the same hardware. But the truth is that we aren't running on the same hardware, and our expectations have risen dramatically. In actual fact, most implementations of compilers and algorithms show consistent improvements in speed. More recent compilers are considerably faster than older ones. Newer compression software is faster (often by orders of magnitude!) than earlier versions. Software processes such as voice recognition, facial pattern matching, lossy compression algorithms for video and audio, and far too many other things to name have all improved consistently over time. For a good example of this type of improvement, take a look at the recent work on "faster than fast" Fourier Transforms [mit.edu] as an easy reference.

So why does it seem that software gets slower and slower? I remember when my Dell Inspiron 600m [cnet.com] was a slick, fast machine. I was amazed at all the power in this little package! And yet, even running the original install of Windows XP, I can't watch Hulu on it - it simply doesn't have the power to run full screen, full motion, compressed video in real time. I was stunned at how long (a full minute?) the old copy of Open Office took to load, even though I remember running it on the same machine! (With my i7 laptop with SSD and 8 GB of RAM, OpenOffice loads in about 2 seconds)

Expectations are what changed more than the software.

Re:Software keeping pace? (0)

Anonymous Coward | about 4 months ago | (#45615907)

Have you actually compared simple tasks in Win7 and XP? Try copying files between a USB stick and the hard drive. I happen to have a laptop from 2005 with XP on it that I still use for work alongside my brand new, no-bloat Win7 workstation with umpteen cores of processing power. Copying files on the XP laptop is about three times faster, even if you ignore the lack of "yes to all" when overwriting a file in Win7. I don't know what it does (combs through and indexes the files for search purposes?) but Win7 is terrible at this considering it has ~100x the processing power at its disposal.

Re:Software keeping pace? (1)

Billly Gates (198444) | about 4 months ago | (#45616041)

Have you actually compared simple tasks in Win7 and XP? Try copying files between a USB stick and the hard drive. I happen to have a laptop from 2005 with XP on it that I still use for work alongside my brand new, no-bloat Win7 workstation with umpteen cores of processing power. Copying files on the XP laptop is about three times faster, even if you ignore the lack of "yes to all" when overwriting a file in Win7. I don't know what it does (combs through and indexes the files for search purposes?) but Win7 is terrible at this considering it has ~100x the processing power at its disposal.

Do you have SEP or McCrappy AV? Or is the USB port running at 1.0 speed on the newer machine?

I notice no slowdowns or speed-ups in my case. Something sounds fishy.

Re:Software keeping pace? (0)

Anonymous Coward | about 4 months ago | (#45616077)

Sounds like your hardware is fucked.

Re:Software keeping pace? (1)

Billly Gates (198444) | about 4 months ago | (#45616065)

Windows 7 is much faster than XP on a modern system.

With AHCI vs. EIDE on SATA, even a mechanical drive is quicker. The swapping algorithm in XP is HORRIBLE. XP does not support command queuing on SATA, which means that if it is swapping like a mad dog (even if RAM is free) your other files won't load until it is done, hence the slower OpenOffice load. XP also does not scale SMP-wise beyond 2 CPUs that well.

Windows 7 was compiled with more flags for video (SSE3) and compression, and uses extra registers on modern CPUs, which is another boost.

Many have a negative view of Windows 7 out of fear of change, because it looked a little like Vista and they subconsciously looked for bad things.

Really, I love Windows 7 and find it superior to XP, and I have a hard time understanding why people buy new machines and waste a weekend trying to hack XP to work poorly on them.

The indexing only happens during the install. It is not like Vista at all, and I like instant search; I never even use the mouse anymore to load programs from the start menu, I just type away. It is not bloated at all compared to XP and is more efficient.

Re:Software keeping pace? (2)

tepples (727027) | about 4 months ago | (#45616509)

Really, I love Windows 7 and find it superior to XP, and I have a hard time understanding why people buy new machines and waste a weekend trying to hack XP to work poorly on them.

Because they own specialist peripherals with no Windows 7 driver. These may include printers whose manufacturer is trying to pad revenue with repurchases to replace otherwise working hardware, or drivers whose hobbyist author can't afford to renew a kernel-mode code signing certificate. Or because they own copies of expensive specialist proprietary software that doesn't run properly under Windows 7, even in compatibility mode.

Re:Software keeping pace? (1)

Tablizer (95088) | about 4 months ago | (#45616419)

I was stunned at how long (a full minute?) the old copy of Open Office took to load, even though I remember running it on the same machine! (With my i7 laptop with SSD and 8 GB of RAM, OpenOffice loads in about 2 seconds)

Open Office has a pre-load option for quick launch. (At the expense of machine startup time.) You may have unwittingly selected that option on the newer machine when you installed it. 60 sec. versus 2 sec. doesn't sound right.

Re:Software keeping pace? (1)

Anonymous Coward | about 4 months ago | (#45615957)

What complete BS. 15 years ago I'd turn on my PC and go make a sandwich. A full minute to launch a program was not uncommon. Even middle-road modern computers are speedballz fast at all those common tasks now.

Re:Software keeping pace? (0)

Anonymous Coward | about 4 months ago | (#45616069)

A full minute to launch a program was not uncommon.

Luxury!
I use ChromeOS over dialup you insensitive clod!

Heard that before... (0)

Anonymous Coward | about 4 months ago | (#45615701)

8080s used to cost above $1000 USD at one time... It was a very advanced chip that required very specialized equipment to achieve that transistor density.

Re:Heard that before... (0)

Anonymous Coward | about 4 months ago | (#45615885)

That's right, that obviously means there are no limits. Thanks for clearing that up.

Wait, what? (1)

pushing-robot (1037830) | about 4 months ago | (#45615731)

Without adjusting for inflation, Intel's processors cost about as much as they did 20+ years ago.

http://www.krsaborio.net/intel/research/1991/0422.htm [krsaborio.net]

http://www.newegg.com/Product/Product.aspx?Item=N82E16819116492 [newegg.com]

http://www.newegg.com/Product/Product.aspx?Item=N82E16819116899 [newegg.com]

http://www.nytimes.com/1992/01/09/business/company-news-intel-moves-to-cut-price-of-386-chip.html [nytimes.com]

http://www.newegg.com/Product/Product.aspx?Item=N82E16819116775 [newegg.com]

Almost every other component (except maybe the GPU) has dropped tremendously in price over the past couple decades, but CPUs have stayed almost flat. Hopefully the newly competitive ARM processors will finally drive prices down (iSuppli estimates a measly $18 for Apple's new A7 CPU+GPU) but I'm not holding my breath.

Re:Wait, what? (1)

JanneM (7445) | about 4 months ago | (#45615783)

Pointless comparison without inflation adjustment. If you do adjust for inflation, CPUs have become a lot cheaper as well.
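
A sketch of what that adjustment does to the comparison; the $950 nominal 1991 price and the ~2.5% average annual inflation rate are assumed for illustration, not taken from the linked pages:

    /* Inflation adjustment: price_2013 = price_1991 * (1 + r)^years.
     * With ~2.5% average annual inflation over 22 years, a flat nominal
     * price is a real-terms price cut of roughly 40%. Inputs are assumptions. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double price_1991 = 950.0;   /* assumed nominal 1991 CPU price   */
        double rate       = 0.025;   /* assumed average annual inflation */
        int    years      = 2013 - 1991;

        double in_2013_dollars = price_1991 * pow(1.0 + rate, years);
        printf("$%.0f in 1991 is about $%.0f in 2013 dollars\n",
               price_1991, in_2013_dollars);
        return 0;
    }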

how fast does it have to be (1)

Ralph Spoilsport (673134) | about 4 months ago | (#45615743)

To run MS Office and watch cat videos on YouTube? Not very? Then I guess "good enough" computing will moderate the situation...

Is this new? (0)

Anonymous Coward | about 4 months ago | (#45615765)

When has chip manufacturing not been more expensive and complicated with each new generation? 8088 chips would seem like magic in WW2, and today you could have a room of drunk monkeys make them out of materials you would find in a kindergarten craft drawer.

And I don't think "cheaper" was ever part of Moore's law.

Re:Is this new? (1)

queazocotal (915608) | about 4 months ago | (#45615921)

Yes, this is new.
I got my first computer about 30 years ago.
It is approximately a million times slower than the desktop I bought for less money this year. Admittedly, it uses about a quarter of the power.

Yes, each successive fab has been costly - with newer ones costing over a billion dollars.
But, until relatively recently, you could go from 2x to x feature size, and get lower power consumption, and higher performance.

This simple geometrical scaling has broken down - narrower features may mean you need more power, not less, and though they may be able to run at a higher frequency, cooling becomes a major problem.

As the summary stated, you now can't get - for any amount of money spent on a new fab - a denser process that, simply by virtue of geometry, gives you better power and clock speed.

I feel I'm really unlikely to see another factor of a million increase in computation, barring mature nanotech.

Re:Is this new? (1)

gweihir (88907) | about 4 months ago | (#45616445)

No, the monkeys cannot. Even making an 8088 requires a fab costing a few hundred million. These chips are cheap today because they take so little wafer space, but the wafer and doing anything with it have not gotten cheaper. The other thing is that the fabs that old chips are made in were paid off a long time ago. Bomb the right 20 places on the planet and the human race would need a few decades to be able to make chips again.

Hmm... (1)

Opportunist (166417) | about 4 months ago | (#45615805)

I think a cartel investigation is in order. If someone tries to explain a price hike in a field that is supposedly competitive, especially when the reason given is threadbare at best, it's time to watch for price fixing.

Re:Hmm... (1)

rubycodez (864176) | about 4 months ago | (#45615949)

No, they are almost at the limit of physics for CMOS. At 10nm, gate thickness becomes a monolayer, and quantum tunneling becomes a significant problem.

Some exciting alternative technologies exist, though.

Yawn (5, Funny)

jon3k (691256) | about 4 months ago | (#45615923)

Oh look, the 100th executive to predict the end of Moore's law in the last month.

Re:Yawn (0)

Anonymous Coward | about 4 months ago | (#45616143)

I'm sure his skills are a _real_ asset. /s :-P

Re:Yawn (0)

Anonymous Coward | about 4 months ago | (#45616155)

That way when Moore's law ends they can all add "predicted the end of Moore's law" on their resume to make themselves more sellable. Who wouldn't want an executive that can predict technology?

Re:Yawn (1)

jd (1658) | about 4 months ago | (#45616189)

Oh no! You know what that means! 100 monkeys is the critical threshold! The brains of all of humanity will now be wiped! I can feel it sta....gurhcfjgjxhhfhcCARRIER LOST

Re:Yawn (5, Funny)

Anonymous Coward | about 4 months ago | (#45616409)

18 months from now, it will only require 50 executives to predict the end of Moore's law.

Maybe... (2)

Alex Vulpes (2836855) | about 4 months ago | (#45615983)

But an end to Moore's Law has been predicted before multiple times, and it hasn't happened yet. (Things have slowed down, yes, but they're far from stopping.) A few years back hard drives were predicted to reach a storage density limit, but this was solved by turning the magnetic cells vertical. So Moore's Law may finally be coming to an end, but don't be surprised if something new comes along and blows silicon transistors away.

Whenever someone says Moore's law is ending... (0)

QuietLagoon (813062) | about 4 months ago | (#45616009)

... I look at that person as one who has lost confidence in American ingenuity.

When one admits defeat, one will succumb to defeat.

Re:Whenever someone says Moore's law is ending... (1)

Anonymous Coward | about 4 months ago | (#45616149)

He hasn't lost his confidence, it's just that his inner fan boy has died.

People are falling out of the "Cult of Tech" left and right. They swallowed the marketing message hook, line, and sinker, and are starting to get bitter that their life hasn't become amazing and fulfilling after buying their 4th iPhone.

Personally, I see this as nothing more than a lull that happens every so many years, specifically in gaming. Once 1920x1080 (or 1200) became the pinnacle of screen resolution, the incentive to improve raw hardware died. We've been running laterally for some time now. Multiple monitors and energy conservation can only take you so far innovatively. The big killer is the lack of fast ISP service for internet gaming. Now, though, we are starting to see the rollout of truly fast fiber service, and there is a noticeable uptick in much higher resolution monitors becoming available, which is going to start putting a lot of pressure on the PC hardware that has to feed them.

Re:Whenever someone says Moore's law is ending... (0)

Anonymous Coward | about 4 months ago | (#45616219)

"one who has lost confidence in American ingenuity"

LOL, hate to be a hater, but when I look at this post, I see someone who's unaware of where most of the advances in microelectronics come from.

Or, y'know.... (4, Interesting)

jd (1658) | about 4 months ago | (#45616181)

Encourage inventors rather than patent troll them into oblivion.

Just a thought, I know it would destroy much of the current economic model, but maybe - just maybe - those expensive techniques are merely the product of insufficient brains. Does the semiconductor world forget so soon that "cutting edge" in the 1970s was to melt silicon and scrape off the scum on top? Does it ever occur to anyone that, just as we use reduction techniques to obtain silicon today because older methods were crap, there exists the potential that the expensive, low-quality techniques of today could be the rejects of tomorrow?

There are no inventors any more because silicon is a bloody expensive field to get kicked out of by patent trolls. Mind you, it's also a difficult area to get into, what with TARP being used to fund golden parachutes, bonuses and doubtless a few ladies of the night rather than business loans and venture capital. There's probably a few tens of thousands of mad scientists on Slashdot, and I'm probably one of the maddest. Give each of us 15 million and I guarantee the semiconductor market will never be the same.

(P.S. For the NSA regulars on Slashdot, and if you don't know who you are, you can look it up, feel free to post on your journals or as an article all the nifty chip ideas you've intercepted that have never been used. After all, you're either for us or for the terrorists.)

Who says increased cost is the better way to go? (1)

thepacketmaster (574632) | about 4 months ago | (#45616233)

No more speed increases coupled with decreases in power consumption and cost. Fair enough, but who says increasing cost is the way to go? (That's rhetorical, we all know it's the business people saying that). Focus on less power consumption and at least keep costs the same. Use the chips we have to make systems with more processors. Take advantage of the cloud and Hadoop. Refocus on more efficient coding practices. We're so focused on chips getting faster, but parallel processing is a viable method of getting more processing done.