
Opportunities From the Twilight of Moore's Law

timothy posted more than 3 years ago | from the finest-corinthian-leather dept.

Hardware Hacking 148

saccade.com writes "Andrew 'bunnie' Huang just posted an excellent essay, Why the Best Days of Open Hardware are Yet to Come. He shows how the gradually slowing pace of semiconductor density actually may create many new opportunities for smaller scale innovators and entrepreneurs. It's based on a talk presented at the 2011 Open Hardware Summit. Are we entering an age of heirloom laptops and artisan engineering?"


patents (1)

ToBeDecided (2426750) | more than 3 years ago | (#37482988)

As long as this technology is patentable, corporations will not allow it.

Re:patents (1)

jhoegl (638955) | more than 3 years ago | (#37483088)

To be fair, patents do help companies recoup their costs, but the length of patents, especially in the tech industry, is not realistic.

Perhaps what you are talking about are "method patents", or software patents. And yes, those are the worst technical innovation inhibitors ever produced by the United States of Corporate America.

Re:patents (1)

Surt (22457) | more than 2 years ago | (#37486152)

So a couple of decades then.

Twilight (-1, Redundant)

Anonymous Coward | more than 3 years ago | (#37482990)

I hated that movie...

doubt it (2)

Osgeld (1900440) | more than 3 years ago | (#37483002)

The market won't tolerate it. Look at the 1980s: everyone and their brother was making computers, oftentimes using the same core parts but totally incompatible with each other. When the IBM clones started to hit the market, all those makers vanished. It was not because the IBM format was better; it was because everyone got even footing on a platform, and they could be confident that an investment in hardware wouldn't be worthless 6 weeks later.

We still see this today, i.e. "ooh, Windows 8" ... "oh, desktop Windows 8 doesn't run the same shit as ARM Windows 8? Well, fuck that!"

Re:doubt it (2)

0123456 (636235) | more than 3 years ago | (#37483078)

When the IBM clones started to hit the market, all those makers vanished.

But that took years. For quite some time the ST, Amiga and the like were considered to be the home computers to own while PCs were primarily for business use; only when Windows 3.0 came along did the PC really take off for home users because it then offered most of the capabilities of those other computers at a lower price.

Re:doubt it (1)

Dogtanian (588974) | more than 3 years ago | (#37483512)

For quite some time the ST, Amiga and the like were considered to be the home computers to own

Waiting for some US user to come along in 3, 2, 1..... and explain that you're totally wrong because *everyone* knows that the Amiga was a total flop. Where "everyone" is defined as US users who don't know or care that their market *wasn't* synonymous with the situation worldwide and that the Amiga was massively popular in Europe. Then again...

PCs were primarily for business use; only when Windows 3.0 came along did the PC really take off for home users because it then offered most of the capabilities of those other computers at a lower price.

You're kind of guilty of the reverse yourself here :-) My understanding is that in the US, the home market went straight from the early-80s 8-bit computers (mainly the C64 there) to the PC compatibles at the higher end (despite their lousy specs and primarily text-based interface at that time (*)) and the NES at the low end. The Amiga and ST generally *were* a flop there.

(*) Yes, early versions of Windows and whatever-happened-to contender GEM [wikipedia.org] were around in the mid-80s. No, no one (or very few people) used them at the time - Windows *really* took off with Windows 3.1 in the early 90s, and GEM never did (though it *was* included with Amstrad PCs in the UK - which were the first really popular low-cost PC compatibles here - so some people, my Dad included, must have used it!).

Re:doubt it (1)

aztracker1 (702135) | more than 3 years ago | (#37484126)

The Apple II was pretty popular as well, especially in education and the Mac wasn't too shabby in terms of market share by the beginning of the 90's as well. The 90's really brought the MS-Windows dominance, pushing it much farther ahead of other OSes.

Re:doubt it (1)

Dogtanian (588974) | more than 3 years ago | (#37484612)

The Apple II was pretty popular as well

I understand this was the case in the US. Again, in the UK, however, not so much - though my Dad *did* have one at work in the early 80s, and I understand some businesses used them before the PC became the de facto standard for business (if not home and hobbyist) use.

Probably didn't help that the PAL-compatible (European TV system) versions of the Apple II were apparently incapable of colour, because the original US Apples' colour was generated using idiosyncrasies of the US TV system that didn't work with the different spec of PAL.

I also suspect that its US success was due to establishing itself in the early days, so that even after it had been surpassed by other machines that offered more for less, the ecosystem and support made it worthwhile. I don't think that was ever the case here (did Apple push it as hard in Europe? did they focus on their home market first? and did tariffs and/or the generally lower disposable income in Europe hinder it?), hence other cheaper, more modern computers getting in by the time the market here took off in the early 80s(?)

especially in education

Nope, the BBC Micro (and to a lesser extent some Research Machines models) were the leaders in education here - never saw an Apple II at school, ever.

the Mac wasn't too shabby in terms of market share by the beginning of the 90's as well

Used a Mac in the school's English department circa 1993, and someone I knew had one, but they were never that common. Really, I get the impression that the Mac was always a bigger deal in the US than elsewhere in its early years.

That was the year I finished school, and also around the time the PC compatible *was* starting to take over and the worldwide market did become more homogenous.

Re:doubt it (1)

QuantumLeaper (607189) | more than 3 years ago | (#37484790)

The problem was that Commodore Europe knew how to market the Amiga; Commodore USA couldn't have marketed water to someone in the desert. An example is when they announced the REU for the 128: marketing never asked the engineers whether a RAM expansion for the 128 was even possible.

Windows 95 and the internet did more to market computers to home users than Win3.11 ever did.

Re:doubt it (1)

sick_soul (794596) | more than 3 years ago | (#37485112)

Yes, GEM was also included with my Amstrad PC-1512 here in Italy.
The PC also came with MS-DOS 3.20.

Both GEM and Windows were completely useless at the time, as there were no useful apps (=games and programming tools).

The Amstrad 1512 also had a special graphics mode, 640x200x16 colors that was not compatible with any of the graphic standards of the time [Hercules, TGA (Tandy), CGA, EGA]. That special graphics mode was supported only by GEM. But it was pointless, since there were no apps. GEM Paint was the only program I remember where I could use the 16 colors on screen that were advertised in the magazine-ads.

At the time, I looked up at the Amstrad 1640 as the "perfect" computer, with 640K RAM and an EGA adapter (this time, 16 colors on screen _for real!_).

Good times...

Re:doubt it (1)

Dogtanian (588974) | more than 3 years ago | (#37485254)

The Amstrad 1512 also had a special graphics mode, 640x200x16 colors that was not compatible with any of the graphic standards of the time [Hercules, TGA (Tandy), CGA, EGA].

IIRC according to my Dad, the Amstrads also had text mode(s) that weren't quite standard and caused some programs to crash due to the lack of a bottom line (or something like that). He considered them "almost" compatibles in that there were a few areas like that where they weren't *quite* standard that could cause problems. But I don't get the impression it was a major deal.

At the time, I looked up at the Amstrad 1640 as the "perfect" computer, with 640K RAM and an EGA adapter (this time, 16 colors on screen _for real!_).

I remember deciding I wanted a PC at some point in the late 80s, but could never have afforded one then. No great loss: unless you wanted to run that crappy text-based office software, an Amiga was a lot more impressive back then! :-)

Re:doubt it (1)

Renegrade (698801) | more than 3 years ago | (#37483868)

I don't see why that ever happened anyways. They were still making revisions to 8088 machines in 1985 and slowly moving towards the 80286 with a few models. The clone makers were a bit faster off the mark, but they were still making PC-BIOS/DOS machines.

I fail to see how a single-tasking, segmented-memory, 1-meg-max machine could be considered even remotely "professional", when even a lowly CoCo could run a multitasking system (OS-9. The non-Apple one). The IBM-PC was completely and totally a member of the 8-bit home computer club. Real IBMs even had a BASIC ROM!

Windows was a complete bomb too. It was fat, bloated, unresponsive and slow. 386enh mode did nothing for stability. Lack of responsiveness killed user experience, and resulted in RS-232 communication difficulties. What's the point of memory protection if you crash hard anyhow?

Yeah, sure, there were lots of games for the other systems*, but that didn't mean you had to put up with DOS / Win3.x hell just to run business apps.

NT-class Windows addressed some of the issues, but at the expense of uncontrolled bloat, DLL hell, and registry BS.

To give an idea to Win3.x people who buried their heads in the sand, here's a translation of Amiga technologies to modern PC technologies:

AUTOCONFIG -> Plug and Play
Datatypes -> codecs/filters
RigidDiskBlock(RDB) -> EFI boot (only leaner, faster, and less retarded)
Intuition Screens -> Whatever Windows 8 Metro term applies to having those full-screen Metro apps. Only Intuition screens could be shared by apps(feature) but could only tile vertically (limitation) and didn't restrict one to Metro-style only (feature++)
Preemptive multitasking -> Name hasn't changed~
Blitter -> GUI hardware acceleration (95% abandoned in Vista, 10% added back to Windows 7).
Volume name -> Like a volume label in Linux.
Drive name -> like a drive letter in Windows, only it's 30 characters in legacy AmigaDOS instead of 1 or 2.
DosPacket interface -> Asynchronous I/O.
DosPacket interface -> zero copy read/write.
AllocMem/AllocVec -> uhh nothing? Windows/Linux compilers/VMs still brk()s for memory and almost never gives it back, even under Java?

* = the PC was a fairly game-oriented system then, just not as good as the competitors. The irony is that it became the primary game system for a very long time, and is still a major force despite the number of "durp! I just want it to work!" people who have trouble double clicking.

Re:doubt it (0)

Anonymous Coward | more than 3 years ago | (#37484436)

I don't see why that ever happened anyways. They were still making revisions to 8088 machines in 1985 and slowly moving towards the 80286 with a few models.

The IBM PC AT (80286 only) debuted in 1984.

I fail to see how a single-tasking, segmented-memory, 1-meg-max machine could be considered even remotely "professional", when even a lowly CoCo could run a multitasking system (OS-9. The non-Apple one).

The lack of multitasking was a function of the software, not the hardware. As for why it could be considered professional and the Amiga not... old Amiga partisans always overestimate the importance and significance of multitasking. Always. They also tend to ignore that in practice, DOS power users had a lot of workarounds allowing them to do side tasks without exiting their main program, in the form of TSRs. It was hackish, but it worked well enough.

Aside from that...

1. Amigas had shitty video. Don't kill me, they did -- for professional purposes. The Amiga video hardware reflected its roots as a game console and only supported output to NTSC/PAL. This meant low resolution, low refresh rate, and interlaced flicker. In the 1980s, professional computing outside of the niches addressed by Amiga (video production, mostly) required high res (for the time) 70Hz+ progressive scan monochrome monitors, because high res flicker free color wasn't yet economical.

2. Amigas didn't have the right applications.

3. Amigas had a terrible operating system. Once again, don't kill me, they did. I don't care how cool you thought it was, in the 80s it was well known as a crashtastic disaster area, and that didn't improve until after Commodore had squandered lots of its market opportunity.

Blitter -> GUI hardware acceleration (95% abandoned in Vista, 10% added back to Windows 7).

Don't be dumb. Yes, Vista abandoned the blitter... in favor of adopting something much more powerful and general purpose, the GPU. Blitters have been a tiny, wimpy legacy wart on the side of GPUs for a decade now and are clearly on the way out. In this, Microsoft was actually playing catch-up to Apple, which has been using the GPU for GUI acceleration since OS X 10.2 (IIRC).

(I bet you're a typical Amigan who thinks blitters were invented by the Amiga's creators. If so, expand your horizons. It wasn't invented by anyone in the personal computer industry.)

AllocMem/AllocVec -> uhh nothing? Windows/Linux compilers/VMs still brk()s for memory and almost never gives it back, even under Java?

Wow, that's some cheek, pushing Amiga memory handling as an advantage over modern Windows/Linux.

Also, you seem to suffer from a deep misconception here. When you call brk() you aren't allocating real memory, you're telling the OS that your process would like a larger virtual address space. A call to undo the effects of brk() wouldn't make much sense. Calls to hint to the OS that "this process is no longer using these dirty pages, you may reclaim them at will without saving the contents" would. (And for all I know, such calls are already there.)
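For what it's worth, such hint calls do exist on Linux: madvise(2) with MADV_DONTNEED tells the kernel that a range of dirty pages can be reclaimed without shrinking the process's address space. A minimal C sketch (Linux/glibc assumed; anonymous mmap'd memory stands in here for the lazily-committed heap):

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 16 * 1024 * 1024;   /* 16 MB scratch buffer */

    /* Anonymous mmap'd memory behaves like the lazily-committed heap:
     * the address space exists, physical pages appear on first touch. */
    char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return 1;

    memset(buf, 0xAA, len);          /* touch every page: now they are dirty */

    /* Hint the kernel that the contents are no longer needed; the pages
     * may be reclaimed at will, while the address range stays valid and
     * reads back as zero-filled if touched again. */
    if (madvise(buf, len, MADV_DONTNEED) != 0)
        perror("madvise");

    printf("released %zu bytes of dirty pages back to the kernel\n", len);
    munmap(buf, len);
    return 0;
}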

Re:doubt it (1)

Anonymous Coward | more than 3 years ago | (#37483110)

I see you don't understand the concept the article is presenting.

Their point is that as the potential gains from developing a new architecture approach zero, the incentive to upgrade before the physical failure of the hardware will also approach zero.

Thus, in a theoretical future where computers have been up against the performance wall for half a decade, manufacturers will have to compete on the build quality and expected longevity of their products. Taken to its logical conclusion, if computing power were to completely flatline for a couple of generations, people could literally buy a computer when they go to college, use it for their professional life, and pass it on to their now college-aged grandchildren, who will use it until they pass it on to their grandchildren, etc.

This has nothing to do with how many competing platforms the market can support.

Re:doubt it (0)

Anonymous Coward | more than 3 years ago | (#37483172)

And yet that will never happen. The premise is absurd.

Re:doubt it (1)

Osgeld (1900440) | more than 2 years ago | (#37485802)

Well, your point happened about half a decade ago, and quality still doesn't matter because computers have become disposable.

Honestly, unless I try playing a game (or maybe editing HD video, which I don't do) I can't really tell an appreciable difference between my 2.25GHz AMD XP, my 2.5GHz X2 or my 2.8GHz X4, and two of those computers I fished out of the dumpster at my apartment. The software was screwed up, and I can imagine someone getting an estimate of 175 bucks in repairs on a $200 computer.

Re:doubt it (1)

jbolden (176878) | more than 3 years ago | (#37484214)

Actually, the grey box market was mainly in the early 1990s. The 1980s clones were much rarer; Compaq was just winning their lawsuits to allow an imitation of the IBM BIOS.

Re:doubt it (1)

Osgeld (1900440) | more than 2 years ago | (#37485716)

Mid to late 80's is when it really started hitting the fan, especially with the Asian clones, and very many of them could not care less about IBM and laws ... course that came back and bit them in the arse pretty quick, but the cat was out of the bag by then.

But heck, even the Tandy 1000 was released in '86 ... maybe a bit sooner, and that was a legal clone.

Re:doubt it (1)

sjames (1099) | about 3 years ago | (#37486740)

There were plenty in the '80s. The Computer Shopper was packed full of them. By the '90s we quit even worrying about how compatible they were (they were all 100%) and PC started referring to the class of machine rather than meaning specifically the IBM (where the others were called clones).

Oddly, Compaq was amongst the last to iron out the incompatibilities. TI never did, they just gave up on clones.

The first thing that came to mind... (1)

Jmanamj (1077749) | more than 3 years ago | (#37483020)

Was a wood-and-brass-encased laptop with exquisite scrollwork around the keyboard and webcam, inherited by an archaeologist who carries it around for data analysis and note taking.

Re:The first thing that came to mind... (0)

Anonymous Coward | more than 3 years ago | (#37483080)

do not get reference, please reference reference.

Re:The first thing that came to mind... (2)

Fallon (33975) | more than 3 years ago | (#37483128)

I believe you are talking about Datamancer's steampunk laptop. http://www.datamancer.net/steampunklaptop/steampunklaptop.htm

Re:The first thing that came to mind... (1)

fast turtle (1118037) | about 3 years ago | (#37486696)

Thank you for that link again. I'd seen this when it was first completed and loved the effort and detail that went into it.

Good, we don't need no fancy store-bought CPU's! (1)

elrous0 (869638) | more than 3 years ago | (#37483082)

My grandpappy made his own CPU's, you lazy whippersnappers! And if we're going to get back on top, American kids gonna start having to learn how to again. Now git' your lazy ass in that clean room and get to work!

You opened up the 'back then we ...' talk too low (1)

unity100 (970058) | more than 3 years ago | (#37483218)

You should at least open the bet with 'back then, we ate our own shit' -> then add 'young whippersnappers' and whatnot. No one can top 'eating our own shit back at that time'.

Re:You opened up the 'back then we ...' talk too l (1)

gstoddart (321705) | more than 3 years ago | (#37483266)

You should at least open the bet with 'back then, we ate our own shit' -> then add 'young whippersnappers' and whatnot. No one can top 'eating our own shit back at that time'.

Do not tempt rule 34 [urbandictionary.com] ... I shudder to ponder it.

Re:Good, we don't need no fancy store-bought CPU's (1)

geekoid (135745) | more than 2 years ago | (#37485792)

In my day, we just shouted zeros at the window.

I am a physicist (2)

drolli (522659) | more than 3 years ago | (#37483106)

and I seriously don't think that Moore's law will end soon. Bumps in both directions, extending over some time, are nothing unusual. New technologies will rise, and metal-oxide-semiconductor processes won't be dominating forever.

Re:I am a physicist (4, Interesting)

Guspaz (556486) | more than 3 years ago | (#37485352)

There are theoretical limitations to how small things can get, and how much work can be done per unit of space, but we're nowhere near that yet.

The author claims that semiconductor density improvements have been slowing over the past few years, but that's not true at all. One need only look at the past schedule of Intel's die shrinks, or their transistor counts, to realize that we're still going ahead at full steam. The pace of reductions has held pretty much constant to Moore's law for at least the past decade, and Intel's roadmaps seem to show that continuing for at least another two die shrinks (each of which will double density).

It's kind of amazing, when you think of it. Comparing the best of 2002 to 2012, you go from 90nm to 22nm. In just one decade, that is a 16.7x increase in density, and that doesn't even take architectural improvements into account.
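As a quick sanity check on that 16.7x figure, assuming density scales roughly with the inverse square of the feature size:

#include <stdio.h>

int main(void)
{
    double old_node = 90.0;   /* nm */
    double new_node = 22.0;   /* nm */
    double density_gain = (old_node / new_node) * (old_node / new_node);
    printf("90nm -> 22nm: ~%.1fx more transistors per unit area\n",
           density_gain);     /* ~16.7x */
    return 0;
}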

Re:I am a physicist (2)

i_b_don (1049110) | more than 3 years ago | (#37485562)

What are you smoking? Theoretical limitations aren't too far away. There are lots of games that we can yet play to increase performance, but miles of runway we don't have.

And yes, I remember a headline "Can we break the 1 micron barrier?" and I marvel that we're at 22nm lithography and shrinking.

d

Re:I am a physicist (1)

Anonymous Coward | more than 3 years ago | (#37485634)

Sub-20nm is truly difficult though. Photolithography is running out of steam. Currently they're using 193nm wavelength lasers, which is huge compared to the feature size. It is very difficult and expensive to move to the next available wavelength, extreme ultraviolet at 13nm.

Re:I am a physicist (0)

geekoid (135745) | more than 2 years ago | (#37485918)

A) Your job in no way makes you an expert in this, so stop using it to imply you are.

B) Moore's law is over.

Moore's law ONLY applies to silicon chips. Double the number of components in the same area at the same cost. This isn't happening anymore, at least not in the fabs that make consumer wafers.

I have a 2.6GHz chip in my computer. It's 3 years old. That means I should be able to buy a 10.4GHz chip for the same amount of money right now. Hmmm.

It's getting very hard to get the metal content in fabs down below 1 part per billion.
The wafers crack when being moved.
Electron microscopes are expensive to make, and to make finer.

And eventually we will have gate problems due to quantum properties.

Will computers get more powerful? Yes. Will Moore's law always be applicable? No.
BTW:
http://download.intel.com/research/silicon/moorespaper.pdf [intel.com]

Re:I am a physicist (1)

Surt (22457) | more than 2 years ago | (#37486198)

Moore's law is explicitly about transistor count, not clock speed. Compare the number of transistors in the cpu you bought, vs the number you can buy for the same price today.

Re:I am a physicist (0)

Anonymous Coward | more than 2 years ago | (#37486270)

I have a 2.6GHz chip in my computer. It's 3 years old. That means I should be able to buy a 10.4GHz chip for the same amount of money right now.

facepalm.jpg
No. That's so not how Moore's law works. Yes, Moore's law is continuing to hold. Since the "quantum effects" argument is based on a two-dimensional scale, the advent of 3D chips means we have quite a long way to go before it hits the end. Even the most cursory perusal of the Wikipedia article reveals http://en.wikipedia.org/wiki/File:Transistor_Count_and_Moore%27s_Law_-_2011.svg showing that 2011 has so far been slightly above expectations, not plateauing.

And as you sort of implied, a doubling of transistors does not equate to a doubling of cycles, and a doubling of cycles does not equate to a doubling of flops. Processing power is actually increasing at a rate proportionally greater than Moore's law because of things like parallel computing and better architectures. See: Top500 (which has a direct correlation to consumer units; if I recall correctly it's around 8 years from Top500 to consumer).

Re:I am a physicist (0)

Anonymous Coward | about 3 years ago | (#37486844)

You can buy a 10GHz chip for the same money:

4x 2.5GHz for the same price.

The performance gains have been used to fit more cores on the die, rather than to allow more space so the chip can be cooled at a higher clock speed.

And since when does experience in a related profession *not* count for something?

Re:I am a physicist (1)

Surt (22457) | more than 2 years ago | (#37486190)

How many atoms currently make up a transistor? How many atoms do you think is the smallest possible number to make up a transistor?
How many years of Moore's law does that leave?

(Hint: the answer to the last question is not a large number).
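A back-of-the-envelope version of that estimate, with loudly assumed numbers (silicon lattice constant ~0.54nm, a guessed practical floor of 5nm, 18 months per area doubling):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double feature_nm = 22.0;      /* current node, circa 2012 */
    double lattice_nm = 0.543;     /* silicon lattice constant */
    double floor_nm   = 5.0;       /* assumed practical limit */
    double months_per_doubling = 18.0;

    printf("lattice cells across a %gnm feature: ~%.0f\n",
           feature_nm, feature_nm / lattice_nm);

    /* density doubles each time the linear feature size shrinks by sqrt(2) */
    double doublings = log2(pow(feature_nm / floor_nm, 2.0));
    printf("area doublings left before %gnm: ~%.1f (~%.0f years at %g months each)\n",
           floor_nm, doublings, doublings * months_per_doubling / 12.0,
           months_per_doubling);   /* ~4.3 doublings, roughly 6-7 years */
    return 0;
}

Even with generous assumptions, the answer comes out at only a handful of doublings, which is the point being made here.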

not good for most people (1)

Threni (635302) | more than 3 years ago | (#37483130)

Good for them, but bad for everyone else. Users lose the continual improvements we're used to, and manufacturers and retailers have to deal with people making their kit last years longer as they have fewer reasons to upgrade. Probably very good for Google and other companies who "rent" computers.

Re:not good for most people (0)

Anonymous Coward | more than 3 years ago | (#37483258)

Maybe it's time to look for new work, new products, new things to make and spend money on, if you have any, that is.

Re:not good for most people (1)

sjames (1099) | about 3 years ago | (#37486808)

It's actually fairly bad for rental. When you can buy a usable low-end system for $300 or so, rental because you can't afford to buy isn't going to happen. That just leaves rental because you expect the thing to be obsolete in short order and want a rapid upgrade cycle. That will actually be less likely if the tech hits the wall.


This assumes I/O and software technology stagnates (0)

Anonymous Coward | more than 3 years ago | (#37483194)

Some new bit of software will require some widget that can't be supported on an existing platform, and poof, there goes your old platform.

Parabolical (1)

unity100 (970058) | more than 3 years ago | (#37483200)

That's what the development curve of open stuff looks like.

You can expect supercomputers made of open source chips in 10-15 years.

Moore (5, Funny)

Anonymous Coward | more than 3 years ago | (#37483206)

The number of people predicting the end of Moore's Law doubles every two years.

Definitely slowed ... (1)

gstoddart (321705) | more than 3 years ago | (#37483230)

It seems computers have been stuck at 3GHz (plus or minus a bit) for a while.

Sure, we've added more cores and the like, but it's interesting to see that plateau at the end of the curve.

I'm sure some things actually are faster, but in terms of what's available to consumers, it hasn't seemed to get all that much faster the last few years.

Re:Definitely slowed ... (3, Informative)

BZ (40346) | more than 3 years ago | (#37483242)

A 3GHz i7 is a _lot_ faster than a 3GHz P4. Have you tried actually comparing them?

Heck, a 2GHz Core 2 Duo core was about comparable in raw speed to a 3.6GHz P4 core last I measured. And an i7 is a lot faster than a Core 2 Duo.

More to the point, Moore's law is about transistor count, not clock speed. Transistor count continues to increase just fine; scaling clock speed just got hard because of power issues and such.

Re:Definitely slowed ... (1)

medv4380 (1604309) | more than 3 years ago | (#37483354)

scaling clock speed just got hard because of power issues and such.

Nothing that can't be fixed with a little liquid helium. Now give me my 8GHz processor.

Re:Definitely slowed ... (2)

LordLimecat (1103839) | more than 3 years ago | (#37483518)

The new i5s and i7s are about 7 times faster at AES encryption with the new instruction set than an equivalent processor without it. So try scaling that up to 21GHz if you want to compete with an i5 2600k. Which do you suppose is easier, adding that instruction set or dealing with the TDP of a 21GHz CPU?

Re:Definitely slowed ... (1)

gstoddart (321705) | more than 3 years ago | (#37483392)

No, but if I look at consumer laptops, it's more or less the same quad-core AMD CPU as I bought three years ago. There actually was a good solid decade where the CPU speed was growing at super crazy speeds.

In a lot of ways, I don't miss the whole "oh, crap, the machine I bought six months ago is half the speed of the one I can buy now" ... there was nothing more annoying than spending $2K on a box only to have it become obsolete right away.

Though, I am hoping that next time I get a new PC I can go beyond the 4 cores/8GB of RAM I have now and not be outrageously expensive. That would rock.

Re:Definitely slowed ... (3, Insightful)

Desler (1608317) | more than 3 years ago | (#37483522)

For the millionth time, Moore's law is not about processor speed. And no, a 3 year old CPU is not the same as today's. Take for example the Sandy Bridge Intel CPUs. They blow away the older Core 2 Quads in performance, and at lower clock speeds.

Re:Definitely slowed ... (1)

sick_soul (794596) | more than 3 years ago | (#37485468)

Moore's law is about transistors, and no, the 3 year old CPU is not "the same as today's".
But the experienced performance gains now are much slower, and much more workload-dependent, than back in the day.

And back in the day, more MHz _was_ the (main) way to increase processor performance.
You could feel the incredible yet predictable pace at which that happened, and suddenly you had to upgrade, since the 3 year old CPU could not keep up with newer software anymore. This is less the case today.

Re:Definitely slowed ... (1)

geekoid (135745) | more than 2 years ago | (#37486050)

components, not just transistors. It's an important distinction.

In 2006, the Dual-Core Itanium 2 had 1,700,000,000 components;
in 2011 (over 2 doublings later), the 10-Core Xeon Westmere-EX has 2,600,000,000.

Shouldn't that be > 6,800,000,000 components?

Of course, we are living in a golden age, whining about who has the sweetest wine.
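A quick check of the expected figure, assuming the 18-month doubling period cited above and taking the poster's component counts as given (the two chips also come from different product lines with very different die sizes, so this is a rough comparison at best):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double count_2006 = 1.7e9;   /* Dual-Core Itanium 2, per the post */
    double count_2011 = 2.6e9;   /* 10-core Westmere-EX, per the post */
    double years = 5.0;
    double doublings = years * 12.0 / 18.0;          /* ~3.3 doublings */
    double expected  = count_2006 * pow(2.0, doublings);

    printf("expected after %.1f doublings: ~%.1f billion components\n",
           doublings, expected / 1e9);               /* ~17 billion */
    printf("actual 2011 figure cited: %.1f billion\n", count_2011 / 1e9);
    return 0;
}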

Re:Definitely slowed ... (1)

jandrese (485) | more than 3 years ago | (#37483570)

It doesn't help that licensing disputes and general cattiness in the industry have led to weird situations in the past couple of years. For instance, I bought my wife a white MacBook with a Core2Duo clocked at 2.4 GHz four years ago. A few months ago we needed a second laptop, so I ended up buying a 13" Macbook Pro that had pretty much the same Core2Duo at 2.4 GHz. Is this because we hit a wall with Moore's Law? No, it's thanks to a stupid licensing dispute between nVidia and Intel that caused Apple to stick with the older slower chips just to avoid Intel's crappy integrated graphics.

The worst part? It's pretty much the same 2.4 GHz Core2Duo that's in my gaming machine I built 5 years ago, and it's thus far not been in danger of feeling obsolete. I can still run all of the latest games at good speed. That's probably due partially to the (at the time) grossly overpowered 8800GTX I stuck in there though. Driver support for that card is probably going to disappear before it's really obsolete.

It's not easy to use all of the power in a modern CPU. We're well past the point where you need more power just to keep the interface snappy, and many previously CPU hungry tasks are now pretty trivial (media encoding for instance). That's not to say there won't be uses for even more powerful CPUs in the future, but it's going to have to be something a bit different than what we have today. Maybe more "AI" type systems?

Re:Definitely slowed ... (1)

sjames (1099) | about 3 years ago | (#37486838)

We're there. You can get a decent 6 core machine dirt cheap.

Re:Definitely slowed ... (1)

fast turtle (1118037) | about 3 years ago | (#37486862)

Though, I am hoping that next time I get a new PC I can go beyond the 4 cores/8GB of RAM I have now and not be outrageously expensive. That would rock.

You can. It's called Llano by AMD, and Asus is offering an mATX board that supports 16GB of memory. You do need to spend some money on the 4GB DDR3 sticks, but the price isn't too bad, as this shows: http://www.newegg.com/Product/Product.aspx?Item=N82E16820134927 [newegg.com] - $25 a stick isn't bad at all for 4GB of memory.

I've been planning a new build for year end, and it's going to use the Asus board and the Llano quad core with APU. Cost of the hardware without the Wacom Cintiq 12WX and Corel Painter 12 is right around a grand ($1000), and that's with 16GB of memory, five 2TB drives, a 1GB video card (Radeon 6570), and an Icy Dock drive tray system with 3 trays/caddies. Add in the Cintiq and Corel Painter and we're talking $2500 for the entire system. Note that it doesn't include the Win Tax, as I've got a legal copy of Win7-64 to install.

Re:Definitely slowed ... (1)

epine (68316) | more than 3 years ago | (#37484274)

The whole point of Netburst was to achieve premature ejaculation in the frequency domain. This is what happens when people evaluate performance on a proxy indicator (given a choice between simple and correct, what does your average careerist commuter choose?)

Apparently it was a pretty good ruse, because ten years later, people are busy citing the frequency plateau as if it means a great deal more than it does. It didn't hurt Enron's raping of California that Intel took out a few hundred megawatts of generating capacity so that they could play top dog in marketing souffle GHz. Law of unintended consequence or not? Netburst was chosen over an alternate proposal (from their Israeli team IIRC) that later became CoreDuo. As much as Netburst marketing hurt AMD, it was maybe the last good thing to happen to them.

We could easily have 5GHz single-core processors now if Intel wished to shift 10% of their revenues into after-market cooler upgrades. They hummed and hawed and finally decided against this.

There's a lot we can do to more fully exploit the chips we already have, now that the ground is not moving so fast under our feet. It won't have the glamour of more with more. It will be mostly in the unglamorous vein of more with less.

The same thing is happening in agriculture, according to the Economist's last "feed the world" quarterly report, written amid a growing chorus of worry: "it's not like the seventies, how will we ever manage!"

According to Jared Diamond, more with more often achieves a pawn promotion to the hallowed realm of archaeology. It's no panacea in the long run.

Re:Definitely slowed ... (1)

geekoid (135745) | more than 2 years ago | (#37485970)

Yes, but how many transistors per sq. cm are there? And is it double*, and if it is, did they cost the same?
That is Moore's law. Ironically, it's not the most genius thing about his paper. The fact that he realized very complex systems would be made from smaller pieces is the real forward thinking in the paper.

*assuming they were fabricated 18 months apart; otherwise adjust based on the 18-month period

Re:Definitely slowed ... (0)

Anonymous Coward | about 3 years ago | (#37486534)

Yep, tried comparing a Core 2 with a P4. Unfortunately the Core 2 was running Windows and the results came out the wrong way.

Re:Definitely slowed ... (1)

Desler (1608317) | more than 3 years ago | (#37483250)

*facepalm* Moore's law has nothing to do with processor speed ratings.

Re:Definitely slowed ... (1)

gstoddart (321705) | more than 3 years ago | (#37483514)

No, but if you actually RTFA, the author is talking about machines no longer getting faster. There's even a graph showing the speed of Intel processors at release, and it plateaued several years ago.

Clock speeds haven't grown any more, we just keep throwing more cores on. (Don't get me wrong, I love me some CPU cores.)

I realize Moore's law is about transistor density, but there was a good period of time where CPU speed was growing proportionally with the Moore's law curve. Back in the early 90's, CPU speed actually did double every couple of years.

Re:Definitely slowed ... (1)

Desler (1608317) | more than 3 years ago | (#37483614)

Yes, speed was growing, but in many cases that didn't mean performance was improving as much, which is why AMD was blowing away Intel even with lower clock speed CPUs. And who cares about clock speeds other than PC ricers? My Sandy Bridge quad core will beat the socks off of a 3 year old quad core even if it was overclocked.

Re:Definitely slowed ... (1)

Joce640k (829181) | more than 3 years ago | (#37483866)

Clock speeds haven't grown any more, we just keep throwing more cores on.

AND they do more per clock cycle...

Re:Definitely slowed ... (1)

geekoid (135745) | more than 2 years ago | (#37486082)

Yes, but the author is just proving they don't know what Moore's Law is.

http://download.intel.com/research/silicon/moorespaper.pdf [intel.com]

Yes, Moore's law is coming to an end. No, it doesn't have a damn thing to do with speed; yes, it has everything to do with the power of the chip.

Re:Definitely slowed ... (4, Insightful)

LordLimecat (1103839) | more than 3 years ago | (#37483496)

I'm sure some things actually are faster, but in terms of what's available to consumers, it hasn't seemed to get all that much faster the last few years..

Heres a reality check for you.

I'm speccing out a machine for a pfSense firewall; I've settled on a low power, 20 watt Xeon E3 1220L. At about 1/5th the power consumption of a Pentium 4 2.8GHz (and at about 75% the clock rate), it can handle about 13.5Gbit/s of AES encryption, compared to the Pentium's 500Mbps.

So we're talking a 36-fold improvement in processor performance in the area of encryption, along with a 5-fold reduction in power requirements; not to mention the improved memory bandwidth and whatnot.
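Taking only the figures quoted above as given (13.5 Gbit/s vs 500 Mbit/s, ~75% of the P4's clock, ~1/5 of its power), the 36-fold number works out as a per-clock comparison:

#include <stdio.h>

int main(void)
{
    /* all inputs are the poster's figures, treated as given, not verified */
    double xeon_gbps = 13.5, p4_gbps = 0.5;
    double clock_ratio = 0.75;    /* Xeon clock relative to the P4 */
    double power_ratio = 0.2;     /* Xeon power relative to the P4 */

    double raw = xeon_gbps / p4_gbps;
    printf("raw throughput: ~%.0fx\n", raw);                  /* ~27x  */
    printf("per clock:      ~%.0fx\n", raw / clock_ratio);    /* ~36x  */
    printf("per watt:       ~%.0fx\n", raw / power_ratio);    /* ~135x */
    return 0;
}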

Processors continue to improve at a rapid pace; Intel is supposed to be releasing Ivy Bridge soon, which should have another ~15% performance increase, and they just released Sandy Bridge, which mostly eliminates the need for a dedicated GPU on laptops for about 80% of users.

So when people bemoan the rate of computer improvement, despite the MASSIVE leaps in performance, reductions in power usage, and price drops (a core i3 @ $100? A phenom x3 @ $60? Yes please), it boggles my mind. 5 years ago a "modern", decent gaming rig could be had for about $800. Prior to that, getting a fabled 2GB of ram was like $200 on its own. These days, you can have a decent gaming rig for about $500, with none of your parts costing substantially more than $60. For goodness sake, RAM is down to about $6 per GB.

Heck, I just priced out and ordered 2 laptops for 2 different clients-- they come with i3s, 4GB of RAM, a 4hr battery life, and very high build quality, all for under $500. Where the heck could you have gotten a laptop anywhere close to that value 3 years ago? A celeron? A crappy AMD mobile?

Seriously, come back to reality please.

Re:Definitely slowed ... (0)

gstoddart (321705) | more than 3 years ago | (#37484666)

Heck, I just priced out and ordered 2 laptops for 2 different clients-- they come with i3s, 4GB of RAM, a 4hr battery life, and very high build quality, all for under $500. Where the heck could you have gotten a laptop anywhere close to that value 3 years ago? A celeron? A crappy AMD mobile?

Seriously, come back to reality please.

Look, not all of us are benchmarking machines on a constant basis. Nor do we all constantly buy and install new hardware. I get a new laptop when my company gives me one, and I replace my desktop every couple of years. Beyond that, the only machines I'm involved in are ones we plan out for work -- and even then, the server team does the physical builds. I just tell them that I need x machines so I can install my stuff and build it out. None of the tasks I run are sufficiently CPU intensive to notice for the most part --- but IO speeds, now that's a big deal.

So when I look at the company I bought my last several machines from, and see what looks like the same specs as three years ago ... it's hard to see that anything significant has happened. That quad core AMD with 4GB still has the same CPU speed as it did a couple of years ago. It might be a bit cheaper, but the posted specs are pretty much identical. It's not like there's anything which makes me think "oooh, this machine is faster than what I have now."

Unless you have a large amount of turnover in machines and simply read the specs, they read like they did 2-3 years ago. If you're talking about consumer machines like you'd buy at Best Buy, I'd be surprised if there's all that much difference over the last few years.

Get off your sanctimonious high horse and get over yourself.

Re:Definitely slowed ... (0)

Anonymous Coward | more than 3 years ago | (#37485514)

>it can handle about 13.5Gbit/s of AES encryption,
>compared to the Pentium's 500Mbps.

Unfair comparison. And you know it.

Speed of light limit (1)

mangu (126918) | more than 3 years ago | (#37484652)

At 3 GHz, light travels 10 cm in one clock cycle. Faster speeds would make it hard to send a request for data from one end of the chip to the other and get a response during the execution of a machine instruction. Having to insert no-operation cycles while waiting for data to arrive would negate the usefulness of faster clock rates.
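The 10 cm figure is simply the speed of light divided by the clock rate:

#include <stdio.h>

int main(void)
{
    double c = 2.998e8;           /* speed of light, m/s */
    double clock_hz = 3.0e9;      /* 3 GHz */
    printf("light travels ~%.1f cm per clock cycle at %.0f GHz\n",
           c / clock_hz * 100.0, clock_hz / 1e9);   /* ~10.0 cm */
    return 0;
}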

Re:Definitely slowed ... (1)

Surt (22457) | more than 2 years ago | (#37486222)

Moore's law is about the number of transistors, not their clock rate.

Revisionist, or just not old enough (1)

dtmos (447842) | more than 3 years ago | (#37483248)

In the beginning, hardware was not "open." Any antique radio collector will tell you that the schematics of 1920s radios were some of the best-kept secrets the manufacturers had (at the time), since the parts used in their products were readily available. Giving the user a schematic was viewed as a license to compete, and there were hundreds, if not thousands, of radio receiver manufacturers -- many of whom got started by reverse-engineering an existing design.

It was only in the 1930s that schematics began to be shipped with products, as the shakeout of the manufacturers meant that competition was based on economies of scale and factors other than just knowing how to build a radio.

Re:Revisionist, or just not old enough (1)

geekoid (135745) | more than 2 years ago | (#37486144)

Funny, my 1920 radio has the schematic stapled inside the cabinet.

Re:Revisionist, or just not old enough (0)

Anonymous Coward | about 3 years ago | (#37486658)

When was it stapled there?

On the other hand, integration (3, Interesting)

Kjella (173770) | more than 3 years ago | (#37483252)

By the time Moore's law slows down, we'll also have systems on a chip. Replaceable parts? We've moved the other way, from the days when you could solder chips to today. Extension cards are almost gone, more and more of the north/south bridge and motherboard chips are moving into the CPU, and now even the graphics card is moving into the CPU for many.

His argument sounds to me like the same logic as arguing that once computers don't get faster, we'll have to make applications faster, so we'll see a return of assembler language and hand optimization. Somehow, I don't think that's very likely. I'd make a fair bet that custom hardware is even more of a niche in 20-30 years than it is now.

Re:On the other hand, integration (0)

Anonymous Coward | more than 3 years ago | (#37484520)

We've moved the other way, from the days when you could solder chips to today.

Have we?

Hardware hobbyists today solder BGA chips if necessary.
As soon as you step out of serious production into hobbyland, you can pretty much skip all the expensive equipment and use heat guns, rebuilt toasters or a hot plate to solder pretty much anything. You will probably heat the components way out of their specified temperature range, but you'd be surprised how rough you have to be to actually damage something. Most über-sensitive chips still survive resoldering after resoldering after resoldering.

Here is a clip of BGA reworking without special reballing equipment [youtube.com], and you can get your PCB for less than $200 from PCBCart [pcbcart.com] or wherever if you need a multilayer PCB for your projects. (You mostly pay for setup, so you will probably order 50 pieces or more depending on PCB size.)

It's pretty cool to think that hobbyists today can make stuff that large corporations couldn't dream of doing 20-30 years ago.

Re:On the other hand, integration (1)

blahplusplus (757119) | more than 3 years ago | (#37485296)

"Extension cards are almost gone, more and more of the north/south bridge and motherboard chips is moving into the CPU, now even the graphics card is moving into the CPU for many."

You're missing the point, unfortunately: the laws of physics demand that expansion cards are here to stay. The memory bandwidth you can get from a dedicated card exceeds that of today's modern CPUs. Specialization is here to stay. The same argument was made by Epic Games over 10 years ago when they thought everything would be pushed into the CPU; it didn't happen. Some stuff is being integrated, but there are still the problems of heat, clock frequency and power dissipation that will make 'integration' of high speed stuff problematic. By the time we have a system on a chip, specialized hardware will be that much faster.

Memory bandwidth and heat are the big elephants in the room, though.

Re:On the other hand, integration (1)

geekoid (135745) | more than 2 years ago | (#37486162)

Right now, I can build a mid range game machine with no expansion cards. HD, 7.1 Dolby.

So while a card can be faster, it's not necessarily NEEDED.

If you want to play Quake 10, and you can get HD 100FPS without adding a component, why would you? And many other people won't either, for the same reason.

Unlikely (3, Interesting)

Nethemas the Great (909900) | more than 3 years ago | (#37483278)

Firstly, it's highly unlikely that Moore's law will be retiring any time soon. All we are seeing is a slowdown in the advancement of shrinking the manufacturing process. That doesn't say anything about performance improvements by other means. We are continually seeing advancement in performance per watt, which is enabling CPUs to spread their dies not only "out" but, with promising research in layering techniques, now even "up". Beyond that, there are carbon rather than silicon based materials coming online that promise to further improve upon the performance per watt angle. We're even starting to see glimmers of hope in the quantum computing arena, which would be game changing.

With respect to small companies being able to enter the market and compete with the "big guys", I would seriously doubt it. The first and obvious reason is that the cost factor is a barrier to entry. The equipment isn't cheap, and contending in the patent arena is worse. The most you'll ever see here is university level research being sold off to the big guys.

Re:Unlikely (1)

k8to (9046) | more than 3 years ago | (#37483442)

Well... Moore's law is specifically about transistor density, specifically the rate of its increase over time. So a slowdown in that increase is in fact an end to the law, in that the "law" predicted a specific rate that would no longer be met.

Re:Unlikely (1)

NoSig (1919688) | more than 3 years ago | (#37483914)

Moore's law is not about density, it is about the price per transistor. It doesn't matter how you bring the price down; the law will still hold if the price does indeed go down as the law predicts. Density has been the most important component of that so far, you are right about that, but it doesn't have to continue to be true. If you can stack slices of transistors on top of each other, and keep making that process cheaper, that will help with Moore's law too.

Re:Unlikely (1)

geekoid (135745) | more than 2 years ago | (#37486188)

NO IT IS NOT. It's appalling how ignorant so many posters on a tech site are about Moore's law. Simply appalling.

Double the number of components per sq. cm at the same cost.

Re:Unlikely (1)

Surt (22457) | about 3 years ago | (#37486368)

Layering buys you a 15 year extension on Moore's law if you're lucky. After that it gets really hard to layer fast enough to produce the finished chips within 18 months.

Re:Unlikely (2)

dorianh49 (988940) | more than 3 years ago | (#37483448)

I may be wrong, but I thought Moore's law was specifically referring to the manufacturing process, and not performance. So, Moore's law can retire and performance can still improve. They're not mutually exclusive.

Re:Unlikely (1)

geekoid (135745) | more than 2 years ago | (#37486194)

CORRECT!
Well done. You can have all the nerd points most of these other yahoos have lost with their carefully planned-out incorrect explanations of Moore's law.

Re:Unlikely (1)

ZankerH (1401751) | more than 3 years ago | (#37483834)

Moore's law doesn't say anything about "performance improvement". Moore's law is precisely about "shrinking the manufacturing process" - how many transistors you can fit on a chip.

Re:Unlikely (0)

Anonymous Coward | more than 3 years ago | (#37484258)

With respect to small companies being able to enter the market and compete with the "big guys" I would seriously doubt it. The first and obvious reason being the cost factor being a barrier to entry.

You got that right. I once toured an ASML plant (ASML being one of 3 or 4 companies in the world who produce wafersteppers -- the machines that make chips). If you buy one of their high-end machines, they will inspect your factory site. They will only sell it to you, if they agree the site is okay, and they approve of your factory design plans. You read that right, you cannot buy a machine of theirs for fun. You have to convince them that you'll give it a good home, otherwise they don't part with it.

Of course, for the price tag on such a machine, you do want it to operate near-continuously and for a long time without any problems -- and these things are unbelievably sensitive. I forgot the details, but their babies require top-grade cleanrooms -- your average cleanroom is a dirty pigsty to them.

So, yeah. You not only need an insanely expensive machine (3 years ago, Wiki says [wikipedia.org], averaging 40 million euros = roughly 60 million dollars), but you must also build a very good factory around it. And that's only one part of the process -- you're not designing chips yet, you're not marketing them yet. You've only got a place where you can make 'em. Le ouch.

Re:Unlikely (0)

Anonymous Coward | more than 3 years ago | (#37484582)

You got that right. I once toured an ASML plant (ASML being one of 3 or 4 companies in the world who produce wafersteppers -- the machines that make chips). If you buy one of their high-end machines, they will inspect your factory site. They will only sell it to you, if they agree the site is okay, and they approve of your factory design plans. You read that right, you cannot buy a machine of theirs for fun. You have to convince them that you'll give it a good home, otherwise they don't part with it.

I'm sure they'd sell it to you without seeing the site if you agree to buy it without expecting them to support it. Of course, nobody in their right mind would buy a waferstepper sans manufacturer support contract, so there you are.

Re:Unlikely (1)

allanw (842185) | more than 2 years ago | (#37485710)

Processors nowadays are limited by the thermal dissipation limits, instead of how many transistors they can fit on a chip. Transistor scaling was supposed to keep power usage the same while increasing density, but supply voltages have stopped decreasing, so the power efficiency gains are very low. Thus it is now important to think about power efficiency: http://hardware.slashdot.org/story/11/09/13/2148202/whither-moores-law-introducing-koomeys-law [slashdot.org]

Comeon Boys (0)

Anonymous Coward | more than 3 years ago | (#37483338)

Are you a semiconductor or a hemiconductor?

Twilight? Riight... (2)

TechGooRu (944422) | more than 3 years ago | (#37484090)

Moore's law has applied, and will apply - at least by inference - to all past and future computing paradigms.

The exponential growth trends of price/performance started long before CMOS processes were developed. While Moore's law specifically refers to integrated circuits, the facts remain: exponential growth trends were present in relay-based machines, vacuum tube based machines, transistor based machines (pre-IC), and integrated circuits.

In fact, the exponential growth trends are actually accelerating at an accelerating pace, as we are just now approaching the "knee" in the exponential curve.

The simple truth is today's ICs are manufactured at the nano scale (< 100nm), and will continue to shrink for several more generations of chips.

Before transistor size even begins to approach theoretical limits, new paradigms will emerge to replace current metal-oxide-semiconductor technologies.

We can already see this today. As we approach the limit to two dimensional ICs, we see the new emerging trend of three-dimensional circuits. We see the rumblings in research circles of optical systems at nano-scales. We're just now beginning to scratch the surface of quantum computing.

While Moore didn't invent the exponential, the trends he predicted more than four decades ago will be alive and well throughout the 21st century, even if by inference, as we transition away from CMOS to new, as-of-yet undiscovered paradigms.

To those seriously interested in this field, please consider reading Ray Kurzweil's "The Singularity is Near". You may not agree with everything the man has to say, but the man's data on this subject doesn't lie.

Re:Twilight? Riight... (1)

Anonymous Coward | more than 3 years ago | (#37485368)

Moore's law has applied, and will apply - at least by inference - to all past and future computing paradigms.

Inference from what? Moore's Law was never really a "law". It was more of an observation that, at the time, the number of transistors per unit area was doubling roughly once every 18 months, and Moore didn't see an immediate end in sight for this trend based on what he knew about the technology.

Well, now we see an end in sight. You can't scale below atoms, and most people think we'll hit practical limits before then. So you can't infer that functionality will keep doubling every 18 months unless something fundamentally changes the game.

The exponential growth trends of price/performance started long before CMOS processes were developed. While Moore's law specifically refers to integrated circuits, the facts remain: exponential growth trends were present in relay-based machines, vacuum tube based machines, transistor based machines (pre-IC), and integrated circuits.

So what? This doesn't mean exponential growth trends are guaranteed to go on forever. You're essentially waving your hands and saying "this is the way it's always been, THEREFORE this is the way it'll always be!!!". That doesn't make any sense at all.

In fact, the exponential growth trends are actually accelerating at an accelerating pace, as we are just now approaching the "knee" in the exponential curve.

Have you never noticed the similarity between the lower half of a S-curve and an exponential curve? Curve-fitting and noticing that an equation you like happens to fit the part of the data available to date doesn't guarantee you've selected the correct function.

To those seriously interested in this field, please consider reading Ray Kurzweil's "The Singularity is Near". You may not agree with everything the man has to say, but the man's data on this subject doesn't lie.

Ray Kurzweil is a nutter obsessed with escaping death. He really, really wants there to be a Singularity, so that he can download his brain to a computer before his meatsack body dies. You shouldn't mistake this obsession for "data", and you should be careful when evaluating his claims at being a super accurate predictor of the future.

Re:Twilight? Riight... (1)

geekoid (135745) | more than 2 years ago | (#37486208)

What a nice explanation of what Moore's law isn't.

Dumbass.

Moore's Law only talks about one thing (1)

Tumbleweed (3706) | more than 3 years ago | (#37484354)

And it's far from the only thing that impacts a computer's performance. The advent of SSDs proves there are still major areas of computer performance to be addressed outside of the CPU. While the GPU will be brought onboard before system RAM is, I'm sure that's another area that will find its way onto the die eventually. Gigabit ethernet is good enough for now - I don't know of any broadband connections anywhere that exceed that just yet, but an increase to 10 gigabit isn't out of bounds in the near term, and will last a very long time.

Once SSDs can affordably hold all the media we have and spinning disks go away completely, we'll pretty much be where we need to be for the foreseeable future. Modern GPUs are capable of moving 4K displays pretty easily, and the Ivy Bridge-era GPU will be capable of that even without a discrete card, so displays will be able to scale up easily. The advent of the 10" 2560x1600 panel is upon us - put three of those together for a nice 30" display, and you're good to go. The more things are offloaded FROM the CPU, the more irrelevant CPU density and speed becomes.

Re:Moore's Law only talks about one thing (1)

140Mandak262Jamuna (970587) | more than 3 years ago | (#37484564)

The advent of the 10" 2560x1600 panel is upon us - put three of those together for a nice 30" display, and you're good to go. The more things are offloaded FROM the CPU, the more irrelevant CPU density and speed become.

To maintain the aspect ratio, you need nine 10"(diag) displays to make a 30"(diag) display.
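For anyone who wants to check the arithmetic, here is a quick sketch assuming the 16:10 aspect ratio implied by 2560x1600:

import math

# A 10" diagonal at 16:10 is roughly 8.48" x 5.30".
w = 10 * 16 / math.hypot(16, 10)
h = 10 * 10 / math.hypot(16, 10)

# A 30" diagonal at the same aspect ratio is 3x each linear dimension,
# i.e. a 3 x 3 grid of the small panels: 9 of them, not 3.
print(3 * w, 3 * h, math.hypot(3 * w, 3 * h))  # ~25.4" x ~15.9", 30.0" diagonal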

Re:Moore's Law only talks about one thing (1)

Tumbleweed (3706) | more than 3 years ago | (#37484674)

The advent of the 10" 2560x1600 panel is upon us - put three of those together for a nice 30" display, and you're good to go. The more things are offloaded FROM the CPU, the more irrelevant CPU density and speed become.

To maintain the aspect ratio, you need nine 10"(diag) displays to make a 30"(diag) display.

Whoops. I knew that, but somehow typed three instead of nine. *sad panda*

They haven't announced prices on those new panels, so who knows whether they'll be affordable. I know they're planning to make 2560x1600 panels in sizes ranging from 4" to 10", so let's hope so. A 30" panel is rather big for a desktop display for my taste; I'm more comfortable with my current 24", but higher pixel density would be appreciated.

It's going to be strange having a 4" smartphone with a 2560x1600 display, though. That's a lot of pixels in a small space...
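As rough arithmetic (not a vendor spec): 2560x1600 across a 4-inch diagonal works out to roughly 755 pixels per inch, well over double the density of the "retina"-class phone screens of the day.

import math

ppi = math.hypot(2560, 1600) / 4   # diagonal pixel count / diagonal inches
print(round(ppi))                  # ~755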

Re:Moore's Law only talks about one thing (1)

Anonymous Coward | more than 3 years ago | (#37485082)

I more or less agree with you on consumer computers: the little improvements you mention would be nice, but I can't see a clear use for much more at the moment (I'm sure something will come up, like real 3D holographic displays that require far more graphics processing power, or something no one has thought of yet). On the other hand, at the high end and the low end there are opportunities for much cooler things to happen.

In terms of the high end, scientific computing can always use more processing power. I'm thinking particularly of medical applications like protein folding simulations and DNA sequencing, since bioinformatics is big right now, but other fields of science could probably make good use of more processing power as well (a Watson cheap enough for every company to have one for searching their documents?).

In terms of the low end, there's the whole "internet of things" concept, where it finally gets cheap enough to really put a computer in everything. On an even smaller scale, I have heard people talk about tiny implantable medical devices that could, say, deliver medicine at precise times to precise places in your body, or monitor places where external sensors don't work well.

EOMA/PCMCIA initiative (1)

lkcl (517947) | more than 3 years ago | (#37484752)

funny... i was just writing up a post to the http://openhardwaresummit.org/ [openhardwaresummit.org] mailing list about a way to accelerate the process by which enthusiasts can work with the latest mass-produced embedded hardware.

the initiative, which has a specification here - http://elinux.org/Embedded_Open_Modular_Architecture/PCMCIA [elinux.org] - is based around the fact that, just as mentioned above, the development of processor "speeds" is slowing down. this, funnily enough, allows so-called "embedded" processors to catch up, and it's these embedded CPUs that are low-power enough to base an entire computer around - one that is still desirable yet consumes between 0.5 and 3 watts instead of 10 to 500 watts.

if anybody would like to participate in this initiative, please do join the arm-netbook mailing list - http://lists.phcomp.co.uk/mailman/listinfo/arm-netbook [phcomp.co.uk]

Computers are about fast enough to do anything now (0)

Anonymous Coward | more than 3 years ago | (#37484916)

A quad-core ARM with a 3D video chip, some custom silicon to do specific tasks (like encoding and decoding Ogg Vorbis audio and Theora video at 10 times realtime), a 100,000-neuron neural net chip, and some multi-trillion-dollar software investments, and we could have HAL on all our desktops.

Blogpost author is generally clueless (0)

Anonymous Coward | more than 3 years ago | (#37485176)

FTA:

In the post-Moore’s law future, FPGAs may find themselves performing respectably to their hard-wired CPU kin, for at least two reasons: the flexible yet regular structure of an FPGA may lend it a longer scaling curve, in part due to the FPGA’s ability to reconfigure circuits around small-scale fluctuations in fabrication tolerances,

Uh, no.

FPGAs are built from the same elements as normal chips, transistors. Process improvements give us new transistors which are some combination of smaller, faster, and more power efficient than their predecessors (generally speaking, not all 3 at once). This doesn't favor either FPGAs or ordinary CPUs, as both are built from transistors.

His comment about reconfiguring around process variation is total nonsense. FPGAs are, if anything, more sensitive to variation, simply because the company making the FPGA never knows which part of the FPGA will become a critical timing path for the end user. Every programmable cell and every part of the routing matrix has to pass minimum standards. Otherwise, there's no way for the FPGA mfr. to guarantee that when their timing analysis software claims a customer design has timing closure, it really does.

Fixed function chips, on the other hand, have lots of known paths which have no problem meeting timing at any process corner, and thus can easily tolerate process variation. The key difference is known vs. unknown -- with FPGAs, all chips have to be capable of running all possible designs, so the only thing known when you make the chip is that it all has to be good. With ASICs, you know the characteristics of that fixed design and can design test programs for looser tolerances where it makes sense to do so.

In fact, Xilinx already has a program where they do roughly what he's talking about, but the implications are a bit different than he thinks. If you qualify for the program (you need enough sales volume, a relatively large and expensive FPGA, and a willingness to pay an upfront fee), you can submit your design to Xilinx and get lower prices. Xilinx uses your design to write a custom test program which doesn't test the parts of the FPGA you don't use. This lets you use chips which Xilinx would otherwise have to reject as defective, whether due to process variation or outright defects. It also means you can't change your design! (I think you get one mulligan before they want you to pay another fee to have a new test program written.)

It's useful mainly for certain niche industries like high speed telecom backplanes, where the product is too low volume to justify taping out an ASIC, but sufficiently high volume that losing the ability to update hardware in the field is worth it to shave some dollars off the unit cost. As for the suppositions made by the blogposter, it doesn't improve performance, it just reduces cost.
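A toy yield model shows why a design-specific test program recovers otherwise-rejected parts (the numbers below are made up for illustration and have nothing to do with Xilinx's actual flow): a general-purpose FPGA has to meet timing on every cell, because the vendor can't know which cells a customer will use, while a design-specific test only has to pass on the cells that one design touches.

import random

random.seed(0)
N_CELLS = 10_000        # programmable cells per die (made-up)
SPEC = 1.10             # maximum allowed normalized cell delay (made-up)
USED = 3_000            # cells actually used by one hypothetical design

def die():
    # per-cell delay with random process variation (purely illustrative)
    return [random.gauss(1.0, 0.03) for _ in range(N_CELLS)]

dies = [die() for _ in range(1000)]

# Sold as a general-purpose FPGA: every cell must meet spec.
full = sum(all(d <= SPEC for d in cells) for cells in dies)

# Tested against one known design: only the used cells must meet spec.
partial = sum(all(d <= SPEC for d in cells[:USED]) for cells in dies)

print("pass full test:           ", full, "/ 1000")
print("pass design-specific test:", partial, "/ 1000")

With these made-up numbers only a handful of dies pass the all-cells test, while a sizable fraction pass the design-specific one - which is exactly the cost recovery, not a performance gain, described above.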

and because the extra effort to optimize code for hardware acceleration will amortize more favorably as CPU performance scaling increasingly relies upon difficult techniques such as massive parallelism. After all, today’s massively multicore CPU architectures are starting to look a lot like the coarse-grain FPGA architectures proposed in academic circles in the mid to late 90’s.

Er... no, no they aren't. Those coarse-grained FPGA architectures fell far short of having a full-featured CPU in each tile.

An equalization of FPGA to CPU performance should greatly facilitate the penetration of open hardware at a very deep level.

Good luck with that, "Bunnie".

I don't have the patience to dissect the rest of it... suffice it to say that he's clueless, and doesn't know it.

Less is more, slavery is freedom (1)

syousef (465911) | about 3 years ago | (#37486706)

It's all just propaganda.

I'm not convinced (1)

tsotha (720379) | about 3 years ago | (#37486846)

I'm not convinced of the central postulate here. We may not be able to keep increasing CPU performance by slavishly increasing the transistor count any more than we could do it by slavishly increasing the clock frequency, but that doesn't mean the Intels of the world can't maintain dominance in performance. There are all sorts of possibilities that haven't yet been fully explored, because the industry has billions (trillions?) sunk into the silicon-specific end-to-end fabrication process and still sees incremental improvements using current technology.

It may be true, as the author says, that the farthest we can push silicon is about 5 nm design rules. But what about carbon nanotubes and graphene? What about Diamond Age-style nanoprocessors? What about fully optical processors and interconnects? That's not the kind of stuff people are going to build in their garages.
