
End of Moore's Law Forcing Radical Innovation

Soulskill posted about 10 months ago | from the how-many-transistors-can-dance-on-the-head-of-a-pin dept.

Hardware 275

dcblogs writes "The technology industry has been coasting along on steady, predictable performance gains, as laid out by Moore's law. But stability and predictability are also the ingredients of complacency and inertia. At this stage, Moore's Law may be more analogous to golden handcuffs than to innovation. With its end in sight, systems makers and governments are being challenged to come up with new materials and architectures. The European Commission has written of a need for 'radical innovation in many computing technologies.' The U.S. National Science Foundation, in a recent budget request, said technologies such as carbon nanotube digital circuits will likely be needed, or perhaps molecular-based approaches, including biologically inspired systems. The slowdown in Moore's Law has already hit high-performance computing. Marc Snir, director of the Mathematics and Computer Science Division at the Argonne National Laboratory, outlined in a series of slides the problem of going below 7nm on chips, and the lack of alternative technologies."


Anonymous Coward's Law (-1)

Anonymous Coward | about 10 months ago | (#45894991)

Anonymous Coward's Law: Whenever someone shows a slide of Moore's Law, I spend the next three slides checking my email/purging my bladder/thinking about lunch.

Re:Anonymous Coward's Law (5, Funny)

Anonymous Coward | about 10 months ago | (#45895167)

Moore's yawn ... er, law. It has ended, again, again. It must be the conjoined twin of Voyager, which has left the solar system 78 times in the past 14 years.
Wake me up when some real news gets in.

Re:Anonymous Coward's Law (1)

JWSmythe (446288) | about 10 months ago | (#45895471)

I thought they finally killed that off last month. Or the other 7K times it was declared dead.

Rock Star coders! (5, Insightful)

Anonymous Coward | about 10 months ago | (#45895001)

The party's over. Get to work on efficient code. As for the rest of all you mothafucking coding wannabes, suck it! Swallow it. Like it! Whatever, just go away.

Re:Rock Star coders! (4, Insightful)

Z00L00K (682162) | about 10 months ago | (#45895143)

Efficient code and new ways to solve computing problems using massive multi-core solutions.

However, many "problems" with performance today are I/O-bound and not calculation-bound. It's time for the storage systems to catch up in performance with the processors, and they are on the way with SSDs.

Re:Rock Star coders! (5, Interesting)

lgw (121541) | about 10 months ago | (#45895165)

I think the next couple of decades will be mostly about efficiency. Between mobile computing and the advantage of ever-more cores, the benefits from lower power consumption (and reduced heat load as a result) will be huge. And unlike element size, we're far from basic physical limits on efficiency.

Re:Rock Star coders! (1)

jones_supa (887896) | about 10 months ago | (#45895169)

But efficiency is largely based on element size.

Re:Rock Star coders! (1)

gweihir (88907) | about 10 months ago | (#45895289)

You still do not get it. There will be no further computing power revolution.

Re:Rock Star coders! (4, Insightful)

K. S. Kyosuke (729550) | about 10 months ago | (#45895425)

There hasn't been a computing power revolution for quite some time now. All the recent development has been rather evolutionary.

Re:Rock Star coders! (4, Interesting)

Forever Wondering (2506940) | about 10 months ago | (#45895535)

There was an article not too long ago (can't remember where) that mentioned that a lot of the performance improvement over the years came from better algorithms rather than faster chips (e.g. one can double the processor speed but that pales with changing an O(n**2) algorithm to O(n*log(n)) one).

SSDs based on flash aren't the ultimate answer. Ones that use either magneto-resistive memory or ferroelectric memory show more long-term promise (e.g. MRAM can switch as fast as L2 cache--faster than DRAM but with the same cell size). With near-unlimited memory at that speed, a number of multistep operations can be converted to a single table lookup. This is done a lot in custom logic, where the logic is replaced with a fast SRAM/LUT.
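To make the SRAM/LUT point concrete, here's a toy illustration in C (my own sketch, not tied to any particular memory technology): a 32-bit population count done with a precomputed 256-entry table, so a multistep bit loop collapses into four lookups.

#include <stdio.h>
#include <stdint.h>

/* Precomputed table: popcount of every possible byte value. */
static uint8_t popcount_table[256];

static void build_table(void)
{
    for (int v = 0; v < 256; v++) {
        int bits = 0;
        for (int b = v; b != 0; b >>= 1)
            bits += b & 1;
        popcount_table[v] = (uint8_t)bits;
    }
}

/* The multistep bit loop replaced by four table lookups. */
static int popcount32(uint32_t x)
{
    return popcount_table[x & 0xff] +
           popcount_table[(x >> 8) & 0xff] +
           popcount_table[(x >> 16) & 0xff] +
           popcount_table[(x >> 24) & 0xff];
}

int main(void)
{
    build_table();
    printf("popcount(0xF0F0F0F0) = %d\n", popcount32(0xF0F0F0F0u));
    return 0;
}

The bigger and faster the memory, the more multistep work you can precompute into a table like this.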

Storage systems (e.g. NAS/SAN) can be parallelized but the limiting factor is still memory bus bandwidth [even with many parallel memory buses].

Multicore chips that use N-way mesh topologies might also help. Data is communicated via a data channel that doesn't need to dump to an intermediate shared buffer.

Or hybrid cells that have a CPU but also have programmable custom logic attached directly. That is, part of the algorithm gets compiled to RTL that can then be loaded into the custom logic just as fast as a task switch (e.g. on every OS reschedule). This is why realtime video encoders use FPGAs. They can encode video at 30-120 fps in real time, but a multicore software solution might be 100x slower.

Re:Rock Star coders! (3, Insightful)

Anonymous Coward | about 10 months ago | (#45895627)

(e.g. one can double the processor speed but that pales with changing an O(n**2) algorithm to O(n*log(n)) one).

In some cases. There are also a lot of cases where overly complex functions are used to manage lists that usually contain three or four items and never reach ten.
Analytical optimizing is great when it can be applied, but just because one has more than one data item to work on doesn't automatically mean that an O(n*log(n)) solution will beat an O(n**2) solution. The O(n**2) solutions are often faster per iteration, so it is a good idea to consider how many items one will usually work with.
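A quick way to see that crossover for yourself (a rough C sketch; the exact numbers depend on the machine and compiler): time an O(n**2) insertion sort against the library qsort() on tiny arrays, and the "worse" algorithm usually wins.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* O(n^2) sort with a tiny constant factor. */
static void insertion_sort(int *a, int n)
{
    for (int i = 1; i < n; i++) {
        int key = a[i], j = i - 1;
        while (j >= 0 && a[j] > key) { a[j + 1] = a[j]; j--; }
        a[j + 1] = key;
    }
}

static int cmp_int(const void *p, const void *q)
{
    return (*(const int *)p > *(const int *)q) - (*(const int *)p < *(const int *)q);
}

int main(void)
{
    enum { N = 8, REPS = 1000000 };   /* small n, many repetitions */
    int base[N], work[N];
    srand(42);
    for (int i = 0; i < N; i++) base[i] = rand();

    clock_t t0 = clock();
    for (int r = 0; r < REPS; r++) {
        memcpy(work, base, sizeof base);
        insertion_sort(work, N);              /* O(n^2), tiny overhead */
    }
    clock_t t1 = clock();
    for (int r = 0; r < REPS; r++) {
        memcpy(work, base, sizeof base);
        qsort(work, N, sizeof(int), cmp_int); /* O(n log n), bigger constant */
    }
    clock_t t2 = clock();

    printf("insertion sort: %.3fs  qsort: %.3fs (n=%d, %d reps)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, N, REPS);
    return 0;
}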

Moore's Law isnt a law you know (5, Insightful)

Osgeld (1900440) | about 10 months ago | (#45895007)

It's more of a prediction that has mostly been on target because of its challenging nature.

Re: Moore's Law isnt a law you know (3, Funny)

crutchy (1949900) | about 10 months ago | (#45895213)

now now don't you go spreading propaganda that laws aint laws... next there will be idiots coming out of the woodwork claiming that einstein may have got it wrong and that it's ok to take a dump whilst being cavity searched by the police

Moore's "law" & AI (3, Interesting)

globaljustin (574257) | about 10 months ago | (#45895217)

In my mind it was an interesting statistical coincidence, *when it was first discussed*

Then the hype took over, and we know what happens when tech and hype meet up...

Out-of-touch CEOs get harebrained ideas from non-tech marketing people about what makes a product sell, then the marketing people dictate to the product managers what benchmarks they have to hit... then the new product is developed and any regular /. reader knows the rest.

It's bunk. We need to dispel these kinds of errors in language instead of perpetuating them, because it has tangible effects on the engineers in the lab who actually do the damn work.

Part of what made the Moore's "Law" meme so sticky is how it was used, usually in a simple line graph, by "futurists" who can barely check their own email, to pen melodramatic, overhyped predictions about *when* we would have 'AI'.

AI hype is tied to computer performance, and Moore's "Law" was something air-head journalists could easily source, complete with a nice graph from a tech "expert".

I know my view of AI as a fiction is in the minority, but IMHO we need to grow up, stop with the reductive notion that computing is progressing towards some kind of 'AI' singularity and focus on making things that help people do work or play.

Our industry loses **BILLIONS** of dollars and hundreds of thousands of work-hours chasing a fiction when we could be making more useful, powerful, and imaginative things that meet actual, real-world human needs.

To bring this back to Moore's Law, let's work on better explaining the value of tech to non-techies. Let's give air-headed journalists something to sink their teeth into that will help our industry progress, not play the bullshit/hype game like every other industry.

Re:Moore's "law" & AI (0)

Anonymous Coward | about 10 months ago | (#45895449)

"Moore's Law" was expliclity marketing from the very fucking beginning!

Go look at the original paper; it actually has a cartoon of a very Steve Jobs-esque salesman with a table of Mac Mini-sized computers. The paper was about the economic trajectory of computers, not boring-ass material science.

Meanwhile you can shove airhead journalists up your expansive rectum, while you consider Intel's historical stock prices.

Re: Moore's Law isnt a law you know (0)

Anonymous Coward | about 10 months ago | (#45895321)

Its challenging nature... and the fact that the company that originated it felt a need to make it a reality to stay competitive and be the leading edge of computer technology. It worked.

Blind ants, now need to search more branches (3, Insightful)

ka9dgx (72702) | about 10 months ago | (#45895013)

Now the blind ants (researchers) will need to explore more of the tree (the computing problem space)... there are many fruits out there yet to be discovered; this is just the end of the very easy fruit. I happen to believe that FPGAs can be made much more powerful because of some premature optimization. Time will tell if I'm right or wrong.

Re: Blind ants, now need to search more branches (2)

jarfil (1341877) | about 10 months ago | (#45895123)

So true. I also happen to believe that adding an FPGA coprocessor to general purpose CPUs, that applications could reconfigure on the fly to perform certain tasks, could lead to massive increases in performance.

Re: Blind ants, now need to search more branches (4, Interesting)

gweihir (88907) | about 10 months ago | (#45895317)

As somebody who has watched what has been going on in that particular area for more than 2 decades, I do not expect anything to come out of it. FPGAs are suitable for doing very simple things reasonably fast, but so are graphics cards, and with a much better interface. But as soon as communication between computing elements or large memory is required, both FPGAs and graphics cards become abysmally slow in comparison to modern CPUs. That is not going to change, as it is an effect of the architecture. There will not be any "massive" performance increase anywhere now.

Re: Blind ants, now need to search more branches (3, Interesting)

InvalidError (771317) | about 10 months ago | (#45895517)

Programming FPGAs is far more complex than programming GPGPUs and you would need a huge FPGA to match the compute performance available on $500 GPUs today. FPGAs are nice for arbitrary logic such as switch fabric in large routers or massively pipelined computations in software-defined radios but for general-purpose computations, GPGPU is a much cheaper and simpler option that is already available on many modern CPUs and SoCs.

Re: Blind ants, now need to search more branches (2)

linearz69 (3473163) | about 10 months ago | (#45895577)

FPGAs are relatively expensive compared to graphics chips, actually most chips. It's still not clear what FPGAs can accomplish in a general computing platform that will be of value, considering the other lower-cost options available.

The other half of the problem here is that, in comparison to GPU programming interfaces such as OpenCL and CUDA, there is relatively little effort in bringing FPGA development beyond ASIC-Lite. The tool chains and development processes (what FPGA vendors like to call "design flow") are miles apart between FPGA and software code. Right now, SystemC is the only thing close to software development for FPGAs (mainly because of C++ syntax), but it really isn't that close. Also consider that there are really no common architectures - RTL synthesis can vary from part to part, and place & route is different for every flippen part number. This makes it nearly impossible for any third party, beyond pricey CAD vendors with cozy relationships with the FPGA manufacturers, to develop the libraries required to cleanly integrate FPGAs into software development.

The FPGA manufacturers have done quite well in the low-volume, high-margin game. They have no incentive to drop the cost required for consumer volumes. GPUs are a completely different story....

Re: Blind ants, now need to search more branches (0)

Anonymous Coward | about 10 months ago | (#45895687)

FPGAs are relatively expensive compared to graphics chips, actually most chips. It's still not clear what FPGAs can accomplish in a general computing platform that will be of value, considering the other lower-cost options available.

The other half of the problem here is that, in comparison to GPU programming interfaces such as OpenCL and CUDA, there is relatively little effort in bringing FPGA development beyond ASIC-Lite. The tool chains and development processes (what FPGA vendors like to call "design flow") are miles apart between FPGA and software code. Right now, SystemC is the only thing close to software development for FPGAs (mainly because of C++ syntax), but it really isn't that close. Also consider that there are really no common architectures - RTL synthesis can vary from part to part, and place & route is different for every flippen part number. This makes it nearly impossible for any third party, beyond pricey CAD vendors with cozy relationships with the FPGA manufacturers, to develop the libraries required to cleanly integrate FPGAs into software development.

The FPGA manufacturers have done quite well in the low-volume, high-margin game. They have no incentive to drop the cost required for consumer volumes. GPUs are a completely different story....

Yummm... $15 part... Zynq 7015. Is that what everyone is talking about? Need more space? Try the Zynq 7100. If you are not that motivated to get performance, compile SystemC code:

http://www.xilinx.com/products/silicon-devices/soc/zynq-7000/

Re:Blind ants, now need to search more branches (4, Funny)

crutchy (1949900) | about 10 months ago | (#45895225)

just need to shoot more advanced alien spaceships down near roswell

Ends of Moore's Law in software ? (5, Insightful)

Taco Cowboy (5327) | about 10 months ago | (#45895019)

The really sad thing regarding this "Moore's Law" thing is that, while the hardware has kept getting faster and more power-efficient, the software that runs on it has kept getting more and more bloated.

Back in the pre-8088 days we already had music notation software running on the Radio Shack TRS-80 Model III.

Back then, due to the constraints of the hardware, programmers had to use every trick in the book (and off it) to make their programs run.

Nowadays, even the most basic "Hello World" program comes in at the megabyte range.

Sigh!

Re:Ends of Moore's Law in software ? (-1)

Anonymous Coward | about 10 months ago | (#45895053)

Would you rather that your CPU and memory were always underutilized by software, going to waste?

Re:Ends of Moore's Law in software ? (4, Insightful)

lister king of smeg (2481612) | about 10 months ago | (#45895099)

Would you rather that your CPU and memory were always underutilized by software, going to waste?

yes more efficient and fast code would be much better

Re:Ends of Moore's Law in software ? (0)

Anonymous Coward | about 10 months ago | (#45895521)

Would you rather that your CPU and memory were always underutilized by software, going to waste?

yes more efficient and fast code would be much better

Then you should be using a 20-year-old computer, with its lean software and scarce resources. Why buy more powerful hardware if you have no use for its capabilities? The rest of us will keep using hardware that allows possibilities that were unheard of a few years ago.

Re:Ends of Moore's Law in software ? (4, Insightful)

lister king of smeg (2481612) | about 10 months ago | (#45895753)

Would you rather that your CPU and memory were always underutilized by software, going to waste?

yes more efficient and fast code would be much better

Then you should be using a 20-year-old computer, with its lean software and scarce resources. Why buy more powerful hardware if you have no use for its capabilities? The rest of us will keep using hardware that allows possibilities that were unheard of a few years ago.

To quote an old adage: what Moore's Law giveth, Gates taketh away.

I would prefer to use lean software on powerful hardware, so as to actually gain the advantages of said hardware, rather than have bad code and bloat roll back the advantages new hardware has given.

Re:Ends of Moore's Law in software ? (2)

purpledinoz (573045) | about 10 months ago | (#45895623)

It's cheaper to buy more computers than hire more programmers.

Re:Ends of Moore's Law in software ? (0)

Anonymous Coward | about 10 months ago | (#45895731)

Would you rather that your CPU and memory were always underutilized by software, going to waste?

yes more efficient and fast code would be much better

and you could use the extra remaining CPU power to help find a cure for cancer, HIV/AIDS, arthritis, etc.

Re:Ends of Moore's Law in software ? (3, Insightful)

MightyMartian (840721) | about 10 months ago | (#45895173)

Pointless cycles burned by poor code and poor compiler optimization are hardly what I would call "utilization".

Re:Ends of Moore's Law in software ? (3, Interesting)

jones_supa (887896) | about 10 months ago | (#45895209)

Would you rather that your CPU and memory were always underutilized by software, going to waste?

Of course, because then we would either save in power consumption or alternatively do more interesting stuff with the extra free resources that we get.

Re: Ends of Moore's Law in software ? (1)

jarfil (1341877) | about 10 months ago | (#45895259)

Yes: reducing energy consumption and heat dissipation, extending hardware lifetime, and being able to swiftly react to sudden spikes of activity would be excellent.

Re:Ends of Moore's Law in software ? (0)

Anonymous Coward | about 10 months ago | (#45895065)

Thank you Unicode!

Re:Ends of Moore's Law in software ? (1)

HiThere (15173) | about 10 months ago | (#45895179)

If you use utf8, then unicode is as efficient as ascii...for everything you can do with ascii. Nobody should be required to use utf32 except when it's the most convenient choice. (And there's never a good argument for using utf16, unless you're on a computer with 16 bit words.)

Re:Ends of Moore's Law in software ? (1)

jones_supa (887896) | about 10 months ago | (#45895501)

How do you know how much memory to allocate for a chunk of text if the character width varies?

Re:Ends of Moore's Law in software ? (0)

Anonymous Coward | about 10 months ago | (#45895541)

How do you know how much memory to allocate for a chunk of text if the character width varies?

The same way you would with ASCII: measure the length of the text before allocating memory. Unless you have a crystal ball, there's no other way.
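In C terms, a minimal sketch (assuming the input is already valid UTF-8): allocation goes by byte count, exactly as with ASCII, and counting code points is a separate, optional pass.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Count code points in valid UTF-8: continuation bytes look like 10xxxxxx. */
static size_t utf8_codepoints(const char *s)
{
    size_t count = 0;
    for (; *s; s++)
        if (((unsigned char)*s & 0xC0) != 0x80)
            count++;
    return count;
}

int main(void)
{
    const char *text = "naïve café";   /* mixes 1- and 2-byte sequences */
    size_t bytes = strlen(text);       /* this is what you allocate for */
    char *copy = malloc(bytes + 1);    /* +1 for the terminating NUL */
    if (!copy) return 1;
    memcpy(copy, text, bytes + 1);

    printf("bytes: %zu, code points: %zu\n", bytes, utf8_codepoints(copy));
    free(copy);
    return 0;
}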

Re:Ends of Moore's Law in software ? (1)

petermgreen (876956) | about 10 months ago | (#45895549)

If you use utf8, then unicode is as efficient as ascii.

For storage.

But when you actually come to display stuff, rendering engines that can handle massive character sets, variable-byte-count encodings of code points, converting mixed-directionality text from logical order to physical order, combining elements in various ways to apply arbitrary diacritics to a character, and so on don't come free. Having those things in your text rendering engine comes at a price even if you don't actually use them.

Re:Ends of Moore's Law in software ? (0)

Anonymous Coward | about 10 months ago | (#45895079)

Spending 10x the man-hours for a 10% performance gain, only to have to refactor the code shortly after, is not beneficial. If slow code causes more servers to be purchased, they will start to balance the cost of servers against the man-hours spent optimizing. I wouldn't mind a job doing optimizations; I already do that to a slight degree.

Re: Ends of Moore's Law in software ? (4, Insightful)

jarfil (1341877) | about 10 months ago | (#45895327)

How about spending 20x the man-hours for a 10,000% performance gain? That is what I've recently experienced myself, in reverse: an embedded device interface getting rewritten to require 20x fewer man-hours to maintain... at a 100x performance hit. Suffice it to say it went from quite snappy to completely useless, but it seems like it's my fault for not upgrading the hardware.

Re:Ends of Moore's Law in software ? (4, Insightful)

bloodhawk (813939) | about 10 months ago | (#45895101)

I don't find that a sad thing at all. The fact that people have to spend far less effort on code to make something that works is a fantastic thing that has opened up programming to millions of people who would never have been able to cope with the complex tricks we used to play to save every byte of memory and to prune every line of code. This doesn't mean you can't do those things, and I still regularly do when writing server-side code. But why spend man-years of effort optimising memory, CPU, and disk footprint when the average machine has an abundant surplus of all three?

Re: Ends of Moore's Law in software ? (2)

jarfil (1341877) | about 10 months ago | (#45895193)

The sad part is not that it's easier to code, but that many people have grown complacent with their code executing at abysmal performance rates. And I also blame compilers/interpreters that don't mind bloating things up.

Re:Ends of Moore's Law in software ? (0)

Anonymous Coward | about 10 months ago | (#45895203)

Why is that not sad? Many programmers don't even do the bare minimum to make their code decent. The situation is truly appalling.

Re:Ends of Moore's Law in software ? (1)

aztracker1 (702135) | about 10 months ago | (#45895337)

Most software is one-off software written as utilities and services in corporate environments... where working is more important. Much of the time, from a business perspective, hardware isn't shared in practice... that little HR app that only a half dozen people use gets its own server (no proper backup/redundancy, mind you, but its own box)... it's going to be under-utilized regardless of how well the code is written.

Where it becomes more important is in very large-scale systems... supporting a few dozen, or even a few hundred, users at a time is pretty easy to do with even modest hardware today. Servicing a million+ simultaneous users, that takes some effort, and even a lot of that is getting easier to do... just look at node.js and golang for example... or the increased interest in erlang, F# and other functional languages. It's going to be a very slow shift, but honestly, the kind and extent of optimization needed for large scale is about as rare as the people who are needed for writing enhanced drivers. Around the time we hit 1 GHz computers, the cost of more computers or faster computers became less than development time... If you have a development team that costs the company around $150-200k/year (salary + bonus + benefits), would you rather have a team of four devs spend 3-6 months optimizing, or buy a few servers for less than half the cost?

Re:Ends of Moore's Law in software ? (2)

K. S. Kyosuke (729550) | about 10 months ago | (#45895457)

Where it becomes more important is in very large-scale systems... supporting a few dozen, or even a few hundred, users at a time is pretty easy to do with even modest hardware today.

Yeah, you say "support users", but supporting users *well* can be computationally expensive enough to warrant diligent programming. Think automated knowledge systems, for example.

Re: Ends of Moore's Law in software ? (1)

aztracker1 (702135) | about 10 months ago | (#45895525)

What I mean is that using abstractions where they make sense is not always the most performant code, but it can be far more easily understood. Breaking apart larger methods into smaller ones and exercising modularity are not better-performing, but are better for ensuring desired results and reducing the chances of side effects and bugs. The most optimized code is not the same as the best code for a given system.

Re:Ends of Moore's Law in software ? (1)

smash (1351) | about 10 months ago | (#45895583)

Most software is one-off software written as utilities and services in corporate environments... where working is more important.

... which eventually becomes entrenched, undocumented, the original author leaves and it gets scaled to far beyond its original intent.

Re:Ends of Moore's Law in software ? (1)

ppanon (16583) | about 10 months ago | (#45895601)

that little HR app that only a half dozen people use gets its own server (no proper backup/redundancy, mind you, but its own box).

Apparently you've been under a rock for the last 5 years and completely missed the move to virtualization. Can you guess what the main business driver is? More efficient use of computing resources, power, and data centre infrastructure resulting from consolidation.

Re:Ends of Moore's Law in software ? (1)

phantomfive (622387) | about 10 months ago | (#45895215)

Back in the pre-8088 days we already had music notation software running on the Radio Shack TRS-80 Model III.

In 4-bit color on a 640x480 screen, with ugly fonts (if you could even get more than one!) and lousy, cheap sounding midi playback. Seriously, TRS-80 music notation software was severely limited compared to what we have today.

Re:Ends of Moore's Law in software ? (0)

Anonymous Coward | about 10 months ago | (#45895523)

Back in the pre-8088 days we already had music notation software running on the Radio Shack TRS-80 Model III.

In 4-bit color on a 640x480 screen, with ugly fonts (if you could even get more than one!) and lousy, cheap sounding midi playback. Seriously, TRS-80 music notation software was severely limited compared to what we have today.

Model III, youngster. Monochrome with 128x48 graphics and maybe an internal thing that went "beep".

Re:Ends of Moore's Law in software ? (2)

Guy Harris (3803) | about 10 months ago | (#45895297)

Nowadays, even the most basic "Hello World" program comes up in megabyte range.

The most basic "Hello World" program doesn't have a GUI (if it has a GUI, you can make it more basic by just printing with printf), so let's see:

$ ed hello.c
hello.c: No such file or directory
a
#include <stdio.h>

int
main(void)
{
printf("Hello, world!\n");
}
.
w
67
q
$ gcc -o hello -Os hello.c
$ size hello
__TEXT __DATA __OBJC others dec hex
4096 4096 0 4294971392 4294979584 100003000

I'm not sure what "others" is, but I suspect there's a bug there (I'll take a look). 4K text, 4K data (that's the page size), which isn't too bad; the bulk of the work is done in a library, though - it's a shared library, and this OS doesn't support linking statically with libSystem, so it's hard to tell how much code is dragged in by printf. The actual file size isn't that big:

$ ls -l hello
-rwxr-xr-x 1 gharris wheel 8752 Jan 7 21:58 hello

Re:Ends of Moore's Law in software ? (1)

JanneM (7445) | about 10 months ago | (#45895529)

What do you suggest we do with all the computing power we've gained then? It seems perfectly reasonable to use it to make software development easier and faster, and make more beautiful and more usable user interfaces.

Re: Ends of Moore's Law in software ? (1)

clickclickdrone (964164) | about 10 months ago | (#45895545)

Agreed. Seems amazing now that you could get spreadsheets and word processors running in 8 to 48k. The only time in recent years I've seen amazing efficiency was a graphics demo that drew a fully rendered scene using algorithmically generated textures. Demo ran for about 5 minutes scrolling about the buildings and hills and was only about 150k

Re:Ends of Moore's Law in software ? (2)

Greyfox (87712) | about 10 months ago | (#45895553)

I did a static-compiled "Hello World" program a while back and found that it was only a few kilobytes, which is still a lot but way less than I expected. A C function I wrote to test an assembly language function I also wrote recently came in at about 7000 bytes. That's better...

There have been several occasions where I've seen a team "solve" a problem by throwing another couple of gigabytes at a Java VM and adding a task to reboot the system every couple of days. I've lost count of the times where simply optimizing SQL I was looking at (sometimes by rewriting it, sometimes by adding an index) has resulted in hour-long tasks suddenly completing in a minute or two. There's plenty of room to get more performance out of existing hardware, that's for sure!

And best of all... (4, Insightful)

pushing-robot (1037830) | about 10 months ago | (#45895023)

We might even stop writing everything in Javascript?

Re:And best of all... (1)

Anonymous Coward | about 10 months ago | (#45895083)

You fool! There must always be a Javascript.

Re:And best of all... (0)

Anonymous Coward | about 10 months ago | (#45895221)

Like that matters. Javascript is already only 1.5-3x slower than plain C/C++ in non-vectorized code. And the difference is shrinking all the time; I understand many JS engine developers are now working on autovectorization. There's little reason why Javascript can't eventually achieve performance parity with C/C++.

Re:And best of all... (1)

smash (1351) | about 10 months ago | (#45895585)

Of course, once it's burned the CPU to be JIT-ed.

Re: And best of all... (1)

jarfil (1341877) | about 10 months ago | (#45895341)

Fun thing about JavaScript: it's possibly the single language that has been optimized the most of late. From a sluggish thing at the dawn of the century, to a snappy language for developing web apps, and an even snappier node.js on the server side.

Re: And best of all... (0)

Anonymous Coward | about 10 months ago | (#45895727)

This is true. Massive engineering efforts have been put behind JavaScript to make it really fast.

Re:And best of all... (5, Funny)

Urkki (668283) | about 10 months ago | (#45895481)

We might even stop writing everything in Javascript?

Indeed. JavaScript is the assembly language of the future, and we need to stop coding in it. There already are many nicer languages which are then compiled into Javascript, ready for execution in any computing environment.

Not really a 'law' anyway (0)

Anonymous Coward | about 10 months ago | (#45895025)

They should have called it "Moore's Trend" or "Moore's Observation."

Re: Not really a 'law' anyway (0)

Anonymous Coward | about 10 months ago | (#45895349)

"Moore's Self-fulfilled Prophecy" would be also spot on.

Re:Not really a 'law' anyway (1)

weakref (2554172) | about 10 months ago | (#45895351)

More like "Moore's Promise/Contract"

Mmm. Skynet. (2)

xtal (49134) | about 10 months ago | (#45895031)

..took us in directions we hadn't considered.

Forget the exact quote, but what a time to be alive. My first computer program was written on a Vic-20. Watching the industry grow has been incredible... I am not worried about the demise of traditional lithographic techniques... I'm actually expecting the next generation to provide a leap in speed, as now there's a strong incentive to look at different technologies.

Here's to yet another generation of cheap CPU.

Imagine that... (1)

ItMustBeEsoteric (732632) | about 10 months ago | (#45895047)

We might stop seeing ridiculous gains in computing power, and might have to start making gains in software efficiency.

feature bottlenck (1)

globaljustin (574257) | about 10 months ago | (#45895269)

We might stop seeing ridiculous gains in computing power, and might have to start making gains in software efficiency.

I agree in spirit, but you perpetuate a false dichotomy based on a misunderstanding of *why* software bloat happens, in the broad industry-wide context.

Just look at Windows. M$ bottlenecked features willfully because it was part of their business plan.

Coders, the people who actually write the software, have always been up to the efficiency challenge. The problem is the biggest money wasn't paying for ninja-like efficiency of executing user instructions.

It was about marketing and ass-backwards profit models forced onto the work of making good code.

I've often observed that in order to do the most desirable work, a coder would have to sacrifice the very thing that made them want to work on the best software in the first place...

Re:feature bottlenck (0)

Anonymous Coward | about 10 months ago | (#45895505)

Coders, the people who actually write the software, have always been up to the efficiency challenge.

The minority of coders, you mean. Most coders are and always have been very, very crappy.

Demand shifted? (1)

Tablizer (95088) | about 10 months ago | (#45895073)

Perhaps the focus on portable computing has pulled research money away from high-end chips. The demand for high-end PCs is flattening out, and so R&D money is being pulled out of there and into ARM-level chips, to get existing CPU power into smaller, battery-friendly boxes.

Re:Demand shifted? (1)

Anonymous Coward | about 10 months ago | (#45895255)

Once upon a time I updated my computer every time new ones were 4x faster. And that was often, in the nineties! But suddenly the progress stopped last decade. I'd guess single-threaded performance in 2000 was only about 4x slower than today's. In the nineties, that kind of jump sometimes happened in a year or less.

Re:Demand shifted? (1)

Billly Gates (198444) | about 10 months ago | (#45895335)

Still happening in mobile. Very large difference between the Galaxy S1 and the Galaxy S4; the Galaxy S1 was painfully slow. The iPhone 5S is probably a good 10x or maybe 15x as fast as the original iPhone.

Ridiculous (0)

Anonymous Coward | about 10 months ago | (#45895077)

Ridiculous, talking about Moore's law as if it were a part of nature. When in fact it is the product of innovation and ingenuity, which you say it hampers. It's not Moore's law that's slowing down innovation; it's lack of innovation that is invalidating Moore's law.

Software improvements matter more than hardware (3, Interesting)

JoshuaZ (1134087) | about 10 months ago | (#45895127)

This is ok. For many purposes, software improvements in terms of new algorithms that are faster and use less memory have done more for heavy-duty computation than hardware improvement has. Between 1988 and 2003, linear programming on a standard benchmark improved by a factor of about 40 million. Out of that improvement, about 40,000 was from improvements in software and only about 1,000 from improvements in hardware (these numbers are partially not well-defined because there's some interaction between how one optimizes software for hardware and the reverse). See this report http://www.whitehouse.gov/sites/default/files/microsites/ostp/pcast-nitrd-report-2010.pdf [whitehouse.gov] . Similar remarks apply to integer factorization and a variety of other important problems.

The other important issue related to this, is that improvements in algorithms provide ever-growing returns because they can actually improve on the asymptotics, whereas any hardware improvement is a single event. And for many practical algorithms, asymptotic improvements are occurring still. Just a few days ago a new algorithm was published that was much more efficient for approximating max cut on undirected graphs. See http://arxiv.org/abs/1304.2338 [arxiv.org] .

If all forms of hardware improvement stopped today, there would still be massive improvement in the next few years on what we can do with computers simply from the algorithms and software improvements.

Re:Software improvements matter LESS than hardware (2)

Anonymous Coward | about 10 months ago | (#45895507)

Misleading.

Yes, I've got 100-fold improvements on a single image processing algorithm. It was pretty easy as well.
However, that only speeds up that one algorithm; 10x faster hardware speeds everything up 10x.

Use of interpreted languages and bloated code has more than equalled the gains from algorithms.

The net result overall is that 'performance' increase has been mostly due to hardware, not software.

Re:Software improvements matter more than hardware (3, Interesting)

Taco Cowboy (5327) | about 10 months ago | (#45895511)

Between 1988 and 2003, linear programming on a standard benchmark improved by a factor of about 40 million. Out of that improvement, about 40,000 was from improvements in software and only about 1,000 from improvements in hardware (these numbers are partially not well-defined because there's some interaction between how one optimizes software for hardware and the reverse).

I downloaded the report at the link that you have so generously provided - http://www.whitehouse.gov/sites/default/files/microsites/ostp/pcast-nitrd-report-2010.pdf - but I found the figures somewhat misleading.

In the field of numerical algorithms, however, the improvement can be quantified. Here is just one example, provided by Professor Martin Grötschel of Konrad-Zuse-Zentrum für Informationstechnik Berlin. Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later, in 2003, this same model could be solved in roughly 1 minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms! Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008.

Professor Grötschel's observation was in regard to "numerical algorithms", and no doubt, there have been some great improvements achieved due to new algorithms. But that is just one tiny aspect of the whole spectrum of the programming scene.

Outside that tiny segment of numerical crunching, bloatware has emerged everywhere.

While hardware speed has accelerated 1,000x (as claimed by the kind professor), the speed of software in solving the myriad other problems hasn't exactly kept up.

I have invested more than 30 years of my life in the tech field, and compared to what we had achieved in software back in the late 1970s, what we have today is astoundingly disappointing.

Back then, RAM was counted in KB, and storage in MB was considered "HUGE".

We had to squeeze every single ounce of performance out of our programs just to make them run at decent speed.

No matter if it was a game of "pong" or numerical analysis, everything had to be considered, and more often than not we got down to the machine level (yes, we coded one step below assembly language) so as to minimize the "waste", counting down to each and every single cycle.

Yes, many of the younger generation will look at us as though we old farts are crazy, but our quest in fighting against the hardware limitations was, at least to those of us who went through it all, extremely stimulating.

Re:Software improvements matter more than hardware (1)

Anonymous Coward | about 10 months ago | (#45895531)

Unfortunately, in some cases (e.g. solving linear algebra systems) we have obvious lower bounds - good luck ever coming up with a general dense matrix-vector multiply that's faster than n^2 (the matrix-matrix exponent is down to roughly 2.37 formally, I think) - with no possibility of ever improving on them.

But it's true that, where possible, algorithmic improvements improve everything now and forever. The widespread use of multigrid methods as preconditioners & solvers in implicit CFD and Poisson solvers, typically replacing (I believe) ADI methods, was earth-shattering... Going from n log n to just n for a system of size n is gigantic when n is billions.

PS: the parent's PCAST PDF gives his quoted figures on document page 71 = PDF page 98.

Implications (3, Interesting)

Animats (122034) | about 10 months ago | (#45895131)

Some implications:

  • We're going to see more machines that look like clusters on a chip. We need new operating systems to manage such machines. Things that are more like cloud farm managers, parceling out the work to the compute farm.
  • Operating systems and languages will need to get better at interprocess and inter-machine communication. We're going to see more machines that don't have shared memory but do have fast interconnects. Marshalling and interprocess calls need to get much faster and better. Languages will need compile-time code generation for marshalling (a rough sketch of what hand-rolled marshalling looks like today follows this list). Programming for multiple machines has to be part of the language, not a library.
  • We'll probably see a few more "build it and they will come" architectures like the Cell. Most of them will fail. Maybe we'll see a win.
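On the marshalling point, here is a bare-bones C sketch of what hand-rolled marshalling looks like today (the struct and wire layout are invented for illustration, not any particular RPC system): fixed-width fields written out in a defined byte order, exactly the fiddly, repetitive code you'd want a compiler to generate for you.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* A message we might ship between nodes that don't share memory. */
struct job {
    uint32_t id;
    uint16_t priority;
    double   deadline;
};

/* Hand-written marshalling: fixed-width fields, big-endian on the wire. */
static size_t marshal_job(const struct job *j, unsigned char *buf)
{
    uint64_t dbits;
    memcpy(&dbits, &j->deadline, sizeof dbits);   /* raw bits of the double */

    unsigned char *p = buf;
    for (int i = 3; i >= 0; i--) *p++ = (unsigned char)(j->id >> (8 * i));
    for (int i = 1; i >= 0; i--) *p++ = (unsigned char)(j->priority >> (8 * i));
    for (int i = 7; i >= 0; i--) *p++ = (unsigned char)(dbits >> (8 * i));
    return (size_t)(p - buf);                     /* 14 bytes, no padding */
}

int main(void)
{
    struct job j = { 42, 7, 123.5 };
    unsigned char wire[14];
    size_t n = marshal_job(&j, wire);

    printf("marshalled %zu bytes:", n);
    for (size_t i = 0; i < n; i++) printf(" %02x", wire[i]);
    printf("\n");
    return 0;
}

Every field, every struct, every version change means more of this by hand; generating it at compile time is the obvious win.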

Re:Implications (0)

Anonymous Coward | about 10 months ago | (#45895439)

>We're going to see more machines that look like clusters on a chip.

What part of "we can't squeeze any more transistors onto a chip" do you not understand?

Re:Implications (0)

Anonymous Coward | about 10 months ago | (#45895547)

Don't be so nasty...you could have just said "it's getting impossible to squeeze any more transistors onto a chip".

dumbest thing I've read all day (1)

phantomfive (622387) | about 10 months ago | (#45895191)

This quote from the article: " But it may be a blessing to say goodbye to a rule that has driven the semiconductor industry since the 1960s." is surely the dumbest thing I've read all day.

Seriously? It's like, people wake up and say, "it would be such a blessing if I could never get a faster computer." Does that make sense at all?

Re:dumbest thing I've read all day (2)

Billly Gates (198444) | about 10 months ago | (#45895319)

Tell that to the XP holdouts.

No newer technology means no change, and people keep using the best ever made solely because it's familiar.

Re:dumbest thing I've read all day (1)

phantomfive (622387) | about 10 months ago | (#45895333)

I can understand the XP holdouts, sort of. They have some features they like or something.

With Moore's law, we're talking about faster processors. No changes necessary, other than your motherboard. I've never met anyone who was in love with their motherboard.

Re:dumbest thing I've read all day (1)

Billly Gates (198444) | about 10 months ago | (#45895353)

I love mine compared to my last one.

Anything with a chip, including chipsets, I/O controllers like Thunderbolt, the GPU, etc., all benefits and changes.

There is a reason SATA and Thunderbolt 2 did not exist in 1999: the chips and power for them would have been impossible, or very size- and cost-prohibitive.

Re:dumbest thing I've read all day (1)

Chuck Chunder (21021) | about 10 months ago | (#45895347)

That may have been the dumbest thing you've read all day but to be fair that was before your comment was written.

The intent behind that sentence seems fairly clear, that the end of predictable speed increases may lead to greater focus on whole other avenues of development and other unpredictable and exciting ideas popping up.

Re:dumbest thing I've read all day (1)

phantomfive (622387) | about 10 months ago | (#45895409)

The intent behind that sentence seems fairly clear, that the end of predictable speed increases may lead to greater focus on whole other avenues of development and other unpredictable and exciting ideas popping up.

Yes, why don't we go back to the abacus, and see what new ideas come up!

Seriously, do you remember when microcomputers came out? Academics complained that they were setting the computer world back three decades. That's basically how you sound.

Feature size vs "Feature size" (3, Interesting)

radarskiy (2874255) | about 10 months ago | (#45895229)

The defining characteristic of the 7nm node is that it's the one after the 10nm node. I can't remember the last time I worked in a process where there was a notable dimension that matched the node name, either drawn or effective.

Marc Snir gets bogged down in an analysis of gate length reduction, which is quite beside the point. If it gets harder to shrink the gate than to do something else, then something else will be done. I've worked on processes with the same gate length as the "previous" process, and I've probably even worked on a process that had a larger gate than the previous process. The device density still increased, since gate length is not the only dimension.

Re:Feature size vs "Feature size" (0)

Anonymous Coward | about 10 months ago | (#45895701)

I can't remember the last time I worked in a process where there was a notable dimension that matched the node name, either drawn or effective.

I think 180nm was the last, but I can't find the source paper any more. 180nm as in "early-2001 Pentium 3".

Governments? Nonsense! (1)

gweihir (88907) | about 10 months ago | (#45895277)

If, just if, anything can be done, governments will not play any significant role in this process. They do not even seem to understand the problem; how could they ever be part of the solution? And that is just it: it is quite possible that there is no solution, or that it may take decades or centuries for that one smart person to be in the right place at the right time. Other than blanket research funding, governments cannot do anything to help that. Instead, scientific funding today is only given to concrete applied research that promises specific results. That is not going to help make any fundamental breakthrough, quite the opposite.

Personally, I expect that is it for computing hardware for the next few decades, or possibly permanently. I do not see any fundamental issue with that. And there would be quite a bit of historical precedent for a technology to slowly begin to mature.

As software is so incredibly unrefined these days, that would be a good thing. It would finally be possible to write reasonable standard components for most things, instead of the bloated, insecure mess so common these days. It would also be possible to begin restricting software creation to those that actually have a gift for it, instead of having software created by the semi-competent and the incompetent (http://www.codinghorror.com/blog/2010/02/the-nonprogramming-programmer.html). In the end, the fast progress of computing (which burned through a few centuries of fundamental research) was not a good thing. Things will be moving much slower, while new fundamental research results are created instead of merely consumed.

cubes (1)

Billly Gates (198444) | about 10 months ago | (#45895299)

One of the reasons our brains are more powerful is that we are 3D while a chip is still stuck in 2D.

When will we see 3D or many-layered chips turning simple instructions into complex ones, similar to a neural net?

Or, if machines are fast enough (witness the holdouts on 10-year-old XP boxen), will no one care? When can we have our own C-3POs?

Re:cubes (0)

Anonymous Coward | about 10 months ago | (#45895573)

It turns out that building 3D chips is actually sort of difficult. The entire (exquisitely refined) process of building 2D chips is based on altering the surface of a nearly flawless silicon monocrystal that is perfectly flat - not a single atomic layer out of place - on hand-sized dimensions. Once it's been through several hundred lithography steps that may have dropped literally dozens of layers of exotic insulators and metal interconnects and embedded dopant ions all over the place, you can't exactly do that again to create layer #2.

That and the fact that layer #1 already has a heat output density comparable to nuclear reactor fuel...

Re:cubes (1)

smash (1351) | about 10 months ago | (#45895597)

Intel is already doing it [intel.com].

Slides were kind of cool (1)

Demonantis (1340557) | about 10 months ago | (#45895371)

Picture a supercomputer so massive that components fail as fast as we can replace them. Now that is a big supercomputer. This is the issue: supercomputers have a physical limit. If the node power doesn't grow, we will reach a limit on simulation power. It will be interesting to see how the CPU matures. That means more features will be developed beyond raw power.

Re:Slides were kind of cool (1)

Anonymous Coward | about 10 months ago | (#45895619)

Fault-tolerant versions of MPI are already under draft.

In the future expect to see hardware fault-detection that triggers the entire MPI process to pause while the entire memory image on the failed node copies itself to a hot spare node and resumes from there.
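Until that lands, fault tolerance mostly means plain application-level checkpointing. A bare-bones C/MPI sketch (the filenames and interval are made up for illustration): every rank periodically dumps its state so a replacement node could resume from the last snapshot.

#include <mpi.h>
#include <stdio.h>

#define CHECKPOINT_EVERY 100   /* made-up interval, tune for real jobs */

/* Dump this rank's state so a hot spare can resume from the last snapshot. */
static void checkpoint(int rank, int step, const double *state, int n)
{
    char path[64];
    snprintf(path, sizeof path, "ckpt_rank%03d.dat", rank);  /* hypothetical name */
    FILE *f = fopen(path, "wb");
    if (!f) return;
    fwrite(&step, sizeof step, 1, f);
    fwrite(state, sizeof *state, (size_t)n, f);
    fclose(f);
}

int main(int argc, char **argv)
{
    int rank;
    double state[1024] = { 0 };

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int step = 0; step < 1000; step++) {
        state[step % 1024] += rank + 1;          /* stand-in for real work */

        if (step % CHECKPOINT_EVERY == 0) {
            MPI_Barrier(MPI_COMM_WORLD);         /* keep snapshots consistent */
            checkpoint(rank, step, state, 1024);
        }
    }

    MPI_Finalize();
    return 0;
}

The hardware-triggered, transparent version described above would do the same thing without the application having to schedule it.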

There is room for even further MTBF improvement in computer hardware. The move to solid-state storage (assuming $/TB can ever catch up to rotating disks) and the move to hydrodynamic bearings in fans will eliminate the last points of moving-contact wear-out, albeit at the price of introducing the SSD's finite-write-cycles problem. Following that, anticipate the phase-out of liquid-dielectric and even "solid tantalum" capacitors in favor of bigger and bigger MLCCs, which basically leaves nothing other than electrothermal migration within chips to cause failure-to-execute.

Long term installation/reliability presents other hazards - ROM/EEPROM/SSD memory cell depolarization, board thermal cycling, mechanical mounting stress, vibration from the power supply regulator coils, etc - but these are unlikely to impact supercomputer installations that will probably be thrown out as "no longer worth the power to run" before then.

Time to start putting money back into R & D. (1)

jzatopa (2743773) | about 10 months ago | (#45895557)

It feels like we have been coasting a bit with technology the past few years so this doesn't surprise me. Everything is just a smaller/faster version of the year before.

well then... (1)

smash (1351) | about 10 months ago | (#45895571)

... time to stop writing garbage in visual basic, man up, and use proper languages again that are actually efficient, isn't it?

50s-90s (0)

Anonymous Coward | about 10 months ago | (#45895607)

From the 50s to the 90s, businesses ran fine with basically no PCs or 8086 machines.
Once the wall hits, businesses will run fine too.
What I expect to see more of is refinement of technology that makes things work better.
Example: in the 70s you could buy a small portable tire repair kit. The problem was there was a metal seal across the opening of the tube of vulcanizing material; on the street it was a bitch to open. By the 80s the same kit was available, but on the outside edge of the cap was a small spike. You unscrewed the cap, put it on backwards and pushed in. The spike would puncture the metal seal.

One possible future example: a build process for a Linux kernel that scans the equipment and creates the optimal kernel configuration.

Carbon Nanotubes == End of Moore's Law? (1)

nateman1352 (971364) | about 10 months ago | (#45895665)

So... TFA says that the end of Moore's law will result in a ton of new innovations aimed at... making computers faster??? Last time I checked making computers faster is what Moore's law is all about.

I think a better title would be "End of Traditional Silicon/CMOS Technology Forcing Radical Innovation to Keep Moore's Law Going!"

End of Moore's Law? (1)

Alex Vulpes (2836855) | about 10 months ago | (#45895715)

Come on now, someone always predicts an end to Moore's Law every so often, but it never happens.

There is no beginning. There is no end. There is only Moore.

wrong again (0)

Anonymous Coward | about 10 months ago | (#45895729)

https://www.youtube.com/watch?v=cugu4iW4W54

This year, she and her team announced they had made the first ever single atom transistor.

What's old is new again. (1)

Ultracrepidarian (576183) | about 10 months ago | (#45895749)

Perhaps writing efficient code will come back into style.
