
Progress In Algorithms Beats Moore's Law

timothy posted more than 3 years ago | from the when-bottlenecks-swap dept.

Software 166

Relic of the Future writes "Seen on the blog 'Algorithmic Game Theory,' a report to Congress and the President about past and future advances in information technology notes that, while improvements in hardware accounted for an approximately 1,000-fold increase in calculation speed over a 15-year span, improvements in algorithms accounted for a more than 43,000-fold increase."


166 comments

First post algo (1)

Anonymous Coward | more than 3 years ago | (#34663034)

And yet first posts never seem to get any faster

Re:First post algo (2)

Concerned Onlooker (473481) | more than 3 years ago | (#34663080)

return First_post_algo();

Re:First post algo (2)

davester666 (731373) | more than 3 years ago | (#34663216)

Now, you have to disassemble the binary to make sure the compiler properly implemented tail recursion!

Re:First post algo (0)

Anonymous Coward | more than 3 years ago | (#34663126)

Cool first post you got there, but check out my doubles, bro.

But we made up in ... (4, Funny)

sourcerror (1718066) | more than 3 years ago | (#34663070)

more bloated GUI libraries (bling), and comfier runtimes (different VMs).

Re:But we made up in ... (4, Insightful)

betterunixthanunix (980855) | more than 3 years ago | (#34663210)

Well, the real question is, are users better off as a result? Personally, I found that most of the computationally intensive graphics were not actually helping me get my work done any faster, switched to a more minimal window manager and did all that I could to reduce the amount of CPU time spent on rendering the GUI (disabling special effects wherever I saw them), and discovered that, in fact, I was correct: none of those effects were actually helping me get my work done.

On the flip side, people tend to be turned off when they see what my screen looks like. It has gotten to the point where I do not mention that this is "Linux," because I do not want new users to get scared. In the end, looking pretty winds up being more important than how efficiently you use computational resources.

Re:But we made up in ... (1)

MichaelSmith (789609) | more than 3 years ago | (#34663230)

At work I run fvwm with the mouse configured for left hand use with xmodmap and Button3 on the title bar set to close the window. Right handed users inevitably grab the mouse with their right hand and try to move a window with their Button1.

[OT] fvwm (2)

lkcl (517947) | more than 3 years ago | (#34663446)

oo - please post your ~/.fvwmrc online somewhere! *excited*. i run fvwm too, i run fvwm too!

Re:[OT] fvwm (3)

MichaelSmith (789609) | more than 3 years ago | (#34664308)

Okay, here it is [glitch.tl]. I am in a bit of a rush, sorry. You will have to fix a few things, like putting in your own screen background. It has an fvwm script for monitoring nodes called "System Status". Generally Button1 is for starting things and Button2 is for configuration. It is set up to use ssh-askpass to set your ssh passphrase.

Got to go. My wife is insisting I sit down for Christmas dinner.

Re:But we made up in ... (1)

cskrat (921721) | more than 3 years ago | (#34664096)

I get a similar effect by using a Dvorak keyboard layout at work (keycaps still in qwerty). Great fun when a coworker tries to enter a password on my keyboard.

Re:But we made up in ... (2)

martin-boundary (547041) | more than 3 years ago | (#34664190)

Why bother with window titles at all? You can disable all window decorations, and reprogram the mouse buttons to act on the *whole* window client area. Much easier to move/resize/destroy that way. All you have to do is use the extra mouse buttons that come with practically all mice, and that nobody ever uses for anything anyway.

On my fvwm, I've set up buttons 8 and 9 as follows: double-click 8 -> Hide, click-drag 8 -> Resize, double-click 9 -> Kill, click-drag 9 -> Move. It seemed a bit weird for the first two hours, but it was surprisingly easy to get used to, and ever since I've found that I actually move and resize windows a lot more frequently than before, because the mouse gestures are so natural and don't need to be done in a small predefined space.

Re:But we made up in ... (1)

MichaelSmith (789609) | more than 3 years ago | (#34664346)

I suppress titles on some window classes. NEdit has tabs and an information field to tell me which file has focus, so the title bar is redundant there. I might give your approach a go. It sounds interesting.

Re:But we made up in ... (1)

martin-boundary (547041) | more than 3 years ago | (#34664498)

I also like to have the window titles displayed in the background menu, eg. something like

DestroyFunc invokeRootMenu
AddToFunc invokeRootMenu
+ I DestroyMenu recreate RootMenu
+ I AddToMenu RootMenu "r00tMenu [Desk $[desk.n]]" Title
+ I AddToMenu RootMenu "New" Function NewTerm
+ I All (Iconic,CurrentScreen) AddToMenu RootMenu ($[w.name])\ [$[w.desk]] WindowId $[w.id] SelectWindow
+ I All (!Iconic,CurrentScreen) AddToMenu RootMenu $[w.name]\ [$[w.desk]] WindowId $[w.id] SelectWindow
+ I AddToMenu RootMenu "" Nop
+ I AddToMenu RootMenu "Restart" Restart fvwm2
+ I AddToMenu RootMenu "Quit" Quit
+ I Popup RootMenu

Ideally, if you open a lot of terminals, you'll want to display the current shell command in the title with something like BASH_COMMAND or whatever your shell uses. That way it shows up in the menu above. The trick is to update the title just before the command gets executed.

To get the mouse behaviour I was talking about, try something along these lines:

DestroyFunc KillMove
AddToFunc KillMove
+ H Move
+ D Close
+ C Move

DestroyFunc HideResize
AddToFunc HideResize
+ H Resize
+ D Iconify true
+ C Raise
+ C Resize

Silent Mouse 9 W N Function HideResize
Silent Mouse 8 W N Function KillMove

Re:But we made up in ... (1)

MichaelSmith (789609) | more than 3 years ago | (#34664618)

I also like to have the window titles displayed in the background menu, eg. something like

DestroyFunc invokeRootMenu
AddToFunc invokeRootMenu
+ I DestroyMenu recreate RootMenu
+ I AddToMenu RootMenu "r00tMenu [Desk $[desk.n]]" Title
+ I AddToMenu RootMenu "New" Function NewTerm
+ I All (Iconic,CurrentScreen) AddToMenu RootMenu ($[w.name])\ [$[w.desk]] WindowId $[w.id] SelectWindow
+ I All (!Iconic,CurrentScreen) AddToMenu RootMenu $[w.name]\ [$[w.desk]] WindowId $[w.id] SelectWindow
+ I AddToMenu RootMenu "" Nop
+ I AddToMenu RootMenu "Restart" Restart fvwm2
+ I AddToMenu RootMenu "Quit" Quit
+ I Popup RootMenu

Mouse 1 R A WindowList

...would appear to do the same thing apart from being able to add the extra functions to it.

Re:But we made up in ... (1)

bzipitidoo (647217) | more than 3 years ago | (#34664374)

More screen space was one of the most important improvements for me. You don't have to do as much moving and resizing ;). I was amazed what a difference it made moving from a 15" monitor at 1024x768 to a 19" monitor at 1600x1200. And now 24" widescreen LCDs and dual-monitor setups are common. Keep the PDF viewer and browser with reference material and search results on one monitor, and have an integrated development environment on the other. I'm looking forward to 6'x3' tabletop-size monitors.

Re:But we made up in ... (1)

I(rispee_I(reme (310391) | more than 3 years ago | (#34664350)

I've always thought that people who insisted on left-handed mouse configs were pansies, and in fact, every one I've heard complain about it turned out to suffer chronic Munchausen's syndrome, with many other symptoms.

I am left-handed. The QWERTY layout is one of the few tools that is specifically biased in southpaws' favor. The keyboard is by most measures a more complex object than the mouse, the latter of which can be flailed about until it works.

It's far easier to learn to use a mouse using your right hand than learn to type using the crappy half of the alphabet.

Why would you ever want to take your smart hand off the good half of the keyboard? It's like ignoring a natural 20.

Re:But we made up in ... (1)

MichaelSmith (789609) | more than 3 years ago | (#34664376)

The left-handed configuration is more balanced. Backspace, delete and enter are all on the right side of the keyboard. I can select an object with my left hand and activate it with the enter key using my right hand.

Re:But we made up in ... (1)

Undead Waffle (1447615) | more than 3 years ago | (#34664460)

The biggest flaw I see with your logic is that the mouse is more of a precision device, so it makes sense to use it with your stronger hand.

I am left handed but I use the mouse right handed. Not because it's "better" but because it's a standard setup and there are so many right handed mice I don't want to limit myself.

Also, when I'm using the mouse and keyboard together I'm rarely typing. Usually I'm hitting backspace/delete, enter, etc. Not that it matters that I have to reach my left hand across the keyboard because it takes me about as long to move my right hand back to the keyboard as it does to move my left hand back to where it should be.

Re:But we made up in ... (1)

644bd346996 (1012333) | more than 3 years ago | (#34664500)

My setup: A regular mouse on the right for gaming and occasional casual use, and a tablet for anything requiring precision or lots of GUI interaction (and for some RTS games). That way, I'm not developing muscle memory that is in any way backwards, but I still get to take advantage of the better dexterity in my left hand.

I do quite like the layout of laptops like the MacBook Pro, with a big touchpad centered under the keyboard, so that you can use it with either hand. Even the multitouch gestures added by jitouch are ambidextrous, so I can switch between hands easily, depending on what side of the keyboard I need to use more.

Re:But we made up in ... (1)

ToasterMonkey (467067) | more than 3 years ago | (#34664386)

Well, the real question is, are users better off as a result? Personally, I found that most of the computationally intensive graphics were not actually helping me get my work done any faster, switched to a more minimal window manager and did all that I could to reduce the amount of CPU time spent on rendering the GUI (disabling special effects wherever I saw them), and discovered that, in fact, I was correct: none of those effects were actually helping me get my work done.

On the flip side, people tend to be turned off when they see what my screen looks like. It has gotten to the point where I do not mention that this is "Linux," because I do not want new users to get scared. In the end, looking pretty winds up being more important than how efficiently you use computational resources.

I'm not going to argue the point; on a Linux desktop, anyway, the special effects are mostly just that, special effects. But you can end up in a situation like ripping the sound-dampening material out of a car: it makes the car more efficient, but it is not worth it 99.9% of the time. Animation is used in interfaces on a couple of OSes that will go nameless to provide a smooth visual transition from one state to another. It isn't a new concept; it's just easier to do with today's hardware. I think the problem is that animation on a Linux desktop is implemented largely to look cool, not to serve a particular purpose. In some cases it only accidentally achieves the same goal as the source the effect was borrowed from.

We could probably say color is not necessary in UI design too (the UI, not the rendering of things with color information), but that doesn't mean it isn't very useful when the hardware supports it. Repeat with the mouse, pixelated graphics, sound, etc.

Top notch programmer (0)

Anonymous Coward | more than 3 years ago | (#34664124)

The difference between a top-notch programmer and the garden-variety type is this:

Top-notch programmers stay enthusiastic about the things they have produced, while the rest are just coders.

That's why top-notch programmers keep finding ways to optimize their algorithms, while the rest ---> you know who you are and what you have been doing.

ho ho ho indeed! (-1)

Anonymous Coward | more than 3 years ago | (#34663074)

i want to lick the cum off santa's beard.

THIS is NFNSTM?? On fucking Xmas??! (1)

Smidge207 (1278042) | more than 3 years ago | (#34663078)

Linux enthusiast Rob Malda—aka Commander Taco— created something that is still unique when he started the web site in September 1997. It’s an odd hybrid of chat room and news outlet, where a member’s power rises and falls on the strength of his or her contributions. It’s about community, not coherency (grammar Nazis are despised). It’s immediate, but it’s not about real-time information; instead, it’s a discussion that can go on for days and weeks. It’s about breaking old forms, even its own forms, rather than becoming a new one.

It’s not immune to a sugary moment or two, though. Witness the exchange between Mr. Malda and Kathleen Fent from February 14, 2002. “I love you more then I can describe within the limits of this tiny little story. We’ve been together for many years now, and I’ve known for most of that time that I wanted to spend my life with you. Enough rambling. Will you marry me?”

The reply came 14 minutes later. “Subject: ‘Yes,’ message body: ‘Dork. You made me cry. :)’” Thousands of Slashdotters instantly posted their approval.

So who is this dork? Mr. Malda was born in 1976 in Holland, Michigan, and graduated from HopeCollege, a small Christian school where he majored in computer science. He was an early Linux user, and builds arcade games in his spare time, notably MAME cabinets which he uses to ensnare unwitting homosexual lovers. (Yes, America, he'd gay!). His web site was purchased first by Andover.net, then by VA Linux (now VA Software), which had one of the bubblier IPOs of the Linux boom. As a result, Mr. Malda has been free to tinker with Slashdot ever since.

But while Slashdot has grown stronger, old outlets, especially newspapers, continue to wither away. So where's the out? Why aren't more journalists starving in the streets? All those Slashdotters still need something to talk about, and Mr. Malda doesn't get married every day.

Still, Mr. Malda has finally done what the media could never do on its own: marry community and content with a hyperlink, and then connect a group of intelligent, active readers to each other. Mr. Malda may yet save the media from itself. Thank you Commander!

Not for the first time (2)

charteux (777673) | more than 3 years ago | (#34663098)

Even in the 1980s it was recognized in avionics that algorithm improvement accounted for several orders of magnitude of speed in computing -- outperforming the hardware advances of the day. We highlighted that article in Avionics Weekly -- proudly displaying it outside our cubicles.

1000 fold (4, Interesting)

MichaelSmith (789609) | more than 3 years ago | (#34663104)

That's an interesting figure, because when the Alpha came out DEC was predicting a 1,000-fold improvement in speed over about the same period. They split the improvement into factors of ten: clock speed, parallelism in the CPU, and multiple cores were each expected to deliver a factor-of-ten improvement.

Now, while our algorithms might be getting better, our programmers definitely are not. A lot of those improved algorithms must be in our APIs -- like the way it's easy to use a Hashtable in Java now, where in the past you might have just searched a linear array.

Re:1000 fold (0)

Anonymous Coward | more than 3 years ago | (#34663190)

You must not understand a thing about programming if you believe that every programmer should have to rewrite the same algorithms over and over without any help from anybody else.

Re:1000 fold (1)

jedidiah (1196) | more than 3 years ago | (#34663566)

Someone still has to be able to write and maintain the API.

It's like not being allowed to use a calculator in low level math.

Re:1000 fold (1)

FrootLoops (1817694) | more than 3 years ago | (#34663908)

That's an exaggeration of the GP's point. Being able to implement classes efficiently if you have to is a very good skill. It's also not as though programmers have been getting strictly better over time. The early programmers who wrote everything in octal were probably much better at what they did, on average, than today's average coder.

Re:1000 fold (1)

SuricouRaven (1897204) | more than 3 years ago | (#34664626)

On their hardware, they needed to be better.

One of the programs I wrote needs to search an array of tens of thousands of 128-bit entries to see if a particular value is in there*, then repeat this hundreds of thousands of times for different searches. I know that the tidy way to do this would be to sort the array first, but I don't - because the thing is executing on a 2.4GHz processor, and I just can't be bothered to write a sorter and binary search just to save a few milliseconds off the processing time.

*MD5sums of games, porn and lolcats. The program is used by a school to delete files the students shouldn't have.
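For what it's worth, the tidy version is only a few lines in Python. A minimal sketch with placeholder hash strings (bisect is the standard-library binary search; a plain set() would give O(1) average lookups and be even shorter):

import bisect

# Placeholder MD5 strings, kept sorted so binary search works.
known_hashes = sorted([
    "1a79a4d60de6718e8e5b326e338ae533",
    "5d41402abc4b2a76b9719d911017c592",
    "9e107d9d372bb6826bd81d3542a419d6",
])

def is_known(md5_hex):
    # O(log n) membership test on the sorted list.
    i = bisect.bisect_left(known_hashes, md5_hex)
    return i < len(known_hashes) and known_hashes[i] == md5_hex

print(is_known("5d41402abc4b2a76b9719d911017c592"))  # True
print(is_known("00000000000000000000000000000000"))  # False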

Re:1000 fold (1)

arth1 (260657) | more than 3 years ago | (#34664234)

No, but you should understand the algorithms you use, and be able to troubleshoot them, or replace them when needed. If not, you have no business using them -- then you're just connecting black boxes, which is as far from programming as stapling prefab is from carpentry.

Re:1000 fold (0)

Anonymous Coward | more than 3 years ago | (#34663592)

> Now while our algorithms might be getting better our programmers definitely are not.

Why not? All the ones that are even remotely competent have had all that time to learn more. Plus there are all the ones who started more recently and have at least had some of that time to learn more. After 15 years, we should be better both in terms of the total number of good programmers AND in the proportion of good programmers.

Re:1000 fold (4, Interesting)

jc42 (318812) | more than 3 years ago | (#34663738)

Now while our algorithms might be getting better our programmers definitely are not.

Sometimes this is intentional. One of the more fun software competitions I've run across is to write the slowest sorting algorithm. Trivial programming tricks such as do-nothing loops that just run a counter to use time disqualify you, obviously. The judging is solely on the algorithm, by comparing the O(N) functions. One of the winners in the early years was a sort that generated a random permutation of the data, and tested it for sortedness, repeating if it wasn't sorted. And it turns out that there are several progressively worse variants of this, depending on how ridiculous your random-permutation algorithm is.

I should go find this contest again, and see what sorting atrocities they've come up with in the past couple years ...

(I did once read a suggestion that such contests not be widely publicised, out of fear that managers will find them and declare the winning algorithms the new company standards. ;-)
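For reference, the random-permutation sort described above is usually called bogosort. A minimal sketch (expected work on the order of n * n!, so keep n tiny):

import random

def is_sorted(a):
    return all(a[i] <= a[i + 1] for i in range(len(a) - 1))

def bogosort(a):
    # Shuffle until the list happens to come out sorted.
    while not is_sorted(a):
        random.shuffle(a)
    return a

print(bogosort([5, 3, 1, 4, 2]))  # fine for 5 elements, hopeless for 20

The "progressively worse variants" come from replacing random.shuffle with ever more wasteful ways of generating each permutation.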

Re:1000 fold (3, Interesting)

sco08y (615665) | more than 3 years ago | (#34663990)

That's an interesting figure, because when the Alpha came out DEC was predicting a 1,000-fold improvement in speed over about the same period. They split the improvement into factors of ten: clock speed, parallelism in the CPU, and multiple cores were each expected to deliver a factor-of-ten improvement.

Now, while our algorithms might be getting better, our programmers definitely are not. A lot of those improved algorithms must be in our APIs -- like the way it's easy to use a Hashtable in Java now, where in the past you might have just searched a linear array.

There are more programmers because programming is becoming more accessible, so naturally more people who suck at programming are doing it.

There's another problem: while gains from Moore's law have historically been realized by a recompile or less, most new algorithms require actually changing the code.

More RAM, Faster CPUs make better Algos possible (3, Informative)

billstewart (78916) | more than 3 years ago | (#34664182)

One thing that's happened to improve algorithms, besides having people study Computer Science and take advantage of the work other people have done in academia and open source over the past few decades, is that computers are enough bigger and faster that you can solve problems that weren't feasible in the past, because you didn't have enough RAM, or disks were too small and you needed to use tape, or CPUs weren't fast enough to bother with some techniques. Having those tools gives you more choices of algorithms than I had when I was an undergrad in the 70s, or than my professors had when they were learning CS in the 60s and 50s. 640KB wasn't enough for everybody, and I remember a Star Wars-funded research project at Princeton in the late 80s that had a VAX with 128MB of RAM crammed into it so they could study what you could do if you really always had enough RAM. (They didn't think 128MB was really quite enough, but it was a good start and they could get it funded, and it was still extravagantly large - they even had a 450MB disk drive just for swap :-)

What Good Lord Giveth ... (5, Funny)

140Mandak262Jamuna (970587) | more than 3 years ago | (#34663118)

What the Good Lord Algorithmic Efficiency Improvements giveth, the Evil Lord Real-Time-Virus-Scanner taketh away.

Re:What Good Lord Giveth ... (2)

sco08y (615665) | more than 3 years ago | (#34664006)

What the Good Lord Algorithmic Efficiency Improvements giveth, the Evil Lord Real-Time-Virus-Scanner taketh away.

Heretic! Thine computer is a vessel of purity whose sole purpose for existence is to be continually purged of impurity. Thou shalt be grateful for the pittance of cycles that our great Real-Tyme Virus Scanner does leave ye to work with your ill begotten "productivity applications."

Re:What Good Lord Giveth ... (0)

Anonymous Coward | more than 3 years ago | (#34664166)

What the Good Lord Algorithmic Efficiency Improvements giveth, the Evil Lord Real-Time-Virus-Scanner taketh away.

Not to mention the next Windows "upgrade".

Transistor increase with software? (5, Informative)

F.Ultra (1673484) | more than 3 years ago | (#34663120)

So how did they magically increase the number of transistors using software? Oh wait, someone didn't know what Moore's law was...

Re:Transistor increase with software? (1)

c.derby (574103) | more than 3 years ago | (#34663198)

i guess you didn't bother to read TFA....

"Even more remarkable - and even less widely understood - is that in many areas, performance gains due to improvements in algorithms have vastly exceeded even the dramatic performance gains due to increased processor speed."

Re:Transistor increase with software? (0)

Anonymous Coward | more than 3 years ago | (#34663460)

WHOOSH

or is th

Re:Transistor increase with software? (1)

lostthoughts54 (1696358) | more than 3 years ago | (#34663504)

Wow. Let's see if I can explain your error.
Moore's law describes a long-term trend in the history of computing hardware: the number of transistors that can be placed inexpensively on an integrated circuit has doubled approximately every two years.
It has nothing at all to do with processor speed or performance (although those are side effects); it only concerns itself with the number of transistors.
The comment is saying that Moore's law is actually misleading or irrelevant, that algorithms speed us up more than more transistors do (which is itself very misleading).
In order for "Progress In Algorithms Beats Moore's Law" to be literally true, the algorithms would have had to increase the number of transistors on the circuit (which is preposterous).
So, to quote the other guy,
"So how did they magically increase the number of transistors using software? Oh wait, someone didn't know what Moore's law was..."

TL;DR
"Even more remarkable - and even less widely understood - is that in many areas, performance gains due to improvements in algorithms have vastly exceeded even the dramatic performance gains due to increased processor speed." has absolutely nothing to do with Moore's Law, because Moore's law doesn't concern itself with performance. The guy above you was correct in his comment.

Re:Transistor increase with software? (1)

Garble Snarky (715674) | more than 3 years ago | (#34664142)

No, you're wrong. The article clearly states, in the passage that you quoted, that it is talking about performance gains due to increased processor speeds, which are clearly linked to Moore's law growth in transistor density. It doesn't say that Moore's law = improvement in performance. It doesn't say that algorithms are directly comparable to Moore's law. It says that both Moore's law and algorithm research lead to performance gains, and that the gains due to algorithm research are more significant, for a specific type of problem, over a specific time period.

Not so much (5, Insightful)

kasperd (592156) | more than 3 years ago | (#34663122)

When hardware gets faster it applies more or less uniformly to all algorithms you run on that hardware. When an algorithm is improved it applies to just one specific problem. For some problems we already knew close-to-optimal algorithms 15 years ago; in those cases there was no way algorithmic improvements could have achieved such a speedup. And in fact the article mentions just one specific problem where algorithmic improvements made it about 43,000 times faster on one specific input. In other words, this number is not nearly as general as the summary makes it sound. There are algorithms that were not sped up nearly as much, and for any algorithm that was bad enough to begin with, there could have been an even larger speedup, without anybody making a fuss about it.

Re:Not so much (4, Informative)

QRDeNameland (873957) | more than 3 years ago | (#34663222)

Mod parent up. If this claim were true in any generic sense, we'd see newer software generally running ever faster on older hardware, and it wouldn't seem that every generation of hardware improvement goes mostly towards visual eye-candy and anti-virus.

The problems the author cites, notably speech recognition, are cases that are notorious for *not* scaling to CPU power, that is, throwing brute force cycles at the problem only yields marginal benefits. While processing power doubles every 18 months, speech recognition only gets slightly better despite both better hardware and algorithmic advances.

Re:Not so much (1)

petermgreen (876956) | more than 3 years ago | (#34663390)

While processing power doubles every 18 months
That may have been the case at one point but it hasn't been true recently.

I bought the 13 inch macbook I'm typing this on over 3 years ago. It has a 2.16GHz C2D. When I look at laptops now I still see dual core chips at 2.(something) GHz. Occasionally a quad core is seen, but the clock speed (with all four cores running) is only around that of my macbook (slightly over if you count turbo and buy the extreme edition).

Over in desktop land a similar thing is true: by mid 2007 we had 2.67 GHz C2Qs. Now we have 3.(low number) GHz i7 quads and hexes.

There has been improvement in the last 3 years but afaict it's closer to 2x than 4x.

Re:Not so much (0)

petermgreen (876956) | more than 3 years ago | (#34663422)

grr /. ate my angle brackets and everything inside them.

"2." should have been "2.(something)" and "3." should have been "3.(low number)" (using ordinary brackets this time so /. doesn't eat them again)

Re:Not so much (3, Insightful)

SLi (132609) | more than 3 years ago | (#34663462)

The grandparent didn't say "clock speed doubles". It said "processing power doubles". In the last few years those things have been very distinct, in some cases processing power increasing while clock speeds have actually become lower.

Re:Not so much (1)

petermgreen (876956) | more than 3 years ago | (#34663780)

In the last few years those things have been very distinct
P4 to Core was a huge increase in performance per clock, but that was some time ago, and afaict increases since then have been more evolutionary than revolutionary.

I stand by my statement that the improvement in the last 3 years is closer to 2x than 4x, at least for CPUs that are available at anything like a sane price. The hex cores push that to 3x, but only if you have a heavily multi-threaded workload and nearly $1000 to drop on the CPU alone.

See for example an AnandTech bench comparison of a Q6600 to an i7-950: http://www.anandtech.com/bench/Product/53?vs=100 [anandtech.com]

Re:Not so much (0)

Anonymous Coward | more than 3 years ago | (#34663604)

He said processing power, idiot, not clock speed. Quit IT immediately, your geek card has been revoked.

Re:Not so much (1)

drsmithy (35869) | more than 3 years ago | (#34664488)

Over in desktop land a similar thing is true: by mid 2007 we had 2.67 GHz C2Qs. Now we have 3.(low number) GHz i7 quads and hexes.
There has been improvement in the last 3 years but afaict it's closer to 2x than 4x.

A Core i series CPU is substantially faster at the same clock speed than a Core 2 series CPU.

Re:Not so much (1)

Sparr0 (451780) | more than 3 years ago | (#34664020)

It is true in MANY senses. Take a game like Cube, which uses a very novel and efficient approach to representing and rendering a 3D world for a first-person shooter. Send that code back in time to the days of Wolf3D and it would blow them away on their same old-to-us hardware.

Re:Not so much (1)

QRDeNameland (873957) | more than 3 years ago | (#34664202)

I think you misinterpret what I mean by "true in any generic sense". I'm talking about the idea of algorithmic performance outpacing hardware performance applying across the board, as the article and summary seem to imply.

I'm entirely aware that new innovative algorithms can make great improvements in specific cases. However, in the vast majority of technology domains, the performance gains of more efficient software rarely outpace the performance gains of hardware over time (in many cases the opposite: better hardware encourages *less* efficient software), let alone at anything like what the summary is suggesting.

Re:Not so much (1)

Sparr0 (451780) | more than 3 years ago | (#34664226)

Do you have any counterexamples? That is, for what problems was the optimal algorithmic solution known 20 years ago?

Re:Not so much (1)

QRDeNameland (873957) | more than 3 years ago | (#34664462)

Again, you misinterpret me. I never said more optimal algorithmic solutions are not being developed in any domain, just that the rate of performance gain from such optimizations is generally not as rapid as developments in hardware.

For example, database technology. As mature a tech as that is, algorithmic improvements are being made every day, I'm sure. But you can't tell me that the improvement you get in database performance over the past 10 years is more due to better software than better hardware, let alone anything like a 43:1 ratio in favor of software that is being suggested here.

Re:Not so much (1)

Pence128 (1389345) | more than 3 years ago | (#34664408)

Cube requires more video RAM than Wolf3D's video RAM, main RAM, hard disk space and the disks it came on combined.

Re:Not so much (3, Insightful)

betterunixthanunix (980855) | more than 3 years ago | (#34663268)

And in fact the article mentions just one specific problem where algorithmic improvements made it about 43,000 times faster on one specific input.

I think a more general issue is citing a constant as the speedup for an algorithm. Improvements in algorithms often refer to improvements in the asymptotic efficiency, rather than constant factors. If I asked you how much of an improvement a switch from selection sort to merge sort would be, you would not tell me "it is 1000 times faster" because that would be wrong (except for a particular input size).
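A quick way to see that the speedup is not a constant: time a quadratic sort against an n log n sort at two input sizes and watch the ratio grow. A rough sketch (timings are illustrative, not a benchmark):

import random, time

def selection_sort(a):                      # O(n^2)
    a = list(a)
    for i in range(len(a)):
        j = min(range(i, len(a)), key=a.__getitem__)
        a[i], a[j] = a[j], a[i]
    return a

def merge_sort(a):                          # O(n log n)
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

for n in (1000, 5000):
    data = [random.random() for _ in range(n)]
    t0 = time.perf_counter(); selection_sort(data); t1 = time.perf_counter()
    merge_sort(data);                               t2 = time.perf_counter()
    print(n, "selection/merge time ratio:", round((t1 - t0) / (t2 - t1), 1))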

Re:Not so much (0)

Anonymous Coward | more than 3 years ago | (#34663476)

Agreed, why bother with al-gorithms in the first place. We need better clock
generators so we can Moore up the Processor Clock. With good Turbo and Nitro
and Cooling and XTR timings you can bloody well have 5 or 6 GHZ. Yeeah!

Also, why dont we build more boxes with acrylix windows? Put some blue lights
and a some good leds and make 'em blink at hypnotic 4HZ and you know! it wont
even matter if the explorers pause or so you still have plenty to look at.

Incidentally, if we made us moore du7mber each 1,1/2 years by few points, we
really could someday really make >human AI com true. In 10 more years maybe?

Re:Not so much (0)

Anonymous Coward | more than 3 years ago | (#34663518)

You're probably right... However, they could have tested against that and ruled it out.

If you take an old algorithm and run it on modern hardware, and then run a current algorithm then you should be able to see an improvement. I didn't RTFA, but I just assumed (probably naively) that they thought of that.

Montgomery Scott school of programming (1)

George_Ou (849225) | more than 3 years ago | (#34663884)

If you can make an algorithm 43,000 times faster, that means the first version was utter garbage to begin with. By this standard, all a person needs to do is write an algorithm that is one million times less efficient than the hardware would permit, then over 18 months make the software progressively faster until it is one million times faster, and claim a miracle. I think this is the Montgomery "Scotty" Scott school of computer programming, where you underpromise a thousand-fold and then deliver a half-assed effort to claim that you're a genius.

The ugly reality is that mainstream applications and OSes have slowed down more than the hardware has improved. That's disturbing considering that hardware these days is about 10,000 times faster, yet modern OSes are more sluggish than ever. Fortunately, people are waking up to this thanks to instant-on devices like smartphones and tablets, and once they've used instant computing, waiting 5 minutes for a computer to boot, or even 45 seconds on a lean fresh install, just won't be acceptable.

Re:Montgomery Scott school of programming (1)

Pentium100 (1240090) | more than 3 years ago | (#34664636)

I do not really care about boot time (after all, the phone does not turn on instantly either); however, software and operating systems did get slower over time with not much improvement to justify the slowness.

For example, Windows Vista and 7 require much better hardware than Windows XP, but apart from the changes in GUI it's not much different. So, where do the clock cycles and memory go? Windows 2003 (fresh install) runs quite fast inside a VM on one of my PCs (3x Xeon 700MHz, 3GB RAM) with one CPU and 256-384MB RAM given to it. Windows 7 runs slow even with 1GB RAM given to it.

Over time, electronic devices got more power efficient, even for the same type of device (cell phone, amplifier). Also, in a device, you can usually tell where the power is wasted - some component gets hotter. OTOH, software over time gets less efficient and needs more processor power to do the same thing that the previous version did.

45 years later (4, Insightful)

zeroRenegade (1475839) | more than 3 years ago | (#34663154)

Moore's law is based more on economics than technology. He was just a hands-on executive who had a clear vision of what the future would bring. Anybody who has actually read his original paper would realize this. The fact that his prediction has held up this long is incredible.

Moore's Law has nothing to do with speedup (2)

kaychoro (1340087) | more than 3 years ago | (#34663158)

Moore's Law is that the number of transistors on a single integrated circuit would double every couple of years - it says nothing directly about speedup.

Re:Moore's Law has nothing to do with speedup (0)

Anonymous Coward | more than 3 years ago | (#34663170)

that was just the route they chose. We could have instead had redundancy

Re:Moore's Law has nothing to do with speedup (0)

ae1294 (1547521) | more than 3 years ago | (#34663386)

that was just the route they chose. We could have instead had redundancy

I canz hav redundancy?

Re:Moore's Law has nothing to do with speedup (1)

monkyyy (1901940) | more than 3 years ago | (#34663448)

but they are connected, and the fact that most people think that's what it was will push capitalism to do so; it will hurt us one day when capitalism no longer fills the gap (capitalism is rather fat and probably won't fall in the gap just yet)

Anonymous Coward (2, Informative)

Anonymous Coward | more than 3 years ago | (#34663182)

Linear programming is a reasonably obscure optimisation method used in operational research, where you describe a complex management problem as a series of linear inequalities and then use an algorithm (classically the simplex method, or an interior-point method) to search the feasible solutions for the optimal one. The algorithmic improvements here mean that we have found ways of cutting down the search space faster while still arriving at the best solution. For one particular problem with just the right characteristics it's very possible that an improvement of 43,000 times could be achieved, but this is almost certainly an atypical case, and in general improvements due to algorithms are almost certainly going to be a lot smaller than improvements due to processing speed over a long period.
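For the curious, here is roughly what a toy production-planning LP looks like in code. A sketch assuming SciPy is available, with invented products and limits (linprog minimizes, so the profit objective is negated):

from scipy.optimize import linprog

# Maximize profit 3*x1 + 5*x2 subject to resource limits.
c = [-3.0, -5.0]              # negated profit per unit of products 1 and 2
A_ub = [[1.0, 2.0],           # machine hours consumed per unit
        [3.0, 1.0]]           # raw material consumed per unit
b_ub = [14.0, 18.0]           # hours and material available
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])

print("plan:", res.x, "profit:", -res.fun)

Real planning models have vastly more variables and constraints; that scale is where the algorithmic improvements the report describes actually matter.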

are we stupid or something? (1)

Nyder (754090) | more than 3 years ago | (#34663192)

I'm not sure they know what Moore's law is.

Oh, a submission from timothy? Of course it's wrong.

Nothing new here.

Actual text of statement on relative improvements (4, Informative)

jthill (303417) | more than 3 years ago | (#34663196)

Progress in Algorithms Beats Moore’s Law

Everyone knows Moore’s Law – a prediction made in 1965 by Intel co-founder Gordon Moore that the density of transistors in integrated circuits would continue to double every 1 to 2 years.

Fewer people appreciate the extraordinary innovation that is needed to translate increased transistor density into improved system performance. This effort requires new approaches to integrated circuit design, and new supporting design tools, that allow the design of integrated circuits with hundreds of millions or even billions of transistors, compared to the tens of thousands that were the norm 30 years ago. It requires new processor architectures that take advantage of these transistors, and new system architectures that take advantage of these processors. It requires new approaches for the system software, programming languages, and applications that run on top of this hardware. All of this is the work of computer scientists and computer engineers.

Even more remarkable – and even less widely understood – is that in many areas, performance gains due to improvements in algorithms have vastly exceeded even the dramatic performance gains due to increased processor speed.

The algorithms that we use today for speech recognition, for natural language translation, for chess playing, for logistics planning, have evolved remarkably in the past decade. It's difficult to quantify the improvement, though, because it is as much in the realm of quality as of execution time. In the field of numerical algorithms, however, the improvement can be quantified. Here is just one example, provided by Professor Martin Grötschel of Konrad-Zuse-Zentrum für Informationstechnik Berlin. Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later – in 2003 – this same model could be solved in roughly 1 minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms! Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008.

The design and analysis of algorithms, and the study of the inherent computational complexity of problems, are fundamental subfields of computer science.

Re:Actual text of statement on relative improvemen (0)

jthill (303417) | more than 3 years ago | (#34663348)

Aww, jeez. There was a post saying they didn't know Moore's Law meant transistors, which I figured was ridiculous for a Presidential Commission, so I fetched the report and snipped it. Seemed to me the text must be buried a few link-layers or pages deep. It didn't occur to me it would be front dead center and about two thirds of the blog article. This is slashdot. I have no defense. <hangs head>

Re:Actual text of statement on relative improvemen (1)

pem (1013437) | more than 3 years ago | (#34663356)

What about memory size?

Without knowing about the "before" and "after" algorithm, it's hard to know if the "after" algorithm couldn't have been run earlier because there wasn't enough memory.

I've written a lot of stuff that runs a hell of a lot faster lately, but I use a lot more memory than I did 20 years ago.

Any realistic scenario will chalk up around a factor of at least 8000 to memory improvements, leaving, oh, maybe a factor of 5 to pure algorithmic improvements.

Re:Actual text of statement on relative improvemen (1)

Lord Byron II (671689) | more than 3 years ago | (#34664292)

A similar thing has occurred over the last 30 years in the physics sub-field of quantum Monte Carlo. I forget what the numbers are, but it's something similar with the algorithmic improvements being like an order of magnitude greater than the CPU improvements.

Yeah, but things are still slower (2)

Jay Maynard (54798) | more than 3 years ago | (#34663238)

Looks like both Moore's Law and the improvement in algorithms together don't beat out Gates's Law: Software bloat doubles every 18 months.

Re:Yeah, but things are still slower (0)

Anonymous Coward | more than 3 years ago | (#34664236)

Yeah, because Win7 is such a hog compared to XP. I wonder, if you did a check on Linux 2001 vs Linux today, whether you would make the same claim that it's Gates' OS that is bloated... I think you'd be in for a wicked surprise about which OS has bloated the most in the last decade.

we lose sight of algorithmic importance (2)

cats-paw (34890) | more than 3 years ago | (#34663312)

Remember when you were a youngster and you had to learn about O() ?

Well, over the years you lose sight of the importance of that.

Not too long ago, just for fun, I implemented a DFT naively (O(n^2)) and as an FFT (O(n log n)).

Try that and run it for 65536 points and see what happens, even on a fast machine...
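If you want to repeat the experiment, here is a rough sketch (assuming NumPy; keep n modest for the naive version, since its matrix of twiddle factors grows as n^2):

import time
import numpy as np

def naive_dft(x):
    # Direct O(n^2) DFT: one inner product per output bin.
    n = len(x)
    k = np.arange(n)
    w = np.exp(-2j * np.pi * np.outer(k, k) / n)
    return w @ x

x = np.random.rand(2048)
t0 = time.perf_counter(); slow = naive_dft(x); t1 = time.perf_counter()
fast = np.fft.fft(x);                          t2 = time.perf_counter()

print("max abs difference:", np.abs(slow - fast).max())
print("naive:", round(t1 - t0, 3), "s   FFT:", round(t2 - t1, 6), "s")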

Consider for example the humble matrix solver. Iterative solvers (only applicable for certain problem spaces) can get you n lg n sorts of behavior, as compared to n^3 for generalized Gaussian elimination.

It's important to use the right algorithm !

Re:we lose sight of algorithmic importance (0)

Anonymous Coward | more than 3 years ago | (#34663934)

What did you mean by iterative matrix solvers? n log n seems suspect, and I'm curious :).

Re:we lose sight of algorithmic importance (1)

cats-paw (34890) | more than 3 years ago | (#34663992)

how about n^2 log n ?

Actually, I think that multigrid methods, which solve systems of equations, really can be n log n. Admittedly it's been a while since I looked at it. However, the key is that the matrix should be sparse and well conditioned, so the n log n bound is only going to be true for very specific problem domains.

I could be wrong :-)
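To make the flavour concrete, here's a rough sketch (assuming SciPy) of an iterative solve on a sparse, diagonally dominant system. It uses conjugate gradients rather than multigrid, just because it's one call; a dense Gaussian elimination on the same problem would be O(n^3) and would need roughly 80 GB just to hold the matrix at this size:

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100_000
# Tridiagonal, symmetric, strictly diagonally dominant => well conditioned.
main = 3.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csr")
b = np.ones(n)

# Each CG iteration only touches the ~3n nonzeros, so the whole solve is
# a modest number of O(n) passes over the matrix.
x, info = spla.cg(A, b)
print("cg info:", info, "residual:", np.linalg.norm(A @ x - b))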

Moral of the story... (3, Insightful)

Khyber (864651) | more than 3 years ago | (#34663380)

OPTIMIZE YOUR SHIT.

Because tons of hardware is useless if you can't push it to its limits with efficient code and algorithms.

Game and OS designers, I'm looking right at you.

parallel algorithms (2)

lkcl (517947) | more than 3 years ago | (#34663410)

i read a sci-fi story once, i forget who it was by, where some space travellers found themselves in a situation where their computer failed. so they taught themselves maths, plotted the stars, learned to do trigonometry in their heads, and over the course of many months, the 30 or so travellers managed to work in parallel to calculate an approximate but accurate enough path to get themselves within range of rescuers.

then, there is another sci-fi story, called "The End of Eternity", which dates back to when the title "Computer" was utilised in exactly the same way as "Professor" is utilised, now. the hero of the story was a human named "Computer Harkan". Computer. "one who computes".

the point of mentioning these two things is very very simple: prior to modern "computers", parallel "Computing" algorithms were commonplace, well known, and heavily relied on. Along came "computers" and - you know what's coming next - exceedingly rapidly, all that incredibly valuable "parallel" expertise was lost, in favour of single-step, single-process algorithms. i believe we're paying for that loss of expertise, somewhat.

Not all algorithms can be parallelized (1)

betterunixthanunix (980855) | more than 3 years ago | (#34663492)

As it turns out, some very important problems are inherently hard to parallelize:

http://en.wikipedia.org/wiki/P_complete [wikipedia.org]

Re:Not all algorithms can be parallelized (0)

Anonymous Coward | more than 3 years ago | (#34663994)

You might want to read that a bit more closely. It specifically states that it is unknown whether or not NC = P, where NC is the class of (in vague terms) nicely parallelizable problems. That is, in fact, likely the most important open question in parallel complexity theory.

Re:parallel algorithms (0)

Anonymous Coward | more than 3 years ago | (#34664496)

It's an Arthur C. Clarke story, but I don't remember the title. I remember they built abacuses (abacii?) to calculate with.

43,000 fold? (1)

crf00 (1048098) | more than 3 years ago | (#34663486)

I don't understand, how does it mean algorithm improvement of 43,000 fold? Who needs 43,000 fold when anyone can easily make 1,000,000 fold improvement!

Proof:

Let N=1000
O(N^2) = 1,000,000
O(N) = 1000
O(log N) = 3
O(1) = 1

O(N^2)/O(1) = 1,000,000 / 1 = 1,000,000

Wait till we make improvements in O(N!) problems like traveling salesman and see how huge that improvement is gonna be!

Bubble sort (0)

Anonymous Coward | more than 3 years ago | (#34663614)

Most of the improvement may have finally come from general abandonment of bubble sort for quicksort.

Does This Account For Increases In Memory? (1)

rsmith-mac (639075) | more than 3 years ago | (#34663718)

While processing power has increased over the years, the amount of memory that can be built in to a given amount of space has grown at a similar rate. So while it's easy to account for pure processing performance increases, what about algorithms that improved due to the space-time tradeoff [wikipedia.org], which would allow more complex algorithms that use more memory in return for requiring less processing time?

It seems disingenuous to say that algorithm improvements beat Moore's Law if they're really just benefiting from Moore's Law by being allowed more memory. There's a distinct difference between "we can use a better algorithm because we have more memory" and "we've discovered a novel new & faster approach".
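A tiny illustration of the space-time tradeoff being described, using the standard library's memoization decorator: the cached version spends O(n) memory to avoid redoing an exponential amount of work.

from functools import lru_cache

def slow_fib(n):
    # Exponential time, constant extra memory.
    return n if n < 2 else slow_fib(n - 1) + slow_fib(n - 2)

@lru_cache(maxsize=None)
def fast_fib(n):
    # Linear time and linear memory: each value is computed once and cached.
    return n if n < 2 else fast_fib(n - 1) + fast_fib(n - 2)

print(fast_fib(300))   # instant; slow_fib(300) would run for ages

Whether that counts as a "better algorithm" or just as cashing in Moore's-Law memory is exactly the question being raised here.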


Yes! Faster bubble sorts! (1)

mpaque (655244) | more than 3 years ago | (#34664338)

Ah, I remember looking at the sorting algorithm in a low level graphics library. Tightly hand coded, packed to keep every pipeline stage filled and make optimal use of all the parallel units in a VLIW graphics processor, it was probably the most efficient bubble sort I'd ever seen.

I coded up a quicksort to about the same degree of tightness in a couple days, and, golly gee, a whole bunch of code suddenly got faster. Some MUCH faster... That was Lesson 1 for me. Optimize algorithms before optimizing implementations.

What restraint! (1)

Baldrson (78598) | more than 3 years ago | (#34664492)

Given the conflict of interest in the report evidenced by:

Report to the President and Congress: Designing a Digital Future: Federally Funded R&D in Networking and IT:

It's amazing they didn't report a billion-fold increase in algorithms resulting from government funded R&D.
