
Can "Page's Law" Be Broken?

CmdrTaco posted more than 5 years ago | from the sounds-about-right dept.

Programming 255

theodp writes "Speaking at the Google I/O Developer Conference, Sergey Brin described Google's efforts to defeat "Page's Law," the tendency of software to get twice as slow every 18 months. 'Fortunately, the hardware folks offset that,' Brin joked. 'We would like to break Page's Law and have our software become increasingly fast on the same hardware.' Page, of course, refers to Google co-founder Larry Page, last seen delivering a nice from-the-heart commencement address at Michigan that's worth a watch (or read)."


Of Course (5, Insightful)

eldavojohn (898314) | more than 5 years ago | (#28166665)

Can "Page's Law" Be Broken?

I think it gets broken all the time. At least in my world. Look at Firefox 3 vs 2. Seems to be a marked improvement in speed to me.

And as far as web application containers go, most of them seem to get faster and better at serving up pages. No, they may not be "twice as fast on twice as fast hardware," but I don't think they are getting twice as slow every 18 months either.

I'm certain it gets broken all the time; you just don't notice, because ancient products like vi, Emacs, Lisp interpreters, etc. stay pretty damn nimble as hardware takes off into the next century. People just can't notice an increase in speed when, like the user, you're waiting on I/O.

Re:Of Course (3, Informative)

falcon5768 (629591) | more than 5 years ago | (#28166709)

Agreed. Apple always manages to break it too with OS X. From 10.1 to 10.4, the OS noticeably improved in speed with each upgrade, even on older PPC G3 and G4 machines.

Re:Of Course (0)

Anonymous Coward | more than 5 years ago | (#28166743)

This is true to a certain extent. But 10.3 and 10.4 are dog slow on hardware that was originally designed for 10.0 and 10.1. On those G3s and G4s, 10.2 was the nice middle ground. Of course, Apple has a tendency to make major changes to the APIs every few OS releases, making application updates for those 10.2 apps nearly impossible (for many programs you'd need to step up to 10.3 or 10.4).

Re:Of Course (1)

falcon5768 (629591) | more than 5 years ago | (#28167359)

Really? I've had no issues with Tiger on 300 MHz G3s. The key is memory, though; if you only upgrade to the minimum it can be slow. It needs at least 1 GHz to run well, but it WILL run at a decent clip for web/word processing.

Re:Of Course (1, Insightful)

Anonymous Coward | more than 5 years ago | (#28167411)

So if you need more memory for it to run better, how is it not Page's Law?

Re:Of Course (1, Redundant)

Dishevel (1105119) | more than 5 years ago | (#28167427)

I refuse to take any anecdotal information from a person who can't tell the difference between speed and capacity.

Re:Of Course (2, Interesting)

Shin-LaC (1333529) | more than 5 years ago | (#28167713)

That's not true. I ran 10.3 on a 233 MHz iMac G3 (a machine designed for Mac OS 9), and used that as my main machine for a couple of years. It ran fine.

Re:Of Course (1)

Jeremy Erwin (2054) | more than 5 years ago | (#28168115)

10.5 puts a bit of a load on older machines. Time Machine, though very useful, occasionally bogs down my 1.25 GHz PowerMac G4. On a modern Mac, it's just an extra thread.

Re:Of Course (2, Interesting)

Trillan (597339) | more than 5 years ago | (#28168197)

I found (and measured) 10.3 faster than 10.2 on my then-computer, and 10.4 faster than 10.3 (once indexing was complete). Numbers long since lost, though, sorry.

Re:Of Course (5, Funny)

drsmithy (35869) | more than 5 years ago | (#28168179)

Agreed. Apple always manages to break it too with OS X. From 10.1 to 10.4, the OS noticeably improved in speed with each upgrade, even on older PPC G3 and G4 machines.

Of course, when you're starting from a point of such incredibly bad performance, there's not really anywhere to go but up.

It would have been more impressive if they'd somehow managed to make it slower with each release.

Re:Of Course (1)

drinkypoo (153816) | more than 5 years ago | (#28166951)

I can't speak to emacs, but these days vi is generally vim, which is much, much heavier than classic vi. It also does vastly more.

Re:Of Course (5, Funny)

Anonymice (1400397) | more than 5 years ago | (#28167441)

I can't speak to emacs...

RTFM.
C-x M-c M-speak

Re:Of Course (1)

morgan_greywolf (835522) | more than 5 years ago | (#28167535)

Heavier? Yes. But is it heavy on modern systems with plenty of processor and RAM? No way. It's my number one text editor for quick file edits.

Re:Of Course (4, Insightful)

Z00L00K (682162) | more than 5 years ago | (#28166975)

The law isn't linear; it's more sawtooth-style.

Features are added all the time, which bogs down the software; then there is an effort to speed it up, and then features are added again.

One catch in performance is that it sure is faster to use RAM for data, but there is also a lot of useless data floating around in RAM, which is a waste of resources.

And this is often the curse of object-oriented programming. Objects carry more data than necessary for many of the uses of the object. Only a few cases exist where all the object data is used. A lot of object-oriented programming is somewhat like using 18-wheelers for grocery shopping.

This often explains why old languages like C, COBOL etc. are able to do the same thing as a program written in C++, Java or C# at a fraction of the resource cost and at much greater speed. The disadvantage is that the old languages require more skill from the programmer to avoid the classical problems of deadlocks and race conditions, as well as having to implement functionality like linked lists by hand.
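
To make the "18-wheelers for grocery shopping" point concrete, here is a hedged C++ sketch (made-up types and field names, not a claim about any particular codebase): a hot loop that only needs one field per item still pays for hauling the whole object around.

    #include <cstdint>
    #include <string>
    #include <vector>

    // The "18-wheeler": a general-purpose domain object carrying everything
    // anyone might ever ask of it.
    struct Product {
        std::string name;
        std::string description;
        std::string supplier;
        std::uint64_t id;
        double price;
        double weight_kg;
        // ... plus whatever else the object model accumulated over the years
    };

    // The grocery run: a hot loop that only needs one field per item.
    double total_price_heavy(const std::vector<Product>& items) {
        double sum = 0.0;
        for (const auto& p : items) sum += p.price;  // one useful double per large object;
        return sum;                                  // poor cache-line utilization
    }

    double total_price_lean(const std::vector<double>& prices) {
        double sum = 0.0;
        for (double p : prices) sum += p;            // touches only the data it needs
        return sum;
    }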

Re:Of Course (1, Interesting)

drinkypoo (153816) | more than 5 years ago | (#28167051)

The law isn't linear; it's more sawtooth-style.

All data looks notchy if you sample it at high resolution and don't apply smoothing.

One catch in performance is that it sure is faster to use RAM for data, but there is also a lot of useless data floating around in RAM, which is a waste of resources.

RAM is cheap these days. Storage devices are still slow and the most interesting ones have a finite (Though still large) number of writes.

This often explains why old languages like C, COBOL etc. are able to do the same thing as a program written in C++, Java or C# at a fraction of the resource cost and at much greater speed. The disadvantage is that the old languages require more skill from the programmer

In fact you will often see today that a job that could be handled by a 555 and a couple of caps has been replaced with an internally-clocked microcontroller, simply because it's a known platform and development is easy. When all you have is a vertical mill, everything looks like a machining project. But you can make a water block with a drill press...

Adding RAM to an existing device (5, Insightful)

tepples (727027) | more than 5 years ago | (#28167295)

RAM is cheap these days.

Unless you would need to add RAM to millions of deployed devices. For example, the Nintendo DS has 4 MB of RAM and less than 1 MB of VRAM, and it broke 100 million in the first quarter of 2009. Only one DS game [wikipedia.org] came with a RAM expansion card.

Re:Of Course (0)

Anonymous Coward | more than 5 years ago | (#28167501)

RAM is cheap these days. Storage devices are still slow and the most interesting ones have a finite (Though still large) number of writes.

This attitude is actually the root cause of the problem. I've never heard it called "Page's Law," but in the industry it's known as "code bloat."

"What, our software runs too slow? Well, I suppose we could re-code it in something less than 5 layers from the OS, or optimize our code, or use a more efficient algorithm... but that would cost us more money. We decided to just half-ass our coding, and force you to upgrade your hardware instead."

Part of this problem is simple laziness, part of it is intentional marketing, and part of it is CS majors focusing on languages like Java that hide the hardware, instead of languages like C that require you to manage your system resources.

Re:Of Course (2, Informative)

drinkypoo (153816) | more than 5 years ago | (#28167833)

This attitude is actually the root cause of the problem. I've never heard it called "Page's Law," but in the industry it's known as "code bloat."

No, it's called making a design decision. If the RAM is cheaper than doing it with less RAM, then you buy more RAM. If it isn't, you spend more time on design. The only bad part about it is when it leads to excessive power consumption. Which is, you know, all the time. But that's really the only thing strictly WRONG with spending more cycles, or more transistors.

Re:Of Course (1)

Hurricane78 (562437) | more than 5 years ago | (#28168145)

But... all the software I installed on my computers was free (as in everything) for me. So I do care about not having to buy another stick of RAM.

Oh, and you can also have less memory consumption by doing less. Doing more is called "feature bloat". (Not the good kind, like in KDE, but the bad kind, like in MS Office.)

Re:Of Course (1)

jeffb (2.718) (1189693) | more than 5 years ago | (#28168175)

In fact you will often see today that a job that could be handled by a 555 and a couple of caps has been replaced with an internally-clocked microcontroller simply because it's a known platform and development is easy.

One microcontroller beats one special-function chip plus caps on part count, board space, power consumption, and probably cost. And it can take care of other odd jobs around the circuit as well.

Re:Of Course (2, Funny)

NewbieProgrammerMan (558327) | more than 5 years ago | (#28167259)

And this is often the curse of object-oriented programming. Objects carry more data than necessary for many of the uses of the object. Only a few cases exist where all the object data is used.

That sounds like bad software design that isn't specific to OO programming. People are perfectly capable of wasting memory space and CPU cycles in any programming style.

For example, I worked with "senior" (~15 years on the job) C programmers who thought it was a good idea to use fixed-size global static arrays for everything. They also couldn't grasp why their O(N^2) algorithm--which was SO fast on a small test data set--ran so slowly when used on real-world data with thousands of items.
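
As a rough illustration of that scaling gap (a hypothetical C++ sketch, not the code from that job), compare a quadratic duplicate check with a hash-set version:

    #include <unordered_set>
    #include <vector>

    // O(N^2): fine on a 100-item test set, painful on real-world data.
    bool has_duplicates_quadratic(const std::vector<int>& items) {
        for (std::size_t i = 0; i < items.size(); ++i)
            for (std::size_t j = i + 1; j < items.size(); ++j)
                if (items[i] == items[j]) return true;
        return false;
    }

    // O(N) expected: trades memory for time by remembering what it has seen.
    bool has_duplicates_hashed(const std::vector<int>& items) {
        std::unordered_set<int> seen;
        for (int x : items)
            if (!seen.insert(x).second) return true;  // insert fails => already present
        return false;
    }

At 100 items the difference is microseconds; at 100,000 items the quadratic version does on the order of 5 billion comparisons while the hashed one does 100,000 inserts.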

Re:Of Course (2, Interesting)

Anonymous Coward | more than 5 years ago | (#28167807)

What you fail to grasp is what your senior programmers understand: heap allocation is non-deterministic. Any code you write that mallocs after initialization is done wouldn't even pass peer review where I work (doing safety-critical, fault-tolerant, real-time embedded). Maybe you should learn a little more before running off at the mouth.

Re:Of Course (1, Informative)

Anonymous Coward | more than 5 years ago | (#28167939)

Wow, maybe you should have listened to your senior programmers. Faster execution speed is not often the goal. Static allocation is deterministic. Slower and deterministic is better in certain types of programming, than faster and non-deterministic. You scoff at their O(N^2) algorithm without even considering all the ramifications. Let me guess: Java programmer?
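
For readers following the argument, here is a minimal C++ sketch of the trade-off being debated (hypothetical code, assuming a "no malloc after init" rule like the one described above): the static pool is bounded and constant-time, while the dynamic version is flexible but its allocation timing is much harder to bound.

    #include <cstddef>
    #include <vector>

    // Fixed-size, statically allocated pool: capacity is known at build time,
    // and "allocation" is a constant-time index bump -- deterministic, which is
    // what safety-critical reviews usually demand.
    constexpr std::size_t MAX_EVENTS = 1024;
    static int event_buffer[MAX_EVENTS];
    static std::size_t event_count = 0;

    bool push_event(int e) {
        if (event_count >= MAX_EVENTS) return false;  // overflow handled explicitly
        event_buffer[event_count++] = e;
        return true;
    }

    // The flexible alternative: grows on demand, but each push may hit the
    // allocator, whose timing (and failure mode) is hard to bound.
    void push_event_dynamic(std::vector<int>& events, int e) {
        events.push_back(e);  // may reallocate and copy at an unpredictable moment
    }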

Re:Of Course (3, Interesting)

hedwards (940851) | more than 5 years ago | (#28168159)

That's definitely a large part of the problem, but probably the bigger problem is the operating assumption that we can add more features just because tomorrow's hardware will handle it. In most cases I would rather have the ability to add a plug-in or extension for things which are less commonly done with an application than have everything tossed in by default.

Why this is news is beyond me; I seem to remember people complaining about MS doing that sort of thing years ago. Just because the hardware can handle it doesn't mean that it should. Tasks should be taking less time as new advancements arrive; adding complexity is only reasonable when it does a better job.

Re:Of Course (1)

BeardedChimp (1416531) | more than 5 years ago | (#28167307)

No, not linear; in the case of Flash it's more like an exponential decay.

Re:Of Course (0, Flamebait)

Anonymous Coward | more than 5 years ago | (#28167511)

And this is often the curse of object-oriented programming. Objects carry more data than necessary for many of the uses of the object. Only a few cases exist where all the object data is used. A lot of object-oriented programming is somewhat like using 18-wheelers for grocery shopping.

I hate to have to be the one to break this to you, but

you are a retard. (And probably a Real Programmer too, or at least what passes for one these days)

There are a lot of programs with excessive memory usage that don't use object-oriented languages, and there are a lot of programs with proper memory usage that do. Programmer skill (or lack thereof) is far more of a contributing factor, to such a degree that the tiny bits of overhead from using OO are lost in the noise.

If I had to choose one single thing as "the curse of OOP", it'd probably instead be that it makes it far too easy to add needless complexity and abstraction and class hierarchies a fucking mile deep.

Re:Of Course (0)

Anonymous Coward | more than 5 years ago | (#28168057)

class hierarchies a fucking mile deep

Think of them as trees of evolution. Many branches go extinct over time.

Re:Of Course (1)

kieran (20691) | more than 5 years ago | (#28168139)

And this is often the curse of object-oriented programming. Objects carry more data than necessary for many of the uses of the object. Only a few cases exist where all the object data is used. A lot of object-oriented programming is somewhat like using 18-wheelers for grocery shopping.

Surely this is a problem begging a solution in the form of smarter compilers?

Re:Of Course (5, Insightful)

AmiMoJo (196126) | more than 5 years ago | (#28168151)

OO was never designed for speed or efficiency, only for ease of modelling business systems. It became a fashionable buzzword and suddenly everyone wanted to use it for everything, so you end up in a situation where a lot of OO programs really only use OO for allocating memory for new objects.

I'm not trying to be a troll here; I just find it odd that OO is considered the be-all and end-all of programming, to the point where people write horribly inefficient code just because they want to use it. OO has its place, and it does what it was designed to do quite well, but people should not shy away from writing quality non-OO code. I think a lot of programmers come up knowing nothing but OO these days, which is a bit scary...

Re:Of Course (3, Interesting)

Carewolf (581105) | more than 5 years ago | (#28167109)

Exactly. Firefox 3 vs. 2 is an excellent example, especially because between major releases Firefox has been known for the opposite: getting slower with each minor release.

There are also examples of the opposite. KDE 3.x got faster and faster over the entire generation, while KDE 4.0 was much slower again; but 4.1, 4.2 and especially the upcoming 4.3 are many times faster than the 4.0 release.

So I don't think Google's ideas are unique. The issue is well known and fought against in many different ways, especially in open source.

KDE4 is ~30% faster than KDE3 (4, Informative)

kojot350 (1330899) | more than 5 years ago | (#28167985)

KDE4 is ~30% faster than KDE3, mainly because of the Qt4 vs. Qt3 improvements and a vast redesign of KDE itself...

Re:Of Course (1, Interesting)

Anonymous Coward | more than 5 years ago | (#28167221)

One word: embedded. With the advent of low-power general computing, ARM netbooks operating once again in the hundreds-of-MHz range, and battery life being prioritized above all else, Page's Law will get -- and is getting -- a thorough workout.

Re:Of Course (1)

Tyr.1358 (1441099) | more than 5 years ago | (#28167637)

"Look at Firefox 3 vs 2. Seems to be a marked improvement in speed to me."

Speak for yourself.

Page Must Have Been a Java Programmer (0, Flamebait)

Anonymous Coward | more than 5 years ago | (#28166669)

Page must have been a Java programmer, because Java is slow as hell and it only gets slower.

Re:Page Must Have Been a Java Programmer (1)

eldavojohn (898314) | more than 5 years ago | (#28166707)

Page must have been a Java programmer, because Java is slow as hell and it only gets slower.

Hey, it's no C or C++, but the story we discussed yesterday [slashdot.org] seemed to plot Java's average performance at a pretty desirable position. And I think you're wrong about Java getting slower ... I think most implementations of the bytecode interpreter get faster as time progresses, and the language just gets misapplied in its quest to be the silver bullet. An example is massive allocation of strings instead of string buffers. There are just way better languages to handle strings in, in my opinion.

The 'easy' way (2, Interesting)

Dwedit (232252) | more than 5 years ago | (#28166677)

Make developers target a slow and memory constrained platform. Then you get stellar performance when it runs on the big machines.

Nope (4, Funny)

Colin Smith (2679) | more than 5 years ago | (#28166731)

You just get an app which uses 100 KB of RAM and 32 GB of filesystem buffer.

 

Re:Nope (1, Interesting)

Anonymous Coward | more than 5 years ago | (#28167325)

Make them work on a netbook with an 8.9" 800x600 display, 512 MB RAM (much less available with the OS and other applications running), and 4 GB Flash storage (much less available with the OS and other applications installed).

The reason? There is such hardware currently in use out there.

Re:The 'easy' way (4, Insightful)

imgod2u (812837) | more than 5 years ago | (#28166909)

The problem there is that there comes a point where the user just won't notice "stellar" speeds. Take a video game, for instance. Anything past ~70 fps is really unnoticeable by the average human eye. If you design the game to run at 70 fps for a slow and memory constrained machine, the user won't really notice his quad-SLI or whatever vacuum-cleaner box being any better. And you've sacrificed a lot in visual quality.

Benefits of being able to render over 100 fps (3, Informative)

tepples (727027) | more than 5 years ago | (#28167483)

Anything past ~70 fps is really unnoticeable by the average human eye.

I disagree. If you can render the average scene at 300 fps, you can:

  • Apply motion blurring (think 4x temporal FSAA) at 60 fps. Film gets away with 24 fps precisely because of motion blur.
  • Keep a solid 60 fps even through pathologically complex scenes.
  • Render at 60 fps even when four players have joined in on the same home theater PC.

If you design the game to run at 70 fps for a slow and memory constrained machine [...] you've sacrificed a lot in visual quality.

A well-engineered game will have (or be able to generate) meshes and textures at high and low detail for close-up and distant objects respectively. On high-spec PCs, you can use the high-detail assets farther from the camera; on the slow and memory-constrained PCs that your potential customers already own, they get the low-detail assets but can still enjoy the game.
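
A hedged sketch of what that asset selection could look like (hypothetical function, with names and thresholds invented for illustration rather than taken from any engine's API):

    enum class Lod { High, Medium, Low };

    // Pick a mesh/texture detail level from camera distance, scaled by a
    // quality knob derived from the machine's spec
    // (e.g. 1.0 = high-end PC, 0.25 = old low-spec machine).
    Lod choose_lod(float distance_to_camera, float quality_scale) {
        const float near_cutoff = 20.0f * quality_scale;  // metres, made-up numbers
        const float far_cutoff  = 80.0f * quality_scale;
        if (distance_to_camera < near_cutoff) return Lod::High;
        if (distance_to_camera < far_cutoff)  return Lod::Medium;
        return Lod::Low;
    }

On a high-spec machine the cutoffs stretch out and high-detail assets are used farther from the camera; on a constrained machine the same scene still renders, just with the cheaper assets.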

Re:Benefits of being able to render over 100 fps (3, Insightful)

imgod2u (812837) | more than 5 years ago | (#28167715)

I disagree. If you can render the average scene at 300 fps, you can:

        * Apply motion blurring (think 4x temporal FSAA) at 60 fps. Film gets away with 24 fps precisely because of motion blur.
        * Keep a solid 60 fps even through pathologically complex scenes.
        * Render at 60 fps even when four players have joined in on the same home theater PC.

All of your points follow the argument "you can do 60 fps with higher quality." Which was pretty much my argument...

A well-engineered game will have (or be able to generate) meshes and textures at high and low detail for close-up and distant objects respectively. On high-spec PCs, you can use the high-detail assets farther from the camera; on the slow and memory-constrained PCs that your potential customers already own, they get the low-detail assets but can still enjoy the game.

It could or it could not. The point is the game can utilize the computing power of higher-end systems. It isn't just designed for a slow and memory-constrained machine and then runs at blazing fps on faster systems; you can change visual quality settings to use more computing power.

Re:Benefits of being able to render over 100 fps (3, Informative)

Shin-LaC (1333529) | more than 5 years ago | (#28167835)

Mod parent up. And here [100fps.com] is a page that explains some common misconceptions.

Re:The 'easy' way (1)

AvitarX (172628) | more than 5 years ago | (#28166931)

Not true.

An app that aggressively uses the massive amount of RAM in a modern sub-$1000 computer will be quicker than one that uses the disk.

I don't have that new a system, so it wouldn't help me, but if an app is able to assume it has 1 GB of RAM, it can run quicker than one that needs to stay svelte.

Consider this entry-level machine: http://www.newegg.com/Product/Product.aspx?Item=N82E16883113094

It has an easy 3-4 GB of RAM just for applications; programs made not to take advantage of that will not run as fast as they could.

There's constrained, and then there's constrained. (1)

tepples (727027) | more than 5 years ago | (#28166947)

Dwedit, the current maintainer of the PocketNES emulator for Game Boy Advance, wrote:

Make developers target a slow and memory constrained platform.

I hope you're not talking about something like the NES. There are some things that just won't fit into 256 KB of ROM and 10 KB of RAM, like a word processing document or the state of the town in a sim game like SimCity or Animal Crossing.

Then you get stellar performance when it runs on the big machines.

Only if the big machines use the same CPU and I/O architecture as the small machines. Otherwise, you need to use an emulator that brings a roughly 10:1 CPU penalty (e.g. PocketNES), or more if the CPU has to translate between I/O models (e.g. NES emulators on PCs).

Re:The 'easy' way (2, Funny)

IamTheRealMike (537420) | more than 5 years ago | (#28166949)

Ah ha, the business model behind Android finally reveals itself :)

Re:The 'easy' way (1)

L4t3r4lu5 (1216702) | more than 5 years ago | (#28166965)

This is why I'm interested in checking the license details of Windows 7 Starter Edition.

Designed to run on a netbook? Less bloat? Reduced cost to the consumer? Win-win.

I use third party media players and don't care about Aero Glass. If it supports DX10, we have a new Windows gaming platform.

Re:The 'easy' way (4, Informative)

Abcd1234 (188840) | more than 5 years ago | (#28166981)

Make developers target a slow and memory constrained platform. Then you get stellar performance when it runs on the big machines.

Hardly. Have you never heard of space-time tradeoffs? I.e., the most common compromise one has to make when selecting an algorithm for solving a problem. If you assume you have a highly constrained system, then you'll select an algorithm which will work within those constraints. That probably means favoring space over time. Conversely, if you know you're working on a machine with multiple gigabytes of memory, you'll do the exact opposite.

In short: there's *nothing wrong with using resources at your disposal*. If your machine has lots of memory, and you can get better performance by building a large, in-memory cache, then by all means, do it! This is *not* the same as "bloat". It's selecting the right algorithm given your target execution environment.
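
A toy C++ example of that space-time trade-off (hypothetical code, not anyone's production cache): the cached version spends RAM to avoid recomputing an expensive function.

    #include <cstdint>
    #include <unordered_map>

    // Stand-in for something genuinely expensive (a query, a parse, a transform).
    std::uint64_t expensive(std::uint64_t n) {
        std::uint64_t acc = 0;
        for (std::uint64_t i = 0; i < 1000000; ++i) acc += (n * i) % 1000003;
        return acc;
    }

    // Space-for-time: on a machine with RAM to spare, keep every answer around.
    std::uint64_t expensive_cached(std::uint64_t n) {
        static std::unordered_map<std::uint64_t, std::uint64_t> cache;
        auto it = cache.find(n);
        if (it != cache.end()) return it->second;   // hit: no recomputation
        return cache[n] = expensive(n);             // miss: compute once, remember
    }

On a memory-constrained target you would cap or drop the cache and eat the recomputation; on a box with gigabytes free, letting it grow is the right call, not bloat.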

Re:The 'easy' way (1)

91degrees (207121) | more than 5 years ago | (#28167163)

Sometimes. But at a cost to developer time. Sometimes it makes more sense to pay more for hardware than pay for better developers.

Re:The 'easy' way (0)

Anonymous Coward | more than 5 years ago | (#28167353)

I do this all the time. The production environment gets a nice badass piece of hardware. The devs get my old single-CPU, 512 MB RAM box for the DB server. Make it smoke on that and it will fly on the production hardware. If they need a dual-proc box for race conditions, it will not be much better.

This has the nice advantage of finding deadlocks and other performance bottlenecks NOW rather than when 20 people are breathing down your neck. If it is SLOW on that box, I have found it is usually not much better on faster hardware.

People who say 'oh, the hardware will just be faster/more/cheaper later' are just being lazy. Hell, I've done it. Want to know what I was being...?

Most bang for the buck. (5, Insightful)

rotide (1015173) | more than 5 years ago | (#28166685)

Why would a company spend money to make software more efficient when the current incarnation does its job just fine?

While I like the idea of being as succinct and efficient as possible with your code, at what point does it become fruitless?

Obviously, if you're testing your code on a "new" workstation and it's sluggish, you'll find ways to make it work better. But if it works well? What boss is going to pay you to work on a project for no real benefit other than to point out it is very efficient?

Exactly (0)

Colin Smith (2679) | more than 5 years ago | (#28166785)

It only makes sense to improve a compiler, library or application if you're going to be the one USING it. Not the one SELLING it.

If you're selling it then the faster you can get your pile of shit out the door into the marketplace and generating revenue, the better. Hence Java, Ruby etc.

That is... There is an economic incentive to produce bloated slow piles of crap, and little incentive to produce fast, light, efficient systems. It ain't a technical problem, it's an economic one.

 

Re:Exactly (1)

drinkypoo (153816) | more than 5 years ago | (#28166979)

And just to hammer the point home, this is one of the areas where FOSS is inherently superior; nobody can tell that developer to stop optimizing that loop and go write a new feature. (Unfortunately, nobody can tell them to stop writing new features and fix bugs, either; no plan is perfect.)

Re:Exactly (1)

91degrees (207121) | more than 5 years ago | (#28167273)

Sure they can. A lot of open source software is developed commercially. Most of the rest is written to scratch a developer's personal itch (it's no coincidence that the best open source tools are the ones used for software development). Both sets of developers will improve it until the software runs adequately on the target platform. Try running any latest version of KDE on hardware that's several years older than it.

Re:Exactly (1)

drinkypoo (153816) | more than 5 years ago | (#28168061)

Try running any latest version of KDE on hardware that's several years older than it.

AFAIK there are mobile versions of Qt; dunno about KDE, but there's definitely a mobile GNOME equivalent (GPE). Angstrom Linux is a distribution based around it, built on OpenEmbedded. I'm working (off and on) to get Angstrom to run on my DT Research WebDT360 (Geode LX 800-based) and have run it on my iPaq H2215 with mixed results.

Re:Exactly (1)

yukk (638002) | more than 5 years ago | (#28167165)

No, it pretty much always makes sense to make software more efficient rather than less, especially for companies like Google who have to run it on their own servers. If it becomes twice as slow, then they need at least twice as many servers to provide the same service level.

Yes, they do sell their service for others to run on appliances, and while it may seem like a good idea to force customers to buy four appliances to search their website instead of a single efficient one, eventually the competition will begin to look more appealing and sales will be lost.

Yes, it makes sense to get a product out the door, but if Rev 2 comes out better AND faster, doesn't your company look better? Letting code bloat to the point that it slows down 50% is not just lazy and bad for the customer; it's bad for the devs too (and thus the producer), because bad code is more bug-prone and harder to fix or update without producing more bugs. Product releases are usually a balance between "we need to get product out and cash in" and good product.

Re:Most bang for the buck. (1)

fuzzyfuzzyfungus (1223518) | more than 5 years ago | (#28166883)

Unlike workstations, where (as you say) the value of going from "workstation adequately responsive, 60% load" to "workstation adequately responsive, 30% load" is pretty much zero, it matters on servers, particularly servers running vast numbers of instances of a homogeneous workload. If you have thousands of instances, gains of even a few percent mean substantial reductions in the number of servers you need to run.

Moore's law (1)

tepples (727027) | more than 5 years ago | (#28167057)

Unlike workstations, where (as you say) the value of going from "workstation adequately responsive, 60% load" to "workstation adequately responsive, 30% load" is pretty much zero

Not always. A notebook computer running at 60% load draws more current than one running at 30% load. But LCD backlights eat a lot of power too, and the licensing policy that Microsoft announced for Windows 7 Starter Edition (CPU less than 15 watts) might encourage CPU engineers to move more logic to the GPU and the chipset.

If you have thousands of instances, gains of even a few percent mean substantial reductions in the number of servers you need to run.

Moore's law predicts that transistor density on commodity integrated circuits doubles every 18 months. This means more cores can fit on the same size chip. If your applications are inherently parallel (as servers often are), and your user base doesn't grow faster than that, you can just throw more new hardware at the problem. But you do need to optimize in a couple cases:

  • Your applications become bound by something other than CPU speed and cache capacity, such as main memory bandwidth or persistent storage latency.
  • You plan to increase your servers' load faster than Moore's law, such as if you are promoting your service to a lot of new users or especially if you are adding features. I think adding features is one big source of this slowdown described as Page's Law (formerly called Gates' Law).

Re:Moore's law (1)

BenoitRen (998927) | more than 5 years ago | (#28167547)

you can just throw more new hardware at the problem

You forgot that this also costs money.

Re:Moore's law (1)

tepples (727027) | more than 5 years ago | (#28167665)

you can just throw more new hardware at the problem

You forgot that this also costs money.

It costs money to replace hardware that wears out, such as hard drives and fans. Eventually, it costs more especially in labor to keep fixing a server than to replace the server with a newer, faster server.

Re:Moore's law (1)

BenoitRen (998927) | more than 5 years ago | (#28167725)

Yes. What's your point? I never said you should never replace hardware if it breaks or wears out.

Replace worn-out boxes with new boxes (1)

tepples (727027) | more than 5 years ago | (#28167881)

My point is that if you're already paying to replace hardware, your cluster's capacity will grow over time as you do so because the new machines will be bigger and faster than the ones they replace. So if your application's performance is already satisfactory, you need to optimize if and only if you expect load to grow faster than the computing power of your cluster.

Constant , not most, bang for the buck (1)

davecb (6526) | more than 5 years ago | (#28166995)

Actually, if you're doing test-directed development, you should have a test that tells you if you've met your performance needs or not. Your management wants to know they have a certain amount of bang/$, to meet their performance budget.

For user-interface stuff, that could be as simple as "3 seconds on average, no more than 5% over 20 seconds", for some number of simulated users on your development machine.

So build a test framework and measure the first part of the program you write. For example, that might be the front end of an interactive query program. Put in a dummy delay for the back-end database and test performance the first day the code responds to a request. Code and tune to meet your performance targets, and stop tuning as soon as it is fast enough. In this case the tuning will mostly be looking at code-path length with your test framework and a source-code profiler, to get both latency and transfer time down to an appropriate value. Since you have the program available to you, measure the residency times in each major component with the profiler. The slowest component will be the limiting factor, and the limit on its performance will be 1/Dmax, where Dmax = V * S: the visit count to the component times its service time.

Once the code is performing, now is a good time to stop and look at resource usage. Find out how much CPU, memory and I/O bandwidth your program uses per transaction, and save that information for sizing later. You will need to ensure when you size the system that you don't introduce an artificial bottleneck. This is where your management will want to know the performance, so they can plan to support an estimated number of users.

Returning to tuning, next build a test version of the SQL. Run it as a script and measure the SQL response time. Now you can tune the database queries and get them fast enough.

Finally, if your program contains middleware, arrange for it to communicate via sockets, and measure performance at the front end and the database. The difference will be the performance of the middleware. As before, the demand of the slowest component will be the bottleneck and will hold performance to 1/Dmax. Speeding up other parts of the program won't help.

Consider this the performance expert's version of test-directed design.

--dave
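
A tiny worked example of that bound, with made-up numbers: if the database is visited V = 3 times per transaction at S = 10 ms per visit, its demand is 30 ms, so no amount of tuning elsewhere can push the system past about 33 transactions per second. A minimal C++ sketch of the arithmetic (hypothetical component figures, not measurements from any real system):

    #include <cstdio>

    // Bottleneck analysis from the comment above: demand D = V * S per
    // component; throughput is capped at 1 / Dmax of the busiest component.
    int main() {
        const double visits[]  = {1.0, 3.0, 1.0};        // front end, database, middleware
        const double service[] = {0.004, 0.010, 0.002};  // seconds per visit
        double dmax = 0.0;
        for (int i = 0; i < 3; ++i) {
            const double demand = visits[i] * service[i];
            if (demand > dmax) dmax = demand;
        }
        std::printf("Dmax = %.3f s, max throughput ~= %.1f tx/s\n", dmax, 1.0 / dmax);
        // With the database at 3 * 10 ms = 30 ms, this prints roughly 33.3 tx/s:
        // speeding up the front end or the middleware cannot raise that ceiling.
        return 0;
    }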

Re:Most bang for the buck. (4, Interesting)

cylcyl (144755) | more than 5 years ago | (#28167141)

When companies go into a feature race, they forget that it quickly becomes a game of diminishing returns, as the features you enable are less and less likely to interest your client base.

However, if you improve the performance of your core functions (through UI or speed), your entire customer base gets the improvement and has a real reason to upgrade.

Re:Most bang for the buck. (1)

hedwards (940851) | more than 5 years ago | (#28168235)

Because it's not doing its job just fine if it's inefficient. There's a certain amount of inefficiency that's optimal or acceptable, but milliseconds can and do add up.

Over the entire company, what might be a minor waste of time for one person can become significant very quickly, which is one of the reasons why updating computers and adding a second monitor can be such a profitable move for a company. Tweaks like that do cost money in the short term, but frequently pay off in the long term.

The only thing that's really missing is some sort of metric for the bean counters to use to determine how much money to spend on it. I know at my work the amount of time I spend waiting for the database to do transactions with a server across the country really hurts my productivity.

Not twice as slow (1)

aethelwyrd (1410845) | more than 5 years ago | (#28166689)

It gets twice as bloated.

Re:Not twice as slow (1)

zippthorne (748122) | more than 5 years ago | (#28166955)

Or as I like to say, "Half Fast."

coming from google (2, Insightful)

ionix5891 (1228718) | more than 5 years ago | (#28166695)

who are trying to make software available only via a browser and clunky JavaScript

makes this rather ironic

Re:coming from google (2)

Ilgaz (86384) | more than 5 years ago | (#28166895)

No, it is their justification for running Office in a Web browser, which I did back in 2001 with ThinkFree Office. The ThinkFree guys used to rely on Java, but that changed as technology progressed; now they use a mixture of Java, Ajax and HTML technologies. I think some Flash will be involved too.

Of course, the same people who laughed at me for using an Office written in Java now talk about what a modern idea Google invented (!) after 8 years.

And native vs. interpreted applications? I purchased Apple iWork -- funnily named, but a piece of art in Objective-C -- since I've got a couple of PPC G4s which I can't waste on some spoiled search-engine guys ''inventing'' things. ;)

Re:coming from google (2, Insightful)

Anonymous Coward | more than 5 years ago | (#28167347)

So... you don't think it would be a good idea for them to improve the efficiency of their browser and said software? To me it sounds like common sense, not irony... if you're going to run software in a browser via javascript, make it really efficient software.

2 reasons why software gets bigger. (1)

goldaryn (834427) | more than 5 years ago | (#28166711)

1) Historically: thwarting piracy. Bigger apps were harder to pirate. Copying 32 floppies = pain in the ass.

2) The perception of value. More megabytes implies more features implies more value. You can charge more. Also, you can charge people again for what is basically the same product (there are companies that depend on this!)

Re:2 reasons why software gets bigger. (1)

gplus (985592) | more than 5 years ago | (#28166781)

Computer power is like money: Whenever there's more available people will find something to spend it on.

Re:2 reasons why software gets bigger. (0, Flamebait)

ionix5891 (1228718) | more than 5 years ago | (#28166813)

Obama is a computer?!

Re:2 reasons why software gets bigger. (1)

ionix5891 (1228718) | more than 5 years ago | (#28167019)

wow moderators today dont have a sense of humour (yes that a U in there :D )

Re:2 reasons why software gets bigger. (1)

tepples (727027) | more than 5 years ago | (#28167077)

Obama is a computer?!

No, but Jamie Foxx and T-Pain are robots [youtube.com] .

I don't think that holds up (2, Insightful)

viyh (620825) | more than 5 years ago | (#28166735)

"Page's Law" seems to be a tongue in cheek joke since it's sited primarily by the Google folks themselves. It definitely isn't true across the board. It's purely a matter of a) what the software application is and b) how the project is managed/developed. If the application is something like a web browser where web standards are constantly being changed and updated so the software must follow in suit, I could see where "Page's Law" might be true. But if the product is well managed and code isn't constantly grandfathered in (i.e., the developers know when to start from scratch) then it wouldn't necessarily be a problem.

Re:I don't think that holds up (5, Informative)

Keith_Beef (166050) | more than 5 years ago | (#28166775)

All he has done is put numbers into Wirth's law.

I remembered this as "software gets slower faster than hardware gets faster", but Wikipedia has a slightly different wording: "software is getting slower more rapidly than hardware becomes faster".

http://en.wikipedia.org/wiki/Wirth%27s_law

In fact, that article also cites a version called "Gates's Law", including the 50% reduction in speed every 18 months.

K.

Re:I don't think that holds up (1)

morgan_greywolf (835522) | more than 5 years ago | (#28167741)

All he has done is put numbers into Wirth's law.

Wirth? As in the guy responsible for Algol and Pascal?

Yeah, makes sense he'd say something like that! ;)

Speaking of hardware power to waste (1)

Ilgaz (86384) | more than 5 years ago | (#28166787)

I haven't talked about this for a while, as I am tired of Google fanatics, but what is the point of running software with Administrator (Win) / superuser (Mac) privileges every 2 hours just to... check for updates?

I'm speaking about Google Updater, and I don't really CARE whether it is open source or not.

Not just that; you are setting a very bad example for the industry to use as a reference. They have already started saying ''but Google does it''.

Is that part of the excuse? Because the hardware guys offset the badly designed software coded by some reinvent-the-wheel guys? Does something run in your server farms that opens a socket to the outside world every 2 hours to check for updates?

Listen, people purchasing $1400 software are bugged about their paid commercial software checking for updates, even though it only checks weekly and _only if the application runs_. We don't have hardware to waste or top certified security engineers to spare. Stop assuming everyone has undocumentedly large server farms like yours.

They probably will. (5, Insightful)

fuzzyfuzzyfungus (1223518) | more than 5 years ago | (#28166795)

I'd suspect that Google probably will. Not because of any OMG special Google Genius(tm), but because of simple economics.

Google's apps are largely web-based. They run on Google's servers and communicate through Google's pipes. Since Google pays for every server-side cycle, and every byte sent back and forth, they have an obvious incentive to economize. Since Google runs homogeneous services on a vast scale, even tiny economies end up being worth a lot of money.

Compare this to the usual client application model: even if the scale is equivalent, the maker of the software doesn't pay for the computational resources. Their only pressure is indirect (i.e. customers who don't buy because their machines don't meet spec, or customers who get pissed off because performance sucks). They thus have a far smaller incentive to watch their resource consumption.

The client side might still be subject to bloat, since Google doesn't pay for those cycles; but I suspect competitive pressure, and the uneven JavaScript landscape, will have an effect here as well. If you are trying to sell the virtues of webapps, your apps are (despite the latency inherent in web communication) going to have to exhibit adequate responsiveness under suboptimal conditions (i.e. IE 6, cellphones, cellphones running IE 6), which provides the built-in "develop for resource-constrained systems" pressure.

Incentive to run on existing deployed hardware (1)

tepples (727027) | more than 5 years ago | (#28167765)

Compare this to the usual client application model: even if the scale is equivalent, the maker of the software doesn't pay for the computational resources. Their only pressure is indirect (i.e. customers who don't buy because their machines don't meet spec, or customers who get pissed off because performance sucks). They thus have a far smaller incentive to watch their resource consumption.

Then why are games for PlayStation 2 still coming out years after the launch of the PLAYSTATION 3 console? If the incentive to run on existing deployed hardware were so small, major video game publishers would make their games PS3-exclusive even if the game's design didn't require it.

Re:Incentive to run on existing deployed hardware (1)

fuzzyfuzzyfungus (1223518) | more than 5 years ago | (#28168109)

"customers who don't buy because their machines don't meet spec"

Since consoles move in large, discrete steps, the particular indirect pressure noted above is extremely significant. In the case of the PlayStation, the PS2 was released ~2000 and the PS3 ~2007. Nothing in between; it's one or the other, and the PS2 has a vastly greater installed base. Because specs are fixed, requirements don't get to drift upward. They either stay still, or jump.

PCs aren't wholly different: any publisher of "casual games", for instance, would be insane to require more than a GMA950, and they all know it; but requirements do creep up, because the average power of their customers' PCs creeps up.

emulation layers (1)

Speare (84249) | more than 5 years ago | (#28166817)

When I was a little kid, I saw a new computing device: a Pacman cabinet at the local pinball parlour.

Since then, I've seen dozens of implementations of it, and they fall into two camps: a knockoff that can hardly be called a Pacman-clone, or a full-up 100% authentic duplicate of the original. Of course the latter is done with emulation. Every important detail of the old hardware can be emulated so a true ROM copy can be run with the same timing and everything behaves properly. If you know the proper secret patterns through the maze, then the deterministic behaviors of Inky, Pinky, Blinky and Clyde will not allow them to catch up to you.

We also have many kinds of indirection, where data must be handed through one protocol to another, in order to reach the intended platform. I'm not just talking about TCP/IP and routers, but many new layers to the OSI layer cake: encryption, encoding, tunneling and translation.

Of course, emulation and indirection can go too far. Imagine playing that ROM copy of Pacman on a MAME built for PPC running on Mac OS X Tiger's Rosetta layer, played through a VNC terminal over SSH via an HTTP proxy. That's a contrived (but perfectly possible) example, but I see layers and layers of indirection in real operating systems and applications all the time.

To break "Page's Law," I expect one should focus on reducing the layers of emulation and indirection.

Page's Law. (2, Insightful)

C_Kode (102755) | more than 5 years ago | (#28166881)

Sounds like someone is trying to cement their legacy in history by stamping their name on common knowledge. :-)

Re:Page's Law. (1)

foo1752 (555890) | more than 5 years ago | (#28167301)

Why would he bother? Google is (will be) far more notable than any silly "Page's Law" will ever be.

Re:Page's Law. (4, Insightful)

mdmkolbe (944892) | more than 5 years ago | (#28167493)

Do you remember Moore because of his law or because he co-founded Intel?

Code Bloat? Think twice. (1)

Ukab the Great (87152) | more than 5 years ago | (#28166921)

We could also consider the possibility that a twice-as-fast computer on a twice-as-fast network pipe produces twice-as-much data which, in order to keep the same perceived speed, must be processed twice-as-quickly by another computer.

Re:Code Bloat? Think twice. (1)

tepples (727027) | more than 5 years ago | (#28167913)

We could also consider the possibility that a twice-as-fast computer on a twice-as-fast network pipe produces twice-as-much data

But why is it producing twice-as-much data? Is it receiving twice-as-many requests? If so, from whom? Twice-as-many users? Or a single user doing twice-as-many things?

Bloat wastes energy. (2, Insightful)

miffo.swe (547642) | more than 5 years ago | (#28167027)

One thing that rarely comes up when discussing bloat and slow, underperforming applications is energy consumption. While you can shave a few percent off a server by maximizing hardware energy savings, in many cases you can save much more by optimizing its software.

I think it all comes down to economics. As long as the hardware and software industries live in symbiosis with their endless upgrade loop, we will have to endure this. Having customers buy the same stuff over and over again is a precious cash cow they won't let go of voluntarily.

More Code Efficiency = More Electricity Efficiency (1)

FathomIT (464334) | more than 5 years ago | (#28167585)

I wonder how much energy and money this would save for a server hog like Google. Reminds me of Blackle [blackle.com]

Meanwhile Looser's Law *is* broken... (1)

Dystopian Rebel (714995) | more than 5 years ago | (#28167043)

From the transcript of the speech:

"you never loose a dream"

Yes! (1)

JamesP (688957) | more than 5 years ago | (#28167053)

It's simple, just don't use Java

On a more serious note, my personal opinion is to have developers use and test the programs on slower machines.

Yes, they can profile the app, etc., but that really doesn't create the 'sense of urgency' that working on a slow machine does. (Note I'm not saying developers should use slow machines to DEVELOP, but there should be a testing phase on slow machines.)

Also, slower machines produce more obvious profile timings.

law!?! (1)

iCodemonkey (1480555) | more than 5 years ago | (#28167063)

but I don't want to break any laws.

Grosch's (other) Law (4, Informative)

Anonymous Coward | more than 5 years ago | (#28167069)

Herb Grosch said it in the 1960's: Anything the hardware boys come up with, the software boys will piss away.

Larger user base (2, Interesting)

DrWho520 (655973) | more than 5 years ago | (#28167341)

Making later versions of software run more efficiently on a baseline piece of hardware may also make the software run more efficiently on lesser pieces of hardware. Does the increase in possible install base (since your software now runs on hardware slower than your baseline) justify a concerted effort to write software that runs more efficiently?

Ask Apple how they do it. (2, Interesting)

toby (759) | more than 5 years ago | (#28167375)

10.0, 10.1, 10.2, 10.3, and maybe 10.4 were a series of releases where performance improved with each update. I don't run 10.5, so I can't comment on whether the trend continues.

We don't want it to be broken, really (1)

realmolo (574068) | more than 5 years ago | (#28167415)

Hardware has advanced to the point that we don't care about performance all that much.

What is more of a concern is how easy it is to write software, and how easy it is to maintain that software, and how easy it is to port that software to other architectures. Efficiency of code generally means efficient use of a single architecture. That's fine, but for code that has to last a long time (i.e., anything besides games), you want it to be written in a nice, easy-to-change way that can be moved around to different platforms for the next 20 years.

Moar users! Moar battery life! (1)

tepples (727027) | more than 5 years ago | (#28168075)

Hardware has advanced to the point that we don't care about performance all that much.

That might be true of software intended to run on desktop PCs. But for servers, you want efficiency so you can handle more requests from more users. And for software intended to be run on small, cheap, battery-powered devices, you want efficiency so you can underclock the CPU and run longer on a charge. You mentioned games, but a lot of applications for handheld and subnotebook computers aren't games.

Puh-lease (0)

Anonymous Coward | more than 5 years ago | (#28167767)

"Page's law"? Not too egotistical is he? I guess by stating the obvious that make it his idea.

Sure it can be broken... just stop upgrading (0)

Anonymous Coward | more than 5 years ago | (#28167773)

Seriously.... I can count on one hand the can't-live-without software that has changed in the last 10 years.
After those 6 or so Apps, the rest is just candy.

Theoretically, yes. Practically, not often (2, Interesting)

jollyreaper (513215) | more than 5 years ago | (#28167949)

Business managers don't want to pay for great when good will do. Have you gotten the beta to compile yet? Good, we're shipping. I don't care if it was a tech demo, I don't care if you said your plan was to figure out how to do it first, then go back through and do it right. We have a deadline, get your ass in gear.

Then the next release cycle comes around and they want more features, cram them in, or fuck it we'll just outsource it to India. We don't know how to write a decent design spec and so even if the Indians are good programmers, the language barrier and cluelessness will lead to disaster.

And here's the real kicker -- why bother to write better when people buy new computers every three years? We'll just throw hardware at the problem. This is the factor that's likely to change the game.

If you look at consoles, games typically get better the longer the console is on the market, because programmers become more familiar with the platform and what it can do. You're not throwing more hardware at the problem, not until the new console ships. That could be years and years away just for the shipping, and even more years until there's decent market penetration. No, you have to do something wonderful and new, and it has to be done on the current hardware. You're forced to get creative.

With the push towards netbooks and relatively low-power systems (low-power by today's standards!), programmers won't be able to count on power outstripping bloat. They'll have to concentrate on efficiency or else they won't have a product.

There's also the question of how much the effort is worth. $5000 in damage to my current car totals it, even if it could be repaired; I can go out and buy a new car. In Cuba, there's no such thing as a new car; there are only so many on the market. (Are they able to import any these days?) Anyway, that explains why the 1950s disposable rustbuckets are still up and running. When no new cars are available for love or money, the effort of keeping an old one running pays for itself.

Excellence has to be a priority coming down from the top in a company. If cut-rate expediency is the order of the day, crap will be the result.
