Hardware Is Cheap, Programmers Are Expensive

Soulskill posted more than 4 years ago | from the optimization-takes-effort dept.

Programming 465

Sportsqs points out a story at Coding Horror which begins: "Given the rapid advance of Moore's Law, when does it make sense to throw hardware at a programming problem? As a general rule, I'd say almost always. Consider the average programmer salary here in the US. You probably have several of these programmer guys or gals on staff. I can't speak to how much your servers may cost, or how many of them you may need. Or, maybe you don't need any — perhaps all your code executes on your users' hardware, which is an entirely different scenario. Obviously, situations vary. But even the most rudimentary math will tell you that it'd take a massive hardware outlay to equal the yearly costs of even a modest five person programming team."
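
For a rough sense of the "rudimentary math" the summary alludes to, here is a minimal back-of-envelope sketch in Python; the salary and server prices are illustrative assumptions, not figures from the article:

    # Back-of-envelope comparison: team cost vs. hardware outlay.
    # All figures are assumptions for illustration only.
    programmer_cost = 80_000   # assumed fully loaded annual cost per programmer (USD)
    team_size = 5
    server_cost = 4_000        # assumed cost of one commodity server (USD)

    team_cost_per_year = programmer_cost * team_size
    servers_equivalent = team_cost_per_year / server_cost
    print(f"Team: ${team_cost_per_year:,}/year, equal to {servers_equivalent:.0f} servers")
    # -> Team: $400,000/year, equal to 100 servers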

465 comments

Frist? (-1, Troll)

Anonymous Coward | more than 4 years ago | (#26183927)

I've got a greased up YODA doll shoved up my ass and natalie portman's naked and petrified statue is pouring hot grits down my pants!

Re:Frist? (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#26183973)

Natalie Portman can't act for shit and she has the tits of an 11-year old girl. Grits are bland and best served to the inbred, down-syndrome-afflicted inhabitants of the Southern United States. Get off it already.

As for the article, what the fuck was it trying to say? Durrrr, having better computers is good and optimizing code is good and every developer should have their own 500 TB server and every washing machine and toaster should have a SPARC processor with 256 megs of RAM. Grrrrreat fucking idea. And totally fucking pointless. Good thing that no sane American would be awake at this ungodly hour, otherwise they just may read that cretinous bullshit article.

Re:Frist? (5, Funny)

couchslug (175151) | more than 4 years ago | (#26184279)

"Natalie Portman can't act for shit and she has the tits of an 11-year old girl. Grits are bland and best served to the inbred, down-syndrome-afflicted inhabitants of the Southern United States."

OK, OK, ya got me horny, hungry, and nostalgic for the folks back home, but what was your point?

Re:Frist? (-1, Offtopic)

Sponge Bath (413667) | more than 4 years ago | (#26184329)

Natalie Portman can't act for shit and she has the tits of an 11-year old girl

Hmmm... sounds like most of the Dr. Who companions.

Re:Frist? (-1, Offtopic)

alain94040 (785132) | more than 4 years ago | (#26184335)

One reason corporations don't like part-time is that as long as you are full-time, you actually tend to work way past 40 hours a week. You do whatever it takes to get the job done, under impossible deadlines.

Once you are part-time, you start saying no to crazy demands. Corporations just hate it.

My answer? Be your own boss. It comes with a caveat: starting your own business alone is a bad idea. Guess what? It takes more than one person to provide something of value. It doesn't take an army of hundreds, but a small dedicated group of friends can do amazing things. The sum really is larger than the parts.

Take a look at fairsoftware.net [fairsoftware.net]. It was designed for exactly that purpose: friends starting a side business together.

Re:Frist? (5, Interesting)

tomhudson (43916) | more than 4 years ago | (#26184345)

Natalie Portman can't act for shit and she has the tits of an 11-year old girl. Grits are bland and best served to the inbred, down-syndrome-afflicted inhabitants of the Southern United States. Get off it already.

that's the point - they DO get off on it!

As for the rest, if you REALLY want to improve productivity:

  • HARDWARE
    1. Dual monitors. They pay for themselves within weeks. This is a real no-brainer.
    2. Dual monitors. They pay for themselves within weeks. This is a real no-brainer.
    3. Dual monitors. They pay for themselves within weeks. This is a real no-brainer.
    4. Did I mention dual monitors? They really make a difference ...
  • PEOPLE
    1. Learn to manage people. The biggest time-waster is bad management.
    2. Learn some communications skills. This applies to everyone. Management, programmers, get your "people skills" in order.
    3. Give people the time they need to better self-organize. Unrealistic deadlines waste time as corners are cut.
    4. Learn to manage projects. This includes cutting features right at the beginning, instead of the usual "we have this checklist of features", and then the inevitable "feature creep", followed by the "what can we cut so we can ship the *^&@&%&^% thing?"

The real productivity killers are poor morale, poor management, poor communications, poor specifications, poor research, lack of time for testing, lack of time for documenting, lack of time for "passing on knowledge" to other people, etc. Not hardware.

Yes, hardware IS cheap. Poor management is the killer - in every field. Just ask anyone who has been on a death march project. Or bought GM stock a year ago. Or who supported John McCain, then watched Sarah Palin become his "bimbo eruption." They all have one thing in common - people who thought they knew better, didn't do their research properly, and then screwed the pooch.

fp (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#26183929)

hell yeah

Timing is everything (4, Interesting)

BadAnalogyGuy (945258) | more than 4 years ago | (#26183933)

Sure, right now it may be more expensive to hire better developers.

But just wait a couple more months when unemployment starts hitting double digits. You'll be able to pick up very good, experienced developers for half, maybe a third of their current salaries.

Sure, invest in some HW now. That stuff will always be handy. But don't just go off and assume that developers will be expensive forever.

Re:Timing is everything (1)

tsa (15680) | more than 4 years ago | (#26184013)

For a third of the price of a developer you can buy an enormous amount of hardware.

Re:Timing is everything (3, Funny)

ijakings (982830) | more than 4 years ago | (#26184143)

Yeah, but you have to appreciate where all this enormous amount of hardware, enormous amount of hardware goes. It doesn't just come on a truck you can dump things on; it has to come via a series of tubes. Oh... Wait.

Re:Timing is everything (5, Insightful)

ShieldW0lf (601553) | more than 4 years ago | (#26184191)

Everyone knows a blind mapmaker will finish his work much faster on a motorcycle than he will on foot. This is basically the same thing...

Re:Timing is everything (0)

Anonymous Coward | more than 4 years ago | (#26184347)

Did you even read the article? Troll.

Re:Timing is everything (5, Insightful)

samkass (174571) | more than 4 years ago | (#26184015)

We'll see. The good developers probably won't be in the first wave of folks looking for jobs. I know our company is still in the "we have to figure out how to hire fast enough to do next year's work" mode.

Where having good engineering really helps, though, is in version 2.0 and 3.0 of the product, and when you try to leverage embedded devices with some of the same code, and when you try to scale it up a few orders of magnitude... basically, it buys you flexibility and nimbleness on the market that the "throw more hardware at the problem" folks can't match.

Despite Moore's Law being exponential over time (so far), adding additional hardware is still sub-linear for any snapshot in time. So it's not going to automatically solve most hard scalability problems.
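
One way to make the sub-linear point concrete is Amdahl's law, which caps the speedup from N-way parallel hardware by the workload's serial fraction. The commenter doesn't cite it by name, so treat this Python sketch as an editorial illustration, not his argument:

    # Amdahl's law: with a serial fraction s, N-way parallel hardware
    # can never speed a job up by more than 1/s. Illustrative numbers.
    def amdahl_speedup(parallel_fraction: float, n_units: int) -> float:
        serial = 1.0 - parallel_fraction
        return 1.0 / (serial + parallel_fraction / n_units)

    for n in (2, 8, 64, 1024):
        print(f"{n:5d} units -> {amdahl_speedup(0.9, n):.2f}x")
    # 2 -> 1.82x, 8 -> 4.71x, 64 -> 8.77x, 1024 -> 9.91x: never reaches 10x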

Re:Timing is everything (5, Informative)

diskofish (1037768) | more than 4 years ago | (#26184017)

I think there will be more developers looking for work in the future, but I don't think the price is going to drop THAT much. I just think you'll be able to find qualified developers more easily.

As for the article, it makes a lot of sense when you're running in a controlled environment. It's really a no-brainer in consulting work: upgrading hardware or optimizing software will both meet the customer's needs, only the hardware upgrade costs $2,000 and the software optimization costs $20,000.

Of course, if you're releasing software into the wild and it needs to run on many different machines, you'd better make sure it performs well, especially if it's a retail product. So spend the extra money and make it really good.

Re:Timing is everything (5, Interesting)

jopsen (885607) | more than 4 years ago | (#26184363)

Of course, if you're releasing software into the wild and it needs to run on many different machines, you'd better make sure it performs well, especially if it's a retail product. So spend the extra money and make it really good.

People usually buy a product before they realize the performance sucks... And retailers always say that it's just because your computer isn't new enough... Which makes people buy new computers, not complain about the software...
- Or maybe I'm wrong...

But I don't know many non-computer-freaks who can tell you the specs of their computer, even fewer who compare them to the minimum requirements of a game, and almost nobody who knows that the recommended system spec is actually the minimum requirement for any practical purpose...
And I don't blame them... I'm a nerd, not a gamer, and I can't tell the difference between most modern graphics cards...

Re:Timing is everything (0)

Anonymous Coward | more than 4 years ago | (#26184409)

Why? Micro$oft never did - just spend like crazy on marketing! (And then on legal.)

Re:Timing is everything (2, Funny)

dexmachina (1341273) | more than 4 years ago | (#26184045)

And if that still doesn't appeal to you, Walmart sometimes has developers going as loss leaders during the Christmas season...you can pick one up today for a fraction of its wholesale value!

Re:Timing is everything (1)

WindowlessView (703773) | more than 4 years ago | (#26184247)

You'll be able to pick up very good, experienced developers for half, maybe a third of their current salaries.

It's not as if software development exists in isolation from the rest of the economy. If SE salaries dropped to 1/2 or 1/3 of current levels, it is almost certain that virtually every other profession's would have dropped as well. In that case employers should be less concerned about buying computer hardware than about loading up on armaments, since there would be a full-blown revolution and complete social chaos taking place.

Water is wet, Pope Catholic? (0)

Anonymous Coward | more than 4 years ago | (#26183943)

Bears shit in the woods?

It's been this way for 15 years.

Re:Water is wet, Pope Catholic? (1)

aliquis (678370) | more than 4 years ago | (#26184133)

Personally I don't understand how they are comparable at all; since when does hardware do the programming? =P

The number of programming scenarios where more hardware solves anything must be severely limited. Optimisation of software vs adding more hardware?

I agree. (5, Insightful)

theaveng (1243528) | more than 4 years ago | (#26183945)

Recently my boss reviewed my schematic and asked me to replace 1% resistors with 2% or 5% ones "because they are cheaper". Yes, true, but I spent most of the day doing that, so he spent about $650 on the task, thereby spending MORE, not less.

So yeah, I agree with the article that it's often cheaper to specify faster hardware, or more-expensive hardware, than to spend hours and hours of expensive engineers'/programmers' time trying to save pennies.

Or as Benjamin Franklin said, "Some people are penny-wise, but pound foolish." You try to save pennies and waste pounds/dollars instead.

Re:I agree. (3, Interesting)

tinkertim (918832) | more than 4 years ago | (#26184025)

Or as Benjamin Franklin said, "Some people are penny-wise, but pound foolish." You try to save pennies and waste pounds/dollars instead.

The same can be true in programming, but usually the scenario describes development itself, i.e. premature optimization. If your team is experienced, the only reason for this would be people trying to do big things in small spaces.

I think it comes down to what you need, what you want and what you need to spec for your software to actually run.

If you're willing to spend quite a bit of money on some really talented people, what you need as far as hardware (at least memory) goes can be reduced significantly.

What you want is to roll a successful project in xx months and bring it to market, so raising the hardware bar seems sensible.

Then we come down to what you can actually spec, as far as requirements for your clients who want to use your software. Microsoft ended up lowering the bar for Vista in order to appease Intel and HP... look what happened.

If your market is pure enterprise, go ahead and tell the programmers that 4GB and a newer dual-core CPU is the minimum spec for your stuff. If your market is desktop users, it may be a bad idea.

I don't think there's a general rule or 'almost always' when contemplating this kind of thing.

But first: Profile, Analyze, Understand (5, Insightful)

StCredZero (169093) | more than 4 years ago | (#26184029)

This only works for certain cases. Some of your problems are too many orders of magnitude too big to throw hardware at.

Before you do anything: Profile, analyze, understand.

It might be useless to spend a month of development effort on a problem that you can solve by upgrading the hardware. It's equally useless to spend the money on new hardware, and the administrator time setting it up and migrating programs and data, when a little analysis would have told you up front that it wouldn't help.

Two questions I used to ask when giving talks: "Okay, who here has used a profiler? [hands go up] Now who has never been surprised by the results? [almost no hands]"

Before you spend money or expend effort, just take some easy steps to make sure you're not wasting it. Common sense.
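
As a concrete starting point for that advice, a minimal sketch using Python's built-in cProfile; the workload function is made up for illustration:

    # Profile before buying hardware or optimizing: measure where time goes.
    import cProfile
    import pstats

    def slow_report():              # hypothetical workload for illustration
        total = 0
        for i in range(1_000_000):
            total += i * i
        return total

    profiler = cProfile.Profile()
    profiler.enable()
    slow_report()
    profiler.disable()

    # Show the ten most expensive call sites, sorted by cumulative time.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)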

Re:I agree. (1)

aliquis (678370) | more than 4 years ago | (#26184147)

Uhm, you get paid / cost more than $650 a day? Consulting cost for him, or what? Will whatever you did only be used once?

Can't the wrong resistance give wrong results, since the design was calculated using exact values?

Re:I agree. (2, Funny)

theaveng (1243528) | more than 4 years ago | (#26184295)

Engineers are billed at about $90 an hour. That includes wages, health benefits, rental for the cubicle space, and heating.

Re:I agree. (2, Informative)

Velox_SwiftFox (57902) | more than 4 years ago | (#26184277)

But it is so much fun to explain to the bean counter who ordered twice as many disk drives of half the capacity you specified, because their painstaking research found they were a few percent cheaper per byte, that now they have to add in the cost of twice as many RAID card channels or storage servers, rack expenses, et cetera when figuring out how much money they saved the company.
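
The hidden-cost arithmetic behind this anecdote is easy to sketch; all prices below are invented for illustration:

    # Price per byte is not total cost: twice as many half-size drives
    # also means more RAID channels and rack slots. Invented prices.
    def total_cost(n_drives, price_per_drive,
                   slots_per_raid_card=8, raid_card=300, rack_slot=50):
        raid_cards = -(-n_drives // slots_per_raid_card)   # ceiling division
        return (n_drives * price_per_drive
                + raid_cards * raid_card
                + n_drives * rack_slot)

    print(total_cost(8, 200))   # 8 big drives:    1600 + 300 + 400 = 2300
    print(total_cost(16, 95))   # 16 small drives: 1520 + 600 + 800 = 2920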

Re:I agree. (1)

Sponge Bath (413667) | more than 4 years ago | (#26184311)

...replace 1% resistors with 2% or 5% ones "because they are cheaper"

Did your boss implement a QA process to screen each resistor and discard any that exceed the 2 or 5 percent tolerance?
You can't achieve the increased efficiency without proper controls.

Re:I agree. (1)

DerekLyons (302214) | more than 4 years ago | (#26184423)

Recently my boss reviewed my schematic and asked me to replace 1% resistors with 2% or 5% ones "because they are cheaper". Yes, true, but I spent most of the day doing that, so he spent about $650 on the task, thereby spending MORE, not less.

Which [potentially] shows why he's a boss - and you aren't. That $650 (overpaid in salary to you) is a one-time cost - but it can also represent considerable savings: in setup time, if 2% or 5% resistors are the standard wherever your circuits are manufactured; in total hardware cost across a large production run (even more so if your design contains many resistors); etc.

Any engineer worth a damn knows enough accounting to be able to figure this stuff out.

Programmers From India (0)

Anonymous Coward | more than 4 years ago | (#26183951)

are cheap. But they also suck, so they're expensive in the long run.

But who's going to fly it? (5, Insightful)

malefic (736824) | more than 4 years ago | (#26183955)

"10,000! We could almost buy our own ship for that!" "Yeah, but who's going to fly it kid? You?"

Re:But who's going to fly it? (5, Funny)

tinkertim (918832) | more than 4 years ago | (#26184043)

"10,000! We could almost buy our own ship for that!" "Yeah, but who's going to fly it kid? You?"

Typical Management Response:
"You bet I could!, I'm not such a bad programmer myself!"

Re:But who's going to fly it? (1)

theaveng (1243528) | more than 4 years ago | (#26184337)

You've met my boss?

"Why isn't this done yet?" "You only gave it to me four days ago." "Don't give me excuses; I would have had it done by now." (whispers quietly): "So why don't YOU do it then?"

My favorite:

"Add a voltage regulator." "Okay. Do you have a suggestion boss?" "Figure it out yourself." "Surely we've used these on other cards? Do you have a current schematic I can look at?" "I don't know. Leave." (whispers): "I thought my boss said he knew this stuff and could have it done in four days. Now he can't even provide a simple schematic."

Original Article here... (4, Informative)

Ckwop (707653) | more than 4 years ago | (#26183963)

http://www.codinghorror.com/blog/archives/001198.html [codinghorror.com]

Give the person who actually wrote the article the ad revenue rather than this bottom feeding scum.

Re:Original Article here... (0)

Anonymous Coward | more than 4 years ago | (#26184009)

[citation needed]

Re:Original Article here... (0)

Anonymous Coward | more than 4 years ago | (#26184309)

What ad revenue? Are there any Slashdot users that don't use Adblock?

(That being said, yeah, I agree linking to intermediate sites that don't add anything of value when you can just as well link to the original instead is a no-no.)

Recalculate for the crisis (2, Interesting)

Baldrson (78598) | more than 4 years ago | (#26183965)

Better recalculate the trade-offs for the current economic crisis:

TFA says the average programmer with my experience level should be getting a salary of around $50/hour, but you'll see I've recently advertised myself at $8/hour. [majorityrights.com]

How many hundreds of thousands of jobs have been lost in Silicon Valley alone recently?

The crisis has gutted demand for hardware as well, but things are changing so fast, yesterday's calculations are very likely very wrong. Tomorrow, hyperinflation could hit the US making hardware go through the roof due to the exchange rate.

Re:Recalculate for the crisis (4, Interesting)

MickLinux (579158) | more than 4 years ago | (#26184117)

Well, unless the $8/hr is an introductory rate (that is, the first 200 hrs are at $8.50, then after that you go up to $15 or $20/hr), you could do better by joining a construction site. At our place (prestress, precast concrete plant), we are paying warm bodies $10/hr.

Show that you can read drawings, and you can quickly rise up to $12-$14/hr. Which is, admittedly, a pittance, but if you live in a trailer home, you can make ends meet. Then you can still program in your spare time, and keep the rights to your work, to boot.

Re:Recalculate for the crisis (1)

Jeff DeMaagd (2015) | more than 4 years ago | (#26184171)

I think some people would take less money rather than work outside in the winter. Working outside in the summer isn't always a picnic either.

Re:Recalculate for the crisis (1)

theaveng (1243528) | more than 4 years ago | (#26184393)

That's me! I would rather work for $8 at walmart than $12 in construction. Working at walmart has all the comforts of working in an office, like A/C and heat. Minus the websurfing but plus lots of attractive women to flirt with.

Re:Recalculate for the crisis (1)

Baldrson (78598) | more than 4 years ago | (#26184175)

It's true that for permanent on-site work my compensation requirements are much higher, so my advertised $8/hour for remote temporary consulting isn't an apples-to-apples comparison with the $50/hour permanent salary annualized to $99k given in TFA. But I think it trades fairly when you consider that employers don't want to commit to fixed recurring costs in the present economic climate, and the vast majority of programming work can be done remotely.

This has been true since at least 1980. (1)

mbone (558574) | more than 4 years ago | (#26183969)

Back in the mainframe days (when you were likely to be charged for every byte of storage and every CPU cycle), hardware was viewed as expensive. But at least in my career, since about 1980, programmer time has been viewed as the most expensive piece.

I agree...to a point (3, Insightful)

MpVpRb (1423381) | more than 4 years ago | (#26183983)

With cheap hardware readily available, I agree that, for many projects, it makes no sense to spend lots of time optimizing for performance. When faced with this situation, I optimize instead for readability and easy debugging, at the expense of performance.

But, and this is a big but, fast hardware is no excuse for sloppy, bloated code. Bad code is bad code, no matter how fast the hardware. Bad code is hard to debug, and hard to understand.

Unfortunately, bad or lazy programmers, combined with clueless managers, fail to see the difference. They consider good design to be the same as optimization, and argue that both are unnecessary.

I believe the proper complement to powerful hardware is well-thought-out, clean, unoptimized code.

Re:I agree...to a point (4, Insightful)

nine-times (778537) | more than 4 years ago | (#26184209)

I think if you're paying for programming vs. hardware, you're just paying for different things. I would think that would be somewhat obvious, given their very different nature, but apparently there's still some uncertainty.

The improvements you get from optimizing software are limited but reproducible for free -- "free" in the sense that if I have lots of installations, all the installations benefit from any improvement made to the code. Improvements from adding new hardware cost money each time you add new hardware, as well as costing more in terms of power, A/C, administration, etc. On the other hand, the benefits you can get from adding new hardware are potentially unlimited.

And it's meaningful that I'm saying "potentially" unlimited, because sometimes effective scaling comes from software optimization. Obviously you can't always drop in new servers, or drop more processors/RAM into existing servers, and have that extra power end up being used effectively. Software has to be written to take advantage of extra RAM and more CPUs, and it has to be written to scale across servers and handle load balancing and such.

The real answer is that you have to look at the situation, form a set of goals, and figure out the best way to reach those goals. Hardware gets you more processing power and storage for a given instance of the application, while improving your software can improve security, stability, and performance on all your existing installations without increasing your hardware. Which do you want?
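
That trade-off suggests a simple break-even model. A sketch with assumed figures (the $2,000/$20,000 split echoes an earlier comment in this thread):

    # One-time optimization effort vs. per-installation hardware upgrades.
    # Figures assumed for illustration.
    optimization_cost = 20_000      # one-time programmer effort (USD)
    hardware_per_site = 2_000       # upgrade cost per installation (USD)

    break_even = optimization_cost / hardware_per_site
    print(f"Optimization wins beyond {break_even:.0f} installations")
    # -> Optimization wins beyond 10 installations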

Re:I agree...to a point (1)

dzelenka (630044) | more than 4 years ago | (#26184343)

I wish I had mod points...

The cost of maintaining code is almost never fully considered during the design phase. Clean code is MUCH cheaper in the long run. Time spent reworking code should go not to optimization but to simplification.

It's like the famous Mark Twain line 'I didn't have time to write a short letter, so I wrote a long one instead.'

When you have some "hamster code" that HAS to be optimized, it should be isolated and heavily documented.

Assuming of course hardware is the bottleneck (4, Interesting)

Analogy Man (601298) | more than 4 years ago | (#26183985)

Toss as much CPU and memory as you want at a chatty transaction and you won't solve the problem. What about the cost of your 2000 users of the application who wander off to the coffee machine while they wait for an hourglass to relinquish control to them? Over the years I have seen wanton ignorance about efficiency, scalability, and performance from programmers who ought to know better.

Shiny! (1)

trold (242154) | more than 4 years ago | (#26183993)

If you buy the newest hardware gizmo on the market, the geeks will be begging you to let them code for it.

Everything Old is New (1)

critical_point (1430417) | more than 4 years ago | (#26184005)

This is the original justification for using high-level languages, e.g. FORTRAN, COBOL, etc., instead of working with machine instructions.

Just like the debate over mainframes vs thin-clients/cloud-computing, this is an old notion that waxes and wanes in cycles. Besides FORTRAN the biggest single step in trading execution speed for higher level programming was the adoption of Java, unless you include the minority of programmers who get to use a fourth generation programming language at work.

How clueless can someone get? (5, Insightful)

Marcos Eliziario (969923) | more than 4 years ago | (#26184019)

From someone who has been there, done that: I can say that throwing hardware at a problem rarely works.
If nothing else, faster hardware tends to increase the advantage of good algorithms over poorer ones.
Say I have an algorithm that runs at O(N) and another, functionally equivalent, that runs at O(N^2). Now let's say that you need to double the size of the input while keeping the execution time constant. For the first algorithm you will need a machine that is 2X faster than the current one; for the second, O(N^2), you'll need a 10X faster machine.
Let's not forget that you need things not only to run fast, but to run correctly, and the absurdity of choosing less skilled programmers with more expensive hardware becomes painfully evident.

PS: Sorry for the typos and other errors: English is not my native language, and I had a bit too much beer last night.
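
To see the gap between the two complexity classes in practice, a minimal timing sketch (the duplicate-detection workloads are toy stand-ins, not the commenter's code):

    # Toy O(n) vs O(n^2) answers to the same question:
    # "does the list contain a duplicate?"
    import time

    def has_duplicate_quadratic(xs):    # O(n^2): compare every pair
        return any(xs[i] == xs[j]
                   for i in range(len(xs))
                   for j in range(i + 1, len(xs)))

    def has_duplicate_linear(xs):       # O(n): one pass with a set
        seen = set()
        for x in xs:
            if x in seen:
                return True
            seen.add(x)
        return False

    data = list(range(2_000))           # no duplicates: worst case
    for fn in (has_duplicate_linear, has_duplicate_quadratic):
        start = time.perf_counter()
        fn(data)
        print(fn.__name__, f"{time.perf_counter() - start:.4f}s")
    # Doubling the input roughly doubles the linear time
    # but quadruples the quadratic time.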

Re:How clueless can someone get? (1)

BadAnalogyGuy (945258) | more than 4 years ago | (#26184081)

Now let's say that you need to double the size of the input while keeping the execution time constant. ...for the second, O(N^2), you'll need a 10X faster machine.

4x, actually, but I think we all get what you're saying.

Basically, the angle of the dangle is geometrically proportionate to the motion of the ocean.

Re:How clueless can someone get? (2, Interesting)

tukang (1209392) | more than 4 years ago | (#26184253)

From a purely algorithmic perspective you are correct, but it will be easier to implement that O(n) algorithm in a high-level scripting language like Python than it would be to implement it in assembly or even C, and I think that's where the submitter's argument of relying on hardware to make up the speed difference makes sense.

Re:How clueless can someone get? (0)

Anonymous Coward | more than 4 years ago | (#26184283)

That is from a CPU load perspective. I work mainly with databases, and it always surprises me that someone willing to spend millions on a project will begrudge buying lots of storage space (terabyte DBs). When doing things like precalculating aggregated data would help improve performance, they seem reluctant to spend a few grand on more drives. Then there are the folks who would rather avoid spending money on a license for a high-performance DB like Oracle and insist on getting cheaper, less capable ones, just to avoid the capital cost. They end up spending more in the long run as we struggle for months trying to improve performance for them.

Re:How clueless can someone get? (0)

Anonymous Coward | more than 4 years ago | (#26184407)

Say I have an algorithm that runs at O(N) and another, functionally equivalent, that runs at O(N^2). Now let's say that you need to double the size of the input while keeping the execution time constant. For the first algorithm you will need a machine that is 2X faster than the current one; for the second, O(N^2), you'll need a 10X faster machine.

So 2^2 == 10, eh?

and I had a bit too much beer last night.

I really would like some of that :-)

Another u.s. specific problem. cost of living (1, Insightful)

unity100 (970058) | more than 4 years ago | (#26184033)

everything ranging from a measly meal to healthcare is so expensive that, any kind of rare labor becomes exponentially expensive. because, people need multiples of pay to make any advance in their standard of living due to cost of living.

in u.s., due to the ease with which you let mega corporations run rampant because they yelp and wank 'hands off business', you people are paying a fortune for almost anything that is sold much cheaper in any other country. even the SAME corporations are selling the same products for much cheaper in europe, whereas they are giving you the shaft on the price of the same product in u.s.

'hands off business' was supposed to 'create jobs', 'increase standard of living' and so on.

did it ? what we see currently is totally to the opposite.

the wealth did not 'trickle down' (and why the hell should it anyway), you're losing jobs whilst cost of living almost stays the same (after all, corporations have to make profits so that they can provide jobs, aren't they - but where are the jobs), the spiral goes deeper and deeper.

i blame this on one thing alone - extremism.

extremism is bad at EVERYthing. every aspect of life, social or personal, without any exception.

when you go extreme on something, you break some other things to the extent that it becomes a disaster. just make a list of such stuff you experienced in your life, and you'll see.

business, economy are just features of social life, and they are no exceptions. if you go to extreme to ANY side, be it extreme 'freedom' or extreme regulation, it breaks down.

america went to the extreme lawless end in the last 30 years. it cost entire world a crisis. n. korea went extreme control in the last 50 years, it cost their people a poverty.

balance is the answer, balance is the key. take europe as an example. with all its faults, the system seems to be working exceptionally well. a lot of small european countries which should not have any significance at all, because of their lack of natural resources and manpower, are producing and creating much more compared to u.s. on a ratio scale. not only that, but the life standard of their people is much, much higher.

one word; balance.

Yes... that's the answer... (1)

borgheron (172546) | more than 4 years ago | (#26184037)

Throwing hardware at a bad application is ALWAYS the right way to go.

There's an old saying "Never throw good money after bad."

GC

Always thought the same about managers (1)

jmcnaught (915264) | more than 4 years ago | (#26184053)

I always thought the same about managers. Companies are better off spending money on productive workers and the machines they need than on cushy managers who IMHO do work that is way less hard.

I think companies would see real productivity gains from having the programmers (or whatever type of worker) manage themselves... either by meeting together regularly, rotating the leadership position, or preferably a mixture of both.

This way the skills and competency of the workers are enhanced, the decisions are made by those doing the work, they have a bigger stake in decisions that are made (because they helped make them), and they can divide that extra-high management salary amongst themselves.

In my experience, in professional technical settings the supervised almost always have a better understanding of their work than the supervisors. And the further up the management chain you go, the less likely you'll find anyone with a clue.

Not to mention, this idea of just throwing hardware at efficiency problems kinda bucks the trend of finding lower-energy IT solutions.

What a fucking douche-bag.

Wrong objective (2, Insightful)

trold (242154) | more than 4 years ago | (#26184057)

Good hardware running code written by bad programmers just means the code will fail faster. The primary goal of a programmer is to make the code work, and that does not change no matter how fast your hardware is.

objective is correctness, always (2, Insightful)

pikine (771084) | more than 4 years ago | (#26184219)

The article seems to assume that bad programmers write slow but correct code, which is a big assumption. But the observation on cost also means that good programmers should focus on correctness rather than performance.

Just to illustrate how difficult it is to get correctness right: on page 56 [google.com] of The Practice of Programming by Kernighan and Pike -- a very highly regarded book by highly regarded authors -- there is a hash table lookup function that is combined with insert, to perform an optional insertion when the key is not found in the table. It assumes that the value argument can be safely discarded if insertion is not performed. That assumption works fine with integers, but not with pointers to memory objects, file descriptors, or any handle to a resource. An inexperienced programmer trying to generalize int value to void *value will induce a memory leak on behalf of the user of the function.
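
The function in the book is C, but the same trap is easy to reproduce in Python with any resource handle; a hypothetical sketch:

    # A lookup-or-insert that silently discards the supplied value when
    # the key already exists: fine for ints, a leak for open resources.
    def lookup(table, key, value, insert=False):
        if key in table:
            return table[key]           # 'value' is dropped on the floor here
        if insert:
            table[key] = value
        return value

    table = {}
    lookup(table, "log", open("a.txt", "w"), insert=True)   # stored: fine
    # Key exists now, so this freshly opened handle is discarded unclosed.
    lookup(table, "log", open("b.txt", "w"), insert=True)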

Not always the case (3, Interesting)

mark99 (459508) | more than 4 years ago | (#26184059)

In a lot of big orgs it is amazing how expensive it can be to upgrade your hardware or add to an existing farm. Not because of the hardware cost, but because of all the overhead involved in designing/specifying the setup, ordering, waiting for it to come, getting space for it, installation, patching, backing up, etc.

In fact I've seen several orgs where the cost of a "virtual server" is almost as much as a physical one, because the cost of all this servicing is so high. Whether or not this is necessary I don't want to debate here, but it is undeniably the case.

So I think the case for throwing hardware at issues is not as clear cut as this article implies.

What a crock... (4, Insightful)

johnlcallaway (165670) | more than 4 years ago | (#26184063)

For pure CPU driven applications, I would agree with this statement. But NONE of the business applications I write are bogged down by CPUs. They are bogged down by I/O, either user uploads/downloads, network, or disk access.

I have yet to see any application that was fixed for good by throwing hardware at it. Sooner or later, the piper has to be paid and the problem fixed. Someone improved response time by putting in a new server?? Does that mean they had web/app/database/data all on one machine?? Bad, bad, BAD design for large applications: nowhere to grow. At least if it's tiered and using a SAN with optical channels, more servers can be added. Sometimes more, not faster, is better. And resources can be shared to make optimal use of the servers that are available.

The FIRST step is to determine WHY something is slow. Is it memory, CPU, or I/O bound? That doesn't take a rocket scientist; looking at sar in Unix or Task Manager in Windows can show you that. Sure, if it's CPU bound, buying faster CPUs will fix it.

The comment about developers having good boxes isn't the same as for applications. My latest job gives every developer a top-notch box with two monitors, I was in heaven. Unfortunately, it can't stop there. I also need development servers with disk space and memory to test large data sets BEFORE they go into production.

Setting expectations is the best way to manage over-optimization. Don't say "I need a program to do this"; state "I need a program to do this work in this time frame". It is silly to make a daily batch program that takes 2 minutes run 25% faster. But it's not silly to make a web page respond in under 2 secs, or a 4-hour batch job run in 3, *if* it is needed. But without the expectation, there is no starting or stopping point. Most developers will state "it's done" when the right answer comes out the other end, while a few may continue to tune it until it's dead.
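
In the same spirit as glancing at sar or Task Manager, the first-pass check can even be scripted. A rough sketch using the third-party psutil package (pip install psutil); the iowait field is Linux-only:

    # Rough bottleneck check: CPU-bound, I/O-bound, or neither?
    import psutil

    cpu = psutil.cpu_times_percent(interval=1.0)   # sample for one second
    iowait = getattr(cpu, "iowait", 0.0)           # field exists on Linux only

    if cpu.user + cpu.system > 80:
        print("Mostly CPU-bound: faster CPUs may actually help.")
    elif iowait > 30:
        print("Mostly I/O-bound: buy faster disks/network, not CPUs.")
    else:
        print(f"No obvious saturation (idle {cpu.idle:.0f}%): profile the app.")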

Re:What a crock... (1)

StrawberryFrog (67065) | more than 4 years ago | (#26184229)

You concentrate on CPU. Many web apps, including probably the one that I am looking at now (stats from the live system are still pending...), could go faster with more and better caching, i.e. more memory on the web or database tier. That's hardware too.

Re:What a crock... (0)

Anonymous Coward | more than 4 years ago | (#26184233)

Too bad you ruined your comment with that asinine sig.

Re:What a crock... (1)

mjensen (118105) | more than 4 years ago | (#26184285)

If bogged down by CPU, get more CPU.
If bogged down by network, get more bandwidth.
If bogged down by disk access, get better disk access.

It will take people to tell you which one you need....

Get a rope (5, Interesting)

Anonymous Coward | more than 4 years ago | (#26184067)

I almost feel an order of magnitude more stupid for having read that article. Throwing more hardware at a problem definitely makes sense for a small performance issue, but that is rarely the case. The whole idea makes me sick as a developer. This reminds me of the attitude of many developers of a certain web framework out there. Instead of fixing real problems, they cover up fatal flaws in their architecture with a hardware band-aid. There's no denying it can work sometimes, but at quite a high cost, and it's completely inappropriate for some systems. Not everyone is building a stupid to-do-list-with-a-snappy-name application.

Consider that many performance problems, plotted on a graph, have an upper limit. At some point throwing more hardware at the problem is going to do absolutely nothing. Further, the long-term benefit of hardware is far less than the potential future contributions of a highly paid, skilled programmer.

Another issue is that there are plenty of performance problems I have seen that cannot be scaled away just by adding more hardware. A classic example is some RDBMS packages with certain applications. Often databases can be scaled vertically (limited by RAM and I/O performance), but not horizontally, because of problems with stale data, replication, application design, etc. A programmer can fix these issues so that you can then add more hardware, but it is far more valuable in the long term to have someone who enables you to grow this way properly.

Actually fixing an application is a novel idea, don't you think? If my air conditioning unit is sometimes not working, I don't go and install two air conditioning units. I either fix the existing one or rip it out and replace it.

Further, there are plenty of performance problems that can never be solved with hardware. Tight looping is one that I often see. It does not matter what you throw at it, the system will be eaten. Another example is a garbage collection issue. Adding more hardware may help, but typically delays the inevitable. Scaling horizontally in this case would do next to nothing because if every user hits this same problem, you have not exactly bought more time (therefore you must go vertically as well, only really delaying the problem).

The mentality of this article may be innocent in some ways, but it reminds me of the notion that IT people are resources and not actual humans. Creativity, future productivity, problem-solving skills, etc. are far more valuable to any decent company than a bunch of hardware that is worthless in a few months and just hides piss-poor work by the existing employees.

This feels like a return to the .com bubble and F'd Company. I am sure plenty of companies following this advice can look forward to articles about their own failures. If someone proposes adding hardware for a sane reason, say to accommodate a few thousand more visitors with some more load-balanced servers, by all means do so. If your application just sucks and you need to add more servers to cover up mistakes, it is time to look elsewhere, because your company is a WTF.

Wait, what? (4, Interesting)

zippthorne (748122) | more than 4 years ago | (#26184071)

Surely that might work for a one-off, but if you're selling millions or even thousands of copies of your software, even a $100 increase in hardware requirements costs the economy millions. Just because it doesn't cost YOU millions doesn't mean you don't see the cost.

If your customers are spending millions on hardware, that money is going to the hardware vendors, not to you. And more importantly, that money represents wasted effort. Effort that could otherwise be used to increase real wealth, thus making the dollars you do earn more valuable.

So i guess the lesson is, If you're CERN, throw hardware at it. If you're Adobe, get a lot of good programmers/architects.

Re:Wait, what? (1)

Marcos Eliziario (969923) | more than 4 years ago | (#26184307)

And if you are microsoft, make your users think it's normal to throw hardware at it. After all, it's their money, not yours.

energy consumption increases? (2, Insightful)

itzdandy (183397) | more than 4 years ago | (#26184077)

I think you need to complicate this logic a bit by taking into account the added electricity required to power the extra servers, run the servers at a higher load, or run the clients at a higher load, as well as the increase in air conditioning cost.

Also, time is money. If a program takes more time, there is more time for users to be idle, which also has a cost.

Best practice? Program as efficiently as possible. Programming expenses are only paid once, while the power bill lasts forever.

Maybe its simpler than that..... (2, Insightful)

3seas (184403) | more than 4 years ago | (#26184097)

... throw the money at genuine software engineering (not pseudo-engineering) so that we have much better tools to program with.

Depends on type of problem (2, Informative)

foobar123 (1174837) | more than 4 years ago | (#26184105)

A problem that has a nonlinear impact on performance cannot be solved by adding two more servers...
The simplest example is an index in a database. Before adding the index, a query takes 2 days to execute; after adding it, the query executes in 100 milliseconds. How can you solve that by adding more hardware? You also usually cannot solve I/O issues between app and DB servers by "just adding two more servers"...
Not to mention that when it comes to scaling a DB, you really cannot just depend on "adding another server to the cluster"...
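
The index example is easy to demonstrate end-to-end with Python's built-in sqlite3; the schema and numbers are made up for illustration:

    # An index turns a full-table scan into a B-tree search; extra
    # servers don't give the same asymptotic win. Schema is made up.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER)")
    db.executemany("INSERT INTO orders VALUES (?, ?)",
                   ((i, i % 1000) for i in range(200_000)))

    query = "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
    print(db.execute(query).fetchone())     # before: SCAN of the whole table

    db.execute("CREATE INDEX idx_customer ON orders (customer_id)")
    print(db.execute(query).fetchone())     # after: SEARCH using the index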

Hardware doesn't just configure itself (2, Insightful)

olyar (591892) | more than 4 years ago | (#26184111)

One thing not in the equation here: hardware is cheap, but having that hardware managed isn't so cheap. When you scale from a couple of servers to a big bank of servers, you have to pick up sysadmins to manage all of those boxen.

Less expensive than a programmer (sometimes), but certainly not free.

People Are Expensive (2, Informative)

Greyfox (87712) | more than 4 years ago | (#26184113)

And using them inefficiently is also expensive. If you're looking for a quick fix perhaps you should first consider your company's processes and the tools you use to support those processes. If you can hire a programmer or two to write and maintain tools that allow you to eliminate some of the meetings you have to have every week because no one knows what's going on, you'll find it doesn't take very long for him to pay for himself.

Yes, but algorithm complexity makes that moot (0)

Anonymous Coward | more than 4 years ago | (#26184115)

It is true that programmers are expensive and hardware is cheap. But, trying to keep the explanation simple, say you have an algorithm that operates on 100 objects and requires 100^2 = 10,000 computing units to work. Now say you want your app to handle 10% more objects. That's 110, so you need 110^2 = 12,100 computing units: a 21% increase in hardware for a 10% increase in capacity. And 100^2 is very good. What if you had stupid-ass programmers that wrote something that's N^4? 100^4 = 100,000,000 computing units, or 10,000 times the resources to handle the same 100 objects. And to increase capacity a mere 10%, you would need 110^4 = 146,410,000 computing units.

That's a whopping 46% increase in computing resources for a 10% gain in capacity. Maybe you should just pay a programmer to straighten out that spaghetti code instead? Hardware is cheap, but the best investment a development shop can make is employees dedicated to performance optimization.
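
The arithmetic checks out; here it is as a short Python sketch:

    # Extra hardware needed for 10% more capacity, by complexity class.
    for exponent in (2, 4):
        extra = (110 ** exponent) / (100 ** exponent) - 1
        print(f"O(N^{exponent}): {extra:.0%} more computing resources")
    # O(N^2): 21% more computing resources
    # O(N^4): 46% more computing resources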

Depends, as noted in the article (1)

Velox_SwiftFox (57902) | more than 4 years ago | (#26184127)

But not with enough emphasis. To the suggested procedure:
      1. Throw cheap, faster hardware at the performance problem.
      2. If the application now meets your performance goals, stop.
      3. Benchmark your code to identify specifically where the performance problems are.
      4. Analyze and optimize the areas that you identified in the previous step.
      5. If the application now meets your performance goals, stop.
      6. Go to step 1.

I would add:
      0. If the performance is lower than you can possibly fix with faster hardware, skip to step 3 and find out the real problem first.

Hardware upgrades may be cheaper, but software design problems can make a program orders of magnitude slower. I've seen 6 large, sorted database queries followed by a small one that was the only one that really had any reason to be sorted, for example. Hardware can't perform miracles.

Absolutely True (4, Interesting)

titzandkunt (623280) | more than 4 years ago | (#26184135)

When I was young, eager and naive I worked at a place that was doing some pretty heavyweight simulations which took a good three-four days on a (I think) quad-processor Sun box.

It was quite a big site with a relatively high turnover of decent hardware. Next to the IT support team's area was a room about 6 yards by 10 yards, almost full to the ceiling with older monitors, printers, and a shitload of commodity PCs. And I'd just started reading about mainstream acceptance of Linux clustering for parallelizable apps.

Cue the lightbulb winking into life above my head!

I approached my boss, with the idea to get those old boxes working again as a cluster and speed things up for the modelling team. He was quite interested and said he'd look into it. He fired up Excel and started plugging in some estimates...

Later that day I saw him and asked him what he thought. He shook his head. "It's a non-starter," he said. Basically, if the effort involved in getting a cluster up and working - including porting the apps - was more than about four man-weeks, it was cheaper and a lot safer just to dial up the Sun rep, invoke our massive account (and commensurate discount) with them, and buy a beefier model from the range. And the existing code would run just fine with no modifications.

A useful lesson for me in innovation risk and cost.

I'm not sure I understand the article (1)

Kupfernigk (1190345) | more than 4 years ago | (#26184177)

What point is he trying to make? Programmers do not spend 100% of their time on optimisation. They have to design front ends, create business logic, debug, document, and optimise when necessary. Let's say the average programmer spends 10% of his or her time on optimisation. That's maybe $8000 per year per programmer.

Now assume the application has a low customer count - say 10 customers per programmer for a server application - and each customer instance needs 2 boxes. So the programmer optimisation cost is currently around $400 per server per annum.

The root flaw in the article is an assumption that each application has only one customer. That may be true of some in-house projects, but in these cases the main value of programmers tends to be their specialist knowledge of the company and the application. In these cases too, the process of updating and replacing servers taking into account all the internal constraints (likely to be limited by lack of resources) is probably many times the hardware cost.
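
The per-server figure above is easy to reproduce; a sketch using the comment's own assumptions (the $80k salary is implied by the "$8000 per year" 10% share):

    # Reproducing the comment's per-server optimisation cost.
    salary = 80_000                  # assumed: 10% of this is the $8000/year
    optimisation_share = 0.10
    customers_per_programmer = 10
    boxes_per_customer = 2

    cost_per_server = (salary * optimisation_share) / (
        customers_per_programmer * boxes_per_customer)
    print(f"${cost_per_server:.0f} per server per annum")    # -> $400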

Some reasonable criteria... (1)

digsbo (1292334) | more than 4 years ago | (#26184193)

1) When the pending performance issue can be solved for the next year's worth of growth with an affordable amount of hardware, buy hardware.

2) When the system is only deployed in one place (not a problem multiplied across 10s, 100s, or 1000s of sites), buy hardware.

3) When the hardware to be purchased is NOT expensive legacy hardware that makes it more expensive to fix the right way later (or makes finance think you don't know what you're doing saying you can replace a $1M AIX machine w/ $40K in commodity hardware, leaving them with a huge capital cost to depreciate).

4) When the marginal cost of hardware is less than the marginal cost of hiring a new programmer. This is true for most Sun machines and all x86.

I've been in the opposite of each of the above scenarios, where we used programming talent to solve the problems because the opposite of each criterion held.

Nothing new (2, Insightful)

fermion (181285) | more than 4 years ago | (#26184221)

This has been the trend for a very long time. Once, a long time ago, people wrote code in assembly. Even not so long ago, say 20 years, there were enough applications where it still made sense to do assembly simply because it was the only way for affordable hardware to perform well.

Ten years ago many web servers were hand-coded in relatively low-level compiled languages. Even though hardware had become cheaper, and the day of the RAID rack of PCs was coming upon us, to get real performance one had to have software developers, not just web developers.

Of course cheap, powerful hardware has made that all a thing of the past. There is no reason for an average software developer to have anything but a passing familiarity with assembly. There is no reason for a web developer to know anything other than interpreted scripting languages. Hardware is, and always has been, cheaper than people. That is why robots build cars. That is why IBM sold a buttload of typewriters. That is why the jacquard loom was such a kick-butt piece of machinery.

The only question is how much cheaper hardware is, and when it makes sense to replace a human with a machine, or maybe a piece of software. This is not always clear. There are still relatively undeveloped places in the world where it is cheaper to pay someone to wash your clothes by hand than to buy and maintain a washing machine.

Well... (1)

Anrego (830717) | more than 4 years ago | (#26184235)

The main goal of writing solid code isn't to lower resource requirements.. it's to increase maintainability.

Sure, you can hack out shitty code and make up for it with more hardware to handle the memory leaks and bloat... and probably save some money in the short term. In the long term, though, when you need to add something to your mess of spaghetti code, you're going to spend much more programmer time... which is what you were trying to save from the get-go.

I'm a firm believer that a little extra time and money spent on writing good, clean code will pay off in the long run.

theory vs practice (0)

Anonymous Coward | more than 4 years ago | (#26184237)

This had to be written by someone not working in a real business.

As mentioned by others, CPU speed is rarely the bottleneck, and even when it is, optimization isn't always about page response or even speed.

Much of my optimization is to reduce transaction fees and improve per-user speed. More machines won't improve I/O-bound per-user speed.

I wish you could just throw hardware at it. But even if you do, software has to be changed to accommodate various levels of scaling; you often can't just plug and go. You're going to increase your infrastructure engineering staff and costs; it's got to come from somewhere.
 

Java, PHP et al (1)

Midnight Thunder (17205) | more than 4 years ago | (#26184243)

This is why interpreted or semi-interpreted programming languages make so much sense, especially for stuff such as web applications. Here you can scale to whatever the best hardware is, even changing CPU, without worrying that you will need to recode or recompile. The same can't generally be said for languages such as C++. It's ironic that you would have to choose an approach that is probably less optimal to get cheaper long-term improvements in performance.

What about desktops? (2, Insightful)

br00tus (528477) | more than 4 years ago | (#26184249)

This uses servers as an example, but what about desktops? We use Windows desktops where I am, and having AIM and Outlook open all the time is more or less mandatory for me. Plus there are these virus-scanning programs always running which eat up a chunk of resources. I open up a web browser and one or two more things and stuff starts paging out to disk. I'm a techie and sometimes need a lot of stuff open.

We have a call center on our floor, where the people make less than one third what I do, and who don't need as many windows open, yet they get the exact same desktop I do. My time is three times more valuable than theirs, yet the company gives me the same old, low-end desktop they get, resulting in more of my productive time being lost - those seconds I wait when I switch from an ssh client to Outlook and wait for Outlook to be usable add up to minutes and hours eventually. Giving everyone the same desktop makes no sense (I should note I eventually snagged more RAM, but the point is about general company policy more than my initial problems).

Is this news? (1)

AvitarX (172628) | more than 4 years ago | (#26184257)

Practical C Programming notes in its "optimization" chapter that hardware is often the most cost-effective optimization. It even gives an example where a couple of thousand dollars spent on a new machine cut execution time in half instantly, with no effort. The programmers then did some optimizations, but only the easy stuff, nothing hardcore.

The book is old now, and the anecdote older.

A couple of things that were ignored (2, Insightful)

Todd Knarr (15451) | more than 4 years ago | (#26184261)

The first is that the hardware cost isn't the only cost involved. There's also the costs of running and maintaining that hardware. Many performance problems can't be solved by throwing just a single bigger machine at the problem, and every one of the multiple machines means more complexity in the system, another piece that can fail. And it introduces more interactions that can cause failures. An application may be perfectly stable using a single database server, but throw a cluster of 3 database servers into the mix and a problem with the load-balancing between the DB servers can create failures where none existed before. Those sorts of failures can't be addressed by throwing more hardware at the problem, they need code written to stabilize the software. And that sort of code requires the kind of programmer that you don't get cheap right out of school. So now you're spending money on hardware and you're still having to hire those pesky expensive programmers you were trying to avoid hiring. And your customers are looking at the failure rates and deciding that maybe they'd like to go with your competitor who's more expensive but at least delivers what he promises.

Second is that, even if the problem's one that can be solved just by adding more hardware, inexperienced programmers often produce code whose performance profile isn't linear but exponential. That is, doubling the load doesn't require twice the hardware to maintain performance; it requires an order of magnitude more hardware. It doesn't take long for the hardware spending to become completely unbearable, and you'll again be caught having to spend tons of cash on enough hardware to limp along, while spending tons of money on really expensive programmers to try to get the software to where its performance curve is supportable, and watching your customers bail to someone offering better than same-day service on transactions.

Go ask Google. They're the poster boy for throwing hardware at the problem. Ask them what it took on the programming-expertise side to create software that would let them simply throw hardware at the problem.

Developer=Engineer (1)

DanMc (623041) | more than 4 years ago | (#26184265)

I really disagree with this article. There are very few real-world cases where a problem lands on a dev team's plate that could easily be solved by throwing hardware at it instead. In the cases where it is possible, it's usually an exotic hardware change, for example moving from local disks to a SAN or SSDs, which is hardly a simple swap. If you have an app that's CPU-bound and you go from a single 3 GHz core to a quad-core, any dev will tell you it won't ever get close to 4x faster unless it was very carefully developed for multiple cores in the first place.
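That intuition is just Amdahl's law. A quick back-of-the-envelope sketch in Python; the parallel fractions below are illustrative assumptions:

<ecode>
# Amdahl's law: the speedup from n cores when only a fraction p of the
# runtime is parallelizable. The p values below are illustrative.

def amdahl_speedup(p, n_cores):
    return 1.0 / ((1.0 - p) + p / n_cores)

for p in (0.5, 0.9, 0.99):
    print(f"parallel fraction {p:.0%}: 4 cores -> {amdahl_speedup(p, 4):.2f}x")
# 50% parallel: 1.60x. 90% parallel: 3.08x. Even at 99%, only 3.88x.
</ecode>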

Consider one of the most common cases: a web application that starts to bog down quickly once it reaches a certain threshold. Let's assume the problem isn't as simple as network bandwidth (I *wish* my customers' bandwidth doubled every year for the same price!). With 200 visitors everything is responsive, but at 300 it's twice as slow, and at 350 it's three times as slow. Go ahead and replace the servers with double the CPU and RAM and the next step up in disk tech (RAID 5 SCSI to RAID 10 SAS, for example) and see if you can suddenly support 400 users.

Part of a developer's skill set, and the reason they are expensive, is engineering and architecture. Consider the mechanical engineering world. Humans have been doing that for a lot longer than programming, so we have a better sense of when a solution is plain stupid. If a device is underperforming, can you skip talking to an engineer and fix it by swapping out the engine for one twice as big? Have you ever talked to an engineer who said, "yeah, just double the power and that won't affect anything else"? Maybe you've got a steel rod that keeps breaking, and the actual solution is a slightly bigger rod made of titanium. But if that actually works, it just means the engineering wasn't done right in the first place. This is how engineering disasters happen.

And finally, good programmers are not dumb. If a problem lands on their desk that can be fixed with a few thousand dollars' worth of hardware, they're going to consider that option long before they've billed that much in time. Chances are they've got better things to do, because their skills can be better used elsewhere.

All depends... (1)

lpfarris (774295) | more than 4 years ago | (#26184269)

It's always programmers who claim programmers are expensive and hardware is cheap. People who make this claim rarely take the other costs into consideration: support, power, rack space, and so on. I would just love to see someone do a study tracking total project costs against the money spent up front on planning, design, and development.

There's more to hw cost than just the initial price (0)

Anonymous Coward | more than 4 years ago | (#26184281)

There's more to hw cost than just the initial price. There's also the software licensing, the people supporting the addition of hardware to your infrastructure, and the cost of moving applications from one platform to another. If virtualization is used effectively, you don't necessarily need to buy anything new to get more CPU, RAM, whatever. If small servers are used, the ability to place multiple servers on shared hardware is limited and overall company costs will probably be higher.

In general, newer hardware is significantly faster and cheaper to run than any two-year-old hardware. It also tends to reduce license costs, purely because fewer CPUs are needed to perform the same amount of work.

As an example, I was forced to redeploy an E6800 with 8 CPUs when all I needed was a V240 with 2 faster CPUs. I had to upgrade the E6800 with RAM and HBAs that cost more than 2 V240s would have, including double the RAM and the correct HBAs.

Why? I don't know the answer for certain, but I believe the other 16 CPUs were planned for other projects. Personally, I think the company should have traded in that class of server for whatever discount it could have gotten. This happened about 18 months ago. This company had more than 40k servers, so adding more without removing some is a really big problem. Most of their data centers were "full", whether by power or by space capacity.

Software developers don't seem to understand the complexities of running data centers. Costs at larger volumes need to be considered whenever making policy decisions about new equipment. BTW, I did software development for over 10 years.

This only works with LAMP/FOSS (2, Funny)

Alpha830RulZ (939527) | more than 4 years ago | (#26184289)

If your performance problem is in an Oracle or SQL Server database, throwing more hardware at the problem probably has a license fee attached to it, and that can easily be measured in multiple developer salaries. This also causes people to scale using bigger boxes, rather than more boxes, and that gets you out of the range of commodity hardware and into the land of $$$$$.

Which is why I don't care to deliver on Oracle, but my employer hasn't figured out that Postgres and MySQL will work for a lot of problems, and is still fellating the Oracle and IBM reps.
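To put rough numbers on the license-fee point, a back-of-the-envelope sketch; the price and core factor here are assumed, illustrative figures, not actual vendor quotes:

<ecode>
# Rough per-core license math. PRICE and FACTOR are ASSUMED illustrative
# numbers, not real quotes; plug in whatever your vendor actually charges.
PRICE_PER_PROCESSOR_LICENSE = 47_500   # hypothetical list price, USD
CORE_LICENSE_FACTOR = 0.5              # hypothetical per-core multiplier

def license_cost(cores):
    return cores * CORE_LICENSE_FACTOR * PRICE_PER_PROCESSOR_LICENSE

for cores in (8, 16, 32):
    print(f"{cores} cores: ${license_cost(cores):,.0f}")
# Doubling the box doubles the license bill; at these (made-up) rates,
# 32 cores of commercial database costs several developer-years.
</ecode>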

Cheap programmers = cheap results (0)

Anonymous Coward | more than 4 years ago | (#26184293)

Sure, we can easily throw 3 or 4 systems at a problem that could be solved by 1 with good programmers.

But what do you do when the code is unmaintainable and stuck together with bubble gum and duct tape? That too is the result of that way of thinking. You're not paying good programmers to make things fit on less hardware. You're paying good programmers to make good software, and a side effect of that is that it runs efficiently.

are you kidding? (1)

speedtux (1307149) | more than 4 years ago | (#26184355)

That's a tradeoff that's at the start of every software development project: you pick the tools that allow you to get the job done with overall minimum cost. That's why so many projects are written in Perl, Python, etc.

Unfortunately, many people make the wrong tradeoffs and then don't even follow through. In particular, they pick languages that in theory permit high performance implementations (e.g., C, C++), but then actually fail to write high performance code. A lot of desktop applications written in C and C++ fall into this category.
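The same failure mode shows up in any language; here's a small Python sketch (sizes are arbitrary) where the language is held constant and only the idiom changes:

<ecode>
# Same language, same result, very different performance: the difference
# is purely how the code was written. Sizes below are arbitrary.
import time

def slow_lookup(items, queries):
    return sum(1 for q in queries if q in items)      # list scan: O(n) per query

def fast_lookup(items, queries):
    index = set(items)                                # one-time O(n) build
    return sum(1 for q in queries if q in index)      # O(1) per query

items = list(range(10_000))
queries = list(range(0, 20_000, 2))

for fn in (slow_lookup, fast_lookup):
    t0 = time.perf_counter()
    fn(items, queries)
    print(fn.__name__, f"{time.perf_counter() - t0:.3f}s")
</ecode>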

Typical Windows-era response ... (1)

garry_g (106621) | more than 4 years ago | (#26184361)

The practice of solving performance problems by adding more performance to the hardware is typical of the MS Windows generation ... instead of clean, optimized programming, relying on more CPU (or whatever) power to solve the problem is a very short-sighted solution. Sure, you'll get your system performance back up to par, but what happens when you run out of headroom yet again?

Instead, check where the actual bottlenecks are ...
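In Python, for instance, that check is nearly free; a sketch, where my_app.main is a hypothetical stand-in for your own entry point:

<ecode>
# Profile before optimizing. `my_app` and `main` are hypothetical
# placeholders for whatever your real entry point is.
import cProfile
import pstats

import my_app

cProfile.run("my_app.main()", "profile.out")
stats = pstats.Stats("profile.out")
stats.sort_stats("cumulative").print_stats(10)   # show the top 10 offenders
</ecode>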

I believe every person who wants to get into programming should get some hands-on training with either ancient systems like the Atari or C64, or embedded systems like AVR or PIC. All of them have limited resources, but with decent programming style they are perfectly capable of getting the job done!

Dinosaur Lesson (1)

anorlunda (311253) | more than 4 years ago | (#26184371)

Yes, we learned that lesson generations ago.

When I started writing code, memory cost $1 per bit and programmers cost about $4 per hour. If you were writing a program that would be installed on many machines, you could afford a great many man-hours to save a bit or a byte here and there. It was precisely that misguided sense of economy that created spaghetti code and the worst programming practices ever.

I was personally responsible for a programming horror as a green youngster. I wrote a load-flow program for real-time applications. I was given a budget of only 100K bytes of disk (drum) storage for code plus data plus saved results. To fit in that constraint I had to invent a 16-bit floating-point number format. That made my application unusable and, in the long run, useless.
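For the curious, here's a toy Python reconstruction of what such a format might look like; the 1/6/9 bit split is my guess for illustration, not the poster's actual layout:

<ecode>
# Toy 16-bit float: 1 sign bit, 6 exponent bits, 9 mantissa bits.
# The layout is a guessed illustration (IEEE 754 binary16 uses 1/5/10).
import math

EXP_BITS, MAN_BITS = 6, 9
BIAS = (1 << (EXP_BITS - 1)) - 1                 # 31

def encode16(x):
    if x == 0:
        return 0
    sign = 1 if x < 0 else 0
    m, e = math.frexp(abs(x))                    # abs(x) = m * 2**e, 0.5 <= m < 1
    man = min(round((m * 2 - 1) * (1 << MAN_BITS)), (1 << MAN_BITS) - 1)
    return (sign << 15) | ((e + BIAS - 1) << MAN_BITS) | man

def decode16(bits):
    if bits == 0:
        return 0.0
    sign = -1.0 if bits >> 15 else 1.0
    e = ((bits >> MAN_BITS) & ((1 << EXP_BITS) - 1)) - BIAS + 1
    m = 0.5 * (1 + (bits & ((1 << MAN_BITS) - 1)) / (1 << MAN_BITS))
    return sign * m * 2.0 ** e

print(decode16(encode16(123.456)))               # ~123.5: the precision is gone
</ecode>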

After the end of the project, I belatedly learned that all the other programmers routinely exceeded their memory constraints by 300-400%, and by doing so they managed to deliver something useful.

That's why JavaScript became popular. (0)

Anonymous Coward | more than 4 years ago | (#26184413)

I think this is one of the main reasons (AJAX etc. aside) why JavaScript became popular and accepted lately. Why waste YOUR server resources generating the structure of the page? Just vomit the raw data with adequate classes/ids and let THEIR machine structure it client-side. Makes perfect sense to me.
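A minimal sketch of the two approaches side by side, using Flask purely as a familiar example; the routes and data are hypothetical:

<ecode>
# Ship data, not markup. Flask is used only as a familiar example;
# the routes and the ITEMS data are hypothetical.
from flask import Flask, jsonify, render_template

app = Flask(__name__)
ITEMS = [{"id": i, "name": f"item {i}"} for i in range(1000)]

@app.route("/items.html")
def items_html():
    # Server pays to template 1000 rows on every single request.
    return render_template("items.html", items=ITEMS)

@app.route("/api/items")
def items_json():
    # Server just serializes; the visitor's browser builds the DOM.
    return jsonify(items=ITEMS)
</ecode>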

hardware costs have personnel costs too (1)

misterjava66 (1265146) | more than 4 years ago | (#26184415)

Although I'm an advocate of buying good hardware and not getting silly about the hardware-to-developer ratio, you must also note that more complex hardware requires more skilled and talented operators, more numerous hardware does too, and either one runs up the electric bill. Most of the cost of a server is operator cost, NOT purchase. So be careful how you run the numbers. With that said, custom software development should be a choice of last resort. I am one of those guys, and if I'm bringing less than a factor of 10 improvement in speed to a redevelopment project, it probably would have been cheaper to just buy more hardware.

Stupid question. (1)

drolli (522659) | more than 4 years ago | (#26184425)

It depends on how many instances of the code will be run. In the innermost loop of a PDE integrator that will be run in a Monte Carlo simulation on 1000 cores, even a small time saving can be valuable and pay off immediately.
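A tiny illustration of the kind of inner-loop saving that multiplies across a thousand cores; NumPy and the 1-D heat-equation step here are my example, not the poster's code:

<ecode>
# When the inner loop runs billions of times, constant factors matter.
# Illustrative: a naive explicit Euler step for a 1-D heat equation
# versus a vectorized one. Same numerics, very different cost per call.
import numpy as np

def euler_step_naive(u, dt, k):
    out = u.copy()
    for i in range(1, len(u) - 1):               # per-element interpreter overhead
        out[i] = u[i] + dt * k * (u[i-1] - 2*u[i] + u[i+1])
    return out

def euler_step_vectorized(u, dt, k):
    out = u.copy()
    out[1:-1] = u[1:-1] + dt * k * (u[:-2] - 2*u[1:-1] + u[2:])  # one C-level pass
    return out

u = np.random.rand(100_000)
assert np.allclose(euler_step_naive(u, 1e-4, 0.1),
                   euler_step_vectorized(u, 1e-4, 0.1))
</ecode>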

If you are running on a system with intrinsic limitations (e.g., an embedded system's battery), you may also find it useful *NOT* to run into the platform limitation without need (changing the platform may be expensive).

However, if it's about aggregating the database in the evening, then it may well not matter.

More factors (2, Insightful)

Lazy Jones (8403) | more than 4 years ago | (#26184429)

Generally, investing in hardware will mean more people with programmer-like salaries on the payroll (designing the architecture and the maintenance tools, installing the software and hardware, keeping it all running ...). A lot of these things can be automated or done with little effort, but it takes someone as competent (and expensive) as a good programmer to get it right.

In the long run, your best investment is still the good programmer, as long as you can keep him happy and productive, because then you can grow more/faster (by buying hardware as well).

time is the key (1)

decairn (669433) | more than 4 years ago | (#26184433)

I've done this many times. It boils down to simple stuff, and it's all a trade-off: time vs. money vs. quality. Time is usually the key where I work, as the environment is a highly changing one. Deploying new hardware is generally less risky and faster than upgrading software, and it uses different people from the (always too busy) software teams. So it buys you time, which is often fine for the business folks who don't care how inefficient that algorithm or SQL query is as long as the business needs are met.

However, there are only so many times you can band-aid this stuff before hardware can no longer solve it. When you do new software, it's a full development cycle. If it requires a major rethink of the design, you can bet new issues will be raised in production when it goes live; the business does care about that, and you have to do some extra preparation to deal with these new risks as the new software is rolled out.

Having gone all through this, the goal-posts change again. New business activity drives the software in ways you didn't imagine or didn't get the chance to implement ahead of time, and you're back to square one on hardware versus software.

don't think that way (2, Informative)

scientus (1357317) | more than 4 years ago | (#26184451)

Programmers and efficiency are not unrelated free-market resources. Good coders write more efficient code regardless of whether that is a primary goal. While premature optimization is the root of all evil, claiming there is no need to optimize at all is equally a fallacy.
