
Software Defects - Do Late Bugs Really Cost More?

Cliff posted about 11 years ago | from the effects-of-the-software-lifecycle dept.

Businesses 125

ecklesweb asks: "Do software defects found in later phases of the software development cycle REALLY cost THAT much more than defects found in earlier phases? Does anyone have any empirical data (not anecdotal) to suggest that this logarithmically increasing cost idea is really true? That is the question I use whenever I want to tick off a trainer. Seriously, though, it seems an important question given the way this 'concept' (or is it a myth?) drives the software development process."

"If you're a software engineer, one of the concepts you've probably had driven into your head by the corporate trainers is that software defects cost logarithmically more to fix the later they are found in the software development life cycle (SDLC).

For example, if a defect is found in the requirements phase, it may cost $1 to fix. It is proffered that the same defect will cost $10 if found in design, $100 during coding, $1000 during testing.
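The multiplier model quoted above (often loosely called "logarithmic", though the cost itself grows geometrically) is easy to sketch. This is illustrative only: the factor of 10 per phase is the very claim under debate, not measured data, and Boehm's later work suggests more like 5:1 for small systems.

```python
# Phases of the SDLC, in the order defects can survive through them.
PHASES = ["requirements", "design", "coding", "testing", "field"]

def fix_cost(base_cost, found_in, factor=10):
    """Cost to fix a defect found in a given phase, assuming the cost
    grows by `factor` for each phase the defect survives (an assumption,
    not a law)."""
    return base_cost * factor ** PHASES.index(found_in)

for phase in PHASES:
    print(f"{phase:>12}: ${fix_cost(1, phase):,}")
```

With `factor=10` this reproduces the $1/$10/$100/$1000 sequence in the story; with `factor=5` it gives the far flatter curve Boehm reported in 2001 for small non-critical systems.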

All of this, to my knowledge, started with papers by Barry Boehm[1]. In these papers, Mr. Boehm indicates that defects found 'in the field' cost 50-200 times as much to correct as those corrected earlier.

That was 15 years ago, and as recently as 2001 Barry Boehm indicates that, at least for small non-critical systems, the ratio is more like 5:1 than 100:1[2].

[1] - Boehm, Barry W. and Philip N. Papaccio. 'Understanding and Controlling Software Costs,' IEEE Transactions on Software Engineering, v. 14, no. 10, October 1988, pp. 1462-1477

[2] - Boehm, Barry and Victor R. Basili. 'Software Defect Reduction Top 10 List,' Computer, v. 34, no. 1, January 2001, pp. 135-137."


Things they don't tell you ... (4, Interesting)

Pogue Mahone (265053) | about 11 years ago | (#7268617)

The bugs might be cheaper to fix, but they cost a lot more to find.

At any stage, you can only find bugs that are introduced at or before that stage. So while fixing a requirements bug in the coding phase might be more expensive than fixing it during the requirements phase, fixing a coding bug during the requirements phase is a tricky operation that I'll leave as an exercise for the reader :-)

Of course, if you omit some of these phases completely, you won't introduce any bugs during them. That's why the JFDI(*) methodology is so popular.

(*)Just F*cking Do It

Re:Things they don't tell you ... (1)

Pogue Mahone (265053) | about 11 years ago | (#7268642)

Replying to self ... sad.


Oh, and I saw JFDI originally in an article in Computer Weekly, I think.

Re:Things they don't tell you ... (2, Insightful)

Koos Baster (625091) | about 11 years ago | (#7268655)

IMHO the JFDI methodology probably doesn't work very well for large projects (50 people * 2 years).

But then again, what methodology does work for those cases?

Real computer scientists don't program in assembler. They don't write in anything less portable than a number two pencil.

Re:Things they don't tell you ... (1)

weicco (645927) | about 11 years ago | (#7269519)

Well, I've just f*cking done it a couple of times. There was a big discussion at my previous firm that "something needs to be done". I wrote a cool library in two weeks, went to my boss and said that "something has been done". He almost jumped through the roof... I got a nice little raise in salary shortly after that and everybody was happy :)

Oh, the whole project was 50 people * 3 years.

Re:Things they don't tell you ... (0)

Anonymous Coward | about 11 years ago | (#7279498)

There are way too few excellent coders who can pull elegant and efficient code out of their hat. Software engineering has to deal with that, and therefore doesn't approve of JFDI. Relying on flashes of genius isn't acceptable planning. Besides, these "geniuses" tend to "lose" their coworkers somewhere down the road, which can cause the few excellent programmers who can keep up to become overworked and drop out due to stress.

Re:Things they don't tell you ... (1)

oo_waratah (699830) | about 11 years ago | (#7280284)

The reality to this is that you already had a concrete design. Not on paper but in your mind. You did not have to describe it to anyone to get it built correctly. So this is not strictly JFDI.

Get a phone call at 17:00 from a section that you have never heard of and get a project dumped in your lap to be ready by 10:00am, now that is a JFDI.

Re:Things they don't tell you ... (0)

Anonymous Coward | about 11 years ago | (#7271701)

Replying to self ... sad.
Replying to self only to correct spelling ... pathetic.
Abusing karma bonus to do so ... priceless.

Re:Things they don't tell you ... (3, Funny)

penguin7of9 (697383) | about 11 years ago | (#7277221)

The bugs might be cheaper to fix, but they cost a lot more to find.

Not if you are a company like Microsoft or Sun and you let your customers do your bug hunting for you, for free.

Of course, that's also how bug hunting works for open source software, but with OSS, at least I don't pay anybody for the privilege of finding their bugs for them.

Re:Things they don't tell you ... (2, Funny)

cookiepus (154655) | about 11 years ago | (#7277713)

(*)Just F*cking Do It

it's called 'Incremental Development' :)

Re:Things they don't tell you ... (2, Insightful)

Anonymous Coward | about 11 years ago | (#7279476)

That's beside the point. The notion that defects are more expensive if found later is about mistakes of a stage which are found in subsequent stages. In essence it means that if you're at the requirements stage, get it right, because missing or wrong requirements are expensive to fix later (mostly because you either end up with unmaintainable code or have to throw a lot of work away and redo it). You're not supposed to try and fix off-by-one loops at the requirements stage.

It usually takes deep insight into a problem to get the requirements and the design right. That's why many programmers know to write a program once to learn about the problem, throw it all away and use the insight to write a good implementation. The result of this is known as rapid prototyping and it does not deserve the bad image it has. That's the fault of non-programmers who don't understand that RP is part of the requirements/design stage, not implementation.

Yes they do. (4, Informative)

Karora (214807) | about 11 years ago | (#7268629)

There's plenty of proof out there. Even "ancient" but worthy texts like "The Mythical Man Month" discuss this one.

The size of the project and the nature of the bug really combine to drastically affect the outcome.

In my own case, we have just spent about a year tracking down a particular set of bugs (probably not all nailed yet) which showed up post-live. Pre-live, these would undoubtedly have been easier to fix; but something else we could have done at that point was improve our design, which would have nuked most of the bugs completely. Once we are in production, however, the forward/backward compatibility requirement ties one hand behind our backs, and redesigning the thing becomes much, much bigger.

But that's just anecdotal, of course.

Also, bugs take $$$, who should pay? and ethics? (5, Interesting)

Frobnicator (565869) | about 11 years ago | (#7269024)

Similar experience for me, too. It is anecdotal evidence and not proof of the costs (let us not confuse the two). Now, some questions to add to your observations: Should the company be liable for an engineer's errors (as is normally done in business)? Or should the individual or team be liable?

Most recently I've been tracking down an error in our system. After nearly a month of trying various things, I found the source of the error. In this case, two years ago the hardware engineer building the FPGA and DSP programs didn't bother to fix the [relatively simple] design problem. Rather than give all communications the same format, a few commands differ substantially from all others (different responses in certain circumstances, for example).

The problem made it into the PC software that interfaces with the board. The problem is documented in several [maybe 20?] bugs of the software that works between the PC and the external device. The problem is documented in at least 50 bugs in a port of that PC software. It has been in production for several years, and implemented by external companies (which I feel sorry for, due to the complexity of the communications bug).

Now we're working on a completely new FPGA/DSP board to replace the earlier board. Design changes prevent us from directly implementing the bug in the new design, although otherwise the communication protocols are the same. Implementing the same malformed communications will mean breaking the simple straightforward design and carefully implementing a set of 'design exceptions' (read: 'bugs').

It would have taken one engineer an hour or so to fix this thing when they first saw it. It would have taken both teams a few days to fix it when writing the PC-to-DSP interface (~1 FTE month). It would have taken a few weeks to fix it when writing the port, requiring changes to the PC software and the DSP (~1 FTE year). If we choose to fix the error now, it will probably result in 2+ FTE years of work just to fix everything, plus more time for regression testing every old piece of software for this one bug. If we choose to leave it in, we will devote at least that much time to evaluating, implementing, and testing the old errors. Not to mention the continued maintenance work when the eventual bugs are found in the new board.

Now we're faced with a tough financial decision: do we spend a month or more carefully re-creating and testing the 'design exceptions' (probably 3-5 FTE years in total), or do we do it 'the right way' and break both our own and our customers' software? (Again, several FTE years, but potentially losing the customers' faith.)

This particular bug could have been prevented by about $50 of work. It has now cost the company tens of thousands of dollars, and will probably cost a few hundred thousand before all is said and done.

Now, let's throw some financial ethics into the $50 --> $5,000 --> $50,000 --> $500,000+ problem: the engineer was in a hurry to fix the problem before a company-imposed deadline. Is that engineer responsible for the enormous financial cost? If so, how much? If not, why not? It can be argued that his negligence caused a half-million dollars in damages. It can be argued that the engineer was responsible for $50 but the team was responsible for allowing it to grow. It can be argued that this is a regular business cost due to the fallibility of engineers' designs.

This begs the question:

How responsible are any of us for the errors we introduce?


Re:Also, bugs take $$$, who should pay? and ethics (2)

Golthar (162696) | about 11 years ago | (#7269590)

Depends on the circumstances.

If the engineer was rushed through the design and nobody had the time to check his designs, the company gets what it deserves.

However, if the engineer skimped on his work (for whatever reason) and the team failed to check it, I think the team would share some of the responsibility.

If the engineer made the mistake and willfully left it in, he and the team should both be partially liable (he because he left an obvious flaw in, and his team for not catching it).

This is also why I normally have somebody (as in my boss) check my work and/or discuss things that might be flawed.

This way I can't be held fully responsible (usually I'm just responsible for fixing the problem).

I do notice the emphasis placed on automated testing and better designs around here, which makes a software engineer like me happy ;-)

Re:Also, bugs take $$$, who should pay? and ethics (1)

stevew (4845) | about 11 years ago | (#7270168)

I have some problems with the way you have reported this story (and maybe I'm taking it personally because I do hardware).

You say the protocol has exceptions instead of always being the same. Do you KNOW that the exceptions were put there to get around a bug? How do you KNOW that a fix for the bug existed - maybe that fix was the addition of the protocol exceptions, because for technical reasons there was no other solution available to the engineer. Do you KNOW that the hardware engineer saw the bug - or even defined it as a bug?

Let me give you an example of how the hardware guy might have been constrained. He might not have had enough time to fix the problem otherwise. He might have simply been out of room in the FPGA to implement the fix in another manner. A decision might have been made by management to fix the problem a certain way when presented with choices.

All of these are realities in the world of hardware.

My whole point is that there is almost more than one side to such a story!

As often as not, schedules & resources limit the types of fixes that can occur in a program as it nears production. They DO get more expensive to fix at this point, because it often means that the steps between design and production have to be repeated (like layout of the chip in hardware). As soon as you near product release, the decision to fix a bug becomes more a matter of "is it a show stopper or not?" Can we "program" around the bug instead of fixing the hardware?

I think the original question in this post should have clarified the space it was asking about. Maybe software production costs are lower than hardware (heck, I know they are). Making a mask set for a 0.13-micron chip costs perhaps a million dollars. You ARE going to think twice before you decide to make a hardware fix at that kind of cost.

Re:Also, bugs take $$$, who should pay? and ethics (1)

Frobnicator (565869) | about 11 years ago | (#7270893)

You say the protocol has exceptions instead of always being the same. Do you KNOW that the exceptions were put there to get around a bug? How do you KNOW that a fix for the bug existed - maybe that fix was the addition of the protocol exceptions, because for technical reasons there was no other solution available to the engineer. Do you KNOW that the hardware engineer saw the bug - or even defined it as a bug?
Yes, there are two sides, but in this case the other side does not fit any of your reasons. I know from working daily with these teams for years that the bug was caused by a lazy implementation by that one particular engineer, who has (in an odd form of revenge) been assigned to fix almost every bug related to the problem, until now. The 'exceptions' to the protocol were unnecessary, because the design included well-defined methods for adding commands and adding responses, which were not followed. Specifically, rather than add a command, he appended a value that indicates the 'extension', or in another case prepended undefined error codes, both because he didn't want to wait a few minutes for a recompile.

According to the engineer, the actual thinking for the two communications problems was: "Let's just make the header bigger by one byte for these 'extensions'; we'll go in and add new commands for them later", but he never considered that the next byte might be legitimate data in the shorter command, since it was meant to be temporary. The other was: "Adding a new response code will require a full rebuild, but this is just a small test for debugging. We'll just prepend a number to one of the other failure codes, just for testing." Both 'temporary' solutions were left in place and duplicated by a few interns as they added a few more 'extensions'; his practice of using specific return codes has evolved into a selection of 4 possible return styles. All this because he wanted to avoid a few minutes of compile time!

In this case, it is entirely contained within the flash memory shipped to customers, so we could easily fix it and declare all old versions deprecated -- but not until correcting every piece of software, which will take a lot of time.

For now it has evolved into something of a tribute to ad-hoc design: we have either [command][data][crc] or [command][ext][data][crc], where [ext][data] is occasionally valid [data] for the basic command, leading to ambiguity: is it the basic command with 1 or 2 as the first byte, or the extended command? For responses, we have a choice of [command][status] or [command][#][status] or [command][ext][status] or [command][ext][#][status], where [#] is the function-specific error code being returned. The latter is easier to check for than the former, but both are a continual source of flaws.
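The ambiguity described above can be sketched in a few lines. Everything here is hypothetical: the actual byte values, the extension markers, and the CRC handling are not given in the post; the point is only that the command byte alone can't tell you which layout you're holding, so a frame whose first data byte happens to equal a valid extension marker parses both ways.

```python
EXT_MARKERS = {0x01, 0x02}  # assumed extension-marker values, for illustration

def parse(frame):
    """Return every plausible reading of a frame as (command, ext, data).

    Layouts (reconstructed from the description, not a real spec):
      basic:    [command][data][crc]
      extended: [command][ext][data][crc]
    """
    command, rest = frame[0], frame[1:-1]  # last byte assumed to be the crc
    readings = [(command, None, rest)]                 # basic reading
    if rest and rest[0] in EXT_MARKERS:
        readings.append((command, rest[0], rest[1:]))  # extended reading
    return readings

# Data that begins with 0x01 is indistinguishable from an extension marker:
ambiguous = bytes([0x10, 0x01, 0xAA, 0x5C])
print(parse(ambiguous))  # two readings; nothing in the frame disambiguates them
```

A well-defined method for adding commands (as the comment says the design originally had) avoids this by never letting a marker byte collide with the legal data range.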


Re:Also, bugs take $$$, who should pay? and ethics (1)

mongbot (671347) | about 11 years ago | (#7280524)

This is hilarious. I mean, I hate recompiling myself, but that's just ridiculous.

But perhaps something as important as the command protocol should have been designed beforehand?

Re:Also, bugs take $$$, who should pay? and ethics (0)

Anonymous Coward | about 11 years ago | (#7271097)

You mean it raises the question.

Re:Also, bugs take $$$, who should pay? and ethics (1)

Frobnicator (565869) | about 11 years ago | (#7272243)

No, mr/ms Anonymous Coward, I mean 'begs the question'.

"Begging the question" aka "circular reasoning", in argument, means that you assume that a statement which depends on the conclusion is true, and you use it as proof of your argument.

My rhetorical argument was based entirely on the premise that software engineers must bear responsibility for the errors they introduce, and therefore they are at fault for the errors. The belief that software engineers are responsible implies that they are at fault; therefore the conclusion implies the premise. The logic is fallacious, and therefore requires additional verification that the premise is indeed true. If the premise can be shown to be true through other means, only then can it be validly used in circular reasoning, and even then it is only generally permitted for proofs by contradiction.


Re:Also, bugs take $$$, who should pay? and ethics (0)

Anonymous Coward | about 11 years ago | (#7273479)

Your response demonstrates that you know exactly what begging the question is. Your original post, however, reads as though you don't. I still can't quite see how you meant it to be interpreted correctly.

Re:Also, bugs take $$$, who should pay? and ethics (2)

Xtifr (1323) | about 11 years ago | (#7275502)

"Begging the question" aka "circular reasoning", in argument, means [...]

"Begging the question" is not a synonym for "circular reasoning". "Begging the question" simply means that your argument is based on questionable premises. "Circular reasoning" means specifically that you're using your conclusion as your premise. Circular reasoning may be begging the question, but begging the question is not necessarily circular reasoning.

In this case, I think you may be right, it's both circular reasoning and begging the question, but they're not synonyms, although they are, obviously, related.

(If more people on slashdot were to familiarize themselves with common logical fallacies, I think this might be a better place.)

Re:Also, bugs take $$$, who should pay? and ethics (1)

Frobnicator (565869) | about 11 years ago | (#7277391)

I am very familiar with the elements of argument; I just have and use other sources than yours.

In every description of the two that I have seen, other than the site you presented, they are treated as synonymous.

And I agree with you and have done so with others in the past: it would be good if there were more real argument and debate on /. as opposed to just contradiction. [Homage to Monty Python goes here.]


Re:Also, bugs take $$$, who should pay? and ethics (1)

arb (452787) | about 11 years ago | (#7277929)

it would be good if there were more real argument and debate on /. as opposed to just contradiction.

No it wouldn't! ;-)

Re:Also, bugs take $$$, who should pay? and ethics (0)

basingwerk (521105) | about 11 years ago | (#7279645)

"This begs the question" does not mean what you think it means. It actually means a logically flawed argument.

Re:Also, bugs take $$$, who should pay? and ethics (1)

Frobnicator (565869) | about 11 years ago | (#7283883)

It actually means a logically flawed argument.
[mental note: Ignore obvious trolls/nazi attacks like this in the future.] Um, yeah, that was exactly what I said, and it's exactly what I meant. That's why I said it was begging the question.
your link: An argument that improperly assumes as true the very point the speaker is trying to argue for is said in formal logic to "beg the question." Here is an example of a question-begging argument: "This painting is trash because it is obviously worthless." The speaker is simply asserting the worthlessness of the work, not presenting any evidence to demonstrate that this is in fact the case. Since we never use "begs" with this odd meaning ("to improperly take for granted") in any other phrase, many people mistakenly suppose the phrase implies something quite different: that the argument demands that a question about it be asked--raises the question.
My argument was structured as follows:
  • A software engineer introduced a flaw
  • Societal values imply that individuals are responsible for the damages and costs due to flaws they introduce
  • Flaws demonstrably cost the company money
  • Therefore, software engineers need to be held responsible (financially or otherwise) for the flaws they introduce.
From the definition at the place you posted, I "improperly took for granted" that "individuals [specifically engineers] are responsible for the flaws they introduce", since you cannot use the conclusion as part of the argument except when arguing by contradiction. If you prefer, substitute it into the same form as the "question-begging" argument you linked to: "The engineers are responsible for the flaws because they are obviously responsible for the flaws." Therefore, it begs the question (or more precisely, the assertion) that we are responsible for the flaws that we introduce.

As you must know (since you are asserting a logical fallacy), the way to remedy a flaw of this type is either to replace the invalid statement or to support it through other means, showing that we are not taking the statement for granted but that it is a valid piece of the argument. Which is why I asked the begged assertion as a question itself: "How responsible are any of us [as software engineers, electrical engineers, etc.] for the errors we introduce?" If we are indeed responsible, then the argument holds, because the element has been supported through other means. If not, then the argument fails (although the conclusion may still be proven valid through other means).

Finally, whether you accept my argument or not, and regardless of whether you believe the word I should have used was 'begging' or 'demands' or 'brings up' or any other word: /. is an informal discussion board. Enforcing strict formal language rules in this informal arena makes you what is commonly called either a "grammar nazi" or a "troll".

Which would you prefer to be called?


What software engineering should mean (1)

Anonymous Brave Guy (457657) | about 11 years ago | (#7279651)

Should the company be liable for an engineer's errors (as is normally done in business)? Or should the individual or team be liable?

That's an interesting idea that goes right to the heart of how software development is done today. Realistically, at present, a company will have to be accountable for the errors, because if it were all pinned on an individual developer, no-one would risk taking on the job.

In a better world, software engineering (currently a rather offensive term to real engineers, and one of dubious legality in many places) would be done more like real engineering disciplines: ultimately, a qualified engineer would have to sign off on a product and take responsibility for it. However, that engineer would also have the authority to say "no" if management put unrealistic budget or time constraints on a project, and there'd be suitable support, insurance, etc, making his position realistic. Any code monkeys on a project are responsible to that engineer. They aren't liable if it all goes wrong, but if their work isn't up to the engineer's standards, they're deemed incompetent and shown the door.

The "real engineering" scenario is hardly out of this world, but the question is how you make the jump. You need a mechanism for recognising the skill, experience and professionalism of people who are good enough to be engineers. Most of the software developers in the world wouldn't even be close, but who's to decide who is and who isn't good enough?

Remember that you're talking about an industry where "best practices" are in constant competition with fads, and concrete examples often date within a decade. Compare that with, say, civil engineering, where best practices are based on thousands of years of experience, and concrete examples (sorry :-)) last for centuries. At that point you can see the problem with getting software engineering started, but once the ball is rolling, I think the software development world will be a much better place.

Now we're faced with a tough financial decision: do we spend a month or more carefully re-creating and testing the 'design exceptions' (probably 3-5 FTE years in total), or do we do it 'the right way' and break both our own and our customers' software? (Again, several FTE years, but potentially losing the customers' faith.)

In today's management-driven culture, that's easy: your managers have to decide what your customers will accept, and you do what they tell you.

In a more engineering-oriented culture, it's also easy: you do things properly. If your engineers could discuss the problem with their engineers, all sides would probably agree on this, and make the decision that is, in the long term, in the interests of both your company and your client, which is almost certainly to rework the broken system properly, from scratch if necessary.

Personally, I would always prefer to do that, since all my experience tells me that cleaning it up now will take less time than reliably fixing all the known bugs anyway, and will be much more effective at preventing similar "special case" bugs in future.

Re:Yes they do. (1)

Samus (1382) | about 11 years ago | (#7269870)

I have to echo this. One of my assignments right now is to fix a post-production bug. It happens to be a particularly nasty bug that we haven't been able to reliably reproduce. I think if you factor in the things needed to work around the fallout of this bug, it's easily the 50-200 ratio described. Even if you don't, it's still big. There can be a lot of soft costs involved in fixing a post-prod bug. One big one is opportunity cost: if I didn't have to spend my time fixing a bug, I could be off doing other productive things.

Re:Yes they (still) do -- but 15 times? (2, Insightful)

kawika (87069) | about 11 years ago | (#7270409)

I think at the time those numbers were calculated, the software development process was very different from today. It was harder to distribute software, harder to deploy updates, and harder for developers to get information about errors in the field. Testing the next release was a lot more critical, because if a bug did exist it might not be possible to fix it for several months, until the next release could be sent out via floppy or mag tape to each customer.

Today most people download their software through the Internet, and can get patches just as fast, even automatically as they are posted. Tools like Windows Error Reporting, Quality Feedback Agent, and BugToaster make it easier to detect and prioritize bugs based on their frequency of occurrence in the field.

So with all those changes, it's still 15 times more expensive to fix a bug after release? Does that take into account the time value of money, the value of early user feedback, or lost opportunity costs?

Lost customers (0)

Anonymous Coward | about 11 years ago | (#7273867)

If I'm anything to go by: if it doesn't work, I don't use it for another 2 years.

I'm a software engineer, and I don't have the time to download patches for software that others couldn't be bothered to code correctly. I see buggy software as an attitude problem that won't go away with the next release.

Re:Yes they (still) do -- but 15 times? (1)

jrumney (197329) | about 11 years ago | (#7275613)

I think the factor depends a lot on the specific environment the software runs in. When this idea was first proposed, replacing software in the field meant shipping hard copies to users, and for embedded software, replacing PROMs, hence the 50-200 factor. These days the distribution medium is much more likely to be the internet, and at worst even upgrading an embedded system is a matter of plugging in a laptop and reflashing, so the factor might be much lower (5-50 maybe), but the principle still stands.

Re:Yes they (still) do -- but 15 times? (1)

JustAnotherReader (470464) | about 11 years ago | (#7283015)

Today most people download their software through the Internet, and can get patches just as fast, even automatically as they are posted.

But that's the cost of updating the software. What we're all missing is the cost of the bug. Not the cost of finding and fixing the bug, but the cost of the bug itself.

Let's say that you're working for a bank and their wire transfer software develops a bug during the end-of-month or (even worse) end-of-quarter period. There may only be 5000 to 10000 transactions, but those transactions can account for several billions of dollars (yes, I work as a programmer for a bank, and yes, these numbers are reasonable).

A bug like that could cost you customers. The kinds of corporate customers who use that wire transfer system have hundreds of bank accounts with hundreds of millions of dollars in assets. If they're unhappy with their service, then the cost of losing even ONE of those customers is in the hundreds of millions of dollars.

What if the bug causes an airplane to crash? Or a car to suddenly accelerate? Those bugs cause damages (both physical and financial) in the millions of dollars. Yes, the cost of a bug in the latest video game is trivial. But the cost of a bug in systems where people's lives and finances are at stake is tremendous.

Re:Yes they do. (1)

anomalous cohort (704239) | about 11 years ago | (#7270894)

I've not read the particular references cited, and I've not seen anyone care to speculate on the nature of the cost-vs-time curve (i.e. exponential or logarithmic), but my take on this effect isn't so much about bugs in code as it is about errors throughout the entire SDLC.

Here is an example. Let's say that the architect for project X decided to use a file based ISAM instead of a relational database to persist the data. Think how costly the fix would be if this error wasn't discovered until the part of the coding phase where the reports were being written or the part of the stress testing phase where data corruption started turning up. There would be a whole lot of rewriting at that point. Whereas if the error was caught in the architecture or design phase, the cost of the changes would be minimal.
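One way to see why such an architectural error is cheap to fix early and expensive late: if the reports reach storage only through one narrow interface, swapping a file-based ISAM for a relational backend touches one class; if every report opens files directly, it touches every report. A minimal sketch of that design choice, with invented names (`OrderStore` and friends are illustrative, not from the post):

```python
from abc import ABC, abstractmethod

class OrderStore(ABC):
    """The one narrow seam between reports and the persistence choice."""
    @abstractmethod
    def orders_for(self, customer_id): ...

class IsamOrderStore(OrderStore):
    def __init__(self, records):
        self._records = records  # stand-in for file-based ISAM access
    def orders_for(self, customer_id):
        return [r for r in self._records if r["customer"] == customer_id]

class SqlOrderStore(OrderStore):
    def __init__(self, connection):
        self._conn = connection  # e.g. a sqlite3 connection
    def orders_for(self, customer_id):
        cur = self._conn.execute(
            "SELECT id, customer, total FROM orders WHERE customer = ?",
            (customer_id,))
        return [dict(zip(("id", "customer", "total"), row)) for row in cur]

def report_total(store: OrderStore, customer_id):
    # Report code never learns which backend it is talking to.
    return sum(order["total"] for order in store.orders_for(customer_id))
```

With this seam in place, the architect's ISAM-vs-relational mistake is a one-class rewrite instead of "a whole lot of rewriting" across every report.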

Technologies such as AOP help mitigate these kinds of costs.
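A common way to keep that architecture-level decision cheap to reverse is to hide persistence behind an interface from day one, so a late switch from an ISAM file to an RDBMS touches one module rather than every report. A minimal, purely illustrative Python sketch (the Store classes are hypothetical, not from any real project):

```python
import sqlite3

class Store:
    """Interface the rest of the code depends on -- not a storage engine."""
    def save(self, key, record): raise NotImplementedError
    def load(self, key): raise NotImplementedError

class IsamFileStore(Store):
    """Stand-in for a file-based ISAM; a dict keeps the sketch self-contained."""
    def __init__(self):
        self._data = {}
    def save(self, key, record):
        self._data[key] = record
    def load(self, key):
        return self._data[key]

class SqlStore(Store):
    """The late-arriving relational replacement."""
    def __init__(self):
        self._db = sqlite3.connect(":memory:")
        self._db.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
    def save(self, key, record):
        self._db.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", (key, record))
    def load(self, key):
        return self._db.execute("SELECT v FROM kv WHERE k = ?", (key,)).fetchone()[0]

# Report code written against Store doesn't care which engine is underneath,
# so swapping engines late is a one-line change at the construction site:
for store in (IsamFileStore(), SqlStore()):
    store.save("acct-1", "balance=100")
    assert store.load("acct-1") == "balance=100"
```

The point isn't the particular classes; it's that the report-writing code never learns which engine it's talking to, so the "whole lot of rewriting" shrinks to one constructor call.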

Trade off (2, Interesting)

Koos Baster (625091) | about 11 years ago | (#7268644)

Defects are easier to find in a concrete product than in a conceptual design. Also, many bugs will be introduced in later stages. Therefore, even a foolproof design may evolve into a buggy implementation. So surely: there is a trade-off between looking for "bugs" too early and fixing bugs too late.

Nevertheless a trainer is correct in stressing the golden think-before-you-code rule - especially when instructing inexperienced coders.

Every program has two purposes -- one for which it was written and another for which it wasn't.

Re:Trade off (1)

angel'o'sphere (80593) | about 11 years ago | (#7269676)

That's not right. You can apply the exact same quality assurance measures at any stage of the development, except tests.
It is equally easy to have a formal review of your design as it is to have a formal review of your code.
While you "plan" software you should have a clue how you want to ensure that you have: a) what you want, and b) how you want it, in every single stage.
A standard way is:
  • Use Use Cases to capture requirements
  • Use Scenarios to refine Use Cases and lay the foundation for later test cases
  • Use Walk-Throughs to validate the Use Cases and Scenarios
With planned and well-done Walk-Throughs you will be able to catch a lot of errors, especially if you use pre- and postconditions in Use Cases as well as in Scenarios.
Derive your conceptual design (component architecture) from the use case model; use Walk-Throughs to verify the mapping from the use case model to the component architecture. Play the scenarios on the component architecture.
Repeat the quality assurance measures, like Walk-Throughs or Inspections, on every "gap" from one design level to the next.

Re:Trade off (1)

ichimunki (194887) | about 11 years ago | (#7271551)

I think the problem here is that bugs and defects are possibly two different things (not that you missed this distinction). Obviously it's going to cost a lot (either money or time or whatever) to fix a defective requirement that gets fully coded. You'd quite possibly have to start entire pieces of the project from scratch. On the other hand, a buffer overflow error that is introduced early on in the process isn't going to be any harder to fix later than it is if it is caught right away.

I wonder if the best way to produce error-free code isn't to design as a team and code in pairs. Sure it sounds like a lot of upfront expense, but some studies show that pair programming especially has a positive impact on the process in spite of the seemingly higher opportunity cost.

Bug != Defect (1)

magnum3065 (410727) | about 11 years ago | (#7282856)

Bugs are when the software doesn't fulfill the specification; defects are when the specification doesn't fulfill the requirements. These problems are introduced at different stages of development. As one professor put it, there are two questions: "Did I implement the thing right?" and "Did I implement the right thing?" Early on in the software design it's important to make sure that the specifications written for the software actually meet what the customer wants. These are the problems that can potentially be very costly to fix later on. You can implement an entire software system that perfectly meets the specs (i.e. no bugs), but if the specs were flawed it could take a lot of time to revise the specs and fix the system to implement the right thing.

Bug testing is another thing completely. You can't find bugs until they've actually been written in the code. This is the reason for the "test early, test often" philosophy and code reviews. It's important to find bugs early too, but you're right that it isn't feasible to find bugs before the implementation phase.

Logarithmic (4, Informative)

gazbo (517111) | about 11 years ago | (#7268665)

Looks more exponential to me.

Re:Logarithmic (1, Informative)

Anonymous Coward | about 11 years ago | (#7269110)

Logarithmic progressions are exponential.

Re:Logarithmic (0)

Anonymous Coward | about 11 years ago | (#7270366)

What is a logarithmic progression, and how is it exponential?

Re:Logarithmic (1)

MarkusQ (450076) | about 11 years ago | (#7271523)

Logarithmic progressions are exponential.

No, they aren't. Think about it. For any n > 1, base > 1, log(n+1)-log(n) < log(n)-log(n-1), whereas exp(n+1)-exp(n) > exp(n)-exp(n-1).

The parent was correct; the examples were 10^n for n=0,1,2... This is exponential, not logarithmic.
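For the record, both halves of that claim are easy to verify numerically. A quick sketch in Python (standard library only):

```python
import math

# Differences between consecutive terms: logarithmic increments shrink,
# exponential increments grow.
log_diffs = [math.log(n + 1) - math.log(n) for n in range(1, 6)]
exp_diffs = [math.exp(n + 1) - math.exp(n) for n in range(1, 6)]

assert all(a > b for a, b in zip(log_diffs, log_diffs[1:]))  # decelerating
assert all(a < b for a, b in zip(exp_diffs, exp_diffs[1:]))  # accelerating

# The $1/$10/$100/$1000 figures from the question are 10**n for n = 0..3,
# i.e. exponential in the phase number, as the parent says.
costs = [10 ** n for n in range(4)]
print(costs)  # [1, 10, 100, 1000]
```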

-- MarkusQ

Re:Logarithmic (1)

tyrecius (232700) | about 11 years ago | (#7272868)

Exponential up, logarithmic down. They are opposites, like plus and minus. If a progression is exponential, it is logarithmic, and vice versa.

That said, such progressions are generally said to be logarithmic. It is just a convention. Like the fact that geometric and arithmetic series can be thought of as division and subtraction respectively, but we generally don't.

No. (1)

MarkusQ (450076) | about 11 years ago | (#7273216)

Exponential up, logarithmic down. They are opposites, like plus and minus. If a progression is exponential, it is logarithmic, and vice versa.

No, they are inverses. If the mapping from the range to the domain is exponential, the mapping from the domain to the range is logarithmic. This does not mean that there is some value A' for every value A such that A^B is the same as log-base-A'(B) for all B. The functions are not the same.

-- MarkusQ

Re:Logarithmic (1)

Anonymous Brave Guy (457657) | about 11 years ago | (#7279698)

If a progression is exponential, it is logarithmic, and vice versa.

No, if a progression is exponential then the reverse of that progression would be logarithmic.

You're confusing "opposite" with "inverse". The next step after +n vs. -n and then n vs. 1/n might be e^n vs. log n.

Look at it another way. Exponential series grow faster than any polynomial; logarithmic ones grow slower than any polynomial. The two cannot be the same: arguably the most fundamental properties they have are opposites.
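Those growth claims can be checked numerically too; a small Python sketch (n = 100 and the polynomial n**5 are arbitrary choices):

```python
import math

n = 100
poly = n ** 5  # an arbitrary polynomial, n^5 = 1e10

# Exponential eventually dominates any polynomial...
assert math.exp(n) > poly           # e^100 ~ 2.7e43 vs 1e10
# ...while the logarithm is dwarfed by it.
assert math.log(n) < poly           # log(100) ~ 4.6

# And the gap only widens as n grows: exp/poly keeps rising,
# log/poly keeps falling.
assert math.exp(2 * n) / (2 * n) ** 5 > math.exp(n) / n ** 5
assert math.log(2 * n) / (2 * n) ** 5 < math.log(n) / n ** 5
```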

Re:Logarithmic (0)

Anonymous Coward | about 11 years ago | (#7280666)

Regardless of nomenclature -- but the hashing out in the parents is correct -- if bug cost were logarithmic then it would make sense to never fix them... your return on investment is at least linear in sales, so you want to wait and wait... do you suppose the original poster of the question is Bill Gates? That might 'splain some things!

More references... (2, Informative)

Bazzargh (39195) | about 11 years ago | (#7268712)

As I recall there was a conference paper in Extreme Programming Perspectives which describes an "infection" model for bug creation, fixing, etc. They were trying to model exactly the effect you describe, to see if they could (in a model) find any justification for XP's argument against the increasing cost of bugs through phases. Again, just from memory, they do try to validate the model against figures from real studies.

There's also material in Watts Humphrey's book on the Personal Software Process (about as far from XP as you can get). That book is illustrated throughout with statistics about students who tried to complete the exercises in the book, including in Chapter 13, where there's a section on "The Costs of Finding and Fixing Defects".

Re:More references... (1)

El (94934) | about 11 years ago | (#7271742)

Uh, maybe bugs found earlier in the development process are cheaper and easier to fix because they're much more obvious bugs! I don't know about you, but as a developer I always fix the easy problems first, and leave the "pie in the sky" enhancement requests for future development...

Yes (3, Interesting)

Anonymous Coward | about 11 years ago | (#7268723)

Compare the cost of testing and then deploying over-the-air updates to a set of mobile phones (with the associated risk management) against the cost of just building and shipping new code that has yet to undergo testing or launch.

To give you an idea, managing the testing and over-the-air upgrading of software in mobile phones can become a new project in its own right, with all the associated monitoring and overheads.

Fixing a bug in a pre-launch project can be a one-minute job.

Re:Yes (2, Informative)

archilocus (715776) | about 11 years ago | (#7268834)

Agreed. I've always thought Barry's estimates were on the money.

Comes down to what you consider a bug? If you think a bug is a spelling mistake on a web page, then 1:5 is probably not far off (but bad enough!). If a bug is a bad design decision in a mass-market product, then 1:1000 might be a bit on the light side...

Don't look back! The lemmings are gaining on you!

Re:Yes (1)

Kanagawa (191142) | about 11 years ago | (#7269533)

If you have to create a new project to deal with upgrading code in a distributed environment, you incurred the bug way back in the requirements / design stage of the system. Implementing a fix then would have been pretty expensive, too. Does the cost of that project really change all that much between the beginning and the end of the lifecycle?

Posit: It costs more because where one massive design problem has been built into the system, there are likely to be several (dozens?) more just as bad or worse. Finding that one bug is like turning the lights on in the kitchen--all of a sudden you spot twenty roaches, not just one. If your mobile phone infrastructure included a mechanism to test, deploy, and undeploy code from the very beginning then each incremental update would be cheaper.

Getting rid of a single roach can be a hassle, but it's rarely expensive in a well-designed system. Getting rid of a colony of roaches that are living inside a rococo infrastructure is a massive undertaking -- after all, you wouldn't want to damage the gold leaf plate, right? :-)

Design simpler systems, with fewer inherent problems, and you end up with fewer colonies. There will always be roaches, but at least you won't have an infestation on your hands. When you do find a small colony, you'll have less to rip apart and fix. Knowing good design when you see it seems to be an art, rather than a science. There appear to be lots of "software architects" who think complexity is the hallmark of good design.

Re:Yes (0)

Anonymous Coward | about 11 years ago | (#7275853)

Good points, but I'm not talking design problems here, the design is sound (as far as is known), I'm just talking implementation problems.

Even non-serious bugs that don't warrant delaying the product, but that will need over-the-air fixing soon after launch, can be expensive -- but less expensive than delaying launch.

Re:Yes (1)

p_tweak (708775) | about 11 years ago | (#7275311)

They actually test cell phone firmware? Have you used a Nokia phone lately???

Costs accumulate (5, Insightful)

David Byers (50631) | about 11 years ago | (#7268871)

Never forget that complexity accumulates. Fixing the bug itself probably costs about the same at every stage, but other costs are introduced as the project moves along, and peak after the software has been deployed.

A bug found after deployment has costs associated with it that a bug found during coding does not:

  • Cost of running integration and system tests again.
  • Cost of recertification (if you're in that kind of environment).
  • Cost of deploying the software again.
  • Support costs when only half your customers deploy the new version.
  • Indirect costs caused by using resources to fix bugs rather than implement revenue-generating features.
  • Liability for damages caused by the bug.

The cost of finding and fixing the bug may be negligible compared to other costs.

Another aspect of the issue is the nature of the bugs you find late. In my experience, bugs that survive testing and deployment tend to be either bugs in requirements or pretty subtle bugs that slipped through testing, and both are more expensive than the type of bugs commonly detected early on during development.

Re:Costs accumulate (3, Insightful)

SnowDog_2112 (23900) | about 11 years ago | (#7269170)

If I could mod that post up, I would, but my Magic Mod Points are empty today, so I'll just post a little "me too" post.

I can't point you at any studies, but I think it's common sense. In anything but a fly-by-night shop, the later in the cycle you are, the larger the ripple-effect is from making any change.

If I can fix a bug in my code before it gets to QA, QA has never seen the bug. There's no bug in the bugtracking database, there's no need to review the bug at a weekly cross-functional bug triage meeting, and there's no need to write specific regression tests that specifically make sure that bug is fixed. There's also no need to perform those specific regression tests on every build that follows to make sure it's still fixed. There's no need to hold a meeting to justify the cost of fixing the bug versus the cost of simply leaving it in and documenting its presence and its workaround. Just there, I've saved a ton of time.

The costs explode even higher once the software is in the field. Once it's in the field, it hits support every time the bug is reported in the field (multiple times, usually, as of course level 1 support is usually going to blow off the report, tell them to reboot, or whatever the "filter out bogus complaints" method of the week is). Finally it might bubble up through support but only after it gets seen multiple times, costing us money every time. Then it gets argued about by who-knows-who (more time/money) until finally someone tells development it's a bug, and then we have to hold meetings and decide whether it's important enough to fix immediately, whether it should go in a service pack or just the next version, etc. We have to write up a technical bulletin and distribute it, that bulletin has to be reviewed by documentation, product management, QA, and who-knows-who else. Then QA has to specifically add test cases to make sure the fix is there in future versions, etc.

The costs explode. Again, in any sort of large corporate environment, a cost difference of 100:1 seems completely reasonable to me.

Re:Costs accumulate (1)

Anonymous Brave Guy (457657) | about 11 years ago | (#7279762)

The costs explode. Again, in any sort of large corporate environment, a cost difference of 100:1 seems completely reasonable to me.

I think the key point behind your post is that you're considering the overall costs to a large scale outfit, including all the supporting infrastructure the company has to create to handle bugs from QA and customers. Perhaps a lot of people here are only considering the time spent directly by the development and QA teams themselves.

OTOH, with a smaller scale outfit the costs don't scale up nearly so fast. The "everyone knows everyone" effect within a small development shop, and good working relationships with your major clients so bugs are reported clearly and efficiently, both go a long way.

Re:Costs accumulate (3, Insightful)

dabraham (39446) | about 11 years ago | (#7273690)

There's also the question of definition. How much does it cost when
  • some customers who were thinking about buying your product decide to buy CompetitorCo's because they heard that you've had three bug fixes already?
  • you annoy your partners by telling them "Hey, the stubs we sent you to start working against have just changed. Here's the new version."?
  • you burn out your geeks by calling a meeting Friday afternoon and telling them that MajorCustomerCo just found a big bug, and it needs to be fixed by Monday 9AM?
And moreover, yes, these will vary dramatically from project to project.

Re:Costs accumulate (1)

Anonymous Brave Guy (457657) | about 11 years ago | (#7279724)

That was a great post, but I do disagree with this one part:

Fixing the bug itself probably costs about the same at every stage,

I think one of the reasons fixing bugs later on is more painful is that the developers who were intimately familiar with the code when they wrote it will have moved on, and will take time to get back up to speed. Worse yet, the original developers may have left, and unless they were of the rare breed who both write good comments and leave compact but complete high level documentation, no-one else will ever have quite the same insight into the hows and whys, making it less likely that any fix will be complete and reliable.

Have you watched fight club? (2, Interesting)

oliverthered (187439) | about 11 years ago | (#7268941)

The guy in Fight Club worked out whether the cost of a recall and fixing the fault was going to be greater than the cost of litigation.

I would expect the same kinds of factors come into play when the product is software instead of hardware. So why not try Google?

Sometimes it costs less to pay a person to manually correct data that is incorrect due to a fault in the core of a product; sometimes it costs less to do a rewrite.

Alistair Cockburn (1)

jhannes (677848) | about 11 years ago | (#7269231)

There are XP proponents that claim that the cost-of-change curve is still valid. (Summary: if you perform work based on a faulty assumption, it will always cost you.) The real question that XP poses, though, is: how do we deal with this fact?

"The exponential cost curve is mostly in detecting and communicating the Mistake and naming the change that is to be made. XP cannot change that curve, and indeed, XP takes that increasing cost curve neatly into account. So the first lesson I get is that one should not base a defense of XP on the ABSENCE of the curve, but rather on the PRESENCE of the exponential cost curve."

A recent example (1)

JonoPlop (626887) | about 11 years ago | (#7269410)

Even disregarding the direct developer cost of finding and fixing the bugs, take a look at a recent example. Enter the Matrix seems to have been plagued with bugs (although I must admit that I haven't played it yet myself). It's "reputation" is now far from good, and so the bugs have caused a large loss in potential sales.

Now a hypothetical example. You work for a large company, and you're looking for some enterprise-level database software (HypotheticalDB). You get some that has been hyped, and spend a fair amount of time learning it, and developing your solution. When you start trying to use it, though, there are a few major bugs that make it unusable for practical purposes. You eventually switch programs. Now, the company that makes HypotheticalDB will have lost money in some sales (as the company expands), and likely support, too. But the major hit is this: HypotheticalDB 2.0 comes out, which is a lot less buggy than 1.0. However, would you really trust that program again? Also, you've already set up your solution and are running it now; there would be a large cost for you to switch over, too. So HypotheticalDB has lost money in software licenses, support, and future sales of that product and probably others. And that's excluding the actual development cost of finding and fixing the bugs.

Re:A recent example (1)

JonoPlop (626887) | about 11 years ago | (#7269432)

OMG, so much for my reputation, too ("it's reputation")... How did I miss that? Now I have to make up for it; Fixing bugs after is much more expensive than if I'd corrected it then and there. :)

Larry Ellison's Solution to Version 2.0 Problem (4, Interesting)

joneshenry (9497) | about 11 years ago | (#7269540)

From what I have read, Oracle's founders had the best solution to the problem of customers holding off buying until version 2.0: "This first Oracle was named version 2 rather than version 1 because the fledgling company thought potential customers were more likely to purchase a second version rather than an initial release."

Re:A recent example (1)

El (94934) | about 11 years ago | (#7271806)

Let's further assume that the vendors of HypotheticalDB insist that your server be connected to the internet "for support purposes". When their first customer encounters a bug, they rush to the customer site, analyze the problem, fix it, and patch all the other customers' systems over the net without telling the customers. Now they haven't lost any sales, and they've saved themselves the cost of doing massive testing in a live environment. (I.e., look at an airline reservation system with tens of thousands of concurrent clients -- how do you simulate that load in a lab?) Remember the M$ mantra: "Our customer is our best quality assurance!"

Good point -- Backend bug fixin easier today (2, Interesting)

mactari (220786) | about 11 years ago | (#7269756)

If you have a good logical design that compartmentalizes each functional unit of your code (what I'll call "well-factored"), how long should it take to fix any one bug? For a typical app, even of pretty hefty size, you should, in theory, be able to run to the exact object, swap out what's broken, and *poof*, every place that functionality is needed is good to go. XP et al. really do lose a lot of time in the overhead it takes to keep two people on any programming task, unit test, and the rest. You might be nearly guaranteed nice code, but what's your opportunity cost? In short, it's having two coders hacking about twice as much on what, if they're mature enough, should be well-documented, modular code!

Now we all know *poof* is not the case, and we all know that a well-factored system is about as hard to come by as nirvana (which means each fix requires ripping out a chunk of code), but the argument is still a valid one. Unless you have a huge system, where perhaps someone's "fixed" a bug by hack on top of hack ("Hrm, Bob's addFunction always returns a number one too low. Instead of bugging Bob, I'll just add one to the result in my function."), bugs today aren't like bugs in pre-object-oriented days. If coders in the 80's had the debug tools and languages we have today... Let's face it, it's much easier to create an Atari 2600 game today than it was when you had to burn to an EPROM to test on hardware each time and print out your code to review it.

The bottom line is whether it's more cost-effective to prevent 99.44% of bugs up front than it is to fix the few that slip through. I believe the original post is simply suggesting that the cost of fixing on the backside is dropping considerably, especially compared to what the same results would've required decades ago, and that is, honestly, a good point.

(Remember, this isn't upgrading code -- it might be awfully tough to make code that's slapped together change backends from, say, flat files to an RDBMS; this is just bug fixing to make what you've got work *now*. But XP tells us not to program thinking that far down the road anyhow, so future scalability is another topic altogether.)
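That bottom-line question can be put into a toy break-even model. Everything here is illustrative (the function and its inputs are made up); the 100x and 5x multipliers are the ratios Boehm's work cites in the original question:

```python
def total_cost(n_bugs, prevent_fraction, prevent_cost_each, fix_multiplier):
    """Toy model: a bug prevented early costs prevent_cost_each;
    a bug that slips through costs fix_multiplier times as much."""
    prevented = n_bugs * prevent_fraction
    escaped = n_bugs - prevented
    return prevented * prevent_cost_each + escaped * prevent_cost_each * fix_multiplier

# With the 100:1 ratio, heavy upfront prevention wins easily:
# 99 prevented + 1 escaped -> 199, vs 50 + 50 escaped -> 5050.
assert total_cost(100, 0.99, 1, 100) < total_cost(100, 0.50, 1, 100)

# With the 5:1 ratio reported for small non-critical systems,
# the gap narrows a lot (104 vs 300), which is the poster's point.
print(total_cost(100, 0.99, 1, 5), total_cost(100, 0.50, 1, 5))
```

If late fixes really have gotten cheaper (the multiplier shrinking from 100 toward 5), the model says the optimal amount of upfront prevention shrinks with it.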

Re:Good point -- Backend bug fixin easier today (1)

Anonymous Brave Guy (457657) | about 11 years ago | (#7279799)

bugs today aren't like bugs in pre-object oriented days

Why? I think bugs today are exactly like bugs in "pre-OO days". Modular code was a good idea then, and is still a good idea now, but I fail to see why OO has anything to do with it...

POP3 as an example. (5, Insightful)

Technician (215283) | about 11 years ago | (#7269769)

If POP3 could have looked forward and seen the spam and forged-header abuses, security could have been part of the standard. Now that POP3 and IMAP mail is everywhere and forged headers are also everywhere, changing the de-facto standards is a big thing. Making the switch to something more robust will be a long and painful transition. Everything will be incompatible for a while.

It will be as easy as getting the US to switch to the metric system or transition with the rest of the world to driving on the left side of the road. Both would be much cheaper if they were implemented in the beginning instead of attempting a transition later.

Re:POP3 as an example. (1)

Haeleth (414428) | about 11 years ago | (#7273552)

> It will be as easy as getting the US to ... transition with the rest of the world to driving on the left side of the road.

I was under the impression that the side of the road on which people drove was split fairly evenly between left / right? Britain, parts of the Commonwealth, and Japan on the left, the USA and mainland Europe on the right, and India straight down the middle except when there's a cow in the way.

Re:POP3 as an example. (1)

colinleroy (592025) | about 11 years ago | (#7279931)

Very true, but you meant SMTP. POP3 and IMAP are somewhat secured with login methods.

That depends.. (1)

sporty (27564) | about 11 years ago | (#7269823)

Uh... that depends on the bug. A bug where the grammar and spell checker are switched has a small initial cost to the user, but once they figure it out, it's fine. Fixing it should be near-minimal cost: something in an errata to the manual, if there was a printed one, and poof -- a software bug that costs near nothing.

If it's a bug where something is off by a dollar per 100 or so transactions, that is hella costly -- both to find (if it's not consistent) and in customer support and all of the other efforts to fix it.

Always remember the 90-90 rule. (1, Funny)

TheSHAD0W (258774) | about 11 years ago | (#7269993)

The rule is, 90% of the bugs take up 90% of your budget, and the remaining 10% take up the other 90% of your budget. This goes the same for time before deadline.

Windows worms are cheap (1)

nuggz (69912) | about 11 years ago | (#7270000)

Yeah, it is so cheap having those MS Windows worms running around.
I can't see how it would only cost 5 times as much to get millions of users to patch their system, account for their lost time, and write the patch.

Re:Windows worms are cheap (1)

El (94934) | about 11 years ago | (#7271899)

Uh, Micro$oft just posts the patch to their servers. Patching those millions of systems doesn't cost Micro$oft a cent! And Micro$oft assumes no liability whatsoever for problems caused by their security holes. On the other hand, if they delay a release, their revenue slips. Now do you begin to understand their business model?

Re:Windows worms are cheap (1)

greenhide (597777) | about 11 years ago | (#7272281)

Yes, it does. It probably costs them thousands for each patch (more like hundreds of thousands), so you have to divide that by the number of systems to get a cost per patch, which may be minimal but will *always* be > 0. There are also bandwidth costs, promotion costs, PR people who spread the word about the patch, techs who post detailed articles on the patch, and probably customer service techs who stand by to deal with their very important customers.

Since a patch goes out to (hopefully) so many systems, it isn't much on a per system basis. But it still costs them a lot of money.

Re:Windows worms are cheap (1)

El (94934) | about 11 years ago | (#7272578)

Actually, customers click on "Windows Update" or have automatic "Windows Update" notification, plus most IT publications announce whenever there are new patches available, so PR costs should be negligible. Yes, there are fixed development costs associated with each patch, but these are probably only twice as costly as doing the fix before release (does anybody really believe they go through a full test cycle on every patch?). Agreed, the cost per patched system is non-zero, but it may be so low as to be less than the opportunity cost of delaying shipment of a product due to potential bugs. Many companies seem to have this "drop whatever features are necessary to meet the announced ship date" mentality; apparently some consider adequate design analysis and testing to be two of the things that can be dropped...

Depend on the bug (2, Insightful)

emmenjay (717797) | about 11 years ago | (#7270283)

Coding bugs are generally not too tough to fix (though sometimes hard to find). Design bugs are the killer. If you discover a design bug after implementation, you might need to change or even rewrite big slabs of code. The logarithmic estimate is probably a worst-case analysis, not an average case. But without a doubt, design bugs that make it into production are bad stuff. That's why software engineers are either grey-headed or bald. :-)

Re:Depend on the bug (1)

El (94934) | about 11 years ago | (#7271996)

Design bugs are the killer.
And yet, every place I've worked has given only lip service to design reviews, refusing to spend the time to actually analyze the designs or fix flaws pointed out...

What type of bugs? (4, Insightful)

Mesozoic44 (646282) | about 11 years ago | (#7270432)

Years ago I worked with a bunch of economists in the US Federal Government - they categorized 'bugs' in their memos into three types:
  • Typos: Simple misspellings of words. Infrequent, easy to detect, easy to fix.
  • Writos: Incoherent sentences. More frequent, hard to detect, harder to fix.
  • Thinkos: Conceptually bonkers. Very frequent, subtle and hard to detect; almost impossible to fix.

Most 'late' bugs that I've seen in software projects belong in the last category - a lack of design or the failure to make a working mock-up leads to 'thinkos' which are only obvious when the application is nearly completed. These are expensive to fix.

Yes (1)

photon317 (208409) | about 11 years ago | (#7270674)

If you define "later phase of the project" as a point further down the line on a scale of code complexity, then it's obviously true. Rooting out a bug is much easier when the codebase is small and simple than when it has grown into a huge, complex behemoth.

Bug vs Design Flaw (1)

Godeke (32895) | about 11 years ago | (#7270676)

I think that the accumulated cost factors for late bugs are exaggerated by lumping in major design flaws with simple bugs. Surely a bug found during the coding process itself is cheapest to fix (thus XP's pair programming and test-first methods) because the hunt isn't going to be as large. However, having worked on the same project for three and a half years, we stumble over hidden bugs from time to time, and they really are not much more difficult to fix than recent bugs.

Where we find ourselves paying a premium is changing design decisions. However, as we have been following XP incremental design, even that isn't that bad, and frankly the most expensive design corrections we have had revolve around over-designing. Customer-driven iterations have been much less likely to be changed than our brilliant analysis.

Sometimes fixing bugs late brings money in (0)

Anonymous Coward | about 11 years ago | (#7271415)

Well, past a certain point, you can get your customers to do the testing, and then have them pay for fixes through maintenance and consulting.

The nice point about it: it's also a good way to retain customers and charge them after they paid good $$ for the initial release.

I've been working for quite a few software companies by now, and even though this has never been an explicit policy, there seems to be a tacit agreement that meeting FCS deadlines is far more important than delivering bug-free software.

As long as the customers seem to accept this, and don't find competing products with better practices, I see no particular evil in this behavior: after all, we're still trying to deliver the best software we can...

Business vs. Technical (0, Troll)

Anonymous Coward | about 11 years ago | (#7271961)

So... are you still at Microsoft?

examples from my former industry (1)

morcheeba (260908) | about 11 years ago | (#7271419)

I used to do embedded programming where it was really costly to fix in the field.

Here's a similar project's repair estimates (PDF). Mind you, this product cost 1000x our product, but since they were at similar customer sites, the repair cost wouldn't be significantly different.

Service trip #1 = $413 million
Service trip #2 = $497 million
Service trip #3 = $547 million
Service trip #4 = $400 million
(note: these prices don't include airfare)

In fact, it would be far cheaper to just toss out our old hardware and start from scratch ($13 million total costs) than it would be to try to fix it in the field.

Yes, 100:1 is outdated (1)

El (94934) | about 11 years ago | (#7271659)

Back when you had to physically send an employee to fix the problem after it shipped, or send out replacement ROMs, it may well have been 100:1. Now that everybody assumes bugs found after ship are par for the course, and builds in software/firmware upgradability over the 'net, it's probably more cost-effective to ship with bugs and fix them later, once you factor in the opportunity cost of delaying shipment to be absolutely sure there are no bugs. Many companies appear to operate that way these days (cough, Microsoft, cough). The only downside seems to be that sending customers an email telling them they need to upgrade, because what you sold them is a crock of manure, could damage your company's reputation. However, software companies are working on a fix for that too... they'll simply update your software for you without bothering to tell you about it! Isn't it wonderful now that almost every computing device is connected to the 'net?

Use some common sense (1, Interesting)

Anonymous Coward | about 11 years ago | (#7271662)

I think a little common sense can show that the cost of finding defects in the field is higher, although it depends on the nature of the product. Each time a customer calls support, it'll cost money. If multiple customers call support for the same bug, it will cost even more. In addition, the lost reputation may cost future sales.

Would you buy a car from the same company again if your current car had a lot of recalls? Is it cheaper for the car company to fix a defect before the car is made, or perform a recall? While a patch may not appear to cost as much as changing physical parts, it still requires additional $upport and hurts the company's reputation.

Contractors (3, Insightful)

pmz (462998) | about 11 years ago | (#7272173)

How do you fix a bug cheaply when the contract has ended and all the people who worked on it are gone? Enter training costs for new staff.

How about needing a whole new contract just for the bugs? Enter the immobile bureaucracy.

How about a year later, when, even if someone from the project is still around, it takes them a few days just to remember what they did 14 months ago? Enter seemingly wasted time.

Anecdotal evidence is perfectly good evidence for the fact that late bug fixes are very expensive.

It's a fundamental XP issue (1)

PinglePongle (8734) | about 11 years ago | (#7272206)

The same question was asked in an [] XP usenet post. In fact, in Kent Beck's book on eXtreme Programming, he discusses the cost of changing software at the various stages of the development process, and the recognition that this "10-100-1000" progression is not necessarily true is a fundamental part of the XP philosophy.

For what it's worth, I believe there are many domains where that cost escalator does not apply. If you have a well-designed application rolled out to a manageable userbase with access to a helpdesk or the development team, it is fairly cheap to release software, and it is cheap to find and fix bugs. It's reasonably cheap to find and fix bugs on web projects, too. There are also domains where fixing bugs after release is extremely expensive: embedded devices, shrinkwrapped software, software subject to regulatory checks.

At the time the metrics were collected, compile times were a significant issue for pretty much all developers; the "code-compile-test" cycle could take hours or days. Nowadays, most of us can compile our applications in seconds or minutes (no, not the Linux kernel. Yet.) IDEs and CASE tools have made it easier to understand code, and we have debuggers that let us look deep inside the application in real time.

I don't think the "one-size-fits-all" metric was ever valid, just as there is no "one-size-fits-all" development process. I do think the ratio for any given project has gone down from its level in the early 1980s, though.

Show stopper bugs cost more with time. (1)

toybuilder (161045) | about 11 years ago | (#7272238)

Minor bugs might be okay, but show stopper bugs are definitely more expensive.

Here's why: early in the development cycle, the team working on the product is small and intimately familiar with it. Finding and fixing the bug takes less time and costs less money in idled staff-hours.

As the product grows, more developers are added, and additional staff (let's pretend this is a commercial effort) comes in: sales, marketing, customer support, et cetera. Now the same show-stopper bug might take the same amount of time to identify and fix, but there are potentially more people down the dependency chain who will also be affected.

And, in reality, as the product gets bigger and gains more developers, there's a fair chance the developers will be less intimately familiar with their code and will end up spending more time fixing the bug!

It's expensive when you have to trash stock... (2, Interesting)

Cranx (456394) | about 11 years ago | (#7272548)

It's expensive when you have to trash your CD stock because it's unshippable, or when you have to ship CDs to all of your customers three times in two weeks after you release. Try it, and I bet you'll have all the empirical data you and your wallet need.

Fix bugs early; it's less expensive that way. =)

Exponential, not logarithmic (1)

Wesley Everest (446824) | about 11 years ago | (#7272652)

Sounds like you meant the cost grows exponentially, not logarithmically. But, yes, that makes perfect sense and probably applies to much more than software. Imagine you are a "developer" building a house. You put the foundation one foot too close to the property line, violating zoning laws. Imagine the costs if someone noticed the flaw in the blueprint before concrete was poured vs. after the foundation was poured, vs. after the whole house was built on top of it.

It certainly depends on the bug, but when you think about it, in the worst case a bug is so fundamental that all the rest of the code depends on it and would need to be redesigned. If you catch such a catastrophic bug early in the process, it's not such a big deal; towards the end of the project, it could mean the death of the project.
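The distinction matters: under the classic rule of thumb, cost is an exponential function of the number of phases a defect survives. A minimal sketch (the phase names and the 10x multiplier are illustrative, taken from the 1:10:100:1000 folklore rather than any particular study):

```python
# The classic 1:10:100:1000 rule of thumb -- fix cost multiplies by
# roughly 10x per phase the defect survives, i.e. it is exponential
# (not logarithmic) in the phase index. All figures are illustrative.
PHASES = ["requirements", "design", "coding", "testing", "field"]

def fix_cost(base_cost, phase_found):
    """Cost to fix a defect first noticed in the given phase."""
    return base_cost * 10 ** PHASES.index(phase_found)

# fix_cost(1, "requirements") -> 1; fix_cost(1, "testing") -> 1000
```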

Forest for the trees (1)

duffbeer703 (177751) | about 11 years ago | (#7273502)

You are comparing apples and oranges.

A bug found in the requirements phase is not the same as someone's misplaced semi-colon.

Say you were told to develop an inventory management system. You deliver a curses-based terminal app and the customer says "wait, I was expecting a web interface!"

That is a requirements-gathering bug that will require substantial work to correct after release!

Cost is a function of ignorance (1)

joab_son_of_zeruiah (580903) | about 11 years ago | (#7273565)

"Corporate Trainers" are only mouthing conventional wisdom.

FWIW, many of the original studies are of software development projects in which (1) the system is unique (e.g., flight control); (2) it has never been built before; and (3) "the right people" for the job exist but are too expensive (so you make do with folks who are less skilled).

With these constraints, the figures of 10x, 100x, 1000x help predict life-cycle cost given a larger economic constraint (i.e., scarcity of talent). Estimation is a major issue when bidding with the gov't. Inside a private company, such statistics can be used to bludgeon the employees into all sorts of stupid actions.

The open source movement might provide an interesting counterpoint to this accepted wisdom. There might be some interesting results in the case where the developers are highly qualified on a specific piece of code. The picture might be different for exploits vs. functionality, too.

Scientific Experiment (1)

Hard_Code (49548) | about 11 years ago | (#7273722)

Ok, the scientific thing to do would be to 1) fix a bug in development, and compare the cost against 2) leaving the bug as is, then trying to fix it while you are in live production....

Any takers? ;)

Don't think of it in "dollars and cents" terms... (2, Insightful)

crazyphilman (609923) | about 11 years ago | (#7274633)

The cost of a bug isn't in cash per se. Whether a programmer is in-house or a contractor, they're going to be at your shop for the standard work-week at least, right? So they're either fixing your bug or they're browsing slashdot. You pay the same either way.

The REAL cost of a bug while the project is being coded is the delay to your project, which could push you past deadline. The cost of a bug after the project rolls out is the embarrassment of getting caught with your pants down, plus the inconvenience of pulling people off other work to fix it.

So in my opinion, bugs are "cheapest" to fix during the initial design and prototype phase, where you're probably not that close to your deadline and you have some wiggle room.

They're more "expensive" to fix when you're closer to a deadline and the delay screws you up (for example, find a bug during user acceptance testing and you've got to go back and code, then start the testing all over again).

They're most "expensive" to fix when you've rolled out the project, the users come to depend on it, and something goes wrong. This embarrasses you and makes your code look untrustworthy, and forces you to scramble to deal with the problem, rolling out a patch, etc, all while dealing with hot-under-the-collar users.

I think this three-level way of looking at it is a lot more useful than any kind of imaginary mathemagical flim-flam. Forget the numbers, worry about the egg on your face. ;)

Some real-world numbers (1)

a1291762 (155874) | about 11 years ago | (#7277137)

I worked at a company that produced telco products (switches, base stations, etc.), and we had an estimate of $10,000 per bug found in the field. This assumed a relatively straightforward problem that could be fixed within a few days; if it took the developers longer to investigate/fix the bug, the cost went up. The cost was so high because we had multiple levels of support who would investigate before passing the info on to a developer, who would confirm the bug's presence and fix it. A new build would then be made and tested (to ensure the bug was fixed and nothing else was broken) before being sent out to the customers.

We had a dedicated Verification (testing) team that tested the unreleased code (i.e., the next release). They spent their entire time trying to figure out how customers would use the system and then using it in those ways. We even had a small mobile network set up to do the testing properly.

Bugs caught before release were much cheaper to fix since it didn't require a dedicated build/test run, just a developer to fix it and a verification person to test the fix. The Verification team got a new build each week containing all the bug fixes made in that time.
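The $10,000 figure is easy to believe once you add up the layers described above. A hypothetical breakdown (the tier names, hours, and hourly rate are invented for illustration, not from the original company):

```python
# Hypothetical accounting for one field bug: every support tier burns
# billable hours before and after the developer's actual fix.
HOURLY_RATE = 100  # assumed blended rate, $/hour

FIELD_BUG_HOURS = {
    "tier-1 support call handling": 8,
    "tier-2 triage and reproduction": 16,
    "developer investigation and fix": 16,
    "new build plus regression test": 40,
    "rollout to customers": 20,
}

def field_bug_cost(hours=FIELD_BUG_HOURS, rate=HOURLY_RATE):
    # Total staff-hours across all tiers, converted to dollars.
    return sum(hours.values()) * rate

# 100 hours * $100/hr = $10,000 -- the quoted per-bug estimate.
```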


Don't get any ideas (2, Insightful)

cookiepus (154655) | about 11 years ago | (#7277694)

This question is idiotic, and in fact given the kind of code I get to work with every day, I should like to punch you in the nuts for asking. But since I cannot do that, I am going to give you a real answer ;-)

BTW, if you only read one part of my post, read the last paragraph.

For small, non critical projects the difference is indeed smaller because the complexity is much more manageable. Let's say you're building a house for your dog and your design forgot to specify which way the door should be facing. It doesn't really matter at which point you figure this out, because at any time you can pick the house up and turn it so the door faces the right way. Cost for error correction is exactly the same because the unit is stand-alone, the error is obvious, and easily correctable.

On the other hand, let's say you're building a pedestrian bridge between the Student Union and the Library, which are also being built at the same time. If during design you realize that "wait a minute, the library's entrance is facing away from the union, how's this bridge going to work", you can correct the issue fairly quickly. By the time the bridge and the library are built, your options for fixing the issue are very expensive. Which is why the bridge we had at Stony Brook wasn't all that convenient for about 20 years. It finally got torn down last year.

Analogies aren't even necessary here, because there's plenty of real-world experience (mine!). Here's a quick example. Client does something and the server crashes. It is easy to detect this at the time of bug introduction, because "hey, chances are that the code I just wrote is the buggy one," so you know where to look. Five years later, when someone else is working on your code and something crashes because the clients started entering new kinds of trades, or because this guy is Indian and his name is longer than you allocated for, it's going to be a BITCH to find which part of the code does the crashing. Sure, the fix may take the same amount of time (just allocate 20 more chars and you'll be fine until aliens with REALLY long names land and start using our system), but bug identification took you a whole lot longer, and it cost you more.
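The too-short name field is a classic latent bug: silent at the point of introduction, and it only bites years later. A minimal sketch of the failure mode (the 20-byte record layout and function names are hypothetical, for illustration only):

```python
import struct

# Hypothetical fixed-width record with a 20-byte field for a name.
# struct.pack silently truncates anything longer -- no crash, no
# warning, just quietly corrupted data that surfaces long after the
# code shipped, far from the line that caused it.
def pack_name(name):
    return struct.pack("20s", name.encode("ascii"))

def unpack_name(record):
    (raw,) = struct.unpack("20s", record)
    return raw.rstrip(b"\x00").decode("ascii")

# A short name round-trips fine; a long one is quietly cut off at 20.
```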

The biggest incentive for detecting errors at the stage they are introduced is that the stages are developed one from another. In the above paragraph, I show that even an implementation error caught during the maintenance stage is more expensive than one caught immediately; but both stem from the fact that the spec and the design erroneously omitted (for example) how long a name should be. It is a spec error all along. If the spec had stated the required name length, the programmer would likely have implemented it correctly. If not, the QA testers would certainly have detected it during the testing stage.

You can argue with your instructor all you want, but in the real world not only is it more time consuming to find the error later on, it has more of a chance of affecting a customer - which can become an expense of its own easily enough.

Plastic biro or mont blanc (1)

oo_waratah (699830) | about 11 years ago | (#7277816)

This is more complex than just a simple metric. I would agree that the cost of that coding bug could have been 20 minutes to test and 10 minutes to fix, but after implementation it took two weeks to placate the client, prove my software, etc. So the management of people costs a lot of time, hence money. XP does NOT fix this "people perception" problem.

Now look at the counter side. How long would it have taken me to construct the testing regime that would eventually have found that bug on an existing product without real tests? I set up a minimal test system over a period of 12 months, and I would estimate it took me about 10 days (probably more). Was this testing regime stringent? No! Would it have picked up that bug? No! Why not? Very simple: if I had thought about it, I would not have coded it that way. Because I failed to think about that boundary, I would not have tested it anyway.

So the real cost metric that management are focused on here is: what is the cost of testing vs. the cost of bugs in production? I never make this call; I always advise more testing! The cost of testing can far exceed the cost of the code. Is this realistic in your particular environment? Was the code change that you made worth that level of stringent testing?

I know that XP brings in "test for a bug, fix bug, never repeat bug"; I now live by this dogma, and it has saved my arse many times. The downside is that you have to "see" the bug, and this is the big problem. There are no golden bullets.

Quality is relative; determine your level. Set a reasonable level of certainty that you want to release to. Medical embedded systems: test, test, and test! Medical accounting records: test, test. Automatic door closer with a manual workaround and fast software load: quick functional test. One size does not fit all.
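The XP "test for a bug, fix bug, never repeat bug" discipline mentioned above amounts to pinning each field failure as a permanent regression test. A minimal sketch (the parsing bug and function names are invented for illustration):

```python
# Hypothetical field bug: quantities arrived as "1,000" and the
# original int(text) call blew up on the comma. The fix strips the
# separator before converting...
def parse_quantity(text):
    return int(text.replace(",", ""))

# ...and the exact failing input is pinned as a regression test, so
# the same bug can never ship a second time.
def test_comma_separated_quantity():
    assert parse_quantity("1,000") == 1000
    assert parse_quantity("42") == 42
```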

What a retarded question (0)

Anonymous Coward | about 11 years ago | (#7277830)

The reason the trainers get ticked off is because you only ask that question if you are retarded or are trying to tick them off. I guess you should be glad they didn't assume you are retarded.

Maybe not just the lateness? (1)

tjstork (137384) | about 11 years ago | (#7279494)

There are plenty of dinky little bugs that can get unearthed in the few days before release. These aren't necessarily more expensive to find or fix.

Really bad late bugs don't happen if you get your requirements straight, and test early and often.

Fixing one may introduce others (1)

jurasource (568039) | about 11 years ago | (#7279496)

Again, as mentioned in "The Mythical Man-Month", bugs found during the later stages of development (and more so post-deployment) are more costly, because of:

  • The cost of regression testing
  • Other (perhaps subtle) bugs that may well be introduced by fixing the first
The book goes on to say that, post-deployment, fixing bugs (and introducing new ones in the process) causes the system itself to atrophy; in other words, it becomes more unstable, not more stable.

Gone over a waterfall lately? (1)

sohp (22984) | about 11 years ago | (#7279863)

Unlike this guy [] , you aren't likely to survive going over a waterfall these days. A more recent discussion of the cost of change [] and a further examination by Alistair Cockburn [] might be better than reviewing Boehm again.

It's not just the cost "to fix", but also... (2, Insightful)

AJWM (19027) | about 11 years ago | (#7282044)

...the cost of the wasted effort down the wrong path.

For example, if you get a requirement wrong and spend X developer-months designing and coding a subsystem around that requirement, the cost to fix it includes that already sunk cost plus the cost of reworking the design and code to make it conform to what the spec should have said.

Or consider the case where section II.3.iv of the spec utterly conflicts with the requirements detailed in section IV.2.iii. If you don't catch that early (and on a large project, given the size of the specs, you may well not), you'll have two different subproject teams off designing, coding, and testing at cross purposes, and you'll only discover the problem at integration time.

Sure, some requirements or design bugs are trivial to fix even after coding is almost complete (you got the color of some GUI feature wrong, say). Others aren't (you missed some key requirement that radically affects the way the data should be represented and you have to change all your data structures and database tables).

plan for bugs then. (1)

nietsch (112711) | about 11 years ago | (#7282681)

Every methodology guru agrees that bugs will creep into any development effort. The thing you need to accept when designing and writing your code is that it may all be wrong, from the design up. I have seen few methods that emphasize planning for your bugs ahead of time; XP will even declare YAGNI!
There will be some points in your design where bugs would have grave consequences; it is up to the designers/programmers to identify those points and plan the repairs ahead.

How? I don't know, really; I'm surprised you even read this far.