Programming IT Technology

Can Software Schedules Be Estimated? 480

J.P.Lewis writes "Is programming like manufacturing, or like physics? We sometimes hear of enormous software projects that are canceled after running years behind schedule. On the other hand, there are software engineering methodologies (inspired by similar methodologies in manufacturing) that claim (or hint at) objective estimation of project complexity and development schedules. With objective schedule estimates, projects should never run late. Are these failed software projects not using proper software engineering, or is there a deeper problem?" Read on for one man's well-argued answer, which casts doubt on most software-delivery predictions, and hits on a few of the famous latecomers.

"A recent academic paper Large Limits to Software Estimation (ACM Software Engineering Notes, 26, no.4 2001) shows how software estimation can be interpreted in algorithmic (Kolmogorov) complexity terms. An algorithmic complexity variant of mathematical (Godel) incompleteness can then easily be interpreted as showing that all claims of purely objective estimation of project complexity, development time, and programmer productivity are incorrect. Software development is like physics: there is no objective way to know how long a program will take to develop."

Lewis also provides a link to this "introduction to incompleteness (a fun subject in itself) and other background material for the paper."
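
The paper's central lever is the fact that Kolmogorov complexity is uncomputable. A minimal sketch of the classic contradiction behind that fact; the kolmogorov_complexity oracle below is hypothetical, and the whole point is that it cannot exist:

# Sketch of why no program can compute Kolmogorov complexity K(s).
# Suppose, for contradiction, this hypothetical oracle existed:
def kolmogorov_complexity(s: str) -> int:
    """Hypothetical: length of the shortest program that prints s."""
    raise NotImplementedError("provably impossible in general")

def first_complex_string(n: int) -> str:
    """Return the first binary string with K(s) > n."""
    from itertools import count, product
    for length in count(1):
        for bits in product("01", repeat=length):
            s = "".join(bits)
            if kolmogorov_complexity(s) > n:
                return s

# Contradiction: for large n, first_complex_string(n) is itself a short
# description (a fixed program plus about log2(n) bits for n) of a string
# whose shortest description supposedly exceeds n bits. So K is not
# computable, and neither is any exact, objective complexity measure of
# the kind schedule-estimation methodologies would need.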

  • by JohnHegarty ( 453016 ) on Monday November 05, 2001 @11:05AM (#2522290) Homepage
    There are only two types of software schedules:

    1) As long as it takes.

    2) Take your best estimate, then double it and add 5 or something....

    I prefer "as long as it takes." Otherwise you end up with something like Windows Me.
    • "1) As long as it takes."

      As long as it takes to get it right. To a point, this is a barrier that open-source software largely avoids, since development is continual until it's no longer required.

      An interesting question is: when will Linux/*BSD development stop? Will it be surpassed by another project (or projects), or evolve toward perfection?
      • Ummm... Closed source doesn't necessarily hit it either. As a programmer, I can tell you that we often have time overruns due to a large variety of reasons, one of which is QA finding mistakes in our programming assumptions.

        EFGearman
      • This reminds me of "The New Jersey Method versus the MIT method."

        The MIT Method is to take as long as needed to get a task done "right," regardless of cost and schedules.

        The New Jersey method calls for solving 80% of the problem, and putting off 20% until later.

        The MIT method results in more project failures than the New Jersey method. Microsoft epitomizes the New Jersey method, as does open source. Multics followed the MIT method, and was never actually finished, just killed off years later...

        If anyone has a reference for the "MIT vs NJ" in its original form, please post it.
    • by magi ( 91730 )
      2) Take your best estimate, then double it and add 5 or something....

      The standard multiplier used is PI.

      There are also some interesting results on programming speed in Prechelt's comparison of different programming languages: an article [ira.uka.de], a tech report [ira.uka.de].

      One of the conclusions is that scripting languages such as Python or Perl are about 2-3 times as fast to program in as Java or C/C++, at least for small projects. The scripting solutions were also about half as long in lines of code. There were also some differences in the reliability of the solutions: Python and Tcl scored very well compared to C, although the small sample size for C may make those results misleading.

      I'd personally be very interested to see better data on the differences between C and C++. I've recently been involved with C again after a long pause, and it seems like an awfully risky language to program in. However, it may be faster than C++ on average, and Prechelt's results agree with this impression.
    • Software design is the ultimate application of Hofstadter's Law, which states:

      Everything takes longer than you expect, even when you take into account Hofstadter's Law.
    • by johnnyb ( 4816 ) <jonathan@bartlettpublishing.com> on Monday November 05, 2001 @02:18PM (#2523550) Homepage
      The truth is, you can somewhat accurately estimate project time. The problem is, few know how.

      The thing is, you must get entirely through the design stages first. The design stages should include every screen as well as every possible error message, sub-screen, or whatever can pop up, as well as an outline of how the program flow will go. This takes a lot of time, but not quite as much as it sounds.

      Once you have done the complete design, you can accurately make schedules. The problem is, most programmers put all error handling and messaging off as something that doesn't need to be designed. That's where the extra time comes in. If you know _exactly_ how the program flow is supposed to work, estimating time is easy. However, if you haven't finished the design stage, YOU DON'T KNOW WHAT YOU'RE PROGRAMMING, so, obviously, you can't estimate the time. So, with a _complete_ design, including all possible error conditions and actions to be taken, scheduling is not that hard.
  • The majority of Slashdot readers are students without any notable software engineering experience. Sure, not everyone here fits this description, but it's certain that there will be lots of hearsay, what-my-professor-told-me responses, and misguided personal theories based on blind idealism.
    • Right place to Ask (Score:2, Interesting)

      by maroberts ( 15852 )
      Like any public forum, Slashdot has a wide range of readers, a large number of whom actually work in the software engineering field [myself included].

      Anyway, my personal theory based on blind idealism is that it is extremely difficult to get an estimate for completion right; short-term goals are fairly easy to predict, because you have most of the information you require to make those predictions, but longer-term estimates are much more of a wild guess. I personally think it's a consequence of chaos theory: a butterfly flutters its wings in Brazil and your software project instantly takes another two years! More seriously, small errors in estimating components of a large project can induce large errors in estimating the time and resources needed to complete the whole project.

      Linux is right with its "release when ready" motto. Since it is impossible to tell when it will be ready over such a wide range of groups and interests, you have to pick your release moments when they happen, not try and force them to happen.
      • (Maybe someone should do a survey to find out how many of us are pros?)

        Likewise, I've been developing (C++) for a living for about 12 years now and I've come to some conclusions:

        There are estimating techniques/metrics which will work. They depend upon a few rounds of projects to "calibrate" them, and upon consistent application. "Task Points" was a good one: basically, break your use cases down and down until you have a series of one-line statements about the system. Multiply these by your magic number and that's the estimate. This, like all estimating techniques, is built on sand because:

        It depends upon a development team sticking around long enough to do a few projects to calibrate your method.

        It depends upon the exact functions of the system being known at the time you do the estimate. This is the killer.

        I have never worked on a project where the exact functioning is known at the time coding starts. I have, however, observed that the more analysis/design you do before estimating, the more accurate the estimate is. The problem is that people always want the answer (estimate) before they've given you the problem (spec).

        FWIW On small projects (which are generally better defined), I run through the spec, do a rough n' ready count up of the number of classes, multiply by a factor (decided by the complexity of each class and who I think is going to code it) add a QA+debugging allowance and come up with figures which aren't too wide of the mark.

        Oh yeah, and the "who's coding it" is important. Lots of studies show that the difference between "good" and "bad" coders can be a factor of ten. I've been slammed by PMs after estimating how long something would take me, then the PM puts some "cross trained" ex VB dork on it.

        To summarise: it is possible if you know who is coding what. Recommendations: 1) read Brooks, 2) keep it small, 3) ignore any of the "latest methodologies" that Project Managers try to sell you. A sketch of the task-point arithmetic follows.
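
        A minimal sketch of the task-point arithmetic described above; the buckets, days-per-task values, and QA allowance are invented for illustration and would come from calibrating against your own past projects:

        # Hypothetical "task point" estimator: break use cases down into
        # one-line task statements, bucket them by difficulty, and scale by
        # multipliers calibrated from the team's history.
        def estimate_days(task_counts: dict[str, int],
                          days_per_task: dict[str, float],
                          qa_fraction: float = 0.3) -> float:
            """qa_fraction is the QA + debugging allowance."""
            coding = sum(n * days_per_task[bucket]
                         for bucket, n in task_counts.items())
            return coding * (1.0 + qa_fraction)

        tasks = {"easy": 40, "medium": 15, "hard": 5}
        calibration = {"easy": 0.5, "medium": 1.5, "hard": 4.0}  # invented
        print(f"{estimate_days(tasks, calibration):.1f} developer-days")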

    • where'd you get that idea?

      i've always thought most /. readers are programmers and IT people who come here to kill time at work.

      -c
    • by Organic_Info ( 208739 ) on Monday November 05, 2001 @11:28AM (#2522427)
      True but most experienced S/W engineers or Project managers know that most projects slip because of changes to/deviations from the original project spec.

      Fixed specs are much easier to engineer than those that continually change. You wouldn't easily engineer a bridge if the river banks kept moving.

      I think experienced project managers know how to control the spec rather than the project. (I could be wrong - It's just what I've seen).
      • by gorilla ( 36491 ) on Monday November 05, 2001 @11:48AM (#2522581)
        I'd have to agree with this. There are two major problems, the first being that the users don't really know what they want and the second being that almost always, the problems being solved are new problems, and therefore it's difficult to know what solution will best solve the problem.
      • There also seems to be a professionalism problem in software development: programmers often deviate from the project spec to add things that they want to add, just because it's fun for them, with no regard to the impact on the deadline or whether the feature is required or even useful for the project. Project deadlines for bridges would also often slip if some of the engineers kept deciding halfway through that it "would be cool" if the bridge pillars "looked like giant penguins" or something. "Real" engineers have the professionalism to realise that they need to stick to the spec. With software it's not quite so clear that you absolutely have to, so (unprofessional) software developers spend too much time near the beginning of the project adding fun, cool, useless things instead of concentrating on what needs to be done. Then for the last two weeks before the deadline SOMEBODY ELSE (usually me) ends up picking up the slack and working 16-hour shifts to get the program ready for delivery.

        I keep having fights with one of the developers here, who is a good programmer, but he has *no* concept of deadlines, time, or priorities. Even the *management* have started multiplying his development time estimates by a factor of three (it's usually the other way round!). He's always saying "I'd like to add this," or "it would be really cool if we had this feature," or "but we're going to need this eventually anyway" (for future projects that don't exist yet). And it's always "it'll take less than a day," or "it'll only take a day or two." And it ALWAYS takes several times longer than "a day or two." And these things add up; he just doesn't see it. A few days here and there soon add up to a month or two. I can't get it into his head that even if it "only takes a day," as he insists, that's one day that we don't have to spare; we're already running late as it is. It's simply not possible to add features without pushing your deadline further back, and he just doesn't get that. It's unprofessional, and it's frustrating.

        My biggest problem as project manager just seems to be getting people to work on what they're supposed to be doing. It doesn't help either that my manager keeps finding other things for the programmers to do. Some of the developers are professional, and will just focus on doing their jobs without requiring nanny assistance, but some of them you seem to need to check up on several times a day to make sure they're not doing the things they *want* to be doing. I shouldn't have to do that.

    • by Anonymous Coward
      ...the real reason estimating doesn't work is that there's no way to predict how much time programmers will spend reading Slashdot...
    • I'm not saying the majority of Slashdot readers are professional developers, but don't judge the readership on the first-posters.

      That aside, in my experience in software development (only 3 years), ballparking (1-3 days, 1-3 weeks, 1-3 months) is usually possible, but it tends to become wildly inaccurate beyond a few months. Regardless of what method we use to determine timelines, some things always seem to slip, while others take a fraction of the expected time.
  • by Anton Anatopopov ( 529711 ) on Monday November 05, 2001 @11:07AM (#2522300)
    But not with any degree of accuracy. Function point analysis is one method that has had some success. The key to delivering projects on time always has been and always will be RISK MANAGEMENT.

    Software development is not a science in the normal sense. Designing large software systems is an art. It cannot be pigeonholed. Stroustrup has a lot to say about this when he describes the "interchangeable morons" concept in the 2nd edition of his C++ book.

    Anyway, read Death March by Ed Yourdon, The Mythical Man-Month by Fred Brooks, and AntiPatterns; any time someone asks you for an estimate, say "two weeks" and then bullshit from there on.

    That is how it works in the real world. The numbers are essentially meaningless, but the bean counters and suits have to justify their existence somehow :-)

    Can you imagine asking Linus when 2.5 will be ready ?
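
    For the curious, the function point arithmetic mentioned above, as a minimal sketch. The weights are the standard IFPUG values for average-complexity items; the counts and the hours-per-FP rate are invented placeholders that real teams calibrate from their own history:

    # Unadjusted function point (UFP) count, average-complexity weights.
    AVERAGE_WEIGHTS = {
        "external_inputs": 4,
        "external_outputs": 5,
        "external_inquiries": 4,
        "internal_logical_files": 10,
        "external_interface_files": 7,
    }

    def unadjusted_function_points(counts: dict[str, int]) -> int:
        return sum(AVERAGE_WEIGHTS[k] * n for k, n in counts.items())

    counts = {"external_inputs": 20, "external_outputs": 12,
              "external_inquiries": 8, "internal_logical_files": 6,
              "external_interface_files": 2}
    ufp = unadjusted_function_points(counts)
    hours_per_fp = 8.0  # assumption: calibrate from historical data
    print(f"{ufp} UFP, roughly {ufp * hours_per_fp:.0f} hours")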

    • by sql*kitten ( 1359 ) on Monday November 05, 2001 @11:21AM (#2522389)
      Software development is not a science in the normal sense. Designing large software systems is an art. It cannot be pigeonholed

      That's exactly the sort of attitude that has caused spectacular software project failures to be accepted as the norm. Software Engineering is *not* "hacking" or "coding" or "programming", it's *engineering*, like building a bridge or a skyscraper. Yes, those projects go over time and budget too sometimes, but they are the exception rather than the rule.

      That is how it works in the real world. The numbers are essentially meaningless, but the bean counters and suits have to justify their existence somehow

      The problem is endemic in the industry. The other Engineering professions require rigorous accreditation before they let practitioners loose in the world, like the PE (in the US) or the Charter (in the UK). But the software industry hires anyone, and lets them get on with whatever they do, with no real management or oversight or planning.

      In a well analyzed and properly planned project, the actual coding stage is little more than data entry.
      • by KyleCordes ( 10679 ) on Monday November 05, 2001 @11:28AM (#2522433) Homepage
        This approach applies, more or less -- sometimes MUCH less -- depending on how well understood the problem domain is and how many times you have done it before.

        If you're building your 57th e-commerce web site, which works roughly like the 56 you built before, you can estimate very, very well, and you can reduce coding to nearly data entry.

        If you're solving a problem of unknown scope, which your team has not solved before, whose solution is not clear, and where analysis has revealed some but not all of the details, then your estimate will not be very accurate.
        • This brings to mind the old quote: "If builders built buildings the way programmers wrote programs, the first woodpecker that came along would destroy civilization."

          When looked at in the context of practical experience, this is quite false. We have been building buildings for at least several thousand years, with some tremendous successes and some spectacular failures. I live in Toronto, where we were lucky (I think) enough to have the first major league baseball stadium with a retractable roof. IIRC, the original cost estimates were in the vicinity of $100 million (CAD). When the stadium opened (pretty close to on time), the cost was actually around $480 million (CAD).

          I guess this somewhat proves you can estimate either cost or time accurately, but not always both. My experience in the IT industry has shown that most problems can be overcome with enough resources. Unfortunately, resources are not limitless, and therefore concessions must be made. This generally means the completion date slips, or functionality is reduced, or a combination of both.
      • If the software industry were saddled with the same level of process that exists in other engineering professions, we'd still be using character-based software, the web and the internet as we know it today wouldn't exist and most business would still be conducted on paper.

      • by john@iastate.edu ( 113202 ) on Monday November 05, 2001 @11:34AM (#2522484) Homepage
        Writing software is not like building bridges because halfway through the project some dumbass from marketing doesn't come down and tell you that concrete is out and so it needs to be a steel bridge. Oh, and those tacky cables have got to go -- the focus group hated them.

      • by clare-ents ( 153285 ) on Monday November 05, 2001 @11:51AM (#2522598) Homepage
        "
        That's exactly the sort of attitude that has caused spectacular software project failures to be accepted as the norm. Software Engineering is *not* "hacking" or "coding" or "programming", it's *engineering*, like building a bridge or a skyscraper. Yes, those projects go over time and budget too sometimes, but they are the exception rather than the rule.
        "

        But that's simply not true. Writing any non-trivial software is not the same as straightforward engineering. For a start, there is the rate of progress: how many people have 30+ years of experience building 50+ story buildings? How many people have 30+ years of experience dealing with terabyte-sized datasets?

        When building software, previous code can be reused for a very small amount of effort; when building skyscrapers, the previous design can be reused for only marginally less effort than the last one.

        Compare the difference between building a C compiler from the gcc source and the World Trade Center from its blueprints.

        Essentially the estimate is

        Time = [time to do the bits we know how to do [accurate] ] + [guess for the bits we don't know how to do [inaccurate] ]

        With software, the first part of that expression tends towards zero, since for most things we know how to do we can reuse code, whereas with building it remains a large, accurate estimate.

        The error here will be of the form

        Error = [variance of inaccurate terms] / [total]

        For the example of a skyscraper, whose construction is mostly a known method, this will tend to a small number, since the inaccurate term is much smaller than the accurate term. But for software, with reuse of all the known methods of coding, this will tend to 1 -- i.e., 100% error in the estimate -- and hence the conclusion that it's worthless to even bother estimating.

        In my company we can accurately estimate how long projects will take provided they are mostly identical to ones we have done before, and if this is the case it generally costs the client more in programmer time in meetings to discuss the cost of the job than it does to write it.
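
        A worked example of that error model, with invented numbers:

        # total = known (accurate) + guess (inaccurate); the relative error
        # of the total is driven by how much of the work is guesswork.
        def relative_error(known: float, guess: float,
                           guess_error: float) -> float:
            return guess_error / (known + guess)

        # Skyscraper-like project: mostly known methods.
        print(relative_error(known=900, guess=100, guess_error=80))   # 0.08

        # Software project: the known part is reused code, so it shrinks.
        print(relative_error(known=50, guess=500, guess_error=450))   # ~0.82
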
      • That's exactly the sort of attitude that has caused spectacular software project failures to be accepted as the norm. Software Engineering is *not* "hacking" or "coding" or "programming", it's *engineering*, like building a bridge or a skyscraper.

        We'd like it to be so, but it ain't.

        The behavior of bridges and skyscrapers is determined by classical physics, which allows us to make precise predictions.

        The behavior of computer programs is governed by complexity theory, which tells us that any reasonably complex program has non-predictable behavior. And the manageability of software development depends on human understanding and appreciation of code -- there's an aesthetic factor.

        Certainly things could be better...the fact that something has a large component of art doesn't mean that there aren't areas of mastery for a practitioner to study. But at its heart, the creation of complex software requires a creativity and intuition that cannot be set to a timetable.

        (Yes, one can "engineer" art to some degree - popular music being an example, where teams of marketers follow formulas to construct the next boy band. But that does not result in a quality product that stands the test of time.)

        In a well analyzed and properly planned project, the actual coding stage is little more than data entry.

        But the problem still applies to the design phase.

      • the software industry hires anyone, and lets them get on with whatever they do, with no real management or oversight or planning.

        The software industry doesn't hire anyone. Software companies hire people, and a company that behaves like you described won't be around for long if software is their main source of revenue.

        Also, management != good software engineering. Planning != good software engineering. These are all factors that go into a good software project but people shouldn't think that if they draw class diagrams before they start coding, they're suddenly software engineering.

        On the other hand, you need to look at what's best for the project - it isn't always a large, formal approach to software, especially for small projects. Being too rigid can be as bad as being too loose with your design. I've seen projects design themselves into a corner before the first line of code is even written.
      • That's exactly the sort of attitude that has caused spectacular software project failures to be accepted as the norm. Software Engineering is *not* "hacking" or "coding" or "programming", it's *engineering*, like building a bridge or a skyscraper. Yes, those projects go over time and budget too sometimes, but they are the exception rather than the rule.

        I agree with you up to a point. I am an engineer. I have worked in process engineering at AMEC, and now work in design engineering. I have not done much coding, but I think that software development probably relates most closely to design. In design you can estimate a schedule, but that schedule is dependent on everything going perfectly the first time, which we all know doesn't happen. This also does not include problems with parts we have to design around, which we then have to wait on, or a change in the requirements of our part. (Sound familiar yet?)

        This is all in the conceptual, design phase. It doesn't include the actual production of a physical part. That all happens later, after our 3D model has been packaged correctly. Once the physical part has been made, then there are the joys of testing and testing and testing...

        What I'm trying to get at, is that I've experienced several forms of Engineering (Yes there are many), and I think that Software development relates most closely to Design. In design, there is no reasonable way to schedule out how long things will take. We just make an estimate based on what's happened in the past, and change things as we go along.
      • by Aceticon ( 140883 ) on Monday November 05, 2001 @01:20PM (#2523144)
        Let's see:
        • At any point in time the ground your skyscraper stands on can crumble into nothingness. [Operating System bugs]
        • Your skyscraper can be required to stand on slightly different types of ground. [Operating System types and versions]
        • Also, the steel, glass, and cement you are using have wildly varying properties. They also might have been imposed by an outside entity (read: Company Standards). [Third-Party Components]
        • Plus, the elevators that you get always do less than their specifications (for example, they don't stop on the 5th floor). The next version of the elevator will actually do that, but on the other hand it doesn't fit in the elevator shaft. [Third-Party Components and Applications]
        • Also, halfway through building the skyscraper you find out that the plans have been changed and it's now supposed to have a shopping mall on the ground floor. [Creeping Requirements]
    • by xyzzy ( 10685 ) on Monday November 05, 2001 @11:29AM (#2522438) Homepage
      I agree with your risk management comment, and a later poster who mentioned fixing the endpoint, but I'm not sure I agree on your claim that it can't be pinpointed with any degree of accuracy.

      After ~15 years in the industry, I've found that one thing that makes a huge difference is the experience of the team, and the familiarity between the actual engineers and the project management.

      As you have experience solving a variety of classes of problems, you can predict with increasing accuracy the time it'll take you to solve later problems. And as your management sees you getting increasingly accurate in your estimates (based on past projects) they can create better and better schedules and estimates for the project as a whole, and have a better intuition for the gray areas of development, or the greener developers.

      Projects that tend to go off into the weeds have included (in my experience) wholly green teams, wholly green management, or areas of development that are outside the areas of expertise of one or both.
    • Software development is not a science in the normal sense. Designing large software systems is an art. It cannot be pigeonholed.

      An experienced software project manager can usually be quite accurate in estimation of effort for a well analyzed software project.

      This, however, highlights a few problems in The Real World:

      • many (most?) software projects are ill defined.
      • many (most?) software projects are not analyzed properly prior to the start of architecture design and start of coding
      • many (most?) software projects are not resourced properly up front; resources are thrown haphazardly at a project once deadlines are quickly approaching
      • many (most?) software projects are given unrealistic deadlines prior to analysis being done
      • many (most?) software project leaders do not have the political experience needed to manage the business expectations of a project [most engineering schools have mandatory Management Sciences courses for their students. Most CS schools avoid Humanities courses...yes, I am a CS grad].
      • many (most?) software senior developers are not encouraged to get involved in the "business" aspects of software projects.

      Am I too pessimistic? I don't believe so.

    • by Overt Coward ( 19347 ) on Monday November 05, 2001 @12:16PM (#2522750) Homepage
      The key to function points -- or any other estimation technique -- is relying on historical data to predict future results. This means they are fairly accurate as long as you collect metrics and stay within the same general project domain and relative project size. The more radically the new project departs from historical size or domain, the less accurate an estimate will be.

      However, the biggest thing to remember is that no matter what estimation method is used, a methodical approach to analyzing the problem will almost always yield a reasonable estimate.
      The main reasons projects go over schedule and budget are:

      1. "Feature creep" -- having the requirements change significantly over the course of the project without adding the impact of changes into the schedule.

      2. Rampant optimism -- many engineers (and managers) will typically estimate how long they think it should take to do a specific task but will not add in a buffer in case something goes wrong. And something always goes wrong.

      3. Artificial deadlines -- project schedules where the budget (time and money) was set by customer/marketing commitments, and not by the technical requirements at all.

      4. Calendar/personnel issues -- people take vacations, there are holidays, and people occasionally fall ill. Plan for it. Also, don't forget any company/department meetings, training, seminars, etc.

      5. Dependencies -- if a required piece of hardware or software won't be available (or is late), it can impact the overall schedule, especially if critical path tasks depend on those materials.

      Risk management is indeed the key. As the project manager or lead engineer, it is your job to predict what the potential risks might be and attempt to mitigate them on a cost-effectiveness basis. You can still be bitten by bad luck, but you can minimize the chances it will strike.
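
      A minimal sketch of the historical-data idea in the first paragraph, assuming you log estimated versus actual effort for projects in the same domain (all numbers invented):

      # Calibrate a correction multiplier from past projects, then apply
      # it to a new raw estimate.
      past = [(30, 52), (45, 70), (20, 38), (60, 95)]  # (estimated, actual)

      multiplier = sum(actual / est for est, actual in past) / len(past)

      raw_estimate = 40
      print(f"calibrated estimate: {raw_estimate * multiplier:.0f} days")
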
  • Sure they can... (Score:3, Interesting)

    by Mike Schiraldi ( 18296 ) on Monday November 05, 2001 @11:10AM (#2522319) Homepage Journal
    As they say, the first 95% of a software project takes 95% of the time.

    And the remaining 5% of the project takes another 95% of the time.
  • by Tadghe ( 18215 ) on Monday November 05, 2001 @11:11AM (#2522329) Homepage
    The reason that software development timetable "estimation" (guess is a better word) is so often wrong is that quite often you are not given enough information about the project to accurately pin down what your milestones are, much less your final delivery date.

    To accurately plan a software release you must have the project, and all its complexities and nuances, down COLD. Otherwise you are not giving an estimate; you are giving a guess based upon incomplete knowledge.

    The question becomes: do you, or can you, know the complete details of the project? In this, software development is NOT like manufacturing, but more like home construction.

    Think about it.
    • by markmoss ( 301064 ) on Monday November 05, 2001 @11:42AM (#2522534)
      To accurately plan a software release you must have the project, and all its complexities and nuances, down COLD. Otherwise you are not giving an estimate; you are giving a guess based upon incomplete knowledge.

      The bulk of the work of programming consists of getting all the complexities and nuances down cold. Once you really and completely understand what is required, coding is trivial.

      This leads to a thoroughly unrealistic method of estimating software costs:
      1) Work for months on the specs.
      2) Get the customer to sign on to those incredibly detailed specs, even though he doesn't understand them.
      3) Go and code it, no spec changes allowed.

      8-)

      The article mainly talks about the mathematics of estimating complexity. This is a lot like the proof that you cannot determine when or whether a computer program will end -- it's true for pathological programs, but it has little relevance for the real world. You try to write the code so the conditions for the program to end are clear. If it gets into an endless loop, you probably got a conditional expression backwards and you'll recognize it immediately once you figure out which loop is running endlessly... Likewise, there may be well-defined specifications for which it is impossible to estimate the coding time, but the usual problem is poorly-defined specs, which obviously makes any estimate a guess.
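
      The undecidability result alluded to above, as a hypothetical-code sketch; the halts oracle is exactly the thing being proven impossible:

      def halts(program, arg) -> bool:
          """Hypothetical: True iff program(arg) eventually terminates."""
          raise NotImplementedError("provably impossible in general")

      def paradox(p):
          # Do the opposite of whatever halts() predicts for p run on itself.
          if halts(p, p):
              while True:
                  pass  # loop forever
          return        # halt immediately

      # paradox(paradox) halts iff halts(paradox, paradox) says it doesn't:
      # a contradiction, so no such oracle exists. As the comment says,
      # though, real schedule blowups come from poorly-defined specs, not
      # from pathological self-reference.
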
  • by sphealey ( 2855 ) on Monday November 05, 2001 @11:13AM (#2522340)
    Very large and complex projects do get completed, sometimes even on-time/on-budget. Examples include skyscrapers, nuclear submarines, aircraft carriers, power plants (whether conventional or nuclear), oil refineries, B-747/A-320, etc. And all of these systems nowadays have a software component as well.

    So the easy response is that bad management in general, and bad project management in particular, is responsible for software project failures. While this is no doubt true, the next question has to be, why do software projects have such bad project management?

    I don't have a good answer, but one thing that occurs to me is the lack of a fixed endpoint. When an oil refinery ships its first load of POL, it is complete. When an aircraft carrier launches its first plane, it is complete. But the amorphous and malleable nature of software means that it is hard to define an exact endpoint, and very hard to avoid changing the definition of the endpoint as the project proceeds. So things keep "creeping" along until disaster occurs.

    sPh
    • Maybe it's harder to access Slashdot, take care of your personal correspondence, play some games, surf around for news, etc., if you're a construction worker on the clock? That alone makes estimation much easier for a construction boss.
    • Note that many of the kinds of projects you mentioned also sometimes have cost and time overruns of remarkable size.

      Note also the enormous difference between building the first 747 / skyscraper / nuclear submarine and the 15th or 1500th of each.
    • When you construct a house or a power plant, you are in business with subcontractors who can take some of the risks. It is generally accepted to set a fixed price, because the procedures involved are mostly known.

      In software, however, most projects do not rely on known procedures. It is fairly easy to estimate the costs of creating 1000 different window layouts, which is a known procedure, but it is a very difficult task to estimate the costs of implementing the layouts.

      If software projects spent as much energy estimating each new task as construction projects do, developing software would be extremely expensive. Just imagine that you had to write one while-loop according to one ISO standard and another while-loop according to a different ISO standard, because the two loops were in functions that were categorized differently by a third ISO standard. Instead we hire a bunch of programmers and let them organize the programming themselves. Sometimes we make it a little more elaborate -- Open Source, Extreme Programming, etc. -- but it's still a bunch of programmers hacking around.

      The trick is to manage it anyway -- and that's why managing software projects will always be risk management, and not very predictable.

      Lars.
    • I don't have a good answer, but one thing that occurs to me is the lack of a fixed endpoint.

      That's also a failure of management. All projects should have a requirements spec that describes exactly what the system is supposed to do.

      I think the fundamental problem is that people don't want to spend money on all the "non-coding" documentation. Good documentation can take half the time of a project. It seems so much more "efficient" just to put a horde of programmers on the project and crank out code, but it ends up costing a lot more.

    • Components (Score:4, Interesting)

      by wytcld ( 179112 ) on Monday November 05, 2001 @12:28PM (#2522831) Homepage
      Very large and complex projects do get completed, sometimes even on-time/on-budget. Examples include skyscrapers, nuclear submarines, aircraft carriers, power plants (whether conventional or nuclear), oil refineries, B-747/A-320, etc. And all of these systems nowadays have a software component as well.

      Yes, but. The important components of a skyscraper are steel beams. Put them up correctly, after calculating loads and stresses, and it doesn't matter what the twenty tons of stuff you have sitting on the 27th floor is. It doesn't matter if the beams come from different foundries, either, because the specs are clear enough (dimensions, strength, where the bolt holes are).

      Now try putting together a typically complex business software solution, meshing a bunch of different, reasonably good, existing programs and components with some custom code and configuration. Even where there are reasonably good standards spec'd in some areas of the project, if you're not solving new problems it shouldn't be a software engineering project at all - it should just be system administration using the available solutions. That it's real software engineering means you're running into unpredictable surprises where the components at hand don't fit without a great deal of extra labor.

      A parallel can be found in work on the portions of the New York City infrastructure that are under the streets: We still have wooden water mains in some places from the mid-1800s, mixed with gas, electric, steam pipes, sewer, subways, gas lines ... most of which was not documented to current standards on either installation or subsequent changes, despite most of it being reasonably well done by the standards of its time (pretty amazing, those wooden water mains still working, right?).

      So what happens when we finally go in to improve one of the services -- say, lay new water mains? Other stuff is found that's in the way where you didn't expect it, or that needs fixing on examination when you didn't expect it. Meanwhile you've got the street ripped up, but you have to cap it again quickly or traffic is snarled for too long. So a single block's 4-week project can stretch out over a year -- dig up the street, fix one problem, discover more, recap while designing and provisioning the next stage, repeat -- because it's all stuff that needs to be done once you get into it, and it can't be properly assessed until you get into it.

      Well, software in the real world isn't as old as New York, but if anything it's more complex, and the layers of crufty stuff that have to be accommodated in current projects are as considerable, and often as poorly documented by current standards (which will always advance so as to obsolete whatever we do now). Building a skyscraper, by contrast, is just a sysadmin job. Put the beams and bolts in the normal places, and it stands.

  • Yes, you can guess how long it could take, and no, there is no formula for it. I can have a great day and produce 10x as much as usual. I can be lucky and my first guess of how to do it is correct, or I can take the wrong one and lose a week. You might predict for some kind of generic person, but I know from experience that there are very large differences in how fast different people produce code, documents, etc.

    So is it all a loss? Nope, but you have to remember that it's not an exact science. It involves replanning, knowing your work force, letting the work force plan on their own, more replanning, experience, guesses, and whatever it takes. Honesty is also high up on the list, as is not trying to do huge amounts of work in one go. Heck, there is so much to this subject that it would take ages to describe it all. My suggestion is: go out into reality, work, and learn.
  • Projects != R&D (Score:2, Insightful)

    by TheKodiak ( 79167 )
    Straightforward implementation, no matter how complex, can be scheduled accurately. Developing new technology cannot.
  • This article is also at K5 [kuro5hin.org]. In fact, it's the same article. If you want to get comments from two different places, then please post the article at one place, and post a link to it at other places.
  • When making estimates, people tend to sweep under the carpet, or simplify, the things they don't know, but they can quite accurately estimate the things they've built before. That's why really large projects fail so badly: every single person involved in the project has many more unknowns than knowns to deal with.

    So, never say "How hard can that be?" before having coded up a small working prototype.
    • The problem is, you don't get paid for coding up a small working prototype in order to do an estimate. So my estimating technique is:

      Figure the time to do the parts I understand.

      Count the parts I don't understand. Allow a very long time for each of them.

      Add it all up, then multiply by 3
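
      That recipe as a sketch; the per-unknown allowance and the final multiplier are the commenter's rules of thumb, not derived constants:

      def estimate_days(understood_days: list[float],
                        unknown_count: int,
                        days_per_unknown: float = 10.0,  # "a very long time"
                        fudge: float = 3.0) -> float:
          return (sum(understood_days)
                  + unknown_count * days_per_unknown) * fudge

      print(estimate_days([2, 3, 1.5, 4], unknown_count=2))
      # (10.5 + 20) * 3 = 91.5 days
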
  • by wetdogjp ( 245208 ) on Monday November 05, 2001 @11:20AM (#2522376) Homepage

    Lewis also provides a link to this "introduction to incompleteness" (a fun subject in itself)

    I started writing a paper about this topic once, but I never finished it.

    -WetDog

  • by ciurana ( 2603 ) on Monday November 05, 2001 @11:21AM (#2522391) Homepage Journal

    My company develops turn-key systems. Sometimes we also develop custom solutions for our customers. Our customer base has increased steadily since the dotcom crash, when we switched from products to services. One of the reasons our customers like us is that we don't bill projects by the hour. We bill the project on a fixed-price, not-to-exceed basis.

    The programmers who work with us on a contract basis don't bill us by the hour either. After we have the design and we distribute tasks and prior to submitting the final estimate, we ask contractors to place a fixed bid.

    We've done six major projects like this since March, and in all cases we finished within budget and on-schedule, and the systems are currently in production. They are all mission-critical systems running in either robotics environments or high-availability networks.

    Our economic motivation is then to do things well and quickly in order to increase our profits. That also enables us to move on to the next project faster than slaving over some customer in order to bill the maximum hours.

    As far as development techniques go, we adopted XP early on and it's working for us.

    Cheers!

    E
    • My firm also does some work on a fixed-cost basis, with similar good results. I also borrow many ideas from XP.

      A key to fixed-cost is that it takes practice. Try it on a small scale before you commit to it on a larger scale, to avoid large-scale failure...
    • That is very interesting, but how do you determine the fixed price you charge the customer?
    • It would seem that with fixed-cost billing you'd need to specify rigid acceptance criteria up front to avoid the customer lobbying for "just one more feature" under the cost umbrella of the current contract.

      How do you reconcile this with the tendency of XP projects to deliver something noticeably different from the customer's original conception of their need (but that in fact fits very well the customer's need as learned over the course of the project)?

      I'm seriously interested to hear from folks who have figured out how to marry an agile development process to fixed-cost contracts.
      • [agile development process to fixed cost contracts]

        I work with the customer to divide the project up into phases / steps / iterations / releases / whatever. Group the most vital core pieces together and do them first, at a fixed cost. As requirements change, the changes either go into future fixed-cost releases, or they are done hourly if requested. Thus, the overall project is not fixed, but at each stage the customer knows what they are buying at what price, and does not have the worry of the "meter running".

        There is some related explanation (not a sales pitch) about it on my web site:

        http://kylecordes.com/story-182-shared-risk-pricing.html [kylecordes.com]
    • A couple of posters asked this question above: How do we reconcile XP short develop/test cycles with a fixed project plan + bid?

      The answer is simple: During the planning and estimate parts we focus on defining the problem domain and a set of solutions for it. We don't focus on too many implementation details.

      XP techniques are applied to solving each specific problem found in the requirements. For example, the problem may be something like "how do we decode this math-intensive file the fastest?" There are usually two or more answers to such a problem. First we define an interface, then we implement two different solutions in parallel. The one that best meets the criteria wins, and we move on to the next problem.

      The thirst for features suffered by some people is often the result of poor design choices in the beginning of the project. If additional features are required, and the analysis was done correctly, you'll find that these new features simply extend solutions you were already working on (or solved). Thus, XP comes to the rescue again by letting you add the new feature without throwing the schedule out the window. Think about it: If a new feature forces someone to re-write a whole system then something must've been overlooked during the requirements analysis phase.

      The most important part of this process is not to start coding and testing until the business requirements are clearly defined. We've been guilty in the past of coding before understanding the problem completely; we try to avoid that trap now. That is probably the single most relevant cause of software project delays.

      Cheers!

      E
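
      A minimal sketch of that "fix the interface, race two implementations" pattern; the decoder problem and both implementations here are invented for illustration:

      import time
      from abc import ABC, abstractmethod

      class Decoder(ABC):
          @abstractmethod
          def decode(self, data: bytes) -> list[int]: ...

      class SimpleDecoder(Decoder):
          def decode(self, data: bytes) -> list[int]:
              return [b for b in data]

      class BatchedDecoder(Decoder):
          def decode(self, data: bytes) -> list[int]:
              return list(data)  # same contract, different strategy

      def benchmark(decoder: Decoder, data: bytes) -> float:
          start = time.perf_counter()
          decoder.decode(data)
          return time.perf_counter() - start

      data = bytes(range(256)) * 10_000
      candidates = [SimpleDecoder(), BatchedDecoder()]
      winner = min(candidates, key=lambda d: benchmark(d, data))
      print(type(winner).__name__, "wins")
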
  • by ers81239 ( 94163 ) on Monday November 05, 2001 @11:22AM (#2522392) Homepage
    As a software developer, I would have to say that the majority of the development I have been involved in, or been aware of, is of the manufacturing variety. Most business software is a DIDO job: Data In, Data Out. Make some fancy forms and reports and you have turned a database into a "billing" system or what have you. There aren't really any new algorithms needed. Of course, there are a ton of them in use in the database server, the network protocols, etc. But you aren't developing those, just using them.

    The reasons that estimates are always wrong are *1* unclear requirements, *2* changing requirements, *3* complicated user interfaces, and *4* weak focus on testing.

    I find *1* to be the biggest difficulty. The principals of a software project like to say things like "automate timeclock operations," but as a developer, you need *A LOT* of information to do that. When you ask questions like "I understand that you do not want to allow any changes to a pay period after the checks have been cut, but then what are we going to do when travelling workers report their hours late?", management thinks you are being a pain in the ass. But if you don't get it right, your project will fail.

    I agree with taking a realistic estimate and doubling both the development and the testing estimates.
  • by dybdahl ( 80720 ) <infoNO@SPAMdybdahl.dk> on Monday November 05, 2001 @11:22AM (#2522395) Homepage Journal
    There are four parameters to a software project:

    - Quality
    - Quantity
    - Deadline
    - Costs

    In a competitive environment with humans involved, up to three can be specified. Not four. Good examples are:

    - Many guidelines for managing software projects tell you to reduce quantity when you get near deadline.
    - Some customers have a specified budget but really don't know how much software they can get for that money. They prefer to have costs fixed rather than quantity or deadline.
    - Sometimes the deadline is so important that costs may increase tenfold in order to reach it, and quality and quantity may be reduced a lot in order to finish the project.

    It is extremely important to realize the meaning of all four parameters before you can talk about estimating project schedules.

    Lars.
  • There are tons of tools and techniques for developing software. Best practices abound, in fact. Two things present in every form of good software development are analysis/design and project management. If you do the work in analysis and design, you will be capable of building a good estimate.

    That's only half the battle. Once a project is underway, keeping scope in check is critical so you need good project management. If you build a great estimate through analysis and design and then throw it out the window when you start writing code, you'll never have a good estimate.

    Where do major providers like Microsoft and even Mozilla go wrong? Simple: they either jump in and start coding before they've completely settled on what they're building, or they change their mind in development about what they're building. Either way, it screws up delivery dates.

    • > Once a project is underway, keeping scope in check is critical so you need good project management.

      Yes, requirements creep seems to be the main problem. Since software is "intangible", management seems to think that they can change the requirements when it's half done, with no adverse consequences. The same management wouldn't dream of doing the same thing with their new office building (e.g., doubling the space requirements after the foundation and half the floors have already been built).

      In general, it's a management problem from top to bottom. Start with vague requirements, disallow sufficient time and money even for a minimal implementation of those vague requirements, put underqualified and undisciplined staff on the project, change the vague requirements while the project is in progress, and go on death marches when the inevitable happens.

      Software projects aren't going to behave well until management imposes an engineering discipline on them. And the biggest issues for management are (a) deciding what they are going to do before they start, and (b) deciding before they start whether the project is worth the time and money it's going to take, and cancelling it up front if they don't like those times/costs, rather than just trimming the time/cost projections down to something that their organization finds politically acceptable and then going ahead with the project under falsified time/cost projections.
  • by Tassach ( 137772 ) on Monday November 05, 2001 @11:24AM (#2522406)
    I've been developing software professionally for about 14 years now. In that time, I've almost NEVER seen a development project get completed in the allotted time. This has been true even when the schedule has been padded outrageously to account for slippage.


    The biggest problem I've seen is requirements creep. Most often, you don't have a firm set of requirements to start with. Management and programmers both have a tendency to view requirements documents and other formal software engineering practices as superfluous. The problem is that without a firm set of fixed requirements, you are always trying to hit a moving target.

    Another problem is attitude, mostly on the part of management, but programmers are guilty too. One faulty attitude is that we are conditioned to expect immediate results. There's also a prevailing attitude that there is never enough time to do it right, but there's always enough time to do it over. This leads to undocumented, unmaintainable masses of code that get thrown away after a while.

    Even worse, you wind up with garbage code that SHOULD be thrown away and rewritten from scratch, but winds up getting patched and modified for years. I can't tell you how many times I've had a manager say "there isn't time to rewrite it, just patch it." That would be OK if you were only going to patch it once -- but you wind up patching the same program a half dozen times, and it winds up taking twice as long to do all the patches as it would have taken to rewrite it from scratch.

  • SW development is still more of an art than a science. That said, I've seen several fairly common causes for late software:

    1) Lack of up front planning - too many projects fail to do proper initial planning - specifically defining the problem to be solved, producing detailed product requirements, and a detailed project plan (and then sticking to it).

    2) Late (or incomplete) requirements - if you went to an architect halfway through home construction and wanted to change the design of the house, you wouldn't be surprised if it fell behind schedule and went over cost.

    3) Poor risk management - failure to track dependencies, too many high risk dependencies ("we'll build it on the next OS release, with the new compiler, and that SW package that our start-up partner will finish next month"), failure to make and execute contingency plans.

    4) Failure to heed Brooks's Law ("Adding software developers to a late project makes it later.")

    5) Failure to have read Deming ("You cannot test quality into a product").

    6) General design failures - not assuring that product is scalable, reliable, testable, etc.

    7) Failure to place on the team a senior developer who knows about the previous issues.
  • I've seen, and been subjected to, software-estimation techniques.

    The best defense I've heard is that "Yes, everyone's estimate will be way off, but they are independent estimates of different pieces of code and when aggregated the standard deviation drops to a reasonable value". IOW, the estimate I pulled out of my butt will be way optimistic, but your estimate will be pessimistic, and it will all cancel out.

    There are a few problems with this rather nice and neat statistical trick:

    1) As Michael Milken found out, the observations are not independent -- there are too many interactions between the components being estimated. In Milken's case, he argued that a diversified portfolio of junk bonds would have high-yield, low-risk characteristics. Unfortunately, the performance of shaky companies in a market downturn is rather strongly correlated.

    2) You need something objective to estimate. In our case, we measured the number of easy, medium, and hard member functions of classes that had to be implemented. See the problem? You need to cast your interfaces in stone -- external and internal ones -- right at the start. On simple projects this is easy, but not on hard ones, as much as we all agree it is desirable. There is something called learning from one's mistakes, and it will happen with anything novel.

    3) This presumes that the design is sound. To ensure this we reviewed and analyzed and studied, and "damnit, you indented 3 spaces instead of 4...", well you get the idea. The closest scrutiny will find the obvious bugs, but not the really tricky ones.

    4) This technique does not encourage the one thing that saves you in the face of change -- adaptive and modular design. You make things modular so change affects as little as possible, and you make things adaptive so change is as painless as possible. IOW, you plan for making changes because of mistakes. Naturally, this violates (1) above, so it is not permitted. The mantra is "Design it right the first time!" We know that we can get 95% or 99% or maybe even 99.5% of it right, but never 100%.

    In the end, sure, we "finished" on time, but, er what was finished didn't work very well, and had to be rescued by the few who knew what was going on. To be fair, the design efforts and documentation helped provide a somewhat modular system, but the really important parts weren't documented -- we had reams of paper describing the "trees", but not nearly enough describing the "forest" as it were.

    So, I'm skeptical.

    I've heard that these techniques encourage "discipline" and help mediocre programmers contribute acceptable code. Well, where I work now, we have a policy of not hiring "mediocre" programmers. I can dump a suspicious log on someone and be assured that they WILL fix the problem -- I don't have to argue that there IS a problem ("but, the process, the process says this WON'T happen... your log must be a lie...")
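
    Point (1) is easy to demonstrate with a small simulation: aggregation cancels independent errors, but a shared optimism factor defeats it (all numbers invented):

    import random

    def project_overrun(n_tasks: int, shared_bias: float) -> float:
        """Return total actual / total estimated for one simulated project."""
        common = random.gauss(shared_bias, 0.1)  # optimism everyone shares
        est = act = 0.0
        for _ in range(n_tasks):
            e = random.uniform(1, 10)            # estimated days for a task
            noise = random.gauss(0, 0.3)         # per-task, independent error
            est += e
            act += e * max(0.1, 1 + common + noise)
        return act / est

    random.seed(42)
    for bias in (0.0, 0.4):
        runs = [project_overrun(50, bias) for _ in range(1000)]
        print(f"shared bias {bias}: mean overrun "
              f"{sum(runs) / len(runs):.2f}x")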

  • Check out the CSE Center for Software Engineering [usc.edu]
    Home of ....
    • COCOMO [usc.edu] (COnstructive COst MOdel)
    • MBASE [usc.edu] (Model-Based Architecting & Software Engineering)
    • and other resources [usc.edu]
  • I remember being in my software engineering class in college the day the professor was lecturing on COCOMO (the COnstructive COst MOdel).

    He very carefully laid out the algorithm - I don't have my textbook handy, but it involved elementary mathematical operations on estimated man hours, estimated lines of code, estimated overhead, etc., then at the end -- and I am not making this up -- they multiply the result by a "magic number".

    Where did you get the magic number, oh sage of the ivory tower? Well, we just made it up -- it seems to work.

    It hit me then that the whole discipline of estimating cost completion is all bullshit. You might as well be estimating with a crystal ball or divining the future with chicken bones. Since I've been working, the best advice I've gotten so far has been "take how long you think it'll take and double it".
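    For the record, the "magic number" is a set of constants Boehm fitted to historical project data. A minimal sketch of Basic COCOMO, using Boehm's published mode constants (the 32 KLOC input is just an example):

        # Basic COCOMO (Boehm, 1981): effort and schedule from estimated KLOC.
        # The (a, b, c, d) constants are the fitted "magic numbers".
        COEFFS = {                  # mode: (a, b, c, d)
            "organic":       (2.4, 1.05, 2.5, 0.38),
            "semi-detached": (3.0, 1.12, 2.5, 0.35),
            "embedded":      (3.6, 1.20, 2.5, 0.32),
        }

        def basic_cocomo(kloc, mode="organic"):
            a, b, c, d = COEFFS[mode]
            effort = a * kloc ** b          # person-months
            schedule = c * effort ** d      # calendar months
            return effort, schedule

        effort, months = basic_cocomo(32, "organic")
        print(f"32 KLOC, organic mode: {effort:.0f} person-months over {months:.1f} months")

    Note the circularity the poster complains about: the KLOC input is itself an estimate, and the constants only "work" on projects that resemble the ones they were fitted to.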
    • I agree entirely. My software engineering class involved an overview of various estimation tools, and it was all BS. The systems used lines of code as a metric. What a joke. I remember the magic number, which is just a way of reverse-engineering historical data. The bean counters are desperate for a way to understand programmers without knowing a line of code.
    • Where did you get the magic number, oh sage of the ivory tower? Well, we just made it up -- it seems to work.

      There is nothing wrong in principle with measuring what has happened in the past, and using that to predict what will happen in the future, before you discover why it works like that.

      For instance, you might measure that, throughout the year, the average time between sunrises is 24 hours. You can use that number even though the only explanation for it that you might have is "it seems to work".

      Of course, when you apply this to software development time estimation, it falls down for a number of reasons. It's not constant across technologies. It's not constant across types of project. It doesn't take into account the variation in technological risks (i.e. if you have done something like this before, you will spend less time finding ways to do stuff). It doesn't scale linearly with the size of the project. It varies across individuals. Etc., etc.

  • Where I work, we just take any estimate and multiply it by 3 and that seems to be a lot closer to the end result.

    Of course that doesn't stop the managers from asking every day, starting on the first day, whether or not the 3 month project you're working on is complete.
  • I've been working on large, mostly web-based applications for the past five years. There are always problems with timelines for software design and development, but I must say that most of the problems I've seen are not actually related to computer science. Whenever the task of time allocation is done by a marketing person, the times are wrong. When I get into a room with a couple of other developers and look at the problem that is set for us from the very beginning -- from basically the reasons why this application is being developed and who the users are, to the point where we have to understand the data model -- the time-assignment exercise becomes much clearer. For a person with experience, it should be possible to estimate time for a task if this task resembles some of the person's experiences in the past. Of course, everything new has new risks associated with it; that is why you must always allow some slack for every specific task within the entire project. It also helps if you know who your developers are and if you have seen these people at work before.
  • WRT software schedules - seldom has so much crap been written by so many for the benefit of so few.

    The problem is that the term "software schedule" is too wide a field to say anything meaningful about it. If you want to estimate how long it will take to put together a customized ecommerce web site, and the organisation has already built 5 of them, there is no problem. If you want to solve some problem that hasn't been solved before, it could take a week or a hundred years. Recognising the difference between these two cases is less simple than one might expect. And, if there's genuinely no novelty in the problem one should not be writing software at all. Someone should just write an application to solve that general class of problems.

    People get unstuck when they break the problem down into small chunks and then guess a number for each chunk. Often the initial decomposition misses crucial interactions and needs to be refactored later on. This is a bit like answering the question of how long a piece of string is by saying: well, the string has a beginning, a middle, and an end; I estimate each piece is 5 inches long, so the string is probably about 15 inches long. Unless the breakdown has brought genuine insight into the unknown aspects of the project, the estimate it provides is worse than useless. However, since one can then stick things into MS Project, print out pretty Gantt charts, etc., this estimate is given more credence than a number generated by just reading the spec and making an educated guess.

    Part of the problem is that it's described as software engineering. Then we get all sorts of morons saying: civil engineers can tell us how long it will take to build a bridge, so the problem must be that software engineers are unprofessional, or that the subject is in its infancy and things will improve. No, they won't, for the same reason that mathematicians couldn't tell you how long it would take to solve Fermat's last theorem.
  • Scenario 1:
    Q: It has to do *this*, how long?
    A: X days (Not very accurate)

    Scenario 2:
    Q: Find out what it has to do, spend TIME specifying it, then tell me how long.
    A: X days (Can be very accurate)

    The problem I (10+ yrs pro developer) keep running into is that you figure out what it is to do, specify it very well, and then, as you start developing it and delivering pieces for review, the specification is changed and you are plopped solidly back into Scenario 1. Worse is when you think you're done and begin QA, and get SLAPPED back into Scenario 1... or even Scenario -1, where you are trying to hack your guess at how it works into how it really should work.

    M@
  • 2 weeks (Score:3, Insightful)

    by KarmaBlackballed ( 222917 ) on Monday November 05, 2001 @11:56AM (#2522633) Homepage Journal
    Ask a sharp programmer to estimate the time to develop a software solution and he might shrug and look irritated. Ask him if 2 weeks will be enough time, and there is an 80% chance he will say "of course" no matter what the task!

    Gung-ho programmers are optimists. Couple optimism with the innumerable factors involved in programming a non-trivial application and you will get what we have today.

    By the way, I am a programmer and I have little to no confidence in my time-estimation abilities, or anyone else's. It has taken me 14 years to come to grips with that.
  • by LazyDawg ( 519783 ) <<lazydawg> <at> <hotmail.com>> on Monday November 05, 2001 @12:01PM (#2522664) Homepage
    Assembling software from reusable pieces requires three things that most software companies don't typically have:

    1. Discipline. Your average programmer will have read about various programming methodologies, but skipped past the parts that would make their code an easy-to-reuse template, in favor of faster development. As with any gamble, you should know at exactly what point you want to quit, have an A-line for version 1.0's feature set, all that jazz.

    2. A big code base. Because of step 1, or maybe just a lack of previous projects, one's code base is typically limited to what you can find in a computer science textbook. Having a good database of classes and patterns that have turned out to be useful, and having easy access to this database for the information you need is the difference between a library and a code base.

    3. Incremental development. Throwing together a large software project all at once and then testing the whole thing is very tempting, and happens more often than most people like to admit. What should happen is a series of incremental integrations into the final product, with unit tests of each part. Otherwise your large project can become a giant, complex nightmare. Making complex software shouldn't be quite so complicated.

    While making a "software assembly line" takes slightly more work and trouble than your average car assembly line, it has incredible cost savings in the long run.
  • by Kefaa ( 76147 ) on Monday November 05, 2001 @12:05PM (#2522687)
    The issue is not physics versus manufacturing; it is scope and cost containment, as is done in manufacturing. As a person who has led multi-million dollar projects, I have grown used to the cliché that goes something like this:
    If we built homes like software we would all be living in the street, penniless...

    The major issues I have seen revolve around a lack of scope and cost control. In many cases it is because there is little penalty for being late or over budget. In cases where penalties exist, it is often beneficial to overestimate the effort or cost required. Then, once the money is approved, using it becomes easy.

    Going back to the analogy consider the following:
    Scope
    If you are building a house, each piece has a specified cost, known in advance to a very large degree. In addition, altering the scope itself often incurs a penalty, because the work is not done by the owner. You plan a three-bedroom, 1.5-bath home. Midway through planning you decide to make it a two-bath home instead. The architect will charge a "re-scoping" fee and the builder will add the material fee. Now do the same after construction has begun. The architect gets their fee, and the builder adds the material and resource costs, plus a "revision" fee for changing your mind after construction begins.

    During a software project, it is common for individuals to approach the developers and ask to expand the scope. This would be analogous to approaching one of the work crew and asking them to just add the extra half a bath. The difference is the work crew would get fired, and the developer gets bonus points for adding the feature, either directly or indirectly.

    If the developer chooses not to do it, or pushes it up to the project manager, the client may label them uncooperative or difficult to work with. The project manager, not wanting to be labeled either, may coerce, cajole, or beg the developer to accomplish it without a scope revision. Failing to hold the line has real financial impact at some point, yet the developer is offered little incentive to hold it.

    Cost
    I call this the "Porsche syndrome".

    I go into the Porsche dealership and see a new 911 Carrera Coupe. Smiling, the dealer offers to sell it at a deep discount: with options and accessories, $84,000 (U.S.). Whewwww baby!!! I cannot afford that. "Look," I tell him, "my wife will never approve that; you need to get it down to $28,500 tops." Would any of us expect to have the price cut down? By half or more?

    Okay, how about: "Look, what will it take to get it under $30,000? Seriously now, what do I have to give up?" As the dealer is escorting me to the door, he explains that the only way I will get this car under $30k is with a mask and a gun, or from a scrap metal dealer.

    Yet daily we go to developers and tell them to do the same. We ask for an estimate and then go back with "This is too much, it needs to be smaller or it won't get approved!" --Insert blank stare here-- The idea that if something cannot be cost-justified it should not be done is often lost in the "request" itself.

    To nearly guarantee a project is on budget and on time requires things many companies are unwilling to provide: strict scope-control procedures, with oversight by the person responsible for the money. That means each change, regardless of how trivial, must be approved by someone above the project management team, with business justification. It also means that requests for scope change cannot be made to developers directly, by anyone.

    I was very happy with the people who built my home. When speaking to many of my friends and coworkers who built their homes, they describe it as a process akin to having their flesh removed. Everything required such effort and detail that many would not do it again.

    Most of them were looking for the relationship to be like one at the office, where we all want to get along and help each other out. But building a house is a commercial arrangement, and when we put the same commercial context around software, we see that many offices lack structure.

    Internal organizations can be set up like commercial ones, but it is usually unwelcome, as the perception is that everyone should be working for the greater good of the company and this has the appearance of bureaucracy. Even if that perception is inaccurate, everyone "wanting to get along" prevents it from being implemented.
    • First, you can easily get a Porsche 911 Turbo (or whatever they call it today) for under $30K. I can get you one right now for $12K. Do you require the new vinyl smell and no minor paint scratches? Are you willing to pay $70K extra to avoid those paint scratches and 80K miles on the clock? It's still a Porsche 911. It still acts like the shiny new one; it just needs to be watched, maintained, and cared for.

      The same can be done in software. The solution can almost every time be a used or older product. The only things you gain with a shiny new product are maybe performance -- and that is a big maybe -- and trouble-free operation... well, in software that is not the case.

      The simple fact is you can have what you want if you widen your scope.

      I drive a 911 Turbo and I have a Testarossa in my garage. I also only make $40K a year and live in a 980-square-foot house in the nicest neighborhood (not rich-snob land) in my city.

      I also have more spending cash than my $180K-a-year friends; they will never own a Testarossa (unless I sell them mine, HA!) and will probably never drive a 911 Turbo as their daily driver (except in winter).

      Why do I succeed and they fail? I have what I want; I got my toys. I also only have a $700-a-month mortgage; they have $2,000 a month. My cars are paid for and older (the Porsche is a 1989; the Testarossa is a 1986 and needs a transmission and interior -- I need to install the rebuilt tranny and have the leather seats redone). My boat is from the early '90s; they pay $600 a month on their boat and $900 a month on their Lexus, and I have to take them out once a month because they can't afford to have fun.

      Only a very rich man or a moron demands to have the NEW item when a used or older item is a perfect substitute.

      Work asked me to find out how much it would cost to replace our SQL 6.5 servers with SQL 2000. 50 user licenses for 10 servers...

      I asked why; their response was "because it's new."

      They wanted to spend $20,000.00 for no reason whatsoever... and in fact it would have caused downtime, as the software that relies on the SQL server is not compatible with SQL 2000 yet.

      That is the Porsche syndrome: spending money foolishly and for no reason whatsoever. Unfortunately, I.T. and I.S. are rife with stupid spending.

      Upgrading when needed is important. I fully support spending money when it increases reliability and productivity, thereby positively affecting cash flow. Spending it only for bragging rights, or because there's a new one available?

      That's the failure of many people and companies today.
  • by pberry ( 2549 ) <pberry&mac,com> on Monday November 05, 2001 @12:33PM (#2522862) Homepage
    As long as your definition of what you are doing is sane. Everyone who hasn't read Joel Spolsky's essays on software development should... not to follow like sheep, but merely to gain perspective and see if any of what he says works for you.

    Painless Software Schedules [joelonsoftware.com] is a great one and you will get sucked in just following the links from this one essay.

  • Reductio (Score:3, Interesting)

    by hey! ( 33014 ) on Monday November 05, 2001 @12:42PM (#2522900) Homepage Journal
    OK, what I take from this is that you are talking about a method by which software project times can be predicted accurately. Suppose we had such a method. Since it is a method which takes inputs and produces outputs, it can be described as an algorithm. Since it is an algorithm, it can be represented as a software program which predicts completion times. So far so good.

    Next, get together a team of programmers. Set them to work on a program which proves {insert your favorite unsolved mathematical conjecture here}. It turns out you actually don't need the team at all: just run your software project estimator, and if it comes out with a finite amount of time to complete the program, you know that the conjecture is true.

    In other words your software estimator can be used to solve the halting problem.
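    Spelled out as pseudocode in Python syntax (perfect_estimator() and violates_conjecture() are hypothetical -- the point is precisely that the first one cannot exist):

        # If a perfectly accurate schedule estimator existed, it would decide
        # open conjectures, i.e. solve a cousin of the halting problem.
        def search_for_counterexample():
            n = 1
            while True:
                if violates_conjecture(n):   # halts iff the conjecture is false
                    return n
                n += 1

        spec = "implement search_for_counterexample() and run it to completion"
        estimate = perfect_estimator(spec)   # the estimator we are told exists

        if estimate < float("inf"):
            print("Conjecture is false: a counterexample will be found.")
        else:
            print("Conjecture is true: the search never halts.")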

    OK, this is a joke, but it points to something about the question. I once had a CS professor who required that we write requirements statements for all of our assignments. She forbade us to include halting times, because "you can't predict whether a program will halt or not." To which I wanted to reply: "About that 'hello, world' assignment..."

    The lesson is that there are some cases to which a rule like this applies and others to which it does not. There are some projects that can be estimated with simple tools, some that can be estimated with complex tools, and some that are not practical to estimate at all. Even fairly seat-of-the-pants estimates work pretty well on relatively simple problems, provided you break things down a bit and do an honest estimate of the costs of the individual deliverables and the individual functions you know you'll need to make them work. About the only methods that never work are pulling a number out of the air based on how much the project scares you, or using wishful thinking (whether the source is your boss or you). Nobody can give good estimates when you spring the question on them with no time to prepare. My boss's favorite (and my least favorite) questions start with "how hard would it be..." and my favorite (and his least favorite) answers start with "It depends..."

    Nonetheless, my experience with past projects of the kind that I do means I can do a pretty good job with relatively unscientific tools, provided the problem is like one I've solved before. However if you are writing software for space flight or some other kind of highly complex mission, I could estimate until I was blue in the face and it wouldn't be worth a damn. You want to hire somebody with experience in such projects and who has methods of estimation well calibrated from similar past projects.

    I think the particularly difficult cases are ones involving software maintenance -- extending software to perform things that weren't originally factored into the design, or adapting the software to run when the systems it depends upon change in some unpredictable way. These are cases where surprises can throw the best-laid estimates well off.
  • by MarkusQ ( 450076 ) on Monday November 05, 2001 @12:42PM (#2522902) Journal
    The huge gotcha, that IMHO makes most if not all schedules fantasy, is that people talk about how long it will take to finish coding when what they are really interested in is the time it will take to have the code finished and debugged. Of course, the time it takes to have debugged code depends on things like:

    * A tester or test suite exhibiting the bug

    * Someone recognizing that it is a bug

    * Enough data being gathered to define the bug ("It hangs sometimes" or "I don't think the results are always correct" doesn't cut it).

    * Enough eyeball hours to find the bug (this in itself makes the process equivalent to solving a crime. Do we ask the cops to schedule crime solving?)

    * About two minutes (average) to devise and implement a fix

    This has to be done for N bugs, where N is unknown. People who think you can estimate software development schedules with any accuracy are either dreaming or assuming that they just have to estimate how long it will take to get it coded, not how long it will take to get it working correctly.
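    A toy model of this, with invented numbers, shows where the schedule variance lives. Even holding the two-minute fix time fixed, the total is dominated by the unknown bug count N and the time to find each bug:

        import random

        # Toy model (all distributions invented): debug time = N bugs times
        # the time to find each one, plus ~2 minutes per fix.
        random.seed(1)
        totals = []
        for _ in range(10_000):
            n_bugs = random.randint(5, 80)                  # unknown N
            find = sum(random.expovariate(1 / 6) for _ in range(n_bugs))
            fix = n_bugs * (2 / 60)                         # ~2 minutes per fix
            totals.append(find + fix)

        totals.sort()
        print(f"median debug time: {totals[5_000]:6.0f} hours")
        print(f"90th percentile:   {totals[9_000]:6.0f} hours")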

    -- MarkusQ

  • by Xiver ( 13712 ) on Monday November 05, 2001 @12:44PM (#2522910)
    The problem with estimating development time lies mostly in the management's concept of software development. I was hired to work on a project that was estimated by management to last two months. My estimate was four months and the actual time it took to complete was over a year. Why could I not meet the project deadline?

    The customer claimed it was because I could not seem to fully complete a component of the project. What they really meant was that I could not fully complete a component before they would request a change to it that, in some cases, required a complete rewrite. They didn't think it was a big deal to add a button here or there in the application; after all, it was only a button. Never mind the fact that each of those buttons required stored procedures to be written and existing stored procedures to be altered. They would get upset that I could not make their requested changes in a day when they wanted to completely alter the way the interface to the application worked.

    The bottom line is that most people who don't know anything about software development don't think it is a big deal to add a feature here and there at the end of the development cycle. I try to equate software development to carpentry: sure, I can add another door in the center of those cabinets, but don't expect it not to affect the other doors and their spacing.
  • by andy4us ( 324798 ) on Monday November 05, 2001 @01:05PM (#2523041)
    One of the greatest criteria for a good programmer, whether it is the quality of the code or the ability to estimate a schedule, is humility. Part of the problem people have when estimating a schedule is that they think they are Superman: they think they are so good that the complex task in front of them is trivial. These people tend to have very buggy code as well (normally from insufficient testing). All programmers suffer from this to some extent. I've also noticed that these people tend to never use libraries, since they can write a better one, but then use up all their scheduled time rewriting libraries and never actually working on the project.

    Personally, I tend to do the best hourly breakdown I can and then double it before submission. This is normally not too far wrong (say, one week off on a 3-month project). The doubling factor allows for inaccuracies, meetings (which really do take time!), and spec changes. I may add more "fudge factor" depending on my feeling for how well the spec is sorted out and the quality of management (i.e. weak management will allow spec changes every week; good management will filter well).

    ANdy
  • PM Estimates (Score:3, Insightful)

    by Martin S. ( 98249 ) on Monday November 05, 2001 @01:42PM (#2523274) Journal
    PM: How long to do this work?
    ME: How about a spec?
    PM: You're kidding :) I only want a rough guess.
    ME: Roughly 6 weeks.
    PM: Nah, too long; we'll never get that past the customers. Let's call it 4 weeks.
    ME: Not again! Remember what happened last time, when you chopped my estimate?
    PM: Don't worry, I won't hold you to it, this time!

    PM: That work finished?
    ME: NO, two more weeks.
    PM: You said 4 weeks; look, here it is in the plan.
    ME: I said 6. YOU said 4 weeks, and that you wouldn't hold me to it.

    PM: The only thing I can fault you on is your estimates; they aren't very good.
    ME: You £$%&* git !!!

    And practically every project manager does the same thing.

    Why engineer failure into the plan?
  • by remande ( 31154 ) <remande.bigfoot@com> on Monday November 05, 2001 @02:15PM (#2523534) Homepage
    Software development isn't always like physics--often we are boldly going where people have gone before. However, certain factors in software houses cause underestimations:

    Underestimation as a Marketing Tactic
    AKA "Vaporware". Even if marketing knew when a product would be shippable, a particularly cinical marketing department may claim it to be earlier, thus freezing competitor's development.

    Lack of Feedback (Moving Targets)
    Software engineers are particularly bad at estimating because they have never done what they estimated. They are given a large project, give a large estimate, start working on it, and the project changes in the middle in a major way. This is a moving target; the estimate no longer applies. Major law of software development: You cannot change the spec or the development team on the project without impacting the real ship date. If you don't re-assess the estimated ship date, you are simply fooling yourself. Thus, they don't have any clue whether they hit the estimate or not. One way to defend against this is to break the project down into bite-sized pieces and estimate them; a small piece gives you a chance to do precisely what you estimated. Once you have that, you can have somebody track your estimates, and come back saying something like "On average, you go one third over your estimates. Add a third to your estimates from now on, and we'll be accurate".
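    That feedback loop is simple enough to automate. A sketch, with an invented task history:

        # Track estimate vs. actual per bite-sized task, then derive a
        # personal correction factor for future estimates. Data is invented.
        history = [  # (estimated days, actual days)
            (2, 3), (5, 6), (1, 2), (3, 4), (8, 10), (2, 2),
        ]

        estimated = sum(e for e, _ in history)
        actual = sum(a for _, a in history)
        factor = actual / estimated

        print(f"correction factor: {factor:.2f}")
        print(f"a new '10 day' estimate really means ~{10 * factor:.0f} days")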


    Management Estimates
    Often, engineers don't do the estimate. The management or marketing people tell you what must be done, and how long you have. Sometimes this is done explicitly; other times, management may have a number in mind and shame a software team into agreeing with it by laughing off any number that doesn't match theirs. Business people often negotiate the ship date with the geeks, like they negotiate with any other vendor. To a suit, vendor negotiations are how you determine the "margin", or how much the vendor is making (like when you buy a car, you and the dealer come to a number that determines the dealer's margin). This doesn't work in in-house software development, because geeks hold back precious little "slack" or "margin" (they don't get paid profits, they get paid salaries); in a decent shop, geeks program at flank speed all the time and always give the project 100%.

    See Ed Yourdon's Death March or any of Kent Beck's Extreme Programming books for more details, and ways to avoid the above traps. Yourdon suggests that the head geek has to take a hard stand in scheduling to prevent business interests from setting both the project spec and the ship date. He especially tells you never to negotiate schedule, and to help the suits understand why you never do. Whatever number you estimate doesn't affect the actual ship date, so playing with that number is simply fooling yourself.


    Extreme Programming actually has a "planning game" (sort of a ritual dance) which places business interests and geeks on the same side of the table. Two big rules are "The geeks may not reject any part of the spec" and "The suits may not reject any part of the estimate". Once the suits set the spec, both teams break it down into pieces-parts, line them up in order of what gets done first and the geeks give their estimates. From there, the suits can choose the ship date (and can instantly see how much product will be ready by then), or can choose a certain amount of project completion (and can instantly see the ship date). The fun part about this method is that the suits can change their minds at any time by changing, adding, or removing pieces-parts, and can instantly see how that affects the ship date. The other fun part is that breaking up the project into pieces-parts allows developers to do a (small) project they estimated. This allows people to track estimated versus real time, and to give developers feedback that lets them make better estimates. Such a team will start off with bad estimates like everybody else, but they will be able to improve rapidly.
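    The arithmetic behind that trade-off is nothing more than a running total over the prioritized list. A minimal sketch (the feature list is invented):

        # Planning-game arithmetic: given prioritized features with developer
        # estimates, read off either the scope that fits a date or the date a
        # chosen scope implies. Features and estimates are invented.
        features = [  # (name, estimated weeks), in business priority order
            ("login", 2), ("catalog", 4), ("cart", 3), ("reports", 5),
        ]

        def scope_by(weeks_available):
            done, used = [], 0
            for name, weeks in features:
                if used + weeks > weeks_available:
                    break
                done.append(name)
                used += weeks
            return done

        def weeks_for(wanted):
            return sum(weeks for name, weeks in features if name in wanted)

        print("ships in 8 weeks:", scope_by(8))
        print("login+cart+reports:", weeks_for({"login", "cart", "reports"}), "weeks")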

  • by cr0sh ( 43134 ) on Monday November 05, 2001 @02:24PM (#2523601) Homepage
    "Suffering" from it right now, AAMOF...

    1. Programmer comes up with new system in spare time while learning a language. New system, if polished, would actually make a nice application to sell to current clients. Programmer is excited, and shows "product" to higher-ups.
    2. Higher-ups are excited, can see it may take a bit more work, and look into what it would take to get it to market. They tell sales and marketing to go see the programmer to have him demo it to them.
    3. Programmer is excited, shows it to sales and marketing. Sales and marketing love it.
    4. Months pass. Unbeknownst to the programmer, sales and marketing have sold it to a client, as part of the contract, to be a finished package by the end of the year - OR ELSE.
    5. More months pass - higher-ups finally tell programmer, and others, that this new system is wanted - and oh, BTW, it is wanted in Java - not in the VB it was shown in.
    6. Three months are left to complete the project. Original programmer knows little Java. Other Java coders know little Swing. Architecture of app is changed from a simple app to a three-tier client-server system. Only two other coders have sufficient Java experience to code on it. The lead of the project knows no Java, and only takes notes at meetings.
    7. Twenty-one days until deadline (i.e., it has to be in QA in 21 days) - everyone sweating bullets, knowing it can't be done. Oh, and BTW, at every meeting it seems like a new section not planned for is realized...

    It was an ad-hoc system, and it is progressing as an ad-hoc system - a system that should have NEVER been shown to marketing and sales. I am not the programmer who originated it, but suffice to say it is a system that will be nice for our clients once it is completed. Fortunately, it sounds like things will be able to be smoothed over if we miss the deadline...

    So remember, all you budding coders out there - if you create something in your "learning" time - don't show it to anyone BUT other coders. If marketing and sales come around, have them sign an NDA promising not to sell it or something - you don't want to release a product to market before it is done - quit "selling" vaporware!!!
  • by remande ( 31154 ) <remande.bigfoot@com> on Monday November 05, 2001 @02:27PM (#2523621) Homepage
    My understanding of the paper is "Software estimation has been proven to be impossible for any formal system."


    Now, this paper makes a hell of a lot more sense to anyone who's read Hofstadter's Godel, Escher, Bach, and I suspect that many, even most, Slashdotters have read that one.


    What makes the paper irrelevant is that we don't use formal systems to estimate software. We use our own head. We use hunches. We use intuition. These things are informal systems, capable of forms of reasoning that no formal system can achieve. That's what Godel proved.


    The paper is saying that you can't take a spec, give it to an estimator program, and have the program write the estimate. You can give the spec to humans who write estimates for parts of it, feed that into an estimator program (like a spreadsheet), and you can get an estimate, but you simply cannot remove the human from the loop.

  • by Kope ( 11702 ) on Monday November 05, 2001 @02:35PM (#2523663)
    The article presents an interesting argument for why a completely new software project must have an arbitrarily large upper bound for time/quality estimates and can have no lower bound.

    But herein lies the rub -- exactly how many software systems are "completely new?"

    Damn few!!

    The average software project in an average industry will be primarily a repackaging of previously solved problems. The majority of integration tasks will be sufficiently similar to previous integration tasks as to be known.

    You will be left with a small number of "sub problems" which are unique and new. But now we have a situation where the caveats of the article are very important. Specifically, if we have decomposed the programming tasks to a sufficient degree, it should be the case that the estimation is tractable.

    Also, it should be noted that the author assumes a good estimate is one obtained through formal methods and objectively defensible. However, in project management, a good estimate is defined as one that is believable and acceptable to all stakeholders in the process. The method for obtaining the estimate is not important.

    Moreover, good project management will include some significant up-front analysis. One common practice (at least common to companies with good PM track records) is to run "Monte Carlo" simulations of project work with large variances in scheduled-versus-actual work. With a run of a few thousand simulations, the processes that are most important to the time and budget performance of the project can be identified.

    These "key" work packages are often non-obvious without this type of simulation work. However, with a good work breakdown structure and a good simulator, it is possible to generate a reasonably accurate picture of project performance based on what is not known.

    This means that in the "real world" of business, the article's claim is irrelevant!!

    We don't NEED objectively defined and defensible estimates. Instead we need estimates that the project stakeholders (which includes the people doing the work) can agree to.

    We don't NEED our estimates to be generated by formal methodologies. Subjective estimates backed up by years of experience are just as good, and often better, from a planning perspective.

    This whole article strikes me as another programmer trying to show how dumb the business people are. Hey folks, good business people KNOW that estimating is hard and that it isn't objective. But just because something isn't objective doesn't mean it can't be done well. It is possible to build models that compensate for unknowns if you can do enough decomposing of the problem to limit the unknowns to a well-defined, small, manageable few.

    So, in the view of this PM, this is all just academic and has no bearing on the real world.
    • The article presents an interesting argument for why a completely new software project must have an arbitrarily large upper bound for time/quality estimates and can have no lower bound.

      But herein lies the rub -- exactly how many software systems are "completely new?"

      Damn few!!

      Unless you're merely doing maintenance on an existing program and know exactly what you need to change, what you are doing is new. Especially if you are trying to fix a problem with a software package that you are not familiar with.
      The average software project in an average industry will be primarily a repackaging of previously solved problems. The majority of integration tasks will be sufficiently similar to previous integration tasks as to be known.
      If that was the case we would be able to make better estimates. This is almost always not the case.
      You will be left with a small number of "sub problems" which are unique and new. But now we have a situation where the caveats of the article are very important. Specifically, if we have decomposed the programming tasks to a sufficient degree, it should be the case that the estimation is tractable.
      Software development is an art form. You can hire someone to paint your house and he can tell you exactly what it will cost. But this presumes that the house is already built and is an exact structure before he starts; that you don't rebuild the house while he is painting it; that you don't change the paint color in the middle of the job; that you don't ask him to remove the previous coat of paint; etc. Otherwise it's akin to doing the Sistine Chapel without even an image to start with. An unlimited job results in an unlimited requirement. Until someone pulls the plug.
      Also, it should be noted, that the author assumes that a good estimate is one obtained through formal methods that is objectively defensible. However, in project maangement, a good estimate is defined as one that is believable and acceptable to all stakeholders in the process. The method for obtaining the estimate is not important.
      It is if you want it to be realistic. Usually the estimate is either totally unrealistic or it's manufactured from whole cloth.
      Moreover, good project management will include some significant up-front analysis. One common (at least common to companies with good PM'ing track records) is to run "monte-carlo" simulations of project work with large variances in schedule-v-actual work. With a run of a few thousand simulations, those processes that are most important to the time and budget performance of the project.
      This is ridiculous. If management knew what it was doing, we wouldn't have so many businesses run themselves into the ground, and the dot-com bubble would never have happened in the first place.
      These "key" work packages are often non-obvious without this type of simulation work. However, with a good work breakdown structure and a good simulator, it is possible to generate a reasonably accurate picture of project performance based on what is not known.
      Asking for estimates on the development of art work is ridiculous unless you have fixed guidelines and an exact idea of what you want, something which is usually lacking.
      This means that in the "real world" of business, the article's claim is irrelevant!!
      If it's irrelevant, why is it that in the "real world" more than 3/4 of all projects run over time and over budget, and something near 1/2 end up being cancelled?
      We don't NEED objectively defined and defensible estimates. Instead we need estimates that the project stakeholders (which includes the people doing the work) can agree to.
      You can get people to agree to anything. The question is whether the estimates are anything close to accurate. In most cases, they are not.
      We don't NEED our estimates to be generated by formal methodologies. Subjective estimates backed up by years of experience are just as good, and often better, from a planning perspective.
      True. But the problem is, most places don't know enough about what they are doing or how it is defined to be able to give any kind of reasonable estimate. If you don't measure what's going on, and you do everything in an ad-hoc style, you will get estimates that are essentially about as valid as rolling dice to get an answer. And maybe less valid than that.
      This whole article strikes me as another programmer trying to show how dumb the business people are.
      It is not that business people are dumb; it is that we are failing to make adequate estimates and to stand up for them based upon what we know to be correct. But again, since the measurements of what is being done are often missing, the estimates are usually nothing better than seat-of-the-pants guesses, and wildly wrong.
      Hey folks, good business people KNOW that estimating is hard and that it isn't objective. But just because something isn't objective doesn't mean it can't be done well. It is possible to build models that compensate for unknowns if you can do enough decompossing of the problem to limit the unknowns to a well defined, small manageable few.
      If that were the case, why is it commonplace for managers to demand increases in functionality and cuts in the schedule? Because those who hear the estimates think they are overly padded (and therefore should be cut), and those who make the estimates don't have the means to show where they got the numbers from (and therefore can't show why their estimate is even close to correct, when it probably wasn't anyway).
      So, in the view of this PM, this is all just academic and has no bearing on the real world.
      Believe that if you will; the way things really happen in the world proves otherwise.

      Paul Robinson <Postmaster@paul.washington.dc.us [mailto]>

  • by the_great_cornholio ( 83888 ) on Monday November 05, 2001 @02:40PM (#2523700)
    As silly as this paper is, most responses to it are off-topic. What it is trying to show is that there is a good case for saying there is no general, algorithmic way to estimate how long a given software project will take. What it isn't saying is that you cannot make reasonable estimates on a given project.
  • by sulli ( 195030 ) on Monday November 05, 2001 @06:36PM (#2524968) Journal
    "You tell me the month, I'll tell you the year"
  • by Anonymous Brave Guy ( 457657 ) on Monday November 05, 2001 @06:50PM (#2525045)
    With objective schedule estimates, projects should never run late. Are these failed software projects not using proper software engineering, or is there a deeper problem?"

    Yep, there's a deeper problem, and it's very simple. Suppose your manager asks you for an estimate, and you say "six months" because that's how long you think it will take. Your manager works out that the project will not succeed if it takes six months, and asks you if you can do it in four. If you say "Yes", you have just become a statistic.

    Saying yes does not mean that you can do it if you couldn't before, it just means that you have lied to management, prevented them from doing their job properly. If your project would take six months, but it will not make money if it takes six months, then you simply should not start that project. Failing to realise that simple fact is the major cause of late/failed projects, IME.

  • by soft_guy ( 534437 ) on Monday November 05, 2001 @07:46PM (#2525255)
    Software engineering is like any other kind of engineering: you *can* create a realistic schedule that you can follow. I have worked on a large number of software projects. Some hit their dates, others did not. I have identified certain preconditions that have to be met if you want to hit your date. (Not that these are profound -- pretty much everyone would agree this is common-sense stuff -- it's just that often the conditions aren't met, which causes late projects.)

    First, the customer (whoever is calling out the requirements) can't be changing the requirements insanely. This one should be obvious, but I've experienced a large number of situations where management changes the basic premise of what they want regularly and is surprised when this impacts the schedule.

    Second, any external dependencies have to be met in the timeline called out in the schedule. I worked on a project where we had to deliver a server that talks to our customer's other servers using a proprietary protocol. The customer asks, "Can we have it by x date?" Our response: "Yes, if you can give us the documentation for your protocol and access to a testbed by x-y date." They delivered their end of the bargain (extremely) late, causing us to be late. ("But you said you could hit the date!") Go figure!

    The third precondition is that the program manager should not be an idiot. This person needs to be very technical; people who are former developers usually do okay. As a rule, people whose total background is as a marketing assistant or a receptionist(!) usually do not make good program managers. (The receptionist I had as a PM didn't do too badly, because she understood that she didn't know anything about it and let me - the lead dev - call the shots.) This person should have been around the block a few times, and should aggressively and tenaciously track down any risky issue or "gotcha" as soon as it is uncovered.

    If you have those three preconditions met, then typically you can hit your date.
  • by NoHandleBars ( 10204 ) on Monday November 05, 2001 @08:00PM (#2525298)
    With a little over 20 years experience of managing very large software projects for Fortune 500 companies I can identify the root cause for the spectacular successes and the colossal failures: Scope Creep.

    If the business requirements have been properly defined and management discipline exercised to keep within the original scope, every estimate I've developed -- using a variety of methods over the years -- has been successful. But those instances where the specs continually change, the business requirements are "discovered" along the way, and/or new requirements are added to the mix have all been failures. This has been true whether I've led teams doing something "no one's done before" or the "same old thing" yet again.

    Kudos to everyone here who has posted information on the REAL solutions, in the form of risk management, scope containment, good old-fashioned discipline, and the like.
  • by Baldrson ( 78598 ) on Tuesday November 06, 2001 @01:59AM (#2526398) Homepage Journal
    Of the question "Can software schedules be accurate?" I can only say, it depends on how much new stuff has to get done.

    To take a reductio ad absurdum:

    You are given the task of duplicating the functionality of Windows NT. Furthermore, you are given the source code for Windows NT in a .tgz file and the associated development environment within which that source code can be tested. The question now degenerates into "How long does it take me to copy the tgz file?" That can be accurately predicted by measuring how long it takes to copy files on that environment in general, and the estimated schedule can be predicted to absurdly high degrees of accuracy with enough benchmarks of the system's file copying performance.

    Here's another reduced complexity angle:

    Take a program written in Visual Basic and convert it to C++ (readably).

    You actually can sit down and convert a sampling of the program and get a measure of how long it will take you to do the whole thing -- the more you sample, the more accurate the measure right up to the point where you have converted the whole thing.
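    That sampling approach is just a ratio estimate. Sketched with invented numbers:

        # Convert a sample of the program, time it, extrapolate by size.
        total_lines = 48_000      # whole VB program (invented)
        sample_lines = 3_200      # lines converted so far
        sample_hours = 41         # time spent on the sample

        rate = sample_hours / sample_lines
        estimate = rate * total_lines

        print(f"estimated total: {estimate:.0f} hours "
              f"({sample_lines / total_lines:.0%} sampled so far)")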

    Here's another example with a bit less reduction in complexity:

    You are given a working program but no source code, and some expert users of that program. Here we are getting into what might be thought of as "function point analysis", but really it is much easier and more accurate than that: since the program exists and works as it is "supposed" to work, you can bang away on it, and the expert users can bang away on your version of it to ensure it meets their needs -- perhaps discovering that some of the features in the old program were not really used, thereby simplifying the task.

    Each step has been away from the "absurd" position of simply copying a program which was, in a sense, a "spec" for itself.

    At the other extreme, we get to the problem of "write a program that will make me as rich as Bill Gates". Note that this specification is not very specific.... it is very far from being source code for a program you can simply copy, isn't it? Guess what that says about the accuracy of the schedule?

    So a lot of this hubbub about estimating software schedules is really hubbub about the nature of the program specification process.

  • by noisebrain ( 46937 ) on Sunday November 11, 2001 @08:57PM (#2552170)

    It looks like several people (well, more than several) posted responses without reading beyond the lead-in. If you're one of them, yes, the argument here is in the general ballpark of "software estimation is hard or impossible", but it actually says something more specific than that.

    The article does NOT say the following:

    1. software estimation is impossible
    2. objective software estimation is impossible therefore software estimation is impossible

    The article DOES say

    • Various software engineering authorities claim that objective software estimation is possible [paragraph 3, quotes on first page].
    • objective software estimation is in fact not possible [body of article]

    From this, it does NOT conclude either of the points 1,2 above. Instead, it concludes:

    • Software construction is inherently creative and subjective, having more in common with physics than manufacturing; software estimation is inherently subjective [conclusion, Bollinger quote].
    • Because software is used in government, in vehicles, and in other places where it can potentially have a negative effect on people's lives, we (software writers) have an ethical responsibility not to over-represent our ability to estimate (especially when it comes to estimation of software quality -- see the correctness claim in the supplementary material).

    Now some of the response posts, paraphrased:

    • "The article says that estimation must be objective rather than subjective"
      No, it does not say this.

    • "The article says that subjective software estimation is not useful"
      It also does not say this.

    • "The article says that we are looking for exact answers, not estimates" or "the article doesn't understand what `estimate' means"
      No, the article distinguishes subjective and objective estimates, and specifically discusses the case of an objective estimate with bounds in detail.

    • "People/organizations can make accurate estimates, I made one last week" or "Estimation is hard, I double my estimates and still miss them".
      Ok, but slightly off topic: the article is specifically talking about those who claim objective estimates.

    • "You can do objective estimation, and I did it last week using COCOMO"
      And where did you get an objective estimate of the complexity of a new project? Read the article...

    • "I think I'm the only person who has read this far".
      Yes, you are. Your boss is monitoring you, get back to work.

    • "Software estimation needs common sense, not advanced mathematics."
      Certainly. The 'manufacturing' camp of software estimators (Humphrey quote in the supplementary material [idiom.com]) says or hints that software construction can be made into a repeatable, fairly boring process where projects are always on time and programmers are like factory workers. This may or may not be true (I don't think it is), but regardless: to make this view seem more science than philosophy, some of these people have fallen into the trap of cloaking their estimating process in formal notation and claiming or hinting at objectivity. This part is wrong.

      On the contrary, [conclusions to the article and the supplementary material]:

      Good estimation therefore requires experience and judgment. The software industry should value human experience, intuition, and wisdom rather than claiming false objectivity and promoting entirely impersonal "processes".

E = MC ** 2 +- 3db
