Software Code Quality Of Apache Analyzed

fruey writes "Following Reasoning's February analysis of the Linux TCP/IP stack (putting it ahead of many commercial implementations for its low error density), they recently pitted Apache 2.1 source code against commercial web server offerings, although they don't say which. Apparently, Apache is close, but no cigar..."
  • by Marx_Mrvelous ( 532372 ) on Monday July 07, 2003 @10:25AM (#6382652) Homepage
    Why don't they fix them? It seems almost paradoxical: if you find .53 errors per thousand lines of code and fix them, then you'll have 0 errors. But since we can only fix errors we can detect, we only detect errors we can fix. Ok, it's too early on a Monday morning...
    • by dkh2 ( 29130 ) <`moc.hctIstiTyMoDyhW' `ta' `2hkd'> on Monday July 07, 2003 @10:48AM (#6382824) Homepage
      Sure, they found them, but did they catalog them in any way? .53 errors/KLOC translates to approximately 1 error every 1,886 LOC on average. On top of that, on further investigation, which of these are actual errors and which only look like errors?

      I'm just glad I'm not the poor coder who has to go through the code to find and fix these few "errors."
    • by arrogance ( 590092 ) on Monday July 07, 2003 @11:05AM (#6382930)
      Defect Report [reasoning.com]

      Metric Report [reasoning.com]

      They make you fill out a form that asks for your email, with an opt-out checkbox at the bottom (you have to check it to NOT get spam from them). The site's a bit slashdotted right now, though.

    • by Jeremy Erwin ( 2054 ) on Monday July 07, 2003 @11:07AM (#6382942) Journal
      If you download the defect report (available from here* [reasoning.com]), it will explain exactly where the bugs are.
      For instance, the first bug is

      DEFECT CLASS: Null Pointer Dereference
      DEFECT ID 1
      LOCATION: httpd-2.1/modules/aaa/mod_auth_basic.c:291
      DESCRIPTION The local pointer variable current_provider, declared on line 235, and assigned on line 257, may be NULL where it is dereferenced on line 291.
      PRECONDITIONS The conditional expression (res) on line 253 evaluates to false AND
      The conditional expression (!current_provider) on line 264 evaluates to true AND
      The conditional expression (!provider || !provider->check_password) on line 268 evaluates to false AND
      The conditional expression (auth_result != AUTH_USER_NOT_FOUND) on line 282 evaluates to false AND
      The conditional expression (!conf->providers) on line 287 evaluates to false.


      Each bug report is followed by the snippet of source code containing the defect.

      The metric report simply reports the statistics. For instance, the most bug ridden file is otherchild.c. The most common bug class is "dereferencing a NULL pointer".

      If the Apache developers simply want to fix the bugs, they can use the Defect Report. If they want to conduct a brutal purge of their contributors, they can use the Metric Report.

      *Yes, Reasoning wants an email address. They will mail you a URL (a rather simple one at that) to access the reports.
      • by MisterFancypants ( 615129 ) on Monday July 07, 2003 @11:15AM (#6382983)
        None of that bug report is at all useful if there is no logical way for all of those preconditions they listed to actually be met.

        I mean, yeah, it would be nice if code would explicitly check for a NULL before dereferencing, but if there's no earthly way for the pointer to actually BE a NULL pointer at that time (barring memory corruption -- in which case all bets are off and your code is doomed anyway) then I wouldn't call those errors.

        This whole exercise seems very suspect to me.

        • by tomstdenis ( 446163 ) <tomstdenis@gma[ ]com ['il.' in gap]> on Monday July 07, 2003 @11:20AM (#6383019) Homepage
          Agreed. Things like splint often report "warnings" on code that shouldn't get them. For instance:

          int some_func(char *somebuf)
          {
              if (somebuf == NULL) return ERROR;
              somebuf[0] = 'a';
              return OK;
          }

          will generate a splint warning saying "pointer may be null", despite the fact that it cannot be.

          Those tools are generally too sensitive and give too many false positives to be useful in the long run.

          Tom
        • by Skjellifetti ( 561341 ) on Monday July 07, 2003 @11:46AM (#6383229) Journal
          None of that bug report is at all useful if there is no logical way for all of those preconditions they listed to actually be met.

          Well, yes and no. The problem is that there may be no logical way for the pointer to be NULL today. But tomorrow, a new coder will add something that modifies the preconditions, and suddenly that pointer can indeed be NULL. Even where you are sure a condition is impossible, it is usually a good idea to check for NULL in order to avoid future errors.

          And for those who haven't seen this trick before, a nice habit to get into is to write your checks like so:
          if (NULL == myPointer) { ... }
          This lets the compiler catch errors where you meant '==' but typed '='. As in:
          /* Do we really mean this? */
          if (myPointer = NULL) { ... }
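          For illustration, here's a minimal standalone sketch (not from the Apache code) of why the reversed operand order helps:

          #include <stddef.h>

          void demo(char *myPointer)
          {
              /* Typo: assignment instead of comparison. This still compiles
                 (gcc -Wall only warns, suggesting extra parentheses), silently
                 sets myPointer to NULL, and the branch is never taken. */
              if (myPointer = NULL) {
                  /* never reached */
              }

              /* With the operands reversed, the same typo is a hard compile
                 error, because NULL is not an lvalue:
                 if (NULL = myPointer) { ... }   (does not compile) */

              /* The intended test, written constant-first: */
              if (NULL == myPointer) {
                  return;
              }
          }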
          • .. But tomorrow, a new coder will add something that modifies the preconditions and suddenly that pointer can indeed be NULL.

            That's what assert() exists for. And the 'preconditions' you are referring to are actually 'invariants', so if "suddenly that pointer can indeed be NULL" it means that someone broke a fundamental design assumption and should not be tweaking the code anyway.
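            As a minimal sketch of documenting that invariant with assert() (hypothetical function, not Apache code):

            #include <assert.h>
            #include <stddef.h>

            static void use_provider(const void *current_provider)
            {
                /* Invariant: callers guarantee a non-NULL provider here. If a
                   future change breaks that guarantee, a debug build aborts at
                   the broken assumption instead of segfaulting somewhere
                   downstream; NDEBUG builds compile the check away. */
                assert(current_provider != NULL);
                /* ... use current_provider ... */
            }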

            And for those who haven't seen this trick before, a nice habit to get into is to write your checks like so:..

            I found this trick p
          • This lets the compiler catch errors where you meant '==' rather than just '='.

            MY compiler (Microsoft C++) does catch this

            if (myPointer = NULL) { ... }
            and issues a warning. Doesn't gcc?
      • by Anonymous Coward on Monday July 07, 2003 @11:57AM (#6383294)
        The funny thing is that this "bug" doesn't appear to actually be one...

        Note that current_provider is set to conf->providers on line 257. The loop starts, and neither current_provider nor conf->providers changes. Then on line 287 there's a conditional break if conf->providers is NULL.

        If current_provider is going to be NULL at line 291, then conf->providers must be as well, so the conditional break will happen and the NULL dereference will be skipped.

        Or am I missing something else?
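        For what it's worth, here is a stripped-down sketch of the control flow being described (simplified, hypothetical types; NOT the actual mod_auth_basic.c source):

        struct provider_list { struct provider_list *next; };
        struct basic_conf    { struct provider_list *providers; };

        static void sketch(struct basic_conf *conf)
        {
            struct provider_list *current_provider = conf->providers;  /* "line 257" */

            for (;;) {
                /* ... one authentication attempt; neither current_provider
                   nor conf->providers is modified in here ... */

                if (!conf->providers)               /* "line 287": conditional break */
                    break;

                /* "line 291": the flagged dereference. For current_provider
                   to be NULL here on the first pass, conf->providers would
                   have to be NULL too, and the break above would already
                   have fired. */
                current_provider = current_provider->next;
                if (!current_provider)
                    break;
            }
        }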
  • apache 2.1? (Score:5, Interesting)

    by fishynet ( 684916 ) on Monday July 07, 2003 @10:26AM (#6382657) Journal
    2.1 isn't even out yet! The latest is 2.0.46!
    • is equivalent to the error level in post-release commercial web serving software. Sounds like an endorsement to me.
      • is equivalent to the error level in post-release commercial web serving software. Sounds like an endorsement to me.

        That, too, but I'm damn certain they must have tried it on a recent stable 2.0.46-ish release as well. The question is, why weren't those results made public?

        I'm guessing it's because the results would've put their "defect detection software" in a bad light, i.e. nothing as fancy as the aforementioned "use of an uninitialized variable" and "dereference of a NULL pointer" (which strikes me as really odd in the first place).

        Naturally, the other explanation is endorsement. It would be so far from the first time that I won't even bother... but I wouldn't bet that this is the case here, because the defect counts were only compared to production-release code averages (which strikes me as the other extremely dubious part of this whole "experiment").
      • by yaphadam097 ( 670358 ) on Monday July 07, 2003 @02:24PM (#6384340)
        I've worked on open source projects and I've also worked in commercial development shops. I think that their findings are accurate but misleading:
        1. In my experience there are generally fewer bugs in pre-release code on a commercial project, because there is a stronger culture of code ownership and most, if not all, code is independently reviewed before being committed.
        2. There are generally a higher number of defects in pre-release open source code, because developers commit early and commit often. Independent review happens more often on open source projects, but it usually happens after the code has already been committed to the dev branch (before that, the geographically dispersed dev team has no access to it).
        3. The quality of code released to production in a commercial environment is usually very similar to the quality of code in the development branch. Once it is reviewed and committed, it enters a QA cycle where an independent team tries to find any bugs. At this point there is invariably strong pressure to release. So bug fixes happen quickly and quality suffers (I've always found it ironic that we called this "Quality Assurance").
        4. Once an open source project has been completed (meaning all of the features have been developed), it enters a much longer period of code review, bug hunting, and alpha release. For a project like Apache it was over a year before anyone started to use 2.0 in production. Most commercial companies can't afford nearly that much "QA" time, because they are spending money to make money.
  • by jpmahala ( 181937 ) on Monday July 07, 2003 @10:27AM (#6382659)
    Just because Open-Source coders can't spell when they insert comments doesn't mean that they can't write good code!

  • by mao che minh ( 611166 ) * on Monday July 07, 2003 @10:27AM (#6382660) Journal
    I suppose now we have to question the severity of the defects (and also factor in the implementation and use of the code). If Apache and, say, IIS are roughly equivalent in terms of code defects, you have to ask yourself "well, why does IIS have so many more general problems and security flaws than Apache, when they both carry the same general amount of coding defects?". Is IIS just inherently insecure because it is used on a Windows platform? Is it because hackers generally target IIS and not Apache (most people will rush to this conclusion)?

    But here's the kicker: the vast majority runs Apache on either BSD or Linux. All of this code, from the kernel to the library that tells Apache how to use PHP, is open source. Every hacker on the planet has full access to the code - which means that they can review it and find vulnerabilities in it. Not many people have access to Windows or IIS code. So why do IIS and Windows come out as far less secure, and get exploited so much more?

    I think the answer lies in the severity of the code defects, and in the architecture and design of the operating system that powers the web server. And yes, I know that Apache can run on Windows.

    • by siskbc ( 598067 ) on Monday July 07, 2003 @10:37AM (#6382745) Homepage
      If Apache and, say, IIS are roughly equivalent in terms of code defects, you have to ask yourself "well, why does IIS have so many more general problems and security flaws than Apache, when they both carry the same general amount of coding defects?". Is IIS just inherently insecure because it is used on a Windows platform? Is it because hackers generally target IIS and not Apache (most people will rush to this conclusion)?

      First, are all of IIS's issues "software errors" per se? I'm wondering if all security problems would have been caught, or if that was really the goal of the analysis. Perhaps it was, but I'm not sure. One could contend that IIS leaves a lot of things unprotected, but that this doesn't constitute a software error.

      And as you say, severity would be another issue. It's always been typical open-source style to get the mission-critical parts hardened against nuclear attack while leaving the other bits a tad soft. I wouldn't be surprised to learn that was the case with Apache.

      One thing I want to know - did MS (or whoever) give these guys source or were they analyzing the binaries?

      • by Tony-A ( 29931 ) on Monday July 07, 2003 @12:37PM (#6383569)
        It's always been typical open-source style to get the mission-critical parts hardened against nuclear attack, but leaving the other bits a tad soft.

        IMNSHO, that ought to be standard for any mission-critical software. Bugs and the places that bugs live in are not created equal. The beauty of Apache (at least 1.13) is that the overall system can be very robust and reliable with rather buggy modules. I suspect the problem with IIS is that everything assumes everything else is perfect, which overall doesn't quite work so well.
    • by brlewis ( 214632 ) on Monday July 07, 2003 @10:40AM (#6382762) Homepage
      Another post seems to indicate this was done via software to automatically detect defects. Many (most?) security defects cannot be detected automatically, as they involve using the software in an unintended way.
    • by Illserve ( 56215 ) on Monday July 07, 2003 @10:42AM (#6382788)
      By its very nature, Open source will tend to fix important bugs and leave unimportant ones unfixed, while standard QA processes associated with commercial software will tend to fix little UI issues during the release schedule before dealing with vulnerabilities.

      So it seems pretty clear to me that in open source, the ratio of showstopper bugs to miscolored-widget bugs will be much lower than for commercial software.
    • by jdh-22 ( 636684 ) on Monday July 07, 2003 @10:58AM (#6382887)
      Every hacker on the planet has full access to the code - which means that they can review it and find vulnerabilities in it. Not many people have access to Windows or IIS code.
      To quote Bruce Schneier: "If I had a letter, sealed it in a locked vault and hid the vault somewhere in New York, then told you to read the letter, that's not security, that's obscurity. If I made a letter, sealed it in a vault, gave you the blueprints of the vault, the combinations of 1000 other vaults, and access to the best locksmiths in the world, then told you to read the letter, and you still can't, that's security." Open source does have an upper hand on holes and bugs, but the code isn't where we should be looking.

      The majority of the security holes come from the people setting up the web servers. The holes are usually abused by "wanna-be" hackers, or script kiddies. The problem is that people are not educated enough to run some of these programs. Being able to understand Apache, and how to make it operate correctly, is not everyone's top priority. As long as it works, people don't care how it works (as goes for many other things in this world).
    • by sterno ( 16320 ) on Monday July 07, 2003 @10:59AM (#6382893) Homepage
      The thing that always kills IIS is the integration it has with Windows. This isn't a defect in IIS or Windows per se, but rather a defect that arises because of how they integrate with each other. A script executes on IIS in a way that's not innately a bug, but then when it interacts with Windows, Exchange, etc., suddenly it becomes one.

      Apache is just a webserver, and that's all. PHP, JSP, etc. are all separate applications treated separately. The integration does make things more efficient, yes, but also more prone to problems.
    • Is IIS just inherently insecure because it is used on a Windows platform? Is it because hackers generally target IIS and not Apache (most people will rush to this conclusion)?

      Microsoft will try to make people believe whatever is in their interests... even if it means contradicting themselves.

      Last Friday Microsoft called all their Premier customers in France with "information" related to the upcoming "hackerfest" last Sunday.

      According to Microsoft, mostly Unix and Linux servers would be the target.

    • Every hacker on the planet has full access to the code - which means that they can review it and find vulnerabilities in it.

      Do you know how long it takes to read someone else's code on something like an Apache-level webserver and understand it to the point where you can make useful changes and fixes? The big lie of the "all bugs are shallow" argument is that such a thing is simple, when in fact it is not.

      Fixing a non-obvious bug in a 100k or so line C or C++ project is hard enough when you wrote the

      • by bwt ( 68845 ) on Monday July 07, 2003 @11:49AM (#6383244)

        One of the best ways to get to know a large code base like Apache or something else is to find a repeatable bug and track it down. To fix a bug you do not need to understand the whole program, just the relevant parts. I've submitted bug fixes to several projects, so I must strenuously disagree, especially because, ahem, I have never submitted a bug fix to a proprietary project, because it's impossible.
      • Actually, I've found that fixing bugs in large projects is about the same whether or not you are familiar with the project, provided that the author was not smoking crack at the time he wrote it.

        For example, I managed to code, test, and patch a "fix" for PostgreSQL this weekend in under 2 hours, having never seen the code before.

        The "fix" wasn't a bug, per se; it's just that the output of pg_dump wasn't optimal in my usage for dumping the schema for CVS revision control. I added two flags, -m and -M, which molded the output to my liking.

        If you haven't seen your code in two months, you and an outsider have about the same chance at finding and fixing bugs/misfeatures.
    • by aziraphale ( 96251 ) on Monday July 07, 2003 @12:27PM (#6383507)
      One word: architecture.

      And not just the architecture of the web server, but the architecture of the entire platform. But specifically looking at the architecture of Apache versus the architecture of IIS, you'll immediately see that the goals of the two pieces of software are not the same. Look at things like IIS's metabase - the structural details of the server's configuration are kept in an in-memory data structure, which is easily modified while the server is running. Apache, in contrast, reads its configuration at startup, and uses it to determine which modules of code are loaded, and how they are used to process requests - fixing the behavior of the web server at startup.

      IIS follows typical MS enterprise software design - it has to interface with COM, and the NT security model, and active directory, and the registry, and a million other systems, all in the name of integration, and enterprise management. Apache doesn't have PHBs telling it that it needs another way for the metabase to be edited, or a new instrumentation API, or whatever else a particular large customer asked for - and can get on with just providing its facilities cleanly.

      That's why IIS has so many more security holes, even if it does (as may or may not be the case) have the same raw coding error rate as Apache.
  • Wait a second (Score:4, Insightful)

    by Knife_Edge ( 582068 ) on Monday July 07, 2003 @10:27AM (#6382661)
    Has Apache 2.1 been released as a stable, non-developmental release? If not I would say testing it for defects is a bit premature.
  • Defect? (Score:5, Interesting)

    by Jason_says ( 677478 ) on Monday July 07, 2003 @10:27AM (#6382663)
    Reasoning found 31 software defects in 58,944 lines of source code of the Apache http server V2.1 code.

    So what are they calling a defect?
    • Re:Defect? (Score:5, Informative)

      by richie2000 ( 159732 ) <rickard.olsson@gmail.com> on Monday July 07, 2003 @10:48AM (#6382816) Homepage Journal
      From the report:
      NULL Pointer Dereference (Expression dereferences a NULL pointer) 29 instances
      Uninitialized Variable (Variable is not initialized prior to use) 2 instances

      They also list the files and code snippets where the errors were found.

      In addition, the comparison is made against an industry average of commercial code they have tested this way, NOT against other webservers.

  • by 3.5 stripes ( 578410 ) on Monday July 07, 2003 @10:27AM (#6382664)
    And don't most NDAs, for when they do let you look, forbid any competitive analysis?

    Or am I just too far out of that line of work to know how these things work?
  • 2.1 ? (Score:4, Insightful)

    by Aliencow ( 653119 ) on Monday July 07, 2003 @10:27AM (#6382667) Homepage Journal
    Wouldn't that be unstable? I thought the latest was 2.0.46 or something. If I'm not mistaken, it would be a bit like saying "FreeBSD 4.8 has fewer bugs than Linux 2.5!"
  • by SystematicPsycho ( 456042 ) on Monday July 07, 2003 @10:28AM (#6382670)
    So basically they offer a service like lclint [virginia.edu] only many times more advanced ? What is to say they haven't missed anything?

    This is probably a publicity stunt for them, although a good one. I think it would be a good idea for them to sell their product as a software suite if they don't already.
  • by TheRaven64 ( 641858 ) on Monday July 07, 2003 @10:28AM (#6382673) Journal
    Hmm, so they looked at 58,944 lines of code, and found 31 defects? Did they find every defect? Can they prove this? What about those found in commercial code? If it were possible to find all of the defects in a piece of code this big in a small amount of time, then there would be no defects, since they would all be identified and fixed before release.

    As far as I can see, this article says 'We have two arbitrary numbers, and one is bigger than the other. From this we deduce that Apache is not as good as commercial software.'

    • Completely and utterly agree. I mean, hell, I could write fifty thousand lines of code, each line completely meaningless, run it through the checker, and produce 0 defects, except for one overall defective piece of software. Does this article have any point to it at all? Even if the results had any meaning, what on earth is the point of comparing a known to an unknown?
      • I agree completely. Any metric based on lines of code is a harmful metric. Any metric based on defect counts is also harmful. Both are left-overs from attempts to (mis)apply statistical process control. Controlling for crappy metrics gives crappy quality.

        Suppose I had 100K lines of code with 100 defects. After reviewing my code I discovered that I could refactor it to 80K lines, and suppose further that doing so had no effect on the defect count. Defects per line of code would look worse after
    • by Cancel ( 596312 ) on Monday July 07, 2003 @10:42AM (#6382790)
      That's not what they're saying at all. In fact, Reasoning concluded that there was no statistically significant difference in 'defect density' between Apache and the unnamed commercial product.
      "In our February study that compared the defect density of the Linux TCP/IP stack to the average defect density of commercially developed TCP/IP stacks, we concluded that Open Source had a significantly lower defect density compared to commercial equivalents," said Bill Payne, President & CEO of Reasoning. "We received numerous inquiries about that study and took seriously requests for us to examine defect density rates in a less mature Open Source application and compare it with the commercial equivalent. Taking advantage of our database of automated software code inspection projects, we were able to do exactly that,
      and found the difference in defect density between the two was not significant." (emphasis mine)
    • by sterno ( 16320 ) on Monday July 07, 2003 @10:42AM (#6382791) Homepage
      This doesn't indicate that the commercial equivalents are better. You've got the DEVELOPMENT branch of Apache, which is derived from the 2.0.x code, which is itself a complete rework of the original 1.x branch. So it's a rather new code base, and it's showing similar defect rates to a code base that has been around for a while. I'd say this proves that open source is better.
  • Apache 2.1...? (Score:5, Insightful)

    by bc90021 ( 43730 ) * <bc90021 AT bc90021 DOT net> on Monday July 07, 2003 @10:28AM (#6382675) Homepage
    According to Apache.org [apache.org], Apache's latest stable version is 2.0.46. Is that a typo on their part, or are they testing a development version? Also, since 1.3.27 is widely used, it would have been interesting to see how that stacked up as well, having been developed longer.

    Either way, to have only 31 errors in close to 60,000 lines of code is impressive!
    • Re:Apache 2.1...? (Score:3, Insightful)

      by jbp4444 ( 193803 )
      I was quite impressed by the fact that Apache can cram all the functionality into ~59k lines. So besides defect rate, I would like to know how many lines of code the commercial package had... 0.51 defects per 1000 lines sounds good, unless there are 1,000,000 more lines of code in the commercial package.
      • Re:Apache 2.1...? (Score:3, Insightful)

        by pmz ( 462998 )
        I was quite impressed by the fact that Apache can cram all the functionality into ~59k lines.

        Agreed. It would be interesting to know whether this low LOC is accomplished through good architecture that emphasizes simplicity and maintainability or "clever" hacks that compress a 10-line loop down into a three-line abomination of pointer arithmetic. I genuinely hope it is not the latter.

        Regardless, 59K lines is a small enough program that, given a good architecture, it can be studied and debugged relatively e
  • "Defect Density"? (Score:5, Insightful)

    by sparkhead ( 589134 ) on Monday July 07, 2003 @10:29AM (#6382676)
    A key reliability measurement indicator is defect density, defined as the number of defects found per thousand lines of source code.

    Since LOC is a poor metric, a "defect density" measurement based on that will be just as poor.

    Yes, I know there's not much else to go on, but something along the lines of putting the program through its paces, stress testing, load testing, etc. would be a much better measurement than a metric based on LOC.

  • by ElectronOfAtom ( 685701 ) on Monday July 07, 2003 @10:29AM (#6382680)
    The difference is that now that someone has found 31 errors in the open source Apache software, they will be fixed fairly quickly, whereas closed source software will require the company to do a cost-benefit analysis, put together a team to do the fixes, and probably charge to put out patches or minor upgrades (assuming the product is Microsoft's IIS ;b)...
  • by Jearil ( 154455 ) on Monday July 07, 2003 @10:29AM (#6382681) Homepage
    Why does it seem a bit odd to be testing software quality with other software? I wonder if they ran their own software through its own program... but then that gets kind of weird when a program starts noticing errors about itself. Maybe it'd get depressed and start ranting at its creator about how they should have taken better care of it... ok, I need more sleep.
    • Recursion (Score:3, Funny)

      by sterno ( 16320 )
      They didn't do that because if they did that, then they'd find bugs in their bug finder, so they'd have to run the bug finder on the bug finder to find bugs there, but then they'd have to run the bug finder on the bug finder on the...
      • by fgb ( 62123 ) on Monday July 07, 2003 @11:08AM (#6382945)
        That reminds me of an old (early 1980's) product named BILF (Basic Infinite Loop Finder). It was supposed to be run against BASIC source code and it would find all infinite loops in the code, or so the vendor claimed.
        A magazine reviewed the product. In their review they included a formal mathematical proof that such a program could never work. The vendor responded to the proof by saying that they would fix that problem in the next release!
        • Re:Recursion (Score:3, Interesting)

          by nick255 ( 139962 )
          Yes, the proof is a simple application of the famous halting problem argument.

          Imagine you made the program go into an infinite loop whenever the program it was analysing did not have an infinite loop.

          Then run the program on itself...
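          Sketched out with hypothetical names (C-flavoured pseudocode, not a real tool):

          /* The API the vendor claims to provide: returns 1 if the given
             program loops forever, 0 if it terminates. */
          int has_infinite_loop(const char *program_source);

          /* Feed it its own diagonal case: */
          void paradox(void)
          {
              if (!has_infinite_loop("paradox.c")) {
                  for (;;) { }   /* tool said "terminates", so loop forever */
              }
              /* tool said "loops forever", so terminate immediately */
          }
          /* Whichever answer the tool gives about paradox() is wrong, so no
             such tool can exist for all programs. */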
  • by dtolton ( 162216 ) * on Monday July 07, 2003 @10:29AM (#6382685) Homepage
    They are comparing a development version to an un-named commercial web server?

    Why don't they compare it to Apache 2.0.46 if they want a newer, but released, product? I expect they did, but they didn't get the results they wanted.

    This is a development version; it's an odd-numbered release, for crying out loud.

    I wouldn't be surprised to find this is bankrolled by M$. Let's compare IIS in development to Apache 2.1, and then see what IIS's bug density rate is.

    Bah!!
  • by David McBride ( 183571 ) <david+slashdot@ d w m.me.uk> on Monday July 07, 2003 @10:31AM (#6382697) Homepage
    Umm, Apache 2.1 hasn't been released yet. Current latest stable is 2.0.46 [apache.org].

    I can only assume that they're looking through the current DEVELOPMENT codebase -- finding a higher ``defect density'' in such a development codebase compared with commercial offerings is not exactly unexpected.

    They're also using some automated code inspection product; the press release doesn't go into detail about the severity of the defects found or the testing methodology.

    It'll be necessary to read through the full report [reasoning.com] before drawing any sound conclusions.
    • by David McBride ( 183571 ) <david+slashdot@ d w m.me.uk> on Monday July 07, 2003 @10:38AM (#6382751) Homepage
      The above link wants your email address. Bah.

      The direct URLs for the reports are:
      Defect Report [reasoning.com]
      Metric Report [reasoning.com]
      • by David McBride ( 183571 ) <david+slashdot@ d w m.me.uk> on Monday July 07, 2003 @10:53AM (#6382862) Homepage
        Well, the reports simply state that, in the 360 files they checked (most of them header files), they found 29 cases of a potential NULL pointer dereference and 2 potentially uninitialized variables. This is from the Apache 2.1 codebase as of 31st Jan this year, about 58k lines of code.

        Their automated checker also searched for out-of-bounds array accesses, memory leaks, and bad deallocations. It found none.

        They also state that they ran the same checks against other codebases, and found that those did marginally better, on average.

        In short, this report says that OLD development code for an unreleased open source project is nearly as good as current commercial offerings. And that's at best, when you consider the huge gamut of possible defects that this checker won't pick up. That margin probably disappears in the +/- of the sampling if you were to do a proper statistical analysis.

        The report is fairly useless. It certainly should not be taken as a reason not to trust Apache; to do so would be foolhardy, particularly given Apache's track record.

        Oh, and Reasoning's webserver is being pounded into the ground. You can get my local copy of the reports from here [ic.ac.uk].

  • What bothers me about these articles is that there is more to software quality than the # of flaws-per-unit-"whatever".

    Like design.

    It seems to me most of the problems with Apache's main competitor in terms of software quality are the result of design and engineering choices made by MS's IIS development team.

    In other words, it does exactly what they designed it to do, but what they designed it to do was a very bad idea.

  • by hughk ( 248126 ) on Monday July 07, 2003 @10:32AM (#6382707) Journal
    If anyone has an Apache 2.1 dist around: they say they checked 58,000 lines - does this seem reasonable? Is this with any of the modules such as PHP or Perl, or is this raw?

    I know that Apache has vulnerabilities, but it should come out better than IIS. You can't realistically give a verdict on IIS without looking at the libraries called.

    As for the rest, I can imagine some commercial products coming in better, but not many.

  • No cigar, my ass. (Score:5, Insightful)

    by KFury ( 19522 ) * on Monday July 07, 2003 @10:32AM (#6382710) Homepage
    The article claims Apache's error density, based on a meager 5100 lines of code, is 0.53, while that of 'comparable commercial applications' is 0.51.

    The problems with this are:
    • 5100 lines of code does not give you a confidence range of less than 0.02, especially when the error rate can be expected to be heterogeneous across the code base, as would be the case in an open-source product where different code pieces are created by entirely different groups.
    • 'Comparable' my ass. If they can't provide details of what software they're comparing to (I somehow doubt they got a look at IIS source code) then the stats are worthless, because anyone who's ever programmed knows that quality control isn't a constant factor across commercial products any more than it is among open-source products.
    • What's the error rate of their 'defect analysis'? If they're so good at finding defects, why aren't they out there writing perfect software? If their defect detection rate is less than 98% accurate, then the difference between a rate of 0.51 and 0.53 is meaningless anyhow.
    • There's a big difference between caught coding exceptions and fundamental security problems. The first can cause code to run a little slower, the second can destroy your company. This testing methodology doesn't even look at the second.
  • by BigBadDude ( 683684 ) on Monday July 07, 2003 @10:34AM (#6382719)

    The defect density of the Apache code inspected was 0.53 per thousand lines of source code...


    We could bring this number down to 0.2 by avoiding the BSD style guidelines. No kidding, have you seen the density of MFC code?

    BSD code:

    char *
    foo(int bar, double baz)
    {
        /* do something */
        return bar + random();
    }

    MS code:

    char *Foo(int nBar, double dBaz) { return bar + random() + m_ExtraWindowsBugModifier(); }
    • Wrong Math (Score:5, Insightful)

      by bstadil ( 7110 ) on Monday July 07, 2003 @10:45AM (#6382803) Homepage
      You've got the math reversed.

      The longer your lines are and the more content you have per line, the higher the likelihood of errors per line.

      As an example, with one error in 100 lines you get a 1% error rate. Imagine you could do the whole thing in one line. Now you have a 100% error rate.

  • Does it matter? (Score:5, Interesting)

    by pubjames ( 468013 ) on Monday July 07, 2003 @10:36AM (#6382730)

    So?

    There are errors and there are errors. There are errors that don't matter a jot, and there are errors that are show-stoppers.

    I've worked on banking software containing code that was written in assembly for PDP-11s and developed over decades. The most horrible spaghetti code you could ever imagine. Why did the banks keep using it? Because for any particular input it always gave the correct output.

    Years of bug fixing had made the code horrible and probably full of errors if you were looking at it from a purely theoretical/software-engineering viewpoint. But from an input/output point of view, it was faultless.
  • That's so weird ... (Score:3, Interesting)

    by SuperDuG ( 134989 ) <<kt.celce> <ta> <eb>> on Monday July 07, 2003 @10:38AM (#6382752) Homepage Journal
    I found just the opposite.

    Important Tech City, CA, July 7th 2003
    For Immediate Release
    Sbj: Apache beats other webservers

    Recently we had our staff (some guy's kid) look over the source code of 3 major webserver packages. In that code nearly 8 million lines of error were found, but surprisingly the damned things still worked?!

    We ran a performance test (click a link and see if porn comes up faster) with Apache and 3 other commercial offerings. Apache seemed to knock them all out of the water; boy, will those other three companies be mad now.

    While we cannot tell you what the other three offerings were (that might make this whole thing more believable), we can tell you that we think they're popular.

    Here are the results:

    Apache ------------------- 104
    Com 1 --------32
    Com 2 -----------45
    Com 3 ---------------53

    As you can see by the clear test results, Apache wins in all tests.

    Since when are unfounded results from a company that doesn't explain what the "32 defects" were newsworthy? Don't act like these guys are worth my time; this is bullshit.

  • Dubious (Score:5, Insightful)

    by cca93014 ( 466820 ) on Monday July 07, 2003 @10:39AM (#6382754) Homepage
    Is it just me, or does this entire concept of "code defects per 1,000 lines" sound like bullshit?

    If the company has developed proprietary tools to enable them to identify defects in medium-sized software projects, which of the following business models do you think is more effective:

    1. Design proprietary tools to identify defects in medium-sized software projects.
    2. Fix defects
    3. Profit

    or

    1. Design proprietary tools to identify defects in medium-sized software projects.
    2. Sit around mumbling about defects, Open Source software, closed source software and why farting in the bath smells worse
    3. ???
    4. Profit

    Secondly, where on earth did they get hold of a closed source enterprise level (which Apache undoubtedly is) web server software codebase?

    "Hi, is that BEA? Do you mind if we take a copy of your entire code base so that we can peer review it against Apache's? What's that? Yes, Apache might come out on top, and we will make the results public..."

    How do they define a defect anyway? A memory leak? A missing overflow check? A tab instead of 4 spaces?

    It just sounds like bullshit to me...

  • by NotClever ( 635709 ) on Monday July 07, 2003 @10:39AM (#6382757)
    When the same group said that the IP stack in Linux was cleaner than a comparable one, everyone was screaming from the rooftops that it validated the open source model. When they say that an open source project and a closed source project are roughly comparable, all of a sudden everyone criticizes the methodology of the report!

  • by tsetem ( 59788 ) <tsetem@gmai[ ]om ['l.c' in gap]> on Monday July 07, 2003 @10:42AM (#6382783)
    ...then why is it their webserver [netcraft.com]? :)

    Of course it is Apache 1.3.23...
  • Bad Statistics... (Score:5, Insightful)

    by FunkZombie ( 322039 ) on Monday July 07, 2003 @10:42AM (#6382787)
    Also keep in mind that defect density is just an average. If you have 31 defects in 60k lines of code, that is potentially 31 security risks or out-of-operation risks. If the other software tested had double the lines of code (120k), the density would imply that it had slightly less than double the defects, say 58 or 60. That implies _58_ potential security or uptime risks. In this case, imho, defect density is not a good indicator of the reliability of the software.

    My general rule is that if someone is quoting statistics to you, they are lying. At least on average. :)
  • Apache 4.2 Alpha, a release that is yet to be even a twinkle in its daddy's eyes. I have found a whole bunch of errors, bad comments, a few scribbles on napkins, some old Populous save games, and a letter to 'Mom' asking for money.

    I compared this to my 'other' server, for now unnamed.

    My 'other' server brought me coffee, 2 pieces of toast, 2 eggs OVER EASY, 4 strips of bacon, *and* Smucker's Grape Jelly with nary a misstep or hesitation. This other server smiled, asked how my wife was, and brought me a new fork when I dropped my first one.

    Congratulations, Gloria! You win the 'great server' award!

    This article isn't worth the 2 dollar tip.

  • by Daath ( 225404 ) <(kd.redoc) (ta) (pl)> on Monday July 07, 2003 @10:48AM (#6382821) Homepage Journal
    Why doesn't Reasoning fill the niche and code a completely error-free web server? They know other people's mistakes, so they should know how to code an error-free one.
    Well, seriously, I wouldn't put much stock in their estimation.
  • Don't assume IIS (Score:5, Insightful)

    by m00nun1t ( 588082 ) on Monday July 07, 2003 @10:49AM (#6382828) Homepage
    Ok, IIS is the obvious choice as being the second most popular web server after Apache. But I hardly think Microsoft will be letting these guys all over the IIS source code.

    It could also be Zeus, SunOne or one of the other lesser known web servers out there.
  • by defile ( 1059 ) on Monday July 07, 2003 @10:50AM (#6382833) Homepage Journal

    The test might be more interesting if applied to Apache 1. As someone who has had to migrate a mod_perl site from Apache 1 to Apache 2, I can tell you that Apache 2 is a very new beast, and it doesn't shock me at all that there are dozens of bugs that still need to be shaken out. Fewer users are running Apache 2 in a production environment as well, since it's considered a development branch. See the "fewer eyeballs" rule.

  • Defect Details (Score:5, Informative)

    by Eustace Tilley ( 23991 ) * on Monday July 07, 2003 @10:51AM (#6382842) Journal
    Interested persons can download the full defect report free of charge. [reasoning.com]

    Some things I found interesting:
    1. Apache 2.1 (dev) is a mere 76,208 LOC.
    2. No memory leaks detected
    3. 29 NULL pointer dereferences
    4. 2 Uninitialized variables
    5. No bounds errors, no bad deallocs
    6. otherchild.c had a rate of 7 NULL pointer dereferences per KSLOC (thousand source lines of code)


    7. One of the explanations (given by Reasoning) for a NULL pointer dereference is "can occur in low memory conditions," which I think means the original allocator did not check for malloc failure.

      So you can get a sense of what a defect looks like, here is #21. The original uses bold and fonts to improve readability, but I don't know how to reproduce that in slashcode:
      DEFECT CLASS: Null Pointer Dereference

      DEFECT ID 21

      LOCATION: httpd-2.1/srclib/apr/misc/unix/otherchild.c:137

      DESCRIPTION The local pointer variable cur, declared on line 126, and assigned on line 128, may be NULL where it is dereferenced on line 137.
      PRECONDITIONS The conditional expression (cur) on line 129 evaluates to false.
      CODE FRAGMENT
      124 APR_DECLARE(void) apr_proc_other_child_unregister(void *data)
      125 {
      126 apr_other_child_rec_t *cur;
      127
      128 cur = other_children;
      129 while (cur) {
      130 if (cur->data == data) {
      131 break;
      132 }
      133 cur = cur->next;
      134 }
      135
      136 /* segfault if this function called with invalid parm */
      137 apr_pool_cleanup_kill(cur->p, cur->data, other_child_cleanup);
      138 other_child_cleanup(data);
      139 }
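      For comparison, a guarded variant of the same walk (illustrative only, not a proposed patch; the in-code comment on line 136 suggests the segfault on an invalid parameter is deliberate):

      APR_DECLARE(void) apr_proc_other_child_unregister(void *data)
      {
          apr_other_child_rec_t *cur = other_children;

          while (cur && cur->data != data) {
              cur = cur->next;
          }
          if (cur == NULL) {
              return;   /* unknown handle: ignore instead of dereferencing NULL */
          }
          apr_pool_cleanup_kill(cur->p, cur->data, other_child_cleanup);
          other_child_cleanup(data);
      }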

    • One of the explanations (given by Reasoning) for a NULL pointer dereference is "can occur in low memory conditions," which I think means the original allocator did not check for malloc failure.


      Apache has its own malloc() that kills the child (and closes the connection) if it fails to allocate enough bytes.
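      A minimal sketch of that pattern in general (not the actual APR/Apache allocator, just the "die instead of returning NULL" idea):

      #include <stdio.h>
      #include <stdlib.h>

      /* If allocation fails, complain and exit rather than returning NULL,
         so callers never have to check for NULL from this function. In
         Apache's case the equivalent effect is killing the child process,
         which also drops the connection. */
      static void *xmalloc(size_t size)
      {
          void *p = malloc(size);
          if (p == NULL) {
              fprintf(stderr, "out of memory allocating %lu bytes\n",
                      (unsigned long) size);
              exit(EXIT_FAILURE);
          }
          return p;
      }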
  • by the eric conspiracy ( 20178 ) on Monday July 07, 2003 @10:51AM (#6382847)
    This study makes a lot of sense to me - that the defect rate is tied to the maturity of the code base. I have long felt that Microsoft's business model where they redo the operating system in order to churn their user base and induce cash flow will always result in more defects and security problems than a model where software change is driven on a solely technical basis.

    I think the next step for these folks would be to take a project that has a long history, say perhaps Apache 1.x and show defect rates over the life of the project.

  • by ByTor-2112 ( 313205 ) on Monday July 07, 2003 @10:53AM (#6382857)
    29 possible "null dereferences" and 2 possible "uninitialized variables". Some of them are simple "fail to check the return value of malloc() for NULL" cases, and others are not bugs in the code but bugs in the logic of the scanner. This is, of course, a cursory review of their document. All in all, these are absolutely minor bugs, if they are real at all.
  • by XaXXon ( 202882 ) <xaxxon.gmail@com> on Monday July 07, 2003 @11:03AM (#6382919) Homepage
    I have to play the BS card here.

    There is no magic "defect detector" for software. If there were such a thing, they would be making a helluva lot more money than they get for doing little defect tests.

    It is very difficult to prove a program correct, and there are a lot of REALLY smart people who have tried.

    Maybe these people have stuff that can look for buffer overflows and the like, but actually being able to tell whether Apache is returning the correct results requires far more than generic tests.

    And I'll all but guarantee they didn't get together an entire development team to understand the code base and how it works, as Apache is a very large and complex code base.

    Maybe they take what they find from their generic tests and extrapolate that if they find more generic problems there are probably more specialized errors as well, but they make it very clear in the report that the difference between .51 and .53 defects/KLOC (thousand lines of code) is statistical noise.

    Anyways, I'm not saying the entire thing is worthless, just not to read too much into it -- either this one that puts Apache slightly behind some unnamed commercial implementation or the one that put the Linux TCP/IP stack ahead of some other commercial implementation (though I'd say it would probably be easier to test a TCP/IP for correct behaviour than a web server).

  • This is a dupe (Score:3, Informative)

    by presroi ( 657709 ) <neubau@presroi.de> on Monday July 07, 2003 @11:08AM (#6382950) Homepage
    This Slashdot posting [slashdot.org] featured the same PR from Reasoning.
  • by UnknowingFool ( 672806 ) on Monday July 07, 2003 @11:14AM (#6382978)
    Numbers can mean anything. It's the interpretation that matters. 31 errors in 58,944 lines. Hmmm. Even if we take Reasoning's word that these are errors and not "features", that's a 0.53 error rate. The unnamed commercial software had an error rate of 0.51. So what does that prove?

    1) Apache 2.1 has more bugs than some unknown commercial competitor. If the version is correct, a development (not-ready-for-release) build was pitted against a released commercial build. Not a fair playing field.

    2) Reasoning does not detail the severity or kind of the bugs. Certainly, a web server not being able to handle a type of format (pdf, csv, ogg vorbis) is less severe than a security hole. Pitted against IIS, I would trust Apache even if it had more bugs, because historically it has had fewer security patches. Check out Apache 2.0's known patches [apacheweek.com] vs. IIS 5.0's [microsoft.com].

  • RTFAdvertising (Score:4, Insightful)

    by tanguyr ( 468371 ) <tanguyr+slashdot@gmail.com> on Monday July 07, 2003 @11:24AM (#6383054) Homepage
    As has been pointed out a couple of times in other comments, 2.1 is the development branch of the Apache web server - i.e. "beta", "buggy", "work in progress", etc. Instead of reading this as "Apache has roughly as many defects as closed source web servers", let's read it as "the development version of Apache has as many defects as... well, some unidentified (beta? shipping?) version of some unknown (iPlanet? IIS?) web server". But you can be *much* more confident that these defects will be fixed in Apache than in the *other* product.

    Heck, forget confidence - YOU CAN JUST CHECK.

    The fact that Reasoning didn't have to go and get permission from Apache to run this test - coupled with the fact that we don't even know what Apache is being compared to - is the *real* point behind this "article". /t

    ps: IANAL but don't they have to include a copy of the Apache License given that they publish fragments of the source code in their defect report?
  • by Bazman ( 4849 ) on Monday July 07, 2003 @11:38AM (#6383176) Journal
    Take the null pointer dereferencing thing. All this program seems to do is see if there's a possible path for null-pointer dereferencing. It has no clue as to whether this is logically going to happen. For example:
    2815 while (1) {
    2816     ap_ssi_get_tag_and_value(ctx, &tag, &tag_val, 1);
    2817     if ((tag == NULL) && (tag_val == NULL)) {
    2818         return 0;
    2819     }
    2820     else if (tag_val == NULL) {
    2821         return 1;
    2822     }
    2823     else if (!strcmp(tag, "var")) {
    2824         var = ap_ssi_parse_string(r, ctx, tag_val, NULL,
    2825                                   MAX_STRING_LEN, 0);
    The software claims that tag could be NULL on line 2823. But that's only the case if, on return from ap_ssi_get_tag_and_value, tag is a NULL pointer while tag_val is non-NULL. If ap_ssi_get_tag_and_value can't return that combination, then this is not a defect. At most it's a red flag, in case the return values of ap_ssi_get_tag_and_value could ever satisfy that condition.

    I suspect the following code will be flagged as a defect:

    char *tag = NULL;
    doOrDie(&tag);
    strcmp(tag, "do");
    As long as doOrDie() does its job and never hands back a NULL, where's the defect? The guys who wrote this tester seem to want you to check every pointer against NULL before dereferencing it - I might already be doing that inside my doOrDie() function, and I don't want to have to do it twice.
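    Filled out, the pattern being described might look like this (hypothetical helper, shown only to make the "check once, inside the helper" point concrete):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* The NULL check lives here, once. On failure we never return, so
       callers can rely on *out being non-NULL without re-checking it. */
    static void doOrDie(char **out)
    {
        *out = malloc(3);
        if (*out == NULL) {
            fprintf(stderr, "allocation failed\n");
            exit(EXIT_FAILURE);
        }
        strcpy(*out, "do");
    }

    int main(void)
    {
        char *tag = NULL;
        doOrDie(&tag);
        /* A checker that doesn't look inside doOrDie() may still flag this
           dereference, even though doOrDie() guarantees tag != NULL here. */
        return strcmp(tag, "do") == 0 ? 0 : 1;
    }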
    • BINGO (Score:3, Informative)

      by Anonymous Coward
      In almost every case they listed the pathway was via a failed malloc.

      Apache has its own malloc that kills the connection (and the child) if it fails.

      That code can never be reached. Their test is invalid.
    • "Defect" is way too strong. Take Defect 1: it can only dereference a NULL pointer if a number of preconditions hold. The last one is that (!conf->providers) [the pointer in question] must evaluate to false.

      (!conf->providers) is false => !!conf->providers is true => conf->providers != NULL

      Their program has detected "defects" where there are none. Perhaps the greater coding style variation on open source projects exposes more defects in their automated program!

  • Slashdot's summary of this article is way off base, and the article itself couldn't be less useful. Counting the number of "errors" in lines of code... and the ratio is supposed to mean something to us? As compared to unnamed other software? C'mon, I have better things to do with my time.

    *plonk*
  • by mystran ( 545374 ) on Monday July 07, 2003 @11:44AM (#6383220)
    I don't know; some of these defects might be actual problems, but unless the analysis software is really good, it's always possible that certain flagged cases can never actually happen, even though the automatic tool reports them as "defects".

    As a rather "stupid" example, just last week I had to initialize a Map to an empty HashMap to get Sun's Java compiler to accept my code, although the only two references to the Map were within two if-blocks in the same function, both of which depended on the same boolean value, which wasn't changed anywhere in the function.

    There's a difference between a defect and a bug. Tools that help find problems are great, but they can only point out possibly unsafe spots. Of course, it's good to write code that doesn't trigger any such possibilities in the first place.
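    The same kind of conservative flow analysis shows up in C compilers too; a sketch (whether it actually warns depends on the compiler, flags, and optimization level, e.g. gcc -Wmaybe-uninitialized):

    int demo(int flag)
    {
        int value;          /* only assigned when flag is non-zero */

        if (flag) {
            value = 42;
        }

        if (flag) {
            /* Only reachable when value was assigned above, but an analysis
               that doesn't correlate the two identical conditions may still
               report that "value" could be used uninitialized here. */
            return value;
        }
        return 0;
    }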

  • by Sxooter ( 29722 ) on Monday July 07, 2003 @11:49AM (#6383250)
    Well, this certainly falls under the "duh" category. Freshly written code tends to have more bugs than older, well-reviewed, well-tested code.

    Wow, next we'll learn how you shouldn't buy any Ford, GM, or Chrysler product in the first year of production.
  • by MROD ( 101561 ) on Monday July 07, 2003 @11:54AM (#6383275) Homepage
    Of course, this test of the code is purely a test of coding errors rather than errors in the code logic.

    The most worrying errors in programs are generally not coding errors, as those are either terminal (i.e., a crash) or benign (the error may cause memory corruption in a place where it does no harm). Of course, there are exceptions such as buffer overflows, but I'd class those, in general, into the logic error category.

    Logic or algorithmic errors are far more dangerous, as they can be well hidden and are more likely to make the code do unintended things. The code itself may be perfect, but if the algorithm is faulty then there's a major problem.
  • by Door-opening Fascist ( 534466 ) <skylar@cs.earlham.edu> on Monday July 07, 2003 @12:03PM (#6383338) Homepage
    Why did they use the development branch of Apache, when only a handful of sites are running it? I would have found an analysis of the stable 1.3 branch, which 60% [netcraft.com] of the web-serving world uses, to be more informative.
    • by sabat ( 23293 ) on Monday July 07, 2003 @01:36PM (#6383956) Journal

      Why did they use the development branch of Apache

      Let me restate this: why are they comparing pre-alpha software with production releases?

      Simplest answer: because they wanted to find flaws. The second most popular web server software is IIS. This looks like a Microsoft tactic: anonymously hire this company to "evaluate" code so that the results look unbiased. Everyone will likely realize that the competitor is Microsoft's IIS, so it doesn't need to be stated bluntly. MS wins; another (small) battle for mindshare is won.

  • Apache 1.3? (Score:5, Interesting)

    by Spazmania ( 174582 ) on Monday July 07, 2003 @12:14PM (#6383422) Homepage
    First, as many posters have noted, Reasoning DID NOT TEST APACHE 2.1. They tested Apache 2.1-dev. That's dev, as in development branch. As in: I have new untested code, so don't use me on a production server until I'm released in the STABLE series.

    For a valid comparison versus commercial software, the testers should have used Apache 2.0.46, the most current STABLE series release.

    Second, I'd be interested to see a comparison of 2.0.46 versus 1.3.27. I have a pet theory that multithreaded C code has more bugs than single-threaded C code, and I'd like to see whether there is evidence to support it.
  • by AYEq ( 48185 ) <dmmonarres@NOSPaM.gmail.com> on Monday July 07, 2003 @01:18PM (#6383844)

    Reasoning's code inspection service is based on a combination of proprietary technology and repeatable process.

    Am I the only one who looks at Reasoning's results with suspicion (even when I agree with them)? Any analysis using methods that are not open and repeatable is not science. This just feels like marketing to me. (It's sad, because the study of code quality is such a worthwhile pursuit.)

  • prove it. (Score:4, Interesting)

    by Mark19960 ( 539856 ) <[moc.gnillibyrtnuocwol] [ta] [kraM]> on Monday July 07, 2003 @03:08PM (#6384713) Journal
    They don't say what they used for a comparison.
    When they tell us what they used, then I will believe it.
    This smells like Microsoft.

    Bring it on! We want to know what it was compared against; it sure as hell was NOT IIS...
