How Do You Know Your Code is Secure?

bvc writes "Marcus Ranum notes that 'It's really hard to tell the difference between a program that works and one that just appears to work.' He explains that he just recently found a buffer overflow in Firewall Toolkit (FWTK), code that he wrote back in 1994. How do you go about making sure your code is secure? Especially if you have to write in a language like C or C++?"
  • You don't (Score:5, Funny)

    by CockMonster ( 886033 ) on Monday January 08, 2007 @06:24AM (#17506118)
    Just get others to formally review it, so if anything is found, there's collective responsibility.
    • Re: (Score:2, Funny)

      by Anonymous Coward

      Prophylactic: [medterms.com] A preventive measure. The word comes from the Greek for "an advance guard," an apt term for a measure taken to fend off a disease or another unwanted consequence.

      Sorry CockMonster, with today's DNA testing, getting others to participate in your virgin sacrifice wouldn't save you if you had a buffer overflow.

      *Warning* as appropriate as prophylactic might seem under its definition for use in the computer industry when talking about firewalls, sandboxes, etc, please keep in mind that some female

    • Re:You don't (Score:4, Insightful)

      by jorgevillalobos ( 1044924 ) on Monday January 08, 2007 @09:23AM (#17507158) Homepage
      Modded as funny? This is as real as it gets. At least in the private sector.
    • Re: (Score:3, Insightful)

      by hackstraw ( 262471 ) *
      Just get others to formally review it, so if anything is found, there's collective responsibility.

      Yes, that is funny, but there is truth to it as well (which is why it's funny).

      Security, software development, and everything else is a process, not an event. It gets better over time, and basically, the way that issues come out is for them to be found "in the wild". And as these issues are found, better tools and techniques make the process better over time, but I don't envision a world where people just think o
  • Verified (Score:5, Funny)

    by Anonymous Coward on Monday January 08, 2007 @06:24AM (#17506122)
    I get mine verified by Microsoft.
  • Secure? (Score:2, Insightful)

    by nahime ( 1048362 )
    Secure?? What does it mean?
  • by dangitman ( 862676 ) on Monday January 08, 2007 @06:27AM (#17506140)
    I hit it with a shovel. If the code doesn't fall apart, I know it's pretty securely attached to my computer. If not, I add more epoxy glue.
  • by quigonn ( 80360 ) on Monday January 08, 2007 @06:28AM (#17506146) Homepage
    Modern C++ ships with a very capable Standard Library offering data structures such as strings, vectors, lists, maps, and sets. Using these classes does not completely rule out programming mistakes related to buffer overflows and the like, but it at least minimizes the risk of producing a stupid buffer overflow through badly done string handling. At least, that's my experience.
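
    For instance (a minimal sketch; the buffer size and function names are made up):

    #include <cstring>
    #include <string>

    void copy_c_style(const char *input) {
        char name[8];
        std::strcpy(name, input);   // overflows as soon as strlen(input) >= 8
    }

    void copy_cxx_style(const char *input) {
        std::string name(input);    // the string allocates to fit; nothing to overflow
    }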

    Actually, the best thing would be not to use C or C++ at all, but that's where reality comes into play. Most developers don't even get to choose which language they use; that is predetermined by the employer and/or supervisor.
      • The STL mostly gets rid of the old problems such as buffer overflows, but introduces new ones that can be a lot more subtle and harder to track down, such as deep/shallow copy issues. Personally (and I'm probably in the minority) I prefer to deal with the old-fashioned bugs, since you can usually guess where they're happening, whereas in a highly abstracted C++ program using the STL, with lots of objects being copied and references flying around, it can be a LOT harder to figure out what's really going on, especially since different compilers do different things under the hood.
      • Re: (Score:3, Insightful)

        by Diablo1399 ( 859422 )
        You would sacrifice the flexibility and usefulness of the STL to get a class of bugs that are old and well-known? Hardly seems like a fair trade-off to me.
        • Re: (Score:3, Insightful)

          by Viol8 ( 599362 )
          Not necessarily. All I'm saying is that the STL can introduce bugs of its own that can be a lot harder to find than old-style buffer overruns, so it's not a solution that will get rid of obscure coding (as opposed to logic) bugs.
      • by swillden ( 191260 ) * <shawn-ds@willden.org> on Monday January 08, 2007 @10:34AM (#17507922) Journal

        whereas in a highly abstracted C++ program using the STL, with lots of objects being copied and references flying around, it can be a LOT harder to figure out what's really going on, especially since different compilers do different things under the hood.

        Those bugs aren't harder to track down than "old-style" bugs; in fact, I think they're vastly easier to track down than, say, a wild pointer. The difference is that you're less experienced at dealing with the new problems, so they seem harder to you. With time and practice, you'll see through copy/reference errors quickly. In the meantime, a little discipline can cover for the lack of experience -- never store raw pointers in collections, always "objects". If you don't want to create copies, then store objects of a smart pointer class. In fact, avoid ever using raw pointers at all. *Always* assign the result of a 'new' operation to a smart pointer (auto_ptr works for a surprisingly large set of cases, but you may need a reference-counted pointer type or similar for others -- the Boost library has some good options if you haven't already rolled your own).
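
        A sketch of the "no raw pointers in collections" rule with boost::shared_ptr (any reference-counted smart pointer would do; Widget is a made-up type):

        #include <vector>
        #include <boost/shared_ptr.hpp>

        struct Widget { int id; };

        int main() {
            std::vector< boost::shared_ptr<Widget> > widgets;
            widgets.push_back(boost::shared_ptr<Widget>(new Widget()));
            // copying the vector copies smart pointers, not Widgets, and the
            // last copy to be destroyed frees the Widget -- no delete, no leak
            return 0;
        }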

        If you really run into different behavior with different compilers, then at least one of the compilers is buggy. That does happen, but it's a lot rarer today than it was a few years ago. When you find that situation, wrap the tricky bit behind another abstraction layer and implement compiler-specific workarounds, so that your application code can just use the abstraction and get consistent behavior. In most cases, someone else has already done this work for you. Again, look into Boost.

      • by quigonn ( 80360 )
        But the number of employers where you actually get a chance to use languages like Eiffel, Ada, Dylan, ... is very small.
        • Re: (Score:2, Funny)

          by orangeyoda ( 958347 )
          Which is a good thing; Ada was awful to learn and worse to debug. I've seen the light: no more C++, spending hours decoding a megalomaniac's templates; no more Java exception hell; bye-bye VB6 error unhandling. Hello C#.
    • by shutdown -p now ( 807394 ) on Monday January 08, 2007 @07:59AM (#17506630) Journal
      Uh... let's see. Open the most recent ISO C++ standard, and search for all occurrences of "undefined". Repeat for "implementation-defined". Make a note of how many of those come from the sections related to the Standard Library. Then meditate on the results.

      Yes, sure, if you use the STL, you need not worry about getting the buffer size wrong. And that's about it - container indexing is not bounds-checked (unless you use at() instead of operator[] - and that's about the only run-time safety check I remember seeing in the STL!), and iterators can go outside their container without notice, or can suddenly become invalid depending on what their container is and what was done to it. Even leaving library issues aside, there are some nasty things about the language itself - it's just way too easy to get an uninitialized variable or class member, or to mess up the order of field initializers in a constructor.
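
      For example (a minimal sketch of the at() difference):

      #include <cstdio>
      #include <stdexcept>
      #include <vector>

      int main() {
          std::vector<int> v(10);
          // v[20] = 1;            // compiles fine; undefined behaviour at run time
          try {
              v.at(20) = 1;        // bounds-checked: throws instead of corrupting
          } catch (const std::out_of_range &e) {
              std::printf("caught: %s\n", e.what());
          }
          return 0;
      }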

      This is not to say that C++ is not a good language. All of the above are features, in the sense that they are there for a reason - but they certainly don't make writing secure software easier.

      • by Curien ( 267780 ) on Monday January 08, 2007 @11:35AM (#17508630)
        And that's exactly why so many things are "implementation defined" or "undefined". Many real-world users of C++ demand that, for instance, vector::iterator be a typedef for a raw pointer for efficiency reasons. Other equally-important users would prefer an iterator type that guarantees sensible behavior in the face of real errors. The ISO standard allows for both behaviors by conforming C++ implementations.

        There's something attractive about the Java and C# languages having all constructs so well-defined. But both of those languages could afford not to support real hardware. Both target abstract machines and are happy with the results. C++ can afford no such conceit: it thrives in high-performance, customized, and otherwise exotic environments.
  • by Ed Avis ( 5917 ) <ed@membled.com> on Monday January 08, 2007 @06:32AM (#17506156) Homepage
    You introduce buffer overflows when you deal with buffers directly. In conventional C with its standard library you're encouraged to do this rather a lot, for example many of the string functions expect you to allocate a char buffer of big enough size and pass it in. The language's arrays are just syntactic sugar for accessing raw memory, with no bounds checks.

    However, you don't have to do it like this, especially not in C++, which has a safe string class (for example) as part of its standard library. Unfortunately C++'s vector type still doesn't do bounds checking with the usual [] dereferencing - you have to call the at() method if you want to be safe. But the general principle is: don't do memory management yourself; use some higher-level library (these exist for C too) and let someone else do the memory management for you.
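
    For plain C, one such higher-level option is a growable string type, for example GLib's GString (a sketch; GLib is only one library of this kind):

    #include <glib.h>

    void demo(void) {
        GString *s = g_string_new("Hello");
        g_string_append(s, ", world");   /* the buffer grows itself as needed */
        /* use s->str as an ordinary NUL-terminated string here */
        g_string_free(s, TRUE);          /* TRUE: free the character data too */
    }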

    You can write a C++ program and be pretty confident it doesn't have buffer overruns simply because it doesn't use pointers or fixed-size buffers, but relies on the resizable standard library containers.
    • by ojQj ( 657924 ) on Monday January 08, 2007 @06:54AM (#17506286)
      Unfortunately the STL isn't binary-compatible. That means you have to make sure you've compiled with exactly the same version of the STL across all the components of your program that accept and pass strings. This in turn makes it impossible to release different parts of your program separately from each other if you are using the STL at the interface between your components.

      There are a couple of solutions to this problem:

      1.) Pass character arrays at the interfaces between your components and immediately put those character arrays under the control of your library once they come in (see the sketch at the end of this comment).
      2.) Write or find your own string library and pass that string class between program components. Be careful when doing this. Mistakes will come back to byte you.

      All of it's kind of nasty. It'd be nice if C++ standardized its binary representation, even if it's only a standard valid per platform.

      Then there's also:

      3.) Choose a language which, unlike C++, already has a standardized binary representation for strings, or a system-global interpreter for a varying binary representation. This is really just an extension of the "higher-level library which does the memory management for you" option.

      Don't get me wrong -- I'm agreeing with the parent post. I'm just adding a caveat.
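
      For what it's worth, a sketch of option 1.) (all names hypothetical): keep the component boundary plain C, and copy into your own string object immediately:

      /* public header: plain C types only, no STL at the boundary */
      extern "C" void set_title(const char *title);

      /* inside the component: take a copy at the border */
      #include <string>
      static std::string g_title;

      extern "C" void set_title(const char *title) {
          g_title.assign(title ? title : "");   // now only our own STL build touches it
      }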
      • by ruiner13 ( 527499 ) on Monday January 08, 2007 @09:35AM (#17507246) Homepage
        You say things like:

        Mistakes will come back to byte you.
        without even flinching.
    • by Viol8 ( 599362 ) on Monday January 08, 2007 @06:57AM (#17506306) Homepage
      "simply because it doesn't use pointers "

      Err, how much C++ have you written? I've yet to see any complex C++ *without* pointers, since you can't reference or use objects dynamically created with the new operator without them. Not to mention the 101 other instances where they're useful.
      • I've seen, and written, a lot of complex C++ that didn't use pointers directly. I try to avoid pointers in general - any case where I use pointers is carefully hidden behind abstraction.

        (This doesn't count pointers-as-function-parameters, as long as they're not stored anywhere. I use those pretty often. But I've found that generally stored pointers are just plain difficult to deal with properly, unless ownership and invalidation semantics are utterly 100% clear, and even then they're tough.)
    • I think there's a qualifier that needs to be added here. A C++ program is safe if you use standard C++ coding methods, the classes you are using are safe, and you don't have to write your own classes. Having the STL classes helps a lot here, but there may be situations where one might want an alternative data storage class, which might require the coder to do memory management.
    • by rucs_hack ( 784150 ) on Monday January 08, 2007 @09:33AM (#17507230)
      I code predominantly in C, and I find very few problems with allocating my own string buffers and so on. Doing your own memory allocation/deallocation does not instantly mean you have a program full of buffer overflows and security holes, although many people seem to assume this is the case.

      What does that is rushed code, poor design and inadequate testing. These feature heavily in the vast majority of commercially produced code I've seen. Frankly, most of what I've seen is horrifically bad. With code of such low quality, C should be avoided, but that's not C's fault, it's crap house coding rules. C is elegant, minimal, and mindbendingly fast. This does not mark it as ideal for enterprise tools, but it does have a place there, for time-intensive operations.

      It is extremely easy to ensure that buffers in C have strictly limited inputs and do not overflow. It's also easy not to do this, which is faster. That, I suspect, is where most of the problems come from.

      Open source code used in the enterprise seems nowadays to be starting to suffer from problems similar to the commercial code I've seen, although commenting schemes are better. The problem seems to me to be a feeling that things must be pushed forward to compete. That isn't a good plan. Slower development, more testing before actual deployment, and less feature creep are what is needed.
    • For every string function, there's an equivalent that will only perform the operation on the first n bytes. If you're working with a C library that's old and doesn't have such a convenience, you can always wrap it with a call that does.
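
      A sketch of such a wrapper (the name is made up):

      #include <string.h>

      void copy_bounded(char *dst, size_t dstlen, const char *src) {
          if (dstlen == 0) return;
          strncpy(dst, src, dstlen - 1);
          dst[dstlen - 1] = '\0';   /* strncpy does NOT terminate when src is too long */
      }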

      The real problems come into play when you're using a 3rd-party library. You can always police your own code, but it's hard to police / fix others' code. Open source libraries are great for this in general, but there's not always an open source solution for connecting to propriet
  • Easy (Score:5, Funny)

    by $pearhead ( 1021201 ) on Monday January 08, 2007 @06:33AM (#17506166)
    Just make sure your buffers are really really REALLY big:

    char nooverflowbuffer[234523400];

    sprintf("Enter something:");
    scanf("%s", nooverflowbuffer);
    ... or maybe not ...
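
    For the record, the boring version bounds the read instead of hoping the buffer is big enough (a sketch):

    #include <stdio.h>

    int main(void) {
        char buf[256];
        printf("Enter something:");
        if (fgets(buf, sizeof buf, stdin) != NULL) {
            /* fgets writes at most sizeof buf bytes, including the '\0' */
        }
        return 0;
    }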
  • Security (Score:2, Insightful)

    by El Lobo ( 994537 )
    It is hard to be sure that your code has no bugs or security holes. That's because even the Hello World program implicitly uses a lot of libraries/dependencies that are not written by you and are sometimes very complex. For example, writing Hello World to the console invokes a string-handling unit in any high-level language. String-handling units are complex in themselves, and there may be a lot of bugs there that can affect your code's security if they are exploited.

    Writing in C/C++ doesn't do the

  • Easy, I never ever run the program.
    • But can you discount the possibility that your program may already be running (or not running) on a quantum computer somewhere?
    • Easy, I never ever run the program.

      You'd make a great Doctor:

      It hurts when I do this.... ouch!

      Don't do that then.
  • by Llywelyn ( 531070 ) on Monday January 08, 2007 @06:36AM (#17506184) Homepage

    0) Don't "roll your own" security unless absolutely necessary. Find someone else's implementations and work with those.

    1) Design the code for security, and code to that design. I've seen security bugs creep into code because it was never designed to be secure.

    2) Use static code checkers--such as Splint [splint.org] for C/C++ and FindBugs [sourceforge.net] for Java--that look for security vulnerabilities (see the annotation sketch below).

    3) Peer reviews/code audits. Sit down with your code (and have others who know how to look for security vulnerabilities sit down with your code) and do a full review.

    Nothing is foolproof, but every little bit helps. It should be noted that all of the above also improve the overall quality of the code and reduce the number of bugs overall: finding existing implementations of features can reduce maintenance and reduce bugs; designing the code and putting it through a proper design review can catch a lot of logic problems and ensure that the code fits the requirements list--I've seen a huge number of synchronization bugs in Java simply because the author didn't know how to use synchronization properly; static code checkers find a lot more than just security bugs; and peer reviews/code audits can help isolate a variety of problems.
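
    As an aside on point 2, Splint works best when you meet it halfway with annotations; a sketch (the declarations themselves are made up):

    /* Splint-style annotations let the checker verify NULL handling */
    /*@null@*/ char *find_user(const char *name);    /* may return NULL */
    void greet(/*@notnull@*/ const char *name);      /* must never be passed NULL */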

  • by tuxlove ( 316502 ) on Monday January 08, 2007 @06:36AM (#17506186)
    Anyone who develops software knows the axiom - the number of bugs discovered in any piece of software is directly proportional to the amount of testing you perform on that software. From this, it follows that you can keep testing forever and at best only asymptotically approach bug-free code. Sounds hyperbolic, but I've observed it to be true in my experience. And as long as there are bugs, there are bound to be security bugs.

    You can only minimize the risk that security issues will be found with any software. The best way to do this is to perform a rigorous code audit, preferably by security professionals. And if you can, make the software open source. You get a lot more eyes staring at it for free that way.
    • Re: (Score:3, Informative)

      by Rogerborg ( 306625 )
      Another issue with (manual) testing is that testers tend to pursue bugs aggressively in whatever area they first happen to find some, which means you get good depth coverage, but can end up missing out on testing whole areas of functionality.
    • by TheRaven64 ( 641858 ) on Monday January 08, 2007 @08:29AM (#17506804) Journal
      Don't trust your own code. The reason OpenBSD is secure is partly because the code is security-audited constantly, but also partly because much of the system is written on the assumption that the rest of it is buggy. Isolate your code as much as possible. If you can get away with it, fork off separate modules and communicate between them over a well-defined interface. Validate everything that is received. Don't let any of your code run with more privileges than it needs; make good use of chroot and setuid. If you don't need to be able to access anything on the filesystem, then the first thing you should do is make an empty directory and chroot there; that way an attacker who compromises your code can't do anything useful.
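
      The classic shape of that advice in C (a sketch: error handling is abbreviated, and /var/empty is just the conventional empty directory):

      #include <stdlib.h>
      #include <unistd.h>

      void drop_privileges(uid_t uid, gid_t gid) {
          if (chroot("/var/empty") != 0 || chdir("/") != 0) exit(1);
          if (setgid(gid) != 0) exit(1);   /* drop the group first, while still root */
          if (setuid(uid) != 0) exit(1);
          if (setuid(0) == 0) exit(1);     /* paranoia: if root can be regained, abort */
      }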

      The best advice I read was from the Erlang documentation. It suggested that you program defensively on a system level, but not on a module level. If a module receives input it can't understand, or thinks it is in an invalid state, the correct behaviour is for it to crash. A system of monitors should deal with failures of components, because they can determine how the failure will affect other components. There has only been one remote root hole in OpenBSD in the last ten years, and it would have been avoided if the OpenSSH developers had used this principle.

      • Let's not forget their wonderful documentation! Complete and accurate API documentation is absolutely necessary for writing secure and reliable software. And of course the programmers should actually read the documentation and check all the details of the API calls they are using (return values, etc...)!

    • by shrykk ( 747039 )
      tuxlove wrote [slashdot.org]:

      Anyone who develops software knows the axiom - the number of bugs discovered in any piece of software is directly proportional to the amount of testing you perform on that software.

      With respect, I suspect this is a not-quite-appropriate extension of the maxim applied in manufacturing processes, that you can't test defects out of an item. This applies to pulling mass-produced objects off the assembly line and checking they perform to specification. You can test more items, and you can test them more rigorously, but this will keep increasing your defect rate. (You have to actually improve your processes, which leads to the continuous improvement idea, six sigma quali

  • by mangu ( 126918 ) on Monday January 08, 2007 @06:36AM (#17506188)
    Why do people keep this meme that C/C++ is so insecure? Remember, deep down inside the other languages, there often is a compiler, library, interpreter, etc written in C/C++.


    It's not that C/C++ is so insecure by itself, the problem is that programmers may not have used the best programming practices. There are plenty of libraries for handling strings and memory allocation in C, in C++ there are string and storage classes that do as much or as little checking as you need.


    When you are an expert programmer there are places where you need more efficiency than the super-safe string routines can give you. It's the job of the expert to determine exactly how to balance efficiency against security, and only C/C++ can give you this balance.

    • by Anonymous Coward on Monday January 08, 2007 @06:48AM (#17506256)
      'It's not that C/C++ is so insecure by itself'

      yeah a gun by itself is not insecure either....
      try giving it to a baby.....
      well I prefer a baby with a knife...I can still run faster than him...
      • Re: (Score:3, Insightful)

        And that's the problem these days. Too many babies using C/C++.
      • Re: (Score:3, Insightful)

        by canuck57 ( 662392 )

        yeah a gun by itself is not insecure either.... try giving it to a baby.....

        There is the crux of the C/C++ problem: we give an Uzi to 3-year-olds without the training and knowledge. 9 out of 10 C/C++ programmers I have ever interviewed failed to properly explain how pointers work. Those that did answer the pointer questions correctly tended to have programmed more securely than those who just placed * / & / ** from memory.

        It also comes down to money, a good C/C++ programmer isn't cheap.

    • by Viol8 ( 599362 ) on Monday January 08, 2007 @07:00AM (#17506318) Homepage
      C/C++ are very powerful languages because they let you do pretty much whatever you like. But with this freedom comes the ability to shoot yourself in the foot badly, whether through design errors, sloppy programming or just genuine mistakes. Personally I don't mind this risk, but other people (usually the types who knock C/C++) can't really function in an environment that doesn't hold their hand and protect them from their own mistakes. Horses for courses.
      • Re: (Score:3, Insightful)

        Personally I don't mind this risk, but other people (usually the types who knock C/C++) can't really function in an environment that doesn't hold their hand and protect them from their own mistakes.

        There are plenty of us who are perfectly capable of functioning in that environment but choose not to, preferring to focus mental energy on algorithms rather than silly implementation details like whether that pointer I've got points to something stack-allocated or heap-allocated. Besides that, I do mind the risk,

    • Re: (Score:3, Interesting)

      by TheRaven64 ( 641858 )

      Remember, deep down inside the other languages, there often is a compiler, library, interpreter, etc written in C/C++.

      Not in the case of Smalltalk. The Squeak VM is written in a subset of Smalltalk which is compiled into native code by a compiler written in Smalltalk. Most of the Java VM and compiler are, I believe, written in the same way.

      There are plenty of libraries for handling strings and memory allocation in C, in C++ there are string and storage classes that do as much or as little checking as you need.

      Once you have added enough to a language that it no longer looks like the original, then it's time to ask yourself if you picked the correct language for the job to start with. I could write a dynamic dispatch mechanism with inheritance for use in C, but I would start to wonder if I

    • Re: (Score:3, Interesting)

      by CDarklock ( 869868 )
      > Why do people keep this meme that C/C++ is so insecure?

      Because bugs don't belong to programmers, they belong to code.

      Imagine the difference between "I fixed a Linux kernel bug", which earns you much respect from the community, and "I fixed one of Linus Torvalds' bugs" - which is a rather offensive thing to say.

      So while the insecurity is the programmer, not the language, we can't blame the programmer. It's simply not acceptable.
    • by arevos ( 659374 ) on Monday January 08, 2007 @10:00AM (#17507530) Homepage

      Why do people keep this meme that C/C++ is so insecure? Remember, deep down inside the other languages, there often is a compiler, library, interpreter, etc written in C/C++.

      C and C++ have a larger domain that can suffer from buffer overflows than languages with automatic memory management. In C, a buffer overflow can potentially occur at any point in your source code. In a language which automatically manages memory and checks bounds, the possible points at which buffer overflows can occur are reduced. This does not necessarily make the application more secure, but it does mean that there are fewer points at which it can be compromised.

      When you are an expert programmer there are places where you need more efficiency than the super-safe string routines can give you. It's the job of the expert to determine exactly how to balance efficiency against security, and only C/C++ can give you this balance.

      I'm not sure that the efficiency increase from dropping boundary checks is often necessary, except possibly in high-end 3D games. Also, many languages allow for binary libraries written in C, so it would be possible to write an application in C#, Python, Ruby or whatever, and farm out any efficiency-critical routines to a library.

    • Re: (Score:3, Informative)

      by Decaff ( 42676 )
      Why do people keep this meme that C/C++ is so insecure? Remember, deep down inside the other languages, there often is a compiler, library, interpreter, etc written in C/C++.

      Which is irrelevant. That code can be thoroughly tested and safe, even with the fundamental issues of C++. What matters is your code. You probably won't get the chance to test that code thousands or millions of times the way the compiler/library or interpreter has been.

      It's not that C/C++ is so insecure by itself, the problem is that
  • Grammar (Score:2, Insightful)

    by noz ( 253073 )
    Marucs Ranum notes taht [...]
    Marucs' code is more secure than Zonk's editing.
  • Some possibilities (Score:3, Insightful)

    by AArmadillo ( 660847 ) on Monday January 08, 2007 @06:44AM (#17506232)
    You cannot know for sure (unless you want to develop code by mathematical proof, which requires a considerable amount of effort). However, you can do some things to help prevent buffer overflows and security problems in general:

    - Encapsulate all buffer access, and make the interface overflow-safe. Then you need only ensure your encapsulation is secure (see the sketch below).
    - Use a static code analysis tool that detects buffer overflows. I do not know of any open source ones off the top of my head, but I remember seeing an article on Slashdot a few months ago about a new open source static analysis tool.
    - Avoid unsafe functions. Nearly all standard C functions that deal with buffers are unsafe (that is, a typo or oversight can give you a difficult-to-detect buffer overflow); sprintf and strcpy are common culprits off the top of my head. If you're writing for Windows, the Microsoft extensions to the standard library have equivalent 'secure' functions (usually postfixed with _s). I do not know if there is an open source equivalent.
    - Use your compiler's buffer overrun detection. I think this is -fmudflap for gcc.

    That's all I can think of for now.
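
    A sketch of the first point: wrap the buffer so every access goes through a checked interface (the class name is made up):

    #include <cstddef>
    #include <stdexcept>
    #include <vector>

    class SafeBuffer {
    public:
        explicit SafeBuffer(std::size_t n) : data_(n) {}
        unsigned char &at(std::size_t i) {              // the only way in or out
            if (i >= data_.size())
                throw std::out_of_range("SafeBuffer index");
            return data_[i];
        }
        std::size_t size() const { return data_.size(); }
    private:
        std::vector<unsigned char> data_;
    };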
  • How Do You Know Your Code is Secure?


    Easy! It doesn't run :)

  • Valgrind (Score:5, Informative)

    by chatgris ( 735079 ) on Monday January 08, 2007 @06:50AM (#17506266) Homepage
    By using valgrind. It's a virtual machine of sorts that runs your code and checks for any memory problems at all, including use of uninitialized memory. Combine that with thorough test cases, and you can be virtually assured that you have no memory errors in your C/C++ code.
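
    A typical invocation (memcheck is the default tool anyway):

    valgrind --tool=memcheck --leak-check=full ./myprogram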

    However, security is a lot more than buffer overflows... but at least it brings you up to the relative security of Java, with speed to boot.
    • Nope. Valgrind will find crashing bugs, not security issues.

      It can only find actual overflows as they happen, not potential overflows. You need code analysis to do that - there are a number of tools on the market that can do that kind of analysis (not sure if there are any free/OSS ones though... never seen any).
  • Assume failure (Score:5, Insightful)

    by zCyl ( 14362 ) on Monday January 08, 2007 @06:53AM (#17506282)
    Every function should be designed with the assumption that its input is faulty, and should have safe failure modes for every possible value and all possible content. Any unsafe external libraries must be wrapped in handlers which verify the data being passed to them, with a similar mindset. Do not EVER presume data will be of a certain form, or that a function will be used a certain way. If sequential routines become so long that you cannot immediately verify, in your head, correct behaviour and the absence of buffer overflows, stop and look for a way to break them down into smaller abstract pieces.
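
    For instance, that mindset applied to something as small as parsing a port number (a sketch; the names are made up):

    #include <string>

    // assume the input is faulty; fail safely on anything unexpected
    bool parse_port(const std::string &s, int &port_out) {
        if (s.empty() || s.size() > 5) return false;      // reject pathological sizes
        int value = 0;
        for (std::string::size_type i = 0; i < s.size(); ++i) {
            if (s[i] < '0' || s[i] > '9') return false;   // digits only
            value = value * 10 + (s[i] - '0');
        }
        if (value < 1 || value > 65535) return false;     // outside the valid port range
        port_out = value;
        return true;
    }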

    Combine this mentality with the usage of safe classes as datatypes whenever possible, so that you can wrap your input verification into the functionality of the classes. If prudent, wrap external library routines in classes which manage the interaction with them, and which verify the data content being passed.

    Use test suites to test every component of your program, and be sure to include invalid and pathologically insane input in your test suites.

    Do not trade security for efficiency. And don't forget to cross your fingers.
    • Do not trade security for efficiency. And don't forget to cross your fingers.

      I disagree on this point. Only trade efficiency for security when you know what you are doing. If you are writing HPC code that is going to run on a private supercomputer or cluster, or code that will be run in a VM which is then thrown away, you can do this. Whenever you do, however, add a #warning line reminding users of the code where it's insecure so that anyone who wants to take the code and use it in production will know what to fix.

  • by Anonymous Coward on Monday January 08, 2007 @07:00AM (#17506316)
    I let my code have evident, gaping security flaws and make them well known. This way people will never use it in situations where security matters.

    regards,
    The author of sendmail
  • String overflows (Score:3, Informative)

    by Rik Sweeney ( 471717 ) on Monday January 08, 2007 @07:08AM (#17506344) Homepage
    I think for some people, moving from using a language like Java to using C can cause them a multitude of problems since there's no bounds checking by default and overruns aren't caught.

    For example, I recently fixed a bug in Blob And Conquer to do with strings; the code was something like this:

    char nm[2];   /* two chars, but no room for a '\0' terminator */

    nm[0] = mission[11];
    nm[1] = mission[12];

    The code then went on to doing a

    missionNum = atoi(nm);

    Most of the time, this'd work OK because of the way atoi works. Other times though it'd stray off into other memory and pick up a random number and return a three or more digit number instead.

    Obviously there's an easy way to fix it.
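
    (For reference, the easy fix is to make room for, and write, the terminator that atoi expects:)

    char nm[3];                 /* one extra byte for the terminator */

    nm[0] = mission[11];
    nm[1] = mission[12];
    nm[2] = '\0';               /* atoi needs a NUL-terminated string */

    missionNum = atoi(nm);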
    • That's just college-level programming though... nobody with any experience would make that kind of mistake.

      99% of the time that code would just crash, or the compiler/runtime would throw up an error saying what you'd done. If anyone actually committed something like that on my watch they'd be in trouble.

  • by Sub Zero 992 ( 947972 ) on Monday January 08, 2007 @07:14AM (#17506368) Homepage
    How do you validate code for correctness? Well, either you use some cool formal specification language, such as Z, and then spend a great deal of time and effort validating (which is actually very advisable for critical code in, say, device controls for medical equipment) or you use blind luck and "proven" techniques, collectively known as Good Programming Practice.

    Traditionally it has been considered important to "specify and validate" requirements meticulously, in the belief that this was the way to write good code. This is partly true, but that way your process can quickly turn into a dinosaur - stifling change and preventing improvement because of non-compliance with "The Requirements".

    You can try "defensive coding", which really treats all messages with great suspicion, messages being an old term for parameters. This is a cool technique, but can lead to slower code than necessary, and can lead to some bug being buried if code attempts to heuristically correct for "bad" messages (there is rarely any way to formally specify what is "bad"). You can use LINT tools (and there are very many, very sophistacted tools) which will catch a whole lot of stuff before it leaves the developer's screen. You can try practices such as pair programming and independent code inspection. On the coding side, you can even try (gasp) such methods as test driven development and contract based development.

    On the testing side, there is nothing quite like having an experienced, qualified, motivated and _empowered_ testing team. A testing team which knows how to find bugs, knows how to communicate with coders, and has the power to stop defects from going into production. A technique I particularly like is defect insertion: secretly insert 10 bugs into the code base and see how many get squashed; this will give you an estimate of how many defects your process doesn't find. There are other cool techniques too, some based on mathematical analysis of the code's attributes - the more complex the code, the costlier it is to maintain.

    Opening up the codebase to many people might well increase the chance that someone will find the line which causes an error - but IMHO no one goes around looking for bugs unless they are looking for weaknesses. And there we have another (unethical) method - pay some hacker doodz to 'sploit your code. Hopefully they will not find a higher bidder LOL.

    All of these methods are likely to increase development effort and cost, decrease the number of defects, increase user satisfaction, decrease maintenance costs, and increase well-being and harmony. So it is a trade-off; perfect code is incredibly difficult to create - the question is what level of perfection you (and your customers) are willing to pay for. Problems mostly arise when expectation does not meet reality - some flakiness in an F/OSS application suite is more acceptable to me than random crashes in software which cost me hundreds - or tens of thousands - or millions - of dollars.

    In order to increase some quality aspect of code (security, performance, robustness, correctness...) one can therefore focus on one or several categories - the people, the process, the culture, the tools, the techniques, the time & cost, etc. The choice of what to focus on is dictated by reality: no one has unlimited resources (except, almost, Google).

    There is no silver bullet - but there are golden rules. Finding people who know the difference is crucial I believe.

    (Full disclosure: Yeah, I'm looking for heavy duty PM work :)
  • by forgoil ( 104808 ) on Monday January 08, 2007 @07:24AM (#17506428) Homepage
    In this case it doesn't in one important way. Programming is the same regardless of language, since humans are the same regardless of language. What you need to write good software (again, why just secure? Why focus on a certain aspect, why not just generalize?) is skills/knowledge and good habits. My best advice here is to make sure you give yourself good coding habits. Don't say things like "I'll clean that up later" or "I will add error checks later" or something equally damaging. Give yourself good, sensible, habits and follow them. Any average programmer must know what buffer overflow means, and how to correct it. You can't even be an average programmer unless you know. So why is such insecure code written in C/C++ then? My thinking is plain mistakes and bad habits.
  • You don't know your code is secure. You just know that it handles certain test cases apparently correctly.

    (Ok... Silly examples like "while(TRUE);" are partially correct, because they never terminate, and thus you can't tell they handle the test cases incorrectly.)

    It's like scientific theories. You will never know if a scientific theory is entirely correct. You can just point to the test cases you have thrown at the theory which it was able to handle, and to the results you got from using the theory. It still
    • by gkhan1 ( 886823 )
      This is not strictly true. Scientific theories cannot be proven 100% true because they rely on inductive logic, throwing test cases at them as you say. However, programming is fundamentally a branch of mathematics (remember that every program in a Turing-complete language is essentially just a Turing machine), and mathematics is not inductive, it's deductive. From that you can, in fact, prove that it would be impossible for code to buffer overflow (given that the compiler, OS or hardware isn't malfunctioning
      • by Sique ( 173459 )
        To add an infinite regress:

        If you can't be sure you coded correctly in the first place, how can you be sure to at least write down the formal proof correctly?

        (And scientific theories don't need to rely on inductive logic, as Karl R. Popper pointed out.)
  • by Berkana ( 471619 ) on Monday January 08, 2007 @07:30AM (#17506466) Homepage
    If you program using strictly functional programming, you can not only verify that your code is 100% secure, you can even automate the process. (Preferably in a functional programming language such as Scheme, Caml, Haskell, Lisp, or Erlang; imperative languages make it very difficult/slow to do with functions what functional languages do very naturally and easily.) Purely functional code can easily be subjected to automated code auditing, whereas auditing imperative code cannot be guaranteed to catch every bug and unintentionally available abuse.

    Here's why, and why just about any computational problem can be solved using FP (functional programming):
    Functional languages conform to lambda calculus, which has been shown to be Turing-equivalent, meaning that any program that can be computed on a Turing machine can also be expressed in lambda calculus. So long as you program using strictly functions, your program can be verified according to the rules of lambda calculus, and the verification would be as sure as a mathematical proof. This is the only sure way I know of really knowing with mathematical certainty that your application is secure.

    Pure functional programming has no assignment statements; there are no state changes for you to keep track of in your program, and in many cases abuses resulting from unintended changes of state are the root of security problems. This is not to say that there is no state in functional programming; the state is maintained through function call parameters. (For example, in an imperative programming language, iteration loops keep track of a state variable that guides the running of the loop, whereas a functional program never actually keeps track of state with a variable that changes value; a functional program would carry out iteration by recursion, and the state is simply kept as a parameter passed to each call of the function. No variable with changing state is ever coded.)
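
    The iteration-by-recursion point, sketched in C-style code for contrast (a pure function; nothing is ever reassigned):

    /* the loop "state" lives only in the parameters, never in a mutated variable */
    int sum_to(int n, int acc) {
        if (n == 0) return acc;           /* base case */
        return sum_to(n - 1, acc + n);    /* the recursive call carries the new state */
    }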

    Since functional programs lack assignment statements, and assignment statements make up a large fraction of the code in imperative programs, functional programs tend to be a lot shorter for the same problem solved. (I can't give you a hard ratio, but depending on the problem, your code can be up to 90% shorter when described functionally.) Shorter code is easier to debug, which helps in securing code. The reason functional code is so much shorter is that functional programming describes the problem in terms of functions and composition of functions, whereas imperative code describes a step-by-step solution to the problem. Descriptions of problems in terms of functions tend to be far shorter than algorithmic descriptions of how to solve them, which is what imperative code requires.

    Here's the biggest benefit of managing complexity with functional programming: as a coder, you NEVER have to worry about state being messed with. The outcome of each function is always the same so long as the function is called with the same parameters. In imperative programming as done in OOP, you can't depend on that. Unit testing each part doesn't guarantee that your code is bug free and secure because bugs can arise from the interaction of the parts even if every part is tested and passed. In functional programming, however, you never have to deal with that kind of problem because if you test that the range of each function is correct given the proper domain, and pre-screen the parameters being passed to each function to reject any out-of-domain parameters, you can know with certainty where your bugs come from by unit testing each function.

    If you need to guarantee the order of evaluation (something that critics of FP advocates sometimes use to dismiss FP advocacy), you can still use FP and benefit: in functional programming, order of evaluation can be enforced using monads. Explaining how is beyond the scope of a mere comment though, but in any case, if you need really reliable code, consider using a functional programming style.

    I can't do justice to the matter here; for more information, see th
    • If you want to learn about lambda calculus (which was developed by Alonzo Church, a contemporary of Alan Turing), Wikipedia is a good place to start ( http://en.wikipedia.org/wiki/Lambda_calculus [wikipedia.org] ), but mastering lambda calculus is not necessary; first master a functional programming language, and a lot of the lambda calculus will come more easily.

      To summarize, here's how you verify with mathematical certainty that a functional program is secure:
      1. You use purely functional code; that guarantees that there are
    • by TheRaven64 ( 641858 ) on Monday January 08, 2007 @08:48AM (#17506918) Journal
      When you start to introduce concurrency into a functional program, you end up with something closer to Pi calculus. The verification of both Pi and Lambda calculus expressions is still an open research problem (being worked on by some very bright people on the floor below me). There are a huge number of problems, not least of which in my mind is Gödel's incompleteness theorem, which states (roughly) that any system can only be completely described by a system more complicated than itself. You can generate a proof from your lambda program, but you still need to verify it and the proof will be more complicated (and thus harder to verify) than your original problem.

      There is also the question of what the proof actually says. You can't prove, for example, whether a lambda program will terminate (Halting Problem), and in fact you can prove that you can't prove this. If you have a sufficiently well expressed specification for your program, you can verify that the program and the specification match. Unfortunately, if you have a specification that concrete, you can just compile it and run it.

      By the way, Scheme is not a functional language. It has a number of properties that make it possible to write functional code, but saying Scheme is a functional language is like saying C++ is an object oriented language.

      • Re: (Score:3, Informative)

        by Coryoth ( 254751 )
        There are a few misconceptions here which deserve comment.

        You can't prove, for example, whether a lambda program will terminate (Halting Problem), and in fact you can prove that you can't prove this.

        This simply isn't the case - there are lots of programs for which you can easily prove termination. The catch with the Halting problem is that you cannot find a procedure that will work for all programs. In other words you may find yourself in a situation where you cannot prove termination for certain programs;

  • by JHWH ( 1046444 )
    You can write code that is as secure as you want, but what about libraries, compilers and hardware?
    I think the question itself makes little sense without a deeper investigation of the whole system!
  • Coding 101 (Score:5, Insightful)

    by Tom ( 822 ) on Monday January 08, 2007 @08:03AM (#17506652) Homepage Journal
    We all know the answer if we've studied computer science. The problem is that the answer is boring, lots of work and totally non-hip.

    It's specifications, pre- and post-conditions, all that "theoretical bullshit" we learned in university. It's just that writing code that way is very un-exciting, and that's a vast understatement.
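
    Even without the full formal apparatus, the idea survives as executable checks (a sketch):

    #include <cassert>

    // precondition: x >= 0; postcondition: result is the integer square root of x
    int int_sqrt(int x) {
        assert(x >= 0);                                  // precondition
        int r = 0;
        while ((long long)(r + 1) * (r + 1) <= x) ++r;
        assert((long long)r * r <= x &&
               (long long)(r + 1) * (r + 1) > x);        // postcondition
        return r;
    }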
    • Re: (Score:3, Insightful)

      by Coryoth ( 254751 )

      It's specifications, pre- and post-conditions, all that "theoretical bullshit" we learned in university. It's just that writing code that way is very un-exciting, and that's a vast understatement.

      That depends, really. As a math geek I find a certain amount of pedantry and formalisation natural. I mean, many people are happy to spend the extra time writing annotations to define type signatures for functions (and even types for variables in some languages), which is, really, just a light form of specification. U

  • by joss ( 1346 ) on Monday January 08, 2007 @08:08AM (#17506692) Homepage
    Helped a lot for this kind of thing. The tool went downhill quite a long way, but it's still useful. Electric Fence helps too. Then a lot of old-fashioned software engineering: use raw arrays as little as possible, add bounds checking to std::vector [] if you feel inclined, use gprof to identify any code not being exercised by your unit tests (you do have unit tests, right?). Lastly, actually read the darn code and make sure that any time you use raw arrays you check the size.
  • by DoofusOfDeath ( 636671 ) on Monday January 08, 2007 @08:12AM (#17506712)
    How Do You Know Your Code is Secure?

    Make it part of the critical path in music DRM. Then you know it's not secure.

    Not sure about the flip-side, though.

  • by GroovBird ( 209391 ) * on Monday January 08, 2007 @08:20AM (#17506750) Homepage Journal
    ...you can ship it.

    It's that simple!
  • Who needs security? Thanks to the DMCA, all I have to do is keep my code proprietary. Then it's illegal for people to hax my code, so it won't happen!
  • SPARK (Score:5, Insightful)

    by Rod Chapman ( 781256 ) on Monday January 08, 2007 @08:41AM (#17506882)
    For high-integrity stuff, we use SPARK (http://www.sparkada.com/ [sparkada.com]) - a design-by-contract subset of Ada95 that is designed from scratch for verification purposes.
    The verification system implements Hoare logic and is supported by a theorem prover. Buffer overflow is only one of many basic correctness properties that can be verified. The properties that can be verified are limited only by what can be expressed as an assertion in first-order logic.
    SPARK is a small language (compared to C++ or Java...) but the depth and soundness of verification is unmatched by anything like FindBugs, SPLINT, ESC/Java or any of the other tools for the "popular" languages.
    (If you don't know or care what soundness is in the context of static analysis, then you've probably missed the point of this post... :-) )
    - Rod Chapman, Praxis
  • "Marucs Ranum notes taht 'It's really hard to tell the difference between a program that works and one that just appears to work.'
    it's hard to tell the difference between being human, and appearing to be human... oh, wait -- there's a misspelling. i think he's real... or maybe just a robot with bad programming.
  • by Random Walk ( 252043 ) on Monday January 08, 2007 @10:06AM (#17507596)
    Although many (if not most) open-source apps are written in C/C++, there are no really useful open source tools to check C/C++ code for security:
    • valgrind is very nice, but only reports memory corruption if it actually occurs (i.e. you have to trigger the bug first). Not very useful for finding latent bugs.
    • splint doesn't understand the flow of control, thus it needs tons of annotations to work properly. A royal PITA if you work on existing code. Also, it just shifts the problem: how do you now prove that your annotations are correct? Besides, it produces tons of spurious warnings.
    • flawfinder, rats, et al. just grep the code for suspicious functions like strcpy(). They don't understand C/C++, and thus produce warnings even in cases where it's perfectly clear that these functions are used safely.
    • some academic projects (e.g. Uno, CCured, ...) look interesting, but usually don't work on nontrivial code (at least not unless you are part of the development team and know the required wizardry to make them work). Also, most academic projects go into limbo as soon as the thesis is written.
    I think one of the major problems is that commercial vendors such as Coverity offer free service at least to major open-source projects, thus stifling any initiative to produce open-source counterparts of such tools.
  • Why would I want to? (Score:3, Interesting)

    by jc42 ( 318812 ) on Monday January 08, 2007 @10:55AM (#17508116) Homepage Journal
    I think it's a bad mistake to make your code secure. If you look at sales figures, you see that sales are inversely proportional to security. So customers don't want secure computer software. If they wanted that, they'd buy it. Clearly, what people want is the most insecure software they can get.

    I say go with The Market, and write the most insecure software you can. Securing your software will only waste your time and decrease your sales.

  • by natoochtoniket ( 763630 ) on Monday January 08, 2007 @11:23AM (#17508464)

    I think it was Knuth who said, "In theory, theory and practice are the same. In practice, they are not."

    In theory, for any nontrivial program, you cannot know absolutely that it is secure. You cannot even know that it will terminate. Turing showed that there is no algorithm which will decide whether an arbitrary program halts. Most other questions about program behavior can be reduced to halting. (Just place a call to exit() immediately after the code that exhibits the behavior in question.) In general, there is no way to prove that a program has any particular property that can be reduced to a termination property.

    The choice of language does not matter, either. Turing used a language that was very primitive, even compared with the simplest assembly languages. But Turing's language is equivalent in computing power to every modern general-purpose programming language. The Church-Turing thesis is widely accepted as valid, though a proof in the strict sense cannot be written. So Turing's mathematical proof of the halting theorem is valid for every modern programming language.

    There are some programs for which we do know that the program is correct. Such programs are all very small, solve well-defined mathematical problems, and are written in well defined functional programming languages. These proofs depend on very careful, mathematical definitions of the programming language, and of the function to be computed. The programming language is, strictly, an algebra. The proofs simply show that the algebraic formula (the program) transforms the algebraic input to the correct algebraic output. In every case, such proofs are quite difficult and tedious. And, as noted above, they are not possible in the general case.

    In practice, we can apply methods that are known as "engineering". That is, we can apply logic, design, inspection, review, and testing to develop some amount of confidence that a program will behave as expected. But engineering methods do not provide certainty; they only provide high confidence. The choice of language and tools has some effect on the ease or difficulty of doing the engineering work, but does not change the boundaries of what is possible.

    How do we "know" that a bridge will not fall down. There are no proofs of bridges. There is only engineering. Engineers apply logic, experience, design, inspection, reviews, and tests, so that they can have confidence in the design. The confidence is based on statistics. For a given shape of steel or concrete, we can measure loads that cause the steel to fail, and we can measure the variance in those loads due to the manufacturing tolerances of the material. When we use that shape and material to build the bridge, we can have statistics about how much load the bridge can support without failing. But even with all that engineering, sometimes bridges do fall down. The load measurements are only statistics, not proofs. There is always a confidence interval around every measurement, and the confidence can never be 100 percent.

    We can never have absolute proof of any property of any real, nontrivial program. We can have confidence as close to 100 percent as we want, if we spend enough effort on the engineering.

  • Why bash C/C++? (Score:3, Insightful)

    by DigitAl56K ( 805623 ) on Monday January 08, 2007 @12:46PM (#17509642)

    As a C/C++ developer I am a little offended by the article summary. Certainly C/C++ has a lot of flexibility that allows bad developers to write bad code. However, many other languages, e.g. Java, allow bad programmers to write code that looks good because of stronger type checking, reduced use of pointers and the like. Nothing stops a bad developer from writing insecure code in any language. Maybe you don't manage your resources correctly. Maybe you do a bad job of implementing encryption/protected storage. Maybe your authentication scheme is weak, your site is vulnerable to cross-site scripting, or your session data can be easily spoofed.

    Secure code is not a product of language, it's a product of developers who take the time to fully understand the tools that they are using to build the product, including the ins and outs of their language of choice and its key risk elements, and who research risk elements for all other parts of the system.

"If it ain't broke, don't fix it." - Bert Lantz

Working...