
Do Static Source Code Analysis Tools Really Work?

CmdrTaco posted more than 4 years ago | from the if-you're-stupid-they-do dept.

Programming | 345 comments

jlunavtgrad writes "I recently attended an embedded engineering conference and was surprised at how many vendors were selling tools to analyze source code and scan for bugs, without ever running the code. These static software analysis tools claim they can catch NULL pointer dereferences, buffer overflow vulnerabilities, race conditions and memory leaks. I've heard of Lint and its limitations, but it seems that this newer generation of tools could change the face of software development. Or could this be just another trend? Has anyone in the Slashdot community used similar tools on their code? What kind of changes did the tools bring about in your testing cycle? And most importantly, did the results justify the expense?"


First! (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#23463692)

first?

In Short, Yes (5, Informative)

Nerdfest (867930) | more than 4 years ago | (#23463724)

They're not perfect, and won't catch everything, but they do work. Combined with unit testing, you can get a very low bug rate. Many of these (for Java, at least) are open source, so the expense is negligible.

Re:In Short, Yes (1, Insightful)

Anonymous Coward | more than 4 years ago | (#23463788)

The proper answer would be: No. A fully working static code analyzer would amount to solving the Halting Problem, which has been proven to be impossible. Essentially you can just try to catch as many potential problems as you can, but you can never catch them all.

Re:In Short, Yes (3, Interesting)

tritonman (998572) | more than 4 years ago | (#23463794)

Yeah, they work. I wouldn't spend a lot of money on them, though; a decent compiler will warn you about stuff like buffer overflows. Most of the bugs will need to be caught at runtime, so if you are on a tight budget, definitely skip the static tools for the more useful runtime ones.

Re:In Short, Yes (5, Informative)

Entrope (68843) | more than 4 years ago | (#23464430)

My group at work recently bought one of these. They catch a lot of things that compilers don't -- for example, code like this:

#include <stdio.h>

int main(void)
{
    int array[4], count, ii;

    scanf("%d", &count);
    for (ii = 0; ii < count; ++ii)   /* count is never checked against the size of array[] */
    {
        scanf("%d", &array[ii]);     /* writes past array[3] whenever count > 4 */
    }
    return 0;
}

... where invalid input causes arbitrarily bad behavior. They also tend to be better at inter-procedural analysis than compilers, so they can warn you that you're passing a short literal string to a function that will memcpy() from the region after that string. They do have a lot of false positives, but what escapes from compilers to be caught by static analysis tools tends to be a dynamic behavior problem that is easy to overlook in testing. (If the problem were so obvious, the coder would have avoided it in the first place, right?)

Re:In Short, Yes (4, Informative)

crusty_yet_benign (1065060) | more than 4 years ago | (#23463964)

In my experience developing win/mac x-platform apps, Purify (Win), Instruments (OSX) and BoundsChecker (Win) have all been useful. They find obvious stuff that might have led to other issues. Recommended.

Re:In Short, Yes (2, Informative)

Simon80 (874052) | more than 4 years ago | (#23464292)

There's also valgrind [valgrind.org], for Linux users, and mudflap [gnu.org], for gcc users. I haven't tried mudflap yet, but valgrind is a very good runtime memory checker, and mudflap claims to do similar things.

Re:In Short, Yes (5, Informative)

HalWasRight (857007) | more than 4 years ago | (#23464758)

valgrind, BoundsChecker, and I believe the others mentioned, are all run-time error checkers. These require a test case that exercises the bug. The static analysis tools the poster was asking about, like those from Coverity [coverity.com] and Green Hills [ghs.com], don't need test cases. They work by analyzing the actual semantics of the source code. I've found bugs with tools like these in code that was hard enough to read that I had to write test cases to verify that the tool was right. And it was! The bug would have caused an array overflow write under the right conditions.

In short, YMMV (5, Informative)

Moraelin (679338) | more than 4 years ago | (#23464230)

My experience has been that while in the hands of people who know what they're doing, they're a nice tool to have, well, beware managers using their output as metrics. And beware even more a consultant with such a tool that he doesn't even understand.

The thing is, these tools produce

A) a lot of "false positives": code which is really OK, and everyone understands why it's OK, but the tool will still complain, and

B) usually some metrics of dubious quality at best, to be taken only as a signal for a human to look at the code and judge whether it's OK or not.

E.g., one such tool, which I had the misfortune of sitting through a salesman hype session for, seemed to be really little more than a glorified grep. It just looked at the source text, not at what's happening. So for example if you got a database connection and a statement in a "try" block, it wanted to see the close statements in the "finally" block.

Well, applied to an actual project, there was a method which just closed the connection and the statements supplied as an array. Just because, you know, it's freaking stupid to copy-and-paste cute little "if (connection != null) { try { connection.close(); } catch (SQLException e) { // ignore }}" blocks a thousand times over in each "finally" block, when you can write it once and just call the method in your finally block. This tool had trouble understanding that it _is_ all right. Unless it saw the "connection.close()" right there, in the finally block, it didn't count.
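Something like this, say (my sketch of the pattern, names invented):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

final class JdbcUtil {
    // Written once instead of pasted into a thousand finally blocks:
    // close each statement, then the connection, swallowing the checked
    // exceptions that close() is declared to throw.
    static void closeQuietly(Connection connection, Statement... statements) {
        for (Statement s : statements) {
            if (s != null) {
                try { s.close(); } catch (SQLException e) { /* ignore */ }
            }
        }
        if (connection != null) {
            try { connection.close(); } catch (SQLException e) { /* ignore */ }
        }
    }
}

A grep-grade tool never sees "connection.close()" inside the caller's finally block -- only the call to JdbcUtil.closeQuietly(conn, stmt) -- so it flags perfectly correct code.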

Other examples include more mundane stuff like the tools recommending that you synchronize or un-synchronize a getter, even when everyone understands why it's OK for it to be as it is.

E.g., a _stateless_ class as a singleton is just an (arguably premature and unneeded) speed optimization, because some people think they're saving so much with a singleton instead of the couple of cycles it takes to do a new on a class with no members and no state. It doesn't really freaking matter if there's exactly one of it, or someone gets a copy of it. But invariably the tools will make an "OMG, unsynchronized singleton" fuss, because they don't look deep enough to see whether there's actually some state that must be unique.
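For a concrete (invented) example of code that draws that fuss while being harmless:

final class HtmlEscaper {
    private static HtmlEscaper instance;      // deliberately unsynchronized

    static HtmlEscaper getInstance() {
        if (instance == null) {
            instance = new HtmlEscaper();     // benign race: with no fields,
        }                                     // an extra copy is indistinguishable
        return instance;
    }

    String escape(String s) {                 // touches no instance state
        return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }
}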

Etc.

Now, taken as something that each developer understands, runs on his own when he needs it, and applies his judgment to at each point, it's a damn good thing anyway.

Enter the clueless PHB with a metric and chart fetish, stage left. This guy doesn't understand what those things are, but might make it his personal duty to chart some progress by showing how many fewer warnings he's got from the team this week than last week. So useless man-hours are spent on uselessly morphing perfectly good code into something that games the tool. For each 1 real bug found, there'll be 100 harmless warnings that he makes it his personal mission to get out of the code.

Enter the snake-oil vendor's salesman, stage right. This guy only cares about selling some extra copies to justify his salary. He'll hype to the boss precisely the ability to generate such charts (out of mostly false positives) and to manage by them. If the boss wasn't already of a mind to practice that management anti-pattern, the salesman will try to teach him to. 'Cause that's usually the only advantage that his expensive tool has over those open source tools that you mention.

I'm not kidding. I actually tried to corner one into:

Me: "ok, but you said not everything it flags there is a bug, right?"

Him: "Yes, you need to actually look at them and see if they're bugs or not."

Me: "Then what sense does it make to generate charts based on wholesale counting entities which may, or may not be bugs?"

Him: "Well, you can use the charts to see, say, a trend that you have less of them over time, so the project is getting better."

Me: "But they may or may not be actual bugs. How do you know if this week's mix has more or less actual bugs than last weeks, regardless of what the total there is?"

Him: "Well, yes, you need to actually look at them in turn to see which are actual bugs."

Me: "But that's not what the tool counts. It counts a total which includes an unknown, and likely majority, number of false positives."

Him: "Well, yes."

Me: "So what use is that kind of a chart then?"

Him: "Well, you can get a line or bar graph that shows how much progress is made in removing them."

Lather, rinse, repeat, give up.

Or enter the consultant with a tool. Now, while I'll be the first to say that a lot of projects do end up needing a good consultant to get the code to perform right, I'll also warn about the thousands of parasites posing as one. There are more than enough guys who don't actually have any clue, and all they do is run the code through such a tool and expect miracles to happen. They don't see the big fat SQL query that causes the delay, or the needless loop it's in, but they'll advise blind compliance with their tool's 1-microsecond-saving tips. And of course, again, they'll try to justify their fee by reducing such a total of harmless warnings.

And I guess the moral of the story is that you also need to train the people to understand those tools, and understand what they can ignore and what not. It might mean a bit more expense than just downloading the tools. Especially training some managers what _not_ to do with them.

Re:In short, YMMV (1)

AmaDaden (794446) | more than 4 years ago | (#23464606)

As I understand it, these things work by turning the code into a state diagram. It maps out the code by stepping through it and replacing variables that have values with sets of possible values. So if the code has the user enter a value for the int X, then the state of X is now whatever the user was permitted to enter. Then when the code hits an "if(X > 10)" statement, the map branches so that on one branch the state is as if the IF executed, and on the other as if it did not.

The problem is that a full map of this would take far more processor time than any group would be willing to give, so programs that do this need to take clever little shortcuts. Those shortcuts cause the issues the parent is talking about.
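If that's right, the idea caricatures to something like this (my own toy illustration, not any vendor's actual representation):

final class RangeState {
    final long lo, hi;   // the analyzer's knowledge: X lies somewhere in [lo, hi]

    RangeState(long lo, long hi) { this.lo = lo; this.hi = hi; }

    public String toString() { return "[" + lo + ", " + hi + "]"; }

    public static void main(String[] args) {
        // After the user enters X, it could be any int at all.
        RangeState x = new RangeState(Integer.MIN_VALUE, Integer.MAX_VALUE);

        // At "if(X > 10)" the map branches into two abstract states:
        RangeState taken    = new RangeState(11, x.hi);   // branch where the test held
        RangeState notTaken = new RangeState(x.lo, 10);   // branch where it failed
        System.out.println("taken: " + taken + ", not taken: " + notTaken);
    }
}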

Do I understand it correctly?

Re:In Short, Yes (5, Interesting)

FBSoftware (1224962) | more than 4 years ago | (#23464308)

Yes, I use the formal methods based SPARK tools (www.sparkada.com) for Ada software. In my experience, the Examiner (static analyzer) is always right (> 99.44% of the time) when it reports a problem or potential for a runtime exception. Even without SPARK, the Ada language requires that the compiler itself accomplish quite a bit of static analysis. Using Ada, it's less likely you will need a third-party static analysis tool - just use a good compiler like GNAT.

open source tool list? (1)

fraselsnarf (1292100) | more than 4 years ago | (#23464692)

"...Many of these (for Java, at least) are open source, so the expense in negligible."

Can you list a few? This summer I am picking back up a couple of old embedded projects, and am interested in learning more about the open source embedded tools available.

Re:In Short, Yes (2, Interesting)

samkass (174571) | more than 4 years ago | (#23464790)

Due to Java's dynamic nature and intermediate bytecode, analysis tools for it seem to be especially adept at catching problems. In essence, they can not only analyze the source better (because of Java's simpler syntax), they can also much more easily analyze swaths of the object code and tie it to specific source file issues.

In particular, I've found FindBugs has an amazing degree of precision considering it's an automated tool. If it comes up with a "red" error, it's almost certainly something that should be changed. I'm not familiar with any C/C++ tool that comes close.
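A classic of the genre, the sort of thing it flags red (hypothetical code; FindBugs's NP and RCN warning families cover null dereferences and misplaced null checks):

final class Greeter {
    static String greet(String name) {   // callers may pass null
        String trimmed = name.trim();    // the dereference happens here...
        if (name == null) {              // ...so this guard can never help
            return "Hello, stranger!";
        }
        return "Hello, " + trimmed + "!";
    }
}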

Re:In Short, Yes (0)

Anonymous Coward | more than 4 years ago | (#23464868)

At my place of work, we just use OCaml. Most of the features and benefits of these third-party analysis tools are built into OCaml itself.

By using assertions and proper unit testing, virtually all errors can be caught early on. Our product is used on a daily basis by approximately 600 corporate users, and over the past six years, we've only had 37 bug reports originating outside of the company. None in the past two years, in fact. And this is even after heavy modification to our application.

OCaml's static typing catches many of our errors at compile-time. The assertions and unit tests catch problems that manifest themselves at runtime. And in the end our software is top-notch.

Yes. (3, Insightful)

Dibblah (645750) | more than 4 years ago | (#23463726)

It's another tool in the toolbox. However, the results are not necessarily easy to understand or simple to fix. For example, see the recent SSL library issue, which exhibited minimal randomness after someone "fixed" an (intended) use of uninitialized memory.

Re:Yes. (2, Insightful)

mdf356 (774923) | more than 4 years ago | (#23463784)

If you're using uninitialized memory to generate randomness, it wasn't very random in the first place.

Not that I actually read anything about the SSL "fix".

Re:Yes. (5, Informative)

Anonymous Coward | more than 4 years ago | (#23463888)

I think the actual details weren't very widely reported anyway. Apparently two statements were removed; one read from uninitialised memory, but the other was completely valid. Since the second one was responsible for most of the randomness, removing it reduced the keyspace to the point where it can be brute forced.

Re:Yes. (3, Informative)

Josef Meixner (1020161) | more than 4 years ago | (#23464180)

If you're using uninitialized memory to generate randomness, it wasn't very random in the first place.

It is only one source for the entropy pool. The SSL "fix" was a Debian maintainer running valgrind on OpenSSL, finding a piece of code where uninitialized memory was accessed, "fixing" it and a "similar piece", and accidentally removing all entropy from the pool. The result is that all ssh-keys and ssl-certs created on Debian in the last 20 months are to be considered broken. (Debian Wiki SSLkeys on the scope and what to do [debian.org])

Re:Yes. (5, Informative)

Anonymous Coward | more than 4 years ago | (#23463884)

Sigh. That bug wasn't from fixing the use of uninitialized memory; it was from being overzealous and "fixing" a second (valid, not flagged as bad by Valgrind) use of the same function somewhere near the first use.

Re:Yes. (1)

DougBTX (1260312) | more than 4 years ago | (#23464768)

From the should-I-comment-out-these-lines post [marc.info], apparently both lines were flagged, or at least the person using the tool thought they were flagged, which has much the same effect.

Re:Yes. (3, Insightful)

moocat2 (445256) | more than 4 years ago | (#23463940)

Assuming you are talking about the SSL issue in Debian - the original 'issue' they tried to fix was reported by Valgrind. Valgrind is a run-time analysis tool.

While the parent makes a good point that results are not always easy to understand or fix - since the original post is about static vs run-time analysis tools, it's good to understand that they each have their problems.

Just like compiler warnings... (5, Insightful)

mdf356 (774923) | more than 4 years ago | (#23463734)

Here at IBM we have an internal tool from research that does static code analysis.

It has found some real bugs that are hard to generate a testcase for. It has also found a lot of things that aren't bugs, just like -Wall can. Since I work in the virtual memory manager, a lot more of our bugs can be found just by booting, compared to other domains, so we didn't get a lot of new bugs when we started using static analysis. But even one bug prevented can be worth multiple millions of dollars.

My experience is that, just like enabling compiler warnings, any way you have to find a bug before it gets to a customer is worth it.

Re:Just like compiler warnings... (4, Informative)

gmack (197796) | more than 4 years ago | (#23463838)

I've found that even tools like lint or splint can catch interesting bugs, but not nearly as many as when I enabled GCC's type checking for varargs on a software project a company I worked for was developing.

Re:Just like compiler warnings... (1)

ccguy (1116865) | more than 4 years ago | (#23464690)

Here at IBM

any way you have to find a bug before it gets to a customer is worth it.
You must be <30. You definitely weren't there in the OS/2 years.
So what product did you say you were working on? :-)

OSS usage (5, Insightful)

MetalliQaZ (539913) | more than 4 years ago | (#23463742)

If I remember correctly, one of these companies donated their tool to many open source projects, including Linux and the BSDs. I think it led to a wave of commits as 'bugs' were fixed. It seemed like a pretty good endorsement to me...

Coverity Reports Open Source Security Making Grea (4, Informative)

doug (926) | more than 4 years ago | (#23463978)

Is this what you were talking about?

http://it.slashdot.org/article.pl?sid=08/01/11/1818241 [slashdot.org]

- doug

Coverity Prevent Rocks (5, Informative)

Arakageeta (671142) | more than 4 years ago | (#23464240)

My large C/C++ project (2,000,000+ SLOC) started using Coverity Prevent about a year ago. Its results have truly been invaluable. We simply have too much code for standard human code reviews or for detailed run-time coverage analysis (e.g., Insure* or valgrind). Prevent has caught many programming errors (some extremely obscure and/or subtle) and saved us a ton of money and time.

* I really like Insure, but it is difficult to set up on a system composed of many shared libraries. However, there are some bugs that really need run-time analysis to catch.

Re:OSS usage (2, Informative)

NewbieProgrammerMan (558327) | more than 4 years ago | (#23464014)

IIRC, they donated the results of their analysis, not the actual tool itself. If somebody *did* actually donate a free license to use their software to those projects, I missed it and would love a link to the details. :)

Re:OSS usage (1)

chromatic (9471) | more than 4 years ago | (#23464206)

That was Coverity [coverity.com]. We've found the results very useful for Parrot [parrotcode.org]. You have to be smart about how you interpret the results and how you fix actual bugs, but their tools did reveal many dubious constructs and actual bugs.

Static analysis tools (4, Interesting)

MadShark (50912) | more than 4 years ago | (#23463802)

I use PC-lint religiously for my embedded code. In my opinion it has the most bang for the buck. It is fast, cheap and reliable. I've found probably thousands of issues and potential issues over the years using it.

I've also used Polyspace. In my opinion, it is expensive, slow, can't handle some constructs well and has a *horrible* signal to noise ratio. There is also no mechanism for silencing warnings in future runs of the tool (like the -e flag in lint). On the other hand, it has caught a (very) few issues that PC-Lint missed. Is it worth it? I suppose it depends on whether you are writing systems that can kill people if something goes wrong.

Potential issues are the biggest drawback (1)

Anonymous Brave Guy (457657) | more than 4 years ago | (#23463988)

I've also used Polyspace. In my opinion, it is expensive, slow, can't handle some constructs well and has a *horrible* signal to noise ratio.

The signal-to-noise ratio is pretty horrendous in most static analysis tools for C and C++, IME. This is my biggest problem with them. If I have to go through and document literally thousands of cases where a perfectly legitimate and well-defined code construct should be allowed without a warning because the tool isn't quite sure, I rapidly lose any real benefit and everyone just starts ignoring the tool output. Things like Lint's -e option aren't much good as workarounds either, because then even if you're hiding an issue that might be a phantom problem today, you'll still be hiding it if it becomes a real problem tomorrow. :-(

signal to noise (1)

www.sorehands.com (142825) | more than 4 years ago | (#23464400)

That is the problem for a beginner. When you first configure PC-Lint you need to tune the configuration to ignore stuff that you don't have a problem with, e.g., assignments within a test. After that you need to configure your project for lint, setting up the lint files to include the correct headers and such. Then the noise is not too bad. Just make sure that when you think something is noise, it really is noise.

Re:Static analysis tools (2, Funny)

somersault (912633) | more than 4 years ago | (#23464016)

I suppose it depends if you are writing systems that can kill people if something goes wrong.
The best way to avoid bugs in that case would be for the developers to test the systems themselves - then they'd be a lot more careful! Plus it helps natural selection to weed out the sloppy coders :) In that case you'd probably want to write all the code from scratch though to ensure that nobody else's bugs kill you.

Re:Static analysis tools (2)

sadr (88903) | more than 4 years ago | (#23464424)

Here's another vote for PC-Lint by http://www.gimpel.com/ [gimpel.com]

You really can't beat it for the money, and it is probably as comprehensive as some of the other more expensive products for C and C++.

They do work (5, Interesting)

Anonymous Coward | more than 4 years ago | (#23463804)

Static analysis does catch a lot of bugs. Mind you, it's no silver bullet, and frankly it's better, given the choice, to target a language+environment that doesn't suffer problems like dangling pointers in the first place (null pointers, however, don't seem to be anything Java or C# are really interested in getting rid of).

Even lint is decent -- the trick is just using it in the first place. As for expense, if you have more than, oh, 3 developers, they pay for themselves by your first release. Besides, many good tools such as valgrind are free (valgrind isn't static, but it's still useful).

Yes (4, Informative)

progressnerd (1189895) | more than 4 years ago | (#23463810)

Yes, some static analysis tools really work. FindBugs [sourceforge.net] works well for Java. Fortify [fortify.com] has had good success finding security vulnerabilities. These tools take static checking just a step beyond what's offered by a compiler, but in practice that's very useful.

Re:Yes (2, Interesting)

Pollardito (781263) | more than 4 years ago | (#23464116)

These tools take static checking just a step beyond what's offered by a compiler, but in practice that's very useful.
That's a good point: compiler warnings and errors are really just the result of static analysis, and I think everyone has experience finding bugs thanks to those.

Change bug source (3, Funny)

192939495969798999 (58312) | more than 4 years ago | (#23463814)

The best thing these tools can do is to tell everyone what they probably already know -- that a particular coder or coders are responsible for a whole ton of the errors in the code. I think it'd be much better to move that coder to some other part of the company ... it would be way cheaper than trying to fix all their bugs.

Re:Change bug source (0)

GenMoo (1253138) | more than 4 years ago | (#23464896)

Even the best coders make mistakes... Your comment shows that you're not so familiar with coding

Static analysis tools (3, Interesting)

wfstanle (1188751) | more than 4 years ago | (#23463824)

I am presently working on an update to static analysis tools. Static analysis tools are not a silver bullet, but they are still relevant. Look at them as a starting point in your search for programming problems. A lot of potential anomalies can be detected, like the use of uninitialized variables. Of course, a good compiler can use these tools as part of the compilation process. However, there are many things that a static analyzer can't detect. For those, you need some way to do dynamic analysis (execution-based testing). As such, the tools we are developing also include dynamic testing.

Standard and Custom static analysis tools (1)

peterofoz (1038508) | more than 4 years ago | (#23464078)

Static analysis is another tool in the toolbox. It's a great indicator of overall code quality and the care taken by a developer, which may predict code quality during dynamic testing.
"You can put lipstick on a pig, but it's still a pig."
"You can lint check bad code and add comments, but it's still bad code."
At least with the static tools run early in the development process, you can identify the code pigs and make a decision to rebuild parts or team up an experienced developer with a new one. Using regex tools like AWK, you can even build your own static analysis tools (see the toy sketch below). We did this for Y2K checking some years ago and will probably need to do it again in 2036.
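For flavor, a toy line-scanner in that grep/AWK spirit (entirely illustrative, pattern included):

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.regex.Pattern;

public class DateScan {
    // Flag format strings that use a two-digit year -- the Y2K-style check.
    private static final Pattern TWO_DIGIT_YEAR = Pattern.compile("%y|\\byy\\b");

    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new FileReader(args[0]));
        String line;
        for (int n = 1; (line = in.readLine()) != null; n++) {
            if (TWO_DIGIT_YEAR.matcher(line).find()) {
                System.out.println(args[0] + ":" + n + ": two-digit year? " + line.trim());
            }
        }
        in.close();
    }
}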

Yes, they work. (5, Insightful)

Anonymous Coward | more than 4 years ago | (#23463832)

You will probably be amazed at what you will catch with static analysis. No, it's not going to make your program 100% bug-free (or even close), but every time I see code die on an edge case that would've been caught with static analysis, it makes me want to kill a kitten (and I'm totally a "cat person", mind you).

Static analyzers will catch the stupid things - edge cases that fail to initialize a var but then lead straight to dereferencing it, memory leaks on edge-case code paths, etc. - that shouldn't happen but often do, and that get in the way of finding real bugs in your program logic.
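The first pattern looks something like this (invented example):

final class Labels {
    static String describe(int code) {
        String label = null;
        if (code != 0) {
            label = "code-" + code;      // the common path initializes it...
        }
        return label.toUpperCase();      // ...but code == 0 dereferences null
    }
}

Trivial for a static analyzer to flag, and easy to miss in testing if nobody ever feeds in a zero.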

Of course they can work (5, Interesting)

Idaho (12907) | more than 4 years ago | (#23463840)

Such tools work in a very similar way to what is already being done in many modern language compilers (such as javac). Basically, they implement semantic checks that verify whether the program makes sense, or is likely to work as intended in some respect. For example, they will check for likely security flaws, memory management/leaking or synchronisation issues (deadlock, access to shared data outside critical sections, etc.), or other kind of checks that depend on whatever domain the tool is intended for.

It would probably be more useful if you could state which kind of problem you are trying to solve and which tools you are considering buying. That way, people who have experience with them could suggest which work best :)

Re:Of course they can work (1)

Yvanhoe (564877) | more than 4 years ago | (#23463990)

It looks like you are reading memory from unallocated memory space, you should comment line 427 in ssl_seed_generator.h

I also see how it could bring a distribution to its knees. But I agree that they will probably be worthwhile 90% of the time.

Re:Of course they can work (3, Insightful)

mrogers (85392) | more than 4 years ago | (#23464238)

Depends on whether one interprets "you should comment" as "you should document" or "you should comment out", I guess. :-)

Re:Of course they can work (1)

jlunavtgrad (1291942) | more than 4 years ago | (#23464312)

Well, I'm coding for an embedded Linux environment in C and C++, so Java tools won't help. I am more interested in the tools that can generate a call-stack and trace through the execution path to show you how a null-pointer dereference might come about. I was looking at Klocwork Insight, but the price tag is practically my salary for the number of licenses we would need. Has anyone worked with that tool specifically?

Re:Of course they can work (1)

laddhebert (570948) | more than 4 years ago | (#23464544)

Yes, we run Klocwork 7.7.x in production and 8.0 in test. Our users (embedded developers, wireless mostly, along with some IDE developers) have had great success with Klocwork. The price tag is up there, but you need to do an ROI exercise to determine if it is worth it. For us, it WAS worth it. We also have Coverity in house, but many users are leaving that tool for Klocwork due to its ease of use, fewer false positives, and lower cost, since they can use a shared pool of licenses. Polyspace is on the landscape, but it is commonly thought of as a tool to use once you've fixed all the problems that Klocwork and Coverity discovered.
Hope that helps!
/-l

Re:Of course they can work (1)

procrastitron (841667) | more than 4 years ago | (#23464760)

Are you doing new development or maintaining an already large code base? For new development I would suggest looking at Cyclone [wikipedia.org], which adds static analysis (in the form of type checking) to the language itself.

Re:Of course they can work (1)

Coryoth (254751) | more than 4 years ago | (#23464856)

Such tools work in a very similar way to what is already being done in many modern language compilers (such as javac). Basically, they implement semantic checks that verify whether the program makes sense, or is likely to work as intended in some respect.
The keywords here, for the purposes of good static analysis tools, are "work as intended". Knowing what code is intended to do isn't trivial, and that can severely limit what static analysis tools can check for (or, alternatively, they can spit back a lot of false positives). A good solution is to provide the static analysis tool with some hints as to what the programmer's intentions are -- that means annotations as used by splint [splint.org], or ESC/Java2 [kind.ucd.ie], or something similar. The benefits for static analysis (and API documentation, depending on the tools) can far outweigh the "extra" work*; consider the sorts of errors [kind.ucd.ie] ESC/Java2 can successfully catch, for instance. Of course you have to be interested in doing static analysis to begin with, but the benefits are definitely available.

* Note that "extra" is in scare quotes because ultimately most of the annotations are usually things you should be documenting anyway, so it's not so much "extra" work as documentation that gets done sooner rather than later.
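For a taste of the annotation style, a JML-ish sketch of the kind of thing ESC/Java2 consumes (exact syntax varies by tool and version):

class Account {
    int balance;

    //@ invariant balance >= 0;

    //@ requires amount > 0 && amount <= balance;
    //@ ensures balance == \old(balance) - amount;
    void withdraw(int amount) {
        balance -= amount;
    }
}

The checker can then complain about any call site that can't be shown to meet the precondition, and any change to the body that could break the invariant.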

Testing cycle (4, Informative)

mdf356 (774923) | more than 4 years ago | (#23463856)

I forgot to answer your other question.

Since we've had the tool for a while and have fixed most of the bugs it has found, we are required to run static analysis on new code for the latest release now (i.e. we should not be dropping any new code that has any error in it found via static analysis).

Just like code reviews, unit testing, etc., it has proved useful and was added to the software development process.

Yes! Uh, sorta. (4, Funny)

BigBlueOx (1201587) | more than 4 years ago | (#23463862)

Ya can't beat a good "Lint party" after all the testing is done! You'll find all kinds of cool stuff that slipped through your testing suites.

However, static code analysis is just one part of the bug-finding process. For example, in your list, in my limited experience, I have found that buffer overflows and NULL pointer derefs get spotted really well. Race conditions? Memory leaks? Hmm. Not so good.

YMMV. Don't expect magic. Oh, to hell with it, just let the end-users test it *ow!*

Yes. (4, Informative)

Anonymous Coward | more than 4 years ago | (#23463868)

The Astrée [astree.ens.fr] static analyser (based on abstract interpretation) proved the absence of run-time errors in the primary control software of the Airbus A380.

Re:who proved Astrée ...? (3, Insightful)

zimtmaxl (667919) | more than 4 years ago | (#23464710)

It may be the best tool in the world - I admit I do not know it. But the word "proved" makes me suspicious. To me this sounds like the typical - and widespread - management speak that lets business decision makers and their insurers sleep well. Thank you! This is a perfect example of misleading wording being used even by educational bodies.
Is this a proof, or do some mistakenly think they're safe?

Who "proved" Astrée to be error free in the first place?!

Yes (4, Informative)

kevin_conaway (585204) | more than 4 years ago | (#23463876)

Add me to the Yes column

We use them (PMD and FindBugs) for eliminating code that is perfectly valid, yet has bitten us in the past. Two Java examples are unsynchronized access to a static DateFormat object and using the commons IOUtils.copy() instead of IOUtils.copyLarge().

Most tools are easy to add to your build cycle and repay that effort after the first violation they catch.
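The DateFormat trap in miniature (invented class; the fix shown is one conventional approach):

import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Date;

final class Timestamps {
    // SimpleDateFormat keeps mutable state inside format()/parse(), so one
    // shared static instance used from several threads yields garbled output
    // or exceptions -- perfectly valid code, but a known trap.
    private static final DateFormat SHARED = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");

    static String unsafeStamp(Date when) {
        return SHARED.format(when);           // racy under concurrent callers
    }

    // One conventional fix: one instance per thread.
    private static final ThreadLocal<DateFormat> PER_THREAD = new ThreadLocal<DateFormat>() {
        @Override protected DateFormat initialValue() {
            return new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        }
    };

    static String safeStamp(Date when) {
        return PER_THREAD.get().format(when);
    }
}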

Very useful in .Net (2, Interesting)

Toreo asesino (951231) | more than 4 years ago | (#23463904)

While not the be-all-and-end-all of code quality metrics, VS2008/Team Foundation Server has this built in now, so you can stop developers checking in complete junk code if you so wish - http://blogs.msdn.com/fxcop/archive/2007/10/03/new-for-visual-studio-2008-code-metrics.aspx [msdn.com]

FxCop has gone server-side too (for those familiar with .Net development). It takes one experienced dev to customise the rules, and you've got a fairly decent protection scheme against insane code commits.

Yes, But... (1)

SwashbucklingCowboy (727629) | more than 4 years ago | (#23463912)

Yes, static code tools do work well for finding certain classes of issues. However, they are not a panacea. They do not understand the semantics that are intended and cannot effectively replace code reviews.

The more things change... (4, Insightful)

BrotherBeal (1100283) | more than 4 years ago | (#23463914)

the more they stay the same. Static code analysis tools are just like smarter compilers, better language libraries, new-and-improved software methodologies, high-level dynamic languages, modern IDE's, automated unit test runners, code generators, document tools and any number of other software tools that have shown up over the past few decades.

Yes, static code analysis can help improve a team's ability to deliver a high-quality product, if it is embraced by management and its use is enforced. No, it will not change the face of software development, nor will it turn crappy code into good code or lame programmers into geniuses. At best, when engineers and management agree this is a useful tool, it can do almost all the grunt work of code cleanup by showing exactly where problem code is and suggesting extremely localized fixes. At worst, it will wind up being a half-assed code formatter since nobody can agree on whether the effort is necessary.

Just like all good software-engineering questions, the answer is 'it depends'.

Not Yet, In My Personal Experience. (4, Interesting)

RJCantrell (1290054) | more than 4 years ago | (#23463918)

In my own corner of the world (.NET Compact Framework 2.0 on old, arcane hardware), they certainly don't. Each time I get optimistic and search for new or previously-missed static analysis tools, all roads end up leading to FxCop. Horrible signal-to-noise ratio, and a relatively small number of real detectable problems. That said, I'm always willing to submit myself to the genius of the slashdot masses. If you know of a great one, feel free to let me know. = )

Re:Not Yet, In My Personal Experience. (3, Informative)

FishWithAHammer (957772) | more than 4 years ago | (#23464156)

I believe Gendarme [mono-project.com] might be of some use. Just don't invoke the Portability assemblies and I can't see why it'd fail.

Re:Not Yet, In My Personal Experience. (1)

RJCantrell (1290054) | more than 4 years ago | (#23464356)

At first glance, this looks promising. Thanks for the link!

Re:Not Yet, In My Personal Experience. (1)

FishWithAHammer (957772) | more than 4 years ago | (#23464678)

No problem. I'm ripping Gendarme apart and including its functionality in my Google Summer of Code [edropple.com] project, so I'd be interested in hearing any feedback from you regarding it. Feel free to drop me a line. =)

Re:Not Yet, In My Personal Experience. (1)

boatboy (549643) | more than 4 years ago | (#23464488)

I've never had problems with FxCop - it catches tons of common and not-so-common real-world errors, lets you turn off rules you don't care about, and links to usually helpful MSDN articles to explain the rules. I'd say everything in the Security and Design categories is well worth the few minutes it takes to run a free tool. Having it baked into VS08 is even better, but I was a big fan of it with 05/2.0 as well.

Useful for planning tests (4, Interesting)

Chairboy (88841) | more than 4 years ago | (#23463938)

At Symantec, I used to use these tools to help plan tests. I wrote a simple code velocity tool that monitored Perforce checkins and generated code velocity graphs and alerts in different components as time passed. With it, QA could easily see which code was being touched the most and dig down to the specific changelists and see what was going on. It really helped keep good visibility on what needed the most attention and helped everyone avoid being 'surprised' by someone dropping a bunch of changes into an area that wasn't watched carefully. During the final days of development before our products escaped to manufacturing, this provided vital insight into what was happening.

I've since moved on, and I think the tool has since gone offline, but I think there's a real value to doing static analysis as part of the planning for everything else.

Coverity & Klocwork (5, Informative)

Anonymous Coward | more than 4 years ago | (#23464056)

We have had presentations from both Coverity and Klocwork at my workplace. I'm not entirely fond of them, but they're wayyyyy better than 'lint'. :) I much prefer using "Purify" whenever possible, since run-time analysis tends to produce fewer false-positives.

My comments would be:

(1) Klocwork & Coverity tend to produce a lot of "false positives". And by a lot, I mean, *A LOT*. For every 10000 "critical" bugs reported by the tool, only a handful may be really worth investigating. So you may spend a fair bit of time simply weeding through what is useful and what isn't.

(2) They're expensive. Coverity costs $50k for every 500k lines of code per year... We have a LOT more code than this. For the price, we could hire a couple of guys to run all of our tools through Purify *and* fix the bugs they found. Klocwork is cheaper; $4k per seat, minimum number of seats.

(3) They're slow. It takes several days running non-stop on our codebase to produce the static analysis databases. For big projects, you'll need to set aside a beefy machine to be a dedicated server. With big projects, there will be lots of bug information, so the clients tend to get bogged down, too.

In short: It all depends on how "mission critical" your code is; is it important, to you, to find that *one* line of code that could compromise your system? Or is your software project a bit more tolerant? (e.g., If you're writing nuclear reactor software, it's probably worthwhile to you to run this code. If you're writing a video game, where you can frequently release patches to the customer, it's probably not worth your while.)

Re:Coverity & Klocwork (0)

Anonymous Coward | more than 4 years ago | (#23464666)

Where I work we have signed up corporate wide for the Coverity product. It does work as expected and the false positive rate is acceptably low.

These tools are not the be-all end-all of bug finding; you still need to have competent programmers to start with and you need to enforce unit testing and good design.

Trends or Crutches? (4, Interesting)

bsDaemon (87307) | more than 4 years ago | (#23464066)

I'll probably get modded to hell for asking but seriously -- all these new trends, tools, etc - are they not just crutches, which in the long run are seriously going to diminish the quality of output by programmers?

For instance, we put men on the moon with a pencil and a slide rule. Now no one would dream of taking a high school math class with anything less than a TI-83+.

Languages like Java and C# are being hailed while languages like C are derided and many posts here on slashdot call it outmoded and say it should be done away with, yet Java and C# are built using C.

It seems to me that there is no substitute for actually knowing how things work at the most basic level and doing them by hand. Can a tool like Lint help? Yes. Will it catch everything? Likely not.

As generations of kids grow up with the automation made by the generations who came before, their incentive to learn how the basic tools work will diminish, approaching 0, and I think we're in for something bad.

As much as people bitch about kids who were spoiled by BASIC, you'd think that they'd also complain about all the other spoilers. Someday all this new, fancy stuff could break and someone who only knows Java, and even then checks all their source with automated tools will likely not be able to fix it.

Of course, this is more of just a general criticism and something I've been thinking about for a few weeks now. Anyway, carry on.

Re:Trends or Crutches? (3, Insightful)

Lord_Frederick (642312) | more than 4 years ago | (#23464256)

Any tool can be considered a "crutch" if it's misused. I don't think anyone that put men on the moon would want to return to sliderules, but a calculator is only a crutch if the user doesn't understand the underlying fundamentals. Debugging tools are just tools until they stop simply performing tedious work and start doing what the user is not capable of understanding.

Re:Trends or Crutches? (3, Interesting)

bsDaemon (87307) | more than 4 years ago | (#23464336)

In the 7th grade I left my calculator at home one day when I had a math test. I did, however, have a Jeppesen flight computer (a circular slide rule) that my dad (a commercial airline pilot) had given me, because I was going to a flying lesson after school.

I whipped out my trusty slide rule and commenced to using it. The teacher wanted to confiscate it and thought that I was cheating with some sort of high-tech device... mind you it was just plastic and cardboard. I'm sure you've all seen one before.

I'm only just about to turn 24, so 7th grade was not long ago for me.

The point is, students should be required to know how to do things by hand. A PhD in physics 20 years ago clearly knew how to do calculus by hand. If he wants to use a TI-92 to do it now, that's his business. Good that those aren't allowed in class (even if they are allowed on the SAT and AP exams -- well, the TI-89 is).

Intro to comp sci shouldn't be taught with Java any more than elementary school math should be "intro to the calculator." You just cripple people's minds that way.

I ended up getting a BA in English the first time around because, for personal reasons, I was too messed up to concentrate on maths. I'm now starting a 2nd degree in MechE - and I'm making a good faith effort to do as much by hand as possible, because I don't want to fuck something up in the future because I figured the calculator was giving me the right answer and I had no idea of where the proper answer should be.

Summary: using a tool to check your answer is one thing. Relying on it to get the answer in the first place is lazy, stupid, and potentially dangerous.

Re:Trends or Crutches? (1)

fishbowl (7759) | more than 4 years ago | (#23464650)

>a PhD in physics 20 years ago clearly knew how to do calculus by hand.

An undergrad in physics in my school knows how to do calculus by hand,
which is a requirement to be accepted into the degree program.

Long before one is hooded for a PhD here, mere "calculus" is a first language,
a primary tool for expressing research.

Re:Trends or Crutches? (1)

bsDaemon (87307) | more than 4 years ago | (#23464700)

I always loved Bio more than physics, so I'm not really talking from any previous experience here -- I did intern as a C programmer at the Thomas Jefferson National Accelerator Facility out of high school, though.

At my first undergrad institution, I could have taken "Calculus with MATLAB" and relied on the CAS to do everything for me if I so chose, and that's the sort of thing which, quite frankly, we should all be able to agree is b.s.

Re:Trends or Crutches? (2, Insightful)

flavor (263183) | more than 4 years ago | (#23464478)

Did you walk uphill in the snow, both ways, when you were a kid, too? At one point in time, high-level languages like ASSEMBLER were considered crutches for people who weren't Real Programmers [catb.org]. Get some perspective!

Look, people make mistakes, and regardless of how good a programmer you are, there is a limit to the amount of state you can hold in your head, and you WILL dereference a NULL pointer, or create a reference loop, at some point in your career.

Using a computer to catch these errors is just another flavor of metaprogramming. Get over it, and go be more productive with these tools, instead of whining for the days when you coded on bare metal with your bare hands and you liked it.

Arrgh.

Man on the moon - pencil, slide rule and computer! (1)

Folger (76197) | more than 4 years ago | (#23464496)

Ever heard of the Apollo Guidance Computer [wikipedia.org] ? Even if they had several computers on board the space vehicles, they surely used computers to design the space vehicles.

Re:Man on the moon - pencil, slide rule and comput (2)

bsDaemon (87307) | more than 4 years ago | (#23464674)

Yes, I have heard of it. However, it's hardly a quad-Xeon workstation, is it? That's still just part of what I'm talking about. Their computers took up rooms (on the small side), and were less powerful than some calculators today.

It's like when Wheeler died and he was called "one of the last great titans of physics." One fellow slashdotter called this an unfair characterization, as it is unfairly biased against the people who are doing work today, which he sees as no less important -- comparing it to if, say, Linus Torvalds were to die and was called one of the "last great titans of computing."

The best computers they had available when they were designing the bomb were like UNIVAC. They had to do their math by hand, and much of their calculations were largely based on assumptions. These days the computer does a lot of the work and doing things by hand is for "suckers."

Comparing Wheeler to Linus is absurd. It'd have been more poignant to compare Wheeler to Grace Hopper or someone, frankly.

That's just my two cents though. Your exchange rate may vary.

Re:Trends or Crutches? (4, Insightful)

gmack (197796) | more than 4 years ago | (#23464596)

These tools require skill. Blindly fixing things that Lint turns up can introduce new bugs; conversely, using lint notation to shut the warnings off can mask bugs.

I also don't think new languages help bad programmers much. Bad code is still bad code, so now instead of crashing it will just leak memory or just not work right.

On a software project I worked on before, our competition spent two years and two million dollars doing their code in Visual Basic and MSSQL, and they abandoned their effort when, no matter what hardware they threw at it, they couldn't get their software to handle more than 400 concurrent users. We did our project in C, and with a team of 4 built something in about a year that handled 1200 users on a quad-CPU P III 400MHz Compaq. Even when another competitor posed as a client and borrowed some of my ideas (they added a comms layer instead of using the SQL server for communication), they still required a whole rack of machines to do what we did with one badly out-of-date test machine.

C is a fine tool if you know how to use it so I doubt it will go away any time soon.

Re:Trends or Crutches? (1)

EastCoastSurfer (310758) | more than 4 years ago | (#23464870)

What level of detail is enough for someone to know? You seem to imply C, while the programmers around when C came onto the scene would say you need to know ASM. Wait, does someone then need to know the details of how a transistor works, or how electricity works at the molecular level, in order to write good software? You can easily enter a never-ending spiral of decomposition where you never end up solving the actual problem you set out to solve.

These tools are not crutches. They make possible the large, complex software that is written today. You're right, we put men on the moon with a pencil and slide rule. How many lines of code were required to accomplish this feat? Probably not more than could be hand-checked by a talented person or two. Now look at modern OSs, RDBMSs, etc. -- code bases so large and complex that these tools are not crutches, but necessary parts of the development process.

Now, if you want to argue that software doesn't need to be as complex as it has become I think you might have a point. A look at the recent directions of languages like Python or Ruby shows that simplification is the new thing.

To a degree, yes (4, Interesting)

gweihir (88907) | more than 4 years ago | (#23464098)

You actually need to tolerate a number of false positives in order to get good coverage of the true bugs. That means you have to follow-up on every report in detail and understand it.

However, these things do work and are highly recommended. If you use other advanced techniques (like Design by Contract), they will be a lot less useful, though. They are best for traditional code that does not have safety-nets (i.e. most code).

Stay away from tools that do this without using your compiler. I recently evaluated some static analysis tools and found that the tools that do not use the native compilers can have serious problems. One example was an incorrectly set symbol in the internal compiler of one tool, which could easily change the code's functionality drastically. Use tools that work from a build environment and utilize the compiler you are using to build.

Jetbrains IntelliJ IDEA and Resharper (2, Informative)

SpryGuy (206254) | more than 4 years ago | (#23464138)

I've used the above two tools ... the IntelliJ IDEA IDE for Java development, and the Visual Studio plug-in Resharper for C# development ... and can't imagine living without them.

Of course, they provide a heck of a lot more than just static code analysis, but the ability to see all syntax errors in real time, and all logic errors (like potential null-references, dead code, unnecessary 'else' statements, etc.), saves a huge amount of time and has, in my experience, resulted in much better, more solid code. When you add on all the intelligent refactoring, vastly improved code navigation, and customizable code-generation features of these utilities, it's a no-brainer.

I wouldn't program without them.

Yes, absolutely (4, Informative)

Llywelyn (531070) | more than 4 years ago | (#23464190)

FindBugs is becoming increasingly widespread on Java projects, for example. I found that between it and JLint I could identify a substantial chunk of problems caused by inexperienced programmers, poor design, hastily written code, etc. JLint was particularly nice for potential deadlocks, while FindBugs was good for just about everything else.

For example:

  • Failure to make null checks.
  • Ignoring exceptions
  • Defining equals() but not hashCode() (and the other variations; see the sketch after this list)
  • Improper use of locks.
  • Poor or inconsistent use of synchronization.
  • Failure to make defensive copies.
  • "Dead stores."
  • Many others [sourceforge.net]
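The equals()-without-hashCode() sketch promised above (hypothetical class):

import java.util.HashMap;
import java.util.Map;

final class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }

    // equals() is overridden...
    @Override public boolean equals(Object o) {
        return o instanceof Point && ((Point) o).x == x && ((Point) o).y == y;
    }
    // ...but hashCode() still comes from Object, so two "equal" points
    // usually land in different hash buckets.

    public static void main(String[] args) {
        Map<Point, String> names = new HashMap<Point, String>();
        names.put(new Point(1, 2), "home");
        System.out.println(names.get(new Point(1, 2)));   // prints null, not "home"
    }
}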

At least in the Java world, I wish more people would use them. It would make my job so much easier.

My experience in the Python world is that pylint is less interesting than FindBugs: many of the more interesting bugs are hard problems in a dynamically typed language and so it has more "religious style issues" built in that are easier to test for. It still provides a great deal of useful output once configured correctly, and can help enforce a consistent coding standard.

Low startup cost and great benifits (4, Insightful)

iamwoodyjones (562550) | more than 4 years ago | (#23464208)

I have used static analysis as part of our build process on our Continuous Integration machines and it's definitely worth your time to set it up and use it. We use FindBugs with our Java code and have it output HTML reports on a nightly basis. Our team lead comes in early in the morning and peruses them, and either suppresses the findings or assigns the issues to be fixed. We shoot for zero bugs, either by suppressing findings that aren't bugs or by fixing the ones that are. FindBugs doesn't give too many false positives, so it works great.

Could this be just another trend?

I don't worry about what's "trendy" or not. Just give the tool a shot in your group and see if it helps/works for you or not. If it does keep using it otherwise abandon it.

What kind of changes did the tools bring about in your testing cycle?

We use it _before_ the test cycle. We use it to catch mistakes such as "Whoops! Dereferenced a pointer there, my bad" before going into the test cycle.

And most importantly, did the results justify the expense?

Absolutely. The startup cost of adding static analysis for us was one developer for 1/2 a day to setup FindBugs to work on our CI build on a nightly basis to give us HTML reports. After that, the cost is our team lead to check the reports in the morning (he's an early riser) and create bug reports based on them to send to us. Some days there's no reports, other days (after a large check-in) it might be 5-10 and about an hour of his time.

It's best to view this tool as preventing bugs, synchronization issues, performance issues, you-name-it issues before the code goes into the hands of testers. But you can extend several of the tools, like FindBugs, to add new static analysis test cases. So if a tester finds a common problem that affects the code, you can go back and write a static analysis case for it, add it to the tool, and the problem shouldn't reach the tester again.

Buyer (User) Beware (3, Interesting)

normanjd (1290602) | more than 4 years ago | (#23464228)

We use them more for optimizing code than anything else... The biggest problem we see is that there are often false positives... A senior person can easily look at recommendations and pick what's needed... A junior person, not so much, which we learned the hard way...

Count also the forming of good programming habits (3, Insightful)

deep-deep-blue (1055812) | more than 4 years ago | (#23464242)

Another good point for using lint is that after a while a programmer learns the right way, and the outcome is better code in a shorter time. Of course, I also found that there are a few ways to avoid lint errors/warnings that lead to some very ugly bugs.

Infinite loop detector (2, Funny)

nuttyprofessor (83282) | more than 4 years ago | (#23464268)

How about a tool that will tell me if my program will
eventually halt or not for a given input? I'd pay big money for that!

Many, but never all (4, Informative)

mugnyte (203225) | more than 4 years ago | (#23464302)


  Short version:

      There are real bugs, with huge consequences, that can be detected with static analysis.
      The tools are easy to find and worth the price, depending on the customer base you have.
      In the end, they cannot detect "all" bugs that could arise in the code.

  Worth it?
      Only you can decide, but after a few sessions learning why the tools flag suspect code, if you take those suggestions to heart, you will be a better coder.

Yes (1)

DerekSTheRed (1292084) | more than 4 years ago | (#23464368)

My company uses Ounce [ouncelabs.com] for static analysis. It's great at helping find the tedious bugs and potential security violations. For instance, server side input validation for web sites is much easier and more accurate with static analysis. As others have said, it's not perfect but having an extra set of digital eyes looking at the code helps. The sooner bugs are fixed in the development cycle, the cheaper it is to fix.

Absolutely (2, Interesting)

Fippy Darkpaw (1269608) | more than 4 years ago | (#23464372)

At my little corner of Lockheed Martin we use Klocwork [klocwork.com] and LDRA [ldra.com] to analyze C/C++ embedded code for military hardware. Since the various compilers for each contract aren't nearly as full-featured as say, Visual Studio or Eclipse, I've found static code analysis tools invaluable. Can't comment on the cost/results ratio though, since I don't purchase stuff. =)

Linux kernel devs use sparse for static analysis (4, Informative)

ncw (59013) | more than 4 years ago | (#23464382)

The linux kernel developers use a tool originally written by Linux Torvalds for static analysis - sparse.

http://www.kernel.org/pub/software/devel/sparse/ [kernel.org]

Sparse has some features targeted at kernel development - for instance, spotting mix-ups of kernel and user space pointers - and a system of code annotations.

I haven't used it but I do see on the kernel mailing list that it regularly finds bugs.

Re:Linux kernel devs use sparse for static analysi (2, Funny)

ncw (59013) | more than 4 years ago | (#23464420)

s/Linux Torvalds/Linus Torvalds/ - I keep making that typo ;-)

Re:s and x (0)

Anonymous Coward | more than 4 years ago | (#23464612)

Linus is not Linux

WHOA... nice timing (2, Interesting)

w00f (872376) | more than 4 years ago | (#23464416)

YOU, sir, have amazing timing! I just wrote a 2-part article on this topic! Interesting... mine was published at http://portal.spidynamics.com/blogs/rafal/archive/2008/05/06/Static-Code-Analysis-Failures.aspx and the solution at http://portal.spidynamics.com/blogs/rafal/archive/2008/05/15/Hybrid-Analysis-_2D00_-The-Answer-to-Static-Code-Analysis-Shortcomings.aspx [spidynamics.com] Comments welcome!! Interesting that this topic is getting so much attention all of a sudden.

make up for language deficiencies (3, Interesting)

nguy (1207026) | more than 4 years ago | (#23464448)

Generally, these tools make up for deficiencies in the underlying languages; better languages can guarantee the absence of these errors through their type systems and other constructs. Furthermore, these tools can't give you yes/no answers; they only warn you about potential sources of problems, and many of those warnings are spurious.

I've never gotten anything useful out of these tools. Generally, encapsulating unsafe operations, assertions, unit testing, and using valgrind, seem both necessary and sufficient for reliably eliminating bugs in C++. And whenever I can, I simply use better languages.

We use Compuware DevPartner Studio (2)

gravy.jones (969410) | more than 4 years ago | (#23464500)

At my company, we use Compuware DevPartner Studio and have found it to be a very comprehensive package. I have used it for performance optimization, memory leak detection, and resource misuse. I have not used its ability to find "dead" code, but that exists. It plugs into Visual C++ 6 and Visual Studio .NET and it takes minimal time to get used to it. Others may know of it from its legacy name "BoundsChecker".

Watch the differences! (4, Interesting)

tikal_work (885055) | more than 4 years ago | (#23464566)

Something that we've found incredibly useful here and in past workplaces was to watch the _differences_ between Gimpel PC-Lint runs, rather than just the whole output.

The output for one of our projects, even with custom error suppression and a large number of "fixups" for lint, borders on 120MiB of text. But you can quickly reduce this to a "status report" consisting of statistics about the number of errors -- and, with a line-number-aware diff tool, report just the new stuff of interest. It's easy to flag common categories of problems so your engine raises them to the top of the notification e-mails.

Keeping all this data around (it's text, it compresses really well) allows you to mine it in the future. We've had several cases where Lint caught wind of something early on, but it was lost in the noise or a rush to get a milestone out -- when we finally find and fix the issue, we're able to quickly audit old lint reports both for when it was introduced and for indicators that it's happening in other places.

And you can do some fun things like do analysis of types of warnings generated by author, etc -- play games with yourself to lower your lint "score" over time...

The big thing is keeping a bit of time for maintenance (not more than an hour a week, at this point) so that the signal/noise ratio of the diffs and stats reports that are mailed out stays high. Talking to your developers about what they like / don't like and tailoring the reports over time helps a lot -- and it's an opportunity to get some surreptitious programming language education done, too.

Doesn't do me any good (1, Funny)

Anonymous Coward | more than 4 years ago | (#23464592)

Static analysis tools never work for me, I don't declare anything static.

To summarize... (2, Insightful)

kclittle (625128) | more than 4 years ago | (#23464708)

Static analysis is a tool. In good hands, it is a valuable tool. In expert hands, it can be invaluable, catching really subtle bugs that only show up in situations unlike anything you've ever tested -- or imagined to test. You know, situations like what your customers will experience the weekend after a major upgrade (no joking...)

Static analysis is part of the basics (3, Insightful)

Incster (1002638) | more than 4 years ago | (#23464764)

You should strive to make your code as clean as possible. Turn on maximum warnings from your compiler, and don't allow code that generates warnings to be checked in to your source repository. Use static analysis tools, and make sure your code passes without issue there as well. These tools will generate many false positives, but if you learn to write in a style that avoids triggering warnings, quality will go up. You may be smarter than Lint, but the next guy that works on the code may not be. Static analysis tools are just another tool in the tool box. Also use dynamic analysis tools like Purify, valgrind, or whatever works in your environment. Writing quality code is hard. You need all the help you can get.

Define "work" (1)

crmartin (98227) | more than 4 years ago | (#23464872)

They will indeed find certain classes of bugs, and code that is lint-free (especially with more modern versions of lint [splint.org]) has fewer defects. Other metrics, like McCabe cyclomatic complexity, can also point out areas in which bugs have a high probability.

On the other hand, no tool can find 100 percent of bugs. This is a theorem (via Turing's halting and equivalence theorems), and also because some bugs are places where the code is doing what it was supposed to do, but that isn't what the user actually wanted.