
The Art of Unit Testing

samzenpus posted more than 4 years ago | from the read-all-about-it dept.

Books 98

FrazzledDad writes "'We let the tests we wrote do more harm than good.' That snippet from the preface of Roy Osherove's The Art of Unit Testing with Examples in .NET (AOUT hereafter) is the wrap-up of a frank description of a failed project Osherove was part of. The goal of AOUT is teaching you great approaches to unit testing so you won't run into similar failures on your own projects." Keep reading for the rest of FrazzledDad's review.

AOUT is a well-written, concise book walking readers through many different aspects of unit testing. Osherove's book has something for all readers, regardless of their experience with unit testing. While the book's primary focus is .NET, the concepts apply to many different platforms, and Osherove covers a few Java tools as well.

Osherove has a long history of advocating testing in the .NET space. He's blogged about it extensively, speaks at many international conferences, and leads a large number of Agile and testing classes. He's also the chief architect at TypeMock, an isolation framework you may end up using in your own testing efforts – and he's very up front about that involvement when discussing isolation techniques in the book. He does a very good job of not pushing his own tool and covers several others as well, leaving me feeling there wasn't any bias toward his product whatsoever.

AOUT does a number of different things really, really well. First off, it focuses solely on unit testing. Early on, Osherove lays out the differences between unit and integration tests, but he quickly moves past that and stays with unit tests for the rest of the book. Secondly, Osherove avoids pushing any particular methodology (Test Driven Development, Behavior Driven Development, etc.) and stays focused on critical concepts around unit testing.

I particularly appreciated that latter point. While I'm a proponent of *DD, it was nice to read through the book without having to filter out any particular dogma. I think that mindset makes this book much more approachable and useful to a broader audience – dive into unit testing and learn the fundamentals before moving on to the next step.

I also enjoyed that Osherove carries one example project through the entire book. He takes readers on a journey as he builds a log analyzer and uses that application to drive discussion of specific testing techniques. There are other examples in the book, but they're all specific to certain situations; the brunt of his discussion remains on the one project, which helps keep readers focused on the concepts Osherove's laying out.

The book's first two chapters are the obligatory introduction to unit testing frameworks and concepts. Osherove quickly moves through discussions of "good" unit tests, offers up a few paragraphs on TDD, and lays out a few bits about unit test frameworks in general. After that he's straight into his "Core Techniques" section, where he discusses stubs, mocks, and isolation frameworks. The third part, "The Test Code," covers hierarchies and the pillars of good testing. The book finishes with "Design and Process," which hits on getting testing solidly integrated into your organization and has a great section on dealing with testing legacy systems. There are also a couple of handy appendices covering design issues and tooling.

Osherove uses his "Core Techniques" section to clearly lay out the differences between stubs and mocks, plus he covers using isolation frameworks such as Rhino.Mocks or TypeMock to assist with implementing these concepts. I enjoyed reading this section because too many folks confuse the concepts of stubbing and mocking. They're not interchangeable, and Osherove does a great job emphasizing where you should use stubs and mocks to deal with dependencies and interactions, respectively.
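
To make the distinction concrete, here's a minimal hand-rolled sketch in C# (no framework involved; the interface names are only illustrative, loosely in the spirit of the book's log-analyzer example). The stub feeds canned data in, and you assert against the code under test; the mock records an interaction, and you assert against the mock itself.

public interface IExtensionManager { bool IsValid(string fileName); }
public interface IWebService { void LogError(string message); }

// Stub: supplies a canned answer so the test can focus on the SUT's own logic.
public class StubExtensionManager : IExtensionManager
{
    public bool IsValid(string fileName) { return true; }
}

// Mock: records what the SUT did to it; the test's assert targets LastError.
public class MockWebService : IWebService
{
    public string LastError;
    public void LogError(string message) { LastError = message; }
}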

The walkthrough of splitting out a dependency and using a stub is a perfect example of why this book's so valuable: Osherove clearly steps through pulling the dependency out to an interface, then shows you different methods of using a stub for testing via injection by constructors, properties, or method parameters. He's also very clear about the drawbacks of each approach, something I find critical in any design-related discussion – let me know what things might cause me grief later on!
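
As a rough illustration of those three seams (a sketch only -- LogAnalyzer and IExtensionManager here are stand-ins, not the book's exact code):

public class LogAnalyzer
{
    private IExtensionManager manager;

    // Seam 1 - constructor injection: the dependency is explicit and required.
    public LogAnalyzer(IExtensionManager manager) { this.manager = manager; }

    // Seam 2 - property injection: a test can swap the dependency after construction.
    public IExtensionManager Manager
    {
        get { return manager; }
        set { manager = value; }
    }

    // Seam 3 - method-parameter injection: the caller supplies it per call.
    public bool IsValidLogFileName(string fileName, IExtensionManager overrideManager)
    {
        return overrideManager.IsValid(fileName);
    }
}

Roughly, the trade-offs: constructors make the dependency mandatory and visible but ripple through every caller; properties keep it optional but allow a half-configured object; parameters avoid held state but clutter every signature.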

While the discussion on mocking, stubbing, and isolation was informative and well written, I got the most out of chapters 6 ("Test hierarchies and organization") and 7 ("The pillars of good tests"). The hierarchy discussion in particular caused me to re-think how I've been organizing an evolving suite of Selenium-based UI tests. I was already making use of DRY and refactoring common functionality out into factory and helper methods; however, Osherove's discussion led me to re-evaluate the overall structure, resulting in some careful use of base classes and inheritance. His concrete examples of building out a usable test API for your environment also changed how I was handling namespaces and general naming.
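
For a flavor of what that base-class refactoring can look like, here's an NUnit-and-Selenium-style sketch with made-up names (not the reviewer's actual suite):

using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

[TestFixture]
public abstract class UiTestBase
{
    protected IWebDriver Driver;

    [SetUp]
    public void StartBrowser() { Driver = new FirefoxDriver(); }

    [TearDown]
    public void StopBrowser() { Driver.Quit(); }
}

[TestFixture]
public class LoginPageTests : UiTestBase
{
    [Test]
    public void RejectsEmptyPassword()
    {
        Driver.Navigate().GoToUrl("http://localhost/login");
        // ...drive the page with helpers inherited from the base class, then assert.
    }
}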

If you're in an organization that's new to testing, or if you're trying to get testing around legacy software, the last two chapters of the book are must-read sections. Changing cultures inside organizations is never easy, and Osherove shows a number of different tools you can use when trying to drive the adoption of testing. My own experience has shown you'll need a combination of many of these, including finding champions, getting management buy-in, and, most importantly, learning how to deal with the folks who become roadblocks.

The Art of Unit Testing does a lot of things really well. I didn't feel the book did anything poorly, and I happily include it in my list of top software engineering/craftsmanship books I've read. All software developers, regardless of their experience with unit testing, stand to learn something from it.

You can purchase The Art of Unit Testing with Examples in .NET from amazon.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.


Frosty P1ss!!11!!11oneone (-1, Troll)

Anonymous Coward | more than 4 years ago | (#31089596)

Yo mama likes to test my unit. Uh huh huh huh, unit.

Re:Frosty P1ss!!11!!11oneone (0, Offtopic)

toastar (573882) | more than 4 years ago | (#31089650)

ATF guy: Wheres the unit?!
Butthead: My unit? In my pants

Re:Frosty P1ss!!11!!11oneone (1, Funny)

Anonymous Coward | more than 4 years ago | (#31089934)

So she uses your unit to test the magnification level of newer electron microscopes? Has she come across one yet with the resolution to find yours?

Going to be buying this (1)

PmanAce (1679902) | more than 4 years ago | (#31089606)

This will fit nicely beside my MSBuild book collecting dust on my desk. Jokes aside, we do tons of unit testing and I have never seen a book solely on unit testing for .NET with TDD, mocking, etc. I'm stoked!

heh heh (0, Offtopic)

Anonymous Coward | more than 4 years ago | (#31089610)

unit heh heh

Wrong Unit (1)

techno-vampire (666512) | more than 4 years ago | (#31089674)

When I first saw the article's title, I thought that this was the UNIT [wikipedia.org] it was referring to. Says a lot about the type of people I hang out with, doesn't it?

Re:Wrong Unit (2, Funny)

Hatta (162192) | more than 4 years ago | (#31089712)

When I read the blurb, I was wondering why they hadn't moved to ELF.

Re:Wrong Unit (1)

Civil_Disobedient (261825) | more than 4 years ago | (#31098542)

That's funny, when I saw the book cover thumbnail, I thought it was a picture of a timelord [ratiosemper.com], then realized there was no ceremonial headpiece, and thought it must be a Gallifrey Citadel Guard.

Apparently I am a giant hulking geek.

xUnit Test Patterns (5, Informative)

Nasarius (593729) | more than 4 years ago | (#31089842)

For anyone familiar with the basics of unit testing but struggling to implement it in real world scenarios, I'd strongly recommend xUnit Test Patterns: Refactoring Test Code by Gerard Meszaros.

The idea is not only that automated testing is good, but that testable code is fundamentally better because it needs to be loosely coupled. I still struggle to follow TDD in many scenarios, especially where I'm closely interacting with system APIs, but just reading xUnit Test Patterns has given me tons of ideas that improved my code.

Re:xUnit Test Patterns (1)

assemblyronin (1719578) | more than 4 years ago | (#31090064)

The idea is not only that automated testing is good, but that testable code is fundamentally better because it needs to be loosely coupled.

I'd add to this that testable code is often, though not always, well planned, well defined, and/or well managed code, and that this is what makes it fundamentally better. One might say that testable code is well engineered code.

(Disclaimer: Haven't read that book yet, this is just an off the cuff remark from experiencing some of the best and some of the worst levels of unit testing and beyond)

Re:xUnit Test Patterns (2, Insightful)

prockcore (543967) | more than 4 years ago | (#31090192)

but that testable code is fundamentally better because it needs to be loosely coupled.

I disagree. It builds a false sense of security and artificially increases complexity. You end up making your units smaller and smaller in order to keep each item discrete and separate.

It's like a car built out of LEGO: sure, you can take any piece off and attach it anywhere else, but the problems are not with the individual pieces - they're in how you put them together... and you aren't testing that if you're only doing unit testing.

Re:xUnit Test Patterns (3, Informative)

msclrhd (1211086) | more than 4 years ago | (#31090314)

Kevlin Henney makes the following distinction:

1. A unit test is a test that can fail if (a) the code under test is wrong, or (b) the test itself is wrong.

2. An integration test is a test that can fail if (a) the code under test is wrong, (b) the test itself is wrong, or (c) the system environment has changed (e.g. the user does not have permission to write a file to a specific folder).
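
In code, the difference looks something like this (an NUnit-flavored sketch; LogParser and ReportWriter are invented names):

using System.IO;
using NUnit.Framework;

[TestFixture]
public class HenneyDistinctionExamples
{
    [Test] // unit test: can fail only if the code or the test is wrong
    public void IsValid_EmptyLine_ReturnsFalse()
    {
        Assert.IsFalse(new LogParser().IsValid(""));
    }

    [Test] // integration test: can also fail if the environment changes,
           // e.g. the folder vanishes or write permission is revoked
    public void Write_Report_CreatesFile()
    {
        new ReportWriter().Write(@"C:\reports\out.txt");
        Assert.IsTrue(File.Exists(@"C:\reports\out.txt"));
    }
}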

John Lakos refers to individual things under test as components. In his model, there are layers of components that build on each other and interact with each other, but these are well-defined components that just happen to depend on other components.

Re:xUnit Test Patterns (4, Informative)

Lunix Nutcase (1092239) | more than 4 years ago | (#31090426)

It's like a car built out of LEGO: sure, you can take any piece off and attach it anywhere else, but the problems are not with the individual pieces - they're in how you put them together... and you aren't testing that if you're only doing unit testing.

And that's why you do integration testing too.

Re:xUnit Test Patterns (0)

Anonymous Coward | more than 4 years ago | (#31092768)

So you are saying testable code is not fundamentally better because if you only do unit testing you don't do integration testing?

It's like saying cake isn't good because if you only eat sugar you don't eat bacon.

Re:xUnit Test Patterns (1)

MobyDisk (75490) | more than 4 years ago | (#31090318)

The idea is not only that automated testing is good, but that testable code is fundamentally better

One of the main goals of TypeMock is to eliminate that requirement. TypeMock allows you to mock objects that were not designed to be mocked and are not loosely coupled.
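
For example, something along these lines (a sketch assuming Typemock Isolator's AAA-style Isolate API; LegacyMailer, OrderProcessor, and Order are made-up classes with no interfaces and non-virtual methods):

// Arrange: fake the concrete class and intercept the SUT's own "new LegacyMailer()".
var fakeMailer = Isolate.Fake.Instance<LegacyMailer>();
Isolate.Swap.NextInstance<LegacyMailer>().With(fakeMailer);

// Act: the code under test runs unmodified, constructing its dependency internally.
new OrderProcessor().Confirm(new Order());

// Assert: the interaction happened, even though nothing was designed for mocking.
Isolate.Verify.WasCalledWithAnyArguments(() => fakeMailer.Send(null));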

Re:xUnit Test Patterns (1)

jgrahn (181062) | more than 4 years ago | (#31090684)

For anyone familiar with the basics of unit testing but struggling to implement it in real world scenarios, I'd strongly recommend xUnit Test Patterns: Refactoring Test Code by Gerard Meszaros.

Aargh! They managed to mention unit tests, patterns and refactoring in the same title!

Also, I really dislike xUnit, as I've seen it wedged into Python's unittest module and CppUnit (C++). It's a horrible design which just gets in the way -- I don't understand what valid reason a book has to rely on it (except buzzword compliance).

The idea is not only that automated testing is good, but that testable code is fundamentally better because it needs to be loosely coupled.

And loosely coupled code is fundamentally better *why*? "Because it can be easily unit tested" is the only argument I can swallow ...

Loose coupling was a popular catchphrase in the early 1990s (along with Software Reuse), but that kind of thinking is the source of lots of overly-general and vague code.

Re:xUnit Test Patterns (3, Insightful)

Lunix Nutcase (1092239) | more than 4 years ago | (#31090884)

And loosely coupled code is fundamentally better *why*? "Because it can be easily unit tested" is the only argument I can swallow ...

Because if the modules of your system have low to no coupling between themselves, you can more easily make changes to individual modules. In a highly coupled system, a change to one part can force you to subsequently change numerous other pieces of the system. This is eliminated or greatly reduced if your modules have little to no dependency on each other. Even if you do no unit testing, having a highly modular and loosely coupled system just makes subsequent maintenance work so much easier.

Re:xUnit Test Patterns (2, Insightful)

geekoid (135745) | more than 4 years ago | (#31093160)

Smaller pieces are easier to test, easier to maintain, easier to document, and severely reduce the chance of introducing a new bug when changes need to be made.

Unit testing helps enforce small code pieces.

"the source of lots of overly-general and vague code"

If that's true, then you have dealt with some extremely poor programmers. I suggest working with software engineers instead of programmers.

Re-use of common pieces is a good thing, and loosely coupled code makes that easier as well.

Re:xUnit Test Patterns (1)

Canberra Bob (763479) | more than 4 years ago | (#31093776)

And loosely coupled code is fundamentally better *why*?
"Because it can be easily unit tested" is the only argument I can swallow ...

On the past few systems I have worked on, I had the "fun" job of adding new features to existing legacy code. Adding features to the existing tightly coupled code was a nightmare: finding what did exactly what took ages, some functionality was partially performed in several different locations - each relying on the previous part - and the slightest spec change would require the whole thing to be re-done yet again. The exact same spec changes (e.g. a new element in a message) were trivial in the applications I had written from scratch, as each change only needed to be made in a single location and was easy to test. I may have seen some extreme cases, but I have certainly become a "loosely coupled" evangelist since.

Re:xUnit Test Patterns (2, Insightful)

shutdown -p now (807394) | more than 4 years ago | (#31091182)

The idea is not only that automated testing is good, but that testable code is fundamentally better because it needs to be loosely coupled.

Which is a faulty assumption. Coming from this perspective, you want to unit test everything, and so you need to make everything loosely coupled. But the latter is not free, and sometimes the cost can be hefty - where a simple coding pattern would do before (say, a static factory method), you now get an interface for every single class in your program and abstract factories everywhere (or IoC/DI with its maze of XML configs).

Ultimately, you write a larger amount of code that is harder to follow and harder to maintain, for 1) the real benefit of being able to unit test it, and 2) the illusory benefit of being able to extend it more easily. That last benefit is illusory because, in most cases, you'll never actually use it - and in most of the cases where you do, the cumulative cost of maintaining the loosely coupled code up to that point is much more than the price you'd have paid to refactor the code to suit your new needs if you had left it simple (and more coupled) originally.

Also, it does promote some patterns that are actively harmful. For example, in C#, methods are not virtual by default, and it's a conscious design decision [artima.com] to avoid the versioning problem with brittle base classes [msdn.com]. But "testable code" must have all methods virtual in order for them to be mocked! So you either have to carefully consider the brittle base class issue for every single method you write, or just say "screw them all" and forget about it (the Java approach). The latter is what most people choose, and, naturally, it doesn't exactly increase product quality.
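
The tension is easy to see in a few lines (illustrative names only, not code from the book):

public class OrderProcessor
{
    // Non-virtual: safe from the brittle-base-class problem, but a proxy-based
    // mocking framework cannot intercept or override it in a test.
    public decimal Total(Order order) { return order.Subtotal + order.Tax; }

    // Made virtual purely so tests can substitute it; from now on, every
    // override and every future version must honor this method's contract.
    public virtual void Submit(Order order) { /* ... */ }
}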

Of course, this all hinges on the definition of "testable code". The problem with that is that it's essentially defined by the limitations of current mainstream unit testing frameworks, particularly their mocking capabilities. "Oh, you need interfaces everywhere because we can't mock sealed classes or non-virtual members". And then a convenient explanation is concocted that says that this style is actually "testable code", and it's an inherently good one, regardless of any testing.

Thankfully, TypeMock is about the only sane .NET unit testing framework out there - it lets you mock anything. Sealed classes, static members, constructors, non-virtual methods... you name it, it's there. And that is as it should be. It lets you design your API while thinking about the issues that are actually relevant to that design - carefully considering versioning problems, not forgetting ease of use and conciseness, and providing the degree of decoupling that is relevant to the specific task at hand - with no regard to any limitations the testing framework sets.

It's no surprise that some people from the TDD community are hostile towards TypeMock [wordpress.com] because it's "too powerful", and doesn't force the programmer to conform to their vision of "testable code". But it's rather ironic, anyway, given how TDD itself is by and large an offshoot of Agile, which had always promoted principles such as "do what works" and "make things no more complicated than necessary".

Re:xUnit Test Patterns (1)

bondsbw (888959) | more than 4 years ago | (#31095914)

A good chunk of your post assumes that the ideas of interface-based decoupling, IoC, etc. are all unnatural. My guess is that those things are not your enemy, but that design is your problem.

It's probably true that most programmers learn, from day one of learning how to program, to write tightly coupled code because it defers the difficult task of design until the very last possible minute. But that doesn't mean that decoupling is unnatural, and it is certainly not bad. It means that we need to teach programmers these principles from the start.

the cost of maintaining the loosely coupled code up to that point is actually much more than the price you'd have paid for refactoring it to suit your new needs if you left it simple (and more coupled) originally.

I don't see this at all. Maintainability is the point of loose coupling.

Case in point... I have been working on a data entry system for a few years now that, through previous design and my own old habits, has become very tightly coupled. Unit testing probably won't ever happen. I once needed to add a field in section 2. I did and released an update. A few days later, we noticed that data had been half-entered in hundreds of records. It took days to track down the issue... it turns out that I didn't find all the places that my field needed to be updated, and because of consistency errors, anytime a button was pressed in section 5, any future attempts to save the record were lost.

That situation probably never would have happened with loosely coupled code, because sections 2 and 5 would have had nothing to do with each other. All the places that a new field would need to be added would have been more obvious.

Be careful when making blanket statements about how complicated things are, when they are simply different. To me, the coupled code I have above is leaps-and-bounds more complicated than code that has a well thought-out design. Testability just happens to come for free.

(or IoC/DI with its maze of XML configs)

Why do you have a "maze" of IoC configs? It seems you're doing something wrong if that's a major issue.

Re:xUnit Test Patterns (3, Insightful)

shutdown -p now (807394) | more than 4 years ago | (#31096312)

A good chunk of your post assumes that the ideas of interface-based decoupling, IoC, etc. are all unnatural.

No, it doesn't. It assumes that they're not always natural, and that it's not always worth it.

Sometimes it is right and proper for two classes to be tightly coupled. Sometimes, we want to decouple them, but that decoupling doesn't necessarily have to take the form of interface per class and IoC.

By the way, I would argue that IoC is very unnatural in many things. Its use should be an exception rather than a rule. Among other things, it tends to replace proper object-oriented design with service-centric one.

I don't see this at all. Maintainability is the point of loose coupling.

It's at best a side effect (when it's there). The primary point of loose coupling is to be able to independently substitute parts - that is, extensibility, and testability to the extent that testing frameworks use that (rather than backdoors).

Case in point... I have been working on a data entry system for a few years now that, through previous design and my own old habits, has become very tightly coupled. Unit testing probably won't ever happen. I once needed to add a field in section 2. I did and released an update. A few days later, we noticed that data had been half-entered in hundreds of records. It took days to track down the issue... it turns out that I didn't find all the places that my field needed to be updated, and because of consistency errors, anytime a button was pressed in section 5, any future attempts to save the record were lost.

What you've described is a problem with code duplication, not tight coupling.

Also, the problem would have been solved by unit tests (which do not require decoupling).

Why do you have a "maze" of IoC configs?

By that I mean that it's often entirely non-obvious where things come from, just looking at one particular piece of code. It's actually a problem with OOP at large, to some extent - it's what you inevitably get with decoupling - but IoC takes this to the extreme, where it actually becomes very noticeable.

Let me try to give an analogy. A monolithic design is a single "brick" where everything is interconnected. A modular one is when you have several bricks, each doing its own thing. If those bricks are made such that you can only put them together, and cannot replace any brick, the design is tightly coupled. If you can freely replace any brick with a similar one (no matter what it's made of - so long as it's made to the spec), it's loosely coupled.

The problem is that we, as programmers, don't see the system as a whole - we see individual bricks, and have to mentally reconstruct the whole thing. When there are too many of them (because they're too small), and they're so generic and interchangeable, it's not entirely obvious where any particular one fits without looking at many others.

It's not an unsurmountable problem, and one can certainly train oneself to handle it. The problem, as with any "purist" approach, be it OO, FP, or anything else, is that at some point, the return on investment is negative - you spend a lot of time learning to put tiny bricks together, and then actually putting them together, while the problem can be solved by a less experienced programmer using smaller and cruder bricks, for cheaper, and pretty much just as good from a pragmatic point of view. The only thing that is left for your design is that it's more "elegant", but it's not a business goal in and of itself.

It's important to maintain that balance. Slip too much to one side, and your design becomes an unmaintainable, unreadable mess of tightly coupled spaghetti code. Slip too much to another one, and it's an unmaintainable, unreadable elegant mess of tiny classes with single-liner methods, wired together by IoC, where all bits together produce the desired result, but no-one really knows how. I've seen both. Both are very painful to maintain, debug, and extend (though that said, I usually still prefer the latter - at least it's more amenable to refactoring).

Re:xUnit Test Patterns (1)

bondsbw (888959) | more than 4 years ago | (#31099216)

I can definitely agree with the non-purist point of view. You can take everything to an extreme. When decoupling, you can pull so much out of your classes that they become anemic, and really you have something that no longer gains the benefits of OOP.

Among other things, it [IoC] tends to replace proper object-oriented design with service-centric one.

I disagree. I'm sure you could take it to that extreme, but I would say it tends to promote OO design. IoC relies on inheritance of interfaces and base classes. Without IoC, I've many times seen entire programs created without any inheritance at all (except maybe when the IDE does it for you, like in C#/WinForms when inheriting System.Windows.Forms.Form, which is provided by the Visual Studio designer).

So it's not replacing OO design, it's extending OO design. Be careful not to violate what you said in your next-to-last paragraph... OO is not the answer to everything.

What you've described is a problem with code duplication, not tight coupling.

No, it's a problem with coupling. I had practically everything in one class (a single form... again, bad practice). It was quite easy to write the first time around, but little did I know that 3 years later I would pay the price. Actually, I've paid the price pretty much every day of the past six years of maintenance. I waste time fixing problems that would have been obvious the first time around with a decoupled design.

Also, the problem would have been solved by unit tests (which do not require decoupling).

That's the thing... this code is not unit testable. Let me take that back. I probably could try, but every test would have database hits or web service hits and would take forever to run. Decoupling would remove those dependencies and provide testability that is tractable.

The problem is that we, as programmers, don't see the system as a whole - we see individual bricks

This is good. It's like having an atlas. We don't have to see every street in every city in the world. It has one map of the country (analogous to our IoC mappings), and more detailed maps of smaller areas like cities and states (individual classes).

and have to mentally reconstruct the whole thing

No, again IoC is the larger, less detailed map. That provides your reconstruction. IoC allows you to focus when you need to focus, and see the big picture when you need to do that.

Again, you have valid arguments when you take things to an extreme. That's where good design comes into play... and I would argue that "elegant" code is that which is designed with the best use of the principles and patterns that you know. Like you said, code gets ugly fast when you take a purist approach.

And don't be so afraid of things that provide more than one benefit simultaneously. Just because decoupling provides both maintainability and substitutability, and somewhat provides testability, doesn't mean it's bad. It just means that you need to be careful about the design, and to understand the difference in the benefits and when you take them to an extreme.

Re:xUnit Test Patterns (1)

shutdown -p now (807394) | more than 4 years ago | (#31101040)

No, it's a problem with coupling. I had practically everything in one class (a single form... again, bad practice).

Well, that's not quite coupling, either. It's the infamous "magic button" anti-pattern, where all logic gets shoved directly into event handlers.

I guess you could call it coupling in some sense, but to me, coupling is about dependencies between seemingly distinct components (classes etc). When it's a single component that "does everything", it's a more fundamental code organization and OO design problem.

That's the thing... this code is not unit testable. Let me take that back. I probably could try, but every test would have database hits or web service hits

The trick is that something like TypeMock lets you mock e.g. ADO.NET or ASP.NET web service APIs directly - starting from constructor calls such as "new SqlConnection()", and moving on.

Heck, they can mock SharePoint, which is a really horrible API from any point of view (which is probably why they make a separate point for that on their website).

Re:xUnit Test Patterns (1)

ocularDeathRay (760450) | more than 4 years ago | (#31095046)

For anyone familiar with the basics of unit testing but struggling to implement it in real world scenarios,

I think that all slashdot readers fall into this category....

Why when I was a young man in the program . . . (4, Funny)

Tanman (90298) | more than 4 years ago | (#31089870)

When I was a young man in the program, they tested the unit by having us march shoeless through 2 miles of uphill, mine-ridden, barbed-wire-laced snow! The unit got tested, and tested HARD! The program didn't allow for no pansy-ass pussy-footers. And did the unit in the program pass its tests? By God it did! You youngsters got it easy just havin to do some stupid vocabulary test to test your unit in the program. Plugging in words. HAH! Try plugging in the gaping hole left by the bark of an exploding tree!

It's not art, it's basic engineering (5, Insightful)

syousef (465911) | more than 4 years ago | (#31089888)

The only part that is an "art" is working out how to successfully isolate the component that you're trying to test. For simple components at lower layers (typically data CRUD) it's not so hard. Once you find you're having to jump through hoops to set up your stubs, it gets harder to "fake" them successfully and becomes a more error-prone and time-consuming process. It can also be difficult if there's security in the way. The very checks you've put in to prevent security violations now have to be worked around or bypassed for your unit tests. There's also a danger of becoming too confident in your code because it passes the test when run against stub data. You may find there's a bug specific to the interfaces you've stubbed. (For example, a bug in a vendor's database driver, or a bug in your data access framework that doesn't show up against your stub.)

All of those distracting side issues and complications aside, we are dealing with fundamental engineering principles. Build a component, test a component. Nothing could be simpler, in principle. So it's disappointing when developers get so caught up in the side issues that they resist unit testing. There does come a point where working around obstacles makes unit testing hard, and you have to weigh benefit against cost and ask yourself how realistic the test is. But you don't go into a project assuming every component is too hard to unit test. That's just lazy and self-defeating. It comes down to the simple fact that many programmers aren't very good at breaking down a problem. In industries where their work was more transparent, they wouldn't last long. In software development, where your code is abstract and the fruit of your work takes a long time to get to production, bad developers remain.

Re:It's not art, it's basic engineering (1)

noidentity (188756) | more than 4 years ago | (#31090024)

Indeed. As a recovering premature optimizer, I think efficiency is a big reason people avoid breaking something into smaller parts that can be tested independently. Plus things like the Singleton pattern result in designs that are harder to test, because there are some states you cannot "rewind" back to without restarting the program.

Re:It's not art, it's basic engineering (2, Interesting)

Anonymous Coward | more than 4 years ago | (#31090214)

Singletons are pretty easy to test as long as you don't use the *antipattern* of the class that enforces its own, uh, singletonicity. If you have .getInstance() methods, you have that antipattern. Yes, it's in the GoF book but frankly the GoF is just wrong on that point. It's a lifecycle pattern, and lifecycles like that should be taken care of by the context, like a class factory or a DI container. If you have a DI container, testing a singleton is an absolute snap and in fact *easier* than non-singletons.

A decent mock framework can handle even the bad old singleton pattern, though. And part and parcel of a mock framework is its ability to "rewind".
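
Sketched out in C# (all names hypothetical; ServiceA and ServiceB stand in for consumers of the shared object):

// Anti-pattern: the class enforces its own singleton lifecycle.
public sealed class Config
{
    private static readonly Config instance = new Config();
    private Config() { }
    public static Config GetInstance() { return instance; } // hard to reset between tests
}

// Lifecycle owned by the context: the class is just a class.
public class AppConfig { /* ... */ }

// The composition root (or DI container) decides there is only one...
var shared = new AppConfig();
var a = new ServiceA(shared);
var b = new ServiceB(shared);
// ...while a test is free to hand each test case a fresh instance.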

Re:It's not art, it's basic engineering (3, Informative)

msclrhd (1211086) | more than 4 years ago | (#31090522)

When testing a system, if you cannot put a given component under test (or do so by "faking" its dependencies -- e.g. the things that talk to the database), then the architecture is wrong.

I strive never to have any "fake" parts of the system in a test. Faking makes tests harder to maintain (e.g. changing some of the real components will break the tests). You cannot easily change the data you are testing with, or have a method generate an error for a specific test. You are also not really testing the proper code; not all of it, at any rate.

You should implement interfaces at the interface boundaries, and have it so that the code under test can be given different implementations of that interface. This means that you don't need to fake any part of your codebase -- you are testing it with different data and/or interface behaviours (e.g. exceptions) that are designed to exercise the code under test. The code under test should not need modification in order to run (aside from re-architecturing the system to make it testable).

The main goal of testing is to have the maximum coverage of the code possible to ensure that any changes to the code don't change expected behaviour or cause bugs. Ideally, when a bug is found in manual testing, it should be possible to add a test case for that bug so that it can be verified and so that future work will not re-introduce that bug.

Start where you can. If you have a large project, put the code that you are working on under test first to verify the existing behaviour. This also works as an exploratory phase for code that you don't fully understand.

Also remember that tests should form part of the documentation. They are useful for verifying an interface contract (does a method accept a null string when the contract says it does? does the foo object always exist like the document says it does?)

Re:It's not art, it's basic engineering (1)

syousef (465911) | more than 4 years ago | (#31093228)

What you're arguing against is mocks. In theory, what you strive for is fantastic. In practice, you don't always get to determine the architecture; you get to choose the best of a bad bunch. Combine this with the thinking in some circles that mocks are the best thing since sliced bread, and you can sure end up in a mess. I happen to agree with you in principle: mocks are wasteful and can be dangerous. In practice, sometimes you have no choice, because setting up with real objects can be too complex or time consuming (or both).

Re:It's not art, it's basic engineering (1)

Taevin (850923) | more than 4 years ago | (#31104550)

I think you and msclrhd are conflating unit testing and integration testing. In order to test components as a single unit (at least for components with dependencies), mocks are critical. If component A fails when working with component B, did it fail because of a bug in A or because B is not behaving according to its contract? In other words, without mocking, unit tests end up being integration tests which are very important to verify the overall function of your application, but tell you nothing about your individual components (other than that one or more of them failed, somewhere).

Unit tests allow you to clearly define your expected behavior for a single component and allow you to quickly (speed is important) verify that the implementation behaves.

Integration tests allow you to verify the broader functionality of your application that arises from the combination of two or more components. That is, making sure that the application is in the state you expect after one widget garbles or foos another.

I guess what I'm saying is, I don't see how mocks are wasteful or dangerous unless you're using them to avoid testing part of your application because it's too hard/boring/whatever. In that case, poor testing is poor testing, mocks or no.

Re:It's not art, it's basic engineering (2, Insightful)

wrook (134116) | more than 4 years ago | (#31093416)

This is a really good post. I wish I could moderate you up. Like some people, I've become less enamoured with the word "test" for unit tests. It implies that I am trying to find out if the functionality works. This is obviously part of my effort, but actually it has become less so for me over time. For me, unit tests are used for telling me when something has changed in the system that needs my attention. I liken it to a spider's web. I'm not trying to find all the corner cases or prove that it works in every case. I want bugs to have a high probability of hitting my web and informing me. When writing new code I also want to be informed when I make an assumption about existing code that is different from the original author. I think about my assumptions and try to write unit tests that verify the assumptions. This often fills out most of my requirements for a "spider's web" since when people start messing with code and break my assumptions, my tests will also break.

Finally, your point about documentation is extremely good. A large number of people, even if they are used to writing unit tests, don't understand unit testing as documentation. I've gone to the extreme of thinking about my tests as being literate programming written in the programming language rather than English. To this extent, I've embraced BDD and write stories with tests. For each story that I'm developing, I'll create unit tests that explain how each part of the interface is used. I then refactor my stories mercilessly over time to maintain a consistent narrative. However, I often feel like I want a "web" (as in TeX's literate programming tool) tool that will generate my narrative, but will still allow me to view the code as units (which is useful for debugging).

Engineering not an art? (1)

fm6 (162816) | more than 4 years ago | (#31091268)

You seem to think that "art" refers to something that is fundamentally mysterious [abcgallery.com]. A lot of art is, but that's not an intrinsic feature. The word itself has a lot of different meanings. Here are some of the most fundamental, from the Oxford English Dictionary.

        1. Skill in doing something, esp. as the result of knowledge or practice.
        2. Skill in the practical application of the principles of a particular field of knowledge or learning; technical skill. Obs.
        3. As a count noun.
              a. A practical application of knowledge; (hence) something which can be achieved or understood by the employment of skill and knowledge...
              b. A practical pursuit or trade of a skilled nature, a craft; an activity that can be achieved or mastered by the application of specialist skills...
              c. A company of craftsmen; a guild.
        4. With modifying word or words denoting skill in a particular craft, profession, or other sphere of activity.
        5. An acquired ability of any kind; a skill at doing a specified thing, typically acquired through study and practice; a knack. Freq. in the art of —.

Before you can offer an informed opinion as to what is and is not engineering, you need to read something by Henry Petroski. He defines it as "the art of rearranging the materials and forces of nature".

Re:Engineering not an art? (1)

syousef (465911) | more than 4 years ago | (#31093254)

It does depend on your definition of the word "art" but I'm not the only one who uses the word in the context you describe as erroneous. At best the word art is ambiguous and should be avoided. The word engineering is less ambiguous and more accurate.

Re:Engineering not an art? (1)

fm6 (162816) | more than 4 years ago | (#31093530)

I'm not saying your usage is erroneous. In some contexts it does make sense. This just isn't one of them. When you use language, you need to be sensitive to context; you can't just blindly plug in whatever definition suits you.

Unless you're in politics, of course...

Re:Engineering not an art? (1)

syousef (465911) | more than 4 years ago | (#31110490)

I'm not saying your usage is erroneous. In some contexts it does make sense. This just isn't one of them. When you use language, you need to be sensitive to context, you can't just blinding plug in whatever definition suits you.

What you did was take a contrary definition and insist that it is the only one that applies. Do you even understand the irony here?

Unless you're in politics, of course.

Pot. Kettle. Black.

Re:Engineering not an art? (1)

fm6 (162816) | more than 4 years ago | (#31117188)

We're in flame mode, I see. When you grow up, you'll discover that people can disagree with you without attacking you.

Re:Engineering not an art? (1)

syousef (465911) | more than 4 years ago | (#31131962)

Take a close look at your own post before you accuse me of descending into flame mode. I don't have a problem with you disagreeing with me. I have a problem with you telling me my argument doesn't make sense unless I'm in politics. Then you add to the irony with the whole "when you grow up" routine.

Re:Engineering not an art? (1)

fm6 (162816) | more than 4 years ago | (#31133142)

My quip about politics was meant as a joke. It was not meant as a personal attack. I'm sorry if it offended.

In the future, you might consider saying "I take offense at" instead of going on the offensive yourself.

Re:Engineering not an art? (1)

sjames (1099) | more than 4 years ago | (#31096062)

The problem IS one of semantics. Too many of the people who want to remove the word art (or worse the ones who insist that 'art' is inferior) believe that so long as correct procedures are followed at each step even drooling morons can crank out perfect programs (design and all) just like workers on an assembly line (in whatever country has the cheapest labor at the moment). They don't want "in my experience..." they want the result from a magic formula to cover their ass. After all, you can hardly be blamed if math is wrong.

OTOH you also have cowboys who claim what they do is art and that simple professionalism is removing the art.

Tiring to read (4, Interesting)

noidentity (188756) | more than 4 years ago | (#31089894)

I read this book recently and found it tiring. Much of it reads like a blog, and like many books, the author randomly switches person. He'll refer to the reader as "the reader", "you", "we", and in the third person. This is the kind of book where it's hard to keep a clear idea of what the author is talking about, because he doesn't have a clear idea of what he's trying to communicate.

When I think of tiring books like this, I can't help remembering Steve McConnell's Code Complete (first edition; I haven't looked at the second edition yet). Reading that book is like having your autonomy assaulted: the author constantly tries to get you to accept his claims by whatever means necessary, rather than presenting them with rational arguments and letting you decide when to apply them. I'm not saying Osherove's book is that bad, just that it has the same unenjoyable aspect that makes it a chore to read and to get useful information from.

I recently also read Kent Beck's Test-Driven Development and highly recommend it, if you simply want to learn about unit testing and test-driven development. It's concise and enjoyable to read. Unfortunately it doesn't cover as many details, and I don't have any good alternatives to books like Osherove's (and I've read many at my local large university library).

Re:Tiring to read (1)

weicco (645927) | more than 4 years ago | (#31096310)

Reading that book is like having your autonomy assaulted, because the author constantly tries to get you to accept the things he's claiming

This is exactly what I'm looking for in books, blogs, etc. I can read all the technical information I want about different design/coding/testing/project-leading techniques on Wikipedia, but I also want to read how these things are done in real life.

Let's take an example. I've recently been focused on MS SQL Server and T-SQL. A couple of weeks ago I read everything there is about (horizontal) table partitioning, and I now know all the T-SQL magic I have to do to partition a table. But what I don't know is when I should use this feature. What I want to read about are actual real-life experiences: cases where DBAs decided to partition tables, what the motivation was, how they did it, and especially how things worked out.

If the text is written in the imperative, all the better. It makes me think from the writer's perspective. Of course I don't always agree with the writer, but at least it gives me a new perspective on things.

Oh, yes. I haven't actually read The Art of Unit Testing or Code Complete (just browsed the latter a bit). I just wanted to chime in a bit. It's good for the /dev/soul :)

As is Beck's book... (1, Interesting)

Anonymous Coward | more than 4 years ago | (#31096792)

Interesting that you should mention Kent Beck's book, as I too have read it recently and found it to be the shittiest pile of steaming turd that I've ever seen put into book form. It was *SO* slow going, so condescending, and so sorely lacking in cohesive rational arguments that if I hadn't been sold on TDD through other means, I would have abandoned the concept altogether. It might be okay as an introduction for a complete novice programmer, but if you've had any experience in the industry at all, I'd recommend avoiding it unless you either want something to put you to sleep or you're a sucker for punishment.

Read it; Loved it. (2, Interesting)

fyrie (604735) | more than 4 years ago | (#31089924)

I'm fairly experienced with unit testing, and I've read several books on the subject. This is by far the best introduction to unit testing I have read. The book, in very practical terms, explains in 300 pages that which took me about five years to learn the hard way. I think this book also has a lot of value for unit testers that got their start a decade or more ago but haven't kept up with recent trends.

Error coding... (0, Offtopic)

girlintraining (1395911) | more than 4 years ago | (#31089958)

Could I go out on a limb here and ask why error handling is considered a black art, requiring truckloads of books to understand? I've done well following a few basic rules:

1. Know exactly what the system call does before you use it.
2. Check the return value of every one.
3. Check the permissions when you access a resource.
4. Blocking calls are a necessary evil. Putting them in the main loop is not.
5. Always check a pointer before you use it.
5a. ...even if it is a return from a system call that never fails.
6. Build your project in pieces -- and try to cause as many different failure conditions as possible.
6a. Anything that could require new equipment if failure testing kills it? Use someone else's.
7. No matter how good your code is, that el cheapo power supply is waiting. And it is hungry.

Re:Error coding... (1)

dkleinsc (563838) | more than 4 years ago | (#31090344)

This isn't (solely) about error handling. It's about logical testing, which can be as nasty as a technical error. For instance: replace your real database with a fake one, then run through your business logic modules and make sure the right data is getting passed back, and that any calls you make to change that data get reflected in the right updates to the database. The same techniques can be used to find out what your code does if it tries to talk to the database and the database is out to lunch.

What you're describing is more integration testing, where you test the whole thing from front-end to lowest-level code, probably with the help of a dedicated tester / QA analyst. This is also good, but these other techniques are often handy for isolating problems quickly as well as being able to improve one piece and see immediately what effects it has on other pieces.

Re:Error coding... (1)

S77IM (1371931) | more than 4 years ago | (#31090352)

Maybe you're on the wrong thread? It's a review of a book about unit testing, not error handling. The two are only moderately related, as far as programming concerns go.

  -- 77IM

Re:Error coding... (1)

MobyDisk (75490) | more than 4 years ago | (#31090370)

Fortunately, modern concepts like exceptions have eliminated the need for steps 2 through 5. It is very annoying to go back to old code and see:

return_value = DoStep1();
if (return_value != success) handle_error(return_value);
return_value = DoStep2();
if (return_value != success) handle_error(return_value);
return_value = DoStep3();
if (return_value != success) handle_error(return_value);
...

or worse:

return_value = DoStep1();
if (return_value == success)
{
    return_value = DoStep2();
    if (return_value == success)
    {
        // ... and so on, indented to the 500th column...
    }
}

instead of:

try
{
    DoStep1();
    DoStep2();
    DoStep3();
}
catch (StepFailure error)
{
    handle_error(error);
}

Re:Error coding... (1)

LateArthurDent (1403947) | more than 4 years ago | (#31092034)

Fortunately, modern concepts like exceptions have eliminated the need for steps 2 through 5.

They have minimized the need for steps 2 through 5, not eliminated it. Exception handling is expensive, and you should only rely on catching exceptions for errors that are not supposed to happen often. If a pointer is not supposed to be null, you can rely on catching an exception. If the pointer could be null and you're supposed to create a new object in case it is, you check it.

Basically, if you're throwing an exception, it should be for something unexpected that shouldn't have happened, and now it requires you to ask the user "wtf did you do? I'm getting garbage in, so instead of giving you garbage out, here's the error." For anything that could be a normal state of the program, handle it yourself.
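
The .NET framework itself models this split; a small sketch (userInput and configValue are placeholders):

int port;
if (!int.TryParse(userInput, out port)) // bad input is expected: check, don't throw
{
    port = 8080;                        // normal fallback, no stack unwinding
}

int timeout = int.Parse(configValue);   // garbage here is a genuine bug:
                                        // let the FormatException surface it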

Re:Error coding... (0)

Anonymous Coward | more than 4 years ago | (#31092226)

> Exception handling is expensive

Actually it's far less expensive than laboriously checking every return value. Most every language with exception handling makes entering a protected block free. You only pay the price when the exception is triggered and it has to unwind the stack to look for your handler. Within the same frame, it's virtually identical code to what you'd otherwise have written by hand.

Unfortunately, Java still makes you check almost every damn thing, because every object reference can be null. This is a step *backward* from C++, which has references that for the most part cannot be null (yes I know it's possible, but only with casts).

Re:Error coding... (1)

LateArthurDent (1403947) | more than 4 years ago | (#31100466)

you only pay the price when the exception is triggered...

That's why I said that the place to use them is where you don't expect the exception to be triggered often. However, I've seen plenty of people use exceptions as a state machine for whatever they're coding, and if you're doing that, return codes are the way to go.

Re:Error coding... (1)

Cassini2 (956052) | more than 4 years ago | (#31095592)

Actually, I despise programmers that depend on exceptions. If you are trying to work on any kind of hardened system, like a real-time system, or a system that "just has to work", or a system that must work in fixed memory, exceptions are nightmares. You have to prove that every single exception either can't happen or can be safely handled. Even "safe" handling can be a challenge. For instance, if an error occurs, the control system must keep running, otherwise more errors occur. In some contingencies, the software must just "keep handling errors". It is amazing how many functions can't handle a constant stream of errors without leaking memory, pausing for long periods of time, or pausing for operator input when no operator is present.

One project involved complex mechanical automation. The system's performance was defined solely by the rate at which exceptions to the normal process occurred. It was vital to have software that just works. If the mechanical exceptions caused exceptions in the computer software, then everything stopped working. It was impossible to keep track of all the complexities involving all the exceptions.

The only solution was simple software that had very few error cases, and each error case clearly exposed the error handling. As such, all error cases could be checked. Curiously, this was the same system that worked much better when ported from .NET to C. For low-level code, C is a much better language, no matter how trendy .NET is.

Re:Error coding... (1)

MobyDisk (75490) | more than 4 years ago | (#31101414)

All your points are good, but I think they apply regardless of whether or not you use exceptions. If you write the tedious logic that I gave in my example, you still have the same code to handle the return values. For microcontroller coding, just think of exceptions as an automatic if-then-else that is placed around every function call. If the function call succeeds the check is free. If the function call fails, the check is expensive. It's quite a nice trade-off.

Funny note: I work for a medical equipment manufacturer, which uses vxWorks and C++. They use exceptions in their coding, and most of their memory allocations are fixed due to memory constraints.

If the mechanical exceptions caused exceptions in the computer software

It's interesting you say that. In our app, mechanical exceptions are the one place where exceptions are not permitted to cascade upward. They are exceptions, in that they are very rare and we don't want to write the tedious code to check return values every time. Exceptions are just an easier way to handle the return_value pattern I showed in my example. It really doesn't change very much.

I have seen some books on using Java on microcontrollers. I have never done it, but I would be curious to see how they approach it, since Java has checked exceptions.

Re:Error coding... (2, Insightful)

jgrahn (181062) | more than 4 years ago | (#31090792)

Could I go out on a limb here and ask why error handling is considered a black art, requiring truckloads of books to understand? I've done well following a few basic rules;

1. Know exactly what the system call does before you use it.
2. Check the return value of every one.
3. Check the permissions when you access a resource.
4. Blocking calls are a necessary evil. Putting them in the main loop is not.
5. Always check a pointer before you use it.
...

*Detecting* the problem isn't hard. What's hard is *handling* it -- and there was nothing about that on your list. Hint: calling abort(2) is not always acceptable.

Re:Error coding... (2, Interesting)

msclrhd (1211086) | more than 4 years ago | (#31091078)

Error handling and reporting is very complex. This is mostly due to the complexities involved when different parts of a system interact with each other.

Windows' GetErrorInfo call will return NULL on the second call, despite no other COM calls being made in between. It took a while to understand what was happening there.

Do you check that an IErrorInfo object is valid for the method you just called by using ISupportsErrorInfo? Do you check to see if this is a C#/.NET System.Exception (_Exception) and then use ToString to get back the nice stack trace?

Does the Windows API call return a HRESULT, an NT error code, a registry API error code, a BOOL with a corresponding GetLastError call to get the details or something else? Are you checking errno on all C API calls? Or the appropriate error checking call for the API you are working with? Do you realise that some Windows APIs have different return code behaviour on Win9x and NT+ (e.g. the GDI calls)?

Do you guard all your COM calls written in C++ to ensure that a C++ exception does not leak outside the COM boundary? Do you report said exception as a HRESULT and IErrorInfo that lets you track down the problem?

Do you ensure that an exception is not thrown from outside a destructor? A thread function? A Windows/GTK/Qt/... event handler? Across language boundaries?
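In .NET terms, guarding one such boundary looks roughly like the sketch below (DoWork is hypothetical; since .NET 2.0, an exception escaping a worker thread tears down the whole process):

    using System;
    using System.Threading;

    class BoundaryGuard
    {
        static void DoWork()
        {
            throw new InvalidOperationException("boom");  // hypothetical failure
        }

        static void Main()
        {
            var worker = new Thread(() =>
            {
                try
                {
                    DoWork();
                }
                catch (Exception ex)
                {
                    // Translate the error at the boundary instead of letting it
                    // escape; the same rule applies at destructors, event
                    // handlers and COM method boundaries.
                    Console.WriteLine("worker failed: " + ex.Message);
                }
            });
            worker.Start();
            worker.Join();
        }
    }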

How do you present an error to the user? Are you showing the exception message (e.g. displaying E_UNEXPECTED as "Catastrophic failure")?

It is not always possible to write perfect code. Developers will forget to check for access permissions to a file before writing to it, or will just ignore any errors. (Did you know that std::ofstream will not report an error if the file it is writing to is read-only, at least with the Microsoft C++ implementation on Windows and without any fancy std::ios flags?)

Have you ever dealt with infinite recursions that involve 3 or more functions? Do your COM/DBus/... calls check for/handle network failures? Do you report these in a friendly way to the user? Do you try to recover/reconnect in this case?

Knowing exactly what a system call does is also impossible, unless you have access to the source code for that particular configuration. The MSDN documentation is not reliable for Windows APIs, as it leaves a lot of the important stuff out. The POSIX documentation only covers the important/most common error cases.

Re:Error coding... (2, Insightful)

shutdown -p now (807394) | more than 4 years ago | (#31091270)

Check the permissions when you access a resource.

Careful, you can easily have a race condition there. Say, you're trying to open a file. You check for permissions before doing so, and find out that everything is fine. Meanwhile, another process in the system does `chmod a-r` on the file - and your following open() call fails, even though the security check just succeeded.
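A minimal C# sketch of the safer shape (the file name is hypothetical): skip the up-front check and handle the failure, which has to be handled anyway:

    using System;
    using System.IO;

    class ToctouDemo
    {
        static void Main()
        {
            string path = "data.txt";

            // Racy: if (WeHaveReadPermission(path)) { open it } -- the permissions
            // can change between the check and the open. Instead, just attempt
            // the operation and handle the failure.
            try
            {
                using (var reader = new StreamReader(path))
                {
                    Console.WriteLine(reader.ReadLine());
                }
            }
            catch (UnauthorizedAccessException)
            {
                Console.WriteLine("lost access between check and use, or never had it");
            }
            catch (IOException ex)
            {
                Console.WriteLine("open/read failed: " + ex.Message);
            }
        }
    }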

Re:Error coding... (0)

Anonymous Coward | more than 4 years ago | (#31092382)

1. Know exactly what the system call does before you use it.
2. Check the return value of every one.
3. Check the permissions when you access a resource.
4. Blocking calls are a necessary evil. Putting them in the main loop is not.
5. Always check a pointer before you use it.
5a. ...even if it is a return from a system call that never fails.

Catching errors, which is all that you've listed, is so easy that any moron can do it. Error handling, i.e. knowing what to do with them afterwards, is the hard part.

(Displaying an error and terminating the program is almost always the wrong answer.)

Re:Error coding... (1)

LordLucless (582312) | more than 4 years ago | (#31092480)

(Displaying an error and terminating the program is almost always the wrong answer.)

It might not be the best answer, but it's better than ignoring the error without fixing the cause, or trying to handle the error transparently and failing. If in doubt, terminate with a meaningful error. That way, if the error condition is ever triggered (hopefully in testing), it's easy to find and diagnose.

Re:Error coding... (0)

Anonymous Coward | more than 4 years ago | (#31094822)

It might not be the best answer, but it's better than ignoring the error without fixing the cause, or trying to handle the error transparently and failing. If in doubt, terminate with a meaningful error. That way, if the error condition is ever triggered (hopefully in testing), it's easy to find and diagnose.

Give me a break, it's not even a good answer. Does this quote sound familiar?

I was writing paper on the PC, and it was, like, beep, beep, beep, beep, beep, beep, and then, like, half of my paper was gone. And I was, like... heh? It devoured... my paper. It was a really good paper. And then I had to write it again and I had to do it fast, so it wasn't as good. It's kind of... a bummer.

It's even wronger in the case of transaction processing systems, but let's not go there.

Re:Error coding... (1)

LordLucless (582312) | more than 4 years ago | (#31094924)

It sort of sounded familiar before auto-save became a standard feature on pretty much all word processors. Failing (relatively) gracefully and notifying the user is better than continuing along with corrupt data, even in transaction processing systems. Of course, by the time something gets to production, it (ideally) shouldn't even encounter its error states.

Related: Working Effectively with Legacy Code (2, Interesting)

noidentity (188756) | more than 4 years ago | (#31090006)

I've read several unit testing books recently, and another I found somewhat useful is Michael Feathers' Working Effectively with Legacy Code [slashdot.org]. It has all sorts of techniques for testing legacy code, i.e. code that wasn't designed for testability and that you want to modify as little as possible. So he gets into techniques like putting a local header file in place of the normal one for some class the code uses, so that you can write a replacement class (mock) that behaves in a way that better exercises the code. Unfortunately, Feathers' book is also somewhat tiring to read, due to a verbose writing style and rough editing, but I don't know of anything better.
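In C#, where there are no headers to swap, the closest analogue Feathers describes is the subclass-and-override seam; a rough sketch with invented class names:

    using System;

    // Legacy class: Process() contains logic worth testing, but SendAlert()
    // talks to the outside world.
    class LegacyNotifier
    {
        public void Process(string message)
        {
            // ...legacy logic we want under test...
            SendAlert(message.ToUpper());
        }

        // Made virtual so a test can replace just this one dependency.
        protected virtual void SendAlert(string message)
        {
            throw new InvalidOperationException("would page production ops!");
        }
    }

    // Test double: overrides the awkward dependency and records the call.
    class TestableNotifier : LegacyNotifier
    {
        public string LastAlert;
        protected override void SendAlert(string message) { LastAlert = message; }
    }

    class SeamDemo
    {
        static void Main()
        {
            var notifier = new TestableNotifier();
            notifier.Process("disk full");
            Console.WriteLine(notifier.LastAlert == "DISK FULL" ? "PASS" : "FAIL");
        }
    }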

Good news for you all... (-1, Troll)

Anonymous Coward | more than 4 years ago | (#31090060)

You're all at increased risk for heart disease! :-) Bye bye!

http://uk.reuters.com/article/idUKTRE61900L20100210 [reuters.com]

Lol at backward wielding sword samurai (0)

Anonymous Coward | more than 4 years ago | (#31090118)

Lol at backward wielding sword samurai..

tl;dr (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#31090276)

tl;dr

Unit testing is not a silver bullet (3, Interesting)

CxDoo (918501) | more than 4 years ago | (#31090310)

I work on distributed real-time software (financial industry) and can tell you that unit tests for components I write are either

1. trivial to write, therefore useless
2. impossible to write, therefore useless

I find full logging and reliable time synchronization both easier to implement and more useful for tracking down bugs and/or design errors in the environment I deal with than unit testing.

tl;dr - check return values, catch exceptions and dump them in your logs (and use state machines so you know exactly where you were, and so on...)

Re:Unit testing is not a silver bullet (2, Funny)

chromatic (9471) | more than 4 years ago | (#31090540)

Fortunately, reliable software is not a werewolf.

Re:Unit testing is not a silver bullet (1)

ClosedSource (238333) | more than 4 years ago | (#31090642)

A lot of the methodologies were designed by people who have experience only in writing MOR (Middle Of the Road) code and in many cases haven't written any production code in years.

So it's not surprising that they're a bad fit for most specialty projects.

Re:Unit testing is not a silver bullet (4, Informative)

lena_10326 (1100441) | more than 4 years ago | (#31090678)

Unit tests should be trivial for the majority of classes. Good OO design will cause many of your classes to be single-purpose and simplistic, therefore the unit tests will also be simplistic. That's the point of OOD (or even modular design)--breaking down complex problems into many simpler problems*.

Maybe you should consider that unit testing is not just for validating the current set of objects but also for validating that future revisions do not break compatibility. In other words, it makes regression testing possible, or at least easier, with automation.

Writing the unit tests also serves to prove to your teammates that you've thought about boundary conditions and logic errors. When you're forced to think through them in a structured way, you're in a better position to catch bugs while writing the unit tests. Many times you'll find them before even executing the test code.
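For instance, a single-purpose class and its correspondingly simple tests might look like this hypothetical NUnit sketch:

    using System;
    using NUnit.Framework;

    // A deliberately single-purpose class: trivial to test exhaustively.
    public static class PriceClamp
    {
        public static decimal Clamp(decimal price, decimal min, decimal max)
        {
            if (min > max) throw new ArgumentException("min > max");
            return Math.Min(Math.Max(price, min), max);
        }
    }

    [TestFixture]
    public class PriceClampTests
    {
        [Test]
        public void ValueInsideRangeIsUnchanged()
        {
            Assert.AreEqual(5m, PriceClamp.Clamp(5m, 0m, 10m));
        }

        [Test]
        public void BoundariesClampInclusively()  // the boundary conditions in question
        {
            Assert.AreEqual(0m, PriceClamp.Clamp(-1m, 0m, 10m));
            Assert.AreEqual(10m, PriceClamp.Clamp(11m, 0m, 10m));
        }

        [Test]
        public void InvertedRangeIsRejected()
        {
            Assert.Throws<ArgumentException>(() => PriceClamp.Clamp(5m, 10m, 0m));
        }
    }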

Note: If anyone responds with something along the lines of "complex problems cannot always be simplified" I will literally punch you--repeatedly.

Re:Unit testing is not a silver bullet (0)

Anonymous Coward | more than 4 years ago | (#31092114)

Have you ever worked in an industry where all the software is real-time (e.g. avionics, or even video games)? Thought not.

Punch away.

Re:Unit testing is not a silver bullet (1)

iangoldby (552781) | more than 4 years ago | (#31097072)

simplistic

I think you mean simple, or perhaps very simple. Simplistic means too simple, or over-simplified. If your unit tests are simplistic then they are not adequate for the job.

Re:Unit testing is not a silver bullet (1)

Rathkan (1732572) | more than 4 years ago | (#31097736)

I don't think you understand the problem the parent described. Unit tests can't help you diagnose multi-threaded and time-related issues. When you have a bug that only reproduces in the wild once every 3 months, just saying "unit tests" won't let you reproduce the bug, fix it, and add a test that reproduces it to the regression suite. At best you need to create the tools to reproduce the bug yourself, and with certain systems and certain bugs, that can be far from trivial to develop. Multi-threaded and real-time systems are a whole different kettle of fish from basic class design and testing.

Re:Unit testing is not a silver bullet (1)

lena_10326 (1100441) | more than 4 years ago | (#31097988)

Of course I understand what the parent poster said. I've worked on real-time, kernel-based, clustered applications. First off, you're assuming unit tests will find all bugs. Bad assumption. It's just another tool in the toolbox; it's not perfect. Nothing is. Second, if someone says they cannot test a real-time application, then they're not building their scaffolding code right. Third, it is true that there are synchronization and hardware scenarios that cannot be tested with unit tests, because the scenario only exists in an integrated environment or when there are multiple nodes. But no one is saying you're supposed to perform integration testing with unit testing; that would represent a total misunderstanding of what a unit test is and what it's used for. Using the real-time excuse to avoid testing the 60% to 90% of your code that generally consists of primitives, containers, and general-purpose code is just an excuse to be a lazy, sloppy programmer.

Re:Unit testing is not a silver bullet (1)

scamper_22 (1073470) | more than 4 years ago | (#31091624)

I'll say this much.

Unit testing has two big uses.
1. It formalizes the testing you do anyway, and keeps that test around. Just today, I had to write a tricky regexp to split some logging apart. I used the unit test just to formalize the testing I'd do anyway (feed in some dummy strings) to verify it works; see the sketch at the end of this comment.

2. It forces you to write better code.

2 is a bit flaky... if someone writes crappy code, unit testing isn't going to make them a better coder. Yet, it does keep me in check. There are countless times you just want to rush in some code that seems to work well. Being part of the unit-testing mindset, you are forced to abstract away file access and database access to support mock objects or stubbing. It forces me to write more object-oriented code.

I really think that 2 is one of the reasons unit testing leads to better code. It's not the actual testing; it's that the testing forces you to write your code in a certain way. It won't make a bad developer a good one, but it will make a good one more consistent.
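For point 1, the formalized check might look like this hypothetical NUnit sketch (the pattern is a stand-in, not the poster's actual regexp):

    using NUnit.Framework;
    using System.Text.RegularExpressions;

    [TestFixture]
    public class LogSplitterTests
    {
        // Stand-in for the "tricky regexp": split "LEVEL: message" lines apart.
        static readonly Regex LogLine =
            new Regex(@"^(?<level>[A-Z]+):\s+(?<message>.*)$");

        [Test]
        public void SplitsAWellFormedLine()  // the dummy-string check, kept forever
        {
            Match m = LogLine.Match("ERROR: disk on fire");
            Assert.IsTrue(m.Success);
            Assert.AreEqual("ERROR", m.Groups["level"].Value);
            Assert.AreEqual("disk on fire", m.Groups["message"].Value);
        }

        [Test]
        public void RejectsAGarbageLine()
        {
            Assert.IsFalse(LogLine.Match("no separator here").Success);
        }
    }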

Re:Unit testing is not a silver bullet (2, Insightful)

geekoid (135745) | more than 4 years ago | (#31093212)

That just means you are horrible at your job and that you think no one else will ever work on it.

"I find full logging and reliable time synchronization both easier to implement and more useful in tracking bugs and / or design errors in environment I deal with than unit testing."
THAT is a separate issue, that you should ALSO do.

I suspect you have no clue why you should be designing and using unit tests.

Re:Unit testing is not a silver bullet (1)

CxDoo (918501) | more than 4 years ago | (#31098268)

So the answer to my fairly self-evident assertion that unit tests are not useful everywhere is that I am at best ignorant, and at worst an idiot?

1. 'Units' we deal with are very simple. Their relationships are not.
2. A good portion of 'units' we deal with are not written by us. Sometimes we get a usable specification, sometimes not. Code never. These are black boxes with often unpredictable behavior.
3. We work in real time, so we don't call methods, we send messages. What should I mock and test there - the message delivery framework?
4. You distrust a 3rd party 'unit' by default (see 2), ergo you check and fully log whatever you receive on each perimeter, ergo why the fuck do I need to write additional tests?
5. Your assumption that we never did incorporate unit/integration testing in a project is false.
6. I work with highly skilled & paid people and am not going to force them to do pointless work just for the sake of it. We have trouble dealing with race conditions, deadlocks, loss of time sync and the inconsistencies it causes, ways to recover from hardware failures, and bugs and missing features in 3rd party code we can't fix - NOT with basic programming literacy.

Bottom line is that I didn't say 'testing is for lesser men'; I said 'unit testing is not a universal solution'. Software was tested before the TDD fad caught on, and it will be tested after the fad passes. I sure as hell am not going to base my designs on a set of tests that will identify 0.1% of the issues I am dealing with.

As someone mentioned, the point of testing is not to see if your system works in scenarios you envisioned, but to royally fuck it up in ways you are not even aware of.
You don't do this by writing a bunch of asserts you can already see will be satisfied. You do this by setting up the whole environment and letting everybody and their mother try to fuck it up, including yourself, using any means possible, from sending malformed input to pulling out network cables & power plugs.

tl;dr - Fuck off faggot.

Re:Unit testing is not a silver bullet (1)

Civil_Disobedient (261825) | more than 4 years ago | (#31098596)

1. trivial to write, therefore useless
2. impossible to write, therefore useless

This has been my experience as well.

Obligatory (1)

TXFRATBoy (954419) | more than 4 years ago | (#31090492)

This is Slashdot...the only unit testing is by manual means...ZING!

Just wondering... (0)

Anonymous Coward | more than 4 years ago | (#31090532)

What's .NET?

Re:Just wondering... (1)

b4dc0d3r (1268512) | more than 4 years ago | (#31091298)

Microsoft's version of .ORG - it's still in the first of the 3 E's.

Pet peeve - the purpose of testing (2, Insightful)

Zoxed (676559) | more than 4 years ago | (#31090840)

Rule #1 of all testing: The purpose of testing is not to prove that the code works: the purpose of testing is to *try to break* the program.
(A good tester is Evil: extremes of values, try to get it to divide by 0 etc.)
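An evil test in that spirit might look like the following hypothetical C#/NUnit sketch, where the deliberately naive Mean() is the victim:

    using System;
    using NUnit.Framework;

    public static class Stats
    {
        // Deliberately naive "working" code for the evil tester to attack.
        public static double Mean(int[] values)
        {
            int sum = 0;
            foreach (int v in values) sum += v;     // silently overflows
            return (double)sum / values.Length;     // divides by zero when empty
        }
    }

    [TestFixture]
    public class EvilTests
    {
        [Test]
        public void EmptyInputIsTheInterestingCase()
        {
            // 0.0 / 0 is NaN -- is that what callers expect? The test forces the question.
            Assert.IsTrue(double.IsNaN(Stats.Mean(new int[0])));
        }

        [Test]
        public void ExtremeValuesExposeTheOverflow()
        {
            // Two int.MaxValue inputs wrap the int accumulator negative; this
            // documents the bug an optimistic test suite would never see.
            Assert.Less(Stats.Mean(new[] { int.MaxValue, int.MaxValue }), 0.0);
        }
    }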

Re:Pet peeve - the purpose of testing (1)

theguru (70699) | more than 4 years ago | (#31092094)

This may be the purpose of manual testing, but the idea of having high code coverage with automated testing is to prevent regressions, minimize time to release, and even act as documentation via example usage.

It's less about release N, and more about release N+1.

Re:Pet peeve - the purpose of testing (1)

PostPhil (739179) | more than 4 years ago | (#31093258)

Uh no, it's to demonstrate that the code "works". The problem here is what it means "to work". Part of the usefulness of TDD is that you might not fully understand what it means "to work" yet, and the tests help you flesh that out.

Let me clarify, so you don't think I'm 100% ditching what you're saying rather than stating it a different way. A test suite will tend to have BOTH tests for what the correct behavior *is* and tests for what the correct behavior *is not*. In other words, what you're doing is defining the BOUNDARIES between correct and incorrect behavior. You're right in the sense that if your *strategy* is to write only *optimistic* tests (i.e. "proving that it works"), you'll miss subtle areas where the behavior isn't fully clarified (i.e. corner cases).

But here's the problem: for absolutely anything in the universe, there is an INFINITE number of things something *is not*, but only a finite number of things something *is*. I've seen people go too crazy with using tests as a way of type-checking everything where smarter data types would have been a better choice, or performing a hundred "this isn't what I want" tests that could have been handled with a single "this IS what I want" test. My point is that you're supposed to program for the correct case, not design as if you always expect everything to go wrong. Write for the correct case, test the correct cases FIRST, then test the EXCEPTIONAL cases, and write handling code for the things that are exceptional. Don't write an infinite test suite of what something is not.

CONCLUSION: Write the most EFFECTIVE tests you can, the ones that cover the most ground. Don't write *pointless* tests that you'll have to maintain later when a better test exists. If a test covers a lot of logical ground by defining the boundaries of what something *is not*, then write that test. If it covers a lot of ground by defining what something *is*, write that test.

Does he back up anything he says (3, Insightful)

TheCycoONE (913189) | more than 4 years ago | (#31091002)

I was at Dev Days in Toronto a few months ago, and one of the speakers brought up a very good point about software engineering methodologies. He said that despite all the literature written on them, and the huge amount of money involved, there have been very few good studies on the effectiveness of the various techniques. He went on to challenge the effectiveness of unit testing and 'agile development.' The only methodology for which he had found studies demonstrating significant effectiveness was peer code review.

This brings me to my question. Does this book say anything concrete with citations to back it up, or is it all the opinion of one person?

Re:Does he back up anything he says (4, Interesting)

Cederic (9623) | more than 4 years ago | (#31092632)

Does your speaker have anything concrete with citations to back his assertions up, or is he happily dismissing one of the few genuine advances in software engineering in the last decade?

we found that the code developed using a test-driven development practice showed, during functional verification and regression tests, approximately 40% fewer defects than a baseline prior product developed in a more traditional fashion. The productivity of the team was not impacted by the additional focus on producing automated test cases. This test suite will aid in future enhancements and maintenance of this code.

-- http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.129.7992&rep=rep1&type=pdf [psu.edu]

A Spring 2003 experiment examines the claims that test-driven development or test-first programming improves software quality and programmer confidence. The results indicate support for these claims

-- http://portal.acm.org/citation.cfm?id=949421 [acm.org]

Experimental results, subject to external validity concerns, tend to indicate that TDD programmers produce higher quality code because they passed 18% more functional black-box test cases.

-- http://qwer.org/tdd-study [qwer.org]

We observed a significant increase in quality of the code (greater than two times) for projects developed using TDD compared to similar projects developed in the same organization in a non-TDD fashion.

-- http://portal.acm.org/citation.cfm?id=1159733.1159787 [acm.org]

My apologies for the rough and ready citations; I only picked the ones I could find on the first fucking page of Google search results.

Re:Does he back up anything he says (1)

TheCycoONE (913189) | more than 4 years ago | (#31098874)

He did, and I wish I could find his slides to better present what he was saying. I believe he said there was a lack of the scientifically rigorous studies that would be necessary to adopt a practice in other disciplines (e.g. business). Your first citation, for example, is a study of fewer than two dozen people. The second I can't read, but in general you'll notice that while the studies reach the same conclusions, the actual numbers vary quite wildly, which casts doubt on both the methods and the conclusions.

Re:Does he back up anything he says (1)

TheCycoONE (913189) | more than 4 years ago | (#31098964)

Actually, I found his slides: http://www.slideshare.net/gvwilson [slideshare.net] The slides themselves don't touch on unit testing, and should be combined with his talk. I never meant to refute unit testing in the first place, though; I just wanted to make sure, before I spent the time and money going through the above book, that it provided empirical evidence that his methods were better.

Re:Does he back up anything he says (1)

Cederic (9623) | more than 4 years ago | (#31101834)

Ignore the book for a moment, and read the thoughts of leading software engineers.

What do Kent Beck, Martin Fowler, Alistair Cockburn, Erich Gamma, Steve McConnell, Scott Ambler, Rob Martin, Andy Hunt and Dave Thomas all say? Hit Google, do some 'free' exploration and reading.

Then read up on people's experiences with these things. There are various mailing lists where people have tried these (and other) techniques and reported their experiences.

You can reach the point fairly quickly where you can decide whether you want to give it a go or not. If you do, buy a book (or two, or three - Michael Feathers and Kent Beck have excellent books on this subject matter too) to help you get up to speed, but mainly, give it a go. Force yourself into the high-discipline "do it properly" frame of mind, give it a go for 2-3 months (ideally a full project) then decide whether it works for you.

If it does, keep doing it. If not, don't.

All good software engineers try that, continually, on anything that sounds interesting and might help. Sometimes a whole new process is adopted, but it's the little changes (unit testing, automated builds, design patterns... hell, go back far enough and source code control was once new and scary) that add incremental value and turn programmers into software engineers who can reliably and consistently produce efficient, maintainable, working software.

Re:Does he back up anything he says (2, Interesting)

Aladrin (926209) | more than 4 years ago | (#31093012)

I have never seen any scientific studies on it, but I use Unit Testing as a tool to help me code and debug better and it works a LOT better than anything I tried prior to that. And when I break some of my old code, I know exactly what's breaking with just a glance.

Also, I have occasionally been charged with making massive changes to an existing system, and unit testing is the only thing I know of that lets me guarantee the code functions exactly the same before and after for existing uses.
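That before-and-after guarantee is essentially what Michael Feathers calls a characterization test: pin down what the code does today, right or wrong, and rerun the pin after the change. A minimal hypothetical sketch:

    using NUnit.Framework;

    public static class Legacy
    {
        // Stand-in for existing production code about to undergo massive change.
        public static string FormatAccount(int id)
        {
            return string.Format("AC-{0:D6}", id);
        }
    }

    [TestFixture]
    public class CharacterizationTests
    {
        [Test]
        public void FormatsAccountNumbersTheWayItAlwaysHas()
        {
            // The expected value was captured from current behavior, not from a spec.
            Assert.AreEqual("AC-000042", Legacy.FormatAccount(42));
        }
    }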

tl;dr - I don't need a scientific study to tell me a tool is working well for me.

Re:Does he back up anything he says (1)

wrook (134116) | more than 4 years ago | (#31093892)

The problem with measuring the effectiveness of programming techniques is that it is very difficult. It is quite valid to say that there are few studies to back up the effectiveness of various "agile" techniques. But I will point out that this is true of every programming technique.

The problem with measuring this is that it is impossible to get a baseline. There is a huge difference in productivity based simply on individual talent. This has been shown. So you will need thousands of programmers to test any theory. Problems are also extremely variable, so it is difficult to measure productivity across different problems. You would need to solve hundreds of non-trivial problems to test your theories. Finally, objective code quality is an unknown. Existing metrics are well known to be bad at estimating real quality. Solving any one of these measurement problems would be enough to get you a PhD.

If someone could find a good way to test different techniques and provide statistically significant results, they would be rich beyond the dreams of avarice. You use a variety of different techniques, which I assume you feel are more effective than (or at least as effective as) others. Check the literature. Do you have any proof, other than your own (or others') anecdotal experience, to back up your opinions?

Unfortunately, with the current state of affairs, we are very vulnerable to methodology snake oil salesmen. Everybody wants the cheap cure-all. Every popular methodology has more than its fair share of such leeches. As soon as it becomes a buzz, somebody wants to make a buck off it. The truth is that there are a lot of individual techniques that are effective, but you are going to have to put effort in to evaluate them yourself. Try to keep an open mind and keep several hours a week available for training and exploring these possibilities. You won't be sorry.

Re:Does he back up anything he says (0)

Anonymous Coward | more than 4 years ago | (#31095774)

Microsoft has a very long-standing, proven commitment to not caring about quality. Perhaps when attending such marketing events it would be advisable to question the speaker first, rather than what's spoken?

Cover (1)

dbialac (320955) | more than 4 years ago | (#31091176)

Somebody likes their Anime a bit too much.

shi7! (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#31091690)

thing for The BSD has always

These are not the units you are looking for (1)

galego (110613) | more than 4 years ago | (#31094804)

[waves hand in front of face]

a.out (0)

Anonymous Coward | more than 4 years ago | (#31095182)

A book using .NET is being referred to as AOUT? [wikipedia.org]

What has the world come to...

Don't lose sight of the goal (1)

CyberLife (63954) | more than 4 years ago | (#31103380)

I think many are getting caught up in terminology and forgetting (or perhaps they never knew) that the overall general purpose of any testing is to eliminate assumptions. Do the requirements really reflect what the customer wants? Does the system really meet the specs? Does component X really perform its job? Or are these things just assumed to be true? Testing gives one the ability to find out.

Now the decision of what to test and what to ignore is an important one, and ultimately it comes down to recognizing one's assumptions. What is one willing to assume, and what must they really know for certain?

I overcame the problems of unit testing (1)

crovira (10242) | more than 4 years ago | (#31103718)

and unit specification early in my career, with a documentation technique that let me specify both the ordering and the limits of the API (whether human or systemic components were involved).

My success and income over the years derived from the work done in 1983-84, printed in Computer Language Magazine in 1990, and released into the wild in 2007.

Check out http://media.libsyn.com/media/msb/msb-0195_Rovira_Diagrams_PDF_Test.pdf [libsyn.com]
