
Writing Unit Tests for Existing Code?

Cliff posted more than 9 years ago | from the chickens-and-eggs dept.

Programming 86

out-of-order asks: "I recently became a member of a large software organization which has placed me in the role of preparing the Unit Test effort for a component of software. Problem is that everything that I've read about Unit Testing pertains to 'test-driven' design, writing test cases first, etc. What if the opposite situation is true? This organization was writing code before I walked in the door and now I need to test it. What methodology is there for writing test cases for code that already exists?"


86 comments


Treat it like the code isn't written (0)

Anonymous Coward | more than 9 years ago | (#12451386)

Hopefully, you've got well-defined interfaces to the various modules/objects/subroutines within the code (if not, what exactly are you intending to apply Unit Tests to?). Now, using only those specifications, design the tests you'd like the code to pass, exactly as if the code had not been written.

Now test it.
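For illustration, a minimal JUnit sketch of that spec-first approach. All names here are hypothetical, and a stub implementation is included only so the sketch compiles; the real point is that the assertions come from an assumed documented contract ("orders under $100 get no discount, orders of $100 or more get 10% off"), not from reading the implementation:

    import junit.framework.TestCase;

    public class DiscountCalculatorTest extends TestCase {

        static final class DiscountCalculator {  // stand-in for the real unit under test
            double discountFor(double orderTotal) {
                return orderTotal >= 100.0 ? orderTotal * 0.10 : 0.0;
            }
        }

        public void testNoDiscountBelowThreshold() {
            // contract: orders under $100 get no discount
            assertEquals(0.0, new DiscountCalculator().discountFor(99.99), 0.001);
        }

        public void testTenPercentDiscountAtThreshold() {
            // contract: orders of $100 or more get 10% off
            assertEquals(10.0, new DiscountCalculator().discountFor(100.0), 0.001);
        }
    }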

Re:Treat it like the code isn't written (1)

lilmouse (310335) | more than 9 years ago | (#12480159)

Hopefully, you've got well-defined interfaces to the various modules/objects/subroutines within the code


<Maniacal Laughter>

Unit test the bugs you need to fix (2, Insightful)

aurelianito (684162) | more than 9 years ago | (#12451446)

Before you actually fix it.
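A minimal sketch of that bug-first order of operations, with a hypothetical bug and hypothetical names (the stub is shown already fixed, so the sketch compiles and passes; while the bug was live, the test failed):

    import junit.framework.TestCase;

    public class GreeterBugTest extends TestCase {

        static final class Greeter {  // stand-in for the unit under test
            static String greet(String name) {
                // Hypothetical bug report #1234: this used to call name.trim()
                // unconditionally, throwing NullPointerException for null names.
                if (name == null) {
                    return "Hello, guest!";
                }
                return "Hello, " + name.trim() + "!";
            }
        }

        // Written first, while the bug was still in the code: it failed with
        // an NPE then, passes now, and guards against the bug coming back.
        public void testNullNameFallsBackToGuest() {
            assertEquals("Hello, guest!", Greeter.greet(null));
        }
    }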

Re:Unit test the bugs you need to fix (2, Informative)

s100w (801744) | more than 9 years ago | (#12451748)

Based on experience with existing projects, this is the way to go -- write unit tests for bug fixes and new features. It's an overwhelming, time-consuming job to write unit tests for a big mass of existing code. I also find that once I get going, I end up throwing away or heavily refactoring a lot of legacy code anyway. So if I had written tests, I'd be throwing them out, too. End-to-end system tests, even superficial ones, have more value. I will say that sometimes writing tests can help you understand messy old code. You might want to check out "Working Effectively With Legacy Code" by Michael C. Feathers. It's got some good stuff.

Re:Unit test the bugs you need to fix (2, Insightful)

Dan-DAFC (545776) | more than 9 years ago | (#12452213)

I also find that once I get going, I end up throwing away or heavily refactoring a lot of legacy code anyway. So if I had written tests, I'd be throwing them out, too.

If you are heavily refactoring, it's probably worth putting in the effort to write the tests beforehand. Otherwise how can you be confident that your refactoring hasn't broken anything?

Re:Unit test the bugs you need to fix (1)

s100w (801744) | more than 9 years ago | (#12452344)

Good point. But there's not much value in writing the unit tests until you are ready to start refactoring that bit of code. It's hard to predict ahead of time what you will work on.

And sometimes the dependencies and interfaces are so bad, you're better off building system/integration test coverage to support your refactorings.

Re:Unit test the bugs you need to fix (1)

Anonymous Brave Guy (457657) | more than 9 years ago | (#12465573)

I also find that once I get going, I end up throwing away or heavily refactoring a lot of legacy code anyway. So if I had written tests, I'd be throwing them out, too.

If your unit tests are testing observable behaviour as they should be, and your refactoring doesn't change observable behaviour at all as it shouldn't, then you don't need to throw away the tests. Similarly, if you're throwing away old code then presumably that's because you have new code that produces the same required behaviour, and therefore the tests for that behaviour will still be equally valid. If you find yourself left with irrelevant tests in either case, then probably your tests weren't very well written in the first place...

Re:Unit test the bugs you need to fix (1)

s100w (801744) | more than 9 years ago | (#12484347)

Well, yes, poorly-written or not useful. I did a bad job of explaining myself. It's true that the best approach is to write tests around the legacy code.

However, my practical experience is that it's time-consuming to build unit tests for bad legacy code. So I tend to invest my time in broader-scope system/integration/functional tests for existing code.

I've also found that bad code tends to have many redundancies, unnecessary interfaces, "we'll need it someday" features, and "wouldn't it be cool ..." features. Once I understand the code and have system tests, I'm going to remove that code anyway, so why bother to write unit tests for it?

code coverage profiling (1)

wscott (20864) | more than 9 years ago | (#12451456)

After you have a testing infrastructure written and have a couple of tests, go learn about GCC code coverage profiling (assuming your language is supported) and have your tools generate coverage information. Then start writing tests to fill the holes in your coverage. It will take forever.

Also require all new code to have matching tests, and set up automatic checks to slap developers who add code that doesn't get tested.

Good luck.

Test First is not a Test Strategy (3, Interesting)

okock (701281) | more than 9 years ago | (#12451476)

I don't see TestFirst as a Test Strategy, but as a design technique. Writing Tests first forces you to think differently about what you want to write.
This forces you to write testable code - writing tests afterwards does not force you to do that.

Of course, having the tests available later proves valuable for testing your application, but the tests' main purpose is to lead you to a testable design.

You'll most likely experience severe difficulties in adding Unit Tests to previously untested code. It might be easier to add acceptance tests (e.g. high-level scripts that exercise the application), especially if you want to cover more than small partitions of the application quickly.

Re:Test First is not a Test Strategy (1)

crmartin (98227) | more than 9 years ago | (#12452876)

I'd move it back even a little further, and call TDD a specification strategy.

A specification is just a statement of a testable way to tell if a program is behaving correctly or not; you can think of it as specifying a characteristic function for the set of input and output pairs of the program.

In set theory (and similar things) there's a notion of two distinct kinds of definitions for a set: the intensional and the extensional.

An intensional definition for a set specifies the characteristic function, like "the set of all odd natural numbers" or "the set of all x such that x mod 2 = 1".

A formal specification for a program or function is an intensional definition. An extensional definition is given by example: instead of "the set of all odd naturals", "{1, 3, 5, 7, ...}". You have to infer the characteristic function, and of course that can lead to errors.

A test case in TDD is an example of correct behavior for the program: assertTrue(answer(35)==54). Hopefully, a number of interesting test cases will make it clear that you really are doing what's wanted, but you (the reader) must infer this from other knowledge. So effectively a TDD case is part of an extensional definition.
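To make the contrast concrete, here is a small JUnit sketch with a hypothetical Numbers.isOdd() (stubbed so the example compiles). The first test is extensional -- correct behavior stated by concrete examples, as in TDD; the second is closer to intensional, asserting the characteristic function itself over a range of inputs:

    import junit.framework.TestCase;

    public class OddnessTest extends TestCase {

        static final class Numbers {  // stand-in for the unit under test
            static boolean isOdd(int x) {
                return x % 2 == 1;
            }
        }

        // Extensional: definition by example; the reader infers the rule.
        public void testOddByExample() {
            assertTrue(Numbers.isOdd(1));
            assertTrue(Numbers.isOdd(35));
            assertFalse(Numbers.isOdd(4));
        }

        // Closer to intensional: assert the characteristic function directly.
        public void testOddMatchesCharacteristicFunction() {
            for (int x = 0; x < 1000; x++) {
                assertEquals(x % 2 == 1, Numbers.isOdd(x));
            }
        }
    }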

Fake Ignorance (3, Interesting)

samael (12612) | more than 9 years ago | (#12451682)

Write the tests as if the code _hadn't_ been written. Get the requirements and then write the tests from them.

Then, if the code fails the tests, you'll have to discover whether the requirements are wrong or the code is at fault. But at least you'll have something to start from - and you'll probably find some bugs they missed.

That's nice but... (2, Interesting)

TheLink (130905) | more than 9 years ago | (#12453384)

Often there aren't detailed enough requirements.

With requirements at a typical business level, you could have any number of totally different systems that meet them (most of them better, given hindsight). And often that level is as much as you're going to get once the original team has left.

Anyway, recreating requirements at a detailed technical level could be a waste of time - because some module could be required to do something stupid by another module, and once you fix things all round, that requirement will be thrown out.

At one of my workplaces I made major changes in behaviour of some modules - e.g. instead of N^2 it's just N. And some things I just threw out because they were redundant.

I suppose, you could rewrite the requirements (after figuring things out), and then rewrite the code. But that's quite different from _getting_ the requirements.

Re:That's nice but... (1)

Evil Pete (73279) | more than 9 years ago | (#12473604)

Not only are there usually no written specs, but the code will also have undocumented, obscure fixes for particular problems. If you don't know what the problem is, you can't test for it. It may be that under special circumstances, for a particular customer, certain things need to be done - so ripping out that code to conform to some half-baked spec you just thought of will work for most customers and break for that one. This is so common in legacy code it's almost a law. Joel had a bit to say on this in his essay on why you shouldn't rewrite code (oh gees, do I have to give a link... ok, for the google impaired, here [joelonsoftware.com]).

Re:Fake Ignorance (0)

Anonymous Coward | more than 9 years ago | (#12462017)

Cool, now I can update my stupid "witty quote" chain e-mail!

Dance like nobody's watching, love like you've never been hurt, test like nobody's coding.

Bonus educational nugget - adding commas changes the meaning of a sentence! In this case, the Microsoft motto.
"Test, like, nobody's coding."

do the simplest thing - write tests as u need them (4, Informative)

djfuzz (260522) | more than 9 years ago | (#12451701)

You have to write tests as you change features. Let's say you have a simple change, tweaking a short method to do something different. First you write a test for its existing functionality and make sure it passes. Then add a test for the new functionality, run it, and watch it fail. Make your change and make the test pass. This would also be the point where you can do some refactoring or clean-up, or extend the test to catch boundary conditions.
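A minimal sketch of that sequence, with a hypothetical PriceFormatter (the stub is the pre-change version, so it compiles; the first test passes, the second fails until you make the change):

    import junit.framework.TestCase;

    public class PriceFormatterTest extends TestCase {

        static final class PriceFormatter {  // stand-in, pre-change version
            static String format(double price) {
                long cents = Math.round(price * 100.0);
                return "$" + (cents / 100) + "."
                        + (cents % 100 < 10 ? "0" : "") + (cents % 100);
            }
        }

        // Step 1: characterize the existing behavior; watch it pass.
        public void testExistingFormatUsesTwoDecimals() {
            assertEquals("$5.00", PriceFormatter.format(5.0));
        }

        // Step 2: specify the new behavior; watch it fail, then make the
        // change that turns it green.
        public void testNewFormatAddsThousandsSeparator() {
            assertEquals("$1,250.00", PriceFormatter.format(1250.0));
        }
    }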

With legacy code, you just have to start writing tests with the code as you go, writing tests for functionality that you need to understand or review. If you try to take x number of weeks to write test cases up front, you're doomed to fall behind and have obsolete tests when you are done.

Also, see Working Effectively With Legacy Code by Michael Feathers --> http://www.amazon.com/exec/obidos/tg/detail/-/0131177052/002-8698615-6720004?v=glance [amazon.com]

Re:do the simplest thing - write tests as u need t (1)

Sir Robin (9082) | more than 9 years ago | (#12455715)

I too have had good luck with Working Effectively With Legacy Code. Highly recommended.

White Box Testing Utilities (1)

_flan (156875) | more than 9 years ago | (#12451708)

We use McCabe (http://www.mccabe.com/ [mccabe.com] ) to point us to problem code. Running their tool gives you a good idea of where the complexity of an application lies and where you should focus your testing.

It works kinda like this: First the tool parses everything and generates a ton of metrics. This will point you to the complex modules of the application. Then it breaks down each function/method in the module into its possible execution paths and turns this into a graph. By looking at the graph, you can see what you need to do to get your test to follow a particular path.

McCabe also tells you how many test cases you'll need to completely test a given block of code. It assumes, however, that all of the tests are independent, which is not always the case.

On the down side, McCabe is big, expensive, and sometimes returns unreliable data. But it's a lot better than nothing.

If you're looking for free tools, look for things that calculate "cyclomatic complexity". This is a measure of the number of independent paths through a given block of code. Armed with that, you should be able to make some headway.
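As a toy illustration (hypothetical code): the method below has three decision points - the loop and the two ifs - for a cyclomatic complexity of 3 + 1 = 4, meaning at least four test cases are needed to cover its independent paths (empty array, negative value, value over the cap, ordinary value):

    public final class Stats {
        public static int clampAndSum(int[] values, int max) {
            int sum = 0;
            for (int i = 0; i < values.length; i++) {  // decision 1: the loop
                int v = values[i];
                if (v < 0) {                           // decision 2
                    continue;                          // ignore negatives
                }
                if (v > max) {                         // decision 3
                    v = max;                           // clamp to the ceiling
                }
                sum += v;
            }
            return sum;
        }
    }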

Good luck!

Start With the Documented Requirements (3, Insightful)

north.coaster (136450) | more than 9 years ago | (#12451720)

I spent several years managing a test team in a Fortune 100 company, and I have seen this situation many times (it's probably the norm, rather than the exception, in industry today).

Let the documented requirements for the code (or product) be your guide. Use those requirements to develop test cases, then design one or more tests that hit all of the test cases.

If there are no documented requirements, then you should ask yourself why you are working there. This situation usually leads to many arguments about what the code/product is really supposed to do, and you'll just become frustrated while you waste lots of time. It's not worth it.

No mod points but insightful (1)

marcus (1916) | more than 9 years ago | (#12451875)

As above, write your tests to the specs.

Run the tests and document the results.

Let someone else mod the specs ;-) if necessary.

If n.c's third paragraph applies, you have to find a managerial ally who will support you as you re-work the design process and local culture to be a bit more rigorous and disciplined. It will be tough, but it can also be rewarding.

Re:Start With the Documented Requirements (1)

rthought (68998) | more than 9 years ago | (#12452303)

For the most part, the parent has the right of it. Specifications should drive what is tested. If the documents don't say the software should do "foo", then don't bother testing whether or not it does "foo", even if everyone is saying that it can, that it does, and that it will do "foo". But the parent poster is a bit pie-in-the-sky in his last paragraph:
If there are no documented requirements, then you should ask yourself why you are working there. This situation usually leads to many arguments about what the code/product is really supposed to do, and you'll just become frustrated while you waste lots of time. It's not worth it.
Having no spec for code to be written is sadly becoming the norm these days. It would be nice to find and only work for companies that write specs for their projects, but if we all waited for that, there'd be a lot more unemployed coders/testers. If you're stuck in the case where there aren't specifications, you'll have to take things into your own hands. You can do one of several things, each of which is likely to work only partially, so you'll probably have to combine them all. Among those things are:
  • Ask the developers, or better yet the marketeers, to document what the software should do. You can even suggest that they don't have to do it all before you'll start writing tests from it.
  • Break down the software into functional units, and start writing the tests for those units. It'll help if you can get the developers/marketeers to review the tests, to be sure you're covering everything that the software does.
  • Just start hammering at the software.
Whatever you do, write down what you do. While the main function of testing is, indeed, to test the software, the best way to ensure that is to make sure that every time you test, you test the same things in the same way. If you write down what you do, then you can do it again on the next build. Repeatability is the key, along with showing people what you're doing.

Re:Start With the Documented Requirements (5, Informative)

dwpenney (608404) | more than 9 years ago | (#12452501)

Ok, I am confused by this. There is a distinct difference between System Test cases and Unit Test cases. If you are working from a design document detailing the requirements from a working Business or Systems Requirements document, and testing the items to make sure that requirements are met, you are performing a System Test - a test at a much higher level than Unit Testing.

At the Unit Test level you are checking the boundaries in the code itself to make sure that loops are exited correctly and logic is performed correctly. In essence a Unit Test is at a component level, intended to look inside the component and make sure that it operates correctly based on its very limited set of inputs and outputs. At a higher level, System Test cases look at how the component interacts with other components and whether this interaction meets the requirements.

Stupid example using web code: putting ^5$*1@ in a text search input box is a Unit Test case, making sure that the input parsing of this code will not barf on the input. System Test cases should not need to test this (with a proper procedure for unit testing) but should instead be focusing on whether the search results are in line with what was requested. These definitions limit and focus the level of your testing and the understanding of the question that is being asked.
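A minimal JUnit sketch of the unit-level half of that example, with hypothetical names (a stub parser is included so the sketch compiles). No web server or search backend is involved; the unit test only cares that hostile input survives parsing, while relevance of results stays a System Test concern:

    import junit.framework.TestCase;

    public class SearchQueryParserTest extends TestCase {

        static final class SearchQueryParser {  // stand-in for the unit
            String[] parse(String raw) {
                if (raw == null) {
                    return new String[0];
                }
                return raw.trim().split("\\s+");
            }
        }

        public void testGarbageInputDoesNotBarf() {
            String[] terms = new SearchQueryParser().parse("^5$*1@");
            assertNotNull(terms);
            assertEquals(1, terms.length);  // treated as one opaque term
        }
    }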

Re:Start With the Documented Requirements (0)

Anonymous Coward | more than 9 years ago | (#12462039)

Someone mod this up more! I'm sure it's old news to those of you who build tests as a matter of course, but I've never seen this explained quite so efficiently, and as a "test newbie" it's like liquid enlightenment.

Re:Start With the Documented Requirements (1)

TapeCutter (624760) | more than 9 years ago | (#12461941)

Exactly, you can't test "it" if you don't know wtf "it" is. Sadly, it does seem to be the norm. It is often an expectation placed on test teams that they "know" what "it" is and that when "it" is delivered, "it" will be fully tested. This normally leads to a situation where the test team tries to define the requirements after "it" is built, the whole thing gets tied up in an argument, and then it's released with nothing properly tested or defined. This gets even worse when "it" is sold to a powerful customer. Since nobody properly defined "it", the customer can simply claim every inconvenience is a bug, and the support nightmare will start to snowball.

The original question "how do I unit test..." implies that the person asking has little or no guide as to what "it" is. If I were in their shoes I would send it back to the boss and ask "what is it?".

Take a look at a book by Michael Feathers (1)

BlahBlech (526661) | more than 9 years ago | (#12451750)

There is a book that covers this subject well: Working Effectively with Legacy Code by Michael Feathers.

http://www.amazon.com/exec/obidos/tg/detail/-/0131177052/qid=1115394770/sr=8-1/ref=sr_8_xs_ap_i1_xgl14/103-3591758-7198219?v=glance&s=books&n=507846 [amazon.com]

Look at Junit (1)

HughsOnFirst (174255) | more than 9 years ago | (#12451756)

When I worked at Cisco on a project that was written in Java, we used an automated unit testing tool that would test each method and report on what would break - for instance, if passing a particular value to a method made it fail, maybe you should fix the code to deal with that possibility. It was either JUnit or JTest (one of them costs $3000 a seat; we used that one). It was a good thing, since QA categorically refused to "test" the software by trying to break it; they would only test to see if it worked when the customer did everything right.

Anyway, even if you aren't using Java you might look at the documentation of Junit for ideas about building test fixtures for components of your code.

Thankfully I don't work there any more.

Re:Look at Junit (2, Interesting)

creimer (824291) | more than 9 years ago | (#12454014)

It was good thing since QA categorically refused to "test" the software by trying to break it, they would only tested to see if it could work if the customer did everything right.

What kind of testing is that? You have to assume that the customer won't do everything right if you're going to find bugs. Just because you're using automated code testing, it doesn't mean that the unit tests themselves have been written correctly or that all the code works perfectly together. A good QA team needs to have the attitude that everything will be tested and everyone else can kiss ass.

Re:Look at Junit (1)

HughsOnFirst (174255) | more than 9 years ago | (#12455409)

Quote "We are QA, not testing! We do QA, we don't test"

My suggestion that random banging on the keyboard, pushing buttons, and unexpectedly closing windows would be a good thing was not appreciated because there was no way to write it up as a test plan, or describe it as a repeatable bug.

Re:Look at Junit (3, Interesting)

creimer (824291) | more than 9 years ago | (#12455679)

My suggestion that random banging on the keyboard, pushing buttons, and unexpectedly closing windows would be a good thing was not appreciated because there was no way to write it up as a test plan, or describe it as a repeatable bug.

In the video game industry, that's called button smashing. Programmers hated it because it meant that their input code didn't consider multiple buttons being pressed at the same time and, worse, it was usually time-dependent. Nintendo is very good at finding button smashing bugs.

Re:Look at Junit (0)

Anonymous Coward | more than 9 years ago | (#12481181)

My suggestion that random banging on the keyboard, pushing buttons, and unexpectedly closing windows would be a good thing was not appreciated because there was no way to write it up as a test plan, or describe it as a repeatable bug.

And what did you do to address this criticism? Here's what I would have done: found or developed an input layer that could record and replay these keystrokes.

Do you have automated testing tools available? (3, Insightful)

Richard Steiner (1585) | more than 9 years ago | (#12451807)

Such tools can make after-the-fact testing quite a bit easier.

We used automated regression testing scripts in the mainframe environment I worked in 12 years ago, and that made some aspects of unit testing relatively easy.

Unisys had a tool (TTS1100) which allowed us to record each online transaction entry and computer response and then play it back later, and that made it possible to perform the exact same tests dozens or hundreds of times if needed. We used to run them after each set of changes was applied to make sure nothing broke. :-)

One could also record a single occurrence of a lengthy interactive sequence and then add things like variables and looping structures into the recorded script to automate the handling of various test cases using different values.

Such a tool makes after-the-fact test design a little bit easier because you can sit down and methodically address each and every variation of each and every input field on a given screen.

Of course, the nature of the software you're using might make that sort of thing more difficult, or perhaps even easier.

I've never been able to do up-front unit test design -- specifications can change rather quickly when doing in-house software development, and the overall environment is a lot more dynamic than a typical "software house" environment would be where one always has formal detailed product specs to code to. We're often writing code based on an e-mail or on a couple of phone conversations.

Unit testing is not a goal (2, Interesting)

Laz10 (708792) | more than 9 years ago | (#12451812)

Unit testing is a method you use to achieve something. Is the current component very buggy and you need to rewrite it, or do you need to extend production-quality software without breaking existing functionality?

If you are testing a component, try to figure out if it is possible to test it in some schematic way. If you can figure out a way for the "business" people to write the tests for you, that will take a lot of knowledge off your shoulders.

If it is an existing component, maybe you could explore whether it is possible to build some mechanism that "records" actions against the component and can later "replay" them, checking that the results from your new or changed component match those of the production-quality one.

I recently started as a contractor on a J2EE project that has lots of problems. The application has a classic backend with lots of ugly EJB anti-patterns and everything. The frontend is a VB client that communicates via a simple webservice.
In a couple of days I was able to make a regression test engine that can save the xml-communication that our business-clever testers make to the server and then, at any given time later, run the same requests against the server and check if the responses match the originally recorded ones.

It works wonders, and I now have free hands to clear out a lot of the technical mess while always having proof that I haven't broken anything.
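A rough sketch of that record/replay idea (the directory layout, endpoint URL, and plain string comparison are all assumptions, not the poster's actual engine): each recording directory holds a captured request plus the response it originally got, and the harness flags any replay whose response differs:

    import java.io.BufferedReader;
    import java.io.File;
    import java.io.FileReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.OutputStream;
    import java.io.Reader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ReplayHarness {

        public static void main(String[] args) throws IOException {
            // assumed layout: recordings/<name>/request.xml + response.xml
            File[] recordings = new File("recordings").listFiles();
            for (int i = 0; i < recordings.length; i++) {
                File dir = recordings[i];
                String request = read(new FileReader(new File(dir, "request.xml")));
                String expected = read(new FileReader(new File(dir, "response.xml")));
                String actual = post("http://localhost:8080/service", request);
                System.out.println((expected.equals(actual) ? "OK         "
                                                            : "REGRESSION ")
                        + dir.getName());
            }
        }

        private static String post(String url, String body) throws IOException {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(url).openConnection();
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "text/xml");
            OutputStream out = conn.getOutputStream();
            out.write(body.getBytes("UTF-8"));
            out.close();
            return read(new InputStreamReader(conn.getInputStream(), "UTF-8"));
        }

        private static String read(Reader reader) throws IOException {
            BufferedReader in = new BufferedReader(reader);
            StringBuffer sb = new StringBuffer();
            int c;
            while ((c = in.read()) != -1) {
                sb.append((char) c);
            }
            in.close();
            return sb.toString();
        }
    }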

Think about what your goals are. Then find the best tool to get there.

Re:Unit testing is not a goal (1)

thsths (31372) | more than 9 years ago | (#12456114)

> Think about what your goals are. Then find the best tool to get there.

I couldn't say it any better. If you want to make sure that functionality stays intact while you change things, write system tests. If you want to use code in a different context, document it and write unit tests first.

Whatever you do, you need some "absolute reference" to find out what is right and what is a bug. Tests are no good if they just preserve old bugs for eternity.

Welcome to the real world (1)

ecklesweb (713901) | more than 9 years ago | (#12451820)

Welcome, to the world of the Real.

In most IT shops, I'm sorry to say, test cases are a low priority and almost always come after the code is complete or nearly complete.

If you're looking for a "methodology" for creating unit test cases, I think you're overthinking the problem. You need to create a set of test cases that assure you that the unit is working in and of itself. There are a number of things you can do to accomplish that:

1. Look at the design document.
See what the design document says the unit is supposed to do. Write test cases to test the various aspects of what the unit is supposed to do. If it transforms input, write test cases that test expected input, boundary cases, completely invalid input, no input, and as much damn input as you can generate (see the sketch after this list). Do that for every function of the unit.

2. Look at the code.
Because the design document may be outdated, incomplete, or just MIA, look at the code. Hopefully it's commented well and tells you what the unit should do. Write test cases that are sufficient to exercise each function in the unit, and if possible each branch of code in the functions in the unit. Ideally every line of code gets executed at least once by your unit test suite, though 100% coverage is not always possible or cost-effective.

3. Look at other test cases
If Integration or System test cases have been written, then you may be able to use those as models. Integration test cases in particular will show you what you don't need to bother testing at the unit level. They will also show you where your stubs and drivers need to be built. The system test cases may give you more ideas for functionality that needs to be tested.
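As promised above, a boundary-value sketch for step 1. Normalizer is hypothetical (a stub is included so the sketch compiles), and its assumed spec is "trim and lower-case strings of 1 to 64 characters; reject anything outside that range":

    import junit.framework.TestCase;

    public class NormalizerTest extends TestCase {

        static final class Normalizer {  // stand-in for the unit under test
            static String normalize(String s) {
                String t = s.trim().toLowerCase();
                if (t.length() < 1 || t.length() > 64) {
                    throw new IllegalArgumentException("bad length: " + t.length());
                }
                return t;
            }
        }

        public void testExpectedInput() {
            assertEquals("hello", Normalizer.normalize(" Hello "));
        }

        public void testBoundaryLengths() {
            assertEquals("a", Normalizer.normalize("a"));   // minimum length
            String max = repeat('x', 64);
            assertEquals(max, Normalizer.normalize(max));   // maximum length
        }

        public void testInvalidInputRejected() {
            try {
                Normalizer.normalize("   ");  // trims to nothing: below minimum
                fail("expected IllegalArgumentException");
            } catch (IllegalArgumentException expected) {
                // the unit refused the bad input, as the spec requires
            }
        }

        private static String repeat(char c, int n) {
            StringBuffer sb = new StringBuffer();
            for (int i = 0; i < n; i++) {
                sb.append(c);
            }
            return sb.toString();
        }
    }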

Sorry that you've been dumped in this situation, but it's the same situation that many, many of us face every day. It is the unfortunate reality of software engineering at a lot of IT shops.

Re:Welcome to the real world (1)

RomulusNR (29439) | more than 9 years ago | (#12471248)

You can't actually write complete tests until after the code is complete, if you have no functional or design documentation. You can come up with use cases, but those aren't necessarily tests until the product/component is in final form. Then you determine how to turn use cases into test procedures.

This goes for unit testing as much as it does for integration testing. If the design hasn't been (entirely) prescribed, then it's going to be pretty much (at least partially) invented at coding time -- meaning interfaces and processes won't be complete. So you can't hope to have fully usable tests until after those aspects have been determined.

Whhhaaaa! (4, Insightful)

LouCifer (771618) | more than 9 years ago | (#12451852)

Gimme a fucking break.

Every testing job I've ever had we've had ZERO documentation. NADA. ZIP.

How do we survive? WE TEST. We put down the book (like we had one to begin with) and we test. Surely you have a server somewhere running dev-level code (at least), and you start poking around. Sure, it's less than ideal, but you deal with it. And you bitch about how crappy it is and how it goes against all the principles of so-called 'real world' methodologies.

The thing is, this is how the real world does it.

Sure, in a perfect world, everyone has their shit in order. But in a perfect world we're not all competing against code monkeys working for 1/10th of what we make and that live in a 3rd world country.

Re:Whhhaaaa! (1)

TheLink (130905) | more than 9 years ago | (#12453734)

I've been looking at code from coders that lived in a 1st world country and earned 1st world pay. And it really isn't that good... (BTW you're close enough about the zero documentation bit...)

It seems there are very few good programmers. If you're going to get crap anyway, better to pay 1/10th for it :).

Not saying that I'm a good programmer. Far from it. But heh, even I can do better than the crap I saw.

With just the fixes/rearchitecting of some recent code, I might have justified most (if not all) of my entire _year's_ salary already ;).

In the original team's defense: the stuff worked well enough in the 90s when the loads were lower.

Re:Whhhaaaa! (1)

FriedTurkey (761642) | more than 9 years ago | (#12457735)

As someone who has been on a dedicated testing team I can say just poking around is a good idea. Very often the test scripts will work but doing things not on the test script will produce an error.

I had one developer who got very pissed at me because I did things outside of the test script and caused some errors. He was like "That's not how you're supposed to do it" and then showed me that it worked if you did it like the test script. I was like "Ummm, I don't think you can count on end users to do it exactly like the test script". He then told our manager I didn't know how to test.

Re:Whhhaaaa! (1)

LouCifer (771618) | more than 9 years ago | (#12458247)

Leave it up to a developer to tell a tester how to do our jobs.

Reverse the tables and watch them blow their lids.

Of course, they can't think outside the box (doesn't apply to all developers, but a lot I deal with).

Hopefully your manager realized the developer was/is an idiot and told HIS manager as much.

Fortunately, I've got a great manager (former developer) who knows what's what when it comes to testing.

Re:Whhhaaaa! (1)

RomulusNR (29439) | more than 9 years ago | (#12471398)

Well, developers don't know how to test, or else we wouldn't fucking have TESTERS.

("We don't tell you guys how to code, do we?" -- well, actually, I make coding/fix suggestions once in a while, but I have some RW coding experience.)

Sorry to hear you had to face the developer-tester impasse in front of your boss before you'd had it explained to h{im|er} beforehand.

Developers' job is to understand how to make the product, testers' job is to understand how it will be used.

Re:Whhhaaaa! (3, Insightful)

RomulusNR (29439) | more than 9 years ago | (#12463539)

Except when you're expected to have a test plan. You can't come up with a test plan without a functional spec. A design doc helps even more.

You can't possibly ensure that the application does what it's supposed to if no one can communicate to you what that entails. Imagine testing a house by spraying it with water, banging on the windows, and tromping on the lawn. Those all sound like good things, until the future owner tries to open the front door, and can't.

Make sure anything changed is tested (3, Insightful)

Chris_Jefferson (581445) | more than 9 years ago | (#12451993)

Trying to write test cases for all the code you have will be very difficult, will take very long, and, to be honest, won't buy you a lot.

A few open source projects have found themselves in the same situation as you, and they seem to work by 3 rules:

1) If you change any code at all which doesn't have a test, add a test

2) If you find a bug, make sure you add a test that fails before, and works now

3) If you are ever wandering around trying to understand some code, then feel free to write some tests :)

One thing I will say is to try very hard to keep your tests organised. Keeping them in a very similar directory structure to the actual code is helpful. Without this it's very hard to tell what has and hasn't got a test.

My Experience... (5, Informative)

Dr. Bent (533421) | more than 9 years ago | (#12452480)

I inherited a 1000-class, Java-based toolkit from my predecessor, which had exactly zero unit tests. Over the last two years, we've made a sustained effort to employ Test-Driven Development and add more tests to ensure that everything works as advertised. As of today the toolkit has over 830 tests, with line coverage of 61% and class coverage of 96%. We've still got a long way to go, but we're much better off than we were. Here's how we got there...

1) A lot of people are going to tell you that you need to write your tests from scratch. That you should assume that your code is broken and work out the expected results by hand and create the test assertions accordingly. I disagree [benrady.com]. If you're testing old code, it's much more useful to use the test to ensure that it does whatever it did before, instead of ensuring that it's "correct". I prefer to treat the code as though it is correct, and build the tests around it (a sketch follows this list). Even if the assumption is occasionally wrong, you can write the tests much more quickly this way. That allows you to refactor and extend your system with confidence, knowing that you haven't broken anything. Remember, TDD isn't really about quality assurance, it's about design and evolving design through refactoring. More tests == more refactoring == better system.

2) You're probably not going to get a lot of extra time to sit around and write tests. You need to capitalize on the time that you have and turn problems into opportunities to add tests. Whenever you find a bug, make a test that reproduces it. If you need to add supporting stub or mock objects, consider making them reusable so that future tests will be easier to write.

3) If you need to add new functionality to the system, just follow the standard TDD steps of Test->Code->Refactor, and make sure that you add tests for anything that might be affected by the change.

4) I'm assuming that you already have a continuous integration build that runs the tests, but if you don't, make one. Now. Also consider adding other metrics to the build like code coverage (we use Emma), FindBugs, and JDepend. These will help you track your progress and can be very useful if you have to defend your methodology to people who view TDD as a waste of time (the Code Coverage to Open Bugs ratio gets them every time).

5) In general, you need to look for opportunities to write tests. Don't understand how a module works? Write a test for it. Found a JDK bug? Reproduce it with a test. Performance too slow? Use timestamps to ensure that the performance of an algorithm is in a reasonable range.
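A minimal sketch of the characterization-test idea from point 1 (LegacyTaxCalculator is a hypothetical name, stubbed so the sketch compiles). The expected value is whatever the legacy code actually returned when run once by hand, not what a spec says it should return:

    import junit.framework.TestCase;

    public class LegacyTaxCalculatorTest extends TestCase {

        static final class LegacyTaxCalculator {  // stand-in for legacy code
            double taxOn(double amount) {
                return Math.floor(amount * 7.0) / 100.0;  // quirky old rounding
            }
        }

        public void testCharacterizeCurrentRounding() {
            // 7.42 was captured from a real run of the legacy code; the test
            // now pins that behavior down so refactoring can't silently
            // change it -- whether or not 7.42 is "correct".
            assertEquals(7.42, new LegacyTaxCalculator().taxOn(106.0), 0.0001);
        }
    }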

You've probably got a long road ahead, but it's worth the work. Keep at it, and good luck.

Re:My Experience... (1)

p3d0 (42270) | more than 9 years ago | (#12459424)

Nice post. I just lost my mod points about an hour ago or else you'd have a well-earned Insightful from me. (Especially point #1.)

More recommendations (0)

Anonymous Coward | more than 9 years ago | (#12452634)

Look through old records of bugs that have been fixed in the code (e.g. bug-tracker tickets), and write unit tests to detect those bugs. Prioritize bugs that have shown up more than once. The idea here is that one doesn't want to have to diagnose and fix the same bug twice.

Pick up a book by Boris Beizer (1)

Banner (17158) | more than 9 years ago | (#12453012)

The purpose of unit testing is to make sure that the unit works (and to characterize unit failures). It's the sanity check before you throw it 'over the wall' to the test and quality organization.

You want to do path coverage, statement coverage, bounds checking on the inputs, error conditions, that kind of stuff.

Pick up a book by Boris Beizer, read his stuff and ignore everyone else. I've been in QA and Test for almost twenty years now, Beizer is -the man- to read about testing. If you're really desperate send me a message here and I'll send you a template to work from.

What about ... (2, Funny)

Free_Trial_Thinking (818686) | more than 9 years ago | (#12453036)

Guys, my legacy code doesn't have functions, just VB subroutines that modify global variables. Any idea how to make unit tests for this? And by the way, the functions aren't cohesive, each one is 100's of lines and does different sometimes unrelated things.

Re:What about ... (1)

dreadway (459923) | more than 9 years ago | (#12454904)

just VB subroutines that modify global variables [...] And by the way, the functions aren't cohesive, each one is 100's of lines

Searching the web on 'Junit VB' you should be able to find test harness freeware.

If there's the possibility of modifying the code organization, that would help out a lot, obviously, because then you could make smaller functions doing more sharply defined things, which are therefore presumably easier to test.

If not, perhaps accept that you can't test the existing code except at a high level, or investigate what libraries are available for asserts/tracing within VB code. In combination with asserts/tracing, the test harness could provide different sets of test data/actions and exercise different paths through the big blob of code, thereby demonstrating correct results, API limitations, error/exception handling, etc.

Good luck.


Re:What about ... (1)

Nasarius (593729) | more than 9 years ago | (#12454913)

See if you can write some tests to ensure basic functionality. Then tear it apart and start refactoring. Writing properly testable code is no simple task.

Re:What about ... (0)

Anonymous Coward | more than 9 years ago | (#12461261)

Congratulations, you got a great prototype, showing how a system can look. Now use that knowledge to build the real system.

Hmmm... (1)

Sparr0 (451780) | more than 9 years ago | (#12453169)

Something about a million monkeys sitting at a million keyboards comes to mind :)

No such thing as too many (1)

dmorin (25609) | more than 9 years ago | (#12453316)

If you're suggesting that your job is to be the unit test guy, then I would just start writing tests as you think of them. Critical mass is important - if you only have a few unit tests, no one will care. Write some obvious ones for everything. And then go back and dig in where you think unit tests make more sense. In the web app world in particular it is often very hard to write real unit tests without getting into a whole variety of special rigging. So focus on the logic components that tend to be more abstract and either do or do not do the right thing without reliance on databases and other servers.

As long as all the tests pass, there's really no harm in testing anything you want. Nobody's ever going to tell you to write fewer tests. Just remember to go breadth-first and get something written for all the obvious spots, rather than getting all trivial and obscure on your favorite class just because you can.

The developer excuse is almost always "I don't have the time/energy/patience to write the unit tests for this legacy code." So what you want to do is provide a foot in the door that allows the developer to realize that maybe he is really only updating existing tests, and possibly creating a few, but not all of them. Much less work.

But then again I may have misunderstood your question.

Re:No such thing as too many (1)

Nasarius (593729) | more than 9 years ago | (#12455048)

In the web app world in particular it is often very hard to write real unit tests without getting into a whole variety of special rigging.

True enough, but if you've written the code in good abstracted OO with the "special rigging" (i.e., mock objects) in mind, you'll be much better off. I'd say there's very little complex code that doesn't require mock objects or the equivalent to test, so it's something well worth learning. As you say, once you've got all the groundwork set up, it's much easier to extend.

What I would do (1)

Tamerlan (817217) | more than 9 years ago | (#12453625)

1. Choose a testing framework. It depends on the language: for C/C++, cppunit; for Java, JUnit.
2. Start writing unit tests from the lowest-level functions - those that use barebones system libraries - and move upwards. Do not go too far - it's unit (!) testing, not general or regression testing.
3. Ideally you should test all possible paths in a function. Obviously that is not feasible for large functions (which is one of the reasons functions should be small). You should try to create a test that hits every condition and every branch of a condition at least once (see the sketch after this list).
4. It's more complex than these 3 items, but I'd say this is a start.
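A tiny JUnit sketch of point 3, with a hypothetical (stubbed) Shipping.costFor(): one test per branch of the condition, so each arm of the if is executed at least once:

    import junit.framework.TestCase;

    public class ShippingBranchTest extends TestCase {

        static final class Shipping {  // stand-in for the unit under test
            static double costFor(double orderTotal) {
                if (orderTotal >= 50.0) {
                    return 0.0;   // branch A: free shipping
                }
                return 4.95;      // branch B: flat rate
            }
        }

        public void testFreeShippingBranch() {
            assertEquals(0.0, Shipping.costFor(50.0), 0.001);    // hits branch A
        }

        public void testFlatRateBranch() {
            assertEquals(4.95, Shipping.costFor(49.99), 0.001);  // hits branch B
        }
    }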

Unit test == Code review (1)

Paul Johnson (33553) | more than 9 years ago | (#12454772)

Some people are recommending that you treat the modules as "black boxes" and write the tests according to their specs. Problem is (as others have pointed out) the specs are not detailed enough. So you will inevitably wind up looking at the code and then writing tests that prove the code does what it says it does.

And this is actually OK, because it simply means that the "Unit test creation" process is actually a detailed code review process. Expect to find far more bugs from looking at the code than from running your tests. But this doesn't matter: the bugs have been found.

Don't expect to be able to reuse the tests after modifying the code because any change to the code will generally break the test.

You might have a look at QuickCheck, though. It was written as a unit test system for Haskell functions: you write a formal spec of the properties of the module and it generates random tests according to that spec. The latest prototype (see the website pointed to by www.haskell.org) also handles modules with state, and there is no reason why it couldn't be used for general software testing. By organising your tests around formally specified properties of modules instead of individual procedures, you should get more reusable tests.
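A hand-rolled, minimal sketch of that property-based idea in plain JUnit (QuickCheck itself is a Haskell library; StringUtil here is a hypothetical, stubbed helper): state a property of the module and fire random inputs at it.

    import java.util.Random;
    import junit.framework.TestCase;

    public class ReversePropertyTest extends TestCase {

        static final class StringUtil {  // stand-in for the unit under test
            static String reverse(String s) {
                return new StringBuffer(s).reverse().toString();
            }
        }

        // Property: reversing a string twice gives back the original.
        public void testReverseTwiceIsIdentity() {
            Random random = new Random(42);  // fixed seed, so failures reproduce
            for (int i = 0; i < 1000; i++) {
                String s = randomString(random);
                assertEquals(s, StringUtil.reverse(StringUtil.reverse(s)));
            }
        }

        private static String randomString(Random random) {
            int len = random.nextInt(20);
            StringBuffer sb = new StringBuffer();
            for (int i = 0; i < len; i++) {
                sb.append((char) ('a' + random.nextInt(26)));
            }
            return sb.toString();
        }
    }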

Paul.

Re:Unit test == Code review (1)

ClosedSource (238333) | more than 9 years ago | (#12454979)

"So you will inevitably wind up looking at the code and then writing tests that prove the code does what it says it does."

Code always does what it says it does. The problem is determining what the code should be doing and testing for that.

Re:Unit test == Code review (1)

Viking Coder (102287) | more than 9 years ago | (#12457700)

"Code always does what it says it does."

Yes. Absolutely. 100%.

Until someone changes it.

When someone changes the code, it can be very enlightening to see the cascade of side-effects (especially if you thought there weren't going to be any!) Having unit tests that merely document the current behavior of code in a format that a machine can rapidly reproduce from the source code (compile unit test, run unit test, compare output to old output) can help you identify... Regressions! Very handy.

The main way this helps you is by pointing out very directly what changed, when you made a source change. Then you can ask yourself - is that what it should be doing? Or: do I like the old behavior better than the new behavior? If so, time to undo and start over.

Re:Unit test == Code review (1)

ClosedSource (238333) | more than 9 years ago | (#12459721)

"Until someone changes it."

No. It still does exactly what it "says" it does, it just "says" something different after the code has been modified.

Regression testing can be very handy as long as the tests reflect what the code is supposed to do. Otherwise passing the regression simply means that your code is consistently failing to achieve its requirements.

Re:Unit test == Code review (1)

HalWasRight (857007) | more than 9 years ago | (#12459867)

Regression testing can be very handy as long as the tests reflect what the code is supposed to do.
Looking at the code to see what it is supposed to do is silly. Just because code does something doesn't mean it is supposed to. What if it segfaults for some input? Was it supposed to? I've seen code that was supposed to segfault!

Re:Unit test == Code review (1)

ClosedSource (238333) | more than 9 years ago | (#12465567)

"Looking at the code to see what it is supposed to do is silly."

Well, there are times when looking at the code is useful but I never claimed you needed to do that to test it. All I'm saying is that a test should be devised to determine if its requirements are met.

"What if it segfaults for some input? Was it supposed too? I've seen code that was supposed to seg fault!"

As a former Atari 2600 programmer, I can appreciate the idea that correct behavior may be quite unconventional (such as performing an indexed write to a scratch location in order to maintain consistent scan line timing without triggering a repositioning of a "player"), but the point (as I think you'll agree) is to confirm the appropriate behavior regardless of how it is achieved.

I'm not a big fan of working around bugs instead of fixing them, however. So I think the old behavior should also be the correct behavior.

Re:Unit test == Code review (1)

Viking Coder (102287) | more than 9 years ago | (#12460353)

If you want to be purposefully obtuse, I can't stop you.

My only point, and I feel I was abundantly clear on this, is that regression tests provide a record of what source code used to do.

If you like what it used to do, then if the output is different, you can bet you did something wrong.

If you didn't like what it used to do, you can use the output to try to figure out if you changed the parts you wanted to. ("Passing" a regression test is a bad thing, if your test captured the fact that you used to have a bug.)

If you want to pore over every possible use case of the function you're just barely modifying, feel free. Most businesses don't have the time for Rain Man to read the phone book. Since your time (as a human) is more valuable than the computer's time (as a cheap piece of hardware), doesn't it make more sense to let the compiler check for compile errors, the linker check for link errors, and the regression tests check for regression errors? Yes, you could do it by hand - but isn't that a bit inefficient?

Re:Unit test == Code review (1)

ClosedSource (238333) | more than 9 years ago | (#12465473)

If the old behavior was what you "wanted" but is different behavior than what your requirements state, you have problems that testing will not solve.

Re:Unit test == Code review (1)

Viking Coder (102287) | more than 9 years ago | (#12470038)

But what we're talking about is legacy code, where you probably don't even have requirements, or the requirements are so out of date that they're useless.

Now what?

Regression tests can help pinpoint the results of code changes - for good or bad. If the old behavior was what you wanted, then you undo your changes.

Again - are you being purposefully obtuse?

Re:Unit test == Code review (1)

ClosedSource (238333) | more than 9 years ago | (#12473315)

"But what we're talking about is legacy code, where you probably don't even have requirements, or the requirements are so out of date that they're useless."

I don't see any reason why legacy code is less likely to have requirements than new code. In any case, if the requirements are not updated with the code, then you have a flaw in your development process.

You can certainly perform regression tests even if your process is flawed in other ways; I'm not disputing that.

"Again - are you being purposefully obtuse?"

Perhaps being obtuse isn't really what you want to accuse me of. Why don't you double-check the definition.

Re:Unit test == Code review (1)

Viking Coder (102287) | more than 9 years ago | (#12473880)

If you don't have regression tests, then you have a flaw in your development process.

That's what we're all talking about here.

You've ignored that from the beginning, and that's what I keep pointing out again and again.

Re:Unit test == Code review (1)

ClosedSource (238333) | more than 9 years ago | (#12473934)

I've never objected to regression tests in this thread, apparently you haven't been paying attention.

Re:Unit test == Code review (1)

Viking Coder (102287) | more than 9 years ago | (#12477412)

"Regression testing can be very handy as long as the tests reflect what the code is supposed to do." (Emphasis added.)

This statement is the crux of our disagreement. I assert that having regression testing is handy, regardless of whether the code is doing what it is supposed to be doing, whether the tests reflect what the code is supposed to do, or basically any other factor.

There are a few factors which make them valuable to me, even in the worst of conditions:

1) It provides kind of a minimal environment in which you can ensure that your code even compiles. I'm not saying that code that compiles (and is way wrong) is necessarily better than code that does not compile (but is closer to the truth), but it can be a very handy piece of information to know that one bit - does it currently compile (and link) or not? Regression tests can often answer that question more quickly than your production environment. The value of information decreases the longer you have to wait to get it, so regression tests have value. (Again, whether the code is doing what you want it to or not.)

2) To record what code currently does, you can read all of it, or you can look at some distilled format. If all I need is a small record of current behavior, a regression test can be quite handy. Again, whether the code does what it's supposed to do or not.

3) I guess I'd rather start with something than with nothing. It gives me a place to drop more test code quickly, and it's already wired in to the build environment.

4) Oh yeah, again it provides a minimal environment for you to run a debugger so you can step through the code - often very handy for quickly understanding code.

5) If nothing else, crappy code and crappy regression tests give you something to compare your new code against. You can quickly determine, "Well, the old code worked for test cases A and B, but not C and D, and I don't think they even thought of E. My code handles all those cases."

6) Finally, at one time, somebody thought the regression test had some amount of value. It's useful to try to understand why.

Even if you have a flawed process. Even if requirements are not documented. Even if the code is misbehaving. Even if the regression test is not exercising the code the way it should. You keep bringing those points up, and I'm shooting them down, because I assert that the regression tests still have value.

Re:Unit test == Code review (1)

ClosedSource (238333) | more than 9 years ago | (#12478985)

Then I suggest you write a single regression test for all of your projects since any arbitrary regression test can meet your minimum criteria of "regardless of ... whether the tests reflect what the code is supposed to do".

Re:Unit test == Code review (1)

Viking Coder (102287) | more than 9 years ago | (#12489433)

Yes, yes - clearly by taking your flippant responses seriously I deserve to have you deride my post without any attempt to understand my core sentiment:

Even a bad regression test is better than no regression test.

Working effectively with legacy code. (1)

_xs_ (14098) | more than 9 years ago | (#12454776)

Try to get your hands on Working effectively with legacy code [amazon.com] by Michael Feathers.

General testing philosophy (1)

Reelworld (120784) | more than 9 years ago | (#12454864)

First of all you need to decide what you want to achieve from your testing. If you're following a methodology such as the 'V'-cycle, then the point of unit testing is to verify that the code correctly implements the design (system testing is where you check that the system implements the requirements).

Many replies here are along the lines of "we don't have any documentation" - well, in that case, you can't do truly meaningful testing. You can test what you think the code should do, but that'll always be based on assumptions (and we all know about those...), and it could be argued that this kind of testing is almost pointless - it would be far more productive to concentrate on system testing.

Of course, there's nothing to stop you amending/clarifying/extending any documentation that you do have while in the unit testing process, as long as you take due care and follow your prescribed process.

Finally, if I may offer some advice - document (and apply QA to) your unit tests. In many places there seems to be an attitude that "they're only tests - they don't need good documentation", which in turn creates a maintenance nightmare in itself.

Re:General testing philosophy (1)

vidarh (309115) | more than 9 years ago | (#12469231)

Writing tests for undocumented code serves a very important purpose: the tests will act as documentation for what behaviour you expect of the code. Those tests can then serve as a guide to writing documentation, in the knowledge that they document the actual behaviour of the code (since otherwise the tests would fail).

Documenting legacy code that doesn't have test cases without at the same time building a test suite is a nightmare.

Re:General testing philosophy (1)

Reelworld (120784) | more than 9 years ago | (#12474284)

I think here that you've hit upon a fundamental issue - 'expected behaviour' as opposed to 'intended behaviour'.

With no documentation of the existing code, you can write as many tests as you like, but you still can't prove that the code performs as it was originally intended to - you can only prove that the code does what the code does.

Granted that this is of some use for the purpose of creating a regression test suite, but it doesn't alter the fact that your tests can't, by definition, find bugs in the existing code, as you've nothing to test against... other than your assumptions...

I recommend this article (3, Informative)

gstover (454162) | more than 9 years ago | (#12455156)

A recent article in print about automated unit tests for legacy code was

"Managing That Millstone"
By Michael Feathers
Software Development
January 2005
http://www.sdmagazine.com/documents/s=9472/sdm0501c/sdm0501c.html [sdmagazine.com]

It included suggestions for how to inject unit tests into code which isn't loosely coupled, some tips on how to refactor to get loosely coupled interfaces, & what you can do when neither of those approaches will work. It was a valuable & enjoyable read for me, at least.
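One technique in that vein - the "subclass and override" seam from Feathers' book - can be sketched in a few lines of JUnit (OrderProcessor and its collaborators are hypothetical): the tightly coupled dependency is isolated behind a protected method that a testing subclass overrides.

    import junit.framework.TestCase;

    public class OrderProcessorTest extends TestCase {

        // Stand-in production class: imagine sendConfirmation() talks to a
        // real mail server, making the class untestable as written.
        static class OrderProcessor {
            private String lastStatus;

            public void process(String orderId) {
                lastStatus = "PROCESSED:" + orderId;
                sendConfirmation(orderId);
            }

            public String lastStatus() {
                return lastStatus;
            }

            protected void sendConfirmation(String orderId) {
                // in real code: an SMTP call we must not make in a unit test
            }
        }

        public void testProcessSetsStatusWithoutSendingMail() {
            // The testing subclass overrides the seam with a recorder.
            final StringBuffer sent = new StringBuffer();
            OrderProcessor processor = new OrderProcessor() {
                protected void sendConfirmation(String orderId) {
                    sent.append(orderId);
                }
            };
            processor.process("42");
            assertEquals("PROCESSED:42", processor.lastStatus());
            assertEquals("42", sent.toString());
        }
    }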

gene

Unit Testing not a goal by itself. (1)

DynamiteNeon (623949) | more than 9 years ago | (#12456022)

Others have touched on this, but you shouldn't be looking to write unit tests just to say, "Hey, I've tested some of this code." Wait till you have to change something or add a new feature, then focus your energy on writing tests in those areas you need to protect. Then make your changes.

If you are doing unit testing on finished code... (1)

joto (134244) | more than 9 years ago | (#12457351)

...you are wasting your time.

Unit testing is for finding bugs early on (preferably design errors, but also coding errors).

If the code is already written and works, then it's not likely to be worth the effort to add random unit tests all over the place. What you need then is either (a) stress testing, to discover hidden bugs, or (b) regression tests, to make sure the software keeps working, even after programmers have "improved" upon it.

Re:If you are doing unit testing on finished code. (1)

vidarh (309115) | more than 9 years ago | (#12469190)

And guess what, unit tests are very much a part of a good regression test suite... So writing unit tests is most certainly not a waste of time even if the software is "working". Besides, I've lost count of the number of times I've found serious errors in "working" code when adding unit tests.

Once upon a time (1)

davecb (6526) | more than 9 years ago | (#12458042)

... there was an ADL [opengroup.org]-based toolset called, if memory serves, JavaSpec, which made API testing hard but doable. As opposed to "let's not, but say we did". I admit I used a hacked-up subset, but for large-scale problems, being able to generate tests and test data sets via a tool was A Good Thing.

Even worth learning ADL (:-))

--dave

that's simple really (1)

josepha48 (13953) | more than 9 years ago | (#12459001)

If the product already exists, then you know what it is supposed to do. All you have to do is come up with scenarios to test what it does. You should already have a user's guide, so you basically go to your user's guide and look to see what it says it's supposed to do. Then start testing against the guide.

You are so screwed (2, Insightful)

HalWasRight (857007) | more than 9 years ago | (#12459283)

Hah!

You are so screwed. Writing tests for untested code is a thankless job. You are going to find so many bugs, and everyone is going to get really pissed off about that new hire that is rocking the boat complaining about "quality problems".

You are in a no win situation. They will tell you your tests are too picky, that no one will use it like that. Unit testing is thankless, you can't argue. Given that there was no test plan, I bet there isn't even a spec! Where there is smoke there is fire.

I'd start looking for a new job right away.

Re:Dial 922 for the WAAmbulance (1)

RomulusNR (29439) | more than 9 years ago | (#12471165)

Dude, you just pretty much defined all QA everywhere. The only thing you missed was the part about being crammed in at the end of the release schedule after development was days late on their end and being expected not to let the date slip.

Read "Working Effectively with Legacy Code" (2, Insightful)

James Thatcher (852316) | more than 9 years ago | (#12462152)

...By Michael Feathers. The scenario that you may find is "we can't refactor until we have unit tests and we can't have unit tests until we refactor". The book has some strategies for getting around that paradox. You may find, however, that some code is essentially un-testable as written.

Black box the hell out of them. (1)

RomulusNR (29439) | more than 9 years ago | (#12463576)

Presumably since these components are all built, they have some idea of what they are supposed to do, and some sense of parameters.

Make stubs or some other kind of testing tools, hammer data into them, examine the data coming out.

I guess I don't quite follow the question. You're actually in a better position with already-developed code, for the simple reason that if these things are already developed, they already have a defined purpose (whether that was defined before or after the fact). Your only problem will be to figure out how to separate the components and find ways to interface with them.

Re Writing Unit Tests for Existing Code? (1)

hidoh (882531) | more than 9 years ago | (#12474085)

First of all, don't panic, you're not alone! I have done this a few times already. Here's my approach:
1. Identify "enduring business themes" in the code. This means basically a group of code that can be predictably tested by feeding certain input and expecting certain output. For example, you know that if you order two pens, a purchase order for two pens will come out the other end.
2. Once you establish a few of these scenarios you can write a few high-level unit tests. These will help you acertain whether or not this code works ok.
3. Once you have this, you're on your way to having test driven development in place. When any code changes have been done, quickly run your unit test to ensure results are still ok. If not, it means code change broke something (test driven quality assurance).
4. Now you can dig deeper and make more detailer fine grained unit tests.

Most times the best you'll be able to do is to lock in existing code base with a handful of solid high level unit tests and ensure that any new code is tested properly.

Forget about Unit Tests for now (0)

Anonymous Coward | more than 9 years ago | (#12475489)

You're better off writing integration tests and system tests - much more bang for your testing buck.