Automated Software QA/Testing?

timothy posted more than 9 years ago | from the build-robots-which-can-automate-testing dept.


nailbite writes "Designing and developing software has been my calling ever since I first used a computer. The countless hours/days/months spent going from imagining to actualizing are, to me, enjoyable and almost a form of art or meditation. However, one of the aspects of development that sometimes "kills" the fun is testing, or QA. I don't mind standalone testing of components, since you usually create a separate program for this purpose, which is also fun. What is really annoying is testing an enterprise-size system from its UIs down to its data tier. Manually performing a complete test on a project of this size sucks the fun out of development. That's assuming all your developers consider development fun (most apparently don't). My question is how do you or your company perform testing on large-scale projects? Do you extensively use automated testing tools, and if so, can you recommend any? Or do you still do it the old-fashioned way? (manually operating the UI, going through the data to check every transaction, etc.)"

248 comments

404 (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#9853386)

Haha.

Old fashioned way... (0)

Anonymous Coward | more than 9 years ago | (#9853387)

And I hate it. Looking for tips here myself.

I wish they tested this site (0, Offtopic)

Megor1 (621918) | more than 9 years ago | (#9853391)

I keep getting errors following links on slashdot today.

Re:I wish they tested this site (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#9853463)

Who modded this offtopic? This complaint is completely on topic. It would have been offtopic in another thread, but this thread is about software testing, and Slashdot clearly hasn't been tested.

Next and previous links (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#9853474)

Another complaint: where are the links to the next and previous article?!

I seem to recall that they were removed once because of the "load they imposed on the database". Back then they were quickly returned because of public demand.

We do it the old fashioned way (1, Funny)

Anonymous Coward | more than 9 years ago | (#9853398)

We do it the old fashioned way - we don't.

Seriously, why bother?

You're not alone. (3, Funny)

irokitt (663593) | more than 9 years ago | (#9853399)

However, one of the aspects of development that sometimes "kills" the fun is testing or QA.
I'm sure that quite a few Microsoft employees agree wholeheartedly.

Laugh, it's good for you.

Re:You're not alone. (-1, Troll)

Anonymous Coward | more than 9 years ago | (#9853439)

At least Microsoft does have a quality assurance department.

When it comes to OSS, you'd just better RTFM and fix the code to suit yourself.

I don't know anyone (including myself) who has ever received anything but abuse from the developers of open source software when sending in feature requests and bug reports.

Re:You're not alone. (0)

Anonymous Coward | more than 9 years ago | (#9853565)

If you say the contractions in your sig as separate words, you have a rather nice haiku...

Re:You're not alone. (0)

Anonymous Coward | more than 9 years ago | (#9853770)

And those that don't agree with you are likely the STEs that work at Microsoft and get their fun from breaking what others have created.

Manual Testing (3, Interesting)

Egonis (155154) | more than 9 years ago | (#9853404)

Although I haven't personally used many testing tools:

I created an enterprise application composed of client/server apps -- the best test was a small deployment of the application to users who were willing to help conduct the test. Over a few weeks, I found bugs I had never caught with my own manual tests.

Applications that test your code are great from what I have heard, but they will not catch human-interface issues, e.g. GUI mess-ups, invisible buttons, etc.

Re:Manual Testing (3, Insightful)

GlassHeart (579618) | more than 9 years ago | (#9853675)

What you're referring to is called "beta testing", where a feature-complete product is released to a selected group of real users. This is a highly effective technique, because it's simply impossible to think of everything.

However, if you go into beta testing too early, then major basic features will be broken from time to time, and you'll only irritate your testers and make them stop helping you. This is where automated tests shine, because they help you ensure that major changes to the code have not broken anything.

Put another way, automated tests can verify compliance with a specification or design. User testing can verify compliance with actual needs. Neither can replace the other.

nothing to see here, move along. (3, Insightful)

F2F (11474) | more than 9 years ago | (#9853405)

how about we go back to basics and read the proper books on computer science? no need for your shmancy-fancy-'voice debugged'-automagically-'quality assured' offerings, thanks.

i'll stick with The Practice of Programming [bell-labs.com]. at the very least i trust the people who wrote it to have better judgement.

Re:nothing to see here, move along. (3, Interesting)

Anonymous Coward | more than 9 years ago | (#9853442)

How about we read books on the subject written by software engineering researchers and not programming language researchers? See the Dynamic Analysis lecture notes. [mit.edu]

You shouldn't be doing it (5, Insightful)

dirk (87083) | more than 9 years ago | (#9853406)

The first thing you need to learn is that you shouldn't be doing large scale testing on your own systems. That is just setting yourself up for failure, since the only real testing is independent testing. Preferably you should have full-time testers who not only design what needs to be tested, but how the testing will be done and who will do the actual testing. Where I work, we have 2 testers who write up the test plans and then recruit actual users to do the testing (because they can then not only get some exposure to the system, they can also suggest enhancements for the next version). Testing your own work is a huge no-no, as you are much more likely to let small things slide than an independent tester is.

Re:You shouldn't be doing it (5, Informative)

wkitchen (581276) | more than 9 years ago | (#9853506)

Testing your own work is a huge no-no, as you are much more likely to let small things slide than an independent tester is.
Yes. And also because you can make the same mental mistakes when testing that you did when developing.

I once worked with a dyslexic drafer. He generally did very good work, except that his drawings often had spelling errors. When he looked at his own drawings, the errors were not obvious to him like they were to everyone else. Most people don't have such a pronounced systematic tendency to make some particular error. But we all occasionally make mistakes in which we were just thinking about something wrong. And those mistakes are invisible to us because we're still thinking the same wrong way when we review it. So having work checked by someone other than the one who created it is a good practice for just about any endeavor, not just software.

Re:You shouldn't be doing it (3, Funny)

jkubecki (26300) | more than 9 years ago | (#9853708)

I once worked with a dyslexic drafer. He generally did very good work, except that his drawings often had spelling errors.

Imagine that. A "drafer" who makes spelling errors.

Automated testing both contact center and Web (0)

Anonymous Coward | more than 9 years ago | (#9853415)

www.empirix.com --- blatant plug

As a former professional software tester ... (5, Insightful)

eddison_carter (165441) | more than 9 years ago | (#9853418)

Nothing can compare to having a dedicated test staff. At the last software place I worked (part-time, in school, while getting my engineering degree), we had 3-6 college students working on testing most of the time (we would also be given some small projects to work on).

Testing goes far beyond what any automated system can cover, if you have a user in there somewhere. You also need to check things like "How easy is it to use?" and "Does this feature make sense?". We also suggested features that the program did not have but that, from our experience using it, we thought it should have.

Re:As a former professional software tester ... (0)

Anonymous Coward | more than 9 years ago | (#9853698)


As a current tech support professional who was once assigned to help test a large enterprise effort, I'll state that, based on my n of 1, technical support (TS) should _always_ be included in testing.

The QA department found things the engineers didn't find, TS found things neither found, engineers found things other engineers missed (and things they knew were wrong got fixed).

Yes, absolutely, roll out to users, but make sure that the Support people are involved.

Testing is fun too. It is MEETINGS that suck. (2, Insightful)

Wargames (91725) | more than 9 years ago | (#9853421)

I agree about programming. I prefer the design phase. I like to design a system to the point that programming it is a cinch. What really sucks about software development is not testing, it is meetings. Meetings suck the fun out of programming. Stupid, senseless, timewasting meetings. Scott Adams hits the nail on the head about meetings every time.

QA is a separate function (5, Insightful)

drgroove (631550) | more than 9 years ago | (#9853422)

Outside of unit testing and limited functional testing, developers should be doing QA on their own code. That's a bit like a farmer certifying his own produce as organic, or a college student awarding themselves a diploma. It misses the point. QA function, automated, regression et al testing is the responsibility of a QA department. If your employer is forcing you to perform QA's functions, then they obviously don't "get it".

Re:QA is a separate function (1)

ghettoboy22 (723339) | more than 9 years ago | (#9853497)

Don't you mean "should not be doing QA on their own code.".... ?

Re:QA is a separate function (1)

jc42 (318812) | more than 9 years ago | (#9853685)

Don't you mean "should not be doing QA on their own code.".... ?

In actual practice, he got it right by forgetting the "not". ;-)

I usually present the QA folks with lots of test code. This makes them my friends, since they are usually under pressure to get the product out the door and don't have time to think up all the tests themselves. I don't either, of course, and I can't ever give them something complete. And I have the usual developer's problem of not being permitted any contact with actual users, so I can't guess at their misconceptions when they try using the software without having read any of the documentation that I also included.

Quite often, I get a bit of flak from management for being too friendly with the QA people. They usually have this silly "clean-room" concept for how it should be done. And the tests should test all possible paths. Yeah, as if they have the millennia that would take on the fastest cpus ...

One serious problem is that the QA testers always have to modify my tests for their test setup. I have my own collection of (mostly perl and tcl) testing tools, but of course that's totally different from what the QA people are using. And there isn't time for us to teach each other all the tools.

It's all a long way from an ideal setup. But that's The Market for you.

Re:QA is a separate function (1)

Rob Riggs (6418) | more than 9 years ago | (#9853818)

Quite often, I get a bit of flak from management for being too friendly with the QA people. They usually have this silly "clean-room" concept for how it should be done.

Your management has their hearts in the right place. The problem with the developer providing the QC of their own code is that they may miss the same problems in their test code as they did in development.

I think of the system testing at my company as comprising two main activities: integration testing and feature testing. I can write unit tests for the latter, but if I misinterpret a requirement, my tests will validate that misinterpretation.

Integration testing of enterprise applications is more difficult for the developer to do, because usually the development environment does not exactly mimic the production environment.

Re:QA is a separate function (1)

Yelskwah (111041) | more than 9 years ago | (#9853584)

I think you forgot a 'not'.

Though the sentiment is clear: developers should not test or QA-certify their own code. Besides the obvious problem of the moral standards (or lack thereof) of the engineer ("Aw shit it broke, ah f*ck it. It's 2am. Label it. I'm off home."), there's also the psychological effect of engineers subconsciously shying away from testing areas they know are not sufficiently protected with validation and exception handling, or where the specification (if there /was/ one) was fuzzy.

In my experience, if you want to nail your shorts to the mast and say "we believe in software product quality", make it the primary responsibility of a group of people with a named leader. Have that leader own the QA process (and DEFINE that process so it is refinable and repeatable) and BLAME that team as a whole when the software blows up in production. This is the only metric that really matters to that team. Link pay to that metric.

Yes, that's harsh. Yes, the team will be reviled and hated. You - and they - will have to deal with that.

But over time, the QA team will develop a bulletproof QA process. Google for IBM's "Black Team" for an example of a QA team that worked.

-JH

Re:QA is a separate function (2, Insightful)

the_weasel (323320) | more than 9 years ago | (#9853599)

I do a fair bit of QA and testing of our software product -- and if I had a nickel for every time it's been apparent that a developer has checked in code they clearly NEVER tested...

A developer has some responsibility to ensure their code at least functions in the context of the overall application before making it my problem. Just because it compiles does not mean it is done.

Re:QA is a separate function (1, Funny)

Anonymous Coward | more than 9 years ago | (#9853640)

I used to work as a QA tester and now I'm the gumpty who admins our source control and build platforms. I used to think our developers were idiots for checking in untested code. Now I know better; our developers are morons and regularly check in code that doesn't even compile!

Testing (3, Informative)

BigHungryJoe (737554) | more than 9 years ago | (#9853423)

We use certification analysts. They handle the testing. Unfortunately, the functional analysts that are supposed to write the tests rarely know the software well enough to do that, so some of my time is spent helping to write the black box tests. Writing good tests has become especially important since most of the cert analyst jobs are being sent to India and aren't on site anymore.

bhj

How it should be done (0)

Anonymous Coward | more than 9 years ago | (#9853427)

Automated in-house testing tools (for the UI all the way down), combined with an army of testers walking through every criterion manually. So to answer your question, we do it both ways. And that's all testers; developers don't get involved with the tests. Having developers test their own code is like having an author edit his own work.

Test Matrix (4, Interesting)

Pedro Picasso (1727) | more than 9 years ago | (#9853428)

I've used auto-test thingies, ones that I've written, and packaged ones. Some situations call for them. Most of the time, though, it's just a matter of doing it by hand. Here's what I do.

Create a list of inputs that includes two or three normal cases as well as the least input and the most input (boundaries). Then make a list of states the application can be in when you put these values into it. Then create a grid with inputs as X and states as Y. Print your grid and have it handy as you run through the tests. As each test works, pencil in a check mark in the appropriate place.

Now that you've automated the system to the point where you don't need higher brain functions for it, get an account on http://www.audible.com, buy an audio book, and listen to it while you run through your grid. It still takes a long time, but your brain doesn't have to be around for it.

This is going to sound incredibly elementary to people who already have test methodologies in place, but when you need to be thorough, nothing beats an old fashioned test matrix. And audiobooks are a gift from God.

(I'm not affiliated with Audible, I just like their service. I'm currently listening to _Stranger in a Strange Land_ unabridged. Fun stuff.)
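
For readers who want that grid in executable form, here is a minimal, self-contained sketch of the same idea as a data-driven JUnit test. The Account class and its deposit rule are toys invented for illustration; in practice the expected-results table is filled in from your spec, exactly like the paper grid.

import junit.framework.TestCase;

// Sketch only: the inputs-by-states matrix from the comment above, expressed as
// two nested loops over a table of expected results. Everything here is a toy.
public class DepositMatrixTest extends TestCase {

    // Toy stand-in for the real component under test.
    static class Account {
        private final boolean frozen;
        Account(boolean frozen) { this.frozen = frozen; }
        boolean deposit(long amount) { return !frozen && amount > 0; }
    }

    // Columns of the matrix: boundary and typical inputs.
    private static final long[] AMOUNTS = { 0, 1, 500, Long.MAX_VALUE };
    // Rows of the matrix: the states the application can be in.
    private static final boolean[] FROZEN = { false, true };
    // Expected result for each (state, input) cell, copied from the paper grid.
    private static final boolean[][] EXPECTED = {
        { false, true, true, true },      // active account
        { false, false, false, false },   // frozen account rejects everything
    };

    public void testDepositMatrix() {
        for (int s = 0; s < FROZEN.length; s++) {
            for (int a = 0; a < AMOUNTS.length; a++) {
                Account account = new Account(FROZEN[s]);
                assertEquals("frozen=" + FROZEN[s] + " amount=" + AMOUNTS[a],
                             EXPECTED[s][a], account.deposit(AMOUNTS[a]));
            }
        }
    }
}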

Automation is difficult (5, Informative)

TheSacrificialFly (124096) | more than 9 years ago | (#9853433)

Automation is extremely difficult to develop and maintain on any enterprise-size system. One of the big problems management has with automation, I've found, is that once they've put the money into initially developing the automation, they think it will run completely automatically forever more.

From my personal experience at one of the world's largest software companies, automation maintenance for even a small suite (200-300 tests, 10 dedicated machines) is a full time job. That means one person's entire responsibility is making sure the machines are running, making sure the tests aren't returning passes and fails for reasons other than actually exercising the product, and making changes to the automation whenever either the hardware or the software changes. This person must know the automation suite, as well as the tests being performed, intimately, and must also be willing to spend his days being a lab jockey. It's usually pretty difficult to find these people.

My point here is that even after spending many dev or test hours developing automation, in no way is it suddenly complete. There is no magic bullet to replace a human tester; the only thing you can do is try to improve his productivity by giving him better tools.

-tsf

Re:Automation is difficult (4, Insightful)

Bozdune (68800) | more than 9 years ago | (#9853546)

Parent has it right. Most automation efforts fail because the test writers can't keep up with the code changes, and not many organizations can pay QA people what you need to pay them if you expect them to be programmers (which is what they need to be to use a decent automation tool). Plus, one refactoring of the code base, or redesign of the UI without any refactoring of the underlying code, and the testers have to throw out weeks of work. Very frustrating.

Even in the best case, automation scripts go out of date very quickly. And, running old scripts over and over again seldom finds any bugs. Usually nobody is touching the old functions anyway, so regression testing is largely pointless (take a lude, of course there are exceptions).

I think the most promising idea on improving reliability I've seen in recent years is the XP approach. At least there are usually four eyes on the code, and at least some effort is being put into writing unit test routines up front.

I think the least promising approach to reliability is taken by OO types who build so many accessors that you can't understand what the hell is really going on. It's "correctness through obfuscation." Reminds me of the idiots who rename all the registers when programming in assembly.

Re:Automation is difficult (0)

Anonymous Coward | more than 9 years ago | (#9853589)

Exactly. Not only that, but most test suites such as WinRunner or Rational Robot are scripted. That means you need a QA tester who can also hack some code. Most testers who can code would rather be developers, so there are slim pickings when it comes to staffing your QA department.

As an ex-tester at several different companies, testing several different products, we always did it the "hard" way: write our own human-readable instructions and run through them manually. Even manual test scripts fall out of date quickly, though.

The only way to test enterprise software is with a test team doing the hard graft and writing their own tests. Trying to do it any other way is both silly and doomed to failure.

Re:Automation is difficult (1)

MCZapf (218870) | more than 9 years ago | (#9853697)

I'd only use automation if it saves a LOT of time. As you hinted, the automation is a whole development project in itself, which means it requires development AND TESTING of its own. Yes, you have to thoroughly test the automation to make sure it's actually doing the tests properly: is it generating the right inputs and checking the outputs? Even then, if automated tests fail, you still have to wonder (and investigate) whether it's a problem with the automated test itself or a problem with the tested software. Worse, if an automated test program, due to some bug, passes a test case that actually fails, you might not find out about it until too late.

Where I work, new releases of the automated test tools (developed in-house) come out more often than the actual application we have to test. This means we have to validate the test tools every time: running the test with the tool, but checking everything manually as we do it. It's a pain.

Manually performing a complete test (1)

Tony-A (29931) | more than 9 years ago | (#9853438)

is an oxymoron.
It would be difficult even if there were no logic and no interactions.

Scripts will be of some help.
Probably best will be long complicated sequences of operations whose net effect must logically be precisely nothing.

If you're lazy like me, integrate first, and make every module responsible for not going berserk in the presence of buggy neighbors. Too much software seems designed on the assumption that everything else is perfect. The old game of "gossip" shows that even if everything is perfect, your idea of perfect and my idea of perfect may not perfectly coincide.

Re:Manually performing a complete test (1)

guitaristx (791223) | more than 9 years ago | (#9853563)

Really, the length and frustration of the testing phase are inversely related to the detail in the design (and adherence to the design). The better you design it, the easier the testing is. Modularize! Unit test per module! That way, you can use a process of elimination: "The bug isn't here, we tested this module thoroughly." It also helps to require design docs to be written as engineers go, so that you know exactly what the code is doing when you test it. Otherwise, it's hard to identify border cases, bottlenecks, etc.
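
As a small, self-contained illustration of that per-module approach (the TaxCalculator below is invented for the example, not taken from the comment), a JUnit test pins down one module so that a later system-level failure can rule it out:

import junit.framework.TestCase;

// Sketch of "unit test per module": a tiny tax module with its own test, so a
// wrong total at the system level lets you say "the bug isn't here".
public class TaxCalculatorTest extends TestCase {

    // Toy module under test; in a real project it lives in its own source file.
    static class TaxCalculator {
        /** Returns the tax in cents, rounded to the nearest cent. */
        long taxOn(long amountCents, double rate) {
            return Math.round(amountCents * rate);
        }
    }

    public void testRoundsToNearestCent() {
        TaxCalculator calc = new TaxCalculator();
        assertEquals(825, calc.taxOn(10000, 0.0825)); // $100.00 at 8.25% is $8.25
        assertEquals(8, calc.taxOn(99, 0.0825));      // 8.1675 cents rounds to 8
    }

    public void testZeroAmountHasZeroTax() {
        assertEquals(0, new TaxCalculator().taxOn(0, 0.0825));
    }
}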

XDE Tester (1)

JavaPunk (757983) | more than 9 years ago | (#9853449)

I dunno about anybody else, but my dad works on AUIML (Abstract User Interface Markup Language Toolkit), a large GUI back end for iSeries and for other Java front-end and web back-end admin-type tools. It's written entirely in Java, and there is an Eclipse plugin to output the XML GUI code (you can find it on IBM's alphaWorks web site for a free trial). My dad does all the testing with XDE Tester from Rational (owned by IBM, how nice!). XDE can test both the web and the Java sides of this. The tester is really cool; test cases are written in XML. XDE has some odd problems with the Internet Explorer Java plugin, and he has horror stories about trying to get it to work (he seems to be the only person around capable of getting it to run :) ). He now runs all his test cases overnight on a headless machine (they take a couple of hours) and checks the error reports in the morning to know what he has to fix.

Model based testing (1)

d0st03vsky (550442) | more than 9 years ago | (#9853468)

A trend in the past few years that has been successfully implemented in industry is Model Based Testing. Basically, you define what it is you're testing, what you can do to it, and what the expected results of those actions would be. Once you have a complete, valid model, you can then create any testing you want, simply by asking the model, "if you're like this, and I do this, what should happen?" This can be done at an API level or an enterprise level.

The fun development part is that once you've built the model, you can use any technology to develop the automation to run the resulting test cases.
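
A bare-bones sketch of what that can look like in code (the login model, the action names, and the Session stand-in below are all made up for illustration): the model answers "what should happen?", and the test replays an action sequence against the implementation, checking the two against each other at every step.

import junit.framework.TestCase;

// Minimal model-based-testing sketch: a model of a login session predicts the
// next state for each action; the test walks a sequence and checks the
// implementation against the model after every step. Everything here is a toy.
public class LoginModelTest extends TestCase {

    static final String OUT = "LOGGED_OUT", IN = "LOGGED_IN";

    // The model: expected next state for (current state, action).
    static String modelNext(String state, String action) {
        if (state.equals(OUT) && action.equals("login"))  return IN;
        if (state.equals(IN)  && action.equals("logout")) return OUT;
        return state;  // any other action should be ignored
    }

    // Stand-in for the real system; in practice this is an adapter to your API.
    static class Session {
        private boolean loggedIn;
        void perform(String action) {
            if (action.equals("login"))  loggedIn = true;
            if (action.equals("logout")) loggedIn = false;
        }
        String state() { return loggedIn ? IN : OUT; }
    }

    public void testActionSequenceAgainstModel() {
        String[] actions = { "login", "login", "logout", "logout", "login" };
        String expected = OUT;
        Session session = new Session();
        for (int i = 0; i < actions.length; i++) {
            expected = modelNext(expected, actions[i]);
            session.perform(actions[i]);
            assertEquals("after step " + i + " (" + actions[i] + ")",
                         expected, session.state());
        }
    }
}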

Dedicated test staff (1)

smoyer (108342) | more than 9 years ago | (#9853478)

We too use dedicated test engineers who write the test plans and at least oversee the testing (they do most of it themselves). We are working on several different automated test ideas at present. Automatic regression testing (our code is in Java, and each class and package has JUnit test code) will be accomplished during each build cycle.

As you noted, it is more difficult to test the entire system end-to-end. Our problem is complicated by the fact that we don't have the infrastructure to completely load test the system. Load testing of the web interface is another class of problem that can be solved fairly easily ... and if the tests are selected properly, you can exercise your middle-ware and back-end DB server at the same time.
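
For what it's worth, a common way to hang per-class JUnit tests off a build cycle like that is to roll them up into per-package suites the build can run. A sketch, with placeholder test-class names (InvoiceTest and PaymentTest stand for whatever tests the package actually contains):

import junit.framework.Test;
import junit.framework.TestSuite;

// Sketch of a per-package regression suite; the test classes added here are
// placeholders, not real classes from the parent comment.
public class BillingPackageTests {
    public static Test suite() {
        TestSuite suite = new TestSuite("billing regression suite");
        suite.addTestSuite(InvoiceTest.class);
        suite.addTestSuite(PaymentTest.class);
        return suite;
    }
}

A nightly build can then invoke java junit.textui.TestRunner BillingPackageTests (or Ant's junit task) and treat any failure as a broken build.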

Automated process (1, Insightful)

cubicledrone (681598) | more than 9 years ago | (#9853479)

Here is the standard management response to automating anything:

"We don't have time for that. Just get the QA testing complete so we can start the layoffs."

This basically makes the entire question of automating processes academic. Now, if automating processes can lead to massive job loss, salary savings and bonuses, it might actually be approved.

Long-term value is never EVER approved instead of short-term pocket-stuffing, EVEN IF a business case can be made for it. I've seen near-perfect business cases (complete financials, charts, graphs, blow-dried corporate phone-flipping management prick with a light pointer hosting Wheel of Buzzwords in the conference room) made for automating very expensive work schedules, and they were a) ignored or b) shouted down.

Based on this, it's very possible that even if an automated tool could be built, and worked, it would still be ignored because it was non-standard. Yes, I've seen this happen too. Five people assigned to a project that a Perl script could do in a half hour. The Perl script completes the job, but management refuses to believe the results are accurate, so they keep the five people working on the same project for four days... and produce the exact same results.

Now let's all sing the company song...

That's a bunch of crap (1)

melted (227442) | more than 9 years ago | (#9853841)

While it is true that companies are starting to realize that automation can help them with the quality of their software, companies that lay off testers or don't perform manual ad-hoc testing on top of what's automated are most likely poorly managed.

The thing is, once you automate something, your automation will keep walking the same code path that was fixed after you logged your bugs the first time you ran that test case. It is very likely that the code paths you're testing will never be touched again and therefore will never break again.

There is a possibility that interaction of the API that you've tested with other APIs will cause it to fail, though, and automation will catch that (if it's good) so any time spent on automation is time well spent.

My point here is, there MUST be ad-hoc testing performed on the product, and the only purpose of automation is to free up testers' time to do more ad-hoc stuff, or automate large matrices of testing that would be prohibitively expensive to test manually more than once. Anyone who relies completely on automation is NOT TESTING THEIR PRODUCT WELL!

To add to that, any sufficiently sophisticated automation infrastructure requires serious maintenance. It is not uncommon to have a dozen test case failures every time you run it, and half of the time these failures are the test case's fault.

So it's not like you can fire everyone and have an untrained monkey do everything else. Anyone who thinks like this is a fucking moron.

6 year experience in QA (5, Insightful)

cemaco (665884) | more than 9 years ago | (#9853485)

I worked 6 years as a Quality Assurance Specialist. You cannot avoid manual testing of a product. Standard practice is to manually test any new software and automate as you go along, to avoid having to go over the same territory each time there is a new build. You also automate specific tests for bugs found after they are fixed, to make sure they don't get broken again. My shop used Rational Robot from IBM. There are a number of others; Silk is one I have heard of, but never used.

Developers often have an attitude that Q.A. is only a necessary evil. I think part of it is because it means admitting that they can't write perfect code. The only people I have seen treated worse are the help desk crowd (another job I have done in the past). The workload was terrible, and when layoff time came, who do you think got the axe first?

As for developers doing their own testing? That would help some, but not all that much. You need people with a different perspective.

Re:6 year experience in QA (1)

moblsv (801893) | more than 9 years ago | (#9853759)

Our shop also uses Robot. Record/playback is very difficult to maintain, but we have a solution that has worked well for us. We separate the pages within our product into object maps and then build the Robot recognition strings from these maps at runtime. This way we can update an object in our map when it changes in the product and automagically fix all the scripts. Our test department is fortunate to have developers who truly understand the importance of testing. Unfortunately, we have also been caught up in corporate consolidation and have been bought out twice in the past two years. Now, even though our location values testing, the people who hold the purse strings think of testing as nothing more than a bottleneck. The end result is a great automated testing framework and not enough commitment to really use it.

TDD (3, Insightful)

Renegade Lisp (315687) | more than 9 years ago | (#9853490)

I think one answer may be Test Driven Development (TDD). This means developers are supposed to create tests as they code -- prior to coding a new feature, a test is written that exercises the feature. Initially, the test is supposed to fail. Add the feature, and the test passes. This can be done on any level, given appropriate tools: GUI, End-to-End, Unit Testing, etc. Oh, did I mention JUnit? The tiniest piece of code with the most impact in recent years.

I came across this when I recently read the book by Erich Gamma and Kent Beck, Contributing to Eclipse. They do TDD in this book all the time, and it sounds like it's actually fun.

Not that I have done it myself yet! It sounds like a case where you have to go through some initial inconvenience just to get into the habit, but I imagine that once you've done that, development and testing can be much more fun altogether.
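
For the curious, one loop of the cycle looks roughly like this with JUnit; the ShoppingCart class is invented for the example. The test is written first and initially doesn't even compile (the "red" step), then the smallest ShoppingCart that passes is written (the "green" step), then you refactor with the test as a safety net.

import junit.framework.TestCase;

// Step 1 of the TDD loop: this test is written before ShoppingCart exists,
// so at first it doesn't even compile -- that's the "red" part of red/green.
public class ShoppingCartTest extends TestCase {

    public void testNewCartIsEmpty() {
        ShoppingCart cart = new ShoppingCart();
        assertEquals(0, cart.itemCount());
    }

    public void testAddingAnItemIncreasesTheCount() {
        ShoppingCart cart = new ShoppingCart();
        cart.addItem("SKU-1234");
        assertEquals(1, cart.itemCount());
    }
}

// Step 2: the simplest implementation that makes the tests pass.
class ShoppingCart {
    private final java.util.List items = new java.util.ArrayList();
    void addItem(String sku) { items.add(sku); }
    int itemCount() { return items.size(); }
}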

Re:TDD (0, Troll)

r.jimenezz (737542) | more than 9 years ago | (#9853811)

It's not that fun :)

I read the book. I had to write an Eclipse plugin recently and decided to give TDD a test drive (I had developed a previous, non-plugin project using TDD before).

The thing is, when you are working with something like Eclipse, i.e. huge and vastly undocumented (the book is good, but doesn't cover many important issues), TDD is good because it helps you choose a path. It allows you to divide your work, is an invaluable help in designing and certainly boosts your confidence for refactoring and stuff like that.

However, if you have a clear idea of what to do, TDD becomes rather boring and tedious. What's worse, it's very low-level testing for most of it and does little to help you with usability and acceptance testing. And as many other posters point out, the best way to get testing done is to have independent people do it.

That said, TDD certainly has a place in the development process, and one thing I do appreciate about it is that proponents seem to push it more sensibly than XP.

i hate to be obvious, but (0, Troll)

wobblie (191824) | more than 9 years ago | (#9853492)

duh, open source

best testing there is

Re:i hate to be obvious, but (0)

Anonymous Coward | more than 9 years ago | (#9853623)

Do be quiet whilst the grownups are talking.

We have automated testing for our system at work.. (0)

Anonymous Coward | more than 9 years ago | (#9853503)

they are called co-ops ;)

Automation versus Manual Testing (5, Insightful)

kafka47 (801886) | more than 9 years ago | (#9853511)

At my company, we have a small QA group that tests several enterprise client-server applications, including consumer-level applications on multiple platforms. To exhaustively test all of the permutations and platforms is literally impossible, so we turn to automation for many of the trivial tasks. We've developed several of our own automation harnesses for UI testing and for API and data verification testing. The technologies that we've used:
- Segue's SilkTest [segue.com]
- WinRunner [wilsonmar.com]
- WebLoad [radview.com]
- Tcl/Expect [nist.gov]

There are *many many* problems with large-scale automation, because once you develop scripts around a particular user interface, you've essentially tied that script to that version of your application. So this becomes a maintenance problem as you go forward.

One very useful paradigm we've employed in automation is to use it to *prep* the system under test. Many times it's absolutely impossible to create 50,000 users or 1,000 data elements without using automation in some form. We automate the creation of users, we automate the API calls that put each user into a particular state, and then we use our brains to do the more "exotic" manual testing that stems from the more complex system states we've created. If you are going to embark on automating your software, this is a great place to start.

Hope this helps.
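
A rough sketch of that "prep" step is below. The AdminApi class, its method names, and the URL are stand-ins for whatever your product actually exposes; nothing here comes from the parent comment.

// Hypothetical sketch: drive the product's own administrative API to bulk-load
// users and push some of them into interesting states before a manual session.
public class PrepareTestEnvironment {
    public static void main(String[] args) throws Exception {
        AdminApi api = AdminApi.connect("https://qa-env.example.com");  // stand-in API
        for (int i = 0; i < 50000; i++) {
            String name = "loaduser" + i;
            api.createUser(name, "changeme");
            if (i % 3 == 0) {
                api.suspend(name);  // a third of the users start in a non-default state
            }
        }
        System.out.println("environment primed; start the exploratory testing");
    }
}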

Re:Automation versus Manual Testing (0)

Anonymous Coward | more than 9 years ago | (#9853789)

A good design for the automated tests is essential. I've investigated automated testing for the product that I work on, but there isn't enough time in the schedule. However, I talked to somebody who is implementing automated tests for another product, and he's picked up a few of my ideas.

My number one assertion about automated test development is that it is no different than any software development. If it's a good design principle for regular software, it's a good design principle for your test scripts.

The best example would be the sticky problem of UIs. That always gives automated testing headaches, to the point that many people give up on automation. The thing is, if their scripts are designed correctly, they can minimize the impact of UI changes. The key is to do the same thing you do with any software: separate the UI from the business logic. The code that interfaces with the UI goes in one module. The code that tests the application functionality goes in another. If the UI changes, you change the UI module. If the business logic changes, you change the business logic module.

That's just one example. There are other ways to make good tests, such as code libraries, data hiding, and good variable names. If it applies to regular software development, it applies to automated test development.

Don't forget that automated testing isn't going to catch much in comparison to manual tests. A computer isn't as capable of asking "what if" as a person is, isn't very good at recognizing odd states, and is certainly incapable of determining the cause of a defect. No computer can figure out the 50 obscure steps that made your software start shooting naked lawn gnomes out of the floppy drive.
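
A skeleton of the UI-versus-business-logic split described a couple of paragraphs up (UiDriver is an invented stand-in for whatever tool actually drives the GUI, and the control names are made up too): the screen module is the only code that knows where the controls live, while the test only knows what the behaviour should be.

// The screen module: the only code that knows the control names.
class LoginScreen {
    private final UiDriver ui;              // stand-in for the GUI-driving tool
    LoginScreen(UiDriver ui) { this.ui = ui; }

    void logIn(String user, String password) {
        ui.type("username_field", user);
        ui.type("password_field", password);
        ui.click("ok_button");
    }
    boolean showsError() { return ui.isVisible("error_label"); }
}

// The behaviour test: survives UI redesigns as long as LoginScreen is updated.
public class LoginBehaviourTest extends junit.framework.TestCase {
    public void testWrongPasswordIsRejected() {
        LoginScreen screen = new LoginScreen(UiDriver.attach("MyApp"));  // stand-in
        screen.logIn("alice", "not-the-password");
        assertTrue(screen.showsError());
    }
}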

Design it for testability (1)

ortholattice (175065) | more than 9 years ago | (#9853513)

Design into the app, up front, the ability for every function and transaction to be scripted. Have diagnostic hooks that can interrogate and verify the state of the app, verify the integrity of database transactions, etc. Then all of the testing can be automated except for the GUI itself.

As a bonus you'll end up with a scriptable app that advanced users will love.
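
One way to phrase those hooks in code -- purely illustrative, not taken from the parent post -- is a small interface that every major component implements, which a scripted run (or an advanced user) can walk after each transaction:

// Illustrative only: a diagnostic hook each component implements so scripted
// tests can interrogate and verify internal state without going through the GUI.
public interface Diagnosable {
    /** Machine-readable snapshot of internal state (queue depths, cache sizes, ...). */
    String dumpState();

    /** Cross-checks invariants such as row counts or balanced totals. */
    boolean verifyIntegrity();
}

A test script can then drive a transaction through the scripting interface and immediately call verifyIntegrity() on every registered component, failing the run on the first false result.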

Re:Design it for testability (1)

kafka47 (801886) | more than 9 years ago | (#9853532)

When we committed our entire Engineering department to the path of automation, the product itself obviously had to change to accommodate our requirements (we have our own in-house automation harness). Going forward, we've posted a slogan in the cubicle of every QA person:

"If you can't automate it... its a bug.

/kafka

Manual work and automated testing tools (3, Informative)

Anonymous Coward | more than 9 years ago | (#9853517)

I was part of a team at a major global company that looked into automated testing tools for testing GUI front-end web applications. These were both staff- and customer-facing applications.

We evaluated a bunch of tools for testing the front end systems, and after a year long study of what's in the marketplace, we settled on the Mercury Interactive [note: I do not work for them] tools: QuickTest Professional for regression testing, and LoadRunner for stress testing.

No one product will test both the front and back ends, so you will need to use a mixture of these tools, open source tools, system logs, manual work, and some black magic.

Our applications are global in scope. We test against Windows clients, mainframes, UNIX boxes and mid-range machines.

The automated tools are definitely a blessing, but are not an end-all-be-all. Many of the testing tool companies just do not understand "global" enterprises, and working to integrate all the tools amongst all your developers, testers, operation staff, etc can be difficult.

Getting people on board with the whole idea of automated tools is yet another big challenge. Once you have determined which toolset to use, you have to do a very serious "sell" job on your developers, and most notably, your operations staff, who have "always done their testing using Excel spreadsheets".

Another big problem is that no product out there will test both the front end and the back end. You will have to purchase tools from more than one vendor to do that. A tool for the back-end iSeries, for example, is Original Software's TestBench/400. Again, this does not integrate with Mercury's tools, so manual correlation between the two products will be needed.

You can only go so far with these tools; they will not help you to determine the "look and feel" across various browsers - that you will have to do yourself.

QuickTest Professional does have a Terminal Emulator add-in (at additional charge) that allows you to automate mouse/keystrokes via Client Access 5250 and other protocols.

The best way to determine your needs is to call up the big companies (CA, IBM, Mercury) and have them do demos for your staff. Determine your requirements. Set up a global team to evaluate the products. Get demo copies and a technical sales rep to help you evaluate in your environment. Compare all the products, looking at the global capability of the product as well as 24/7 support.

No tool is perfect, but it is better than manual testing. Once everybody is on board, and has been "sold" on the idea of the tools, you won't know how you lived without them.

Also, make sure that you have a global tool to help in test management to manage all your requirements and defects and test cases. Make sure it is web-based.

Expect to spend over a year on this sort of project for a global company. Make sure you document everything, and come up with best practice documentation and workflows.

Good luck!

Solution (1)

Anonymous Coward | more than 9 years ago | (#9853518)

We have a special group of people at our company to test our new releases. They're called "users".

Re:Solution (0)

Anonymous Coward | more than 9 years ago | (#9853548)

We have a special group of people at our company to test our new releases. They're called "users".

We have those too and they're worthless. The moment you give them anything to test, they break it. Give me some nice predictable reliable testing software any day!

There is no single answer (1)

Triscuit (122259) | more than 9 years ago | (#9853535)

Hundreds of books and resources have been written/made available on the subject. You won't find the "right" answer for "you" here. No one here knows your environment/situation/funding/etc. I suggest following the leads the folks here are pointing you to and go from there.

Words of wisdom: don't assume automated testing is the best choice for everything.

Test engineers (0)

Anonymous Coward | more than 9 years ago | (#9853538)

I am a test engineer at a very large company. I work in the integration and test department. My job is to test the components of the system working together as a whole, ensuring that everything has been implemented in a compatible way. Developers are responsible for unit testing, so what they give us should function the way it is supposed to (not that it does).

First there are the system engineers, who spend their time writing specs -- thousands of pages of specs detailing in exact detail every part of the system. The developers then take these specs and implement the system. The system is very large, consisting of many software and hardware components, using everything from VAX/VMS mainframes to Windows CE clients, with different groups responsible for different portions of the system. The developers implement the parts as they interpret the specs and throw out things here and there that aren't implementable. Then we get the releases and set everything up in the lab, completely mirroring a production environment. We also have a tools team which writes tools to give us automation; most of these tools are Perl modules which we testers can use to write Perl scripts that exercise parts of the system. The developers, in coordination with our tools and test teams, try to give us hooks or APIs into the system so we can test at all layers, from the UI to the service layers, transport layers, etc. Most of the time they give us a network socket that we can send XML to, which the application will interpret, calling the appropriate methods. (I am trying to convince them to use something like SOAP or XML-RPC, but some parts of the system are embedded and HTTP might be too much overhead.)

We test engineers write test plans based on the specs and go through and make sure every part of the system is to spec and that all parts of the system interpret the spec in the same way, preventing incompatibilities.

The goal is that during development we come up with a complete suite of test cases, Perl scripts, and other tools for automation which can be used in the maintenance portion of the system's life for regression testing.
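
To make the XML-over-a-socket hook mentioned above concrete, here is a rough client-side sketch in Java; the host, port, and XML vocabulary are invented, and a real harness would parse the reply rather than just print it.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.Socket;

// Sketch of a test-hook client: push one XML command at the service's test
// port and read back its one-line reply. All names and values are made up.
public class XmlHookClient {
    public static void main(String[] args) throws IOException {
        Socket socket = new Socket("lab-host.example.com", 9099);
        try {
            Writer out = new OutputStreamWriter(socket.getOutputStream(), "UTF-8");
            out.write("<command name=\"createOrder\"><item sku=\"ABC-123\" qty=\"2\"/></command>\n");
            out.flush();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream(), "UTF-8"));
            System.out.println("service replied: " + in.readLine());
        } finally {
            socket.close();
        }
    }
}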

You might ask how I like this job; it's pretty cool. I just graduated with a B.S. in computer engineering, I am making 55k a year, and I get to break stuff all day and play with cool stuff. I hope to eventually move to the tools team and then to development, but for now this is a great way to get familiar with the company and its processes.

Software Testing (1)

pipingguy (566974) | more than 9 years ago | (#9853544)


I suggest you beta test it with the end user and see what they have to say. Remember that THEY will be the ones that will have to use it on a day-to-day basis.

It seems to be a fait accompli that most businesses will accept IT bullshit, but the engineering companies (umm, people that build bridges and refineries, not engineering wannabees) are a lot less malleable when some computer specialist comes calling.

Why do you think (2, Funny)

Black Noise (683584) | more than 9 years ago | (#9853554)

Why do you think g0d made interns?

Re:Why do you think (0)

Anonymous Coward | more than 9 years ago | (#9853801)

There's no faster way to produce really crappy software...

And I say this as an intern working for a company that thought that way until very recently. Please. Do not underestimate the value of experienced testers.

Mercury Test Director and Winrunner (1)

hughk (248126) | more than 9 years ago | (#9853555)

I have been using the Mercury Interactive suite of tools in recent times. They work very nicely for applications with a Windows front end. Unfortunately, TD works using an ActiveX front end and requires IE (the plugin on Mozilla will not work).

To be honest, many of the things they do could easily be done by something else, but QA/testing may not seem to be the most interesting area for open source developers.

Testing? (1)

melonman (608440) | more than 9 years ago | (#9853560)

  1. Finish last line of code 30 minutes before deadline
  2. Set email autoresponder to pick a helpdesk response from:
    • You probably have a virus, please reinstall Windows
    • This is a known problem with the latest Windows Update, please reinstall Windows
    • The software will only crash if you have been downloading porn on work time, please leave your extension number if you wish us to investigate your hard disc in detail

  3. Send memo to line manager explaining that there is currently no technical documentation for the project, but that this shouldn't be a problem as long as the current team remains intact. Copy to personnel
  4. Book vacation on Lastminute.com

It starts with design (1)

meburke (736645) | more than 9 years ago | (#9853580)

I've been programming since 1965, when I started doing cryptology in assembly language (AUTOCODER) on IBM 1401's. EACH segment, module or object needs to be tested for internal consistency and a table of results should be generated by each programmer before delivering the responsible section. Then, as has been mentioned, a separate testing team should test at each stage of integration before delivering the system for customer/client testing.

I've struggled with a methodology for development of object-oriented systems. When I first started to learn to program, we planned the system first and developed flowcharts before we actually wrote any code. If the design was right, coding from the flowchart was very easy, and testing was pretty easy also. I have been struggling to get the same results from UML, and finally found a book called "Tried and True Object-Oriented Development: Practical Approaches with UML" by Aalto et al., and this is how I choose to develop software today. (I do more formal use-case diagrams, and I'm not above letting Rose or Visio generate the first couple of code iterations.) Their method, developed at Nokia, is a logical integration of development and testing.

Mike

OpenSTA (1)

alanw (1822) | more than 9 years ago | (#9853594)

Shortly before it went tits-up in the aftermath of Y2K (lots of testing in 1999, not so much afterwards) and the bursting of the dot-com bubble, one of my previous employers decided to release the software testing application they had developed under the GPL. It's called OpenSTA [opensta.org] and it's available at SourceForge [sourceforge.net].

It's designed to stress test web pages, and analyse the load on web servers, database servers and operating systems.

There is also a new company - Cyrano [cyrano.com] that has risen from the ashes of the old one, and provides many other testing tools, including regression testing.

I Wonder if Anybody Does It (1)

Lucas Membrane (524640) | more than 9 years ago | (#9853595)

I've had some exposure lately to a system for which automated testing is far from imaginable. I wonder if and how anyone could test a system like this automatically:
  • Inputs Come From: web browser, green screens via curses wrapped by proprietary mumbo, EDI, phone system, email, scanners, tablets to get handwritten input, etc
  • Outputs go to: web browser, green screens via curses wrapped by proprietary mumbo, EDI, email, phone system, proprietary 3rd-party document storage
  • Database: Mostly proprietary storage in oddball proprietary data files, some relational DB's
  • Age of system: Almost 30 years
  • Number of functions: Enterprise-wide support of enterprises of up to about 1000 users, say six departments and six major subsystems
  • Number of screens: hundreds of major and thousands of minor
  • System architecture: Pathological. More global connections than Mullah Omar. Fix something here, and something way over there breaks.
  • Lines of code: Unknown. Maybe 5 million.
  • Programming languages: Multiple

What's the silver bullet for such?

Re:I Wonder if Anybody Does It (1)

hey (83763) | more than 9 years ago | (#9853620)

You need to slowly begin rewriting it into something more sane. Like Java.

Re:I Wonder if Anybody Does It (1)

triso (67491) | more than 9 years ago | (#9853817)

What's the silver bullet for such?
What is gradual replacement of the most fragile subsystems, Alex?

I was trying to guess what this system is and concluded that it must be FedEx's, or some other courier company's, package tracking system. Am I even close?

Re:I Wonder if Anybody Does It (0)

Anonymous Coward | more than 9 years ago | (#9853823)

There isn't. The closest thing to a silver bullet is mathematical proof of correctness. It's still a new concept, and even when it's developed enough to be a common discipline, it will only work in certain situations.

The other poster is correct. Your best bet to save this system is to rebuild it in a sane way.

use Perl ;) (0)

Anonymous Coward | more than 9 years ago | (#9853614)

I've tried WinRunner, Silk, and just about every automated tool available. The biggest drawbacks were the price and the learning curve. In the end, I developed an app (completely in Perl) that tested our product (which consisted of a web portal and a Windows app on a multi-tiered setup).

I started with a web frontend where tests could be launched and results could be kept (using MySQL). The client machine (which runs the tests) would poll the MySQL db to see if any tests were available. The tests themselves were also in Perl. I used the following Perl modules to accomplish testing:

Win32::GuiTest (for Windows testing)
Win32::IE::Mechanize (testing with IE)

I used a host of other modules, but those are the main ones. Since Perl is a pretty easy language to pick up, the entire QA staff began writing tests into the framework almost immediately.

The only area that I can't test with this Perl-based framework is Java apps/servlets (yet).

Moral of the story: Just roll your own. :)

My 0.02 VEB (1)

Hey_Bliss (777683) | more than 9 years ago | (#9853617)

I do not think a single approach is fit for every aspect of software testing. As you say, one can consider the user interface and the business logic to be at least two separate, independent items (and it is a good programming technique to keep them as separate as possible as well). Anyhow, I certainly believe business logic is better tested via automated software, since the rules for its inputs and responses are well-defined mathematical and logical rules that can be very thoroughly and quickly analyzed via software, catching errors in the infinitesimal details that the human eye would just not catch at first glance.

Currently I do not use any pre-made testing software for my development projects, and I would do well to find one. What I do is give the business rules and the API, or the ways to call my program, to a programmer friend, who codes a quick test program to try different valid and invalid values plus border values (the frontier between valid and invalid) and then checks whether the result is correct.
I give this to an independent programmer because if any flawed logic on my part (other than typing bugs) causes erroneous data, it is very possible I'll apply the same flawed logic to the test code and thus consider a really invalid response as a valid one; using a different coder to write the test code provides some safeguard against this situation.

User interfaces themselves are another aspect entirely. There is a data-correctness aspect, which can be tested automatically or by a coder who understands the system: you basically test whether the data entered in the UI is passed correctly to the system, and whether the data returned by the system is displayed correctly by the UI. But the UI also has a very different aspect, which is ergonomics, and ergonomics can only be tested by humans, and by the final users of the system per se. As programmers we do not always create software for programmers (in fact, less than 5% of my projects are ever intended to be used by programmers); for the rest we create software for administrators, HR, IT, engineers, managers, cashiers, etc. The work flow and thought processes of different professions differ; also, as non-experts in the area of application, there are details either small or huge that we can miss, the order of work might differ from what is needed, and responses could come in places where they are not useful. For that, it is always better to have a small group of end users, preferably ones you can trust (believe me, otherwise it can QUICKLY turn into hell for you and them), test the system and come back to you with any suggestions and errors they find. Also, more people on the system produce a higher chance of finding mistakes (misplaced items, misspellings, etc.) than a single person would; man-hours count for a lot when testing a large system.
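
The valid / border / invalid checks described above look something like this when automated; OrderRules and its 1-to-999 quantity rule are invented for the example.

import junit.framework.TestCase;

// Self-contained sketch of border-value testing for a made-up business rule:
// "an order quantity must be between 1 and 999 inclusive".
public class QuantityRuleTest extends TestCase {

    // Toy rule under test; in real life this is the business-logic module.
    static class OrderRules {
        static boolean isValidQuantity(int qty) { return qty >= 1 && qty <= 999; }
    }

    public void testValidBorderAndInvalidValues() {
        assertTrue(OrderRules.isValidQuantity(500));    // typical valid value
        assertTrue(OrderRules.isValidQuantity(1));      // lower border
        assertTrue(OrderRules.isValidQuantity(999));    // upper border
        assertFalse(OrderRules.isValidQuantity(0));     // just below the border
        assertFalse(OrderRules.isValidQuantity(1000));  // just above the border
        assertFalse(OrderRules.isValidQuantity(-5));    // clearly invalid
    }
}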

It's simple (1)

sw155kn1f3 (600118) | more than 9 years ago | (#9853618)

Once you separate the business logic and other standalone components, just do unit testing on them.
Your UI/network code should not contain spaghetti wiring logic! Separate out validation and similar logic so that it can also be unit-tested. If a button handler has 50 lines of code... you have much bigger problems in the first place; solve them first.
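
In the spirit of that advice, a hypothetical before-and-after: the checks that would otherwise live in a 50-line button handler become a plain class with its own unit test. The field names and rules below are made up.

import java.util.HashMap;
import java.util.Map;
import junit.framework.TestCase;

// The validation logic pulled out of the UI layer: no widgets, easy to unit-test.
class SignupValidator {
    /** Returns an error message, or null if the form is acceptable. */
    String validate(Map form) {
        String email = (String) form.get("email");
        String password = (String) form.get("password");
        if (email == null || email.indexOf('@') < 0) return "invalid email";
        if (password == null || password.length() < 8) return "password too short";
        return null;
    }
}

// The button handler now just collects fields, calls validate(), and shows the
// result -- and this test never has to touch the GUI at all.
public class SignupValidatorTest extends TestCase {
    public void testShortPasswordIsRejected() {
        Map form = new HashMap();
        form.put("email", "user@example.com");
        form.put("password", "abc");
        assertEquals("password too short", new SignupValidator().validate(form));
    }
}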

Re:It's simple (0)

Anonymous Coward | more than 9 years ago | (#9853840)

That helps but it's not complete. Unit testing shows that each module works individually in the cases that you've thought of. Do your modules have side effects? Is unit testing all you've done? Are you the only one to look at the tests you wrote? If you say yes to all of these questions (or even any two of them), then kiss your butt goodbye because your software's about to crash and burn.

Integrated Testing (2, Insightful)

vingufta (548409) | more than 9 years ago | (#9853621)

I work for a mid-sized systems company with about 2K employees, and we've struggled with testing our systems. There are a few reasons for this:
  • The system has grown overly complicated over the years, with tens of subsystems. There is some notion of ownership of the individual subsystems in both dev and QA, but no such thing exists when it comes to interoperability of these subsystems. Dev engineers are free to make changes to any subsystem they want, while QA does not feel that they should test anything outside their subsystem. What ends up happening is that we do rigorous subsystem testing but very little interoperability testing, which leads to a lot of issues in the field, since those represent more realistic customer scenarios.
  • As a consequence, management has been pushing the dev engineers to write test plans and automated tests for the functionality they're working on, since they believe that dev can do a better job at testing. This not only overloads the dev engineer but also decreases the testing quality, since I believe that developers cannot be solely held responsible for testing their own code. They are more likely to work under a lot of assumptions and will overlook a lot of bugs. Secondly, they will not think of the interactions between various subsystems, since they'll be concentrating on their own code.
  • Finally, it is very important that there are standard QA practices in a company. We've been lacking this, since each subsystem started its QA efforts individually and ended up developing tools and methods that did not fit with each other. We do have a common way of reporting the number of tests conducted vs. planned, but the quality of tests varies so significantly that those numbers make no sense.

I would like to know how people in other systems companies divide up testing work between Dev and QA. I would also be interested in learning more about the kind of tools people use to develop and track QA.

Use a QA team and test-driven development (4, Interesting)

William Tanksley (1752) | more than 9 years ago | (#9853629)

You need two things: first, people who are dedicated to testing and aren't concerned merely to uphold their pride in the code they wrote (this is a long way to say that you need a dedicated testing team that doesn't report to the coding team); and second, testable code. The best way to get the second needed item, in my experience, is to have your developers write their automated unit tests BEFORE they write the unit they're developing.

This is generally called "test-first" development. If you follow it, you'll find some nice characteristics:

1. Each unit will be easily testable.
2. Each unit will be coherent, since it's easier to test something that only does one thing.
3. Units will have light coupling, since it's easier to express a test for something that depends only lightly on everything else.
4. User interface layers will be thin, since it's hard to automatically test a UI.
5. Programmers will tend to enjoy writing tests a bit more, since the tests now tell them when they're done with their code, rather than merely telling them that their code is still wrong.
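
A minimal sketch of that test-first rhythm, in Python with hypothetical names; the class under test does not exist yet when its test is written:

    import unittest

    # Written first: this test describes what a not-yet-existing ShoppingCart
    # should do, and it fails until the class below is implemented.
    class ShoppingCartTest(unittest.TestCase):
        def test_total_sums_item_prices(self):
            cart = ShoppingCart()
            cart.add("apple", 1.50)
            cart.add("bread", 2.25)
            self.assertAlmostEqual(cart.total(), 3.75)

    # Written second, and only enough of it to make the test above pass.
    class ShoppingCart:
        def __init__(self):
            self._prices = []

        def add(self, name, price):
            self._prices.append(price)

        def total(self):
            return sum(self._prices)

    if __name__ == "__main__":
        unittest.main()

The test doubles as a definition of "done": when it passes, the unit does what was asked and nothing more.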

You can go a step further than this, and in addition to writing your tests before you write your code, you can even write your tests as you write your design. If you do this, your design will mutate to meet the needs of testing, which means all of the above advantages will apply to your large-scale design as well as your small-scale units. But in order to do this you have to be willing and able to code while you're designing, and many developers seem unwilling to combine the two activities in spite of the advantages.

-Billy

SilkTest (1, Informative)

siberian (14177) | more than 9 years ago | (#9853630)

We use SilkTest for automated testing as well as monitoring. Our QA guys love it, and it frees them up from doing routine system testing to focus on devising new and devious tests to confound the engineering teams!

Downside? It's Windows stuff AND it's hellaciously expensive.

Re:SilkTest (1)

_|()|\| (159991) | more than 9 years ago | (#9853820)

I've used SilkTest for the last five years, and I'm working on getting rid of it. I have not been able to develop an effective strategy for automated testing of GUIs, and it is not ideal for the rest of the automation I do (mostly CHUIs and APIs).

Good:

  • the 4Test scripting language
  • easy to call out to some Windows API functions

Mixed:

  • the outline-based test selection (I'd rather query a database)
  • the results view (I'd really rather put the results in a database)

Bad:

  • the cost (~$5,000 / seat)
  • the maintenance (~$1,000 / year)
  • the licensing scheme (node-locked license server)
  • the support (before and after they sent it to Ireland)
  • Windows only

Yet Another Entry From The Peanut Gallery (1)

psykotedy (750720) | more than 9 years ago | (#9853635)

Well, I have to say that everything here seems to make sense. Personally, I use a not-quite-purist version of extreme programming, in which I write unit tests like a madman. I also take the approach that everything I write is an API that will be used by others (even if it won't be, because I know I'm just stupid enough to forget how I coded something months ago and introduce some misuse of it). If you make sure all of your pieces are air-tight, then you should be okay. Integration testing is then used to test how well the little blocks play with one another.

As far as end-to-end testing, we have a testing staff that takes care of that. We perform load testing that is derived from actual user usage (as captured in the access log files), but that's about it. If we wanted to be really thorough, we could run light versions of load tests in the integration environment to simulate user behavior and see if that breaks things, but we don't currently.

Test Methodology (1)

DuhFace (801888) | more than 9 years ago | (#9853636)

No matter what test tools, methodologies or approaches you take, testing should always be about these things:

  • An independent look at the product (not performed by the developer).
  • A spec with discrete functions defined.
  • A plan that explains what you need to start testing, what you are going to test, and how you will report what you find.
  • A documented method for assigning severities to bugs.
  • A database to store bug reports in.
  • A set of test cases that focus on testing discrete functions.
  • And, most importantly, a commitment on the part of project management to review the bug list regularly and assign issues back to developers. Lots of testing gets done and then the results are ignored until someone yells about a particular problem.

As for automation, it's great, but it will always be behind the current development curve, too limited in scope, and dependent on the person who wrote it. The automated tests should just be there to prove that stuff that was already working didn't get broken in the last iteration.

I also think it's best to have as many users as possible interact with the product as early as possible. You'll learn so much more from watching them try to use the thing than you can any other way. Plus, the fear that the product is actually going to be used by customers is a great motivator for the team to focus on what matters: shipping a satisfying product.

jameleon (1)

primus_sucks (565583) | more than 9 years ago | (#9853649)

We've had good luck with Jameleon [sourceforge.net] . We use JUnit for the low level stuff and Jameleon to test the web front end. Of course, it's probably a good idea to use human testers before you go to production, but automated testing can cut down on the bugs that make it to the human testers.

Pragmatic Programmer series (1)

chiph (523845) | more than 9 years ago | (#9853650)

You need to buy a copy of the Pragmatic Programmer's starter kit [pragmaticprogrammer.com]

The third book in the series is about project automation, where they teach you how to repeat in a controlled manner all the stuff you learned in the first two books. The basics:

1) Write unit tests before writing your methods
2) Your unit tests should follow the inheritance tree of the classes under test, to avoid duplicate test code (see the sketch after this list).
3) Write a vertical "slice" of your application first (all the layers, none of the functionality). This will prove out communications and give the QA people something to work with while you flesh out the app.
4) Build & unit test nightly. Any build or unit-test errors need to be fixed the next day, and no later.
5) Release to QA as often as things get semi-stable, and when they have time to test.
6) Try not to ship with any known bugs. How do you know if you've got bugs? Your unit tests, integration tests, QA tools, and end-users tell you via a bug-tracking tool like BugZilla [bugzilla.org] or FogBugz [fogcreek.com]
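
One way to read point 2, sketched in Python with hypothetical classes: put the shared test cases in a base class and subclass it once per implementation, so every concrete class runs the same suite without duplicated test code.

    import unittest

    # Two hypothetical implementations of the same interface.
    class ListStack:
        def __init__(self):
            self._items = []
        def push(self, item):
            self._items.append(item)
        def pop(self):
            return self._items.pop()

    class LinkedStack:
        def __init__(self):
            self._head = None
        def push(self, item):
            self._head = (item, self._head)
        def pop(self):
            item, self._head = self._head
            return item

    class StackContractTest:
        """Shared tests; not a TestCase itself, so it isn't collected on its own."""
        def make_stack(self):
            raise NotImplementedError

        def test_pop_returns_last_pushed_item(self):
            stack = self.make_stack()
            stack.push(1)
            stack.push(2)
            self.assertEqual(stack.pop(), 2)

    class ListStackTest(StackContractTest, unittest.TestCase):
        def make_stack(self):
            return ListStack()

    class LinkedStackTest(StackContractTest, unittest.TestCase):
        def make_stack(self):
            return LinkedStack()

    if __name__ == "__main__":
        unittest.main()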

Do we do all this at my current employer? No. But we're working towards it.

Chip H.

Last few companies I worked for (1)

Orion Blastar (457579) | more than 9 years ago | (#9853655)

the users did the QA testing, just like Microsoft has its users do. We had QA people, but I am not sure what exactly they were doing, because many flaws and mistakes got past them. Not from my programs, though, but from the ones my coworkers wrote. I did my own QA testing and took longer to develop code; hence I was let go.

Testing (4, Interesting)

Jerf (17166) | more than 9 years ago | (#9853661)

Lemma One: It is impossible to comprehensively test the entire system manually.

Proof: For any significantly sized system, take a look at all the independent axes it has. For instance: the set of actions the user can take, the types of nouns the user can manipulate, the types of permissions the user can have, the number of environments the user may be in, etc. Even for a really simple program, that is typically at least 5 actions, 20 nouns, (let's estimate a minimal) 3 permission sets (no permission for the data, read only, read & write), and well in excess of 5 different environments (you need only count relevant differences, but this includes missing library A, missing library B, etc.). Even for this simple, simple program, that's 5*20*3*5 = 1,500 scenarios, and you can never be sure that none of those will fail in a bad way.

Even at one minute a test, that's 25 hours, which is most of a person-week.
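
The arithmetic is just the cross product of the axes, which a few lines of Python make easy to see (the axis values here are made up for illustration):

    from itertools import product

    actions = ["create", "read", "update", "delete", "search"]            # 5
    nouns = ["noun_%d" % i for i in range(20)]                            # 20
    permissions = ["none", "read-only", "read-write"]                     # 3
    environments = ["clean", "no-lib-A", "no-lib-B", "old-OS", "low-mem"] # 5

    scenarios = list(product(actions, nouns, permissions, environments))
    print(len(scenarios))                  # 1500 combinations
    print(len(scenarios) / 60.0, "hours")  # 25.0 hours at one minute per test

Add one more five-valued axis and you are at 7,500 scenarios; the total grows multiplicatively with every axis you add.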

Thus, if you tested an enterprise-class system for three days, you did little more than scratch the surface. Now, the "light at the end of the tunnel" is that most systems are not used equally across all of their theoretical capabilities, so you may well have covered 90%+ of the actual use, but only a vanishing fraction of the system's possible use cases. Nevertheless, as the system grows, it rapidly becomes harder to test even that 90%.

(The most common error here is probably missing an environment change, since almost by definition you tested with only one environment.)

Bear in mind that such testing is still useful as a final sanity check, but it is not sufficient. (I've seen a couple of comments that say part of the value of such testing is getting usability feedback; that really ought to be a separate exercise, both because the tests you ought to design for usability are separate, and because once someone has functionally tested the system they are spoiled with preconceived notions, but it is better than nothing.)

How do you attack this problem? (Bear in mind almost nobody is doing this right today.)

  1. Design for testability, generally via Unit Tests. There are architectures that make such testing easy, and some that make it impossible. It takes experience to know which is which. Generally, nearly everything can be tested, with the possible exception of GUIs, which actually provide a great example of my "architecture" point.

    Why can't you test GUIs? In my experience, it boils down to two major failings shared by nearly all toolkits:

    1. You can not insert an event, such as "pressing the letter X", into the toolkit programmatically and have it behave exactly as it would if the user did it. (Of the two, this is the more important.)
    2. You can not drive the GUI programmatically without its event loop running, which is what you really want for testing. (You need to call the event engine as needed to process your inserted events, but you want your code to be in control, not the GUI framework.) One notable counterexample here is Tk, which, at least in the guise of Python/Tk, I've gotten to work for testing without the event loop running, which has been great (a rough sketch of the idea follows this list). (This could be hacked around with some clever threading work, but without the first condition it isn't worth much.)
    The GUIs have chosen an architecture that is not conducive to testing; they require their own loop to be running, they don't allow you to drive them programmatically, they are designed for use, not testing. When you find a GUI that has an architecture at least partially conducive to testing, suddenly, lo, you can do some automated tests.

    And in my case, I am talking about serious testing that concerns things central to the use of my program. I counted 112 distinct programmatic paths that can be taken when the user presses the "down" key in my outliner, and I was able to write a relatively concise test to cover all of them. Sure enough, code I thought was pretty good turned out to fail in two ways: one isolated case with no relationship to any other failure, and one specific combination of three axes that failed independently of the other settings. Are you manually going to test all 112 cases every time you touch the GUI code? I know I wouldn't!

    If you don't have this, you have already lost.
  2. Run the testing suite every day at a minimum. Create a culture where tests failing are a big deal.
  3. You will find you can't quite test everything, although as your programmers get into it you will find you can test many, many more things than conventional wisdom says you can. Conventional wisdom in this area comes from defeatists.
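
The Tk remark above refers to this kind of thing. The following is only a rough sketch of the idea, not the poster's actual setup: Tkinter lets a test bind handlers, deliver events programmatically with event_generate, and pump pending work with update(), all without ever entering mainloop(). (It still needs a display to run, and injecting real keystrokes into widgets is considerably more finicky and platform-dependent than this virtual-event example.)

    import tkinter as tk

    received = []

    root = tk.Tk()
    root.bind("<<SaveRequested>>", lambda event: received.append("saved"))

    # Deliver the event programmatically; with the default when="now" it is
    # handled immediately, so the test code stays in control -- no mainloop().
    root.event_generate("<<SaveRequested>>")
    root.update()  # flush anything else that got queued

    assert received == ["saved"]
    root.destroy()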
I can already hear the outcry: "But I can't make this work in my job, with umpteen bajillions of lines of existing code!" To which I reply, no, no you can't. That's what I mean by you've already lost. Just keep doing what you are doing and CYA.

(If you are a programmer, you can start introducing tests in your own area and gain some benefit but you may have a serious cultural problem.)

Unit tests and systems designed for testing also can not test all combinations. But you will find that they can usually cover their individual axes well enough to get the vast majority of the gain, and help you locate bugs and such. You can also cover the common full-system test cases.

Testing the system then becomes running the automated test suites, which should be run daily at a minimum, and covering the smattering of cases that can't be covered with that.

Let me make this clear: I am coming right out and saying that in the majority of cases, there is no clear and clean way to do testing. Testability is a feature like any other feature and must be designed in from the start to really work well. (Even if you agree with nothing else in this post, I hope you'll agree that "testability is a feature" is at least worth thinking about.) Thus, do whatever you need to do; you're already in the "hack" zone and as a tester, you can not get out of it on your own.

Sorry this is so depressing, but you can't effectively test a system not designed for it, you can only poke it and prod it and hope for the best.

Testing (1)

Merdalors (677723) | more than 9 years ago | (#9853664)

You need full-time testers. In a small company, they can also double as support staff.

It never ceases to amaze me what strange and unexpected scenarios a tester can conjure, often exposing flaws and weaknesses in the code. A good tester can intercept many bugs before the product goes out the door. Congratulate and reward them.

Summer students also bring a different perspective, and can evince whole new categories of problems with the code.

We have found that automated testing tools require the skills (and salary) of a programmer, so what's the point? Pay the programmer to program, and hire more testers.

get only really smart and really dumb testers.. (1)

xot (663131) | more than 9 years ago | (#9853684)

In our 'enterprise environment' type company (3000+ users) we only do manual testing. We gather up some real techies and some real non-tech people. It's amazing how much better the non-techs are at testing and letting you know about various bugs.
They will point out flaws that you never imagined were possible! All in all, nothing beats manual testing. We don't have any dedicated testing staff, so we just gather up a few end users.

Huge framework developed inhouse (0)

Anonymous Coward | more than 9 years ago | (#9853702)

The company I work at developed their own framework for the software product. There are about 100 developers on this product and 20 QA. The product has VBA macros built in, so all the testing is done by executing these macros. But there are thousands of different auto tests.

A significant portion of the testing is completely automatic -- every build goes through a smoke and regression test as it comes through the build chain. There's no user interaction at all on those. The smoke test is about 45 minutes -- smoke test failures are very serious, and everyone in QA gets email when they fail. The regression tests take several hours, and failures in those are less serious but still important.

Each QA member also manually starts certain automated tests when the product is nearing release. They all know how to start the tester and choose files or scripts to test. Output from these tests is automatically put on a database and accessible from the intranet.

I just started working there a few months ago -- I have been continually impressed with the quality of the automated testing. The problem is that this testing does not cover significant portions of the product, since the only tests that can be run are ones that are executable through VBA. So UI performance is not easily automated. I did some research, but I wasn't able to find a product that did a decent job of UI testing -- the product is extremely complex and the stuff I looked at was all inadequate.

Check out FIT (0)

Anonymous Coward | more than 9 years ago | (#9853709)


FIT, the Framework for Integrated Test, is a cool way of wrapping your app so that people can test it more easily. It's open source, and very powerful.

http://fit.c2.com

the UI is a pain (1)

bob_jenkins (144606) | more than 9 years ago | (#9853764)

Anything you can test with a command line, do. Text in, text out; use 'diff result.log official.log' to find out if you broke anything. Anything that requires a mouse, though -- I hear there are products for that, but I've never used them.

I have a pet project (jenny [burtleburtle.net] ) that generates testcases for cross-functional testing, where the number of testcases grows with the log of the number of interacting features rather than exponentially. It often allows you to do a couple dozen testcases instead of thousands, and it lets you cover more interactions too. It's of no use for pull-down menu interfaces, though, where the goal is pretty much to test every node in the tree and there are no interactions between nodes.
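
The text-in, text-out approach above automates nicely as a golden-file check. A sketch, assuming a hypothetical command-line tool and test-data files:

    import subprocess
    import unittest
    from pathlib import Path

    class GoldenFileTest(unittest.TestCase):
        def test_report_output_matches_golden(self):
            # "./report_tool" and the file names are placeholders for your own program.
            result = subprocess.run(
                ["./report_tool", "testdata/input.txt"],
                capture_output=True, text=True, check=True,
            )
            expected = Path("testdata/official.log").read_text()
            # Any difference from the blessed output is either a regression or an
            # intentional change that requires re-blessing the golden file.
            self.assertEqual(result.stdout, expected)

    if __name__ == "__main__":
        unittest.main()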

Design your code to be automatically tested (1)

The Pim (140414) | more than 9 years ago | (#9853767)

All of it. My beef with the unit testing craze is the suggestion that only individual "units" (whatever the hell a unit is) need to be tested automatically. If you design your application well, you should be able to simulate everything up to the multi-step transactions a user would go through. Instead of your code being 75% testable, make it 95% testable. You'll always need people to test that last bit that can't be automated (since the only true test is end-to-end), but the more bugs you can find before it gets to them, the better (cheaper, faster).

When it comes down to human testing, I'd say hire someone to be the "user from hell". Even if your automatic testing finds all the real bugs, this user should be able to find unanticipated use cases, annoying interfaces, performance problems, etc.
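
In that spirit, a multi-step user transaction can be driven as a test against the application layer just below the UI. Everything here (the OrderService and its methods) is hypothetical, just to show the shape of such a test:

    import unittest

    class OrderService:
        """Stand-in for the application layer the real UI would call into."""
        def __init__(self):
            self._carts = {}
            self.orders = []

        def start_cart(self, user):
            self._carts[user] = []

        def add_item(self, user, item):
            self._carts[user].append(item)

        def checkout(self, user):
            order = {"user": user, "items": self._carts.pop(user)}
            self.orders.append(order)
            return order

    class CheckoutFlowTest(unittest.TestCase):
        def test_full_checkout_flow(self):
            service = OrderService()
            service.start_cart("alice")
            service.add_item("alice", "widget")
            service.add_item("alice", "gadget")
            order = service.checkout("alice")
            self.assertEqual(order["items"], ["widget", "gadget"])
            self.assertEqual(len(service.orders), 1)

    if __name__ == "__main__":
        unittest.main()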

It's an investment (1)

Catamaran (106796) | more than 9 years ago | (#9853777)

Creating the infrastructure for automated testing is a big investment, one that generally only pays off in the long run. If your company decides to take the plunge here are a couple of suggestions:

Don't let developers check in code until it has passed the entire regression suite. Otherwise, SQA (the test maintainers) will be in a constant catch-up mode and ultimately the disconnect will grow until the whole test suite is abandoned.

Make tests easy to create. This is harder than it sounds. Tools that simply record user inputs and take screenshots are way too brittle. You need hooks at the functional level so that you can drive and monitor the application.

Prefer standardized languages and open-source tools to homegrown stuff. Much easier to maintain, especially across a large organization.

Our recipe (1)

xant (99438) | more than 9 years ago | (#9853794)

  1. A dedicated QA staff. You should have as many testers as you have developers.
  2. Tools for the QA staff to create their own automation. They don't like doing manual testing much, either, so they'll have incentive to use the tools. :-) I'll talk about the tools we use in a bit.
  3. Training for the QA staff on the tools. Hire QA people capable of at least a little shell programming. And the tools they use should be not much harder than writing shell scripts.
  4. A good SCM (source code management) system that provides atomic commits, so that when you fix a bug, you can tell your testers exactly what revision number it's fixed in, and they can get exactly that revision and verify it in the same system you had when you fixed it.
  5. A bug tracker. It doesn't have to integrate with the SCM, but if it doesn't, you should make it a hard policy that your commit log messages should say what bug number they are a fix for, and when you resolve a bug, you must say what revision the fix went into. I can't even estimate how much time this policy saves.
  6. Automated rebuilds of every revision of the software. Spend a lot of time on this, it's key. It lets your testers test things the minute you fix them. That means, if you failed to fix it correctly, you'll find out SOON while the fix is still fresh in your mind, and you'll save even more time by not having to get back into the mindset of that bug. You will need special software to do it, so read on.
So here's how those break down:
  1. For us, our project has had 1-2 developers working full time (me, plus one additional developer at various times). We've also had 1-2 testers working full time. That sounds like a small project, but after two years of dev it is a lot of code, and all that code needs testing. The fulltime test staff available right from the start was absolutely not money wasted.
  2. The development is done in Python, with Twisted [twistedmatrix.com] , and so we used a combination of unit tests written by the developers and black box tests written by the testers. Because our app is primarily web, I developed my own web test system (having found no others that were suitable for use by non-programmers). This system is PBP [berlios.de] , which is a shell-like scriptable web browser.
  3. Our main tester had a little bit of C in school (she actually had forgotten most of it ;-) and a little bit of unix command line experience. This was more than enough to be able to design and build tests with PBP. Then I spent about one full day showing her how to use it and brainstorming testmaking strategies with her.
  4. Subversion. [tigris.org]
  5. We've been successful with Bugzilla. If I had to start over, I probably would have used Trac [edgewall.com] , with which I've had good experiences on other projects.
  6. I built a completely automatic build system using Buildbot [sourceforge.net] to trigger the builds after each commit and A-A-P [a-a-p.org] to script the build process.

MetaTest (1)

bigattichouse (527527) | more than 9 years ago | (#9853814)

I've been working on a simple idea for a while: a programming language that *provides* the testing capability, implementing the testing story. It's not a horribly difficult idea, just something I've been playing with. My goal is to create a test language you can then compile against various languages to see if the implementation in any given language behaves the same way, and also to make unit tests a little easier to read. You can read about it on the project blog: http://www.bigattichouse.com/thoughtbrew.php?BREWID=METATEST [bigattichouse.com]

Large scale automation is a very complex project (1)

qaguru (777981) | more than 9 years ago | (#9853842)

Here are some lessons learned the hard way in my 10+ years of automated testing experience:

  1. Large-scale automation is a very complex project. Think of it not as a testing activity but as a full-scale development project: start by setting goals, then gather the requirements, and continue as you would when writing a product.
  2. Get at least one person on the team who has participated in a large test-automation project. There are many common mistakes you can avoid.
  3. Make an extensive analysis of what you need to test. Some areas are very suitable for automation; for others it is not cost-effective.
  4. Always think about reducing the maintenance burden. Maintenance is the killer of automated tests, and a good framework design will save you thousands of hours of work. One of the most common approaches is to keep the interface components separate, so most changes in the UI will leave your tests unchanged (a rough sketch follows this list).
  5. Select the right tool for the job - there are hundreds of them on the market. Check the license price, the power of the scripting language, how well it supports your interfaces, and whether it can be integrated with other products. I especially like TestComplete, because it is cheap and powerful. But training time is also an expense, and many companies select the most popular products there (from Mercury, Rational and Segue).
  6. Participate in mailing lists, newsgroups, etc. It gives you the ability to get fast advice from some of the best people in this area.

And last, IMHO, test automation, if done right, is always a winning strategy. But it is not a silver bullet.
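
Point 4's separation of interface components is essentially the page-object idea. A rough sketch with hypothetical names; the FakeDriver is a toy stand-in just to keep the example self-contained:

    import unittest

    class FakeDriver:
        """Toy stand-in for a real UI-automation driver."""
        def __init__(self):
            self.fields = {}
            self.clicked = []

        def type_into(self, locator, text):
            self.fields[locator] = text

        def click(self, locator):
            self.clicked.append(locator)

    class LoginPage:
        """The only place that knows the UI's locators; when the UI changes,
        only this class changes -- the tests below stay the same."""
        USERNAME = "id=username"
        PASSWORD = "id=password"
        SUBMIT = "id=login-button"

        def __init__(self, driver):
            self.driver = driver

        def log_in(self, user, password):
            self.driver.type_into(self.USERNAME, user)
            self.driver.type_into(self.PASSWORD, password)
            self.driver.click(self.SUBMIT)

    class LoginTest(unittest.TestCase):
        def test_login_fills_both_fields_and_submits(self):
            driver = FakeDriver()
            LoginPage(driver).log_in("alice", "s3cret")
            self.assertEqual(driver.fields["id=username"], "alice")
            self.assertIn("id=login-button", driver.clicked)

    if __name__ == "__main__":
        unittest.main()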