You cannot exhaustively test any software system. Therefore, every project being tested should have criteria defined to indicate “we’re done testing”.
Each test effort should have specific quantifiable goals.
For example: “100% of scripts for new functionality must be run during system integration testing.” That’s one goal. There should be a goal for regression scripts as well, and you may even have goals specific to certain types of functionality if for some reason you don’t have test scripts covering them.
Now, here’s something to consider: is testing complete when all test scripts have been run, or is testing complete when all test scripts have passed?
Do you see the difference? You could have 100 scripts, with all 100 run but 10 of them failed. Is that considered testing complete? Or do those 10 failed scripts need to pass before testing is considered complete?
This question is something that should be discussed with your QA organization if you don’t have a set standard yet. You can handle this a couple of different ways:
- As part of your test completion criteria, you can require that all scripts complete successfully: no script can be in a failed status for testing to be considered complete.
- You can include criteria that define how many bugs, and at what priority, can remain open before you can say you’re finished testing.
- Everyone goes to production with some known bugs; it can’t be helped. But what you can do is define what is acceptable to go to production with. For example, you could say “testing is complete when 95% of all scripts have passed, there are no critical or high defects open, and any medium defects have a work-around defined.”
In the earlier example of 100 scripts with 10 failed, testing would not be considered complete, because that’s a 90% pass rate, not 95%.
I say this all the time and it continues to be a valuable lesson. “What process you have in place is not as important as actually having one…and following it”.
Make sure you have a standard process so you’ll know when testing is complete and the development team you’re working with knows what the expectations are. This is invaluable in your ongoing relationship and communication with your developers.