The consequence of poor test design

During my many years in testing, one aspect has stood out above all others as costing significant time, money and quality: the test analysis and design process. Not only does test design take a great deal of time; its end result is often poor, with duplicate testing, gaps in coverage and inaccurate test procedures. If we start from a set of inefficient, incomplete and incorrect test cases, then we are clearly going to throw good money after bad when it comes to test execution. Take it one step further and we will invest in activities such as automation and test data provisioning that simply execute our inefficient tests faster.

A traditional situation today

Let’s take a look at the traditional situation. A change is ordered and the Requirements Analysts create a text-based requirements document. The requirements are reviewed by the test team, but due to the volume of (ambiguous) text the review is not very effective at finding issues and clearing up misinterpretations. When issues are found, they are often only resolved verbally, i.e. the documents are not updated. The test team now has the task of designing an optimal set of accurate test cases that fully cover the requirements. They then ask the Requirements Analysts to review those test cases, but due to the volume of text and the repetition within them, the analysts only skim-read and approve.

So what just happened? We took a long time to create two sets of independent documentation, requirements and tests, which no one had the ability or time to review thoroughly. In all likelihood the test cases contain a great many misunderstandings and gaps that will affect us later on. In addition, based on recent research at some major institutions, the test cases probably covered only around 25% of the requirements while duplicating some tests up to 40 times.

In summary, we have taken too much time creating too many test cases that cover too little, contain many inaccuracies and will take a lot of effort and time to execute.

Too many irrelevant test cases

The problem does not, unfortunately, end there. Every time there is a change we receive updated sets of requirements, which need to be tested. We also need to assess how that change has affected our existing test cases. Which test cases need to be modified, and how? Which test cases are no longer relevant and should be removed? Which existing test cases duplicate the new test cases we just created? Updating test cases can be extremely time-consuming, as one change can affect the steps of many test cases. If they are not updated, their detailed steps are no longer accurate.

In summary, we have again spent too much time creating even more test cases, have not aligned our existing test cases with the change, and have introduced increasing levels of inaccurate and duplicate testing. In the end we are left with outdated test cases, which we continue to execute, and we have lost track of what they represent in terms of coverage.

Create fewer test cases covering more at a lower cost

I have finally found a solution that automatically converts structured requirements into detailed, accurate and maintainable test cases via a range of coverage techniques. The end result is less time spent creating fewer test cases that cover more requirements, and the ability to keep them up to date and relevant.
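The article does not name the specific coverage techniques the solution uses, but one widely used example of this kind of technique is pairwise (all-pairs) combinatorial coverage: instead of testing every combination of parameter values, you generate a much smaller set of test cases in which every pair of values still appears at least once. The sketch below is a minimal greedy illustration of that idea, not the solution's actual algorithm; the parameter model is invented for the example.

```python
from itertools import combinations, product

def pairwise_tests(parameters):
    """Greedily pick test cases until every pair of parameter
    values appears in at least one test (pairwise coverage)."""
    names = list(parameters)
    # Every (name, value) pair combination that must be covered.
    uncovered = set()
    for a, b in combinations(names, 2):
        for va, vb in product(parameters[a], parameters[b]):
            uncovered.add(((a, va), (b, vb)))
    # Candidate pool: the full Cartesian product (fine for small models).
    candidates = [dict(zip(names, values))
                  for values in product(*(parameters[n] for n in names))]
    tests = []
    while uncovered:
        # Score a candidate by how many still-uncovered pairs it hits.
        def gain(t):
            return sum(1 for (a, va), (b, vb) in uncovered
                       if t[a] == va and t[b] == vb)
        best = max(candidates, key=gain)
        if gain(best) == 0:
            break
        tests.append(best)
        uncovered = {((a, va), (b, vb)) for (a, va), (b, vb) in uncovered
                     if not (best[a] == va and best[b] == vb)}
    return tests

# Hypothetical requirement model: 2 x 3 x 2 = 12 full combinations.
model = {
    "browser": ["Chrome", "Firefox"],
    "os": ["Windows", "macOS", "Linux"],
    "account": ["guest", "admin"],
}
selected = pairwise_tests(model)
```

For this model the greedy loop returns far fewer test cases than the 12 full-product combinations, while every browser/os, browser/account and os/account value pair is still exercised at least once, which is exactly the "fewer test cases covering more" trade-off described above.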

For me this solution addresses all three goals of time, cost and quality. It ensures that more accurate development and testing is performed with significantly less time and effort. Only when we truly understand what we are testing, and are perfectly aligned with it, should we start to implement other improvements such as simulation and automation.

Want to know more? Contact Matthew Lapsley.
