Test organisations constantly state that there is never enough time to test, yet they invest a great deal of time producing reports that miss their intended purpose: to inform. Reports typically contain a great deal of data but little information, and so lead to delays, poor quality and unnecessary costs.
It is important to first understand why we create a report. What is its purpose? What decisions will be made based on the information in it? What are the consequences of not providing the report, or of the information in it being inaccurate? The answers to these questions will guide how much effort should go into creating the report and maintaining the integrity of the data feeding it.
Here are some quick guidelines for effective reporting:
- Relevant – who will read the report? Reports should contain information that people actually need. Are the information needs clear? All too often we create reports that contain information people have not asked for, or provide it in a structure that makes it less relevant. Make sure there is a need for the information and that the detailed needs are clear.
- Fitting – it is important to appreciate that a report has several different stakeholders, and they are likely to have different information needs. The danger of a “one size fits all” report is that it can fail to meet the specific needs of anyone. A Test Leader may want information per user story, while a manager may want it per “Investment Proposal” or “Epic” (group of user stories).
- Accessible – all too often we wait to be sent information rather than having a way to go and get it ourselves. Even when we can access reports ourselves, the volume of reports available can make finding the ones we are interested in time-consuming. Create a space where reports can be accessed directly and implement a structure that enables relevant information to be easily located, e.g. by combinations of team, sprint, system, risk, etc.
- Automated – manually produced, process-driven reports take too long to create and are often outdated by the time they are ready; the number of hours spent on reporting for this reason each year is significant. In addition, they tend to contain errors that make them less trustworthy and less useful. Make report generation as automated as possible so that, for example, the information can be accessed from within a status meeting rather than prepared for and taken to one.
- Consistent – how many organisations have several Test Leaders producing several different styles of weekly status report, of varying quality, all trying to communicate the same information? Establish a high-quality, organisation-level reporting framework. Not only will this contribute to the goal of automated reporting, it will also make the reports easier to interpret and trust thanks to their consistency. There will always be a need for some specific reports, but these should be the exception and not the rule.
- Informative – requirements, tests and defects are often related to each other (and should be in any case), yet we often report the status of each independently. Avoid reporting, for example, defects only in terms of severity and state; relate them to requirements. Better still, report at different requirement levels (parent and child), as, again, different stakeholders are interested in different information. Finally, rather than just reporting a number, e.g. 80% passed, report it against expected progress or exit criteria so that it carries more meaning.
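To make the "Automated" and "Informative" guidelines above more concrete, here is a minimal sketch of what an automated, requirement-centred status report could look like. All data, field names and the exit criterion are hypothetical assumptions for illustration, not a specific tool or framework: the point is that the summary is derived directly from raw test results, each linked to a requirement, and that pass rates are reported against an exit criterion rather than as a bare number.

```python
# Illustrative sketch only: generate a status summary automatically from
# raw test results, pivoted on requirements, and relate each pass rate
# to an exit criterion. All names and data here are hypothetical.

EXIT_CRITERION = 0.90  # assumed criterion: 90% of tests per requirement pass

# Raw results, each test linked to a requirement so the report can
# pivot on requirements rather than on tests alone.
results = [
    {"requirement": "REQ-1 Payments", "test": "card auth",   "status": "passed"},
    {"requirement": "REQ-1 Payments", "test": "refund",      "status": "passed"},
    {"requirement": "REQ-1 Payments", "test": "chargeback",  "status": "failed"},
    {"requirement": "REQ-2 Search",   "test": "basic query", "status": "passed"},
    {"requirement": "REQ-2 Search",   "test": "filters",     "status": "passed"},
]

def report(results, exit_criterion):
    """Summarise the pass rate per requirement against the exit criterion."""
    by_req = {}
    for r in results:
        totals = by_req.setdefault(r["requirement"], {"passed": 0, "total": 0})
        totals["total"] += 1
        totals["passed"] += r["status"] == "passed"
    lines = []
    for req, t in sorted(by_req.items()):
        rate = t["passed"] / t["total"]
        verdict = "meets" if rate >= exit_criterion else "below"
        lines.append(f"{req}: {t['passed']}/{t['total']} passed "
                     f"({rate:.0%}, {verdict} the {exit_criterion:.0%} criterion)")
    return lines

print("\n".join(report(results, EXIT_CRITERION)))
```

Because the summary is computed from the underlying results on demand, it is never out of date, and the same data could feed different views for different stakeholders, e.g. per user story for a Test Leader or rolled up per Epic for a manager.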
What is the standard of reporting in your organisation? Do you have room for improvement with reporting? Do you already have an automated organisation level reporting framework and if so can you share details of this?
Author: Matthew Lapsley, Solution Manager Quality Management at Lemontree