Sunday, August 15, 2021

Accurate Test Reports

One of the main challenges I have faced during my 20+ years as a software tester and test manager has been resisting project and release delivery pressure.

When deadlines are close - and the software under test is still not ready to ship - testing automatically becomes a "virtual" bottleneck.

Resisting the temptation to short-cut testing, and keeping thoughts like "it's not going to be that bad" out of the way, can be extremely stressful.

Deploying a release on time but in poor quality will almost certainly fall back on QA. Critical questions are usually addressed to the testers first.

On the other hand, shifting a release date may put customers in an uncomfortable situation when they have already booked resources for their final acceptance tests and/or training. Plus, it will trigger questions like "why did the testers not find these issues earlier?".

To be honest, I have sometimes given in to the temptation to take short-cuts, and in most (but not all) of these cases it surprisingly worked out, even though these were tense moments. Short-cutting testing can work well if you stick to the high-priority test cases first, plus those test cases you assume are affected by the latest changes.

But accurate prioritization of test cases is only possible if you know the software and your customers well. Prioritizing test cases works if you have a good sense of where the SUT is likely to break or survive depending on the change applied. This includes knowing how customers use your program. Some bad experience in the past might help here. Don't forget to visit your customers from time to time; you can get valuable insights when you see them working with your system.
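Just to illustrate the idea, here is a minimal sketch of this kind of risk-based test selection in Python. The test-case fields, the priority scale, and the changed_components set are all hypothetical; the point is simply to run the tests touched by the latest changes first, ordered by priority, with the remaining high-priority tests as a safety net.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    priority: int          # 1 = highest, 3 = lowest (hypothetical scale)
    components: set[str]   # parts of the SUT this test exercises

def select_tests(all_tests: list[TestCase], changed_components: set[str]) -> list[TestCase]:
    """Pick tests affected by the latest changes, highest priority first,
    then append the untouched high-priority tests as a safety net."""
    affected = [t for t in all_tests if t.components & changed_components]
    safety_net = [t for t in all_tests
                  if t not in affected and t.priority == 1]
    return sorted(affected, key=lambda t: t.priority) + safety_net

# Hypothetical usage: the latest release only touched the billing module
tests = [
    TestCase("login_smoke", 1, {"auth"}),
    TestCase("invoice_rounding", 2, {"billing"}),
    TestCase("report_layout", 3, {"reporting"}),
]
for t in select_tests(tests, {"billing"}):
    print(t.name)
```

Of course, the hard part is not the sorting but filling in the priorities and the component mapping, and that is exactly where knowing the software and the customers comes in.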

Speed is an important element in testing.
High-priority bugs are found earlier if high-priority test cases are executed first. The longer QA needs to prove the software isn't worth shipping, the higher the risk that people see QA as perfectionists trying to find and report every possible bug. Once QA gets that label, its reports are read accordingly. At worst, testers are considered an obstacle in the delivery process.

When in doubt (and the available time does not allow digging deeper into the doubtful areas), add these concerns to the report. Be as accurate as possible... and, for God's sake, don't let management push the test manager into giving the final GO on the release. Instead, invite all important stakeholders and put all the facts on the table - what you know and what you don't know. Then let the "go-live" decision be owned by the team.

It has worked for me...sometimes  =;O)
