Wednesday, January 27, 2021

Interpreting the Regression Test Activity Graph

We just completed a 1.5-week intensive manual regression test phase in which we executed almost the complete set of all (several hundred) test cases. We are in a lucky situation: our documented test cases represent nearly 100% of all implemented features. If we achieve 70-80% test coverage, we get a really good picture of the overall quality of the product increment. That means that, aside from the many automated tests, it's worth doing some manual end-to-end regression testing from time to time.

While tracking the regression testing progress in a cloud-based test case management tool, we looked at the activity graph, and it made us smile. It's exactly what we expected.

[Figure: test execution activity graph from the test case management tool]
At the beginning, testers focus on executing the test cases that are well documented, have clear instructions, and rely on previously well-prepared test data. I mean objects that are already in specific states, so testers can execute just the transition from one state to the next without worrying about laborious setup steps.
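
To make that concrete, here is a minimal sketch (the order workflow and its states are hypothetical, not our actual product) of how pre-prepared test data lets a test exercise only the state transition under test:

```python
import pytest
from enum import Enum

class State(Enum):
    DRAFT = "draft"
    SUBMITTED = "submitted"
    APPROVED = "approved"

class Order:
    """Hypothetical object with a small state machine."""
    def __init__(self, state: State = State.DRAFT):
        self.state = state

    def approve(self) -> None:
        if self.state is not State.SUBMITTED:
            raise ValueError("only submitted orders can be approved")
        self.state = State.APPROVED

# Prepared test data: an order that is already SUBMITTED, so the test
# below executes only the SUBMITTED -> APPROVED transition and skips
# all the laborious setup steps that would otherwise be needed.
@pytest.fixture
def submitted_order() -> Order:
    return Order(state=State.SUBMITTED)

def test_approve_submitted_order(submitted_order: Order) -> None:
    submitted_order.approve()
    assert submitted_order.state is State.APPROVED
```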
 
Then testers switch to more complex test cases, which take a little more time to understand and test. This is when the progress curve reaches its peak and progress starts to slow down.
 
Of course, we also find anomalies. Bugs can slow you down, because analyzing and understanding where and when defects were introduced takes additional time. After a few days, the first bugfixes are delivered, too. Developers require your attention to test their fixes, which pulls testers away from working through their suite. The rate of passed tests decreases, but it decreases in a constant and expected way.
 
In parallel, developers are already working on the next generation of the product, meaning their user stories get shipped and require testing, too. The tester's brain is now busy with a lot of context switching, clearly more than at the beginning of the sprint.
 
Now that we are more than halfway through, we switch to the monster test cases. I call them that because they do not consist of simple steps; they contain several tests expressed in tables of inputs and expected outputs. That's why I think it's nonsense to talk about the number of test cases: one test case can be atomic and executed in seconds, while another can keep you busy for half an hour or more.
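
As an automated analogue, such a table maps naturally onto a parametrized test. A minimal sketch, assuming a hypothetical discount function as the feature under test:

```python
import pytest

def discount_rate(order_total: float) -> float:
    """Hypothetical feature under test: discount by order total."""
    if order_total >= 1000:
        return 0.10
    if order_total >= 500:
        return 0.05
    return 0.0

# One "monster" test case: a whole table of inputs and expected
# outputs, each row effectively being a test of its own.
@pytest.mark.parametrize("order_total, expected", [
    (0.00, 0.0),
    (499.99, 0.0),
    (500.00, 0.05),
    (999.99, 0.05),
    (1000.00, 0.10),
])
def test_discount_rate_table(order_total: float, expected: float) -> None:
    assert discount_rate(order_total) == expected
```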

Some of the test cases may be poorly documented and require maintenance or correction. Some require the help of a domain expert. The additional information gained should be documented in the test suite, so we don't run into the same questions next time. These are all activities running in parallel.

Last but not least, the weekend is getting closer. The initial enthusiasm is gone, and you're starting to get bored.
You hear music from your neighbour. The cafeteria gets louder. The sound of clinking glasses reaches your desk. It's time for a break, time to reboot your brain. TGIF! And now it's weekend time!
 
And then, Monday is back! It's time for one final boost and time to say thank you. Great progress.
 
We made it, Yogi!
 
...and I like that graph.

Monday, January 11, 2021

About Bias and Critical Thinking

I recently stumbled upon a short article about the "Semmelweis reflex". It was published in a Swiss magazine, and I found it quite interesting because I could draw an analogy to software testing:

In 1846, the Hungarian gynecologist Ignaz Semmelweis noticed in a delivery unit that the rate of mothers dying in one department was 4 percent, while in another department of the same hospital it was 10 percent.

While analyzing the reason for this, he found that the department with the higher death rate was mainly operated by doctors who were also involved in performing post-mortem examinations. Right after an autopsy, they went to help mothers without properly disinfecting their hands.

The other department, with the lower death rate, was staffed mainly by midwives who were not involved in any such autopsies. Based on this observation, Semmelweis advised all employees to disinfect their hands with chlorinated lime.

The consequence: the death rate decreased remarkably.

Despite the clear evidence, the employees remained sceptical; some were even hostile. Traditional beliefs were stronger than empiricism.

Even though this happened more than 150 years ago, people haven't changed much since then. We still bias ourselves a lot. The tendency to reject new arguments that do not match our current beliefs is still common today; it is known as the Semmelweis reflex. We all have our own experiences and convictions. That is all fine, but it is important to understand that these are personal convictions and cannot simply be elevated to a general truth.

How can we fight such bias? Be curious! If you spontaneously react with antipathy to something new, force yourself to find pieces in the presenter's arguments that could still be interesting, despite your overall disbelief.

Second, make it a common practice to question yourself by telling yourself, "I might be wrong." This attitude helps overcome prejudice by allowing new information to be considered and processed.

Back to testing:

From this article, I learn that we should start to listen and not hide behind dismissive arguments simply because what we are told doesn't match our current view of the world.

But this story has two sides. If I can be biased, then others may be biased, too. Not all information reaching my desk can be considered right by default. The presenter's view of the world may be equally limited.

Plus, the presenter may have a goal: his or her intention may be to sell us something. The presenter's view may be wrong and based on "sloppy" analysis, if any fact collection was done at all.

Call me a doubting Thomas, but I don't believe anything until I've seen the facts.

So what?

If someone tells you "a user will never do that", question it!

It may be wrong.

 

If someone tells you "users typically do it this way", question it!

It may be an assumption and not a fact.

 

If someone tells you "this can't happen", question it!

Maybe she just hasn't experienced it yet, and it will happen soon.

 

If someone tells you "we didn't change anything", question it!

A one-line code change is often considered hardly a change at all, but in fact this adaptation can be the root of a severe new bug, or even a disaster.
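
To illustrate (a contrived sketch with a hypothetical validation function, not code from any real product), a single changed line can silently widen the accepted input range:

```python
# Before: accept batch sizes from 1 to 100 inclusive.
def is_valid_batch_size(n: int) -> bool:
    return 1 <= n <= 100

# After a "harmless" one-line change: while relaxing the upper bound,
# the lower bound was dropped, so zero and negative sizes now pass.
def is_valid_batch_size_changed(n: int) -> bool:
    return n <= 1000

assert is_valid_batch_size(0) is False
assert is_valid_batch_size_changed(0) is True  # regression: invalid input accepted
```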

 

I trust facts only, or, as someone said: "In God we trust, the rest we test." This is the result of many years of testing software. Come on, I don't even trust my own code. I must be some kind of maniac!

 

Source: translated to English and summarized from the original article "Warum wir die Wahrheit nicht hören wollen" ("Why we don't want to hear the truth") by Krogerus & Tschäppeler, Magazin, Switzerland, March 20, 2021.