Monday, October 25, 2010

Revolt of the Test Scripts

If testers are not invited early enough, they will not have the chance or the time to determine whether they have to adapt their existing scripts. As a matter of fact, some scripts will have to be deactivated because they start to report errors that aren't real errors. Someone smart once called these "false negatives"; consequently, the opposite exists too (false positives). See also the blog entry from July 2009 (The Test Successfully Failed).


...by the way, test tools and/or your own test scripts can be buggy, too (here, a false negative).

...in the meantime, I found out under which conditions this situation occurs. In the NUnit test automation framework, even if all your test scripts run fine, the whole suite may report a failure if something goes wrong outside the test scripts themselves. The TearDown method is such a case: it gets called automatically after each script and/or after the suite is completed. At that point, all scripts have already reported a successful run, but the TearDown can still throw an exception, for instance if your implementation invokes a call on a null object (null reference exception). In such a case, NUnit reports a failed suite although all child scripts passed all their tests.

Monday, October 18, 2010

Automated Testing 4 Oil

It is easy to build a new test automation suite from scratch. It is a different story to keep it up and running for a lifetime. Changes in the software under test will be implemented. One day your scripts will need to be adjusted to the new situation. Some scripts will become obsolete, some new scripts will need to be added, and some just won't work anymore.

The more test cases you have, the more difficult it is to keep them up to date and have them provide the same benefit they did at the time they were born and ran successfully. More and more scripts will start to fail, and if you don't have the time to analyze the root cause and fix the scripts, the number of scripts printing red alarm signals increases.

You will start to become familiar with the fact that some scripts always print red warnings, and the day isn't far when you don't even notice anymore that more and more scripts suffer from the same issue.
Probably you won't even trust your scripts anymore, since they may report so-called false negatives. Make sure you keep them up to date at the time they fail. We call that script maintenance, and yes, this not only applies to manual testing. It is especially true for automated testing, too.

This is my second cartoon published in a printed magazine (STQA Magazine, September/October 2010). 
ThanX to Rich Hand and Janette Rovansek.

Wednesday, October 6, 2010

Mission Impossible in Paris


I just came back from a holiday trip in Paris where we met good old friends from Scotland.
While I could relax from the stressful tester's life, I had a nice déjà vu on my last day in Paris, at the Centre Pompidou.

My kids were playing with a conveyor machine in the playing area, where you could pick colorful everyday items out of a big repository and put them on the conveyor belt. Someone could then turn the wheel, and all those items got transferred back to the repository.

What first looked like a boring task to me appealed so much to my kids and the other children that it was hard for me to get them away from the play zone to enjoy the beautiful exhibition. However, the wheel of the conveyor regularly jammed, so the kids were no longer able to turn it. As a consequence, no items could get transferred back to the repository. The reason was that the stack of items in the repository gained so much height that it reached the thin vertical slot of the conveyor after only a short while of running. Because the kids kept pulling the wheel, some items were drawn into the slot and blocked the machine completely.

I spent most of the time making sure the stack was kept at a safe height so nothing could disappear into the slots, thereby making it easier for the kids to play with the conveyor continuously. While doing this, I noticed another small item had got stuck in the slot but did not cause the conveyor to block completely. Instead, it slowed it down. The kids had to put a lot of force into pulling the wheel to get their items transferred back into the repository. I tried to get the "bug" out so the kids would have a much easier job.

I failed miserably, because none of the kids could stand still for a moment. I risked my fingers each time I tried to get the "bug" out of the slot. Whenever I almost had it, another kid took control of the wheel and made me pray for my fingers. Surprisingly, I was the only parent trying to help those kids and keep the machine running smoothly. When I finally decided to step back and watch the scene from a distance, I saw kids trying to pull the wheel even harder when it no longer worked. None of them really took the initiative to scale down the size of the stack.

Thinking about the issue, it would have been easy for the local management to fix the problem. First, there were way too many items in the repository. Fewer items would have solved the issue, because the stack would never have had a chance to grow so fast and so high. In other words, with fewer items in the repository, the conveyor would never have had such a continuous maintenance problem. Second, nobody really seemed to care about a badly moving conveyor. Instead of someone stopping the kids from using the wheel for just one minute and solving the root cause that slowed down the conveyor belt, local management just accepted (or wasn't aware of) the fact that those poor kids had to "work hard" and spend too much effort.

At this moment I realized the marked similarities to my job as a software tester and test automation engineer, where we experience such situations almost every day...