My job is done when I have broken the software I am testing. When all my tests turn green, I get distrustful. Can it really be that all those tests passed? Couldn't it be that I missed some important tests, or didn't test thoroughly enough? The possibility that the software could simply work as designed rarely crosses my mind. I don't even trust my own test scripting. I am sure my scripts contain the same kinds of bugs that developers continuously add to our software.
At this point I remember an article I once read about false positives and false negatives. Applied to my daily work as a test automation scripter: how can I quickly find out whether my new or modified scripts will report errors when they occur, and pass when the feature actually works?
We have a lot of scripts that test the API of our service-based application, and I found a really simple but effective way to smoke-test my own test scripts: I unplugged the network cable and started the test suite. In that scenario I expected the complete suite to "successfully fail" on every test, so it was a big surprise to see one of the test cases still get the green status icon.
I wasn't online, and all my tests were supposed to submit information to, and query information from, a service that needs a network connection to return any meaningful results. How could this test pass without a network connection?
Looking closer at the validation/assertion the script was executing (thereby debugging my own code and trying to understand why this test returned a "passed" status) revealed that I was asserting against an object that ends up in the same state no matter whether you are online or not.
For example, I was querying a list of items in a database, each of which can be in one of two states. For simplicity, let's call them ACTIVE and INACTIVE.
The interface I was testing accepted 3 parameter values. Passing "0" returns the INACTIVE items, passing "1" returns the ACTIVE items, and passing an empty string returns all items.
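As a sketch of that parameter scheme (the class, method, and data here are hypothetical stand-ins, not the actual service API, which runs over the network), the three queries could look like this:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ItemQueryExample {
    // Hypothetical in-memory stand-in for the item service. The keys are
    // item IDs, the values are the item states: "1" = ACTIVE, "0" = INACTIVE.
    static List<String> queryItems(Map<String, String> db, String stateParam) {
        return db.entrySet().stream()
                 // An empty string means "all items"; otherwise filter by state.
                 .filter(e -> stateParam.isEmpty() || e.getValue().equals(stateParam))
                 .map(Map.Entry::getKey)
                 .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, String> db = Map.of("a", "1", "b", "0", "c", "1");
        System.out.println(queryItems(db, "0").size());  // count of INACTIVE items
        System.out.println(queryItems(db, "1").size());  // count of ACTIVE items
        System.out.println(queryItems(db, "").size());   // count of all items
    }
}
```
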
Because our test environment is shared by many other testers, the exact number of items could not be determined at runtime unless the database was cleaned back to an initial state. That was not an option, because it would have affected too many people. So I focused on calling the service with each of the three parameters in sequence and comparing the resulting counts against each other: the number of ACTIVE items plus the number of INACTIVE items must equal the total number of all items. I verified this with the following code:
Assert.IsTrue(nNumTotal == (nNumActive + nNumInactive));

With the network unplugged, all three queries returned zero items, so the assertion degenerated to

Assert.IsTrue(0 == (0 + 0));

which passes even though nothing meaningful was tested.
The fix in my script was to add an additional assert statement that checks that the number of items is greater than 0.
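The corrected validation can be sketched as follows. The original scripts are C#-style; this Java sketch mirrors the same two assertions with hypothetical variable and method names, and a minimal stand-in for the framework's Assert.IsTrue:

```java
public class FixedAssertionExample {
    // Minimal stand-in for the test framework's Assert.IsTrue.
    static void assertIsTrue(boolean condition, String message) {
        if (!condition) throw new AssertionError(message);
    }

    // The sum check alone passes when every query returns zero items
    // (e.g. offline), so we additionally require a non-empty result.
    static void validateCounts(int numActive, int numInactive, int numTotal) {
        assertIsTrue(numTotal == numActive + numInactive, "ACTIVE + INACTIVE != total");
        assertIsTrue(numTotal > 0, "no items returned at all");
    }

    public static void main(String[] args) {
        validateCounts(3, 2, 5);         // passes: consistent and non-empty
        try {
            validateCounts(0, 0, 0);     // the offline case now fails loudly
        } catch (AssertionError expected) {
            System.out.println("caught: " + expected.getMessage());
        }
    }
}
```

Note that the greater-than-zero check still assumes the shared test database always contains at least one item, which held in this environment but is itself an assumption worth keeping in mind.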