About a decade ago, a manager wondered why I asked questions about how the software-under-test was supposed to work. His response: "You don't need to know all that stuff, all you need to do is test".
This happened so many years back that I can't remember whether he really meant it seriously or was just making fun of me.
I mention this episode here because, even if you are looking at software from a black-box or business-oriented perspective, you should still be interested in how things work from a technical point of view.
If you dig deeper into the internals of a software component, you are likely to find interesting information that leads to great new test cases.
James Bach (Satisfice) once gave a great example in one of his talks when he asked the audience "how many test cases do we need?" while showing a diagram that consisted of a few simple rectangles and diamonds. Various numbers were called out towards the speaker's desk, most of them wrong or inappropriate. When James Bach revealed the details behind this simplified diagram, the scales fell from the audience's eyes: there was much more behind this graph. For the people who had guessed numbers like 2, 3 or 5 tests, it now became clear that this was just the start and that the real number of required tests would be many times higher than their first guess.
But I also have my own story to share with you that demonstrates why it is important to ask more questions about the software-under-test.
A quick intro
Every passenger vehicle is registered using a 17-character vehicle identification number (VIN) which is unique worldwide. The first 3 characters, the so-called World Manufacturer Identifier (WMI), identify the manufacturer.
For example:
- WVWZZZ6NZTY099964 representing a 1994 VW Polo
- WF0WXXGBBW6268343 representing a 2006 Ford Mondeo V6
- WBAPX71050C101516 representing a 2007 BMW 5 Series
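To make that structure concrete, here is a minimal sketch of how the WMI could be mapped to a manufacturer. The mapping shown is illustrative and covers only the three example VINs above; it is not exhaustive.

```python
# Minimal sketch: extracting the World Manufacturer Identifier (WMI)
# from a VIN. The mapping below is illustrative, not exhaustive.
WMI_TO_MANUFACTURER = {
    "WVW": "Volkswagen",
    "WF0": "Ford (Germany)",
    "WBA": "BMW",
}

def manufacturer_of(vin: str) -> str:
    """Return the manufacturer for a 17-character VIN, based on its WMI."""
    if len(vin) != 17:
        raise ValueError(f"expected a 17-character VIN, got {len(vin)} characters")
    return WMI_TO_MANUFACTURER.get(vin[:3], "unknown")

print(manufacturer_of("WBAPX71050C101516"))  # -> BMW
```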
In order to retrieve the details of each car's equipment such as paint color, 2-door, 4-door, tinted glass, sunroof, etc., we sent a VIN as input and received the complete vehicle information as an XML output.
This service was an important part of a cloud-based solution we developed and sold, as part of a bigger package, to insurance companies and bodyshops. The customers used it to estimate the total repair cost of damaged cars. Frankly speaking, in order to perform an accurate estimation, one needed at least the part prices and the repair steps or workload. Our company was great in this business, since it had all that data from almost all car manufacturers. I have to admit, this is a simplified explanation of a more complex piece of software.
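To give you an idea of the request/response shape, here is a hedged sketch of such a lookup. The endpoint URL and the XML element names are made up for illustration; the real service and its schema looked different.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical endpoint; the real service and its XML schema differed.
SERVICE_URL = "https://example.com/vehicle-identification?vin={vin}"

def identify_vehicle(vin: str) -> dict:
    """Send a VIN to the identification service and pick a few
    equipment fields out of the XML response."""
    with urllib.request.urlopen(SERVICE_URL.format(vin=vin)) as response:
        root = ET.fromstring(response.read())
    return {
        "manufacturer": root.findtext("manufacturer"),
        "model": root.findtext("model"),
        "paint": root.findtext("equipment/paint"),
        "doors": root.findtext("equipment/doors"),
    }
```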
Back to testing
Our job was to make sure the vehicle identification service returned the correct car-related data. To test that, we took our own and our friends' cars as a reference, because that was the easiest way for us to verify that the data made sense.
What we didn't know yet: our company wasn't the owner of all the data. Depending on the manufacturer of the queried vehicle, our system either retrieved the data from our own database or had to call a third-party web service to request the required information. This was the case for BMW and PSA (Peugeot, Citroen, Opel, Vauxhall), just to name two examples.
For the end-user, this wasn't visible. All the user saw was a 17-character VIN field with a button to trigger the search; they then waited until we returned the complete vehicle information along with the equipment data.
What does that mean for the development of test cases?
Knowing that the system, in some cases, communicates with third-party components to get the information is essential when testing this service. Simply submitting one VIN to see that the service responds correctly is not enough.
We needed to know what data we owned and what data was gathered by calling external services. Unfortunately, we never managed to get a comprehensive list to shed a little light on the innards of the software. Therefore, the only weapon for us testers was to test the service using a broad variety of exemplary VINs covering as many different manufacturers as possible.
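A sketch of what that could look like as a parameterized pytest test, reusing the example VINs from above and the hypothetical identify_vehicle() client sketched earlier:

```python
import pytest
# from vehicle_client import identify_vehicle  # hypothetical client sketched above

@pytest.mark.parametrize("vin, expected_manufacturer", [
    ("WVWZZZ6NZTY099964", "Volkswagen"),
    ("WF0WXXGBBW6268343", "Ford"),
    ("WBAPX71050C101516", "BMW"),
    # ...one entry per manufacturer we could get our hands on
])
def test_identification_per_manufacturer(vin, expected_manufacturer):
    vehicle = identify_vehicle(vin)
    assert vehicle["manufacturer"] == expected_manufacturer
```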
But this is just half the story. When a request was fired, our system cached the generated XML response for an unspecified period of time.
That means, if you submitted a particular BMW VIN for the first time and later called the service again using the exact same VIN, the second call wouldn't trigger the same code paths anymore.
We also needed to know for how long the data was cached. Unfortunately, the owners of the product didn't reveal that secret to the testers. I could now start a new article on testability and rave about the necessity of extra functions that allow testers to flush the cache when needed, but this goes beyond the scope of this article.
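Just to illustrate the idea: with such a hook in place, a cache test could look like the sketch below. flush_cache() is exactly the kind of hypothetical testability function we never got.

```python
# Hypothetical imports: flush_cache() is the testability hook described
# above, which we never got; identify_vehicle() is the client sketched earlier.
# from test_helpers import flush_cache
# from vehicle_client import identify_vehicle

def test_cold_and_warm_lookup():
    vin = "WBAPX71050C101516"
    flush_cache(vin)                 # force the next call to bypass the cache
    cold = identify_vehicle(vin)     # exercises the external BMW service path
    warm = identify_vehicle(vin)     # should now be answered from the cache
    assert cold == warm              # cached response must match the original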
What else?
Having now two different kinds of tests, such as...
- consider different brands when testing VINs
- consider submitting the same VIN a second time to test the cache
...we should start thinking about what happens when one of the external services is unavailable.
How does our system react in such a scenario, especially if the response isn't cached yet?
How would we even know whether it was the external service or our own code that was broken?
One approach was to add extra tests examining the external services in isolation from our implementation.
Yet another approach, and this is what we did (because we were not given the details), was to have sets or groups of different VINs that belong to the same manufacturer. For example, a set of 5 BMWs, another set of 5 Peugeots, 5 Fords, etc.
If an external service was down, then a whole set of such test cases would fail. If instead only one test within a set failed, then the root cause was probably somewhere else.
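Here is a hedged sketch of that heuristic, assuming each manufacturer's set of VIN tests produces a list of pass/fail outcomes:

```python
def diagnose(results_per_manufacturer: dict) -> dict:
    """Turn each manufacturer's pass/fail outcomes into a rough verdict:
    a fully failing set points at the external service, while a single
    failure points at our own code or data."""
    verdicts = {}
    for manufacturer, results in results_per_manufacturer.items():
        failures = results.count(False)
        if failures == len(results):
            verdicts[manufacturer] = "external service likely down"
        elif failures > 0:
            verdicts[manufacturer] = "isolated failure, check our own code/data"
        else:
            verdicts[manufacturer] = "ok"
    return verdicts

print(diagnose({
    "BMW":     [False, False, False, False, False],  # whole set failed
    "Peugeot": [True, True, False, True, True],      # one outlier
    "Ford":    [True, True, True, True, True],
}))
```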
Visualized
The circles below demonstrate it visually. At the beginning, one knows little to nothing about the system- or component-under-test (1). By questioning how things work, one discovers new areas that may or may not be relevant while developing new test cases (2). Knowing and understanding these areas enlarges the circle of awareness and minimizes the risk of skipping important test cases.
Developing test cases is not a monotonous job. Quite the contrary: it requires asking questions, sometimes critical questions, and having a good understanding of which test scenarios are meaningful to cover and which can be skipped without accepting high risks. Sometimes you need to be persistent when it comes to interviewing techies to get the details, especially when asking for testability hooks.
Although testing activities reveal a lot of repeating patterns of software defects, developing appropriate test cases is a task that involves a lot of thinking. That's why I say: no test is the same, use your brain!
BTW, I forgot to add: not all external service providers allowed us to cache their data, even if it was cached for just one day. That's an additional test to think about, and I am sure I forgot to mention many more tests we actually developed at that time.
Below, find a simplified flowchart of the vehicle identification service.