Tuesday, January 5, 2021
A decade ago, a manager asked me why I had questions about the software under test and how it was supposed to work. He added: "You don't need to know all that stuff; all you need to do is test".
The German-speaking manager's exact words were: "Wozu musst du wissen, wie das funktioniert? Du sollst es nur testen" ("Why do you need to know how it works? You're just supposed to test it").
This happened so many years back that I can't even remember whether he really meant it seriously or whether he was just teasing me.
Anyhow, I put it here because the more you know about how a piece of software really works, the more targeted and effective your tests become. By "effective" I mean tests that find ugly bugs.
Saturday, October 10, 2020
During my holidays, I read two fantastic books about problem solving and focusing on the important stuff. "Range" by David Epstein fascinated me with its many examples of people without specialized knowledge in a particular area finding solutions to problems where the best experts got stuck. The book rejects the common belief that one has to start and specialize early in order to really get good at something, and it lists plenty of famous people who demonstrated the contrary, such as Roger Federer, Vincent van Gogh, and Dave Brubeck.
Experimenting in your career and trying out different things broadens your ability to look at problems from different angles and to find solutions that are much more difficult to identify if you can't get out of your box. It also explains the success story of Nintendo, which was once a small company and not very attractive to highly talented graduates.
The other book, "Simplicity" by Benedikt Weibel, former CEO of the Swiss Federal Railways (SBB), goes in a similar direction with different and less detailed examples. He analyzes how the best chess players think in terms of patterns and focus on the essential stuff. Less is more. Weibel encourages making more use of checklists, and he even makes the heretical remark that "without a great checklist, Sullenberger would not have managed to bring the Airbus down on the Hudson safely" (that's not really my opinion; I think it was a mixture of everything: great experience, courage, and a little bit of luck).
Big Data is an interesting tool, but it does not solve our problems and it is not free of failures (the book gives examples). I am not advertising, just sharing my thoughts on two great books full of valuable hints and references, although I know it will be difficult not to fall back into old habits.
Interesting related reading:
- The Carter Racing Case Study: https://www.academia.edu/20358932/Carter_racing_case
- The missing bullet: https://onebiteblog.com/finding-the-missing-bullet-holes/
Thursday, October 1, 2020
The tickets had been raised a long time ago but, because of release pressure, they were parked in the backlog and assigned low priority. We called it the "parking place". At the time, a restrictive process of prioritizing tickets was needed to meet the deadlines, and the limitations were accepted for a while. But over time, things changed. What was once rated low priority for release N suddenly became more important in the next release, and at a bad time. For many stakeholders, including me, these tickets came out of nowhere like submarines; and of course, all at the same time.
As a lesson learned, we as testers decided to do better by regularly reviewing parked tickets. This helps everyone on the team become aware earlier of potentially dangerous new "submarines" about to surface, while there is still enough time to tag them before they get a chance to shoot.
Sunday, September 27, 2020
I struggled with some of our new features when looking at them from an end-to-end workflow view. I was convinced we hadn't sufficiently taken into account the users' unexpressed requirements: not just to get a user's job done, but to get it done quickly.
To find out whether we were right, we prepared a plan.
The next time the customers were invited to our offices to test our new features, for once we didn't prepare the typical test-case checklist. This time, instead, we prepared just one simple and realistic end-to-end scenario that combined the features of the old version with the new features. Don't get me wrong, I love checklists; they are a perfect tool to guarantee we don't forget anything. But for this particular case, we needed to step outside the standard operating procedure. Instead of ticking off an atomic feature list, this time we took into account each feature's place in a real-life scenario.
We were really surprised by the reaction of the domain experts. This approach revealed the "pilikia" (Hawaiian for "trouble"): all the experts agreed this process required improvement.
As a result, internal tickets initially rated low received much more attention and unfortunately had to be implemented as late change requests in the middle of the stabilization phase. My bad. The timing was a disaster. It was so bad, it was almost good: these late changes paid off. The implemented improvement was significant and worth it, and the customer appreciated the correction and our ability to respond quickly to their concerns.
They probably did because we "shocked" them with what they were getting next. Okay, it worked this time, but I guess we shouldn't do this too often.
Sunday, August 30, 2020
As a software tester, I often feel like an archaeologist. I analyze bits and pieces found in requirements, use cases, user stories, emails, phone calls, balcony talks, meetings, defects, etc. I collect all these scattered pieces of information and try to put them back together as accurately as possible. Unlike the two men in the excavation, I like this kind of job. When the connected pieces turn into a nice picture in my head, I glue them together in the form of well-documented test cases or, if it is a really large "dinosaur", in a final report that contains all the information needed. I do it mainly for me, but I also do it for other stakeholders, so nobody with the same questions has to go through the same laborious process again.
Another analogy for a tester is the explorer, who collects facts and numbers, painstakingly noting observed behavior and measurements. The explorer then groups the data and analyzes the collected material, trying to find patterns that allow for interesting new findings which either confirm or refute assumptions made upfront. These conclusions are then (hopefully) used to make accurate decisions elsewhere (and also at your own desk).
Yet another analogy is that of a detective. Testers are regularly cluttered with fake information. Of course, we also get accurate information, but it all comes mixed in one bucket and we need to sort it out. Unlike a criminal who uses lies to protect himself, we may get information that has either not been accurately analyzed or was invented to stop others from asking and investigating further. We, the testers, get information that is based on assumptions, and it is our job to question everything. Some nice guy once stated: "testers question everything, and this includes authority".
There is also an analogy to medical practitioners. The patient is interviewed by the doctor, whose goal is to identify the root cause of the suffering. The software under test is the patient, while the tester is the doctor who examines the sick patient. When successful, the doctor identifies the root cause and solves the problem by prescribing a selective medicine for treatment. The tester does something similar: she raises a defect and describes the symptoms as accurately as possible. If the examination reveals a problem outside the doctor's specialized field, she may delegate additional examinations to someone more specialized in the area where she assumes the problem lies. Likewise, the tester assigns the ticket to a developer or DevOps engineer for further investigation.
In case you have more examples of such analogies, simply post a comment to this blog or drop me a message on LinkedIn or whatever channel you like.
Saturday, August 29, 2020
After measuring the size with a scale and analyzing the shape, I put a 2-Swiss-franc coin into the track and muttered to myself: "this was clearly a bear". I must add at this point that, until that date, the only tracks I had ever reliably identified were those of a small rabbit. Regardless, everyone agreed: my wife, my kids, and my parents-in-law who were joining us on holiday. That was enough confirmation to take some photos and show them to the local tourist information bureau. The woman at the counter, too, was amazed by the photos. She picked up the phone and called the forest ranger. While waiting at the counter and watching her talk with the ranger, I heard a cat being mentioned. Wait a minute...!
"Are you kidding me?", I protested. "Can you please tell the ranger about my photos and send those to him?", I added mortally offended.
If one can mention a cat at all, it is only because these footprints had the size of the mark a cat leaves after an after-lunch nap in the snow. That's for sure.
The ranger promised to check our garden shortly before dawn and let us know. Of course, he never did, and so I showed the photos to a local farmer. He confirmed he had come to the same conclusion as me. He guessed that the ranger as well as the tourist office might both just have been afraid the story and the pictures could make it to the news and scare the tourists.
That's good. Finally an expert who confirmed what I had thought from the very beginning. Back home, two weeks later, I sorted my photos and decided to send a few of them directly to the ranger who had never visited our holiday house. I was confident that once he saw the pictures, he would be convinced and confirm my theory.
But the ranger, who responded promptly, came up with a very interesting new theory. He claimed it was not likely that the tracks were left by a bear; at that time of year, bears are usually still hibernating. He stated the tracks were left either by a wolf or by a big dog.
To be honest, I'd have loved to hear something different, but it was an interesting argument, and these statements made a lot more sense than the original comparison to a domestic cat. Of course, I am still not really convinced, but at least I am now at a point where I have to admit there may be more than just my one explanation.
From time to time we find cool and unexpected bugs that have the potential of being a real big thing: the one killer bug. We might be euphoric, record it immediately, and probably all too easily forget to collect more facts before we celebrate the great finding and label it highest priority. Often, when we look closer at an ugly-looking bug and spend more time understanding its impact, the probability of the anomaly occurring in real life, and the number of affected users, what's left may be the sobering realization that we've just found another low- or maybe medium-priority bug, and that we made a mountain out of a molehill upfront. It's great to find bugs, but be careful with the initial rating without having a second set of eyes look at it.
By the way...., I am still convinced it was a bear, for sure =;O)