Sunday, December 23, 2012

End of the World

This is the last cartoon for this year and it's really time to sit back for a moment, relax from "hard" work and my sporting injuries. Merry X-Mas and a happy new year to all of you who are following me either through this blog or via my email distribution list. See/draw you again next year.

Best wishes Torsten

Friday, November 9, 2012

Performance Testing

This is a reworked cartoon. Originally I drew the cartoon without any keyboards in his hands/tentacles, and it was actually published like this in "The Testing Planet". But now the joke is a little clearer, I guess. Enjoy.

How to do performance testing
What a test manager is interested in before going live is often this: "Does the system still respond fast and accurately if there are 10, 20, 30 or more users working in parallel?"

Performance testing tools let you develop scripts that fire scripted scenarios and measure their response times, executed in isolation or all together using such ramp-up numbers. Often, problems already show up with a much smaller number of parallel users. Before you hire expensive experts, solve these problems first. Write one or more simple Selenium UI scripts or some API tests (REST/SOAP), wrap an endless loop around them and ask some colleagues whether they are willing to run the scripts from their workstations. Then find a free workstation where you can test manually in parallel, to get a feel for what it is like not to be alone on the system anymore.
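
To make the idea a bit more concrete, here is a minimal Python sketch of such an endless-loop script; the base URL and the endpoint are made up, and a Selenium UI script would follow the same pattern:

```python
# Minimal sketch of the "endless loop" idea, assuming a hypothetical REST
# endpoint; run it from a few colleagues' workstations in parallel.
import time
import requests

BASE_URL = "https://test-system.example.com/api"  # hypothetical test environment

def main():
    while True:  # endless loop; stop with Ctrl+C
        start = time.time()
        response = requests.get(f"{BASE_URL}/customers", timeout=30)
        elapsed = time.time() - start
        print(f"{response.status_code} in {elapsed:.2f}s")
        time.sleep(1)  # small pause so the loop does not hammer the system mindlessly

if __name__ == "__main__":
    main()
```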

If that reveals no bugs, move to the next stage and hire experts who can do the ramp-up as explained in the first sentence. Usually you should come up with 4-5 typical scenarios which cover at least 1 read, 1 update, 1 create and probably 1 delete operation. Not more, because scripting and providing test data for these scripts can be the most time-consuming and costly part of a performance test session. Also note that a typical distribution of users' actions is 90/5/5 (90% read, 5% update, 5% create), 80/10/10 or similar.
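
As a rough illustration of such a mix, here is how a single virtual user could pick its next action with 90/5/5 weights; the action names are placeholders:

```python
# Sketch of a 90/5/5 action mix for one virtual user; the actual
# read/update/create requests are not implemented here.
import random

ACTIONS = ["read", "update", "create"]
WEIGHTS = [90, 5, 5]  # typical distribution of user actions

def next_action():
    return random.choices(ACTIONS, weights=WEIGHTS, k=1)[0]

# Example: sample 20 actions and see that reads clearly dominate
print([next_action() for _ in range(20)])
```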

When using a professional performance testing tool (or working with an expert company), make sure you test the different scenarios in isolation. That means: test what happens if you have 10, 20, 30 or more users only doing read requests, then do the same only for updates and yet another set only for creation. In most cases you will learn that a high number of read requests is barely noticeable, while it is often a different story once you test one of the other scenarios. Combining all scenarios is more realistic, but should be done after you have already learnt from the isolated tests. A combination always makes it hard to pinpoint who the problem is.
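
A minimal sketch of ramping up one isolated scenario with 10, 20 and 30 parallel users could look like this; the scenario function itself is just a placeholder:

```python
# Sketch of ramping up one isolated scenario (read-only) with 10, 20, 30
# parallel users; the scenario body is a placeholder, not a real request.
import time
from concurrent.futures import ThreadPoolExecutor

def read_scenario():
    start = time.time()
    # ... perform one scripted read request here ...
    return time.time() - start

for users in (10, 20, 30):
    with ThreadPoolExecutor(max_workers=users) as pool:
        timings = list(pool.map(lambda _: read_scenario(), range(users)))
    print(f"{users} users: avg {sum(timings) / len(timings):.2f}s")
```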

Don't forget to have someone manually test the system while you are running the automated performance test scripts. Learning how it feels when the performance suddenly goes down is invaluable information that numbers alone can hardly give you.

In a project where I coached the performance test experts, it turned out that the biggest amount of work was spent preparing meaningful test data for the scripts. We were firing the tests at a real production database, but finding accurate records with which one could perform the update scenarios without triggering this or that validation message wasn't easy. It turned out it was probably better to create this data ourselves.

Not only that, we assigned each virtual user to work on one particular set of data that no one else writes to. Had we not followed this approach, we would have faced the additional challenge of locked records, or of trying to update something that had already been updated beforehand, which would have resulted in validation error messages. Next was the creation of records of which only one instance was allowed; a second record was rejected. So we had to extend the script to always delete the record right after it was created. For this scenario too, each virtual user had a particular set of cases to which he/she could add new records without getting in the way of another.
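
The partitioning idea can be sketched roughly like this; the record IDs and the number of virtual users are made up:

```python
# Sketch of giving each virtual user its own slice of test data so that
# parallel updates never collide; record IDs are invented for illustration.
ALL_RECORD_IDS = list(range(1, 1001))  # hypothetical records prepared up front
NUM_VIRTUAL_USERS = 10

def records_for(user_index):
    # every user gets a disjoint slice, so no two users touch the same record
    return ALL_RECORD_IDS[user_index::NUM_VIRTUAL_USERS]

print(records_for(0)[:5])  # user 0 works on records 1, 11, 21, ...
print(records_for(1)[:5])  # user 1 works on records 2, 12, 22, ...
```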

Friday, September 14, 2012

Everyday life in the Southern Hemisphere

People often ask me how long it takes me to draw a cartoon. I cannot really give a definite answer as it heavily depends. This cartoon took me only about 1 hour. That is very fast, and the number of drafts (only 2) is surprisingly low too. Usually, even though I am not a perfectionist, it takes much longer, probably 2 hours, but I've also worked on cartoons which took me 3 hours. I have stopped coloring my cartoons because this is an extra challenge at which I don't really excel. I mean, I have to spend a lot of time and the outcome is chilling. I think I do much better in some kind of grey scale.
My daughter asked me whether I drew the orca straight out of my head. Of course NOT. I cannot draw an orca just like that out of my head. I googled the web to understand the most important characteristics, then did two drafts, and this is what came out. If I hadn't done that, I'd definitely have missed the white spots and put the eye in the wrong place. Below you see the "final" before I scanned it into the computer, and the very first draft.

Final version before scanning into the computer

Very first draft

Tuesday, September 4, 2012

Captcha #1

CAPTCHAs are used to prevent automated software from performing actions which degrade the quality of service of a given system and/or to protect the service from attackers trying to hack login credentials using brute-force attacks.

Until now, I never had to test CAPTCHAs, but thinking about it, testing CAPTCHAs automatically is impossible if testability isn't considered at all. Testability here could mean offering the robot a backdoor which returns the correct clear text. Of course, such information should only be available to the script and must be deactivated when deployed to production. Sometimes even I struggle to identify the clear text of a CAPTCHA, and I am NOT a robot...
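
Such a backdoor could, for example, be a test-only endpoint that hands the script the clear text of the current CAPTCHA. The following sketch assumes a hypothetical endpoint and login form; it is not a recipe for any real system:

```python
# Sketch of a test-only "backdoor" a script could use; the endpoint and the
# form fields are made up and must never exist in a production deployment.
import requests

BASE_URL = "https://test-system.example.com"  # hypothetical test environment

session = requests.Session()
session.get(f"{BASE_URL}/login")  # the page that shows the CAPTCHA image

# backdoor: ask the test-only endpoint for the clear text of the CAPTCHA
clear_text = session.get(f"{BASE_URL}/test-api/captcha").text

session.post(
    f"{BASE_URL}/login",
    data={"user": "tester", "password": "secret", "captcha": clear_text},
)
```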

Thursday, August 30, 2012

The Thompson Test

Two weeks ago, during a soccer match, I experienced a sudden, short pain in my left leg, near the Achilles tendon. At the hospital, a doctor examined my leg, and her first assumption was an Achilles tendon rupture. But since my pain was in an atypical place, she called in another doctor for a second set of eyes. I was told to turn around, then the doctors held my left leg, performed an elegant grip first on my right leg, then on my left, and said "it is clearly an Achilles tendon rupture".

I was impressed, because it took the doctor only seconds to make a clear statement without even asking me where I felt any pain. Later, a magnetic resonance imaging (MRI) scan confirmed the doctors' diagnosis. When I later googled the web, I learnt that the doctor had performed the so-called "Thompson Test".

Now, let's try to put this experience into the context of software testing. Of course, otherwise I wouldn't have mentioned it in my blog. In contrast to the doctors, we testers usually look for bugs and not necessarily for how to solve an existing problem. That is more typically the job of a software developer, although we testers also try to help as much as possible by finding some indication of the root cause of the problem (btw, this works only if managers don't measure a tester's throughput by counting the number of bugs found...).

We use test techniques that are effective in one area and probably less effective in another. One such technique I often use for documenting software bugs is the classification according to Kepner-Tregoe. By answering a set of simple questions, you may either find the solution to the problem on your own (actually the main goal of this technique) or, if not, you at least provide a valuable set of information to the developer. This makes it much easier for him to localize the issue and become more efficient in solving it. If you want to learn more about Kepner-Tregoe, go ahead, GIYBF.
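
For illustration only, here is roughly how the Kepner-Tregoe style "is / is not" answers can be noted down in a bug report; the concrete answers in this sketch are made up:

```python
# Rough sketch of an "is / is not" problem specification in the spirit of
# Kepner-Tregoe; all answers are invented examples, not from a real bug.
problem_specification = {
    "what":   {"is": "Saving a customer record fails with error 500",
               "is_not": "Reading customer records, which still works"},
    "where":  {"is": "The integration test system",
               "is_not": "Developer workstations"},
    "when":   {"is": "Since last week's deployment",
               "is_not": "Before that deployment"},
    "extent": {"is": "All records created via the web client",
               "is_not": "Records created via the import job"},
}

for dimension, answers in problem_specification.items():
    print(f"{dimension:7} IS: {answers['is']}  /  IS NOT: {answers['is_not']}")
```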

Additionally, we use logging tools (actually we have several), where we can grab the exact exception message; something that is typically NOT shown to the user because it might frighten him, but which is important for testers and supporters to have access to, so the developer does not need to spend too much time investigating and trying to reproduce the issue.
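
A minimal sketch of that idea: the exact exception goes into the log for us, while the user only sees a harmless, generic message. The function and message names are made up:

```python
# Sketch: log the full exception for testers/supporters, show the user
# only a generic message; the persistence call is a stand-in.
import logging

logging.basicConfig(filename="backend.log", level=logging.ERROR)
log = logging.getLogger(__name__)

class UserFacingError(Exception):
    """Carries the generic message shown to the user instead of the raw exception."""

def persist(record):
    raise ValueError("duplicate UUID")  # stand-in for a real persistence failure

def save_record(record):
    try:
        persist(record)
    except Exception:
        # the exact exception and stack trace go to the log, not to the user
        log.exception("Saving record %s failed", record.get("id"))
        raise UserFacingError("The record could not be saved. Please try again later.")
```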

TOJZ...still suffering the aftermath of my sporting injury for quite a while...

Thursday, July 19, 2012

How a double-click downed the backend-system


I originally drew this cartoon in 2012, and it has its roots in us testers suffering from the fact that sometimes our bug reports were not written well enough for the stakeholders to understand the importance of certain bugs.

Early this year (2019), I had an interesting experience that this cartoon fits even better. This is the story:

A few weeks ago, my automated, API-based test suite caused the system to go to hell in a handbasket. The database service crashed and had to be recovered manually each time someone visited a grid that loaded data from the backend. After some investigation, together with a developer, we found the reason for this exceptional behavior that caused all users to get a "system not accessible" message.

When I say users, I mean internal developers and testers, because luckily we were still far away from going live. It turned out the system had persisted a duplicate UUID. This duplicate piece of data caused the system to crash whenever a query read the affected record. I corrected the corrupt entry in the database and the problem was solved. But how the heck did my test suite manage to introduce such a duplicate? I tried over and over again, but never did I manage to make my automated tests do the same thing again. As a consequence, the problem was considered low priority, based on the assumption that the probability of this scenario re-occurring was almost zero. In fact, over a long period of time, this never happened again, until slap bang, during a manual test, I double-clicked the OK button in the web page to persist a new object. After a response time of about half a minute, I got the same "system not accessible" message. Jackpot!
With something as simple as a double-click, we got the backend system to crash from a dumb web client, resulting in a denial of service without me having to flood the system with superfluous requests.
With this new information, the previously low-rated issue all of a sudden got a different kind of attention. The probability of an end user double-clicking a UI element, even in web clients, is very high. The real reason for the duplicate was that the web client created a new UUID in memory at the time the user opened the web page for inserting objects, i.e. before the user actually submitted the request to the server. When the user finally clicked OK, that UUID was passed to the backend along with the other data entered by the user. When one double-clicked the button, the same dataset, including the same UUID, was submitted twice in sequence. The backend had no unique index to check for duplicate UUIDs.
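
In hindsight, the whole thing boils down to a double-submit test that could have been scripted in a few lines. The sketch below posts the same payload, including the client-generated UUID, twice in a row against a made-up endpoint; once a unique index exists, the second request should be rejected instead of taking the backend down:

```python
# Sketch of the "double-click" reproduced as an API test; endpoint and
# field names are invented for illustration.
import uuid
import requests

BASE_URL = "https://test-system.example.com/api"  # hypothetical test environment

payload = {"uuid": str(uuid.uuid4()), "name": "new object"}  # UUID created client-side

first = requests.post(f"{BASE_URL}/objects", json=payload, timeout=30)
second = requests.post(f"{BASE_URL}/objects", json=payload, timeout=30)

print(first.status_code)   # expected: 201 Created
print(second.status_code)  # expected: a clean rejection, not a crashed backend
```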

The most appalling part of this short story is that none of us ever thought of the double-click as a potential scenario to reproduce the issue when we originally detected it. There were several experts and architects involved in the analysis of the bug, and even though I had a great set of test patterns at hand (the double-click was on the list), I was unable to think out of the box promptly. Weeks later, I had a weird flash of inspiration and it took me only seconds to reproduce the issue. The good thing is, we are still not live, so there is still time to fix it.
 
Add-on, August 2023:
The cartoon fits perfectly to another defect that we detected more than a year ago. A grey-scale scanned document, when edited and rotated in program A, could no longer be viewed in program B, both tools being used in parallel. But the anomaly didn't gain enough attention, as it looked like an edge case and customers never experienced any issues, until "slap bang" a customer could no longer export their documents because of edited grey-scale documents. What followed were weeks of investigation and experiments/workarounds, all without a hunky-dory solution. So yes, that cartoon was like a precursor of the next ugly thing to happen.


ThanX to The Testing Planet magazine editors who were so kind as to publish my cartoon in their issue 8.

Sunday, April 15, 2012

Good old times

Almost every week, I get some update notification on my smartphone. Fortunately, I can decide on my own when to download and/or install it.
Where I work, customers have no such choice. When we upgrade our releases, every customer worldwide either gets some new features or suffers from the fact that we have to deliver another series of extra releases to fix what we've broken in the previous update. Anyhow, it is only a question of time until cars are also equipped with software that you may or may not need to upgrade/patch on a regular basis to keep them running, while all those oldies will still work fine without.



Friday, April 6, 2012

Automated Test Script Disease #1

If it is not a new, unexpected confirmation dialog box that shows up, then it may be a unique GUI control identifier that either changed or is missing altogether. The response could also be slower than originally recorded, which means we have to either extend the script with hard-coded extra wait statements or, if you are smarter, write some extra logic which always waits for the control to finish loading before it gets "touched". The object could also be at a different location, let's say in a list that is sorted differently than before. Hope you didn't record the position but negotiated with the developer for a predictable algorithm, so your robot is still able to reach the object even when it shows up somewhere else.

Poor you if you detect that the developers now use third-party controls that your test automation tool can't deal with. You might also experience ugly test script or module dependencies. As if this wasn't enough, you may work in a world where testers don't control the test environment, where others can change or update the configuration without you knowing, until you see the script failing. You may also experience the deployment of a new software version in the middle of test execution, or while you are playing Sherlock Holmes investigating a real fat bug. Your script might only work from your workstation but not from the server you're triggering and executing it from ...from one day to the other.
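
The "extra wait logic" mentioned above is, for example, what Selenium's explicit waits give you. A minimal sketch, with a made-up URL and element ID:

```python
# Sketch: wait for the control instead of hard-coded sleeps, using
# Selenium explicit waits; the page URL and element ID are invented.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
driver.get("https://test-system.example.com")  # hypothetical test environment

# wait up to 30 seconds for the button to become clickable, then click it
ok_button = WebDriverWait(driver, 30).until(
    EC.element_to_be_clickable((By.ID, "ok-button"))
)
ok_button.click()
driver.quit()
```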

And often, even if none of those typical UI test automation challenges is one you face today, you will still have to sit there watching the script run, so you're ready for some extra test script babysitting. If you weren't there observing your scripts but went out for a coffee instead, don't expect your robot to have successfully completed its job by the time you're back at your desk...

Those are just a few of the reasons why I love testing below the UI so much.

Wednesday, March 28, 2012

New procedure for build breakers

A new habit is about to start, and it reminds me of the Dark Ages, where thieves were put in the pillory so everyone could see and shout at them.

Saturday, January 14, 2012

Giraffe Accessible, The Workaround

A workaround was quickly found, although in the long run it wasn't the most comfortable position.

Friday, January 13, 2012

Giraffe Accessible

A reference to compatibility in general, for instance browser compatibility: what works in FF doesn't necessarily work in IE and/or other kinds of web browsers.