Thursday, December 23, 2021

Cheerful debugging messages and their consequences

Over a year ago, we tested the automated printing of a clerk's signature on letters being sent to doctors, lawyers, etc. There were some issues where the signature was missing, so we marked the defective signature template with a debugging text. The idea was to test whether the template was processed at all or whether the problem lay within the signature itself.

I don't know what came over me when I used "meow-meow" as the debugging text. Probably, Luna is to blame for it. Luna was our 16-year-old cat, who died shortly before. She's now immortalized in this (less testing-related) cartoon.

We quickly found the cause of the missing signature, fixed the bug, and shipped the fix to the customer along with an updated template. Unfortunately, I forgot to remove the debugging text from the template. The customer found the problem during their internal beta test. We fixed the template, shipped it, no big deal, case closed.

But about half a year later, the customer reported having seen a letter in the production document management system containing the complimentary close "meow-meow" right below the printed signature.

I could feel the blood rushing to my face. How the heck could this happen again? Even though we don't execute all regression tests each time we ship a minor release or hotfix, the print-out is part of all smoke tests. My thoughts were therefore "this simply can't be true", because we had never seen any such text printed on any of the letters that ended up on our printers.

A little investigation, and it soon became clear what had happened. Even though we had originally sent the correct template, there was still a copy of the old "meow-meow" template around. When someone decided to include the template in the git repository, they took the wrong one. As a result, the old signature template was deployed again with the next minor software version.

While this explains how the bug was re-introduced, it does not explain why testers did not find the issue while testing the print-out.

Further investigation revealed that even though ALL letters processed the "meow-meow" signature template, NOT all letters printed the text below the signature. It depended on several things, such as the size of the signature and the configuration of the target letter being sent out. If there was a fixed distance between signature and complimentary close, the text "meow-meow" simply didn't fit in between and wasn't printed. If, instead, the letter was configured to grow dynamically with the size of the signature, then "meow-meow" made it to the printer as well.

Without going into too much detail, only one particular type of letter was affected and, fortunately, it was a letter that customers sent only to internal addresses (like an internal email) and not to stakeholders outside the company. Lucky me!

Well, not really...because...

...one of the customers told me that - even though this letter is sent only internally - it may still be used as an enclosure for other letters going out to lawyers and doctors. The company feared they'd lose credibility if such letters reached their customers. It was therefore important to remove all identified documents from production before they paid a high price for this slip.

What are the lessons learnt?
First of all, it could have been worse. I mean, if you get a letter that ends with "meow-meow", it's likely to put a smile on your face. In my 20-year career as a tester and developer, I've seen comments and debugging texts that are much worse and sometimes below the belt. My friends have told me several stories about similar happenings.

A colleague told me they had added a jokey test text to their ATM software. The text was triggered when a card was about to be disabled. In production, their boss wanted to withdraw money from the machine; the card got blocked and showed the text "You are a bad guy. That's why we take your card away". I only know he wasn't amused about it.

Looks like we are not alone.

But, as we have just learnt, it depends on who is affected and who will read it. I can only suggest never typing anything like that in any of your tests, because you never know where it's going to show up.

Instead of "meow-meow", a simple DOT or DASH would have done the same trick, and wouldn't have raised the same kind of alarm in case such debugging messages made it to the bottom of a letter.
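In hindsight, a simple guard in the delivery pipeline could have caught the slip. Here is a minimal sketch of such a check (the marker list, the "templates" folder and the *.tpl extension are made-up assumptions, not our actual setup): it scans all shipped templates for known debug markers and fails the build if one is found.

    # check_templates.py - a sketch of a pre-release guard against
    # forgotten debug texts. Folder name, file extension and marker
    # list are illustrative assumptions, not our real tooling.
    import pathlib
    import sys

    DEBUG_MARKERS = ["meow-meow", "TODO", "FIXME", "DEBUG"]

    def scan_templates(folder: str) -> list[str]:
        """Return a list of 'file: marker' findings across all templates."""
        hits = []
        for path in pathlib.Path(folder).rglob("*.tpl"):
            text = path.read_text(encoding="utf-8", errors="ignore")
            for marker in DEBUG_MARKERS:
                if marker in text:
                    hits.append(f"{path}: contains debug marker '{marker}'")
        return hits

    if __name__ == "__main__":
        problems = scan_templates("templates")
        for problem in problems:
            print(problem)
        sys.exit(1 if problems else 0)  # non-zero exit fails the build

Run as part of the release job, such a check would have rejected the wrong template picked from git before it ever reached a customer.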


This was the last cartoon and blog entry for 2021. There are more to come next year. Please excuse that this wasn't a purely testing-related cartoon, but the story still is.
I wish you all a merry Xmas and a happy new year. I hope you enjoy the blog and continue to visit me here regularly.

Friday, October 22, 2021

Test Patterns out of control

In my blog entry Apostrophe, Ampersand, Exclamation mark & Co. earlier this year, I highlighted how important the use of special characters has become for our team during testing. A closer look at the article shows that these characters aren't really exceptional in real life. They are commonly used; actually, more common than most of us initially thought. It's therefore all the more important that we don't treat these characters as special cases used by testers only.

I smirked when I recently noticed how many fake persons with German umlauts, apostrophes, etc. had been added to our reference test data. Looks like a lot of testers are eager to find crazy but still realistic data. It has become much harder to find vanilla data. Mission accomplished.

Sunday, August 15, 2021

Accurate Test Reports

One of the main challenges I have faced during my 20+ year career as a software tester and test manager is resisting project and release delivery pressure.

When deadlines are close - and the software under test is still not ready to ship - testing automatically becomes a "virtual" bottleneck.

Resisting the temptation to shortcut testing, and keeping thoughts like "it's not going to be that bad" out of the way, can be extremely stressful.

Deploying a release on time but in poor quality will almost certainly fall back on QA. Critical questions are usually addressed to the testers first.

On the other hand, shifting a release date may put customers in an uncomfortable situation if they have already booked resources for their final acceptance tests and/or training. Plus, it will trigger questions like "why did the testers not find these issues earlier?"

To be honest, I have sometimes given in to the temptation to take such shortcuts, and in most (but not all) of these cases it surprisingly worked out, even though these were tense moments. Shortcutting testing can work well if you stick to the high-priority test cases first, plus those test cases you assume to be affected by the latest changes.

But accurate prioritization of test cases is only possible if you know the software and your customers well. Prioritizing test cases works if you have a good sense of where the SUT is likely to break or survive depending on the change applied. This includes knowing how customers use your program. Some bad experiences in the past might help here. Don't forget to visit your customers from time to time. You can get valuable insights when you see them working with your system.
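One way to make such prioritization executable is to tag test cases and run the critical subset first. A minimal sketch, assuming pytest (the marker names "high_prio" and "affected_by_change" are my own invention, not pytest built-ins, and would need to be registered in pytest.ini):

    # test_invoice.py - priority tags on test cases, so a shortcut
    # run can select the critical subset first.
    import pytest

    def compute_total(items):
        # Stand-in for real application logic under test.
        return round(sum(items), 2)

    @pytest.mark.high_prio
    def test_invoice_total_is_correct():
        assert compute_total([10, 20]) == 30  # core business rule

    @pytest.mark.affected_by_change
    def test_rounding_touched_by_latest_change():
        assert compute_total([0.1, 0.2]) == 0.3  # area of recent changes

    def test_pdf_footer_layout():
        pass  # low priority: cosmetic, runs only when time permits

When time is short, pytest -m "high_prio or affected_by_change" executes only the prioritized subset; the full suite still runs when the schedule allows.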

Speed is an important element in testing.
High-priority bugs are found earlier if high-priority test cases are executed first. The longer QA needs to prove the software isn't worth shipping, the higher the risk that people see QA as perfectionists trying to find and report all possible bugs. Once QA gets such labels, its reports are interpreted accordingly. At worst, testers are considered a thwarting element in the delivery process.

When in doubt (and the available time does not allow digging deeper into the analysis of doubtful areas), add these concerns to the report. Be as accurate as possible... and, for God's sake, don't let management make the test manager provide the final GO for the release. Instead, invite all important stakeholders and put all the facts on the table. This includes what you know and what you don't know. Then let the "go-live" decision be owned by the team.

It has worked for me...sometimes  =;O)

Saturday, May 1, 2021

Apostrophe, Ampersand, Exclamation mark & Co.

Ever since I have been in the testing business, I have been "preaching" the use of special characters and umlauts wherever and whatever we test.

Umlauts (ä, ö, ü) and characters like é, â, etc. are very common in Europe. In the US and elsewhere, you often see last names with a single quote, such as O'Neill or O'Connor, or think of company names like Macy's and McDonald's.

Sometimes we forget to include these characters in our tests, and often we promptly pay the price in the form of bugs reported in the field.

But awareness is increasing in our team. I am really delighted when I see a colleague posting notes in the group chat, yelling:

"Hey buddy, there were no special chars in your demo. Will it work with an exclamation mark, too?"

I love that, but I also recommend first testing the happy case using simple inputs. There is no point in attacking an application with special chars if it can't even deal with the basics. Once we have convinced ourselves that the happy case works, we move on to feeding the program more interesting input.

It's always exciting to watch how a system deals with a combination of chars like
"{äöü},[áàâ]/|éèê|\ë!:;+(&)'%@". But if such a test fails, developers and/or business analysts will likely give you the hairy eyeball when they see a defect documented with such example input. They'll probably close the issue unfixed, assuming such inputs are unrealistic and will never be seen in production.

What has worked better for us: use these characters in a more meaningful context, so it becomes obvious to all stakeholders that the problem you spotted is worth fixing.

For example, we have made it a habit to use M&M's Worldstore Inc. whenever we use or create a company address. That's not because we love chocolate so much, but rather because it has a mix of special characters that have all caused headaches in our past and current careers as software testers: the ampersand, the apostrophe and the dot.

We apply the same approach to street names and ZIP codes. It is a common misbelief that ZIP codes need to be numeric. Visit the UK and check what their postcodes look like. Street names can contain slashes, dashes and single quotes. Street numbers can be alphanumeric, separated by a slash.

A blank in a name is not exotic. Look at Leonardo Di Caprio or Carla Del Ponte. We have seen programs that cut away the second part, leaving the famous actor with the short name "Di" or "Del".

Below are a few examples of names we regularly use in testing (a reusable sketch follows the lists below):

  • M&M's Worldstore
  • Mr. & Mrs. O'Leary
  • Léonardo Di Caprio 
  • Renée-Louise Silberschneider-Kärcher
  • Praxisgemeinschaft D'Amico & Ägerter GmbH

Example street names:

  • 29, Queen's Gardens
  • 4711 Crandon Blvd., Apartment F#1000
  • Rue du Général-Dufour 100-102
  • 55b, Rue de l'Université 
  • Elftausendjungferngässlein 12b

and ZIP-Codes:

  • 4142 Münchenstein 1 (Switzerland)
  • 33149 Key Biscayne, FL (USA)
  • EH4 2DA Edinburgh (Scotland)
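To keep such data at hand and actually exercised, it can live directly in a parameterized test. A minimal sketch, assuming pytest; create_contact() and the Contact class are hypothetical stand-ins for whatever entry point your system offers:

    # test_special_characters.py - feed the tricky names and streets
    # from the lists above through a contact-creation round trip.
    import pytest

    class Contact:
        def __init__(self, name: str, street: str):
            self.name = name
            self.street = street

    def create_contact(name: str, street: str) -> Contact:
        # Hypothetical stand-in for the system under test.
        return Contact(name, street)

    TRICKY_NAMES = [
        "M&M's Worldstore",
        "Mr. & Mrs. O'Leary",
        "Léonardo Di Caprio",
        "Renée-Louise Silberschneider-Kärcher",
        "Praxisgemeinschaft D'Amico & Ägerter GmbH",
    ]

    TRICKY_STREETS = [
        "29, Queen's Gardens",
        "4711 Crandon Blvd., Apartment F#1000",
        "Rue du Général-Dufour 100-102",
        "55b, Rue de l'Université",
        "Elftausendjungferngässlein 12b",
    ]

    @pytest.mark.parametrize("name", TRICKY_NAMES)
    @pytest.mark.parametrize("street", TRICKY_STREETS)
    def test_special_characters_round_trip(name, street):
        contact = create_contact(name, street)
        assert contact.name == name      # stored text must come back unchanged
        assert contact.street == street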

Special characters are interesting wherever you can submit text.

If you post a message like "meet you at Lexington Av/59 St" and that text is stored or exported in an XML file without the input being properly escaped or embedded in a CDATA section, you may find interesting bugs.
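What "properly escaped" can look like, sketched with Python's standard library (the message text is just the example from above, extended with an ampersand to make the escaping visible):

    # xml_escape_demo.py - escaping user text before embedding it in XML.
    from xml.sax.saxutils import escape

    message = "meet you at Lexington Av/59 St & don't be late"

    # escape() replaces &, < and > - the characters XML itself interprets.
    xml_doc = f"<message>{escape(message)}</message>"
    print(xml_doc)
    # <message>meet you at Lexington Av/59 St &amp; don't be late</message>

    # Alternative: wrap the raw text in a CDATA section. This is safe as
    # long as the text itself never contains the closing sequence "]]>".
    cdata_doc = f"<message><![CDATA[{message}]]></message>"
    print(cdata_doc)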

Backslashes are used in many programs to escape characters. Quotes and single quotes are used to terminate text elements. The latter is a common way for hackers to manipulate SQL statements so that the fired command forces the back end to reveal information not intended for the normal user. Semicolons can confuse web client logic or cause problems when information is exported to CSV files. Question marks and ampersands are both used in URL links, curly braces are common markers in REST payloads, etc.
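The classic defence against the single-quote problem is to never concatenate user input into SQL statements. A small sketch with Python's built-in sqlite3 module (table and column names are made up for illustration):

    # sql_quote_demo.py - why O'Neill breaks naive SQL, and the fix.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (name TEXT)")
    conn.execute("INSERT INTO customers VALUES ('O''Neill')")

    name = "O'Neill"

    # Naive concatenation: the apostrophe terminates the string literal
    # early and produces a syntax error - or, worse, an injection.
    try:
        conn.execute(f"SELECT * FROM customers WHERE name = '{name}'")
    except sqlite3.OperationalError as err:
        print("concatenation failed:", err)

    # Parameterized query: the driver handles quoting, input stays data.
    rows = conn.execute(
        "SELECT * FROM customers WHERE name = ?", (name,)
    ).fetchall()
    print(rows)  # [("O'Neill",)]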

I personally have never really understood what is so exciting about using the pangram "the quick brown fox jumps over the lazy dog" in testing, or the filler "Lorem Ipsum". Neither contains any interesting characters. When testing German-language programs, we prefer to use "Victor jagt zwölf Boxkämpfer quer über den großen Sylter Deich" because it contains umlauts and this weird ß character, even though ß is unknown in Switzerland, even in the German-dialect-speaking area.

Most often, we use my own pangram, which we have further adapted over time:

Chloë Valérie O'Loughlin-Bäcker runs the "Deacon Brodie" (Scottish Pub & Largest Whisky Collection) at Saint-Louis du Ha!Ha!, a town in Québec Canada. Her car is a Jaguar X and runs a max. of > 200 km/h. Isn't this amazingly fast?

It contains all letters of the alphabet, plus umlauts and a selection of interesting characters that challenge text processors. But it did not help us later find a bug in our system where an exception was thrown whenever an annotation text contained a capital Ü (a bug in Progress OpenEdge, by the way). Other umlauts were no problem, and even the lowercase ü didn't cause any harm. The system really failed only with the capital version of it. Crazy!
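A bug like this is a good argument for testing each umlaut on its own, lowercase and uppercase, so a capital-Ü failure surfaces individually instead of hiding inside one long pangram. A sketch, assuming pytest; save_annotation() is a hypothetical stand-in for the real persistence call:

    # test_umlauts.py - one parameterized case per umlaut character.
    import pytest

    def save_annotation(text: str) -> str:
        # Hypothetical stand-in: store the text, return what was stored.
        return text

    UMLAUTS = ["ä", "ö", "ü", "Ä", "Ö", "Ü", "ß"]

    @pytest.mark.parametrize("char", UMLAUTS)
    def test_annotation_accepts_umlaut(char):
        text = f"annotation with {char} inside"
        assert save_annotation(text) == text  # must round-trip, no exception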

One of my personal treasures is the Canadian town Saint-Louis-du-Ha! Ha! It is the weirdest town name I've ever heard of, because it contains two exclamation marks. Okay, the likelihood of such a city ever being entered by our customers is close to zero; Canadian citizens living near Quebec would probably disagree. Testing and test input are a question of context. Westward Ho! is a town in the UK whose name also contains an exclamation mark.

Null, True, All, Test and other funny bugs related to people's names

Besides the recommendation to use special characters, it is also worth sneaking a peek at reported stories related to people's names, like Jennifer Null and Rachel True. Their names were processed and misinterpreted as NULL [BBC16] or, in Rachel's case, as a boolean value [9TO5], causing a problem with her iCloud account. I have not experienced either of the two cases myself in any of my tests, but we found a similar issue where submitting the search term "Alli" returned documents titled "Alligator" and "Allister". All fine, but submitting "All" ended in an exception.

Stephen O, a Korean native living in the US, had been hassled by credit card companies because his last name was too short. When he applied the workaround of adding an extra "O" to end up with "OO", it didn't take long until he had a meeting with the government because he had provided false information [NYT91].
 
Graham-Cumming could not register his name on a website because the hyphen was considered an invalid character [GRA10].

There is also Natalie Weiner, who could not register either, because her name was considered offensive [PAN18].

Yet another example is the story of William and Katie Test, who were both unable to book airplane tickets simply because the system scanned names for the term "test". A match triggered a different procedure and disallowed booking tickets in production [COY17].

Wooha! That's exactly what one of my clients had implemented, too! We were surprised that no one had ever reported any issues with it. So we investigated a little and found two things. First, in Switzerland there are only around 50-60 people carrying the last name "Tester". Second, the system did NOT scan the last name but rather the first name for the term "test" to avoid operations being executed in production. Doozy! How clever!
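The underlying pitfall is easy to reproduce: a naive substring match on a person's name will sooner or later flag a real person. A small illustrative sketch (the function names and the "ZZTEST-" convention are my own invention, not the client's actual code):

    # name_filter_demo.py - why substring matching on names is risky.

    def is_test_booking_naive(name: str) -> bool:
        # Naive: any name containing "test" is treated as a test booking.
        return "test" in name.lower()

    def is_test_booking_safer(name: str) -> bool:
        # Safer: require an explicit, unambiguous marker instead of
        # guessing from a real passenger's name.
        return name.upper().startswith("ZZTEST-")

    print(is_test_booking_naive("Katie"))     # False
    print(is_test_booking_naive("Tester"))    # True - a real Swiss surname!
    print(is_test_booking_safer("Tester"))    # False
    print(is_test_booking_safer("ZZTEST-01")) # True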

References:



Monday, April 26, 2021

Esteem or Truth


At the drop of a hat, they decided to stay on the hazardous highway.

Sunday, March 14, 2021

Architecture Review

"Our architects recently studied all your ill-favored coding styles, but the good news are...we don't need any obfuscator tools anymore!"

Wednesday, January 27, 2021

Interpreting the Regression Test Activity Graph

We just completed a 1.5-week intensive manual regression test phase in which we executed almost the complete set of all (several hundred) test cases. We are in a lucky situation: our documented test cases represent nearly 100% of all implemented features. If we achieve 70-80% test coverage, we get a really good picture of the overall quality of the product increment. That means that, aside from the many automated tests, it's worth doing some manual end-to-end regression testing from time to time.

While tracking the regression testing progress in a cloud-based test case management tool, we looked at the activity graph, and it made us smile. It's exactly what we expected.

At the beginning, testers focus on executing those test cases that are well documented, have clear instructions and rely on previously well-prepared test data. I mean objects that are in specific states, where testers can execute just the transition from one state to the next and not worry about the laborious setup steps.
 
Then testers switch to more complex test cases, which take a little more time to understand and test. This is when the progress curve reaches its peak and progress starts to slow down.
 
Of course, we also find anomalies. Bugs can slow you down, because analyzing and understanding where and when defects were introduced takes additional time. After a few days, the first bugfixes are delivered, too. Developers require your attention to test their fixes. This interrupts testers working on their suite. The rate of passed tests decreases, but it still decreases in a constant and expected way.
 
In parallel, developers are already working on the next generation of the product, meaning their user stories get shipped and require testing, too. The tester's brain is now busy with a lot of context switching; clearly more than at the beginning of the sprint.
 
Now that we are more than halfway through, we switch to the monster test cases. I call them that because they do not consist of simple steps; they contain several tests expressed in tables of inputs and expected outputs. That's why I think it's nonsense to talk about the number of test cases. One test case can be atomic and executed in seconds, while another can keep you busy for half an hour or more.

Some of the test cases may be poorly documented and require maintenance or correction. Some test cases require the help of a domain expert. The additional information gained should be documented in the test suite, so we don't have the same questions next time. These are all activities running in parallel.

Last but not least, the weekend is getting closer. The first enthusiasm is gone; you're starting to get bored.
You hear music from your neighbour. The cafeteria gets louder. The sound of clinking glasses reaches your desk. It's time for a break, time to reboot your brain. TGIF! And now it's weekend time!
 
And then Monday is back! It's time for another final boost and time to say thank you. Great progress.
 
We made it, Yogi!
 
...and I like that graph.

Monday, January 11, 2021

About Bias and Critical Thinking

I recently stumbled over a small article about the "Semmelweis reflex". It was published in a Swiss magazine, and it was quite interesting, as I drew an analogy to software testing:

In 1846, the Hungarian gynecologist Ignaz Semmelweis noticed in a delivery unit that the rate of mothers dying in one department was 4 percent, while in another department within the same hospital, the rate was 10 percent.

While analyzing the reason for it, he identified that the department with the higher death rate was operated mainly by doctors who were also involved in performing post-mortem examinations. Right after an autopsy, they went to help mothers without properly disinfecting their hands.

The other department, with the lower death rate, was staffed mainly by midwives who were not involved in any such autopsies. Based on this observation, Semmelweis advised all employees to disinfect their hands with chlorinated lime.

The consequence: the death rate decreased remarkably.

Despite clear evidence, the employees' reaction remained sceptical; some were even hostile. Traditional beliefs were stronger than empiricism.

Even though this was more than 150 years ago, people haven't changed much since. We still bias ourselves a lot. The tendency to reject new arguments that do not match our current beliefs is still common today; it is known as the Semmelweis reflex. We all have our own experience and convictions. That is all fine, but it is important to understand that these are personal convictions and may not be transferred to a general truth.

How can we fight such bias? Be curious! If you spontaneously react with antipathy to something new, force yourself to find pieces in the presenter's arguments that could still be interesting, despite your current disbelief in the whole.

Second, make it a common practice to question yourself by telling yourself "I might be wrong". This attitude helps overcome prejudice by allowing new information to be considered and processed.

Back to testing:

From this article, I am learning that we should start to listen and not hide behind dismissive arguments simply because what we are told doesn't match our current view of the world.

But this story has two sides. If I can be biased, then others may be biased, too. Not all information reaching my desk can be considered right by default. The presenter's view of the world may be equally limited.

Plus, the presenter may have a goal: his or her intention may be to sell us something. The presenter's view may be wrong and based on "sloppy" analysis, if any fact collection was done at all.

Call me a doubting Thomas, but I don't believe anything until I've seen the facts.

So what?

If someone tells you "a user will never do that", question it!

It may be wrong.

 

If someone tells you "users typically do it this way", question it!

It may be an assumption and not a fact.

 

If someone tells you "this can't happen", question it!

Maybe she has just not experienced it yet, and it will happen soon.

 

If someone tells you "we didn't change anything", question it!

One changed line of code is often considered hardly any change at all, but in fact this adaptation can be the root of a new severe bug or a disaster.

 

I trust in facts only, or, as someone said: "In God we trust, the rest we test". This is the result of many years of testing software. Come on, I don't even trust my own piece of code. I must be some kind of maniac!

Sources: text translated to English and summarized from the original article published as "Warum wir die Wahrheit nicht hören wollen" ("Why we don't want to hear the truth") by Krogerus & Tschäppeler, Magazin, Switzerland, March 20, 2021.