Friday, December 3, 2010

Minor Risk Patches

Testers like to demonstrate that the software is not ready to ship (yet), but sometimes they fail miserably at making that clear enough..., and the customer might get a nasty surprise.

Vito, my colleague, once said: "Hey, we're really good at working wonders, but sometimes, it just acts up."

Friday, November 12, 2010

The Unexpected Ugly Bug

I remember the time when we had only a few automated tests and a big set of manual test cases. As the number of features grew, so did the number of test cases. The number of available testers stayed constant, and we didn't manage to execute all the test cases. At that time I started to rate the test cases based on their likelihood of detecting bugs. Whenever we executed a test case that helped uncover a bug, we added a link to the defect's ID. Each time the test suite was executed again, the tester was notified about the number and the priorities of the defects it had found in the past.
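The rating scheme above can be sketched in a few lines. This is only an illustration of the idea, not the tool we actually used; the field names, priority levels, and weights are my assumptions:

```python
# Sketch: ranking manual test cases by their defect-finding history.
# Each test case carries links to the defects it helped uncover in the past;
# higher-priority finds weigh more. Data layout and weights are illustrative.

def defect_score(test_case):
    """Weight past defects by priority: high-priority finds count more."""
    weights = {"high": 3, "medium": 2, "low": 1}
    return sum(weights[d["priority"]] for d in test_case["linked_defects"])

def rank_for_execution(test_cases):
    """Order test cases so the historically most productive run first."""
    return sorted(test_cases, key=defect_score, reverse=True)

suite = [
    {"id": "TC-1", "linked_defects": []},
    {"id": "TC-2", "linked_defects": [{"priority": "high"}]},
    {"id": "TC-3", "linked_defects": [{"priority": "low"}, {"priority": "low"}]},
]
ordered = [tc["id"] for tc in rank_for_execution(suite)]
print(ordered)  # TC-2 first (score 3), then TC-3 (2), then TC-1 (0)
```

A scheme like this is exactly what the next story puts into question: a score of zero only means the script found nothing so far, not that it never will.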

That was a practical approach, but also a risky one, considering what happened to me just last week at my current employer.

I had an automated test script which couldn't be executed due to some problems with the new version of the test automation framework we were using. Since there wasn't enough time to investigate, I just skipped the script, reported "FAILED due to unknown reasons", marked it as low priority, and continued with other tests. I did this because the component it was testing had never shown any defects in the past, so I thought I could easily put it aside for a while until I found enough time to investigate what was wrong.

When I finally had the time and managed to "revive" the script, it uncovered a really ugly bug, and it was almost too late to fix it for the release we were about to deploy. Luckily for me, the release deployment had to be shifted by one week due to another issue, and it wasn't a big deal for the developer to fix it.

Anyhow, I felt uncomfortable about finding this issue so late in the process when it could have been uncovered much earlier. What does it tell me? Even a test case or script which may look useless for many test cycles may become an important script at a later point in time. Today, when writing new scripts or determining the priority of existing ones, I ask myself what could go wrong if that script isn't executed, rather than thinking of the success rate in terms of "how many defects did it uncover in the past".



This cartoon, along with the text, is now also published on the STQAMagazine's blog
(and, later on, also in the STPMagazine print version of January 2011, first page).

Monday, October 25, 2010

Revolt of the Test Scripts

If testers are not invited early enough, they will not have the chance or the time to determine whether they have to adapt their existing scripts. As a matter of fact, some scripts will have to be deactivated, since they start to report errors that aren't any. Someone smart once called these "false negatives"; consequently, the opposite exists too (false positives). See also the blog entry from July 2009 (The Test Successfully Failed).


...by the way, test tools and/or your own test scripts can be buggy, too (here, a false negative).

...in the meantime, I found out under which conditions this situation occurs. In the NUnit test automation framework, even if all your test scripts run fine, the whole suite may report a failure if something goes wrong outside the test scripts. The TearDown method is one that gets called automatically after each script and/or after the suite is completed. At that point, all scripts have already reported a successful run, but TearDown can still throw an exception, for instance if your implementation invokes a call on a null object (null reference exception). In such a case, NUnit reports a failed suite although all child scripts passed their tests.

Monday, October 18, 2010

Automated Testing 4 Oil

It is easy to build a new test automation suite from scratch. It is a different story to keep it running for a lifetime. Changes in the software under test will be implemented, and one day your scripts will need to be adjusted to the new situation. Some scripts will become obsolete, some new scripts will need to be added, and some just won't work anymore.

The more test cases you have, the more difficult it is to keep them up to date and have them provide the same benefit they did at the time they were born and ran successfully. More and more scripts will start to fail, and if you don't have the time to analyze the root cause and fix the scripts, the number of scripts printing red alarm signals increases.

You will start to become familiar with the fact that some scripts always print red warnings, and the day isn't far off when you no longer even notice that more and more scripts suffer from the same issue.
Probably you don't even trust your scripts anymore, since they may report so-called false negatives. Make sure you keep them up to date at the time they fail. We call that script maintenance, and yes, this not only applies to manual testing; it is especially true for automated testing, too.

This is my second cartoon published in a printed magazine (STQA Magazine, September/October 2010). 
ThanX to Rich Hand and Janette Rovansek.

Wednesday, October 6, 2010

Mission Impossible in Paris


I just came back from a holiday trip in Paris where we met good old friends from Scotland.
While I could relax from the stressful tester's life, I had a nice déjà vu on my last day in Paris, at the Centre Pompidou.

My kids were playing with a conveyor machine in the play area, where you could pick colorful everyday items out of a big repository and put them on the conveyor belt. Someone could then turn the wheel, and all those items were transferred back to the repository.

What first looked like a boring task to me appealed so much to my kids and the other children that it was hard for me to get them away from the play zone to enjoy the beautiful exhibition. However, the wheel of the conveyor regularly jammed, so the kids were no longer able to turn it. As a consequence, no items could be transferred back to the repository. The reason: the stack of items in the repository gained so much height that it reached the thin vertical slot of the conveyor after only a short while of running. Because the kids kept pulling the wheel, some items were drawn into the slot and blocked the machine completely.

I spent most of the time making sure the stack was kept at a safe height so nothing could disappear into the slots, thereby making it easier for the kids to keep playing with the conveyor. While doing this, I noticed another small item had got stuck in the slot; it did not cause the conveyor to block completely, but it slowed it down. The kids had to pull hard on the wheel to get their items transferred back into the repository. I tried to get the "bug" out so the kids would have a much easier job.

I failed miserably, because none of the kids could stand still for a moment, and I risked my fingers each time I tried to get the "bug" out of the slot. Whenever I almost had it, another kid took control of the wheel and made me pray for my fingers. Surprisingly, I was the only parent trying to help those kids and keep the machine running smoothly. When I finally decided to step back and watch the scene from a distance, I saw kids pulling the wheel even harder when it no longer worked. Nobody really took the initiative to scale down the size of the stack.

Thinking about the issue, it would have been easy for the local management to fix the problem. First, there were way too many items in the repository. Fewer items would have solved the issue, because the stack would never have had a chance to grow so fast and so high. In other words, with fewer items in the repository, the conveyor would never have had such a continuous maintenance problem. Second, nobody really seemed to care about a badly moving conveyor. Instead of someone stopping the kids from using the wheel for just one minute and solving the root cause that slowed down the conveyor belt, local management just accepted (or wasn't aware of) the fact that those poor kids had to "work hard" and spend too much effort.

At this moment I realized the marked similarities to my job as a software tester and test automation engineer where we experience such situations almost every day...

Thursday, September 2, 2010

Told Ya

It can be both frustrating and encouraging to see a software package get rolled out / deployed without ever having passed through the hands of a tester.

Frustrating, because you couldn't convince anyone not to roll out an untested release.

Encouraging, since you get an immediate demonstration of what the costs of a missing tester are.

Sunday, August 8, 2010

Less Room for Bugs

We have automated, keyword-driven UI tests, reflecting exactly the workflow users follow in our programs, or let's say, exactly the way we believe the customers still use them.

In addition, we run component tests below the UI (WebServices), automatically invoked by a continuous integration server, covering nearly 100% of the interfaces our company offers to its customers and making our systems testable from below the UI.

We also regularly execute manual tests on an end-to-end basis to complete our "spectrum" of available testing techniques... and yet, bugs still make it through to the field. There is not much we can do about it. This is just an accepted fact, no matter how much effort you spend trying to avoid it.

As a tester I celebrate when the software goes belly up, but of course only if this occurs during the testing phase. Like all other contributors to writing/developing software components and systems, I too am "praying" that we don't get any surprises once the software is shipped or deployed online. If customers find reasons to complain in spite of all the strategies we apply to stop severe bugs from bothering them, it is our duty to question ourselves and our activities each time from scratch.

The one question which always stands out is: "Could we have identified this particular bug before the customer had the pleasure of experiencing it?" In most cases, I must admit, we could not, and the reason is obvious:

Following a plan and ensuring execution of all agreed test cases within the given time does not guarantee that new bugs impacting our customers' workflows can be ruled out. In our particular situation, I see an increasing number of bugs slipping through due to poor communication with the testing department. If testers don't know which features customers are using, and how, then how can one expect the testers to check the right areas? Well, the good thing about such bugs is that we learn from them and ask the right questions. That bug will surely not show up again, since it is now covered in the test suite. So which one will show up next...?


By the way, this cartoon is my first cartoon that got published in a PRINTED magazine. ThanX to "The Testing Planet" aka. softwaretestingclub.com

Wednesday, June 16, 2010

Vuvuzela Testing

Software testing here not only happens under annoying background noise; it is also a great challenge for people who want to test their nerves under constant exposure to weird irradiation.

Wednesday, June 9, 2010

Do-it-yourself Virus

Great innovations and lots of new ideas are emerging.

Thursday, May 13, 2010

Behind the Scenes of Chaos Software Ltd.

If you are familiar with Gary Larson, you might recognize the scene as closely related to one of his creature cartoons, where the dinosaur stands in front of its calendar, striking out each day ("kill something and eat it"). I hope Gary forgives me for the similarities, but the scene fits perfectly with our common experience.

Wednesday, May 5, 2010

It Works on My Machine

I am sure that every tester has received a message like this at least once in his life as a tester. But to be honest, it has happened only rarely to me.
Such a rare incident occurred recently when we detected an optimistic lock exception in one of the WebServices under very specific caller scenarios on one of our integration test environments.

However, the developer could not reproduce the error on his workstation, and although he was already running a newer version of the code, he was actually right.

It turned out the problem occurred only in a clustered environment, where several containers and nodes run in parallel. Lesson learned: it doesn't have to be the programmer who broke a piece of software. Sometimes other influences lead the software to behave differently; here it was the environment in which the software was running.
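For readers who haven't met an optimistic lock exception before, here is a minimal sketch of the underlying idea, assuming a simple version-counter scheme (the class and exception names are invented for illustration, not taken from our WebServices):

```python
# Sketch: optimistic locking with a version counter.
# Two concurrent writers read the same version; the second commit is
# rejected, the way our clustered nodes tripped over each other.

class OptimisticLockError(Exception):
    pass

class Record:
    def __init__(self, value):
        self.value = value
        self.version = 0

    def read(self):
        # Each caller snapshots the version its update will be based on.
        return self.value, self.version

    def commit(self, new_value, based_on_version):
        # Reject the write if someone else committed in the meantime.
        if based_on_version != self.version:
            raise OptimisticLockError("record changed since it was read")
        self.value = new_value
        self.version += 1

record = Record("initial")
_, v_node_a = record.read()
_, v_node_b = record.read()              # second node reads the same version

record.commit("from node A", v_node_a)   # succeeds, bumps version to 1
try:
    record.commit("from node B", v_node_b)  # stale version: rejected
    conflict = False
except OptimisticLockError:
    conflict = True
print(conflict)  # True
```

On a single developer workstation there is only one "node", so the second, conflicting reader never exists, which is exactly why the bug was invisible there.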

Tuesday, April 13, 2010

Defibrillator

Actually a good idea to install a defibrillator in a place where all hell breaks loose with this never-ending deployment virus. I just hope no one gets the idea of using this machine to improve test efficiency by giving some testers an extra shake...

Different Goals


When a tester doesn't find any cool bugs, he basically has two options to think about.

a) The software is good enough to go live
b) The software wasn't tested well enough

Typically, the developer thinks it is "a". And, of course, the tester always fears it could be option "b".

Sunday, April 11, 2010

Very First Exception

While I got some emails in which people rated this cartoon as excellent, I must admit that quite a bunch of people didn't get the point.

I don't know how the story of Aladdin is told in English-speaking areas, but here in Switzerland, and probably also in Germany, you must take a cleaning rag and wipe it over the oil lamp (well, here it's rather a teapot) to get the ghost out of it. In this cartoon it is the small bug that crawled over the teapot. He triggered the event for the ghost to come out, and of course, the ghost didn't see anybody.

Cartoons that need explanation are probably not worth publishing, but here I still think it's worth it.

Someone asked me whether ghosts really cast a cloud over the floor... Actually a good question. I don't know; I never met a ghost.

Saturday, April 10, 2010

The Worm Catcher

I'd better not comment on this one; maybe at a later point in time.

Monday, April 5, 2010

Chaos Theory

It was quite a challenge to give this hustle and bustle a unique and accurate name, until Einstein crossed my mind...

Wednesday, March 31, 2010

Magic Black Box in Your Car

Many car owners probably have not yet realized that their new cars are already equipped with blackbox systems which have lots of sensors and log many operations executed by the driver.

The question here is: who is actually "watching" you?

According to the TCS (Touring Club Schweiz), there are reported cases where car owners lost their warranty claims just because the manufacturer could prove they had stressed the motor by driving in a suboptimal gear.

Source: Tagesanzeiger, March 11, 2010
Direct link: "Wer ein neues Auto fährt, kann nichts verbergen" (roughly: "If you drive a new car, you can hide nothing")


Tuesday, March 30, 2010

The Printer Test

Every couple of years we start inventing new processes which don't solve the problems we introduced with the previous processes.

Sunday, March 28, 2010

Invalid Test

How often have you run into weird behavior of the SuT and then tried hard to find evidence for the phenomenon? Developers sometimes just don't believe you until you can prove it with a video. Screenshots are no longer hard facts, since they don't demonstrate what steps you took along the way.

The time spent on investigation can grow dramatically. The more time you spend finding the root cause, the more pressure there is on you to be successful, so that the time spent searching is justifiable.

Woe betide you if you can't find it after so much time has been wasted.

Woe betide you if you cannot find the bug but the customer will.

Woe betide you if you found the bug and reported it, but someone still thinks it is acceptable enough for the customer... and then the customer doesn't like it...

Saturday, March 13, 2010

Husky Adventure in Oberwald Switzerland

For once, something different to let it all hang out. I took these (and many, many more) pictures today at Oberwald, Wallis (Switzerland).

Friday, March 12, 2010

Rose Colored Glasses

The biggest shock to most industry candidates is the sheer size of the test discipline at Microsoft and the amount of clout they wield. The developer to tester ratio at Microsoft is about 1 to 1. Typical for industry is about 5 developers to 1 tester and sometimes 10 to 1 and higher.

When you are outnumbered like that, a test organization is usually focused on survival rather than evolving their engineering skills and practices.

Source: How We Test at Microsoft, MicrosoftPress (Page, Johnston, Rollison)

Thursday, March 11, 2010

How to confuse intruders and keep them away from your site

Some improvements have been made to the security of our web site by suppressing detailed information about what went wrong during a B2B transaction, or by providing information that is useless to attackers. While the information provided before was helpful for support and testing to quickly find the root cause of a failed data upload, this has now become almost a mission impossible, and the reactions, namely from support, came promptly.

(c) Mueller & Zelger

Tuesday, March 9, 2010

The Exciting Job of a Tester

Testers like to hunt for and find stuff. The hunt is exciting, and finding an error is the ultimate motivation.

If you work for the type of organization that is not focused on quality and does not recognize or fix anything your testers have worked so hard to find, a test team is going to view that as a lack of respect for them or their work. And if you don't give your testers the respect they deserve, you'll demoralize them pretty quickly.

Source: Beautiful Testing, O'Reilly ("Was It Good for You", by Linda Wilkinson)

Wednesday, March 3, 2010

There is nothing like starting young

I fully agree that test automation plays a key role in helping the testing team cope with rapid changes. But what I also see is that many stakeholders in the software development process have an odd understanding of what test automation entails. It doesn't come to you automatically.

While most managers agree that test automation is important, interestingly many have no clue that efficient test automation (and also manual testing) requires the code to be testable.

Testability seems to be a foreign word, and it is therefore not understood as one of the keys to a successful implementation of test automation. So why is this important part not pressed ahead?

The opposite is often the case. Making the code testable is considered a low-priority "feature", with the result that the effort required for test automation increases dramatically. And then people ask why the test team can no longer cope with the vast amount of changes. I hear phrases like "Oh yes, that is a nice idea, but we will do this later". Later here means a time when no one has the time to do it anymore, and when it is too late for the test team.
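To make "testable code" a bit more concrete, here is a tiny sketch of one common testability feature, dependency injection. The greeting example is entirely made up; the point is only the shape of the change, isolating a dependency so a test can substitute it:

```python
# Sketch: testability via dependency injection (illustrative example).
# The untestable version reaches out to the real clock; the testable one
# accepts the clock as a parameter, so a test can pin the time.

from datetime import datetime, timezone

def greeting_untestable():
    # Hard-wired dependency: a test cannot control the hour.
    hour = datetime.now(timezone.utc).hour
    return "Good morning" if hour < 12 else "Good afternoon"

def greeting(now_fn=lambda: datetime.now(timezone.utc)):
    # Injected dependency: production uses the default, tests pass a fake.
    hour = now_fn().hour
    return "Good morning" if hour < 12 else "Good afternoon"

# A test can now fix the time and assert deterministically.
fixed_morning = lambda: datetime(2010, 3, 3, 9, 0, tzinfo=timezone.utc)
print(greeting(now_fn=fixed_morning))  # Good morning
```

The change costs the developer a parameter; skipping it costs the test team sleepless nights around clocks, databases, and networks they cannot control.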

Some also believe the fairy tale that test automation saves tester resources.

In all those years of being involved in a handful of testing and test automation projects, I have noticed an ever-recurring pattern in this scene. The number of features grows over time, and as a consequence, so does the number of test cases that should be executed.
However, since the number of testers remains constant regardless of these facts, it ends with the testers limiting their activities to scratching the surface of the AUT, instead of practicing the theories all those testing certification authorities are trying to teach us.

I usually feel like I'm sitting in a fast train that never stops, and I can only hope that we don't hit the buffer stop at full speed like Toyota did recently.

So where is the link to the cartoon? All evil starts with missing testability, and only today I learned once more that I was involved too late to bring in any meaningful testability features that could have helped us test more efficiently.

Hey Linda, when you read this: yes, this is the feedback I provided on your article at StickyMinds only today. By the way, I have also read your other articles, and you should know that your book was one of the very first I ever read about test automation, many years back.

Sunday, February 21, 2010

Back From Skiing

No fun today. Just came back from my family winter holiday in the Engadin, Switzerland. I shot tons of such pictures with my Canon 40D and a Tokina wide-angle lens (11-16). On the left: Julia Pass; on the right: La-Punt Chamues-ch, view towards Samedan and St. Moritz.

Monday, February 8, 2010

Return of Nemesis

Last autumn I attended a fascinating talk by a Swiss astronomer at the University of Basel.
He explained the theory that our Sun may have a twin star moving in a huge elliptical orbit, which may be held responsible for the mass extinction of the dinosaurs.

Here is a very good description of Nemesis which I found on WIKI:

Suppose our Sun was not alone but had a companion star. Suppose that this companion star moved in an elliptical orbit, its solar distance varying between 90,000 a.u. (1.4 light years) and 20,000 a.u., with a period of 30 million years. Also suppose this star is dark or at least very faint, and because of that we haven't noticed it yet.

This would mean that once every 30 million years that hypothetical companion star of the Sun would pass through the Oort cloud (a hypothetical cloud of proto-comets at a great distance from the Sun). During such a passage, the proto-comets in the Oort cloud would be stirred around. Some tens of thousands of years later, here on Earth we would notice a dramatic increase in the number of comets passing the inner solar system. If the number of comets increases dramatically, so does the risk of the Earth colliding with the nucleus of one of those comets.

When examining the Earth's geological record, it appears that about once every 30 million years a mass extinction of life on Earth has occurred. The most well-known of those mass extinctions is of course the dinosaur extinction some 65 million years ago. About 25 million years from now it's time for the next mass extinction, according to this hypothesis.

This hypothetical "death companion" of the Sun was suggested in 1985 by Daniel P. Whitmire and John J. Matese, Univ of Southern Louisiana. It has even received a name: Nemesis. One awkward fact of the Nemesis hypothesis is that there is no evidence whatever of a companion star of the Sun. It need not be very bright or very massive, a star much smaller and dimmer than the Sun would suffice, even a brown or a black dwarf (a planet-like body insufficiently massive to start "burning hydrogen" like a star). It is possible that this star already exists in one of the catalogues of dim stars without anyone having noted something peculiar, namely the enormous apparent motion of that star against the background of more distant stars (i.e. its parallax). If it should be found, few will doubt that it is the primary cause of periodic mass extinctions on Earth.

Some scientists thought this Nemesis theory was a joke when they first heard of it -- an invisible Sun attacking the Earth with comets sounds like delusion or myth. It deserves an additional dollop of skepticism for that reason: we are always in danger of deceiving ourselves. But even if the theory is speculative, it's serious and respectable, because its main idea is testable: you find the star and examine its properties.

There are programs running in which astronomers are searching for the star. If the demon star exists, we will surely know within the next two years.

Sunday, February 7, 2010

How the Mighty Fall

On the occasion of the recent problems at Toyota, Akio Toyoda, the new corporate CEO of Toyota, cited a few phrases from Jim Collins' book "How the Mighty Fall". He also estimated that Toyota has already reached level 4.

Saturday, February 6, 2010

Nespresso Again

You may remember last year's post titled "Don't Feed Your Plants", where the gardeners complained about the fact that some people seem to feed their plants with coffee.

Since our plants feel so sick, the company has now decided to have the gardeners check the flowers every third week.

So, my dear plants...no more coffee, is this clear!
=;O)

Saturday, January 9, 2010

Think Positive

We start the new year as we ended it. Full thrust into abyss...