Vito, my colleague, once said: "Hey, we're really good at working wonders, but sometimes, it just acts up.
Friday, December 3, 2010
Friday, November 12, 2010
That was a practical approach, but also a risky one if I think of what just happened to me last week at my current employer.
I had an automated test script that couldn't be executed due to problems with the new version of the test automation framework we were using. Since there wasn't enough time to investigate, I simply skipped the script, reported it as "FAILED due to unknown reasons", marked it as low priority and continued with other tests. I did this because the component it was testing had never surfaced any defects in the past, so I thought I could easily put it aside for a while until I found enough time to investigate what was wrong.
When I finally had the time and managed to "revive" the script, it uncovered a really ugly bug, and it was almost too late to fix it for the release we were about to deploy. Luckily for me, the release deployment had to be pushed back a week due to another issue, and the fix was no big deal for the developer.
Still, I felt uncomfortable that I had found this issue so late in the process when it could have been uncovered much earlier. What does this tell me? Even a test case or script that looks useless for many test cycles may become an important script at a later point in time. Today, when writing new scripts or reviewing existing ones to determine their priority, I ask myself what could go wrong if a script isn't executed, rather than judging it by its past success rate in terms of "how many defects did it uncover".
This cartoon along with the text is now also published by the STQAMagazine's blog
(and later on, also in the STPMagazine print version of January 2011 (first page)).
Monday, October 25, 2010
...in the meantime, I found out under which conditions this situation occurs. In the NUnit test automation framework, even if all your test scripts ran fine, the whole suite may still report a failure if something goes wrong outside the tests themselves. The TearDown method is called automatically after each script and/or after the suite completes. At that point, all scripts have already reported a successful run, but TearDown can still throw an exception, for instance if your implementation invokes a call on a null object (null pointer exception). In such a case, NUnit reports a failed suite although every child script passed all its tests.
Monday, October 18, 2010
The more test cases you have, the more difficult it is to keep them up to date and have them provide the same benefit they did when they were born and ran successfully. More and more scripts will start to fail, and if you don't have the time to analyze the root cause and fix them, the number of scripts printing red alarm signals increases.
You start to get used to the fact that some scripts always print red warnings, and the day isn't far off when you don't even notice anymore that more and more scripts suffer from the same issue.
Eventually you may not even trust your scripts anymore, since they report so-called false negatives. Make sure you keep them up to date at the time they fail. We call that script maintenance, and yes, this doesn't only apply to manual testing. It is especially true for automated testing, too.
This is my second cartoon published in a printed magazine (STQA Magazine, September/October 2010).
ThanX to Rich Hand and Janette Rovansek.
Wednesday, October 6, 2010
Thinking about the issue, it would have been easy for local management to fix the problem. First, there were way too many items in the repository. Fewer items would have solved the issue, because the stack would never have had a chance to grow so fast and so high. In other words, with fewer items in the repository, the conveyor would never have had such a continuous maintenance problem. Second, no one really seemed to care about a badly moving conveyor. Instead of someone stopping the kids from using the wheel for just one minute and tackling the root cause that slowed the conveyor belt down, local management simply accepted (or wasn't aware of) the fact that those poor kids had to "work hard" and spend far too much effort.
Thursday, September 2, 2010
Frustrating because you couldn't convince anyone to roll out an untested release.
Encouraging, since you get an immediate demonstration of what a missing tester costs.
Sunday, August 8, 2010
In addition, we run component tests below the UI (WebServices), automatically invoked by a continuous integration server, covering nearly 100% of the interfaces that our company offers to its customers and making our systems testable from below the UI.
We also regularly execute manual tests on an end-to-end basis to complete our "spectrum" of available testing techniques... and yet bugs still make it through to the field. There is not much we can do about it. This is simply an accepted fact, no matter how much effort you spend trying to avoid it.
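As a rough illustration of what "testing below the UI" means here: rather than scripting the browser, a CI-triggered test calls the service interface directly. Everything in this sketch is invented for illustration; the real tests would drive the deployed WebService through an HTTP/SOAP client.

```python
# Hypothetical sketch of a component test "below the UI". The operation,
# its interface, and the data are all invented; a real test would invoke
# the deployed WebService through an HTTP/SOAP client instead.

def lookup_customer(customer_id, backend):
    """Stand-in for the WebService operation under test."""
    if customer_id not in backend:
        raise KeyError("unknown customer: %s" % customer_id)
    return backend[customer_id]

def test_lookup_customer():
    backend = {"C-42": {"name": "Vito", "active": True}}
    # Happy path: the interface returns the stored record.
    assert lookup_customer("C-42", backend)["name"] == "Vito"
    # Error path: unknown ids must be rejected, not silently ignored.
    try:
        lookup_customer("C-99", backend)
    except KeyError:
        pass
    else:
        raise AssertionError("expected a KeyError for an unknown customer")

test_lookup_customer()  # a CI server would run this on every build
print("component test below the UI: passed")
```

Because such tests bypass the browser entirely, they stay fast and stable enough for a continuous integration server to run them on every build.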
As a tester, I celebrate when the software goes belly up, but of course only when this occurs during the testing phase. Like all other contributors to writing and developing software components and systems, I too am "praying" that we don't get any surprises once the software is shipped or deployed online. If customers find reasons to complain in spite of all the strategies we apply to stop severe bugs from bothering them, it is our duty to question ourselves and our activities each time from scratch.
The one question that always stands out is: "Could we have identified this particular bug before the customer had the pleasure of experiencing it?" In most cases, I must admit, we could not, and the reason is obvious:
Following a plan and ensuring execution of all agreed test cases within the time given does not guarantee that new bugs impacting our customers' workflows can be ruled out. In our particular situation, I see an increasing number of bugs slipping through due to poor communication with the testing department. If testers don't know what features customers are using, and how, then how can one expect the testers to check the right areas? Well, the good thing about such bugs is that we learn from them and ask the right questions. That bug will surely not show up again, since it is now covered in the test suite; so which one will show up next...?
By the way, this cartoon is my first cartoon that got published in a PRINTED magazine. ThanX to "The Testing Planet" aka. softwaretestingclub.com
Wednesday, June 16, 2010
Wednesday, June 9, 2010
Thursday, May 13, 2010
Wednesday, May 5, 2010
Such a rare incident occurred recently when we detected an optimistic lock exception in one of our WebServices under very specific caller scenarios on one of our integration test environments.
However, the developer could not reproduce the error on his workstation, even though he was already running a newer version of the code, and it turned out he was actually right.
The problem occurred only on a clustered environment, where several containers and nodes run in parallel. Lesson learnt: it doesn't have to be the programmer who broke a piece of software. Sometimes other influences make the software behave differently; here it was the environment in which the software was running.
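For readers unfamiliar with the term: an optimistic lock assumes conflicts are rare, checks a version counter at write time, and rejects the update if another writer committed in between. Here is a minimal sketch of that mechanism (invented names, not our actual WebService code), showing why the exception only appears when two nodes really do write in parallel:

```python
# Minimal sketch (hypothetical, not the actual WebService code) of how an
# optimistic lock conflict arises when two nodes update the same record.

class OptimisticLockError(Exception):
    pass

class Record:
    def __init__(self, value):
        self.value = value
        self.version = 0

def update(record, new_value, read_version):
    # Each writer remembers the version it read; if another node committed
    # in the meantime, the versions no longer match and the write is rejected.
    if record.version != read_version:
        raise OptimisticLockError("record was modified concurrently")
    record.value = new_value
    record.version += 1

record = Record("initial")
v = record.version               # both "nodes" read version 0
update(record, "node A", v)      # node A commits first, version becomes 1
try:
    update(record, "node B", v)  # node B still holds version 0 -> conflict
except OptimisticLockError:
    print("optimistic lock exception, as seen only under parallel load")
```

On a single workstation, with one caller at a time, the two writes never interleave, which is why the developer could not reproduce the error outside the cluster.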
Tuesday, April 13, 2010
a) The software is good enough to go live
b) The software wasn't tested well enough
Typically, the developer thinks it is "a". And, of course, the tester always fears it could be option "b".
Sunday, April 11, 2010
I don't know how the story of Aladdin is told in English-speaking regions, but here in Switzerland, and probably in Germany too, you have to take a cleaning rag and wipe it over the oil lamp (well, here it's rather a teapot) to get the ghost out of it. In this cartoon it is the small bug that wiped over the teapot. He triggered the event for the ghost to come out, and of course, the ghost didn't see anybody.
Cartoons that need explanation are probably not worth publishing, but here I still think it's worth it.
Someone asked me whether ghosts really cast a cloud over the floor... Actually a good question. I don't know; I've never met a ghost.
Saturday, April 10, 2010
Thursday, April 8, 2010
Monday, April 5, 2010
Wednesday, March 31, 2010
The question here is, who is actually "watching" you.
According to the TCS (Touring Club Schweiz), there are reported cases where car owners lost their warranty claims just because the manufacturer could prove they had stressed the engine by driving in a suboptimal gear.
Source: Tagesanzeiger, March 11, 2010
Direct link: "Wer ein neues Auto fährt, kann nichts verbergen" ("Whoever drives a new car can hide nothing")
Tuesday, March 30, 2010
Sunday, March 28, 2010
The time spent on investigation can grow dramatically. The more time you spend searching for the root cause, the more pressure is on you to succeed, so that the time spent searching is justifiable.
Woe betide you if you can't find it after so much time has been wasted.
Woe betide you if you cannot find the bug but the customer will.
Woe betide you if you found the bug and reported it, but someone still thinks it is acceptable for the customer... and then the customer doesn't like it...
Saturday, March 13, 2010
Friday, March 12, 2010
When you are outnumbered like that, a test organization is usually focused on survival rather than evolving their engineering skills and practices.
Source: How We Test at Microsoft, MicrosoftPress (Page, Johnston, Rollison)
Thursday, March 11, 2010
(c) Mueller & Zelger
Tuesday, March 9, 2010
If you work for the type of organization that is not focused on quality and does not recognize or fix anything your testers have worked so hard to find, a test team is going to view that as a lack of respect for them or their work. And if you don't give your testers the respect they deserve, you'll demoralize them pretty quickly.
Source: Beautiful Testing, O'Reilly ("Was It Good for You", by Linda Wilkinson)
Wednesday, March 3, 2010
Testability is a foreign word, although it is the key to successful test automation. I have heard phrases like "Oh yes, that is a nice idea, but we will do this later". "Later" here means either never, or at a time when it is already too late and too costly to implement.
Some also believe test automation saves tester resources. That's not really true, because test automation enables tests you would never have hired testers for anyway. Test automation lets you do things that 100 testers couldn't do, that's right, but since you would never hire those 100 testers, you cannot talk about "saving tester resources".
Test automation does save resources, but not really in QA. It enables you to point out failing code right after it is committed, at a time when the developer still remembers what he just did and can correct the error with no big deal. Without automation you would notice the same errors days or weeks later. By then, the developer has already moved on to different work, and hardly anyone may know who broke the code or which piece of code is responsible for the detected anomaly. An expensive analysis and debugging effort starts, likely involving not just one but several people investigating the issue. These are the costs you can avoid with automation; but never say it saves test resources.
Sunday, February 21, 2010
Tuesday, February 9, 2010
Monday, February 8, 2010
He explained the theory that our Sun may have a twin star moving in a huge elliptical orbit, which may be held responsible for the mass extinction of the dinosaurs.
Here is a very good description of Nemesis which I found on the web:
Suppose our Sun was not alone but had a companion star. Suppose that this companion star moved in an elliptical orbit, its solar distance varying between 90,000 a.u. (1.4 light years) and 20,000 a.u., with a period of 30 million years. Also suppose this star is dark or at least very faint, and because of that we haven't noticed it yet.
This would mean that once every 30 million years that hypothetical companion star of the Sun would pass through the Oort cloud (a hypothetical cloud of proto-comets at a great distance from the Sun). During such a passage, the proto-comets in the Oort cloud would be stirred around. Some tens of thousands of years later, here on Earth we would notice a dramatic increase in the number of comets passing the inner solar system. If the number of comets increases dramatically, so does the risk of the Earth colliding with the nucleus of one of those comets.
When examining the Earth's geological record, it appears that about once every 30 million years a mass extinction of life on Earth has occurred. The most well-known of those mass extinctions is of course the dinosaur extinction some 65 million years ago. About 25 million years from now it's time for the next mass extinction, according to this hypothesis.
This hypothetical "death companion" of the Sun was suggested in 1985 by Daniel P. Whitmire and John J. Matese, Univ of Southern Louisiana. It has even received a name: Nemesis. One awkward fact of the Nemesis hypothesis is that there is no evidence whatever of a companion star of the Sun. It need not be very bright or very massive, a star much smaller and dimmer than the Sun would suffice, even a brown or a black dwarf (a planet-like body insufficiently massive to start "burning hydrogen" like a star). It is possible that this star already exists in one of the catalogues of dim stars without anyone having noted something peculiar, namely the enormous apparent motion of that star against the background of more distant stars (i.e. its parallax). If it should be found, few will doubt that it is the primary cause of periodic mass extinctions on Earth.
Some scientists thought this Nemesis theory was a joke when they first heard of it -- an invisible Sun attacking the Earth with comets sounds like delusion or myth. It deserves an additional dollop of skepticism for that reason: we are always in danger of deceiving ourselves. But even if the theory is speculative, it's serious and respectable, because its main idea is testable: you find the star and examine its properties.
There are programs running in which astronomers are searching for the star. If the demon star exists, we will surely know within the next two years.
Sunday, February 7, 2010
Saturday, February 6, 2010
Since our plants feel so sick, the company has now decided to have the gardeners check the flowers every third week.
So, my dear plants... no more coffee, is that clear!