Sunday, December 25, 2011
Saturday, December 3, 2011
Want some examples?
Wednesday, November 9, 2011
I think the good news is that there are a lot of amateurs out there who will use a broad variety of similar tools that calculate and "estimate" the same thing. If one of those tools fails, there are still plenty of others that do it right.
Redundancy is sometimes the key to success. We usually try to avoid it in coding, but it has its place, for instance in safety-critical applications. Still, it requires a brain, common sense and - believe it or not - a capacity for teamwork, especially when you analyze inconsistent readings. Without one or the other, redundancy can become deadly in the very place it was meant to save lives.
The two altimeters in an airplane's cockpit work independently of each other. If one fails, there is still a high chance the other works. But if a hard-headed pilot doesn't take into account that it may be HIS altimeter which fails and not his co-pilot's, then it may end in a disaster like the one in 1990, where a DC-9 was flying about 900 feet (275 m) too low during approach and hit high ground at Stadlerberg, near Zurich, Switzerland.
Saturday, July 23, 2011
Please excuse my short excursion away from my traditional cartoons about software testing. But this one is worth looking at, since it is indirectly related to testing.
My family and I just came back from a superb holiday on the Greek island of Kos. We also made a short trip to Bodrum, Turkey. While we were eating traditional Kebap there, I looked at the building across the street and wondered about the architecture of its balconies.
Something was wrong with them, but I couldn't immediately figure out what it was. After a while I suddenly realized that the sea-side balcony on the second floor had no French window, meaning it was unreachable. Then I saw that there was another one with the same "pattern" just below, on the first floor. Fine, no doors. But why have a balcony there if you can't reach it? As a tester, my first thought was that it must be a bug, maybe an architectural bug... no idea, just kidding! Maybe the two balconies were there just to achieve some kind of symmetry.
My theories didn't make sense, so I asked the waiter at our restaurant whether he could tell me more about this special "feature" of the white sea-side building across the street.
He didn't see the problem right away, and when he finally noticed it, he was puzzled for a moment. Then he explained to me what he was assuming: he thought the doors had been bricked up because of the strong wind and too much sun on that side of the building. Then I noticed him calling over one of his colleagues and pointing to the abnormal balcony. A few seconds later a third colleague came by, and together they were excitedly discussing the anomaly I had just pointed out. After a while, the waiter came back to me and confessed that he'd never noticed this before, even though he had been working in the restaurant for several years. He and his colleagues were fascinated by the missing doors, and the waiter expressed some amazement at how I, as a tourist, looked at things.
When he said that, I immediately thought about James Whittaker, who in his book "Exploratory Software Testing" often compares testers with tourists. Tourists usually look at things differently than those who live there all the time. As a tourist, you don't take things for granted. You are typically more curious and want to learn more about an area you've never visited before. This different view makes you notice things that the locals take for granted, or don't know or care about. I don't want to draw too many parallels to the testing scene here, but fresh eyes will always find new things, no matter how well you do your job as a software tester. At some point it is better to have someone else look at your "baby". If you know the SUT (software-under-test) too well, you start to accept and tolerate things where others would raise their eyebrows... This message is also for developers who don't think their code needs testing. =;O)
Thursday, July 21, 2011
If you try to be polite and ignore the ugly baby, it is just a matter of time until the customer confronts you with the same conclusion, or worse, states that ALL your babies are ugly.
Thursday, April 7, 2011
Hello? Are you serious? Come on, I mean, a great place indeed, but not a brochure that would convince me to leave my kids there for skiing, at least not alone.
What if we wrote something like this to our customers each time we deploy or ship a new release to production, for example:
"500 new features implemented, 100 of them are tested"
OK, why not? At least, it would be the truth....=;O)
Wednesday, March 30, 2011
It doesn't matter what kind of tool you buy. Implementing test automation and keeping it up and running over the long term comes with a cost that a lot of people did not expect when they went for test automation. Whenever I had the chance to talk to people about test automation, or to see what they had done or started to do, I noticed that most people tried to implement their own frameworks (me included), simply because what they bought or downloaded was not good enough, or not easy enough for the tester to use.
Enriching your tool with a test framework that fits your needs is not wrong per se, but what most of my contacts had in common is that they started to automate testing on the Graphical User Interface (GUI) only.
That is funny, because the GUI is one of the most difficult and complex interfaces to automate for testing. The scripts are usually slow, and the GUI changes often. Developers might embed components of different technologies, or newer versions of them, that your test tool does not (yet) support. And then developers start changing the GUI: re-arranging it, inserting additional dialogs that were not there before, and so on. That is when a tester / test automation expert / framework developer is suddenly busier adapting existing scripts, or the framework itself, than writing new test cases.
Doesn't this sound familiar: "Uhh, ohh, that script does not work anymore because they have changed this and that, so let us, exceptionally, execute this one test case manually because I need the report tonight. Maybe another test script is affected too, uhh, ohh, and maybe a third and fourth test script as well."
I call this the Cretaceous Period of Test Automation. The scripts start to die one by one, until you see so much red that you are tempted to question test automation, and maybe even banish it and put the expensive tools back on the shelf.
I am not telling you here that GUI automation is bad. But there are other ways which can be more effective, easier, and cheaper to maintain. For example, testing below the UI. This could be an API or a B2B interface (WebService), or something similar. If you are a lucky quality analyst who tests software components that provide a public interface to customers... use it for testing!
For those who don't have APIs: ask for them, even if it is just for testing. We do quite a lot of automated testing at the WebService level. BTW, we did NOT abandon GUI automation, although my answer here may sound like we did. Of course not, we need it, but we always try to test a feature below the UI if possible. The API tests run on a daily basis and can therefore act as an early warning system, something the large body of UI scripts cannot do for us.
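To make the idea of testing below the UI concrete, here is a minimal sketch. The function `calculate_discount` and its rules are purely hypothetical, standing in for whatever business logic your API or WebService would expose; the point is that the checks call the logic directly, with no GUI in between.

```python
# Hypothetical business rule standing in for an API/WebService operation.
# In a real system you would call the service endpoint instead; the rules
# below are invented for illustration only.
def calculate_discount(order_total, customer_tier):
    """Return the discount rate for an order (illustrative rules only)."""
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    rates = {"gold": 0.10, "silver": 0.05}
    return rates.get(customer_tier, 0.0)

# API-level checks: no GUI involved, so they run fast and keep passing
# even when dialogs are re-arranged or the UI technology changes.
assert calculate_discount(100.0, "gold") == 0.10
assert calculate_discount(200.0, "bronze") == 0.0
try:
    calculate_discount(-1.0, "gold")
except ValueError:
    pass  # invalid input rejected, as expected
```

Because such checks are cheap to run, they can be scheduled daily and serve as the early warning system described above, while a smaller set of GUI scripts covers what only the UI can show.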
Of course, you always have to do the kind of testing and test automation that is appropriate for your needs and your situation, but at least consider the alternative of testing somewhere other than the UI.
My cartoon about this topic was also published in "The Testing Planet" magazine (March 2011).
Tuesday, February 8, 2011
Thursday, January 20, 2011
The fact that the OK team now organized a separate room for the testers and the deployment engineers gives me plenty of new inspiration for some follow-up cartoons. ThanX to Janette Rovansek, who was kind enough to publish the cartoon in the STP Magazine newsletter.
Tuesday, January 4, 2011
Monday, January 3, 2011
A P1 call at 4 p.m. on a Friday afternoon from a crestfallen customer, after we had deployed a new release the night before.
I asked myself, "How would it feel if we deployed a version that worked straight away, without any user running into trouble the day after we shipped the new release?"
Without expecting it ever to happen, only a few months later we were forwarded an email from a customer who congratulated us on the great achievement. He was happy because we had delivered a piece of software that worked on the first go. The customer was surprised, since he did not expect this from us, so he felt he needed to tell us how amazed he was.
What does this tell me? Obviously, the message was ambiguous. Did he simply want to be grateful, or was he telling us something else...?
BTW, ThanX to Rob Lambert, who was kind enough to include my cartoon in the free eBook "A Tester is for Life, Not Just for Christmas".