Friday, April 6, 2012

Automated Test Script Disease #1

If it is not a new, unexpected confirmation dialog box that shows up, then it may be a unique GUI control identifier that has changed or is even missing. The response could also be slower than originally recorded, which means we have to either extend the script with hard-coded extra wait statements or - if we are smart - write some extra logic that always waits for the control to load completely before it gets "touched" (a sketch of that idea follows below). The object could also be at a different location, let's say in a list that is sorted differently than before. I hope you didn't record the position but negotiated with the developer for a predictable algorithm, so your robot is still able to reach the object even though it shows up somewhere else.

Poor you if you detect that the developers now use third-party controls that your test automation tool can't deal with. You might also experience ugly test script or module dependencies. As if this wasn't enough, you may work in a world where testers don't control the test environment, where others can change or update the configuration without you knowing about it until you see the script failing. You may also experience the deployment of a new software version in the middle of test execution, or while you are playing Sherlock Holmes to investigate a really fat bug. And your script might work from your workstation but, from one day to the next, no longer from the server that triggers and executes it...
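To illustrate that "wait before you touch" logic: here is a minimal sketch using Selenium WebDriver's C# bindings (not necessarily your tool; the URL, the control id "btnOk" and the timeout are invented for the example). Instead of a hard-coded sleep, the script polls until the control is really visible and enabled:

    using System;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;
    using OpenQA.Selenium.Support.UI;

    class WaitBeforeTouch
    {
        static void Main()
        {
            IWebDriver driver = new ChromeDriver();
            driver.Navigate().GoToUrl("http://test-env.example.com/app"); // hypothetical SUT

            // Poll for up to 10 seconds instead of sleeping a fixed amount of time.
            var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
            IWebElement okButton = wait.Until(d =>
            {
                var e = d.FindElement(By.Id("btnOk"));        // invented control id
                return (e.Displayed && e.Enabled) ? e : null; // null => keep polling
            });

            okButton.Click(); // the control only gets "touched" once it is really there
            driver.Quit();
        }
    }

The nice property of polling is that the script waits exactly as long as necessary: no wasted seconds on a fast test environment, no premature "touch" on a slow one.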

And often, even if none of those typical UI test automation challenges is one that you face today, you will still have to sit there watching the script run, ready for some extra test script babysitting. If you don't observe your scripts but go out for a coffee instead, don't expect your robot to have completed its job successfully by the time you come back to your desk...

Those are just a few of the reasons why I love testing below the UI so much.

Wednesday, March 28, 2012

New procedure for build breakers

A new habit is about to start, and it reminds me of the Dark Ages, when thieves were put in the pillory so everyone could see and shout at them.

Saturday, January 14, 2012

Giraffe Accessible, The Workaround

A workaround was quickly found, although in the long run it wasn't the most comfortable position.

Friday, January 13, 2012

Giraffe Accessible

A reference to compatibility in general, for instance browser compatibility: what works in Firefox doesn't necessarily work in IE and/or other kinds of web browsers.

Sunday, December 25, 2011

Deployment @ XMas (-Dinner)

Whenever possible, I try to draw cartoons that win a smile from a general audience. This time it is probably too specific and therefore an inside joke.

Sorry for that.

Saturday, December 3, 2011

The Umlaut Test Pattern

One of the most reliable "bugs" we meet regularly is what we call the "Umlaut" bug. We often notice names in our database that are displayed incorrectly; namely, Polish and/or German characters are regularly screwed up somewhere. Corrupt here means the character is displayed as a black square with a white question mark, or as a simple empty rectangle. The test pattern of looking for broken text is effective at all test levels, be it the UI or even a lower level such as Web Service responses.

And it doesn't stop with umlauts, even though it sounds kind of insane when I tell you to use special characters wherever you can. Special characters have been the root cause of so many bugs, even in places where only people like me would ever test them. Even an innocent blank between two strings can be considered a special character. For example, “M & M’s World Store in New York”, “Super-Mini-Taxi”, “Dr. Pepper”, “O’Brien”, etc. These are all common names that caused problems in programs we tested.
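For the curious, here is a minimal sketch of one common way such corruption arises (the name is invented, and this is just one typical cause, not necessarily the root of every "Umlaut" bug we met): text stored as UTF-8 but read back with the wrong single-byte code page:

    using System;
    using System.Text;

    class UmlautBug
    {
        static void Main()
        {
            string name = "Müller-Łukasz"; // German umlaut plus a Polish character

            // The application stores the string as UTF-8...
            byte[] stored = Encoding.UTF8.GetBytes(name);

            // ...but another component reads it back as ISO-8859-1 (Latin-1).
            string broken = Encoding.GetEncoding("ISO-8859-1").GetString(stored);

            Console.WriteLine(name);   // Müller-Łukasz
            Console.WriteLine(broken); // something like "MÃ¼ller-Å?ukasz"
        }
    }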

Examples of broken umlauts

Wednesday, November 9, 2011

YU55 Near Miss

I dare to ask about the consequences of asteroid observation software possibly failing.

I think the good news is that there are a lot of amateurs out there who use a broad variety of similar tools that calculate and “estimate” the same thing. If one of those tools fails, there is still a bunch of others that get it right.

Redundancy is sometimes the key to success. We usually try to avoid it in coding, but it has its place, for instance in safety-critical applications. But still, it needs a brain, common sense and - believe it or not - a capacity for teamwork, especially if you analyze inconsistent scenarios. Without one or the other, redundancy can also become deadly where it was actually meant to save lives.

The two altimeters in an airplane's cockpit work independently from each other. If one fails, there is still a high chance the other works. But if a hard-headed pilot doesn't take into account that it may be HIS altimeter that fails and not his co-pilot's, then it may end in a disaster like the one in 1990, where a DC-9 flew about 900 feet (275 m) too low during approach and then hit high ground at Stadlerberg, near Zurich, Switzerland.

Saturday, July 23, 2011

Where's the French window?

Please excuse my short excursion away from my traditional cartoons about software testing. But this one is worth looking at, since it is indirectly related to testing.

My family and I just came back from a superb holiday on the Greek island of Kos. We also made a short trip to Bodrum, Turkey. While we were eating traditional kebap there, I looked at the building across the street and wondered about the architecture of the balconies.

Something was wrong with them, but I couldn't immediately figure out what it was. After a while I suddenly realized that the sea-side balcony on the second floor had no French window, meaning it was unreachable. Then I saw that there was another one with the same "pattern" just below it, on the first floor. Fine, no doors. But why build a balcony there if you can't reach it? As a tester, my first thought was that it must be a bug, maybe an architectural bug... no idea, just kidding! Maybe the two balconies were there just to achieve some kind of symmetry.

My theories didn't make sense, so I asked the waiter at our restaurant whether he could tell me more about this special "feature" of the white sea-side building across the street.

He didn't really see the problem outright, and when he finally noticed it, he stood speechless for a moment. Then he explained what he assumed: the doors had been bricked up because of the strong wind and too much sun on that side of the building. Then I noticed him calling over one of his colleagues and pointing at the abnormal balcony. A few seconds later a third colleague came by, and together they excitedly discussed the anomaly we had just detected. After a while, the waiter came back to me and confessed that he had never noticed this before, even though he had been working in the restaurant for several years. He and his colleagues were fascinated by the missing doors, and the waiter expressed some amazement about how I, as a tourist, look at things.

When he said that, I immediately thought of James Whittaker, who in his book "Exploratory Software Testing" often compares testers with tourists. Usually, tourists look at things differently than those who live there all the time. As a tourist you don't take things for granted. You are typically more curious and want to learn more about an area you've never visited before. This different view makes you notice things that locals take for granted, or don't know or care about. My plan is not to draw too many parallels to the testing scene here, but fresh eyes will always find new things, no matter how well you do your job as a software tester. At some point it is better to have someone else look at your "baby". If you know the SUT (software under test) too well, you start to accept and tolerate things that would make others raise their eyebrows... This message is also for developers who don't think their code needs testing. =;O)

Thursday, July 21, 2011

Testers meeting at the bar

Testing is a thankless job. Product owners (and their developers) get the glory when it works, and testers are the first to blame when it doesn't. A tester's job is to tell product managers when their baby is ugly, even if it is difficult to stay friends after that. But if testers put on rose-coloured glasses, it is just a matter of time until the customer calls the baby ugly. Worse, the customer may claim that ALL your babies are ugly.

Thursday, April 7, 2011

A Trustworthy Invitation

"2 chairlifts, one of them safe for children"...

Hello? Are you serious? Come on, I mean, a great place indeed, but not a prospectus that convinces me to leave my kids there for skiing, at least not alone.

What if we wrote something like this to our customers each time we deploy / ship a new release to production, for example:

"500 new features implemented, 100 of them are tested" 

OK, why not? At least, it would be the truth....=;O)

Wednesday, March 30, 2011

Early Checkers in the Cretaceous Period

It doesn't matter what kind of tool you buy: implementing test automation and keeping it up and running over the long term comes with a cost that a lot of people did not expect when they went for it. Whenever I had the chance to talk to people about test automation, or to see what they had done or started to do, I noticed that most of them had tried to implement their own frameworks (me included), simply because what they bought or downloaded was not good enough, or not easy enough for the tester to use.

Enriching your tool with a test framework that fits your needs is not wrong per se, but what most of my contacts had in common is the fact that they started automating tests at the Graphical User Interface (GUI) only.

That is funny, because the GUI is one of the most difficult and complex interfaces to automate for testing. The scripts are usually slow, and the GUI changes often. Developers might embed components of different technologies, and newer versions of them, which your test tool might not (yet) support. And then developers start changing the GUI, re-arranging it, inserting additional dialogs that were not there before, etc. That is when a tester / test automation expert / framework developer is suddenly busier adapting existing scripts or the framework itself than writing new test cases.

Doesn't this sound familiar: "Uhh, ohh, that script does not work anymore because they have changed this and that, so let us execute this one test case manually as an exception, because I need the report tonight. Maybe another test script is affected too, uhh, ohh, and maybe a third and fourth test script as well."

I call this the Cretaceous Period of Test Automation. The scripts start to die one by one until you see so much red that you are tempted to question test automation, maybe even banish it and put the expensive tools back on the shelf.

I am not telling you here that GUI automation is bad. But there are other ways that can be more effective, and easier and cheaper to maintain: for example, testing below the UI. This could be an API or a B2B interface (WebService), or something similar. If you are a lucky quality analyst who tests software components that provide a public interface to customers... use it for testing! For those who don't have APIs: ask for them, even if it is just for testing.

We do quite a lot of automated testing at the WebService level. BTW, we did NOT abandon GUI automation, although my answer here may sound like we did. Of course not, we need it, but we always try to test a feature below the UI if possible. The API tests run on a daily basis and can therefore act as an early warning system, something the big pile of UI scripts cannot do for us.
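To give you an idea, here is a minimal sketch of what such a below-the-UI check could look like as an NUnit test. The endpoint URL and the expected XML fragments are invented for the example, not our real interface:

    using System.Net;
    using NUnit.Framework;

    [TestFixture]
    public class CustomerWebServiceTests
    {
        [Test]
        public void GetCustomer_ReturnsWellFormedAnswer()
        {
            using (var client = new WebClient())
            {
                // Hypothetical WebService endpoint of the SUT.
                string xml = client.DownloadString(
                    "http://test-env.example.com/services/customers/42");

                // Cheap sanity checks; a real suite would validate against a schema.
                StringAssert.Contains("<customer", xml);
                StringAssert.Contains("id=\"42\"", xml);
            }
        }
    }

A test like this runs in milliseconds, doesn't care how the dialogs are arranged, and can be triggered by the continuous integration server after every check-in.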

Of course, you always have to do the sort of testing / test automation that is appropriate for your needs and your situation, but at least you may want to think about the alternative of testing somewhere other than the UI.

TOJZ

My cartoon about this topic was also published in "The Testing Planet" magazine (March 2011).


Tuesday, February 8, 2011

Thursday, January 20, 2011

Unexpected Xmas Dinner

With this cartoon I earned an incredible amount of feedback through email, and some people even came by my desk personally to offer their condolences. Thinking it over, I could have hit the nail on the head even better by adding a speech balloon on top of one of the figures saying "Not again...", as it wasn't the first business XMAS dinner like this. The last XMAS dinner, or shall I say release plan, followed exactly the same pattern.


The fact that the OK team now organized a separate room for the testers and the deployment engineers gives me plenty of new inspiration for follow-up cartoons. ThanX to Janette Rovansek, who was so kind to publish the cartoon in the STP Magazine newsletter.

Tuesday, January 4, 2011

If Cars Were Built Like Software

ThanX to Rob Lambert (again) and Rosie Sherry, who published my cartoon in the "The Testing Planet" magazine, listed here at The Testing Planet.


Monday, January 3, 2011

Confused Customer

OK, at the time I drew this cartoon, I thought: how would it feel if we testers suddenly got a call from a happy customer, rather than the usual thing, which is:

A P1 call at 4pm on a Friday afternoon from a crestfallen customer, after we deployed a new release the night before.

I asked myself, "how would it feel if we deployed a version that worked straight away, without any user running into trouble the day after we shipped the new release?"

Without ever expecting it to happen, only a few months later we were forwarded an email from a customer who congratulated us on the great achievement. He was happy because we had delivered a piece of software that worked on the first go. The customer was surprised, since he did not expect this from us, so he felt he needed to tell us how amazed he was.

What does this tell me? Obviously, that message was ambiguous. Did he really just want to be grateful, or did he want to tell us something else...?

BTW, ThanX to Rob Lambert, who was so kind to include my cartoon in the free eBook "A Tester is for Life, Not Just for Christmas".

Friday, December 3, 2010

Minor Risk Patches

Testers like to demonstrate that the software is not ready to ship (yet), but sometimes they fail miserably at making it clear enough... and the customer might get a nasty surprise.

Vito, my colleague, once said: "Hey, we're really good at working wonders, but sometimes, it just acts up."

Friday, November 12, 2010

The Unexpected Ugly Bug

I remember the time when we had only a few automated tests and a big set of manual test cases. As the number of features grew, so did the number of test cases. The number of available testers stayed constant, and we didn't manage to execute all the test cases. At that time I started to rate the test cases based on their likelihood of detecting bugs. Whenever we executed a test case that helped uncover a bug, we added a link to the defect's ID. Each time the test suite was executed again, the tester was notified about the number and priorities of the defects it had found in the past.
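Stripped down to its core, the rating idea was something like this minimal sketch (class and member names are invented; this is not our actual implementation, which lived in our test management tooling):

    using System.Collections.Generic;
    using System.Linq;

    class TestCaseRecord
    {
        public string Name;
        public List<string> LinkedDefectIds = new List<string>(); // grows each time the case finds a bug

        // Naive likelihood-of-detection score: more past defects => higher priority.
        public int PriorityScore { get { return LinkedDefectIds.Count; } }
    }

    class SuitePlanner
    {
        // When time is short, run the historically "productive" cases first.
        public static IEnumerable<TestCaseRecord> Order(IEnumerable<TestCaseRecord> all)
        {
            return all.OrderByDescending(tc => tc.PriorityScore);
        }
    }

As the story below shows, the weakness of this scheme is that a score of zero says nothing about the future.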

That was a practical approach, but also a risky one, considering what happened to me just last week at my current employer.

I had an automated test script that couldn't be executed due to some problems with the new version of the test automation framework we were using. Since there wasn't enough time to investigate, I just skipped the script, reported "FAILED due to unknown reasons", marked it as low priority and continued with other tests. I did this because the component it was testing had never shown any defects in the past, so I thought I could easily put it aside for a while until I found enough time to investigate what was wrong.

When I finally had the time and managed to "revive" the script, it uncovered a really ugly bug, and it was almost too late to fix it for the release we were about to deploy. Luckily, the release deployment had to be shifted by one week due to another issue, so it wasn't a big deal for the developer to fix it.

Anyhow, I felt uncomfortable about the fact that I found this issue so late in the process when it could have been uncovered much earlier. What does it tell me? Even a test case / script that may look useless for many test cycles may become an important script at a later point in time. Today, when writing or reviewing new or existing scripts to determine their priority, I ask myself what could go wrong if that script isn't executed, rather than thinking of the success rate in terms of "how many defects did it uncover in the past".



This cartoon, along with the text, is now also published on the STQA Magazine blog (and later on, also in the STP Magazine print version of January 2011, first page).

Monday, October 25, 2010

Revolt of the Test Scripts

If testers are not invited early enough, they will not have the chance or the time to determine whether or not they have to adapt their existing scripts. As a matter of fact, some scripts will have to be deactivated, since they start to report errors that aren't any. Someone smart once called these "false negatives"; consequently, the opposite exists too (false positives). See also the blog entry of July 2009 (The Test Successfully Failed).


...by the way, test tools and/or your own test scripts can be buggy, too (here, a false negative).

...in the meantime, I found out under which conditions this situation occurs. In the NUnit test automation framework, even if all your test scripts run fine, the whole suite may report a failure if something goes wrong outside the test methods. The TearDown function is such a case: it gets called automatically after each script and/or after the suite is completed. At that point, all scripts have already reported a successful run, but the TearDown can still throw an exception, for instance if your implementation invokes a call on a null object (null pointer exception). In such a case, NUnit reports a failed suite although all child scripts passed all their tests.
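A minimal sketch that reproduces the effect; the _log field is an invented stand-in for whatever your TearDown really cleans up:

    using NUnit.Framework;

    [TestFixture]
    public class TearDownTrapTests
    {
        private System.IO.StreamWriter _log; // never assigned => stays null

        [Test]
        public void Addition_Works()
        {
            Assert.AreEqual(4, 2 + 2); // the test itself passes
        }

        [TearDown]
        public void CleanUp()
        {
            // NullReferenceException here: NUnit reports the run as failed
            // even though every assertion in the test body passed.
            _log.Close();
        }
    }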

Monday, October 18, 2010

Automated Testing 4 Oil

It is easy to build a new test automation suite from scratch. It is a different story to keep it up and running for a lifetime. Changes in the software under test will be implemented, and one day your scripts will need to be adjusted to the new situation. Some scripts will become obsolete, some new scripts will need to be added, and some just won't work anymore.

The more test cases you have, the more difficult it is to keep them up to date and have them provide the same benefit they did at the time they were born and ran successfully. More and more scripts will start to fail, and if you don't have the time to analyze the root cause and fix the scripts, the number of scripts printing red alarm signals increases.

You will start to get used to the fact that some scripts always print red warnings, and the day isn't far off when you won't even notice anymore that more and more scripts suffer from the same issue. Probably you won't even trust your scripts anymore, since they may report so-called false negatives. Make sure you keep them up to date at the time they fail. We call that script maintenance, and yes, it not only applies to manual testing; it is especially true for automated testing, too.

This is my second cartoon published in a printed magazine (STQA Magazine, September/October 2010). 
ThanX to Rich Hand and Janette Rovansek.

Wednesday, October 6, 2010

Mission Impossible in Paris


I just came back from a holiday trip to Paris, where we met good old friends from Scotland. While I could relax from the stressful tester's life, I had a nice déjà-vu on my last day in Paris, at the Centre Pompidou.

My kids were playing with a conveyor machine in the play area, where you could pick colorful everyday items out of a big repository and put them on the conveyor belt. Someone could then turn the wheel, and all those items got transferred back to the repository.

What first looked like a boring task to me appealed so much to my kids and the other children that it was hard for me to get them away from the play zone to enjoy the beautiful exhibition. However, the wheel of the conveyor regularly blocked, so the kids were no longer able to turn it, and as a consequence no items could be transferred back to the repository. The reason was that the stack of items in the repository gained so much height that it reached the thin vertical slot of the conveyor after only a short while of running. Because the kids kept pulling on the wheel, some items were drawn into the slot and blocked the machine completely.

I spent most of the time making sure the stack was kept at a safe height so nothing could disappear into the slots, thereby making it easier for the kids to keep playing with the conveyor. While doing this, I realized another small item had gotten stuck in the slot; it did not cause the conveyor to block completely, but instead slowed it down. The kids had to put a lot of force into pulling the wheel to get their items transferred back into the repository. I tried to get the "bug" out so the kids would have a much easier job.

I failed miserably, because none of the kids could stand still for a moment. I risked my fingers each time I tried to get the "bug" out of the slot. Whenever I almost had it, another kid took control of the wheel and made me pray for my fingers. Surprisingly, I was the only parent trying to help those kids and keep the machine running smoothly. When I finally decided to step back and watch the scene from a distance, I saw kids trying to pull the wheel even harder when it no longer worked. No one really took the initiative to scale down the size of the stack.

Thinking about the issue, it would have been easy for the local management to fix the problem. First, there were way too many items in the repository. Fewer items would have solved the issue, because the stack would never have had a chance to grow so fast and so high. In other words, with fewer items in the repository, the conveyor would never have had such a continuous maintenance problem. Second, no one really seemed to care about a badly moving conveyor. Instead of someone stopping the kids from using the wheel for just one minute and solving the root cause that slowed down the conveyor belt, local management just accepted (or wasn't aware of) the fact that those poor kids had to "work hard" and spend too much effort.

At this moment I realized the remarkable similarities to my job.

Thursday, September 2, 2010

Told Ya

It can be both frustrating and encouraging to see a software package getting rolled out / deployed without ever having passed through the hands of a tester.

Frustrating, because you couldn't convince anyone not to roll out an untested release.

Encouraging, since you get an immediate demonstration of the costs of a missing tester.

Sunday, August 8, 2010

Less Room for Bugs

We have automated, keyword-driven UI tests reflecting exactly the users' workflows as they use our programs, or let's say, exactly the way we believe the customers still use them.

In addition, we run component tests below the UI (WebServices), automatically invoked by a continuous integration server, covering nearly 100% of the interfaces that our company offers to its customers and making our systems testable from below the UI.

We also regularly execute manual tests on an end-to-end basis to complete our "spectrum" of available testing techniques... and yet... bugs still make it through to the field. There is not much we can do about it. This is just an accepted fact, no matter how much effort you spend trying to avoid it.

As a tester, I celebrate when the software goes belly up, but of course only if this occurs during the testing phase. Like all the other contributors to writing/developing software components and systems, I too am "praying" that we don't get any surprises once the software is shipped or deployed online. If customers find reasons to complain in spite of all the strategies we apply to stop severe bugs from bothering them, it is our duty to question ourselves and our activities each time from scratch.

The one question that always stands out is: "Could we have identified this particular bug before the customer had the pleasure of experiencing it?" In most cases, I must admit, we could not, and the reason is obvious:

Following a plan and assuring the execution of all agreed test cases within the time given does not guarantee that new bugs impacting our customers' workflows can be ruled out. In our particular situation, I see an increasing number of bugs slipping through due to poor communication with the testing department. If testers don't know what features customers are using, and how, then how can one expect the testers to check the right area? Well, the good thing about such bugs is that we learn from them and ask the right questions. That bug will surely not show up again, since it is now covered in the test suite. So which one will show up next...?


By the way, this is my first cartoon that got published in a PRINTED magazine. ThanX to "The Testing Planet", aka softwaretestingclub.com.

Wednesday, June 16, 2010

Vuvuzela Testing

Software testing here not only happens under annoying background noise; it is a great challenge for people who want to test their nerves under constant exposure to weird irradiation.

Wednesday, June 9, 2010

Do-it-yourself Virus

Great innovations and lots of new ideas are emerging.

Thursday, May 13, 2010

Behind the Scenes of Chaos Software Ltd.

If you are familiar with Gary Larson, you might recognize the scene as closely related to one of his creature cartoons, where the dinosaur stands in front of its calendar striking out each day ("kill something and eat it"). I hope Gary forgives me for the similarities, but the scene fits our common experience perfectly.

Wednesday, May 5, 2010

IWOMM - It Works on My Machine

I am sure that every tester has received a message like this at least once in his life as a tester. But to be honest, it has happened only rarely to me.
Such a rare incident occurred recently, when we detected an optimistic lock exception in one of the WebServices, under very specific caller scenarios, on one of our integration test environments.

However, the developer could not reproduce the error on his workstation, and although he was already running a newer version of the code, he was actually right.

It turned out the problem occurred only in a clustered environment, where several containers and nodes run in parallel. Lesson learnt: it doesn't have to be the programmer who broke a piece of software. Sometimes other influences make the software behave differently; here it was the environment in which the software was running.
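For readers who have never run into one: an optimistic lock typically means an update carries the row version it was based on, and it is rejected if someone else changed the row in between. Here is a minimal, in-memory sketch of that pattern (class names invented, heavily simplified); with a single node the version check practically never fires, but with several containers writing in parallel it suddenly can:

    using System;

    class OptimisticLockException : Exception { }

    class CustomerRow
    {
        public string Name;
        public int Version; // incremented on every successful update
    }

    class CustomerRepository
    {
        // The update succeeds only if nobody else changed the row in the meantime.
        public void Update(CustomerRow row, string newName, int expectedVersion)
        {
            if (row.Version != expectedVersion)
                throw new OptimisticLockException(); // another node was faster

            row.Name = newName;
            row.Version++;
        }
    }

    class Program
    {
        static void Main()
        {
            var row = new CustomerRow { Name = "old", Version = 1 };
            var repo = new CustomerRepository();

            repo.Update(row, "node A's change", 1); // OK, version becomes 2
            repo.Update(row, "node B's change", 1); // throws: B read the row before A wrote it
        }
    }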

Tuesday, April 13, 2010

Defibrillator

It's actually a good idea to install a defibrillator at a place where all hell breaks loose with this never-ending deployment virus. I just hope no one gets the idea of using this machine to improve test efficiency by giving some testers an extra shake...

Different Goals


When a tester doesn't find any cool bugs, there are basically two options to think about.

a) The software is good enough to go live
b) The software wasn't tested well enough

Typically, the developer thinks it is "a". And, of course, the tester always fears it could be option "b".

Sunday, April 11, 2010

Very First Exception

I received a few emails in which people rated this cartoon as excellent, but I must admit that there were quite a bunch of people who did not really understand the punch line.


I don't know how the story of Aladdin is told in English-speaking areas, but here in Switzerland you must take a cleaning rag and wipe it over the oil lamp (in the cartoon it is a teapot) to get the genie to crawl out of it. In this cartoon it is the small bug that went over the teapot. He triggered the event and the genie slipped out, confused, as he did not understand who had woken him up.

Someone else asked me whether genies really cast a cloud over the floor... Actually a good question. I don't know; I never met a genie in person.




Saturday, April 10, 2010

The Worm Catcher

I'd better not comment on this one; maybe at a later point in time.

Monday, April 5, 2010

Chaos Theory

It was quite a challenge to find a unique and accurate name for this hustle and bustle, until Einstein crossed my mind...

Wednesday, March 31, 2010

Magic Black Box in Your Car

Many car owners probably have not yet realized that their new cars are already equipped with black-box systems that have lots of sensors and log many of the operations executed by the driver.

The question here is: who is actually "watching" you?

According to the TCS (Touring Club Schweiz), there are reported cases where car owners lost their warranty claims just because the manufacturer could prove they had stressed the motor by driving in a suboptimal gear.

Source: Tagesanzeiger, March 11, 2010
Direct link: Wer ein neues Auto fährt, kann nichts verbergen ("If you drive a new car, you can't hide anything")


Tuesday, March 30, 2010

The Printer Test

Every couple of years we start inventing new processes that don't solve the problems we introduced with the previous processes.

Sunday, March 28, 2010

Invalid Test

How often have you run into weird behavior of the SuT and then tried hard to find evidence for the phenomenon? Developers sometimes just don't believe you until you can prove it with a video. Screenshots are no longer hard facts, since they don't demonstrate what steps you took along the way.

The time spent on investigation can grow dramatically, and the more time you spend on finding the root cause, the more pressure is on you to be successful, so that the time spent searching is justifiable.

Woe betide you if you can't find it after so much time has been spent.

Woe betide you if you cannot find the bug, but the customer does.

Woe betide you if you found the bug and reported it, but someone still thinks it is reasonable enough for the customer... and then the customer doesn't like it...

Saturday, March 13, 2010

Husky Adventure in Oberwald Switzerland

For once, something different, to let it all hang out. I took these (and many, many more) pictures today at Oberwald, Wallis (Switzerland).

Friday, March 12, 2010

Rose Colored Glasses

The biggest shock to most industry candidates is the sheer size of the test discipline at Microsoft and the amount of clout they wield. The developer to tester ratio at Microsoft is about 1 to 1. Typical for industry is about 5 developers to 1 tester and sometimes 10 to 1 and higher.

When you are outnumbered like that, a test organization is usually focused on survival rather than evolving their engineering skills and practices.

Source: How We Test at Microsoft, Microsoft Press (Page, Johnston, Rollison)

Thursday, March 11, 2010

How to confuse intruders and keep them away from your site

Some improvements have been made to the security of our web site by suppressing detailed information about what went wrong during a B2B transaction, or by providing information that is useless to attackers. While the information provided before was helpful to support and testing for quickly finding the root cause of a failed data upload, this has now become almost a mission impossible, and the reactions, namely from support, came promptly.

(c) Mueller & Zelger

Tuesday, March 9, 2010

The Exciting Job of a Tester

Testers like to hunt for and find stuff. The hunt is exciting, and finding an error is the ultimate motivation.

If you work for the type of organization that is not focused on quality and does not recognize or fix anything your testers have worked so hard to find, a test team is going to view that as a lack of respect for them or their work. And if you don't give your testers the respect they deserve, you'll demoralize them pretty quickly.

Source: Beautiful Testing, O'Reilly ("Was It Good for You", by Linda Wilkinson)

Wednesday, March 3, 2010

There is nothing like starting young

Test automation plays a key role in building great software, especially with today's rapid delivery cycles. What works today may be broken tomorrow. But while most managers agree it is important, many struggle to invest accordingly, or do not understand that they have to support the testers when they ask for testable code. Making code testable is often seen as less important than delivering features quickly.

Testability seems to be a foreign word, although it is the key to successful test automation. I have heard phrases like "Oh yes, that is a nice idea, but we will do this later". Later here means either never, or at a time when it is already too late and too costly to implement.

Some also believe test automation saves tester resources. That's not really true, because test automation enables you to do tests you would never have hired any test resources for anyway. Test automation enables you to do things that 100 testers couldn't do, that's right, but you would never hire those 100 testers anyway, so you cannot talk about "saving tester resources".

Test automation actually does save resources, but not really in QA. Test automation enables you to point out failing code right after it is committed / checked in, at a time when the developer still remembers what he just did. He can correct the error with no big deal. Without automation, you would notice the same errors days or weeks later. By then, the developer has already moved on to different stuff, and hardly anyone knows who broke the code or which piece of code is responsible for the detected anomaly. An expensive analysis and debugging session starts, in which not just one but likely several people are involved in investigating the issue. These are the costs you can avoid with automation, but never say it saves test resources.