This is a reworked cartoon. Originally I drew it without any keyboards in his hands/tentacles, and it was published that way in "The Testing Planet". But now the joke is a little clearer, I guess. Enjoy.
How to do performance testing
What a test manager is interested in before going live is often this: "Does the system still respond fast and accurately when 10, 20, 30 or more users are working in parallel?"
Performance testing tools give you the ability to develop scripts that fire scripted scenarios and measure their response times, whether executed in isolation or all together using such ramp-up numbers. Often, problems already show up with a much smaller number of parallel users.
Before you hire expensive experts, solve those problems first. Write one or more simple Selenium UI scripts or some API tests (REST/SOAP), wrap them in an endless loop, and ask some colleagues whether they are willing to run the scripts from their workstations. Then find a free workstation where you can test manually in parallel, to feel what it is like to no longer be alone on the system.
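To make the "endless loop" idea concrete, here is a minimal sketch in Python. The URL is a hypothetical placeholder; whatever check you care about goes after the request:

```python
# Minimal DIY load script: hit one endpoint forever and log response times.
# The URL below is a hypothetical placeholder; point it at your own system.
import time
import requests

URL = "https://example.test/api/orders/42"

while True:
    start = time.monotonic()
    response = requests.get(URL, timeout=10)
    elapsed = time.monotonic() - start
    # Print status and timing so slowdowns become visible at a glance.
    print(f"{response.status_code} in {elapsed:.3f}s")
    time.sleep(1)  # small pause so one workstation doesn't hammer the server
```

Run this from a handful of workstations and you already have a crude multi-user test.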
If that reveals no bugs, move to the next stage and hire experts who can do the ramp-up as explained at the beginning. Usually you should come up with 4-5 typical scenarios which cover at least 1 read, 1 update, 1 create and probably 1 delete operation. Not more, because scripting and providing test data for these scripts can be the most time-consuming and costly part of a performance test session. Also note that a typical distribution of users' actions is 90/5/5 (90% read, 5% update, 5% create), 80/10/10, or similar.
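If you script such a mix yourself, a weighted random choice is one simple way to approximate that distribution. A sketch, with placeholder scenario functions:

```python
# Pick scenarios according to a 90/5/5 read/update/create distribution.
import random

def do_read(): ...    # placeholder scenario functions:
def do_update(): ...  # replace the bodies with your real scripted steps
def do_create(): ...

SCENARIOS = [(do_read, 90), (do_update, 5), (do_create, 5)]

def next_scenario():
    funcs, weights = zip(*SCENARIOS)
    return random.choices(funcs, weights=weights, k=1)[0]

for _ in range(1000):
    next_scenario()()  # over many iterations, roughly 90% of calls are reads
```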
When using a professional performance testing tool (or working with an expert company), make sure you test the different scenarios in isolation. That means: test what happens if you have 10, 20, 30 or more users only doing read requests, then do the same only for updates, and yet another set only for creation. In most cases you will learn that even a high number of read requests is hardly noticeable, while it is often a different story once you test one of the other scenarios. Combining all scenarios is more realistic, but should be done only after you have learnt from the isolated tests. A combination always makes it hard to pinpoint "who is the problem".
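A stepped ramp-up of one isolated scenario can be sketched with plain threads. Real tools (JMeter and friends) do this with far more control, but the principle looks roughly like this; the URL and step sizes are illustrative assumptions:

```python
# Sketch of a stepped ramp-up (10, then 20, then 30 users) running one
# scenario in isolation. URL and timings are illustrative placeholders.
import threading
import time
import requests

URL = "https://example.test/api/orders"

def read_worker(user_id, stop_event):
    while not stop_event.is_set():
        start = time.monotonic()
        r = requests.get(URL, timeout=10)
        print(f"user {user_id}: {r.status_code} in {time.monotonic() - start:.3f}s")

stop = threading.Event()
threads = []
for step in range(3):        # three ramp-up steps
    for _ in range(10):      # add 10 users per step
        t = threading.Thread(target=read_worker, args=(len(threads), stop))
        t.start()
        threads.append(t)
    time.sleep(30)           # observe each load level for a while

stop.set()
for t in threads:
    t.join()
```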
Don't forget to have someone manually test the system while you are running the automated performance test scripts. Learning how it feels when, all of a sudden, the performance goes down is invaluable information that numbers can hardly give you.
In a project where I coached the performance test experts, it turned out the biggest amount of work was spent preparing meaningful test data for the scripts. We were firing the tests at a real production database, but finding suitable records with which one could perform the update scenarios without triggering this or that validation message wasn't easy. It turned out we were probably better off creating these data ourselves.
Not only that, we assigned each virtual user its own particular set of data that no other user writes to. Had we not followed this approach, we would have faced another challenge: locked records, or attempts to update something that had already been updated, both resulting in validation error messages. Next came the creation of records of which only one instance was allowed; a second record was rejected. So we had to extend the script to always delete right after the record was created. For this scenario too, each virtual user had a particular set of cases to which it could add new records without getting in the way of another.
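The partitioning idea is easy to sketch: give each virtual user a disjoint slice of record IDs, and for "only one instance allowed" records, pair every create with an immediate delete. Endpoints and field names below are hypothetical:

```python
# Per-user data partitioning plus create-then-delete, so virtual users
# never lock or collide on each other's records. All URLs are placeholders.
import requests

BASE = "https://example.test/api"

def records_for_user(user_id, per_user=100):
    # Disjoint ID ranges: user 0 gets 0-99, user 1 gets 100-199, and so on.
    start = user_id * per_user
    return range(start, start + per_user)

def create_then_delete(user_id, case_id):
    # Create the single allowed record, then delete it immediately so the
    # next iteration can create again without a uniqueness violation.
    created = requests.post(f"{BASE}/cases/{case_id}/records",
                            json={"owner": user_id}, timeout=10)
    created.raise_for_status()
    record_id = created.json()["id"]  # assumes the API echoes the new id
    requests.delete(f"{BASE}/records/{record_id}", timeout=10).raise_for_status()
```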