Sunday, June 9, 2019

Duplicate Bugs Arguing in JIRA

I guess we all agree that duplicate bug reports may cause people to spend time on avoidable tasks. But it isn't always easy to find out whether or not a bug report is a duplicate.

When a developer believes that a series of bug reports all share the same root cause, she tends to claim these bugs are all duplicates.
The test engineer, on the other hand, would disagree and state: "these are all different scenarios from an E2E perspective".

At the time of reporting an issue, we usually don't know the root cause unless we dig deeper into the anomaly. Even if a developer assures us the bugs all have the same cause, it still makes no sense to mark these reports as redundant. One can never be sure the developer is right.

What if one or more of these issues remains broken after the developer has fixed another? Isn't it a good idea to re-test all the identified scenarios where that bug left its mark?

Michael Stahl [1] makes an interesting note when he states:
"Why would the same tester report the same issue twice? It just adds extra work for the tester, who for sure remembers the first report. Usually, a duplicate bug is reported when two testers identified the same problem and both reported it without first checking if it’s already in the system".

I personally believe that doing upfront research in the bug tracking system doesn't really help avoid duplicates. In my experience, it is better to contact your colleagues directly than to search for bug descriptions that use different wordings. A screenshot could help set the record straight, but searching for clarifying pictures is even harder. Better ask your colleagues directly: "have you already seen that...?" If no, raise the issue; if yes, clarify with them verbally whether it's the same issue, and then enrich the existing bug report with any additional data you have.

Even if your employees raise duplicate bug reports, I consider it a reliable indication that these bugs are either easy to find or really annoying.

But what do you do if your customers raise bug reports? You cannot expect a customer to first ask all other customers whether they have the same problem. A customer does not care how many times the same bug was already reported. Some smart techies might "google" for a solution to their problems, but you can hardly avoid duplicate bug reports raised by customers. For example, according to Castelluccio [2], Mozilla receives hundreds of bug reports and feature requests from Firefox users every day. It is clear that in such cases, tools are required which categorize bug reports based on their similarity to other bug reports, to save developers' valuable time.

Runeson and Alexandersson [3] describe an approach using NLP to support the automatic identification of duplicates. Their conclusion: "Even though only 40% of the duplicates are found using this approach, it still means a substantial saving for a major development organization".
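The core idea behind such approaches can be sketched in a few lines. Below is a minimal, hypothetical illustration (not Runeson and Alexandersson's actual implementation): bug report texts are turned into bag-of-words frequency vectors and compared by cosine similarity. A real system would add stemming, stop-word removal, and TF-IDF-style weighting.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase and split on non-letters. A production system would also
    # stem words and drop stop words, as the NLP literature suggests.
    return re.findall(r"[a-z]+", text.lower())

def cosine_similarity(a, b):
    # Bag-of-words cosine similarity between two report texts.
    va, vb = Counter(tokenize(a)), Counter(tokenize(b))
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    norm = math.sqrt(sum(c * c for c in va.values())) \
         * math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def likely_duplicates(new_report, existing_reports, threshold=0.4):
    # Rank existing reports by similarity and keep those above the threshold.
    scored = sorted(((cosine_similarity(new_report, r), r)
                     for r in existing_reports), reverse=True)
    return [r for score, r in scored if score >= threshold]
```

For example, `likely_duplicates("application crash when exporting file to pdf", reports)` would surface an existing report like "crash when saving file as pdf" while ignoring an unrelated layout bug. Note that "crashes" vs. "crash" would not match without stemming, which is exactly why real implementations preprocess more aggressively.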

However, the best way to avoid duplicate bug reports is to fix any reported issue as soon as it is reported, before another user raises yet another duplicate.

[1] When Testers Should Consider a Bug a Duplicate (Michael Stahl), Sticky Minds, January 15, 2018
[2] Teaching machines to triage Firefox bugs (Marco Castelluccio), April 2019
[3] Detection of Duplicate Defect Reports Using Natural Language Processing (Per Runeson & Magnus Alexandersson), IEEE 2007

Thursday, December 13, 2018

Confetti User Stories

Finding the right balance between "too big" and "too small" isn't easy sometimes.

Thursday, November 29, 2018

Lining up in the Ocean

It's about how fast we can be wrong with our conclusions. I often see that we detect an alleged new issue, not realizing that the same anomaly was already there before. It's easier and faster to judge without collecting the facts first. Things that may look the same sometimes aren't.

Monday, November 26, 2018

Patched UFOs

When developers state they didn't change anything but one line of code, they actually believe it themselves. It's just that they have a bias towards forgetting all the work they (plus their colleagues) have done before their last coffee break.

Tuesday, November 13, 2018

Sprint Review #37

An agile team is one that continually focuses on doing its best work and delivering the best possible product. In our experience, this involves a ton of discipline, learning, time, experimentation, and working together.
[Crispin - Agile Testing]

Sunday, November 11, 2018

Gung-Ho Tester

Last Friday, in the "daily", Cindy handed over a draft of her own doodled sketch about me. I was really surprised and found it brilliant. I promised her to redraw the sketch in my cartoon style over the weekend, and here we go.

ThanX. Nobody had ever made a drawing of me before. I really enjoyed this little moment of self-mockery.

And this is the original sketch from Cindy:

Sunday, November 4, 2018

Analyzing Lenny's flight paths

We are tracking the routes of animals all over the world. Some birds have already become famous, like Lenny the stork, whose annual trip to Spain and back to his home at Basel Zoo is followed by over 1500 fans. Lenny carries a transmitter, and everyone can check online when he starts his journey from Switzerland to southern Spain, where he spends a few months before he returns.

His routes [lenny] have been recorded since 2013, and each time he starts his way back to Basel, the local news celebrates it with corresponding articles [lenny2]. People desperately await his arrival around February/March and are fascinated by the fact that he returns to the same nest every year. He also meets the same old girlfriend to enlarge his family. What a faithful and diligent bro.

I wonder what Lenny would think of us bird-stalkers if he knew that each day of his life is tracked as a spot on a map that many people find so exciting. He'd probably take us to court, rip off the transmitter, or at least unsubscribe from all social media accounts.

Maybe he'd become more vain and demonstrate that he could fly further south to Africa, like others and/or his ancestors did. He'd probably also feel ashamed knowing we watch him stop at all these fast-food rest areas, which are nothing but garbage dumps. Or he could be a little rascal like in the cartoon, with a big portion of humour. But since he doesn't know he's being observed, he blithely goes on doing what many others do, until a yet unknown trigger in his head tells him to go back home.

It's not only the incredible flight paths that I find so fascinating; it is also sad and heartrending to see that almost every second ringed stork does not survive. The datalogger web page [logger] keeps a record of each stork with his/her name, and if you read their fates, you will learn that some don't even manage to accomplish a single trip. They die by flying into electric power lines or wind turbines, get shot, or perish because they ate wrong or morbid things. None of these causes of death is natural.

But that's not really what I was about to talk about. While digging deeper into Lenny's flight routes, a lot of questions came up. Why do these storks no longer fly to Africa and instead stay in Spain? How do they know where to go, and how do they find their way back to the nest they grew up in? When do they start going south, and how do they know it's time to go back?

Scientists have raised the same questions about the truncated routes, and they have come up with two theories. The first is that Swiss storks are the result of a resettlement project that started with Algerian storks, birds that are not used to flying the same long distances as the original domestic storks.

The route they are taking now pretty much matches the distance their relatives are "programmed" for, or what they can perform. The second theory is less pleasant. Spain is full of garbage dumps where they find enough food. If you follow Lenny's markers and then switch from the "map" view to the "satellite" view, it is really appalling to see where he likes to rest and eat. I feel sorry for him and his friends. These garbage dumps seem to be another reason why there is no need to search for food further south.

These are fair explanations and I can live with them very well. But I could not find any clear answer to all my other questions, like how do they remember which route to take and when? Explanations in scientific articles range from an "inner clock" and being "genetically programmed to go south with no plan" to "orientation by the Sun", "stars", and even "magnetic fields" or "orientation by the landscape".

I am a software tester, and if a software developer gave me a similar variety of explanations for why a piece of software doesn't work, my conclusion would normally be that one simply doesn't know.

I took the time and verified just one of many theories, which is "orientation by the landscape".
If you zoom out, it looks like Lenny takes the exact same route every year. But if you zoom into Lenny's flight path, you see that his first two routes to Spain went over the region of Burgundy in France, while in the last two years he took a route about 100 km east of the first ones. This doesn't really look to me as if he strongly remembers each hill and course of a river, but it still looks like he follows a main path from which he takes small excursions.
When Lenny headed south for the very first time, he took a shortcut over the Mediterranean Sea (08/2013), a route he never took again in subsequent years on his way south. But he used it regularly on his way back north, except for 2016 and 2017. That's interesting, 'cause big birds generally don't like to fly over the sea: there is less natural lift, so it costs them more energy to fly. So, what happened that first time to make him never take that route again? And why did he find that route more attractive for his return to Switzerland? Did he follow and trust less experienced storks? Probably not, because at the time he got "chipped" he was already an experienced 10-year-old stork. Do weather conditions or seasonal winds make it more economical to fly over the sea at certain times of the year? Did he learn about dangerous places which he tried to avoid next time? Does this deviation from the main route broaden his skills and make him a real travel expert over time, by learning on the fly and really remembering all these things?

Now we have our link to software testing. Isn't there an analogy to the process of following a strict test script vs. taking some detours? Isn't it more successful to find bugs by making some extra tours away from your documented test steps? What if we don't enter the prescribed number 9481 in the calculator but do something different and watch what happens next? What if we use an alternative way to trigger the same software function/operation? I can enter a text by typing or by pasting from the clipboard into an edit field, and I swear to you it is not the same! Isn't this habit of taking detours exactly what turns an ordinary amateur "checker" into a professional software tester?

As a software tester you have to activate all your senses, and over time you get a much better feeling for where things can go wrong. No book can teach you that. There are interesting ones out there, like those from James Whittaker, but it's nothing like experiencing these tours on your own [whit]. And what about subject matter experts? They are like the elder storks. You would be dumb not to take their advice. They may not be the best testers, but they might lead you to very interesting places you have never been before.




Exploratory Software Testing by James Whittaker

Thursday, November 1, 2018

Story Points at Coco Beach bar

Imagine asking a waiter how long it takes to deliver the meal, and all he says is: "Can't tell you that, but I can tell you how complex it is to prepare". That's what story points are about. I already wrote about story points in the last post but copied a snippet to this cartoon.

Although story points have nothing to do with Scrum (they were popularized by Mountain Goat Software), many companies use them to estimate their tasks. Story points are a bit special because they don't tell you anything about how long a task takes, but rather how complex a task may be to complete. In theory, a task can be very trivial and still take long because it may involve a lot of monotonous activity. On the other hand, a complex task can be completed within an hour or a day, depending on the skill of the one who is implementing it.

Although I have worked with story points for the past 10 years, I am still torn between two mindsets: I still can't decide if they are a cool tool or just a buzzword. The paradox of this measurement is that on the one hand you don't measure how long a task takes, but you still do so indirectly, because you use these SP numbers to fill the sprint. This is per se exactly the same as if I measured the hours I have for a task, because we fill the sprint only with the limited number of hours/days available. But story points help you notice much faster when a task gets too complex. Since you are using Fibonacci numbers, you get alerted right away if you have a task estimated higher than 8. There is no 9, no 10; the next number is 13. This huge jump is a great warning sign and leads you to rethink the size of the story.
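The warning-sign mechanism described above can be sketched in a few lines. This is a hypothetical illustration, not any team's actual tooling:

```python
# Story point values on the widely used "modified Fibonacci" scale.
FIBONACCI_SCALE = (1, 2, 3, 5, 8, 13, 21)

def check_estimate(points):
    # Anything above 8 jumps straight to 13, which is the warning
    # sign to rethink and split the story.
    if points not in FIBONACCI_SCALE:
        raise ValueError(f"{points} is not on the scale {FIBONACCI_SCALE}")
    return "consider splitting this story" if points > 8 else "ok"
```

The gap in the scale does the work here: an estimator cannot hedge with a 9 or a 10, so an oversized story is forced to announce itself.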

Saturday, October 27, 2018

Agile cocktail bar

A funny explanation of scrum terminology with a pinch of salt.

Refinement: in a refinement meeting, the business analysts introduce the desired features (= user stories), and the developers estimate how much time they will need to build them. If you are lucky, you deal with an experienced team and the estimation of each story is a matter of minutes. In the worst case, the estimation is based on limited background information or even total disorientation about what is really behind these stories.
A weird thing is that some companies prefer the term "Grooming" over "Refinement", although I recommend not to use it: "grooming" is also a term for child abuse... just be aware of that.

If one realizes that a desired story is too complex to be done within a reasonable time frame, it must be split into smaller stories until each of them fits. Even during a sprint you may face the situation that certain tasks of a story cannot be implemented in time and therefore require business analysts to further break up the story into smaller pieces. The more stories that require refinement, the higher the probability of losing sight of the big picture. At some point you may not see the wood for the trees.

Sprint Planning: The team discusses the goal of the next sprint, i.e. the tasks and user stories that need to be implemented. Some now notice that the estimates of certain user stories are too low and correct them on the fly, so fewer stories can be added to the sprint. In order to understand what you can put into a sprint, you also need to know your capacity. If you are new to agile processes, you can't know the team's capacity; usually you learn the team's typical capacity after a few sprints.
I've been at a company where the planning was a matter of 20-30 minutes, while at another we had endless discussions and velocity calculations; then everyone agreed, although it was already clear there was little hope of really completing all these tasks. The only group of people who believed the goal would be met were the managers.

Sprint: Usually a 2-4 week work phase in which one implements what was agreed in the sprint planning. This is also the phase where some people realize they wanted to go on holiday and just didn't say anything before. Others realize they have a workshop to attend, and yet others become ill. At one company it was common to regularly check whether or not the sprint goal could be met. If there were indications that a task could become too big or that other tasks could become more important, it was normal to take tasks out of the sprint and re-prioritize. At another company, the sprint planning was seen as a strong agreement between the product owner and the team, and it was much harder to take things out.

Daily: In a "daily", the entire team meets for a 15-minute briefing. Each individual has a minute or two (depending on the team size) to explain what they have done yesterday and what they are planning to do today. For people who are not familiar with this kind of briefing (especially when they are new to agile processes), such meetings can be misunderstood as an awkward instrument of micromanagement. As a result, during the briefing they try to convince others how hard they worked and that they will work even harder today.
Others are so enthusiastic they would like to have the complete 15 minutes just for themselves. As a result, one rarely finishes in time. Some of the teammates have no clue what others are talking about; others polish their nails, check the latest news on their mobile phones, or make notes for the next meeting, where they will do the same for yet another meeting. From a good friend I heard that their team had to do planks while speaking; this helped guarantee they didn't talk too much. An external consultant once suggested cutting off the meeting after 15 minutes regardless of whether all teammates had a chance to talk. I think such advice is ridiculous and counterproductive with regard to building a motivated team.
Burndown Chart: The burndown chart is a two-dimensional graph designed to track the work progress during the sprint. In the beginning you have a lot of work in TODO, and near the end, ideally, nothing should be left to do. In the best case, the graph shows a nice consistent staircase going down from top left to bottom right. However, reality is sometimes different. The graph often shows a straight line without any indication of movement until shortly before the sprint ends. The graph then looks like the path of an airplane that all of a sudden disappears from the radar. The poor guy is the tester who gets half-done tasks thrown over the wall, with zero time left to test.
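For illustration, here is a minimal sketch (hypothetical, not any tool's actual algorithm) of the ideal burndown line and a crude check for the "airplane off the radar" pattern:

```python
def ideal_burndown(total_points, sprint_days):
    # The "nice staircase": remaining work falls linearly to zero.
    return [total_points * (sprint_days - d) / sprint_days
            for d in range(sprint_days + 1)]

def looks_like_radar_dropout(remaining_per_day):
    # Flags the dreaded pattern: the line stays flat (no work burned)
    # for at least half the sprint before work suddenly "disappears".
    flat = sum(1 for prev, cur in zip(remaining_per_day, remaining_per_day[1:])
               if cur >= prev)
    return flat >= len(remaining_per_day) // 2
```

A healthy sprint like `[40, 30, 20, 10, 0]` passes the check, while `[40, 40, 40, 40, 5, 0]` is flagged; it is exactly the second shape that leaves the tester with half-done tasks and no time.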

Sprint Review: That's like payday. Developers demonstrate the outcome of the sprint. This is usually the day when we see a lot of astonished faces.

Retro (retrospective): After a sprint, all team members have the opportunity to comment on what went well and what didn't. From this, measures are derived for the next sprint. Don't be surprised if, after the meeting, people have already forgotten what they just discussed and agreed on.

Story Points: Although story points have nothing to do with Scrum (they were popularized by Mountain Goat Software), many companies use them to estimate their tasks. Story points are a bit special because they don't tell you anything about how long a task takes, but rather how complex a task may be to complete. In theory, a task can be very trivial and still take long because it may involve a lot of monotonous activity. On the other hand, a complex task can be completed within an hour or a day, depending on the skill of the one who is implementing it.

Although I have worked with story points for the past 10 years, I am still torn between two mindsets: I still can't decide if they are a cool tool or just a buzzword. The paradox of this measurement is that on the one hand you don't measure how long a task takes, but you still do so indirectly, because you use these SP numbers to fill the sprint. This is per se exactly the same as if I measured the hours I have for a task, because we fill the sprint only with the limited number of hours/days available. But story points help you notice much faster when a task gets too complex. Since you are using Fibonacci numbers, you get alerted right away if you have a task estimated higher than 8. There is no 9, no 10; the next number is 13. This huge jump is a great warning sign and leads you to rethink the size of the story.

Monday, October 8, 2018

Banksy was here

Sotheby's in London auctioned off a framed version of Banksy's iconic subject "Girl With Balloon" for over 1 million pounds. When the final bid was made, the big surprise came: suddenly the canvas moved down and the picture was destroyed by a shredder built into the picture frame [nyt]. Right after the surprise, the anonymous artist published a video detailing how he had installed the shredder into the frame [ban].

The disturbing part of this story is that the picture has probably gained even more fame through this action and thus very likely becomes even more coveted, although shredded to pieces; intentional or not, Sotheby's in the know or not, the buyer well informed or not... who knows.



Saturday, October 6, 2018

Birds love BUGS

Here are the stats. I raised 2500 bugs in 12 years, then moved to a company where I raised a thousand bugs in only 19 months.
Presuming that a typical year has 252 working days, this gives me a rate of 2.5 bugs per day or 12 per week (compared to an average of 0.8 per day or 4 per week during the previous 12 years).

That means the rate of identified defects has increased by a factor of 3.
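The arithmetic behind these rates is easy to reproduce. A small sketch, assuming the 252 working days per year stated above:

```python
WORK_DAYS_PER_YEAR = 252  # assumption used in the post

def bugs_per_day(total_bugs, working_days):
    return total_bugs / working_days

old_rate = bugs_per_day(2500, 12 * WORK_DAYS_PER_YEAR)       # ~0.8 per day
new_rate = bugs_per_day(1000, 19 / 12 * WORK_DAYS_PER_YEAR)  # ~2.5 per day
factor = new_rate / old_rate                                 # ~3x increase
```

The point of writing it out is not the factor of 3 itself, but how many unexamined assumptions (working days, months to days, equal bug severity) are baked into three innocent-looking lines.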

What do these numbers tell us about me, the software-under-test, or the company, and what do they tell us about the developers who introduce these bugs?
Do these numbers really have any meaning at all? Are we allowed to draw a conclusion from these numbers without having the context? We don't know which of these bugs were high priority and which weren't. We don't know which bugs are duplicates or false alarms, and which of them look more like they should have been raised as change requests.
We also don't know the philosophy in the team. Do we raise any anomaly we see, or do we first talk to developers and fix issues together before they make it into the bug tracking system? Do we know how many developers are working in the team? How many of them really work 100% in the team, or less, sporadically, etc.? Also, does management measure the team by the number of bugs introduced, detected, solved, or by completed user stories, etc.? Is the high number of identified issues a direct effect of better tester training, or are the developers struggling with impediments they can or cannot be held responsible for, so that these bugs are just a logical consequence of those impediments? Are there developers who introduce more bugs than others?

As it is with such numbers: they are important, but they serve only as a basis for further investigation. It's too tempting to take these numbers as is and draw one's own conclusions without questioning them.

Wednesday, September 26, 2018

Checking the Cloud

...or "Hi, just wanted to see what my data looks like in the cloud".

and here is how the first draft looked...

Thursday, September 20, 2018

Monday, May 21, 2018

Welcome to Weirdo-Land

I just wanted to draw a classic Porsche 356 Speedster from the Fifties. Usually I am not that much into sports cars, but this one is an exception. I watched an interesting report by Jay Leno about a 356 replica built by JPS Motorsports and haven't been able to take my eyes off it since.

Thursday, May 17, 2018

Baffled sequenceIDs

Sorry folks, insider joke. I don't know how to generalize the cartoon so it is also funny for those who haven't experienced the exciting moments of our love-hate relationship with sequence IDs. The counter was reset after each deployment, which resulted in duplicate numbering and kept us busy hunting ghost bugs.
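A minimal sketch of the underlying problem (hypothetical code, not our actual system): if the ID generator lives only in process memory, a redeployment restarts the process, resets the counter, and previously issued IDs get handed out again.

```python
class InMemorySequence:
    # Hypothetical ID generator kept only in application memory,
    # instead of in persistent storage such as a database sequence.
    def __init__(self):
        self.counter = 0

    def next_id(self):
        self.counter += 1
        return self.counter

seq = InMemorySequence()
first_batch = [seq.next_id() for _ in range(3)]   # IDs 1, 2, 3

seq = InMemorySequence()  # "deployment": process restarts, counter resets
second_batch = [seq.next_id() for _ in range(3)]  # IDs 1, 2, 3 again

# Both batches share the same IDs: cue the ghost-bug hunt.
```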

Tuesday, May 15, 2018

No undo in the elevator

A nice test pattern to apply while testing software is to revert all changes performed on one or more objects, either by pressing CTRL-Z as often as possible or by using any other cancel/revert/undo operation the software provides. An interesting observation for me is that to this day, I have never been in an elevator that offered the possibility to cancel a pressed button... resulting in the experience of real "pain".
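The pattern can be sketched as a tiny example (a hypothetical widget, not a real UI toolkit): apply some edits, then undo more often than there were edits, and verify the object returns to its initial state without crashing.

```python
class TextField:
    # Hypothetical widget with an undo stack.
    def __init__(self, text=""):
        self.text = text
        self._history = []

    def type(self, chars):
        self._history.append(self.text)
        self.text += chars

    def undo(self):
        # Undoing past the first edit must be a safe no-op, not a crash.
        if self._history:
            self.text = self._history.pop()

field = TextField()
for word in ("hello", " ", "world"):
    field.type(word)
for _ in range(10):      # press CTRL-Z far more often than we edited
    field.undo()
assert field.text == ""  # back to the initial state, no crash
```

The over-undo at the end is the interesting part: many real implementations handle the happy path but misbehave exactly when the undo stack runs dry.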

Sunday, May 13, 2018

Ready to take off - or good enough to go live?

Many, many years back, when I drew the first version of this cartoon on a scrap of paper, I was inspired by an email sent out to everyone claiming the software was good enough to go live, although it had never gone through proper testing. Only "A." had tested it; and since A. was such a great techie, we felt quite comfortable as long as he had looked into it. This attitude became more and more established, until we stopped reflecting on the fact that even for "A." it was simply not possible to deal with all the bugs increasingly showing up at customers. We had to change our mindset, stop relying on exploratory testing alone, and set up a team that followed a more planned approach to testing the software. This included a study of the varying characteristics of the different customers' most important workflows, and it resulted in testing these workflows with the goal of mitigating the risk that they break in the next release.

Thursday, May 10, 2018

A lesson in Test Patterns

I recently gave a workshop at the SwissQ offices in Zurich about common test patterns and how I look for bugs. I showed a draft of this cartoon, and this was the only moment where they really laughed out loud, so I thought it might be worth finalizing the cartoon and publishing it.

Thursday, February 22, 2018

Unequal conditions

I created this cartoon on behalf of Tennisclub Augst for their regularly published small brochure. The more I look at it, the more I realize the interesting analogy it bears to completely different areas far outside of simple tennis scenarios.

Friday, February 16, 2018

125000 clicks

We made it, Yogi! I can hardly believe that these days I see 125000 total and real clicks on my cartoon blog. Not enough to get famous (not my goal), but also not so few as to be ignored... thanX a lot for watching this blog.

I am especially happy to have succeeded in banishing all requests to publish advertisements on this blog. I hope I can keep it that way for a long time to come.

Saturday, February 10, 2018

Mocking the Driver

Inspired by Tesla, who "uploaded" their first car into space on February 6, 2018. I haven't searched for any Tesla cartoons yet, but I guess there must be hundreds of similar ones around already. On purpose I didn't look for them, so I could fully concentrate on mine and not risk accidentally copying someone else's idea. The picture of the moon is real; I shot it in 2014 with my Russian MTO 1000 telescope.

Sunday, February 4, 2018

Squares, Bowls and Triangles

I will never forget the speech of a Google test engineer many years back, when he started with a slide that showed nothing but red. He took a chair, sat near the audience watching the wall together with us, and said nothing. After quite a while he broke the pondering silence and stated: "one gets used to red if you only stare at it long enough".

The point he was making was to never accept your test scripts reporting failures for too long. You should immediately address newly introduced bugs and fix them. If you wait too long, more and more tests may go wrong until, very soon, you no longer get alerted when new tests fail. It may become a normal situation for your tests to fail. This is not only true for automated testing; it applies the same way to manual testing. If you get used to all the flaws and error messages an SUT (software-under-test) reveals, it is just a question of time until you come to terms with them.

You don't see these things anymore after a while, until someone with fresh eyes points you to all these known but weird things happening on your screen. You may then realize that you have been softly re-educated. Ask yourself whether you still wear the hat of a customer, or whether you have already turned from a quality-focused engineer into an ignorant who, almost in trance, automatically clicks away all the noise. You may explain to yourself that these are all known issues reported in the past and that someone will care about them sooner or later. Really? Reality shows that these issues are long forgotten; saved, yes, but in a deep ocean called the backlog where no one can or wants to touch them anymore. If a bug sits in the backlog for a while, the likelihood of it dying there increases. If a bug hasn't been fixed by now, it probably isn't important enough to care about.

Quite some time ago, when I realized that I, too, was just about to get used to quirks, exceptions and error messages that pop up like huge bubbles created by a hunting whale, I communicated my perception to the team, and it was a shock to notice that others seemed to have already "fallen in love" with all this puzzle of inconsistency.

As a result I asked myself: do I allow myself to adopt the attitude of the squares, or am I an ignorant bowl unable to accept that things may be seen differently, or am I currently just a triangle who struggles to make a decision? I think I need therapy... =;O)

Monday, January 22, 2018

Aspiring killer bugs

My relationship with bugs is two-fold; actually, it is a love-hate relationship. I hate 'em because they nest everywhere. There are places where you definitely don't want to have them, regardless of whether you like to hunt bugs or not. Even an enthusiastic software tester doesn't like to see creatures in production turning customers' lives into a nightmare. But the fact is, there is no space in software without enough potential for bugs to hide until they take their chance and show their ugly grimace. They are like cockroaches: even if you think you have destroyed them all, there are always a few left.

But of course I also love these little virtual critters. Without bugs I'd have no job; or, to phrase it a little less dramatically, I'd probably have a different job and not such a great time putting them on paper.

Not all bugs cause headaches, but some actually do. Even the less dangerous ones can be a pain if they show up in a pack. The job of a professional tester is to find as many of the critical ones as possible. That means you have to study them; you need to understand where they "live" and how they develop. Bugs aren't clever (at least the software bugs), but they use every tiny little hole you leave unsealed. Some holes are so small that you can hardly see them until the bugs crawl out and make your program crash or behave in a very unexpected way. Often you need to apply good techniques that make bugs give themselves away, because you wouldn't find them otherwise.

People with little to no dedication to making software a great user experience will not only miss important bugs; they also make a whole team look bad in front of a paying customer.
The job is by far not done if you simply compare deliverables from software developers against incomplete acceptance criteria defined in user stories.

One great technique for finding bugs is to apply test patterns when you investigate a feature. You probably cannot apply all test patterns to every feature, because some don't make sense depending on the context, but if you continuously test with a checklist in mind, you will soon realize how much more there is to discover behind an innocent-looking user story.

Test patterns are described in many books. James Bach talks about testing heuristics; James Whittaker walks through software like a tourist in a foreign country. The book "Lessons Learned in Software Testing" by Cem Kaner, James Bach and Bret Pettichord is a great source of information to help you develop great testing ideas. If you take testing seriously, you have probably already developed your own set of test patterns, sprung from past mistakes, observations, experiences or lessons learned from other testers, and hopefully you have further developed those patterns so your detection rate keeps increasing. The list will never be complete, but it keeps getting better.

Saturday, December 30, 2017

Knowing your dress size

This is one of the rare cartoons whose root is not a software testing event; it rather developed from a colleague who missed his jacket. He sent an email to everyone asking who might have taken it.
It later turned out that it was Jon (anonymized) who had accidentally taken it, but he seemed not to realize the jacket was far too big for him. His height was quite a bit different from the owner's.

The cartoon is a metaphor for what sometimes happens when companies start introducing UI test automation. You buy an expensive tool only to find out that what you want to automate is not supported or requires expensive add-ons. In the worst case, one needs to hire experts to develop extensions so the tool works well with the AUT (application-under-test). Of course, these extensions need maintenance, and maintenance is seldom free. I am not starting a debate about what's best, open source or off-the-shelf; this answer can only be given in a context which we don't have here. But I strongly believe that it's no bad practice to first start with a cheap or open-source solution which fits your current "dress size" and which lets you experiment and develop more specific ideas, so you learn better what you really need. Having enough time to explore helps you narrow down your requirements catalogue. You grow with the experience you gain, and after a while you may end up realizing the current "jacket" no longer fits or needs some boosters. That may also be the right moment to restart the tool evaluation and look for a more suitable "jacket" that fits your new dress size; at least you do this now with a more specific background, which is knowing your dress size.