I was the first one to revolt against the idea of rating a tester based on the number of bugs they find, despite knowing that it could benefit me a lot. I was aware of the importance of other testers on the team who weren't as prolific at finding bugs as I was, but who had skills that I did not have, and those were skills the test team required. For instance, I wouldn't execute test cases. A combination of scripted and exploratory testing was important for that team. A customer insisted that the test cases (about 140) be executed and a report sent to him, without discarding the idea that we could also do exploratory testing. Some testers on the team had brilliant ideas for creating test files that helped the team find a lot of bugs. Some testers were documentation-savvy people who would patiently sit for hours to generate release reports. We shared our tasks pretty well.
I replied to the e-mail saying, "Although I can see that this policy could benefit me, I see it as a threat to the value we testers can add and have been adding. I wouldn't mind quitting the organization if this policy is serious and put into action."
The idea of losing a tester like me on a project worth a lot of US dollars sounded bad to the management, and they changed the policy to "Testers will be rated based on the number of bugs that customers find", to which my reply was, "This doesn't make me any more comfortable".
In this context, I started looking for articles about rating testers based on the number of bugs they find, and came across Dr. Cem Kaner's article, Don't Use Bug Counts to Measure Testers. Its conclusion is super fantastic: "If you really need a simple number to use to rank your testers, use a random number generator. It is fairer than bug counting, it probably creates less political infighting, and it might be more accurate".
I was very impressed by the ideas shared there and forwarded the article to my management. To my knowledge, no measurement of bugs to rate testers happened after that.
Until a couple of days ago, I thought that testers should not be rated based on the number of bugs they find.
Well, it's possible.
I read a post from Sharath Byregowda, a tester whom I respect and collaborate with.
Can I still use bug count to measure testers, although I know it's a bad idea, and make a good judgment out of it?
Then an idea struck me: "Measure testers based on their reaction to being rated on the number of bugs they find."
All the good and great testers I know, and those who understand testing better than most testers, would oppose the idea. There you are: you know who understands testing better than the rest of your staff, whom to retain, and who cannot be easily replaced.
Those who test whatever information they are given are likely to test the product better than those who don't.
If you give a specification document, test plan, or test case document to someone who doesn't test the information in it (which most management doesn't do either), you have a problem with your hiring and your staff. I have witnessed testers who treat those documents as the Bible, the Holy Quran, or the Bhagavad Gita.
Watch the following video. While you watch it, answer my question:
Isn’t what you see in the video strikingly similar to what most testers do?
As you watch the video, you might see the monkey doing things very similar to what scripted testers do, and you will be reminded of:
Step 1: Open that
Step 2: Click here.
Expected Result: This should happen, and then you get a peanut.
Many such peanuts satisfy your hunger.
Such humans can certainly be rated on the number of times they find the things they are expected to find through a tightly written script. Now you know why some testers use the term "monkey testing" for what they do, and also why they're being paid peanuts.
Disclaimer: The use of the monkey training video is not meant to speak ill of the training those monkeys were receiving. I respect and appreciate that those monkeys are being trained for the noble job of helping physically handicapped people get things done. Kudos to the team at http://www.monkeyhelpers.org for their brilliant idea of using monkeys to help physically handicapped humans.
Pradeep Soundararajan - Software Testing Videos: http://www.viddler.com/explore/testertested