Michael Bolton, the singer, has won Grammy Awards. This Michael Bolton, a little-known mandolin player and well-known singer of good, intellectual software testing, has been awarded for his innovation and contribution to the community you and I live in, at EuroSTAR 2008, one of the biggest testing conferences in the world.
Wait a minute, T H E Y won the award!
I just want to celebrate this more than any of my own successes, because it makes me so proud that my singing is influenced by THEM.
Some Indian testers won a thought leadership award at the Test 2008 conference. Wondering who they are?
Plus, at the STC2008 conference that concluded today, Sharath Byregowda (another proud moment for me) won the Best Contributor award for Test Republic.
What does all this say to you?
If you want to be a tester who goes beyond the traditional, great recognition awaits you. But if you do things just for recognition, you won't get it. Come out and communicate if you haven't been communicating, or else you will see yourself vanishing.
I am sandwiched between my guru and my student winning awards at around the same time. What a fantastic moment for me. During the toughest of times, I kept hoping that software testers who focus on human skills would shine in the future, and that those who aren't willing to challenge their minds would vanish.
This is just the beginning of greater success for these people. Wake up or vanish!
Tuesday, November 25, 2008
Friday, November 07, 2008
Rating testers based on number of bugs they find - It's definitely possible
In a CMM Level 5 organization I worked for about two years back, I got an e-mail from my Senior Manager saying that testers would be rated based on the number of bugs they find. At the time, I was the team's ace bug slogger (meaning, I logged a lot of bugs).
I was the first to revolt against the idea of rating a tester based on the number of bugs they find, despite knowing that it could benefit me a lot. I was aware of the importance of other testers in the team who weren't as good bug sloggers as me but had skills that I did not have, skills that a test team required. For instance, I wouldn't execute test cases. A combination of scripted and exploratory testing was important for that team. A customer insisted that the test cases (about 140) be executed and a report sent to him, without discarding the idea that we could also do exploratory testing. Some testers in the team had brilliant ideas for creating test files that helped the team find a lot of bugs. Some testers were documentation-savvy guys who would patiently sit for hours to generate release reports. We shared our tasks pretty well.
I replied to the e-mail saying, “Although I see that this policy could benefit me, I see it as a threat to the value we testers can add and have been adding. I wouldn’t mind quitting the organization if this policy is serious and put into action.”
The idea of losing a tester like me on a project worth a lot of US dollars sounded bad to the management, and they changed the policy to “Testers will be rated based on the number of bugs that customers find,” to which my reply was, “This doesn’t make me any more comfortable.”
In this context, I started looking for articles about testers being rated based on the number of bugs they find and bumped into Dr. Cem Kaner’s article Don’t Use Bug Counts to Measure Testers, whose conclusion is superb: “If you really need a simple number to use to rank your testers, use a random number generator. It is fairer than bug counting, it probably creates less political infighting, and it might be more accurate.”
I was very much impressed by the ideas shared there and passed the article on to my management. To my knowledge, no measurement of bugs to rate testers ever happened.
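Why are raw bug counts so misleading? Here is a minimal simulation of my own (an illustration, not from Kaner's article), assuming two equally skilled testers who happen to be assigned modules of very different bugginess:

```python
import random

random.seed(7)  # fixed seed so the sketch is repeatable

# Assume two equally skilled testers: each finds 80% of the bugs
# they are actually exposed to.
SKILL = 0.8

def bugs_found(bugs_present, skill):
    """Count how many of the present bugs a tester finds."""
    return sum(1 for _ in range(bugs_present) if random.random() < skill)

# Tester A is assigned a buggy module (50 bugs in it);
# Tester B is assigned a stable, well-tested module (5 bugs in it).
a = bugs_found(50, SKILL)
b = bugs_found(5, SKILL)

print(f"Tester A logged {a} bugs, Tester B logged {b} bugs")
```

Same skill, wildly different counts: a bug-count ranking here measures the assignment, not the tester, which is exactly why a random number generator can be "fairer."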
Until a couple of days ago, I was thinking that testers should not be rated based on the number of bugs they find.
Well, it's possible.
I read a post from Sharath Byregowda, a tester whom I respect and collaborate with in Bangalore. No, he didn’t write that it’s a good idea to rate testers based on the number of bugs they find. I often question myself, my ideas and my beliefs.
Can I still use bug counts to measure testers, even though I know it’s a bad idea, and make a good judgment out of it?
Then an idea struck me: “Measure testers based on their reaction to being rated on the number of bugs they find.”
All the good and great testers I know, and those who understand testing better than the majority of testers, would oppose the idea. There you are: you know who understands testing better than the rest of your staff, whom to retain, and who cannot be easily replaced.
Those who test whatever information they get are likely to test the product better than those who don’t test the information they have been given.
If you give a specification document, a test plan or a test case document to someone who doesn’t test the information in it (which most management doesn’t either), you have a problem with your hiring and your staff. I have witnessed testers who treat those documents as the Bible, the Holy Quran or the Bhagavad Gita.
Watch the following video. While you watch it, answer my question:
Isn’t what you see in the video strikingly similar to what most testers do?
Step 1: Open that
Step 2: Click here.
Expected Result: This should happen and then you get a peanut.
Many such peanuts satisfy your hunger.
Such humans can definitely be rated on the number of times they find things that they are expected to find through a tightly written script. Now you know why some testers use the term “monkey testing” for something that they do, and now you also know why they’re being paid peanuts.
Disclaimer: The use of the monkey training video isn’t meant to speak ill of the training those monkeys were undergoing. I respect and appreciate that those monkeys are being trained for the noble job of helping physically handicapped people get things done. Kudos to the team at http://www.monkeyhelpers.org for their brilliant idea of using monkeys to help physically handicapped humans.
--
Pradeep Soundararajan - Software Testing Videos: http://www.viddler.com/explore/testertested