"Some birds aren't meant to be caged, their feathers are just too bright"- Morgan Freeman, Shawshank Redemption. This blog is from one such bird who couldn't be caged by organizations who mandate scripted software testing. Pradeep Soundararajan welcomes you to this blog and wishes you a good time here and even otherwise.

Thursday, February 11, 2010

Coaching testers on Bug Reports, Advocacy & Credibility

Here are some of the funny/awful bug reports that I have seen in my experience:
  • "When I open the application and click on that button I get a error massage saying it fatally crashed"
  • "The spelling of Transport Parametre is miss spelled"
  • "When I perform a submit the application has error message and thrown on the right side"
  • "I open and it crashes"
  • "The application is giving me unexpected message to work"
  • "Clicking on that link is taking me somewhere and I am unable to return"
  • "Applications is throwing pops up when I put my mouse on links of ads"
  • "Click fast links and get error message"
  • When trying to establish connection with devices some device say I am not available
  • Everytime I execute test case TR234 my PC rebooting
  • Test case TR 343 fails
  • When users click Submit twice & hangs.
  • After waking from sleep, application don't respond. (Hope you got what the tester was trying to say)
Impact

  • Now, that's funny for the moment but what about the credibility of such testers? 
  • Why would they be respected? 
  • Why would their bug reports be even read further? 
  • Why would they need a hike? 
  • Why would they be treated on par with others? 
Unfortunately, too many such bug reports have made developers sick of reading bug reports. That's why you see developers call testers to their desks and say, "Show me that bug". I am concerned about this problem because it is yours and mine. Even when those of us who report bugs in a credible manner join a team and report bugs, the developers still ask, "Show me that bug". That's how bad the effect has been on most developers I have come across in India. Folks from other countries reading this could share their experience.


Bug reports are one of the key factors that make or break the credibility for testers.



Do you know about Hands on Software Testing Training from India? If you don't, you must take a look at the excerpts of students' work. You might be surprised and ask me whether fresh college graduates from India really reported bugs in such a credible way.

Santhosh Tuppad, a tester who chose me as his mentor, achieved this distinction: he was profiled at Utest for earning the credibility of the highest bug approval rate. You should read about it, I think.

Santhosh has just a year of experience. However, the practice he put in during that one year is what got him profiled at Utest and won him the credibility of the highest bug approval rate. Not just Santhosh; his batch mates are doing exceptionally well as testers, for their year of experience, in the organizations they work for. That even includes his girlfriend :)

I just did what any training is actually supposed to do. So, I am not going to brag about myself or about the magic/voodoo of coaching testers. It's safe; you can read it without getting hurt.


It's a shame that some of us are looked upon as good thinkers just for doing what every tester is supposed to be doing. That's the state our community is in. I think the difference is that we (those who are considered good thinkers) are bolder, respect ethics, have higher self-respect and want to see the community do better.


Let me just share how I coached testers on bug reporting and bug advocacy, so that it could help you coach yourself or others. The ideas I have for coaching testers are abstracted, borrowed, redesigned and modified from what I observed while being coached by James Bach and Michael Bolton. I also took a context-driven approach to suit India.

Coaching testers to report bugs


There is one week of training and practice on Bug Reporting in my Hands on Testing Training class. Typical approaches like ISTQB and CSTE training appear to cover it in 2 hours or less. Wow! Those guys have scalability and I don't.


So, I set up the bug tracking system, provide them a buggy application (almost any software will do) and ask them to report all the bugs they see. I clearly state that it is not about who finds the most bugs but who reports bugs in a credible manner. I don't teach anything about bug reporting before starting this exercise; I leave them to learn it through an experiential approach.



I am logged in as admin from my laptop. The first bug report arrives and I call that participant to my seat. I critique the report in following ways:

The developer view

"I am a developer and I don't understand your bug report. I don't know if this is happening while running in Vista because I see a different behavior in XP. As you have no indication to me about your OS, I have no clue what to do with your report. This is why I don't understand your bug report. I am deleting this report because it doesn't make sense to me. Can you report it again in a way I can understand it?"
  • The participant goes back and attempts to report the same bug with additional points based on what I mentioned.
  • That participant whispers the message across to other participants and I just act as though I don't know about the whisper.
Some of the smarter people pick up a little more and mention other details, such as Service Pack 2/3, so that their bug report doesn't get deleted again, but others don't get it yet. No problem, Pradeep is there to help.

The next bug arrives from another participant, who also gets invited to my seat. "I am a developer sitting in the US of A and I can't understand your English. Maybe it would help me if you could send me a screenshot or a video. I am going to delete your bug report as I don't find it helpful to me."
  • So, that participant goes back a little disappointed that although the system configuration was mentioned, the bug report was still deleted.
  • Whispers about screenshots and videos get started. I didn't hear any such whisper :)
The next bug from another participant comes with system configuration and a screenshot: a 6 MB BMP attachment. "I am working on a dial-up connection and this attachment is taking 20 minutes to download. I have lost interest in seeing the screenshot. Maybe you logged a great bug, but I would have been able to see it if it were a JPG file that takes less space."
  • Whisper! Sssssh!
  • I don't know :)
Another bug, another participant. Spelling and grammatical mistakes stuffed all through the report. "Thanks for coming to my seat when I asked you to come. It shows the respect you have for me. I wish I could respect you that much or maybe even more. However, your English looks so bad that my colleagues would laugh at me if I respected you. To prevent that from happening, I have no option but to delete your bug report."
  • "Hey, please correct this report" whispers one to another.
  • I then announce "Microsoft spelling and Grammar check is your best friend".

As a Developer who loves to say, "No user would do that"


"Hey, the bugs you have reported are not actually bugs. So, I am deleting them all".
"What? Sir, we put in so much of effort and you are deleting it all" (Hate the "Sir" part though)


Well, if you don't want me to delete any of your bug reports, then I need strong evidence and an investigation of why you think it is a problem. I wish I knew which oracle you used, but none of you reported any.
  • At this moment, I hear people wondering what oracle they used to spot the problem and trying to add to their existing bug reports.
  • I announce, "You could ask me which oracle if you are not sure". For the next 15 minutes I am your friend who knows how to test and the oracles for the bugs you reported.
  • Those 15 minutes is real busy time for me.
Next iteration. "The oracles you mentioned seem to be OK but I am still not convinced if that is a problem. It might help me understand the problem if you could tell me the impact it has on the user. Hello! Bug Advocacy! Cem Kaner! Google!"



So it goes on and on, and it takes them 3 days to get one bug report that the developer doesn't delete. By then they have surpassed most of the ISTQB, CSTE or other similar certification trainings. However, this approach that I am talking about here sucks. Man! It doesn't scale.



As a Test Manager


So, from today I am a Test Manager, not a developer. I hear a sigh of relief. They think their reports won't be deleted as often as they were by the developer.



"Ah! My manager has called me for a meeting to give an update. When I look into all your bug reports, I get no clue what it means. Your summary is either too long or I am unable to understand them. I don't have time to go through your bug reports in detail. I shall delete all bug reports that do not help me make a quick assessment of the quality of the product"


"Hey! Here is how I report bugs!"
  • No Whispers this time.
  • They start copying my style, and since they are learning it for the first time, that's okay.
  • All bug reports get changed to the style that I follow.
  • The beauty comes when some people try to modify my style to their own style. That's the Indian Masala I like the most.
Now, that kind of thing happens for a few iterations, going on for a full day or so, till I am satisfied with their bug reporting as a test manager.


As a co-learner


"Guys and gals. Now, we are pretty OK with bug reporting and bug advocacy. Let's try to critique the bug reports that are public and try to learn if there is something interesting that others do that we didn't so far"


I open bugzilla.mozilla.com, go through many reports and ask them to point out the good and bad points in each bug report we observe. We have discussions, arguments and debates on them. So, we end up refining ourselves, and I drive home some important points of bug reporting.


"Folks! Lets test your bug reporting skill. Let's work on another project and see if we repeat any mistakes or make new ones".


Fail fast, Fail Safe


I failed miserably in a couple of exercises that James and Michael did with me as part of Rapid Software Testing. However, for me, it was safer to fail in front of them than in front of a client. I was glad I was exposed to a context in which I failed, although that kind of context hadn't come up in my work yet. When it actually arrived, I said, "Aha! The Wine Glass". It's an RST secret :)


I have a checklist of things on which I make the participants fail and then get corrected, with respect to bug reporting. Here are some of them:
  • Spelling, Grammar & Typos checks
  • No usage of SMS language
  • Crisp & useful summary
  • Serving different stakeholders
  • Observations
  • Investigation
  • Risk to the user
  • Inferences & Conjectures
  • Cost versus value
  • Screenshots & Videos
  • Log files & Supportability
  • Symptom versus problem
  • Cost of fixing / not fixing the bug
  • Questions to the developer
  • System, Browser and all other relevant configuration
  • Test Environment awareness & details
  • Self critique of ideas and bug reports
  • Peer review
  • Input / Output / In between : test data, system state, environmental changes
  • Hardware
Enough! You might ask me how big my bug report usually is. Well, it depends on the bug and the audience, not on me. In the last two assignments that I am executing here in Delhi, lightweight bug reporting with all the above information is helping me win accolades. Sorry for bragging although I said I wouldn't. It's kind of a bad habit, you see.


Oh, you know, I also won the top bug at Utest in the Bug Battle of Search Engines. I wish Utest could publish that bug report. Since then, I have seen some testers attaching videos of their bugs in Utest bug battles and releases. It really solves the problem of language barriers and steps to reproduce for most bugs. I didn't invent the idea of video-recording bugs, but I practice it and advocate using it in contexts where it helps.



So, coming back to the topic of Bug Reporting, Bug Advocacy & Credibility: are you surprised that Santhosh Tuppad, who went through such training, achieved the status of highest bug approval from Utest and is invited to many more releases to test?


Whenever I go and speak to businessmen in India about the results of this kind of training approach and the value it can bring to the community, the thing they say after saying "Great" is: "This doesn't have as much scale as ISTQB or CSTE, where I can make any Tom, Dick and Harry replicate the training. Slides are replicable. Good luck."


So the message to you is: forget all this hands-on approach and other blah blah. ISTQB has scale. CSTE has scale. CSQA has scale. ISEB has scale. Go there and learn to memorize the CBOK and practice puking it onto the exam.


I don't coach people to memorize. That's not how I was coached. Go find a guru who will teach you good stuff. Don't waste time with businessmen talking about stuff that they too know is good but think won't scale.


I will show the world that good stuff can scale. If you didn't like the word "I", let me replace it by another.


"We will show the world that good stuff can scale".



The Avatar!

Friday, February 05, 2010

Finding Nemo Answers: Shmuel, William & Bangladesh Testers


So, I posted a challenge for software testers in December through this post: Why testers need to learn to write code that works. It was widely tweeted, circulated and even translated into Russian.

At the end of the post was an exe file with a challenge in it. I was practicing coding and wanted to do it in a way that helps me and others. I thought many Indian testers might pounce on it, but that didn't happen. Even the testers I mentor didn't seem to take it seriously. A part of mentoring is to not get disappointed when your mentees are not excited about a thing you are excited about, or want them to be excited about.

However, there were some pleasant surprises. Thanks a lot to Alexei Barantsev, Shmuel Gershon, William Fisher & Sajjadul Hakkim and his team at Bangladesh.

I am going to present a review of Shmuel Gershon's and William Fisher's work. What about the Bangladeshi testers' work, then? Well, it's in encrypted form, which I will link at the bottom.

At this point, I'd like to warn you that if you plan to attempt the challenge, you might not want to read further. However, if you don't want to attempt it but would rather read the answers and review, please go ahead. Think for a moment, maybe two, and decide whether you really want to read the answers before attempting it. If you don't know what the puzzle is, the following won't make sense to you, so just play with this and come back.


So, you decided to read their answers. No problem, you still made a wise choice :)

If you want to read Shmuel Gershon's experience report and answer first before reading my review on it, please do so. Here is the link to his blog.

The Black Box Tester Approach!

Shmuel, "So I downloaded Pradeep’s application, and got to work! Rather than just trying to solve the puzzle, I looked at it as if the mission was: This is commercial 'roulette' style game. Stakeholders want to know it the game logic can be broken/learnt, which could mean a substantial loss of money when people start winning every time.

That's a nice start. He didn't see it just as a puzzle but set a bigger mission for himself and made it more interesting. The way he set the mission didn't contradict the mission I had provided one bit, yet it was different from the actual one. That's a cool thing to do in such contexts. I would have been more curious to jump into the puzzle and treat the mission as given. So, that's a learning for me.


  • First, I ordered my environment to allow efficient and organized work:




    1. Increased the Screen Buffer Size of the command line I was using to 500 (it turned out I would have done well with less, even half of that (see point 2.b), but it didn't hurt).
    2. Renamed the executable to be shorter and without number version: ren fnemo_1.7.6.exe fnemo.exe
    3. Changed the prompt to something shorter, cleaner, and that gave me context about this task against the other Cmd lines windows open: prompt $Cfnemo$F$S$G
Testing Hint: Organizing your testing environment before you start will likely help you during your tests. After you’re in the heat of fight, it will be harder to stop and get organized.

Shmuel provides a testing hint to all of us that is important but not practiced often. I should say that I would have jumped into it without bothering to do any setup.
  • I tried some basic executions of the program, to grasp a feeling of what it does and how hard it is to find Nemo. This also taught me what messages the application returns when Nemo is found and when he isn't. They're "Found Nemo this time!, Nemo Gill Bubbles SharkTooth Flow Phamplet Stinger" and "Ah! bad luck, didnt find Nemo this time!" respectively.
    1. I also learnt that Pradeep used Perl and Perl2Exe to do the app.
    2. Additionally, the application clears the screen at every execution, so the big command line buffer is not so helpful.
While developer-testing this program, I found that leaving the previous output on the command line was a little irritating when performing my next test, so I added a clear-screen to the code. You might be interested to note that I clear the screen at start because, when my code erred, I didn't want to see it. A clear screen gave me a fresh look at the output and I was happy. It is possible that other programmers do something similar that impacts the usability of the product.

So, programmers aren't wrong unless they don't want their code to be tested.

The next time I write code, I shall keep this in mind and try giving the user an option to clear the screen every time the program is launched. That might solve one problem, probably without creating others.
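As an illustration, such an option could be as simple as the sketch below. This is a hypothetical Python illustration (the challenge itself was written in Perl, and the function and flag names are mine, not the program's):

```python
import os

def maybe_clear_screen(enabled: bool) -> bool:
    """Clear the console only when the user opted in; return True if cleared."""
    if not enabled:
        return False
    # 'cls' on Windows, 'clear' elsewhere
    os.system("cls" if os.name == "nt" else "clear")
    return True
```

A tester exercising both values of the flag would quickly tell you whether the "fresh look" the programmer wanted costs the user something.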

Sarmila commented on the Challenge page about disassembling the executable. Although by learning all the assembly one could learn the rules that move the fishes around, it would be extremely difficult and overkill. But this made me think about how Perl2Exe works… Maybe it just wraps the Perl interpreter and the Perl script together? If so, what if the script is stored internally in the clear? I tried to open the app in a hex editor, but no, the Perl script isn't in clear text there; it is obfuscated.
  • I also tried the useful Strings tool by SysInternals. My intention was to see if, maybe, the array of fishes could be seen at the app code, in order to learn the initial order of fishes. It couldn’t.
  • That ended my cheating session :). From here on I used only a black-box functional approach.

So, Shmuel came to the conclusion that the code might be obfuscated. Well, it's not. Watch out for William Fisher's approach! However, Shmuel did try to cheat. When the programmers say, "Don't test this part!", that doesn't mean you don't do any tests there. A little bit of cheating could reveal bugs that remain even after the feature is moved to a testing-ready state. Maybe you'll learn something about the software, or maybe about how the programmer stages his or her work. That insight is helpful, too. So, did Shmuel crack it without cheating?

I don't know, why not read further?

By now, I knew that Nemo changed places between plays.
  1. Nemo also changes places between invalid parameters (which may or may not be a bug).
  2. Moreover, the other fishes change places too; even their placement relative to Nemo's changes.
  3. So I tried to see if Nemo would return to the same place in any consistent way or number of times. Running that is easy: you just enter "3", "3", "3"… and count where Nemo was found and where he wasn't.
    • I discovered there is regularity (for "3", it's every 2, 3, 17, 5, 10, 5…), but the regularity was irregular enough, and it also changed for different positions. This was likely a consequence of the real logic, rather than the logic itself. It correlates with the movement of the fishes, but does not cause it.
Shmuel starts with simple tests, and that's very important. I don't know why our brains move towards complicating things rather than simplifying them. It looks like "simple thinking" is a skill, and a must for a tester. The puzzles we were unable to solve were usually the ones we complicated so much that we put a puzzle 2 on top of puzzle 1 and tried to solve puzzle 1 through solving puzzle 2. Solve it directly!

Another detail that helped me in the game was knowing the developer and the purpose of the challenge. This puts a lot of content in the context. Pradeep is rather playful :) and would put some tricks into it.
  1. First I tried to see if Nemo could sometimes be in any placement bigger than 7 (there are 7 fishes, places 1 to 7, but there were no rules about where the fishes were limited to be).
  2. I tried a run of up to 220 attempts, and Nemo wasn't in place 8 even once. So I put this idea on standby for now.
  3. Then I tried… what if the “Minimum attempts” input at the beginning of the app affected where Nemo appears? 




    • 10 is the default for the "Minimum Attempts". Nemo is at place 1 at the beginning there.
    • 11 attempts chosen: Nemo is at place 1 at the beginning too. Maybe it does not affect?
    • 12 attempts chosen: Nemo is at place 4! Bingo!
    • 13, 14, 15 and 16 showed that the original places were cycling at intervals of 3. That means that when trying to find Nemo with the default 10 attempts, it will be in the same placements as with 10, 13, 16, 19, 22, 25, 28, 31, 34, 37, 40, 43, 46, 49, 52…
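The arithmetic behind that 10, 13, 16, … pattern can be sketched in a few lines. This is a Python illustration of the modular pattern Shmuel reports, not the app's actual code, and the function name is mine:

```python
def same_start_as_default(min_attempts: int, default: int = 10, period: int = 3) -> bool:
    """If starting placements cycle every `period` values of the
    minimum-attempts input, these values share the default's layout."""
    return (min_attempts - default) % period == 0

# 10, 13, 16, 19, ... land on the same starting placement as the default
shared = [n for n in range(10, 30) if same_start_as_default(n)]
```

A model this small is already actionable: any run where a value in `shared` shows a different starting placement would falsify the conjecture.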
Having understood a bit about me, Shmuel made a conjecture that I could be playful and pull some tricks to fool you all. That's like trying to understand the psychology of the programmer and modeling tests around it. 

Did you notice the OFAT heuristic? If you don't know OFAT, no problem: it's One Factor At a Time. Now, I don't need to explain MFAT, right? Shmuel used the OFAT heuristic (although he might call it by a different name) to narrow down and learn about the behavior. It worked well for him. He found something valuable that, it appears, helped him proceed towards solving the puzzle.

You're seeing that "220 attempts" above and wondering whether I really entered each number manually.
  1. By this time, I decided that I was in need of some automation to insert inputs to the application.
  2. The original app had no apparent automation capabilities, so I decided to use the internal input redirection of the Command line: 




    • By using the "<" operator, I could redirect an input file "input.txt" to the app using "fnemo.exe < input.txt".
    • I extended that to "fnemo.exe < input.txt > output.txt", which gives us a complete dump of results that can be searched with Notepad.
  3. We had easy automation now, and this can help our sapient testing. I was exploring, and using automation to try different things helped me explore the possibilities and get a bigger picture of the situation when needed.
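The input-redirection trick itself can be scripted. Here is a minimal Python sketch (the file names and helper function are illustrative, not part of the challenge) that builds an input.txt of 220 guesses of "3" to pipe into the app:

```python
import itertools

def make_input_file(path, guesses, min_attempts="10"):
    """Write the minimum-attempts answer, then one guess per line,
    suitable for: fnemo.exe < input.txt > output.txt"""
    with open(path, "w") as f:
        f.write(min_attempts + "\n")
        for g in guesses:
            f.write(str(g) + "\n")

# 220 identical guesses of "3", as in the trial above
make_input_file("input.txt", itertools.repeat(3, 220))
```

The resulting output.txt can then be searched (in Notepad, or with grep) for "Found Nemo this time!" to count the hits.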
Look at that powerful stuff. A tester shouldn't be classified as a manual-only tester or an automation-only tester, but the industry designations suck. Shmuel provided evidence that it is the context that decides which approach is used. Well, "manual" isn't a good word, so I agree with "sapient tester", as James Bach coined it. Testing is related to the human brain, not the hands that type the input. None of the good testers I come across separate manual and automation testing. They see them as approaches that complement each other.

However, let's see what Shmuel did after that.

Do you still want to continue reading? 

  • With the knowledge and tool I had so far, I did a lot of trials in order to find where Nemo is at each iteration. This could give me a hint if he moved in any regular or consistent manner.
  • So I made a big file of winning moves (a lot of trial and error went in it).
  • It was now time to model the results:
  • First attempts at modeling the results:
  • I got this: Nemo’s position: 1, 3, 5, 3, 2, 7, 3, 4, 7, 4, 4, 1, 5, 5, 2, 5, 6… 




    • Nemo was certainly jumping around without any clear regularity; even when I ran twice as many trials there was no repetition.



  • On the second modeling attempt, I tried to see if a more visual model could help: a table with columns numbered 1 to 7 and an X marking Nemo's position at each run.
  • Even in a table with many more results, it made no sense.







  • "No answer is also a useful answer". The first time I said that was in one of Elizabeth Hendrickson's exercise that I volunteered to guinea pig. Shmuel did fine. He looked for a pattern in the output but didn't get it and that was a clue for him that there appears to be no pattern. I wish he had done more tests on it but then sometimes critical thinking is a trap.

    I didn't want someone to find the pattern of where Nemo is, that easily. I set the trap.

    I changed my approach. Looking only at Nemo was not giving enough insight. Plus, I knew the other fishes moved along, so there might be a hint in their placement.
    • I got this table (one row per run, places 1 to 7 left to right):

      Nemo        Gill        Bubbles     SharkTooth  Flow        Phamplet    Stinger
      Stinger     SharkTooth  Nemo        Phamplet    Gill        Bubbles     Flow
      SharkTooth  Flow        Phamplet    Stinger     Nemo        Gill        Bubbles
      Stinger     SharkTooth  Nemo        Phamplet    Gill        Bubbles     Flow
      Stinger     Nemo        Gill        Bubbles     SharkTooth  Flow        Phamplet
      Phamplet    Gill        Bubbles     Flow        Stinger     SharkTooth  Nemo
      Phamplet    Stinger     Nemo        Gill        Bubbles     SharkTooth  Flow
      Flow        Stinger     SharkTooth  Nemo        Phamplet    Gill        Bubbles
      Gill        Bubbles     SharkTooth  Flow        Phamplet    Stinger     Nemo
      Flow        Stinger     SharkTooth  Nemo        Phamplet    Gill        Bubbles
      Flow        Phamplet    Stinger     Nemo        Gill        Bubbles     SharkTooth
      Nemo        Phamplet    Gill        Bubbles     Flow        Stinger     SharkTooth
      SharkTooth  Flow        Phamplet    Stinger     Nemo        Gill        Bubbles
    • That didn't look good. Nemo moved spuriously, and the other fishes either jumped aimlessly or stayed in place. Nothing made much sense.
    Shmuel, while he reads this, might be surprised to know that he didn't have to build this table. The code builds the table; I had enabled logging. Shmuel probably didn't pay attention to his C:\ folder :)

    I must admit that version 1.4 didn't have the logging, but Michael Bolton (yes, the singer and tester) did a rapid test and suggested this feature. Maybe Shmuel did extract this from the log file, but his report doesn't suggest he did.

    I still thought that looking at the other fishes would bring the breakthrough needed.
    1. In order to learn the movement of the fishes, I made a table of their placement in relation to Nemo at every run. 




      1. This is what I got (fishes in relation to Nemo, one row per run):

         Nemo  Gill      Bubbles  SharkTooth  Flow  Phamplet  Stinger
         Nemo  Phamplet  Gill     Bubbles     Flow  Stinger   SharkTooth
         Nemo  Gill      Bubbles  SharkTooth  Flow  Phamplet  Stinger
         Nemo  Phamplet  Gill     Bubbles     Flow  Stinger   SharkTooth
         Nemo  Gill      Bubbles  SharkTooth  Flow  Phamplet  Stinger
         Nemo  Phamplet  Gill     Bubbles     Flow  Stinger   SharkTooth
         Nemo  Gill      Bubbles  SharkTooth  Flow  Phamplet  Stinger
      • See anything interesting? The rows repeat themselves alternately! So the fishes were not really moving around; they were following Nemo!
      • My take is that Pradeep has two different arrays: one for odd rows, the other for even rows.
      • This may not be true, but it doesn't matter to us. When testing, we don't always know exactly what the programmer wrote in the code, but we infer a mental model. If it suits our needs, it is a good model even when it is not the real thing.
      • Many times I think these fake mental models are even better than the real thing. They are the best way to act as an intuitive oracle when analyzing an application.
    "Boy, you got closer". That was my expression when I read the above. Good going pal!

    So, if you are curious to see what he did after that, you must visit his blog. I am not going to reveal it further, but not to leave you hanging: he did crack it.

    Kudos, Shmuel! I hope other testers congratulate you on this effort. It was very inspiring. I wish I could say "You could have done this and that", but it doesn't make sense to say that, especially after your brilliant effort; plus, you did crack it! Your Excel sheet shows the way!

    Now to William Fisher!

    Whitebox Testing Approach

    Aren't you curious to see the whitebox approach in action?

    William Fisher...
    Key:
    ! = Bug
    * = Action taken
    ? = Question to investigate
    @ = Assumption
    Thoughts before executing:
    ? Size of Array
    Unsure if the size of the array
    @ At minimum, it must contain 7 elements.
    ? "All fish change positions"
    @ Do all fish REALLY change position?
    Wonder if their position oscillate between a set number of actual configurations
    Are the ordinal positions simply incremented? decremented?
    Randomization?
    Seeded?
    ? Are their initial positions consistent between sessions?
    ! I understand the rules, but how do I USE this?
    ? Assuming that I'm to enter a number representing an ordinal position
    zero-based?
     
     
    Notice that he has a different style of reporting than Shmuel, yet it is equally useful and understandable. Finally, what matters is: is the information provided in a useful and understandable manner to the target audience? If you meet that through any kind of reporting style, it's a job well done. William sets the context for reading his report with a key; when I ask myself whether I do that, I don't. However, seeing another tester do something I don't is not, by itself, a reason I should adopt it. Knowing different ways helps when I have to switch styles from one kind of audience to another.
     
    If I have held you till this moment, it is a big achievement, not just mine but Shmuel's & William's, too.

    Test 1: 
     
     
    Track the fish at ordinal position 1 across successive attempts
    > Minimum attempts is set to 10 but you may increase it: 0
    > 10 attempts left
    ! I entered a number and received the message "10 attempts left". Was I only supposed to press enter?
    ---
    > Guess the position of Nemo :0
    > You probably didn't give any any input or your input was invalid;please provide a valid input
    > Guess the position of Nemo :"
    ! Did that count as an attempt? How many attempts do I have left?
    @ If the language uses zero-based, potential exists for off-by-one error if input is not decremented properly
    ---
     
     
    I told you. I did tell you, it's hard for a tester to avoid finding a bug :). You'll also notice he is asking questions that are different from the ones Shmuel asked. He might also have suspected that making an invalid entry, or just hitting Enter, might cause Nemo to move. But did it happen? 
     
    > Guess the position of Nemo :1
    > Ah! bad luck, didnt find Nemo this time!
    >
> 9 attempt(s) left
! There is an extra carriage return (prior to '9 attempt(s) left'). Should that be there?
    ! Attempts are decrementing from 10. Opening screen says I can increase it. How do I increase the number of attempts?
@ Different code is used to display the max number of attempts than is used to display the remaining number of attempts
@ The use of '(s)' in 'attempt(s)' indicates that the code does not change its output based on remaining attempts.
@ Refactoring possibility: use one function to display the remaining number of attempts. This will accomplish the same thing
    ---
     
      
So, as tests are performed, William is constructing the code in his mind. That is definitely a useful approach, at least to me. I always try to imagine what the code behind a GUI looks like whenever I am not exposed to it. Modeling the code, no matter how wrong the model turns out to be, is useful: by modeling the code behind the GUI, I get test ideas different from the ones I had before. If there isn't a fancy name for it, don't worry. It doesn't matter.
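To make that concrete, here is a minimal sketch of the kind of mental model a tester might build of the game. The real program is a Perl executable whose source we never see; everything below, including the function name `play_nemo` and its parameters, is my own guess reconstructed from the observed output, written in Python for illustration. It also shows the exact off-by-one hazard William flags: the player types a 1-based position, which must be decremented before comparing against a zero-based index.

```python
import random

def play_nemo(attempts=10, size=5, guesses=()):
    """Hypothetical model of the game; all details are guessed
    from the program's observed output, not taken from its code."""
    nemo = random.randrange(size)        # Nemo's hidden zero-based index
    for guess in guesses:
        try:
            pos = int(guess) - 1         # player types 1-based; convert to 0-based
        except ValueError:
            print("please provide a valid input")
            continue                     # open question: does this cost an attempt?
        attempts -= 1
        # Off-by-one hazard: comparing the raw 1-based input against the
        # zero-based index would miss Nemo by one position every time.
        if pos == nemo:
            print("Found Nemo!")
            return True
        print("Ah! bad luck, didnt find Nemo this time!")
        print(f"{attempts} attempt(s) left")  # one function for the count
        if attempts == 0:
            break
    return False
```

Guessing every position in turn must find Nemo somewhere, while invalid input never does, which is the sort of property a model like this lets you check before touching the real program.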
     
     
    
    
    
    > Guess the position of Nemo :1
    > Ah! bad luck, didnt find Nemo this time!
    >
    > 8 attempt(s) left
    End session:
    Found Nemo with 1 attempt left
    Having found Nemo, I've been given the opportunity to guess one more time.
    It closes after 5 seconds; cannot mark the entire session.
    ? Does this program generate any artifacts?
    ? When an attempt is not accepted, do the array positions change anyway?
    ================
    Test 2:
Manually run the same inputs in a new session.
    Run session with FILEMON attached
    End session:
    No artifacts generated by the executable
    Did not find Nemo in this session
    ? Is Nemo's position time-based?
    ================
    
    
     
     
This is cool: suspecting that a specific thing is happening, basing tests on that suspicion, and not being biased when seeing the output. That looks like a critical thinking approach to me.
     
     
    
    Test 3:
Manually run the same inputs in a new session.
    Run session with TSEARCH attached
    In TSEARCH, HexDump shows some secrets:
    - print "\nHave you reverse engineered the logic? If you haven't go on!\n";
    . else. {print "Ah! bad luck, didnt find Nemo this time!\n";.
    End session:
    Need to dump ASCII Strings from exe file.
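Dumping ASCII strings from an executable, as William notes he needs to do, is what the Unix `strings` utility does, and the idea is simple enough to sketch in a few lines. The sketch below is my own minimal equivalent, not anything from William's report: it scans a byte blob for runs of printable ASCII of a minimum length, which is how the `print "..."` fragments above would surface from the packed Perl EXE.

```python
import re

def ascii_strings(data: bytes, min_len: int = 4):
    """Yield runs of printable ASCII at least min_len bytes long,
    the same idea as the Unix `strings` utility."""
    for match in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
        yield match.group().decode("ascii")

# Usage sketch: scan a fabricated blob standing in for the executable.
blob = b"\x00\x01print \"Ah! bad luck, didnt find Nemo this time!\"\x00\x7f\x02OK"
found = list(ascii_strings(blob))
```

Short fragments like the two-byte "OK" fall below the length threshold and are skipped, which is what keeps the output readable when scanning a real binary.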
    
     
You expect me to talk about tools now? Right. Tools are there to help testers. This is another example of sapient testing, I would say. Until we use a tool, we don't know whether it is going to be of much help. If you observe what William was doing, he was constantly thinking about the code and about which tool could help him. He tried Filemon and then TSEARCH. I didn't know about TSEARCH till William sent me the report. Wow! You have got to do something to get a tester to talk to you. That's how education happens, as a side effect.
Every piece of software offers us enough clues to help solve the problem. It isn't intentional design, I guess; I still like to think that way, although I might be wrong. I am not going to say this to my clients while testing for them, or we wouldn't have time to talk about anything else.
     
    And you'd want to know what happened after William got that clue? Watch out for the video.
     
    
    
If you were by any chance curious while reading this, I am too, for a different reason. I want to know what would happen when William Fisher discovers the things that Shmuel did, and vice versa. I bet they'd appreciate each other's efforts and approaches and would learn from each other.
     
Shmuel had almost concluded that there appeared to be no way to look into the array by opening the EXE in a hex editor, but William's experiments contradicted that.
    
    
     
Both of them almost missed the log file. William discovered its existence by looking at the code. Next time I should just pass on the .pl file instead of the EXE, I guess. All the beans spilled, eh?
    
    
     
     
    What's the deal with Bangladeshi Testers?
    
    
     
     
Ever since I discovered SQABD.com, I have wanted to go there and talk to those testers. I haven't got a chance yet, and I hope it doesn't remain a dream. Sajjadul is a tester whose thoughts are widely respected within and outside Bangladesh. He and his team took time out to discuss solving this puzzle, after experimenting with it for a while.
    
    
    
    
Watch the video: (Some parts are in Bengali, the rest in English)



    Want to get your hands on Finding Nemo 2? Watch out for it in March.