
Saturday, March 16, 2013

Test Management is more about testing than management.


This started as a Test Manager survey from Mike Lyles (https://t.co/LSvlWobcdW) and as I was answering question 45, I just kept on writing.

Any test manager needs to understand the three basic things any tester is, or should be, doing on any given day. For a couple of years now, I've been using what the Bach brothers refer to as the TBS metrics of Session Based Test Management fame. T stands for "Test Design and Execution" related activities, B stands for "Bug Investigation and Reporting", and S stands for "Session Setup". On any given day, the time spent across all three should add up to 100% of a tester's time for a particular session or day, depending on how you want to implement this.

Test design and execution means evaluating the product and looking for problems. Bug investigation and reporting is what happens once the tester stumbles into behavior that looks like it might be a problem. Session setup is anything else testers do that makes the first two tasks possible, including tasks such as configuring equipment, locating materials, reading manuals, or writing a session report.[1]
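The TBS split is easy to tally straight from session reports. Here is a minimal sketch in Python, assuming each report records minutes per category; the function name and data shape are my own illustration, not part of SBTM itself:

```python
def tbs_breakdown(sessions):
    """Aggregate T/B/S minutes across session reports and return
    each category's share as a percentage of total time."""
    totals = {"T": 0, "B": 0, "S": 0}
    for session in sessions:
        for category in totals:
            totals[category] += session.get(category, 0)
    total_minutes = sum(totals.values())
    if total_minutes == 0:
        return {category: 0.0 for category in totals}
    return {category: round(100 * minutes / total_minutes, 1)
            for category, minutes in totals.items()}

# Two days of hypothetical session reports, minutes per category:
sessions = [
    {"T": 90, "B": 20, "S": 10},   # mostly test design and execution
    {"T": 60, "B": 50, "S": 10},   # heavier bug investigation
]
print(tbs_breakdown(sessions))  # {'T': 62.5, 'B': 29.2, 'S': 8.3}
```

The percentages always sum to roughly 100, so a drifting category stands out at a glance, which is the whole point of the breakdown.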

Based on personal experience, TBS is a really good way to look at the work breakdown of any given tester or test group. You can see where the focus is from a management perspective. If a higher percentage of time is spent on Test Design and Execution, then you know that the testers are doing what they were hired for, which is ... ummm ... testing. If a majority of time is spent on Bug Investigation and Reporting, it could be that the product is not quite ready for release, or that the testers' focus is on regression duties due to the number of bugs and fixes that need validating. In this case, testing can't continue. If a majority of time is focused on Session Setup, your tester or team might be in too many meetings (which might be good or not), or is trying to solve a technology problem that your operations or support team can help with, for example setting up a LAMP stack for your web app so the tester can test in a local environment. Either way, testing has been limited.

There is no ideal percentage combination between the three, since this is just an indication of what your context has to deal with. Or it is a way for you to initially evaluate and come up with questions before you have a one-on-one with a team member. You can even look at trends for any given tester or project-based activities, but trust me when I tell you that all this will give you is a graph and nothing more. As a manager, I really like what Jon Bach mentioned in his STP Crew talk about delivering value with test metrics (paywall alert).

"Less BS, more T".

This is just a bird's-eye view, a guide for understanding what your team is up to. One-on-one debriefings or retrospectives will help, and are more effective at finding out what is really going on in your organization. If you really want to be more effective as a manager, get in the trenches. Try to be on par with your testers' knowledge of the product. As a manager, your mission is twofold: you serve your organization by making sure your testers are finding the necessary information about the quality of your software, and you serve your testers by making sure you enable them to do what they were hired for, which is to test. Bugs are just the bonus that comes from your team testing properly and intelligently.

The one main lesson that I can take with me from this is that Test Management is indeed more about testing than management.

Sources:

  1. SBTM by Jon Bach - http://people.mozilla.com/~nhirata/SBT/sbtm.pdf

Wednesday, February 1, 2012

stigma of the test manager -- part one

Last year, there was this mania about the death of testing. I don't know when it started, or who started it, but that doesn't matter now. Looking from the outside, I watched some of the staunchest testers I know nearly lose faith because of seemingly futile efforts at self-introspection that never produced answers that could defend the value a tester brings back to the company. Scott Barber (@sbarber) sums up the problem plaguing the software testing industry pretty well in this post.
"The under-informed leading the under-trained to do the irrelevant."
Sadly, this statement is very true and defines the individual characteristics that exemplify what I think is the stigma of a test manager: Ignorance, Incompetence and Irrelevance. For the sake of this post, a test manager is defined as someone whom testers report to.

Ignorance is manifested when you either focus too much on your context and fail to look outside for better ways to improve your process, or the complete opposite, where you look too much at the shiny toys the outside has and end up detesting your context. Incompetence is demonstrated when the test manager doesn't provide feedback that can improve a skill or correct a bad habit of a direct report. Irrelevance is a logical result of the first two, but this trait is usually personified by a test manager's refusal to champion the test team itself. That person doesn't usually know what everyone is doing beyond what they see on the status reports.

The craft of testing is not dead or dying. Testing needs to be understood and re-evaluated. I do propose that the ignorance, incompetence and irrelevance of test managers need to die. I am not considering or suggesting that we murder them. I am simply saying that these traits need to die. Gory details in the next post.

Wednesday, October 26, 2011

are we done yet?

The testing projects I've been involved in lately have been "normal". Normal meaning confusing, untestable, and full of gotchas. As I was writing my experience report for one project, I suddenly remembered Adam Goucher's talk about how pirates did away with "finishing" a job.

Apparently, in the 1700s when pirates wanted to kill someone, they would hang the person in public -- dead -- then take the body and bury it up to its neck on the shore and wait for the high tide to come in and drown it -- dead, dead -- and finally take the hanged, drowned body and put it up for display on a stick for all to see -- dead, dead, dead. Very gruesome; I'm glad they don't do that anymore.

Just like those pirates' dead, dead, dead, agile has the concept of done, done, done, which simply means that a story/feature/project is ready for deployment to production. In my context, the ideal path should be:

  • The developer declares that his/her stuff is ready for testing, done.
  • The tester "completes" testing, done, done.
  • The stakeholder accepts the final product, done, done, done.

Coming from a development methodology that seems like the offspring of waterfail and fragile methodologies, the biggest challenge I see is involving everyone else in the team after the developer says, "Hey, I'm done." There are two more parts after that, and the last two parts that define the completeness of any given project are just as important as the first one.

Sunday, October 23, 2011

Let's Get Rid of the QA Team

one end of the spectrum

What if your boss walks in one day and tells your group, "We need to make some cutbacks, so we are getting rid of the testing team" -- no conversation, no explanation. What do you think will happen? Do you think your company's velocity will grind to zero? Will the development team beg your boss to bring the testing team back? Or will it be business as usual?

the other end

Let's say you have achieved everything your testing team has ever dreamed of. You have a crack team of exploratory testers who have a very good understanding of what can and should be automated. Everyone in the company recognizes the value your team provides. What now? Can we just sit back and rest on our laurels?

some thoughts

As a test manager, one of the biggest questions lingering at the back of my head is the value my team brings to the company. Sure, I try to mentor each and every member of my team and hold weekly 1-on-1s, even with my offshore members. I even went as far as getting everyone AST memberships so everyone can take the BBST course.

Is all of that enough? Is there still room for improvement? The answers I always give myself to those questions are NO, no amount of improvement or training is ever enough because one can never test everything, and YES, there is always room for improvement.

Testing vigilance is not a talent that everyone has; as a matter of fact, it is not a talent at all. Testing vigilance is something that you have to do. As a test manager, one of my primary roles is to promote the value my team can bring to the rest of the company. Another role is to make sure my team understands what is valuable to the company they are working for at any given point in time. For myself and the members of my test team, I expect everyone to have healthy discussions within the projects we are involved with and not just wait for something to fall into our laps.

I have been on that first end of the spectrum, and there was a lot of blame that went around. It is a place I would never want my team to be in, but that is something I cannot control. What I can control is setting up an environment where my team can thrive, be the best they can be, serve the rest of the team, and thus provide value.

In closing, when my team does get to the other end of the spectrum, I just need to remember that a great tester named James Bach once said, "Your team may be called 'Quality Assurance'. Don't let that go to your head. Your test results and bug reports provide information that facilitates the assurance of quality on the project, but that assurance results from the effort of the entire team."