Speaking to Management: Coverage Reporting

Test coverage is important. In this post, I will reflect on communication issues around test coverage.

The word coverage has a different meaning in testing than in daily language. In daily language, it refers to something that can be covered and hidden completely: if you hide under a cover, we usually can’t see you, and if you put a cover on something, the cover keeps things out.

Test coverage works more like a fishing net. Testing will catch bugs if used properly, but some (small) fish, water, plankton etc. will always pass through. Some nets have holes through which large fish can escape.

What’s so interesting about coverage?

When your manager asks you about test coverage, she probably does so because she seeks confidence that the software works sufficiently well to proceed to the next iteration or phase in the project.

Seeking confidence about something is a good project management principle. After all, being confident about something means you don’t need to worry about it. Not having to worry about something means that you don’t have to spend your time on it, and project managers always have a gazillion other things that need their attention.

The word is the bug

So if confidence comes out of test coverage, then why do managers so often misunderstand us when we talk about coverage?

Well, the word actually means something else in daily language than it does when we use it in testing. So the word causes a communication “bug” when it’s misunderstood or misused.

We need to fix that bug, but how? Should we teach project managers the ”right” meaning of the word? We could send them to a testing conference, ask them to take a testing course, or give them books to read.

That might work, but it wouldn’t solve the fundamental communication problem; it would just move it higher up in the organisational hierarchy.

An educated manager will have the same problem: she won’t be able to make her peers and managers understand what ”test coverage” means. After all, not everyone in the organisation can be a testing expert!

STOP mentioning coverage

A good rule of thumb in communication is: When your communication is likely to be misinterpreted, don’t communicate.

I, as a tester, know what test coverage means and, more importantly, what it does not mean, but I cannot expect others to understand it. Thus, if I use the word, I will probably be misunderstood. A simple solution is to stop using the word. So I won’t say sentences like: Our testing has covered some functionality.

What I can say is: We have carried out these tests, and this is what we found.

This will work well until someone asks you to relate your testing to the business-critical functionality: Ok, then tell me, how much of this important functionality do your tests cover?

Uh oh!

Stay in the Testing Arena – or be careful

American circuses have enormous tents and two, three or even four arenas with different acts happening at the same time. A project is always going on in different arenas as well: For example we might have a product owner arena, a development arena, a test arena, and a business implementation arena.

Some people play in several arenas: I think most testers have at some point in their career made the mistake of telling a developer how to code. Likewise, we can probably all agree that there’s nothing more annoying than a developer telling a tester how to test.

Confidence belongs in the product owner arena, not in testing. This is because testing is about qualifying and identifying business risks, and since confidence does not equal absence of risks, it’s very hard for us to talk about confidence. And coverage.

This doesn’t mean you can’t move to another arena.

You can indeed look at things from the product owner’s perspective; that’s perfectly ok! Just make sure you know that you are doing it and why you are doing it: You are leaving your testing arena to help your product owner make a decision. Use safe language when you do.

Talk facts and feelings

Confidence is fundamentally a feeling, not a measurable artefact. It’s something that you can develop, but it can also be communicated: Look confident, express confidence, talk about the good stuff, and people around you will start feeling confident.

Look unconfident, express worry, talk about problems, and people around you will start feeling worried.

We testers always develop feelings about the products we’re testing, and we can communicate these feelings.

I know two basic strategies in any type of test result communication:

  • Suggest a conclusion first, then tell them what you’ve done
  • Give them all the dirty details first, then help your manager conclude

Which communication strategy you pick should depend on the context, e.g. your relationship with the manager. If everything looks pretty much as expected (whether that’s good or bad), your manager trusts you, and you have good knowledge of the business risks, then I wouldn’t worry too much about serving the conclusion first and offering details later, mostly to make sure you and your manager don’t misunderstand each other. And so that nobody will later be able to claim that you kept silent about something.

But if something is way off, or your manager doesn’t trust you (or you don’t trust her), or people’s lives may be at stake, or you just have no idea what’s happening, then stick to the details – do not conclude. And that, I think, implies not using the term ”test coverage”.

Communicating models: A psychological perspective

Simon Morley posted a very interesting post about Challenges with communicating models about two weeks ago. Mental models are what we unconsciously use to understand a situation, and communicating models to others is an interesting challenge: “[…] models do not transmit themselves – they are not necessarily understood on their own – they need the necessary “synch’ing” to align both parties to achieve comprehension, communication and dialogue”, as Simon summed it up in a comment on the blog post.

Simon’s post and the very good discussion he and I had about it got me thinking about the psychological perspective and how important empathy is: The “synch’ing” relies on empathy.

I really liked Simon’s blog post because, above all, it highlights the subjectivity of mental models. Models are not something you can implement in an organisation just by e.g. e-mailing them to all employees. If you want someone to ‘get’ your model, you need to actively communicate it. Which is not possible without empathy.

Empathy is something that we associate with friendship and love, but it plays a part in all communication processes between human beings, including those we as testers engage in at work.

From time to time we come across people who seem to have a total lack of understanding of what we’re doing: colleagues, managers, customers. Most of the time, people who don’t understand need only a good explanation of the situation and our point of view. But sometimes an explanation isn’t enough: Some people just don’t seem to want to understand.

People under pressure or stress can be like that, and we often associate this with aggressive behaviour and rudeness. Or maybe we just see the rudeness, and only later realise that maybe there was a problem with understanding.

Empathy seems to exclude this behaviour. Empathy relies on a cognitive ‘feature’ of our brain which attempts to copy the thoughts and feelings of other people: It tries to decode what those you interact with are thinking based on verbal as well as unconscious cues, e.g. body language. It’s quite obvious that having ‘a notion’ of what someone else thinks and feels can make communication much more successful – if you feel sympathy for the other person’s feelings and thoughts.

This can work both ways: Loss of empathy in a situation can mean that you think everybody else thinks and feels the same as you, and it can cause quite a lot of confusion and frustration when you realise that others aren’t thinking the same as you.

It can happen to all of us: The brain is not a perfect and consistently operating machine, but rather a very adaptable and flexible organ. For example, in situations of crisis, empathy is one of the first things to go. A person in a crisis shifts from being a social creature to being goal-oriented and focused, typically on survival – at any cost.

There are people who don’t want to understand, e.g. due to politics. But there are some people who involuntarily just aren’t able to get to ”the other side” of the argument, for example because they’re having ”a bad day”.

(Some people with autism and ADHD can be characterised by having problems with empathy. This is a quite severe handicap for them, since not only do they have problems decoding what other people think or feel, they can also have problems separating their own thoughts and feelings from what other people are thinking and feeling. The sad situation for empathy-impaired people is that they often don’t have a choice: Even when everything is good, it is extremely difficult for them to decode other people’s thoughts, feelings and intentions – and therefore extremely difficult for them to communicate and interact successfully. Noticing how successfully others interact often just makes them feel plain stupid. This can lead to severe depression.)

The Communicative Power of Counting

Michael posted the following two comments on Twitter shortly after I published this post:

There’s nothing wrong with using numbers to add colour or warrant to a story. Problems start when numbers *become* the story.

Just as the map is not the territory, the numbers are not the story. I don’t think we are in opposition there.

I agree, we’re not in opposition. Consider this post an elaboration of a different perspective – inspired by Michael’s tweets.

Michael Bolton posted some thought-provoking tweets over the last few days:

Trying to measure quality into a product is like measuring height into a basket ball player.

Counting yesterday’s passing test cases is as relevant to the project as counting yesterday’s good weather is to the picnic

Counting test cases is like counting stories in today’s newspaper: the number tells you *nothing* you need to know.

Michael is a Tester with a capital T and he is correct. But in this blog post, I’ll be in opposition to Michael. Not to prove that he’s wrong, not out of disrespect, but to make the point that while counting will not make us happy (or good testers), it can be a useful activity.

Numbers illustrate things about reality. They can also illustrate something about the state of a project.

A number can be a very bold statement with a lot of impact. The following (made-up) statement illustrates this: The test team executed 50 test cases and reported 40 defects. The defect reporting trend did not lower over time. We estimate there’s an 80% probability that there are still unfound critical defects in the system.

80%? Where did that come from? And what are critical bugs?

Actually, the exact number is not that important. Probabilities are often not correct at all, but people have learnt to relate the word “probability” to a certain meaning telling us something about a possible future (200 years ago it had a static meaning, by the way, but that’s another story).

But that’s okay: If this statement represents my gut feeling as a tester, then it’s my obligation to communicate it to my manager so he can use it to make an informed decision about whether it’s safe to release the product to production now.

After all, my manager depends on me as a tester to inform these decisions. If he disagrees with me and says ”oh, but only a few of the defects you found are really critical”, then that’s fine with me – he may have a much better view of what’s important with this product than I have as a test consultant – and in any case, he’s taking the responsibility. And if he rejects the statement, we can go through the testing and the issues we found together. I’ll be happy to do so. But often managers are too busy to do that.

Communicating test results in detail is usually easy, but assisting a project manager making a quality assessment is really difficult. The fundamental problem is that as testers, by the time we’ve finished our testing, we have only turned known unknowns into known knowns. The yet unknown unknowns are still left for future discovery.

Test leadership is to a large extent about leading testers into the unknown, mapping it as we go along, discovering as much of it as possible. Testers find previously unknown knowledge. A talented ”information digger” can also contribute by turning ”forgotten unknowns” into known unknowns. (I’ll get around to defining ”forgotten unknowns” in a forthcoming blog entry; for now you’ll have to believe that it’s something real.)

Counting won’t help much there. In fact, it could lead us off the discovery path and into a state of false comfort, which will lead to missed discoveries.

But when I have an important message which I need to communicate rapidly, I count!
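As a sketch of the kind of quick count I mean: the trend claim in the made-up statement above (“defect reporting trend did not lower over time”) could be backed by a simple comparison of defect yields across the test period. The daily counts and the helper function below are hypothetical, invented for illustration, not from any real project:

```python
# Hypothetical sketch: did the defect discovery rate taper off?
# The daily defect counts are made up, not from a real project.

def trend_is_declining(daily_defects):
    """Compare the average defect yield of the first and second
    halves of the test period; a lower second half suggests the
    discovery rate is tapering off."""
    half = len(daily_defects) // 2
    first, second = daily_defects[:half], daily_defects[half:]
    avg = lambda xs: sum(xs) / len(xs)
    return avg(second) < avg(first)

# 40 defects over 8 days, with no taper towards the end:
counts = [6, 4, 5, 5, 6, 4, 5, 5]
print(trend_is_declining(counts))  # False: the trend did not lower
```

A crude count like this is, of course, exactly what Michael warns against if the number *becomes* the story; here it only adds a little warrant to the message I want to communicate rapidly.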

Finding the perfects

Friend and tester colleague Jesper Ottosen participated in what appeared to be a great event and discussion at EuroStar 2010: the Rebel Alliance night (link to Shmuel Gershon’s blog with video recordings of the talks), where he spoke about whether we as testers can start looking for more than defects. What if we started looking for the perfects?

I like the idea: Is testing really only about finding problems? It can be depressing to be the one who always tells the bad news (especially when there is a lot of bad news, or the bad news is not really welcome). Do we testers really have to be worried all the time? If we start communicating perfects too, won’t our careers get both better and more successful?

I see a problem, though. Looking for good things conflicts with the very mindset of testing. Programming is a creative process in which the programmer creates something new and unique. He does it to solve a problem, and he does it on the assumption that it will solve the problem. If he starts out assuming that it won’t work, he will be psychologically blocking his creativity and will probably not perform well.

As a tester, I look at software with the reverse assumption: I assume that it will not work. This assumption stimulates my creativity to find the bugs, because I get ideas about where they’re hiding.

With that assumption, I just can’t be successful looking for good things!

That said, however, I do believe that we sometimes need to be positive, especially to satisfy some managers and programmers. They’re used to hearing bad news from us, and some people can’t take that. Switching for a moment to looking for “perfects” might actually work very well in this respect. Just don’t forget that we’re doing it for them, not to do our job.

And don’t forget that it can only be for a while: We have to think negatively to be successful. We make a difference when we find the obvious problems with the product: the problems that will cause severe dissatisfaction among users and managers if they slip into production. We’re a great help to our clients because we prevent bugs by finding them before the users do!

Here’s Jesper at EuroStar 2010: