January 24th: Meetup on Risk Based Testing Strategy

On Tuesday January 24th, we will be hosting a meetup on Risk Based Testing Strategy under the Ministry of Testing Copenhagen Meetup group in Herlev.

Make sure you register as a member of the Ministry of Testing Copenhagen meetup group to be notified when new meetups are announced.

Also, don’t miss the Ministry of Testing home page to learn about other meetups, TestBash, news, and lots of useful testing resources.

Why you should do your job better than you have to

Software testers evaluate quality in order to help others make decisions to improve quality. But it is not up to us to assure quality.

Projects need a culture in which people care for quality and worry about risk, i.e. threats to quality.

Astronaut and first man on the moon Neil Armstrong talked about the reliability of components in the spacecraft in the same interview I quoted from in my last post:

Each of the components of our hardware were designed to certain reliability specifications, and far the majority, to my recollection, had a reliability requirement of 0.99996, which means that you have four failures in 100,000 operations. I’ve been told that if every component met its reliability specifications precisely, that a typical Apollo flight would have about 1,000 separate identifiable failures. In fact, we had more like 150 failures per flight, substantially better than statistical methods would tell you that you might have.

Neil Armstrong not only made it to the moon, he even made it back to Earth. The whole Apollo programme had to deal very carefully with the chance that things would not work as intended in order to make that happen.

In hardware design, avoiding failure depends on hardware not failing. To manage the risk of failure, engineers work with reliability requirements, e.g. in the form of a required MTBF – mean time between failures – for individual components. Components are tested to estimate their reliability in the real system, and a key part of reliability management is then to painstakingly combine all the estimated reliability figures to get an indication of the reliability of the whole system: in this case a rocket and spacecraft designed to fly men to the moon and bring them safely back.
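To make the arithmetic concrete, here is a minimal sketch of how per-component reliability figures can be combined into a system-level estimate. The numbers are illustrative assumptions, not figures from the Apollo programme; the hypothetical operation count is simply chosen so the expected failure count lands near the roughly 1,000 failures Armstrong mentions.

```python
# A minimal sketch (not from the original post) of how per-operation reliability
# figures combine into a system-level estimate, loosely based on the numbers
# Armstrong quotes: a reliability requirement of 0.99996 per operation.

component_reliability = 0.99996                    # probability one operation succeeds
failure_probability = 1 - component_reliability    # 4 failures per 100,000 operations

# Hypothetical number of component operations per flight (an assumption, chosen so
# the expected failure count lands near the ~1,000 failures Armstrong mentions):
operations_per_flight = 25_000_000

# Expected number of identifiable failures, assuming independent operations:
expected_failures = operations_per_flight * failure_probability
print(f"Expected failures per flight: {expected_failures:,.0f}")

# Probability that every single operation succeeds -- effectively zero, which is
# why the programme had to plan for failures rather than hope to avoid them:
all_good = component_reliability ** operations_per_flight
print(f"Probability of a completely failure-free flight: {all_good:.2e}")
```

The point of the sketch is simply that even very high per-component reliability still leaves a system with many expected failures, so the estimate is a planning tool, not a guarantee.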

But no matter how carefully the calculations and estimations are done, the result will always be just an estimate. There will be surprises.

The Apollo programme turned out to perform better than expected. Why?

When you build a system, whether it’s an IT system or a spacecraft, how do you ensure that things work as intended? Following good engineering practices is always a good idea, but relying on them is not enough. It takes more.

Armstrong goes on in the interview (emphasis mine):

I can only attribute that to the fact that every guy in the project, every guy at the bench building something, every assembler, every inspector, every guy that’s setting up the tests, cranking the torque wrench, and so on, is saying, man or woman, “If anything goes wrong here, it’s not going to be my fault, because my part is going to be better than I have to make it.” And when you have hundreds of thousands of people all doing their job a little better than they have to, you get an improvement in performance. And that’s the only reason we could have pulled this whole thing off.

Do it right: A value in Context Driven Testing

The problem with me is that I’m really bad at following instructions. When people tell me to do something in a certain way, I do it differently.

It’s a problem when I cook, because I’m not particularly good at cooking. So I have to follow recipes, and I often mess them up slightly. I’m improving and learning strategies to help me remember, but this is a fundamental personality trait for me.

And not one I’m sorry about. Because it’s not a problem when I test!

I always wanted to be a great tester.

I tend to become really annoyed with myself when a bug turns up in production in something I have tested. ”Why did I miss that?!” I feel guilty. You probably recognise the feeling.

The feeling of guilt is okay. The fact that we can feel guilt shows that we have a conscience and empathy, and that we do the best we can. People who don’t care don’t feel guilt.

But in testing, finding every bug is fundamentally impossible, so we have to get over it and keep testing. Keep exploring!

Even before I learnt about Context Driven Testing, I knew that great testing could never be about following test scripts and instructions. I noticed that I got bored and lost attention when I executed the same test scripts over and over again, but I also noticed that I missed more bugs when I only followed the instructions.

So I stopped following the instructions. This gave me an explanation problem, however: “Uhh, well, I didn’t do quite what I was asked to do… but hey, you know, I found these interesting bugs that I can show you!”

Can you hear it? That won’t impress old-school project managers with spreadsheets to check and plans to follow.

Context Driven Testing has helped me greatly in becoming a better tester. The thing is that CDT teaches me to do a great testing job without instructing me exactly what to do. Instead, the community shares a lot of heuristics I can use, and through thinking-training and inspiration from others, it helps me develop my capacity to do great testing in whatever context I’m working in.

It may be a bit difficult to grasp at first. A little worrying, perhaps. But CDT is a really, really powerful testing approach.

And CDT has helped me explain what I do to my project managers. Even the old-school types!

Social services and testing

A few days ago, I read an article about quality in social services in which the following statement from the vice president of the Danish social workers’ union caught my attention: ”It’s about doing it right, not about doing the right things.” He was speaking about how the authorities try to help vulnerable children and their families.

The statement resonated with me, and a bit later it occurred to me that it even sums up what CDT is about:

Context Driven Testing is about doing it right, not about doing all the right things.

Note that I’ve added the word ‘all’ here.

There’s more to CDT of course, but this is the core of CDT – to me. Some readers may raise their eyebrows at the ”doing it right” part: ”Does he mean that there is ONE right way to do it?” Read on…

For the past 10 years, I’ve worked in contexts where CDT is not really the thing we do, and if I had been only as context driven as the managers in my context asked me to be, I would not really have been context driven. You get the picture, I’m sure.

But with CDT in my briefcase, I can work to make changes and improve things in the specific context in which I’m working. As a consultant and contractor, I’m usually hired to fix a problem, not to revolutionize things, and I’m expected to do a “good job” fixing it.

”Doing it right” is of course about doing a good job, and ”doing a good job” should really not be taken lightly. CDT helps me do a good job, even when I’m not working in contexts that actively support CDT. That’s because CDT is flexible: It emphasizes that testing should never be driven by manuals, standards or instructions, but by the actual context in which it takes place, and that’s actually quite difficult to disagree with, even for old-school project managers!

Further, if my context (i.e. my project manager) asks me to do something, I do it – even if it’s in a standard. Sometimes there’s a good reason to use a standard.

Context Driven Testing is not defined by any methods or even by a certain set of heuristics. Nor is it defined by a book, standard or manual. Neither is there any formal education, certification or ”club” that you have to be a member of in order to call yourself Context Driven.

Instead, Context Driven Testing is about committing oneself to some core values, and to me the most important value in CDT is contained in the sentence:

It’s about doing it right, not about doing all the right things.

Why would anyone want anything else?

(Thanks to @jlottossen for reviewing this post and for suggesting several changes to the original version.)

Speaking to Management: Coverage Reporting

Test coverage is important. In this post, I will reflect on communication issues around test coverage.

The word coverage has a different meaning in testing than in daily language. In daily language, it refers to something that can be covered and hidden completely: if you hide under a cover, it usually means that we can’t see you, and if you put a cover on something, the cover will keep things out.

Test coverage works more like a fishing net. Testing will catch bugs if used properly, but some (small) fish, water, plankton etc. will always pass through. Some nets have holes through which large fish can escape.

What’s so interesting about coverage?

When your manager asks you about test coverage, she probably does so because she seeks confidence that the software works sufficiently well to proceed to the next iteration or phase in the project.

Seeking confidence about something is a good project management principle. After all, if you’re confident about something, it’s because you don’t need to worry about it. Not having to worry about something means that you don’t have to spend your time on it, and project managers always have a gazillion other things that need their attention.

The word is the bug

So if confidence comes out of test coverage, then why is it that managers often misunderstand us when we talk about coverage?

Well, the word actually means something else in daily language than it does when we use it in testing. So the word causes a communication “bug” when it’s misunderstood or misused.

We need to fix that bug, but how? Should we teach project managers the ”right” meaning of the word? We could send them to a testing conference, ask them to take a testing course, or give them books to read.

That might work, but it wouldn’t solve the fundamental communication problem. It would just move higher up in the organisational hierarchy.

An educated manager will have the same problem, not being able to make her peers and managers understand what ”test coverage” means. After all, not everyone in the organisation can be testing experts!

STOP mentioning coverage

A good rule of thumb in communication is: When your communication is likely to be misinterpreted, don’t communicate.

As a tester, I know what test coverage means and, more importantly, what it does not mean, but I cannot expect others to understand it. Thus, if I use the word, I will probably be misunderstood. A simple solution is to stop using the word. So I won’t say sentences like: Our testing has covered some functionality.

What I can say is: We have carried out these tests, and this is what we found.

This will work well until someone asks you to relate your testing to the business-critical functionality: OK, then tell me, how much of this important functionality do your tests cover?

Uh oh!

Stay in the Testing Arena – or be careful

American circuses have enormous tents and two, three or even four arenas with different acts happening at the same time. A project is always going on in different arenas as well: For example we might have a product owner arena, a development arena, a test arena, and a business implementation arena.

Some people play in several arenas: I think most testers have at some point in their career made the mistake of telling a developer how to code. Likewise, we can probably all agree that there’s nothing more annoying than a developer telling a tester how to test.

Confidence belongs in the product owner arena, not in testing. This is because testing is about qualifying and identifying business risks, and since confidence does not equal absence of risks, it’s very hard for us to talk about confidence. And coverage.

This doesn’t mean you can’t move to another arena.

You can indeed look at things from the product owner’s perspective – that’s perfectly ok! Just make sure you know that you are doing it and why you are doing it: You are leaving your testing arena to help your product owner make a decision. Use safe language when you do.

Talk facts and feelings

Confidence is fundamentally a feeling, not a measurable artefact. It’s something that you can develop, but it can also be communicated: Look confident, express confidence, talk about the good stuff, and people around you will start feeling confident.

Look unconfident, express worry, talk about problems, and people around you will start feeling worried.

Testers always develop feelings about the product we’re testing, and we can communicate these feelings.

I know two basic strategies for any type of test result communication:

  • Suggest a conclusion first, then tell them what you’ve done
  • Give them all the dirty details first, then help your manager conclude

Which communication strategy you pick should depend on the context, e.g. your relationship with the manager. If everything looks pretty much as expected (whether that’s good or bad), your manager trusts you, and you have good knowledge of the business risks, then I wouldn’t worry too much about serving the conclusion first and offering details later, mostly to make sure you and your manager don’t misunderstand each other – and that nobody will later be able to claim that you kept silent about something.

But if something is way off, your manager doesn’t trust you (or you don’t trust her), people’s lives may be at stake, or you just have no idea what’s happening, then stick to the details – do not conclude. And that, I think, implies not using the term ”test coverage”.

A Sustainable Mission for Context Driven Testing?

This image changed the world. It was taken from Apollo 8 in 1968 and shows the blue Earth rising over the grey and deserted Moon. Our world seems fragile. “The vast loneliness is awe-inspiring and it makes you realize just what you have back there on Earth,” Command Module Pilot Jim Lovell said. Image credit: NASA.

I have lately become worried about certain developments in society.

For years, scientists, politicians and others have warned us that we’re responsible for irreversible changes to our planet: climate change, most notably. They’re telling us we need to switch to sustainable energy sources.

Sustainability is about more than energy, and I’m worried that amid the societal changes imposed upon us by the combined effects of globalization and the need for serious resource conservation, we are at the same time becoming increasingly indifferent to the lives of certain groups of people. I remember how many used to develop deep feelings of indignation when pictures of hungry or poor children were shown on TV. That has changed, and such pictures don’t have much effect any more. And worse: We generally don’t even care about poverty close to us.

I feel this may be linked to a macroeconomic pattern we’re seeing almost everywhere in the world: The rich are getting richer, but the poor are still as poor as they used to be. In Southern Europe, there is enormous unemployment among young people. Economists are warning that we are about to lose a whole generation.

Does this affect testers too? After all, we’re safe, working in IT, the technology of the future – aren’t we?

Well, inequalities in income and living conditions are growing on our planet, and this is worrying, since inequality has historically been a trigger of wars and revolutions, and has always been damaging to democracy and society as a whole. So yes, I think we have very good reasons to be worried about the future for ourselves, our families and our societies.

James Bach recently published a blog post which has inspired me. Testing is a performance, not an artifact, he says. It made me think about how I differentiate a great testing performance from a poor one. Is it only a subjective judgement (like saying ”the music performance was good”), or could there be some objective measures in play?

I think we should judge the testing performance by the artefacts it produces: Knowledge artefacts which are valuable in the business context in which we’re testing, income artefacts to me as a tester, and entertainment artefacts (testing is fun).

But I’ve realised that there is something missing: The performance should also be judged by its contribution to society as a whole. Testing should somehow contribute to sustainability in order to be a meaningful profession for me – social sustainability as well as energy and material sustainability.

This can be taken as a strictly political point of view, and I could choose to act on it by only accepting jobs in socially responsible companies and in organisations and companies which are making sustainable products.

But it can also be seen as a mission for our craft as a whole. Just as science has had to face the fact that it is not merely a knowledge-producing activity but is changing society through the knowledge it produces, we as testers have to face the fact that the knowledge we produce is applied in certain ways. Being a responsible tester does not mean that I’m only responsible for testing.

Therefore, I think that we should take on the endeavour of developing our craft from being just a knowledge-producing performance into a wisdom-producing performance.

Philosopher Nicholas Maxwell is the author of ”From Knowledge to Wisdom”, in which he outlines a revolution in science. In the introduction to the second edition (2007, p. 14) he writes:

There is thus, I claim, a major intellectual disaster at the heart of western science, technology, scholarship and education – at the heart of western thought; and this long-standing intellectual disaster has much to do with the human disasters of our age, our incapacity to tackle more humanely and successfully our present world-wide problems. In order to develop a saner, happier, more just and humane world it is certainly not a sufficient condition that we have an influential tradition of rational inquiry devoted to helping us achieve such ends. It is, however, I shall argue, a necessary condition. In the absence of such a tradition of thought, rationally devoted to helping us solve our problems of living, we are not likely to resolve these problems very successfully in the real world. It is this which makes it a matter of such profound intellectual, moral and social urgency, for all those in any way concerned with the academic enterprise, to develop a kind of inquiry more rationally devoted to helping us resolve our problems of living than that which we have at present.

Should this apply to testing, as well as to “science, technology, scholarship and education”? Yes, it certainly should. Will it be easy to adopt this thinking in testing? No, not at all.

First of all, we shouldn’t start throwing away any of the good things we’ve learnt and developed. Just as the ”scientific method” is still a necessary but not sufficient condition for the progress of science, our values and ideas about great testing are still all-important in testing. They are just not sufficient.

I think we who belong to the Context Driven Testing school are far better equipped than other testing schools to accept the sustainability point of view. After all, we are already succeeding in developing testing into a sustainable performance. Other testing schools still struggle with their explicit or implicit underlying short-term profit-making ambitions.

And although we’re obviously playing a polyphonic piece of music, speaking with many voices and not saying or meaning exactly the same things about testing or CDT, it seems to me that everyone in the CDT school shares the mission of developing the craft of testing as a creative, value-producing performance, where value is what matters to the stakeholders of the product under test. Let me call this our shared mission.

This is a wonderful mission, but in the new context, it has to give way to a better one: We’re only perceiving the craft of testing in isolation or in its immediate context, and we have to raise our heads and relate our craft to the greater context of society.

So I propose that we in the Context Driven School adopt the mission to develop testing towards being a wisdom-enhancing performance, where wisdom is knowledge that helps build a sustainable society.

What do you think?