After 15 years as a freelancer, I dare to doubt myself

These days mark 15 years since I took the leap and went freelance. I haven't regretted it!

I've been hired in as the expert who makes the complicated simple and solves problems. Mostly on long contracts, but always as a free bird. I actually love it!

The consulting job requires plenty of love: love for the problems that need solving and love for the people who have the problems. Oh, and for the customer. More tedious things come with it too: contracts, invoicing… that sort of thing. They're part of the game.

Part of the game is also an expectation of performance: that we can quickly step in and “deliver the goods” – without missing a beat.

Humility is actually incredibly important. Because – hand on heart – consultants are far from perfect, and certainly not infallible.

The specialist role and the expectation of flawless performance must never lead to us walking around with our noses in the air. I get a bit embarrassed when I occasionally meet another consultant with an attitude suggesting they are universal experts who always know best.

I think I'm reasonably good at avoiding that attitude myself. It helps me that I'm regularly reminded of some of the mistakes I've made. After 15 years in the role, I've lost count of how often I've failed at a task. Embarrassing, but true. And now I've said it!

The classic embarrassing situation for me as a tester is a ”bug slip”: The customer wants it tested that the system we're working on shows exactly a certain result, and I'm hired in to document the quality of the system before we take it into production.

I'm the testing expert with insight into the technology and the project. I carry out the order. It looks fine. We keep to the plan. All is well.

But then a defect is reported in production – and an obvious problem, at that, which I simply overlooked when I tested.

In a situation like that, it's not pleasant to be in my shoes. Phew, I remember every single time it has happened, and it's more than once! Unfortunately, it's part of the tester's job that it happens. I try to set expectations about it, but it's never fun.

That situation, and other failures I've had a part in, have taught me that while it's certainly important to use your experience and expertise, it's also important to be able to doubt yourself. Yes, doubt: to know that expertise is often far from enough to guarantee success.

Sometimes it's precisely the expertise that gets in the way of doing a good job.

One general thing I've been thinking about (but haven't finished thinking through) is that we should all get better at improvising. That is, improvise honestly and become skilled at it: fail in a controlled way, observe what we can learn – and do better, fail a little less, evaluate, do much better.

In other words, get better at not letting ourselves be blinded by previous good experiences – and thereby missing the obvious.

In any case, I believe it's a quality when I, as a consultant, bring doubt along to work – like a good friend who helps me do my best. And I believe it's a quality when I share that doubt in a constructive way, so that together we can use it to do our best.

Expertise and experience still matter. But we must never forget the doubt.

By the way, I feel ready to take on 15 more years. Maybe I'll see you out there! And don't be surprised if I'm the expert who doubts.

January 24th: Meetup on Risk Based Testing Strategy

On Tuesday January 24th, we will be hosting a meetup on Risk Based Testing Strategy under the Ministry of Testing Copenhagen Meetup group in Herlev.

Make sure you register as a member of the Ministry of Testing Copenhagen meetup group to be notified when new meetups are announced.

Also, don’t miss the Ministry of Testing home page to learn about other meetups, TestBash, news, and lots of useful testing resources.

Why you should do your job better than you have to

Software testers evaluate quality in order to help others make decisions to improve quality. But it is not up to us to assure quality.

Projects need a culture in which people care for quality and worry about risk, i.e. threats to quality.

Astronaut and first man on the moon Neil Armstrong talked about the reliability of components in the spacecraft in the same interview I quoted from in my last post:

Each of the components of our hardware were designed to certain reliability specifications, and by far the majority, to my recollection, had a reliability requirement of 0.99996, which means that you have four failures in 100,000 operations. I’ve been told that if every component met its reliability specifications precisely, that a typical Apollo flight would have about 1,000 separate identifiable failures. In fact, we had more like 150 failures per flight, substantially better than statistical methods would tell you that you might have.

Neil Armstrong not only made it to the moon, he even made it back to Earth. To make that happen, the whole Apollo programme had to deal very carefully with the chance that things would not work as intended.

In hardware design, avoiding system failure depends on the hardware not failing. To manage the risk of failure, engineers work with reliability requirements, e.g. in the form of a required MTBF – mean time between failures – for individual components. Components are tested to estimate their reliability in the real system, and a key part of reliability management is then to tediously combine all the estimated reliability figures to get an indication of the reliability of the whole system: in this case a rocket and spacecraft designed to fly men to the moon and bring them safely back.
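As a side note, the simplest version of that arithmetic goes like this: in a series system with independent components, the component reliabilities multiply, and expected failure counts add up over operations. Here is a minimal Python sketch; apart from the 0.99996 figure from the quote above, the numbers (such as the 10,000-component count) are illustrative assumptions, not actual Apollo data:

    # Minimal sketch of combining component reliabilities (illustrative numbers).

    def system_reliability(component_reliabilities):
        # Series system with independent failures: every component must work,
        # so the individual reliabilities multiply.
        r = 1.0
        for reliability in component_reliabilities:
            r *= reliability
        return r

    def expected_failures(reliability_per_operation, operations):
        # Expected number of failures over a given number of operations.
        return (1.0 - reliability_per_operation) * operations

    # A component meeting the 0.99996 spec fails about 4 times per 100,000 operations:
    print(expected_failures(0.99996, 100_000))     # ~4.0

    # Hypothetical: 10,000 such components in series leave a very real chance of failure.
    print(system_reliability([0.99996] * 10_000))  # ~0.67

The multiplication effect is the point: a system built from hundreds of thousands of nearly perfect parts can still be expected to fail somewhere on every flight, which is exactly why no estimate can be trusted blindly.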

But no matter how carefully the calculations and estimations are done, the result will always be an estimate. There will be surprises.

The Apollo programme turned out to perform better than expected. Why?

When you build a system, whether it's an IT system or a spacecraft, how do you ensure that things work as intended? Following good engineering practices is always a good idea, but relying on them alone is not enough. It takes more.

Armstrong goes on in the interview (emphasis mine):

I can only attribute that to the fact that every guy in the project, every guy at the bench building something, every assembler, every inspector, every guy that’s setting up the tests, cranking the torque wrench, and so on, is saying, man or woman, “If anything goes wrong here, it’s not going to be my fault, because my part is going to be better than I have to make it.” And when you have hundreds of thousands of people all doing their job a little better than they have to, you get an improvement in performance. And that’s the only reason we could have pulled this whole thing off.

Do it right: A value in Context Driven Testing

The problem with me is that I’m really bad at following instructions. When people tell me to do something in a certain way, I do it differently.

It’s a problem when I cook, because I’m not particularly good at cooking. So I have to follow recipes, and I often mess them up slightly. I’m improving, learning strategies to remember things, but this is a fundamental personality trait of mine.

And not one I’m sorry about. Because it’s not a problem when I test!

I always wanted to be a great tester.

I tend to become really annoyed with myself when a bug turns up in production in something I have tested. ”Why did I miss that?!” I feel guilty. You probably recognise the feeling.

The feeling of guilt is ok. The fact that we can feel guilt proves that we have a conscience and empathy, and that we do the best we can. People who don’t care don’t feel guilt.

But in testing, finding every bug is fundamentally impossible, so we have to get over it and keep testing. Keep exploring!

Even before I learnt about Context Driven Testing, I knew that great testing could never be about following test scripts and instructions. I noticed that I got bored and lost attention when I executed the same test scripts over and over again, and I also noticed that I missed more bugs when I only followed the instructions.

So I stopped following the instructions. This gave me an explanation problem, however: “Uhh, well, I didn’t do quite what I was asked to do… but hey, you know, I found these interesting bugs that I can show you!”

Can you hear it? That won’t impress old-school project managers with spreadsheets to check and plans to follow.

Context Driven Testing has helped me greatly in becoming a better tester. The thing is that CDT teaches me to do a great testing job without instructing me exactly what to do. Instead, the community shares a lot of heuristics I can use to help me do a great testing job, and through thinking training and inspiration from others, it helps me develop my capacity to do great testing in whatever context I’m working in.

It may be a bit difficult to grasp at first. A little worrying, perhaps. But CDT is a really, really powerful testing approach.

And CDT has helped me explain what I do to my project managers. Even the old-school types!

Social services and testing

A few days ago, I read an article about quality in social services in which the following statement from the vice president of the Danish social workers union caught my attention: ”It’s about doing it right, not about doing the right things.” He was speaking of how the authorities try to help vulnerable children and their families.

The statement resonated with me, and a bit later it occurred to me that it even sums up what CDT is about:

Context Driven Testing is about doing it right, not about doing all the right things.

Note that I’ve added the word ‘all’ here.

There’s more to CDT of course, but this is the core of CDT – to me. Some readers may raise their eyebrows at the ”doing it right” part: ”Does he mean that there is ONE right way to do it?” Read on…

For the past 10 years, I’ve worked in contexts where CDT is not really the thing we do, and if I had been as context driven as the managers in my context asked me to be, I would not really have been context driven. You get the picture, I’m sure.

But with CDT in my briefcase, I can work to make changes and improve things in the specific context in which I’m working. As a consultant and contractor, I’m usually hired to fix a problem, not to revolutionize things, and I’m expected to do a “good job” fixing it.

”Doing it right” is of course about doing a good job, and ”doing a good job” should really not be taken lightly. CDT helps me do a good job, even when I’m not working in contexts that actively support CDT. That’s because CDT is flexible: It emphasizes that testing should never be driven by manuals, standards or instructions, but by the actual context in which it takes place, and that’s actually quite difficult to disagree with, even for old-school project managers!

Further, if my context (i.e. my project manager) asks me to do something, I do it – even if it’s in a standard. Sometimes there’s a good reason to use a standard.

Context driven testing is not defined by any methods or even by a certain set of heuristics. Nor is it defined by a book, standard or manual. Neither is there any formal education, certification or ”club” that you have to be a member of in order to call yourself Context Driven.

Instead, Context Driven Testing is about committing oneself to some core values, and to me the most important value in CDT is contained in the sentence:

It’s about doing it right, not about doing all the right things.

Why would anyone want anything else?

(Thanks to @jlottossen for reviewing this post and for suggesting several changes to the original version.)

Speaking to Management: Coverage Reporting

Test coverage is important. In this post, I will reflect on communication issues around test coverage.

The word coverage has a different meaning in testing than in daily language. In daily language, it refers to something that can be covered and hidden completely: if you hide under a cover, it will usually mean that we can’t see you, and if you put a cover on something, the cover will keep things out.

Test coverage works more like a fishing net. Testing will catch bugs if used properly, but some (small) fish, water, plankton etc. will always pass through. Some nets have holes through which large fish can escape.

What’s so interesting about coverage?

When your manager asks you about test coverage, she probably does so because she seeks confidence that the software works sufficiently well to proceed to the next iteration or phase in the project.

Seeking confidence about something is a good project management principle. After all: If you’re confident about something, you are so because you don’t need to worry about it. Not having to worry about something means that you don’t have to spend your time on it, and project managers always have a gazillion other things that need their attention.

The word is the bug

So if confidence comes out of test coverage, then why is it that managers often misunderstand us when we talk about coverage?

Well, the word actually means something else in daily language than it does when we use it in testing. So the word causes a communication “bug” when it’s misunderstood or misused.

We need to fix that bug, but how? Should we teach project managers the ”right” meaning of the word? We could send them to a testing conference, ask them to take a testing course, or give them books to read.

That might work, but it wouldn’t solve the fundamental communication problem. It would just move the problem higher up the organisational hierarchy.

An educated manager will have the same problem, not being able to make her peers and managers understand what ”test coverage” means. After all, not everyone in the organisation can be testing experts!

STOP mentioning coverage

A good rule of thumb in communication is: When your communication is likely to be misinterpreted, don’t communicate.

As a tester, I know what test coverage means and, more importantly, what it does not mean, but I cannot expect others to understand it. Thus, if I use the word, I will probably be misunderstood. A simple solution is to stop using the word. So I won’t say sentences like: Our testing has covered some functionality.

What I can say is: We have carried out these tests, and this is what we found.

This will work well until someone asks you to relate your testing to the business critical functionality: Ok, then tell me, how much of this important functionality do your tests cover?

Uh oh!

Stay in the Testing Arena – or be careful

American circuses have enormous tents and two, three or even four arenas with different acts happening at the same time. A project is always going on in different arenas as well: For example we might have a product owner arena, a development arena, a test arena, and a business implementation arena.

Some people play in several arenas: I think most testers have at some point in their career made the mistake of telling a developer how to code. Likewise, we can probably all agree that there’s nothing more annoying than a developer telling a tester how to test.

Confidence belongs in the product owner arena, not in testing. This is because testing is about qualifying and identifying business risks, and since confidence does not equal absence of risks, it’s very hard for us to talk about confidence. And coverage.

This doesn’t mean you can’t move to another arena.

You can indeed look at things from the product owner’s perspective, and that’s perfectly ok! Just make sure you know that you are doing it and why you are doing it: You are leaving your testing arena to help your product owner make a decision. Use safe language when you do.

Talk facts and feelings

Confidence is fundamentally a feeling, not a measurable artefact. It’s something that you can develop, but it can also be communicated: Look confident, express confidence, talk about the good stuff, and people around you will start feeling confident.

Look unconfident, express worry, talk about problems, and people around you will start feeling worried.

We testers always develop feelings about the product we’re testing, and we can communicate these feelings.

I know two basic strategies in any type of test result communication:

  • Suggest a conclusion first, then tell them what you’ve done
  • Give them all the dirty details first, then help your manager conclude

Which communication strategy you pick should depend on the context, e.g. your relationship with the manager. If everything looks pretty much as expected (whether that’s good or bad), your manager trusts you, and you have good knowledge of the business risks, then I wouldn’t worry too much about serving the conclusion first and offering details later, mostly to make sure you and your manager don’t misunderstand each other. And that nobody will later be able to claim that you kept silent about something.

But if something is way off, or your manager doesn’t trust you (or you don’t trust her), people’s lives may be at stake, or you just have no idea what’s happening, then stick to the details – do not conclude. And that, I think, implies not using the term ”test coverage”.