Passion for Testing and the Need for ‘Julehygge’

Christmas is almost over and while I am still having holiday with the family, I’m beginning to think a bit about testing again.

I am passionate about software testing.

There is a lot of talk about passion, but do we know what passion is?

The word shares roots with the Greek ‘pathos’, which is one of the three key components of persuasion in rhetoric. The other two are ethos and logos.

Good communication should be fact based (logos) and serve a common greater good (ethos), but passion adds something important to communication.

The passionate lecturer

I remember two math lecturers from university. One taught analytical algebra, the other graph theory and combinatorics.

Both were personalities of the type you would notice if you saw them in the street, but if someone then whispered to you: “He is an associate professor in mathematics”, you would exclaim “ah!” and understand exactly what you were seeing 🙂

Their style of lecturing was very different, however.

Every lecture in graph theory and combinatorics was unique. It seemed the lecturer literally reinvented what he was lecturing on while he was doing it. He was not particularly organised in his teaching; sometimes he would even forget the subject and divert off on a wrong ‘graph’ (sic!). But he had passion for the subjects, and that showed. The lectures were often very engaging and fascinating.

The other lecturer prepared his lectures to perfection: He always started on the exact minute, putting his chalk to the board in the top left corner of the first of the six large black boards in the auditorium, and by the end of the 90th minute, he would finish writing a formula in the last available spot of the lower right corner of the last board. He repeated that time after time. A fascinating performance. But there was a problem, as he had obviously lost passion for the subject he was teaching. I felt bored to death during his lectures, and I am not sure I ever passed that exam.

Some testers are passionate about what they do, others try to be perfect. I always prefer passion over perfection.

Suffering by Passion

Passion is one of those tacit capabilities we know by heart, but will probably never be able to code, teach to a neural network, or explain to someone who has never experienced it.

The word has an interesting record in the Douglas Harper online etymology dictionary. Apparently, passion used to be a kind of suffering:

Passion: late 12c., “sufferings of Christ on the Cross,” from Old French passion “Christ’s passion, physical suffering” (10c.), from Late Latin passionem (nominative passio) “suffering, enduring,” from past participle stem of Latin pati “to suffer, endure,” possibly from PIE root *pe(i)- “to hurt” (see fiend).

The article even goes on linking passion to sufferings of martyrs.

Let me confess now: While I am very passionate about good testing, I am not going to become a testing martyr.

Words change meaning over time and passion is certainly a word that has become more of a daily language term than it probably was back in the late 12th century.

Today, linking passion to sufferings, even physical sufferings, may seem out of context.

However, it reminds us that passion does involve trading away some things that I like too: staying relaxed, calm, and cool, for example.

I am none of those things when I am feeling passionate.

Passion seems to be a kind of double-edged sword.


I am always more tired after working passionately on a testing problem than after doing more trivial things in my job: e.g. diligently replying to e-mails, writing factual test reports, checking plans and schedules.

Could there be something called passion-fatigue? I think so, and when passion is a driver in daily work life, relaxation and recharging are important to stay healthy, sane, and well in the longer run.

The need for Hygge

Now that Christmas has just passed, but I am still enjoying days of holiday with the family, it seems right to mention ‘hygge’ (pronounced “hyk-ge”).

Hygge is Danish for relaxing with others, a good book or in other nice ways.

Hygge is difficult to define. In that way it is similar to passion, only the opposite: relaxing, calming, and mentally soothing.

A day with hygge could be so relaxing and good that it deserves to be finished off with a good tequila, scotch, or another good drink of your preference 🙂

What’s interesting here is that hygge seems to be a good cure for passion-fatigue. Hygge creates space for passion.

And this is exactly what ‘Julehygge’ is about: Getting away from daily life, relaxing with family and friends, and recharging.

Is “hygge” becoming a global fashion trend? The New York Times had an article on the fashion of hygge a few days ago: Move Over, Marie Kondo: Make Room for the Hygge Hordes


Detail of Christmas tree in our living room. Perhaps more than anything, a Christmas tree is in Denmark a symbol of “Julehygge”.

Playful Software Testing

I met with and enjoyed a very good conversation with Jessica Ingrassellino in New York back in September. Jessica presented a workshop on playful testing during the Reinventing Testers Week (I presented at the conference about “Testing in a Black Swan Domain” which, unfortunately, I have not had time to write about yet).

We talked mostly about philosophy.

Jessica is quite a multi-talent: she plays the violin like a virtuoso, is a trained music teacher, has switched careers to testing, taught herself Python, authored a book on Python programming for kids, and teaches Python classes at a local community college, as well as music classes.

She has a vision of making testing playful and fun.

Structured work governs testing in professional settings, work which has nothing to do with play. So why is play important?

Jessica puts it this way:

When the power of play is unleashed in software testing, interesting things happen: The quality of the testing performance becomes noticeably better, and the outcomes of it too. This results in better software systems, higher product quality.

I have a product engineering background and play is important for me too. Engineers have methods, calculations, and procedures, but great engineers know that good solutions to problems are not found by orderly, rational processes. Good solutions depend on creativity and play.

Friday December 9th, I met with Mathias Poulsen in Copenhagen. Mathias is the founder of CounterPlay, a yearly conference and festival on serious play in Aarhus, the second largest city in Denmark.

About three years ago, Mathias got the idea for the conference.

In the first year, 2014, it was an immediate success with more than 20 talks and workshops in 3 tracks on “Playful Culture, Playful Learning, and Playful Business”, and more than 150 participants. This year (2016), the conference had 50 scheduled sessions: keynotes, talks, workshops, mini-concerts and open sessions.

Mathias explains (about 0:30 into the video):

Counterplay is basically an attempt to explore play and being playful across all kinds of domains and areas in society. We are trying to build a community of playful people around the world to figure out, what does it mean to be playful and why do we think it is beneficial?

Professional IT has so far not been represented at the conference, Mathias told me. I found that a bit surprising, as at the moment almost everything in IT seems to be buzzing with concepts promising joy and fun – play.

Sometimes, however, there is an undertone to all the joy. Agile and DevOps have become popular concepts even in large corporations, and to me, both strive to combine productivity with playfulness. That is good.

But is the switch to Agile always done in order to pass power to developers and testers, allowing them to playfully perform, build and test better solutions? No, not always.

Play facilitates change and the breaking of unhelpful patterns, but sometimes play is mostly a cover for micromanagement. There is a word for this: in a recent blog post, Mathias talks about playwashing:

Playwashing describes the situation where a company or organization spends more time and money claiming to be “playful” through advertising and marketing than actually implementing strategies and business practices that cultivate a playful culture in said organization.

The question is therefore: how do we genuinely support play? Are there methods or processes that better accommodate playfulness at work?

I believe there are. Processes need to leave space for exploring context, knowledge sharing, and actual interaction with customers, stakeholders, and team members.

But processes or methods will not do the job alone. In fact, putting play under the examination of psychology or the cognitive sciences will never grasp what play really is.

Play is more like music and poetry, where ideas based on assumptions about order, rational choice, and intention cannot explain anything.

Philosophy, and especially the dialectical exploration of what it means to be a playful human, is much better at embracing what play means to us and how to support it.

Jessica and I are working on a workshop about playful and artful testing. It will combine ideas of playful testing with philosophy.

We are certain that breaking out of patterns will help testers, and that breaking out of our own patterns by participating in a conference fully devoted to play will teach us a lot.

I took this photo in the local forest on a walk with our dog Terry (the black poodle). It is obvious, when dogs play well, that they have fun and learn a lot through play. Play seems a fundamental capacity for mammals.

The many are smarter than the few: How crowds can forecast quality

This is a blog post which I’ve had underway since early May. It is about a new way of assessing quality. Let’s start with how we normally work:

Testers usually work alone or in small teams checking and exploring functionality, finding bugs, issues, and other artifacts. These artifacts do not by themselves say anything about the quality of the whole product, instead they document things which are relevant to quality.

In this blog, I’ll propose a different kind of testing, one which is organised in a way which is radically different from traditional testing – and which can produce a quite different type of result.

My inspiration is the 2004 book by James Surowiecki: ‘The Wisdom of Crowds‘ with the subtitle ‘Why the many are smarter than the few’. In the book, Surowiecki presents a thought provoking fact: That while some individuals are very good problem solvers or excellent forecasters, a diverse crowd of ordinary people can always do better than any single individual.

Surowiecki explains this in a very convincing manner and the book is an enlightening read. I find Surowiecki’s thoughts a welcome diversion from what most people seem to be concerned about these days: the performance of the individual. Too often, we forget that most good solutions are not invented or implemented by any single person, but by groups of people. And that the performance of a team often depends more on the composition of the team than on the individuals in it.

As a tester, I enjoy working alone as well as in teams, but reading Surowiecki’s book made me think of ways to apply his thoughts to make quality assessments of a different kind than those traditional testing can make.

James Surowiecki: The Wisdom of Crowds

Let me start the description with an example of a question which traditional testing cannot easily answer, but which I think a new kind of assessment can:

A client approaches us with a product which is still under development and therefore not yet on the market. The client tells us that he needs a holistic quality assessment of the product and he asks us to provide the answer to a very simple question: Will it be a good product?

Though I can produce a wealth of information about a product I’m testing, answering this question is not possible by ordinary testing alone. I may be able to form an opinion about the product based on my testing, and I can communicate this to my client, but it will always be a fundamentally subjective view.

And there is no practical way of assessing whether my view of the product matches that of the collective intelligence of the population of users of the future product. An expert in the field of forecasting product success may do better than me, but in principle he may be just as wrong as I am – and the worst thing is that we will not know whether he is right or wrong.

Humans are actually very good at coming up with answers to open ended questions: Quality is something that everyone tends to have an opinion about! But while a single human can (and according to Surowiecki will) make interpretation errors, Surowiecki points out that in a crowd, the errors will be evened out. Aggregated opinions can be a very reliable prediction of the quality of the finished product.
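Surowiecki’s claim that individual errors even out in a crowd can be illustrated with a tiny simulation. The numbers here are entirely hypothetical – a “true” quality score and noisy individual judgements – but the effect is the statistical one he describes:

```python
import random
import statistics

# Hypothetical "true" quality of the finished product, on a 1-10 scale.
TRUE_QUALITY = 7.0

def individual_estimate(noise=2.0):
    """One crowd member's noisy, subjective quality score."""
    return TRUE_QUALITY + random.gauss(0, noise)

random.seed(42)
crowd = [individual_estimate() for _ in range(500)]

# Each individual's error is large...
individual_errors = [abs(e - TRUE_QUALITY) for e in crowd]

# ...but the errors point in different directions, so the
# aggregated (average) opinion lands much closer to the truth.
crowd_error = abs(statistics.mean(crowd) - TRUE_QUALITY)

print(f"typical individual error: {statistics.mean(individual_errors):.2f}")
print(f"error of the crowd's average: {crowd_error:.2f}")
```

The averaging only works, of course, if the errors really are independent – which is exactly why diversity and independence matter so much in what follows.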

The crowd does not have to be a team of experts. Surowiecki points out that rather than focusing on maximizing the individual members’ domain knowledge and level of experience, the crowd should be put together to be as diverse as possible.

Obviously we have to supply some information about the product to the group – they can’t make up their minds about quality without knowing something about the product. Collecting information has to be done by someone and provided to the group members. This is an important task for a ‘moderator’.

In the ideal situation, we would provide all available information to the group: prototypes, design documents, concept descriptions, ideas, diagrams – even code! The idea is to allow each individual member of the crowd to use his own heuristics when making up his mind about the question.

But that won’t work in practice. Asking all group members to read everything is just not effective. Besides, the documentation could lead them in wrong directions: they will focus on the most easily accessible parts and avoid information they have to work a little to get to.

So the moderator will have to make a ‘flat’ (as opposed to hierarchical) binder of different information from the product. What should it contain?

When I was learning sketching and drawing, I was introduced to the problem of drawing leaves on a tree or hair on an animal. I was taught a trick, which is to draw every 100th leaf or every 10,000th hair accurately. It will then look correct to the viewer.

I suggest making the ‘information collection’ in the same way: Pick some documents, some diagrams, some code, some tests. Or even, pick some pages from some documents.

The idea is that the crowd members don’t actually need to see everything – they only need enough to formulate an opinion. And they should see different things, so we can be more certain that they will form different opinions about the system.
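One way the moderator might implement this sampling is to give each crowd member a different random slice of the collected material. This is only a sketch – the artifact names are invented – but it shows the principle of different members seeing different evidence:

```python
import random

# Hypothetical pool of artifacts the moderator has collected.
artifacts = (
    [f"doc-{i}" for i in range(1, 41)]
    + [f"diagram-{i}" for i in range(1, 11)]
    + [f"code-sample-{i}" for i in range(1, 21)]
)

def make_binder(member_seed, size=8):
    """Build one member's 'flat' binder: a random sample of the
    material, seeded per member so everyone sees something different."""
    rng = random.Random(member_seed)
    return rng.sample(artifacts, size)

# Five crowd members, five different binders.
binders = {member: make_binder(member) for member in range(5)}
for member, binder in binders.items():
    print(member, binder)
```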

How about questions – what questions should we ask? We will have to ask them in a way that allows answers to be aggregated into a combined result. We may want to ask them to give a score, which can then be averaged or analysed in other ways.
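Aggregation itself can be very simple. With hypothetical scores on a 1-10 scale for the question “Will it be a good product?”, the mean or median gives the crowd’s forecast, and the spread of the votes shows how much the crowd disagrees:

```python
import statistics

# Hypothetical secret votes from ten crowd members, scale 1-10.
votes = [7, 8, 6, 9, 5, 7, 8, 4, 7, 6]

mean_score = statistics.mean(votes)      # the crowd's forecast
median_score = statistics.median(votes)  # robust to outlier votes
spread = statistics.stdev(votes)         # high spread = the crowd disagrees

print(f"mean {mean_score:.1f}, median {median_score}, spread {spread:.1f}")
```

A high spread is itself useful information: it may mean the material was ambiguous, or that the product divides opinion – both worth reporting to the client.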

Surowiecki points out some important pitfalls that should be avoided. I’ll focus on what is often referred to as groupthink. This is what happens when a group of people turns out to be ‘dumber’ than the individual members. A bulletproof way to get people to think collectively is to let some members of the crowd influence other members: e.g. by putting a very charismatic person in the role of chairman or manager for the group. Surowiecki refers to several examples of how groupthink has led to wrong decisions, and it is obvious that if we want to make an assessment which can be trusted, we have to avoid it. By all means.

So ‘voting’ should be secret, and we should generally prevent members from communicating with each other. If we do allow them to communicate, we should moderate the communication to ensure that individual members are not allowed to influence the opinions of other members.

Is crowd involvement in testing a new thing? I think so. I don’t think the concept has been described before.

On the other hand, many beta test programs have traits of it.

But where the crowd based quality assessments (or forecasts) can take place at any point in the development process, beta testing by definition takes place on an almost finished version of the product. And beta test programs produce the same types of results as ordinary testing: Bugs and other artifacts.

Holistic crowd testing is not an efficient bug factory. Its power is its ability to answer holistic questions about a product under development.

I’d like to set up a workshop at a forthcoming software testing conference where the idea can undergo further development. Let me know if you’re interested in participating, and I’ll let you know when and where.