With Cynefin, I can justify skepticism about inappropriate approaches and create better ones

As testers we need to better understand and be explicit about problems in testing that don’t have known, clear, or obvious solutions. Cynefin can help by transforming the way we, our teams, and our stakeholders think about testing problems.

Ben Kelly and James Christie have written very good blog posts about Cynefin and testing. Liz Keogh was one of the first to write about Cynefin in software development. At the bottom of this post, I have included a video with David Snowden and a link to an article I found interesting.

With this blog post, I’m sharing elements of my own understanding of Cynefin and why I think it’s important. I think of Cynefin itself as a conceptual framework useful for comprehending dynamic and complex systems, but it is also a multi-faceted “tool” which can help create context-dependent conceptual frameworks, both tacit and explicit, so that we can better solve problems.

But before diving into that (and in particular explaining what a conceptual framework is), I’d like to share something about my background.

Product design and the historic mistakes of software development

I studied product design at university in the early 90s. Creating new and innovative products does not follow obvious processes. Most engineering classes taught us methods and tools, but the product design classes were different.

We were taught to get into the field, study real users in their real contexts, develop understandings of their problems, come up with prototypes and models of product ideas, and then try out these prototypes with the users.

Discussing an early draft of this post with James Christie, he mentioned that one of the historic mistakes of software development has been the assumption that it is a manufacturing process, whereas in reality it is far more like research and development. He finds it odd that we called it development, while at the same time refusing to believe that it really was a development activity.

SAFe, “the new black” in software delivery, is a good example of how even new methodologies in our industry are still based on paradigms rooted in knowledge about organizing manufacturing. “The Phoenix Project”, a popular novel about DevOps, states on the back cover that managing IT is similar to factory management.

What I was taught back in the 90s still helps me when I try to understand why many problems remain unsolved despite hard work and many attempts to solve them. I find that sometimes the wrong types of solutions are applied: solutions which don’t take into consideration the true nature of the issues we are trying to get rid of, or the innovations we’re trying to make.

Knight Capital Group, a testing failure

The case of Knight Capital Group is interesting from innovation, risk, and software testing perspectives, and I think it exemplifies the types of problems we get when we miss the complexity of our contexts.

Knight Capital Group was one of the more aggressive investment companies on Wall Street. In 2012 they developed a new trading algorithm. The algorithm was tested using a simulation engine, I assume to assure stakeholders that the new algorithm would generate great revenues.

The testing of the algorithm was not enough to ensure revenues, however. In fact, the outcome of deploying the algorithm to production was great losses and the eventual bankruptcy of the company after only 45 minutes of trading. What went wrong?

There are always several complementary perspectives. Here is how the SEC, the U.S. Securities and Exchange Commission, put it:

[…] Knight did not have a system of risk management controls and supervisory procedures reasonably designed to manage the financial, regulatory, and other risks of market access […] Knight’s failures resulted in it accumulating an unintended multi-billion dollar portfolio of securities in approximately forty-five minutes on August 1 and, ultimately, Knight lost more than $460 million […]

From a testing perspective, it’s interesting that the technical root cause of the accident was that a component designed for testing the algorithm by generating artificial data was mistakenly deployed into production along with the algorithm itself. This test component created a stream of random data, and the effect was that the algorithm issued purchase orders for worthless stock.
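To make the failure mode concrete, here is a minimal, hypothetical sketch (all names are invented, this is not Knight’s actual code) of how a test-only data generator can fail loudly instead of silently feeding random quotes to a production algorithm:

```python
import os
import random


class TestDataInProductionError(RuntimeError):
    """Raised when a test-only component is activated in production."""


def synthetic_quote_stream(count):
    """Test-only feed: returns random quotes for a worthless stock,
    similar in spirit to the simulation component described above."""
    if os.environ.get("APP_ENV") == "production":
        # Guard: fail loudly rather than silently feeding random data
        # to the trading algorithm in production.
        raise TestDataInProductionError(
            "synthetic_quote_stream must never run in production")
    return [
        {"symbol": "TEST", "price": round(random.uniform(0.01, 1.0), 2)}
        for _ in range(count)
    ]
```

The point is not this specific check but that test components carry an explicit, enforced boundary: a deployment mistake then turns the component into a hard failure instead of a stream of bogus orders.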

It is paradoxical that the technical component that caused the accident was designed for testing, but it is not uncommon for software testing to focus on relatively obvious, functional, and isolated performance perspectives of the system under test.

Cynefin transforms thinking

Let’s imagine you’re the test manager at Knight and you choose to use Cynefin to help you develop the testing strategy for the new algorithm. David Snowden talks about Cynefin as a “sensemaking tool”, and if you engaged Knight’s management, financial, IT operations, and development people in a facilitated session with a focus on risks and testing, I’m pretty sure the outcome would have been the identification of the type of risk that ended up bankrupting the company. You could then either have prevented it by explicitly testing the deployment process, or made sure operations and finance put the necessary “risk management controls and supervisory procedures” in place.
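Explicitly testing the deployment process could, for example, include an automated audit of the deployed artifact. A minimal sketch, in which the directory layout and marker names are invented assumptions:

```python
from pathlib import Path

# Hypothetical naming convention: test-only modules carry one of
# these markers somewhere in their file name.
TEST_ONLY_MARKERS = ("simulator", "mock", "test_feed")


def audit_deployment(deploy_dir):
    """Return the files in a deployment directory whose names suggest
    test-only components that should never reach production."""
    return sorted(
        p.name for p in Path(deploy_dir).rglob("*")
        if p.is_file()
        and any(marker in p.name.lower() for marker in TEST_ONLY_MARKERS)
    )
```

Run against every production package before release, a check like this turns “did a test component slip in?” from an unasked question into a routine gate.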

I think so because, even with my limited experience so far, I have seen how Cynefin sessions are great for forming strategies to deal with the problems, issues, challenges, and opportunities that a team is facing. It helps people talk seriously about the nature of problems, transform them, and escalate things that require escalation.

Cynefin seems to be efficient at breaking the traditional domination of linear and causal thinking, which prevents the solving of anything but the simplest problems.

My interpretation of what is happening is that Cynefin helps extend the language of those participating in sessions, and in the following I’ll dive a bit more into why I interpret it that way.

Language and Conceptual Frameworks

Language is an everyday thing that we don’t think about, yet it is the very framework which contains our thinking. While we can know things we cannot express (tacit knowledge), we cannot actively think outside the frames our language creates.

Many philosophers have thought about this, but I’d like to refer to physicist Niels Bohr (1885-1962) who in several of his lectures, articles, and personal letters talks about the importance of language. Poetically, and paraphrasing him from my memory, he describes language as the string that suspends our knowledge above a void of endless amounts of experiences.

In a particular lecture, “The Unity of Science”, given at Columbia University, New York in 1954, Bohr introduces language as a “conceptual framework” and describes how quantum physics is an extension of the previous conceptual framework used in physics:

“[it] is important […] to realize that all knowledge is originally represented within a conceptual framework adapted to account for previous experience, and that any such frame may prove too narrow to comprehend new experiences.”


“When speaking of a conceptual framework, we merely refer to an unambiguous logical representation of relations between experience.”

Quantum physics is more than new laws of nature. Rather, it introduced new and complementary concepts like uncertainty and non-deterministic relations between events. The extension was made for quite practical purposes, namely the comprehension of observations, but it has turned out to be quite useful:

“By means of the quantum mechanical formalism, a detailed account of an immense amount of experimental evidence regarding the physical and chemical properties of matter has been achieved.”

The rest is history, so to speak.

Why is this relevant to software testing and the talk about Cynefin? First of all, I think that the conceptual frameworks based on the thinking developed during industrialism are far from capable of explaining what is going on in software development and therefore also in testing. Further, Cynefin seems to be an efficient enabler to create extensions to the old thinking frameworks in the particular contexts in which we use it.

Cynefin and software testing

Software development does not generally follow simple processes. Development is obviously a human, creative activity. Good software development seems to me to be much more like a series of innovations with the intention of enabling someone to do things in better ways.

Testing should follow suit.

But if language limits us to different types of linear and causal thinking, we will always miss the fact that there is generally no simple, algorithmic, or even causal connection between the stages of (1) understanding a new testing problem, (2) coming up with ideas, and (3) choosing solutions which are effective, socially acceptable, possible to perform, and safe and useful.

Experienced testers know this, but knowledge is often not enough.

James Christie added in his comments to the early draft mentioned above that, as testers, with Cynefin we can better justify our skepticism about inappropriate and simplistic approaches. Cynefin can make it less likely that we will be accused of applying subjective personal judgment.

I would like to add that the extended conceptual framework which Cynefin enables with us, our teams, and our stakeholders furthermore allows us to discover new and better approaches to problem solving.

David Snowden on Cynefin

This video is a very good, quick introduction to Cynefin. Listen to David Snowden himself explain it:


I personally found this article from 2003 a very good introduction to Cynefin:

The new dynamics of strategy: Sense-making in a complex and complicated world (the linked page contains a link to download the article)


A Sustainable Mission for Context Driven Testing?

This image changed the world. It was taken from Apollo 8 in 1968 and shows the blue Earth rising over the grey, deserted Moon. Our world seems fragile. “The vast loneliness is awe-inspiring and it makes you realize just what you have back there on Earth,” Command Module Pilot Jim Lovell said. Image credit: NASA.

I have lately become worried about certain developments in society.

For years, scientists, politicians, and others have warned us that we’re responsible for irreversible changes to our planet: climate change, most notably. They’re telling us we need to change to sustainable energy sources.

Sustainability is about more than energy, and I’m worried that in the societal changes imposed upon us by the combined effects of globalization and the need for serious resource conservation, we are at the same time becoming increasingly indifferent about the lives of certain groups of people. I remember how many people used to develop deep feelings of indignation when pictures of hungry or poor children were shown on TV. That has changed, and such pictures don’t have much effect any more. And worse: we generally don’t even care about poverty close to ourselves.

I feel this may be linked to a macroeconomic pattern we’re seeing almost everywhere in the world: the rich are getting richer, but the poor are still as poor as they used to be. In Southern Europe, we have enormous unemployment among young people. Economists are warning that we are about to lose a whole generation.

Does this affect testers too? After all, we’re safe, working in IT, the technology of the future, aren’t we?

Well, inequalities in income and life conditions are growing on our planet, and this is worrying, since inequality has historically been a trigger of wars and revolutions, and has always been damaging to democracy and society as a whole. So yes, I think we have very good reasons to be worried about the future for ourselves, our families, and our societies.

James Bach recently published a blog post which has inspired me. Testing is a performance, not an artefact, he says. It made me think about how I differentiate a great testing performance from a poor one. Is it only a subjective measure (as in “the music performance was good”), or could there be some objective measures in play?

I think we should judge the testing performance by the artefacts it produces: Knowledge artefacts which are valuable in the business context in which we’re testing, income artefacts to me as a tester, and entertainment artefacts (testing is fun).

But I’ve realised that there is something missing: the performance should also be judged by its contribution to society as a whole. Testing should somehow contribute to sustainability in order to be a meaningful profession for me: social sustainability as well as energy and material sustainability.

This can be taken as a strictly political point of view, and I could choose to act on it by only accepting jobs in socially responsible organisations and in companies which are making sustainable products.

But it can also be seen as a mission for our craft as a whole. Just as science has had to face the fact that it is not merely a knowledge-producing activity, but that it changes society through the knowledge it produces, we as testers have to face the fact that the knowledge we produce is applied in certain ways. Being a responsible tester does not mean that I’m only responsible for testing.

Therefore, I think that we should take on the endeavour of developing our craft from being just a knowledge-producing performance into being a wisdom-producing performance.

Philosopher Nicholas Maxwell is the author of ”From Knowledge to Wisdom”, in which he outlines a revolution in science. In the introduction to the second edition he writes (p. 14, 2nd ed., 2007):

There is thus, I claim, a major intellectual disaster at the heart of western science, technology, scholarship and education – at the heart of western thought; and this long-standing intellectual disaster has much to do with the human disasters of our age, our incapacity to tackle more humanely and successfully our present world-wide problems. In order to develop a saner, happier, more just and humane world it is certainly not a sufficient condition that we have an influential tradition of rational inquiry devoted to helping us achieve such ends. It is, however, I shall argue, a necessary condition. In the absence of such a tradition of thought, rationally devoted to helping us solve our problems of living, we are not likely to resolve these problems very successfully in the real world. It is this which makes it a matter of such profound intellectual, moral and social urgency, for all those in any way concerned with the academic enterprise, to develop a kind of inquiry more rationally devoted to helping us resolve our problems of living than that which we have at present.

Should this apply to testing, as well as to “science, technology, scholarship and education”? Yes, it certainly should. Will it be easy to adopt this thinking in testing? No, not at all.

First of all, we shouldn’t start throwing away any of the good things we’ve learnt and developed. Just as the ”scientific method” is still a necessary but not sufficient condition for the progress of science, our values and ideas about great testing are still all-important in testing. They are just not sufficient.

I think we who belong to the Context Driven Testing school are far better equipped than the other testing schools to accept the sustainability point of view. After all, we’re already succeeding in developing testing into a sustainable performance. Other testing schools still struggle with their explicit or implicit underlying short-term profit-making ambitions.

And although we’re obviously playing a polyphonic piece of music, speaking with many voices, not saying or meaning exactly the same about testing or CDT, it seems to me that everyone in the CDT school shares the mission of developing testing as a craft: a creative, value-producing performance, where value is what matters to stakeholders of the product under test. Let me call this our shared mission.

This is a wonderful mission, but in the new context it has to give way to a better one: we’re only perceiving the craft of testing in isolation or in its immediate context, and we have to raise our heads and relate our craft to the greater context of society.

So I propose that we in the Context Driven School adopt the mission to develop testing towards being a wisdom enhancing performance, where wisdom is knowledge that helps build a sustainable society.

What do you think?

Integration Testing and Technology Convergence

I have grown to like my Android smartphone quite a lot. It’s about a year old now, but I’ve had a few smartphones over the last couple of years. This one, however, is the first where I feel it is making my life slightly better. The thing I really like is that it has ‘everything’ inside it, and that it all works reasonably well: in addition to being a phone and a communication device, it’s a torch, a camera, a map, a calculator, a travel booking service, and it allows me to stay in touch with my good friends no matter where I am.

All my previous Android and Windows CE based smartphones sucked at everything they did, except texting, calling, and playing the odd game.

Convergence is changing the way we use and perceive technology: where the selling points of a product used to describe the product itself (e.g. megapixels in a camera), features which allow products to integrate with each other are becoming more important to customers (e.g. wifi in a camera). This is because customers have observed how these ‘meta-features’ make things smarter and allow us greater flexibility in how we use the products.

I’ve been working as a tester on business systems for the past 10 years, and I’ve observed a similar trend: testing is transitioning from having a product focus to having an integration focus. So the changes that we’re seeing due to technology convergence in consumer electronics seem to be happening broadly in IT.

Integration testing is playing a much more prominent role in software projects today than it did just a few years ago. Where integration testing used to be regarded as a ‘phase’ in large-scale projects, we are now more and more carrying out integration testing on a continuous basis throughout projects. I’ve seen this change in the projects I’ve been working on, and I have had it described to me by friends and colleagues.

Project managers seem to have realised that system integrations are just too critical to postpone testing until the last days of a project or project cycle.
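One way to make continuous integration testing concrete is a small contract check, run on every build, verifying that the messages one system emits still match what another system expects. A minimal sketch, with an invented order schema and field names:

```python
def contract_violations(message, schema):
    """Compare a message against a field-name -> type contract and
    return a list of human-readable violations (empty means OK)."""
    violations = []
    for field, expected_type in schema.items():
        if field not in message:
            violations.append("missing field: " + field)
        elif not isinstance(message[field], expected_type):
            violations.append("wrong type for field: " + field)
    return violations


# Hypothetical contract between an order service and a billing system.
ORDER_SCHEMA = {"order_id": str, "amount": float, "currency": str}
```

A check like this is cheap enough to run continuously, which is exactly what makes it possible to move integration testing out of a late ‘phase’ and into the daily rhythm of a project.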

Niels Bohr said: ”It’s difficult to make predictions, especially about the future.” I’ll try anyway: I think we’re at the beginning of a development which might completely change the nature of testing. In the future, software testing will be predominantly focused on interoperability, system integration, robustness, and other factors buried in the structure of the products we’re testing. Functionality will be much less important.