As testers we need to better understand and be explicit about problems in testing that don’t have known, clear, or obvious solutions. Cynefin can help by transforming the way we, our teams, and our stakeholders think about testing problems.
Ben Kelly and James Christie have written very good blog posts about Cynefin and testing. Liz Keogh was one of the first to write about Cynefin in development. At the bottom of this post, I have included a video with David Snowden and a link to an article I found interesting when I read it.
With this blog post I’m sharing elements of my own understanding of Cynefin and why I think it’s important. I think of Cynefin itself as a conceptual framework useful for comprehending dynamic and complex systems, but it is also a multi-faceted “tool” which can help create context-dependent conceptual frameworks, both tacit and explicit, so that we can better solve problems.
But before diving into that (in particular, explaining what a conceptual framework is), I’d like to share something about my background.
Product design and the historic mistakes of software development
I studied product design at university in the early 90s. Creating new and innovative products does not follow obvious processes. Most engineering classes taught us methods and tools, but the product design classes were different.
We were taught to get into the field, study real users in their real contexts, develop understandings of their problems, come up with prototypes and models of product ideas, and then try out these prototypes with the users.
Discussing an early draft of this post with James Christie, he mentioned that one of the historic mistakes of software development has been the assumption that it is a manufacturing process, whereas in reality it is far more like research and development. He finds it odd that we called it development, while at the same time refusing to believe that it really was a development activity.
SAFe, “the new black” in software delivery, is a good example of how even new methodologies in our industry are still based on paradigms rooted in knowledge about organizing manufacturing. “The Phoenix Project”, a popular novel about DevOps, states on the back cover that managing IT is similar to factory management.
What I was taught back in the 90s still helps me when I try to understand why many problems remain unsolved despite hard work and many attempts to solve them. I find that sometimes the wrong types of solutions are applied: solutions which don’t take into consideration the true nature of the issues we are trying to get rid of, or the innovations we’re trying to make.
Knight Capital Group, a testing failure
The case of Knight Capital Group is interesting from innovation, risk, and software testing perspectives alike, and I think it exemplifies the types of problems we get when we miss the complexity of our contexts.
Knight Capital Group was one of the more aggressive investment companies on Wall Street. In 2012 they developed a new trading algorithm. The algorithm was tested using a simulation engine, presumably to assure stakeholders that the new algorithm would generate great revenues.
The testing of the algorithm was not enough to ensure revenues, however. In fact, the outcome of deploying the algorithm to production was great losses and the eventual bankruptcy of the company after only 45 minutes of trading. What went wrong?
There are always several complementary perspectives. The U.S. Securities and Exchange Commission (SEC) put it this way:
[…] Knight did not have a system of risk management controls and supervisory procedures reasonably designed to manage the financial, regulatory, and other risks of market access […] Knight’s failures resulted in it accumulating an unintended multi-billion dollar portfolio of securities in approximately forty-five minutes on August 1 and, ultimately, Knight lost more than $460 million […]
From a testing perspective, it’s interesting that the technical root cause of the accident was that a component designed to test the algorithm by generating artificial data was mistakenly deployed into production along with the algorithm itself. This test component created a stream of random data, and the effect was that the algorithm issued purchase orders for worthless stock.
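To make the failure mode concrete, here is a minimal, hypothetical sketch (not Knight’s actual code; the names and the guard are my own assumptions) of a test-only data generator and the kind of environment check that would have made an accidental production deployment fail loudly instead of silently feeding an algorithm artificial quotes:

```python
import random


def market_data_stream(environment: str):
    """Yield (symbol, price) quotes.

    In a test environment this generator fabricates random quotes so a
    trading algorithm can be exercised without a live market feed. The
    guard below is the whole point: without it, deploying this module
    to production would silently feed the algorithm artificial data.
    """
    if environment == "production":
        # Fail fast rather than trade on fabricated prices.
        raise RuntimeError("Test data generator must never run in production")
    while True:
        yield ("TEST", round(random.uniform(1.0, 100.0), 2))


# In a test environment, quotes flow as expected.
stream = market_data_stream("test")
symbol, price = next(stream)
```

Note that a generator function only runs its body on the first `next()` call, so the guard fires as soon as production code actually tries to consume the stream.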
It is paradoxical that the technical component that caused the accident was designed for testing, but it is not uncommon for software testing to focus on relatively obvious, functional, and isolated performance perspectives of the system under test.
Cynefin transforms thinking
Let’s imagine you’re the test manager at Knight and you choose to use Cynefin to help you develop the testing strategy for the new algorithm. David Snowden talks about Cynefin as a ‘sensemaking tool’, and if you engaged Knight’s management, financial, IT-operations, and development people in a facilitated session with a focus on risks and testing, I’m pretty sure the outcome would be the identification of the type of risk that ended up bankrupting the company. You could then have prevented the accident, either by explicitly testing the deployment process, or by making sure operations and finance put the necessary “risk management controls and supervisory procedures” in place.
I think so because, even with my limited experience so far, I have seen how Cynefin sessions are great for forming strategies to deal with the problems, issues, challenges, opportunities, etc. that a team is facing. It helps people talk seriously about the nature of problems, transform them, and escalate the things that require escalation.
Cynefin seems to be efficient at breaking the traditional dominance of linear and causal thinking that prevents the solving of anything but the simplest problems.
My interpretation of what is happening is that Cynefin helps extend the language of those participating in sessions, and in the following I’ll dive a bit more into why I interpret it that way.
Language and Conceptual Frameworks
Language is an everyday thing that we don’t think about, yet it is the very framework which contains our thinking. While we can know things we cannot express (tacit knowledge), we cannot actively think outside the frames our language creates.
Many philosophers have thought about this, but I’d like to refer to the physicist Niels Bohr (1885–1962), who in several of his lectures, articles, and personal letters talks about the importance of language. Poetically, and paraphrasing him from memory, he describes language as the string that suspends our knowledge above a void of endless experiences.
In a lecture, “The Unity of Science”, given at Columbia University, New York in 1954, Bohr introduces language as a “conceptual framework” and describes how quantum physics is an extension of the previous conceptual framework used in physics:
“[it] is important […] to realize that all knowledge is originally represented within a conceptual framework adapted to account for previous experience, and that any such frame may prove too narrow to comprehend new experiences.”
“When speaking of a conceptual framework, we merely refer to an unambiguous logical representation of relations between experience.”
Quantum physics is more than new laws about nature. Rather, it introduced new and complementary concepts like uncertainty and non-deterministic relations between events. The extension was made for quite practical purposes, namely the comprehension of observations, but it has turned out to be quite useful:
“By means of the quantum mechanical formalism, a detailed account of an immense amount of experimental evidence regarding the physical and chemical properties of matter has been achieved.”
The rest is history, so to speak.
Why is this relevant to software testing and the talk about Cynefin? First of all, I think that the conceptual frameworks based on the thinking developed during industrialism are far from capable of explaining what is going on in software development and therefore also in testing. Further, Cynefin seems to be an efficient enabler to create extensions to the old thinking frameworks in the particular contexts in which we use it.
Cynefin and software testing
Software development does not generally follow simple processes. Development is obviously a human, creative activity. Good software development seems to me to be much more like a series of innovations with the intention of enabling someone to do things in better ways.
Testing should follow that.
But if language limits us to different types of linear and causal thinking, we will keep missing the fact that there is generally no simple, algorithmic, or even causal connection between the stages of (1) understanding a new testing problem, (2) coming up with ideas, and (3) choosing solutions which are effective, socially acceptable, possible to perform, safe, and useful.
Experienced testers know this, but knowledge is often not enough.
James Christie added, in his comments on the early draft mentioned above, that with Cynefin we testers can better justify our skepticism about inappropriate and simplistic approaches. Cynefin can make it less likely that we will be accused of applying subjective personal judgment.
I would like to add that the extended conceptual framework which Cynefin enables with us, our teams, and our stakeholders furthermore allows us to discover new and better approaches to problem solving.
David Snowden on Cynefin
This video is a very good, quick introduction to Cynefin. Listen to David Snowden himself explain it:
I personally found this article from 2003 a very good introduction to Cynefin:
The new dynamics of strategy: Sense-making in a complex and complicated world (the linked page contains a link to download the article)