Right, let’s pull things together a little from the previous 3 posts.

Firstly, let's keep our focus on the word context, because it is key here.


“the situation within which something exists or happens, and that can help explain it”

Cambridge Online Dictionary

I’ve been finding this quite challenging, and I’ve rewritten this a few times: whilst it works in my head, it’s hard to express clearly in written form.

I suppose this fits with Snowden’s saying ..

“We know more than we can say.
  We can say more than we can write down”

Here goes ….

In a complex adaptive system, agents (let’s say, for the sake of argument, people) interact with and modulate the system. Modulate, remember, not drive. In return the system affects and changes the agents, so the whole thing keeps changing and you simply cannot predict how it will turn out.

Except in retrospect, of course, because hindsight is always a wonderful thing; in complex systems, cause and effect are only correlated in retrospect.

So in the complex space, even if we have done something similar before, there is no guarantee that what we did last time will work this time. Indeed, we can be fairly sure it won’t, because the context is always different in some way. For example, we may have a similar problem but a different organisation or industry; even having different people may make something complex.

However, at a given moment, for a given context, we can work out how to resolve a particular complex problem, and in doing so move it from being a complex problem to being a complicated problem by wrapping constraints or rules around it.

Carrying out safe-to-fail probes is how we make this transition; this is the Cynefin Complex-to-Complicated dynamic (blue in this post).

So let’s talk through this…

Right, we know that we have a complex issue / problem / situation: it’s something that we have never done before, it’s really big, it’s scary, it’s just plain old hard, and we don’t really have any experts that we can call on to help us. We are a bit stuck, to be honest.

(Liz Keogh has a great blog post on estimating complexity and how we work out what type of problem we have; I use this.)

We know that the very nature of a complex adaptive system means that we cannot possibly hope to “analyse” it, so we need to do something else.

If we don’t know what we should be doing, then this is where capturing narratives comes into play; see Cynefin #1 – Managing Change in a Complex Environment.

So here we are assuming that we have a sense of where we want to go, a vision of where our journey may end. We perhaps just don’t quite know how we get there at the moment, and to be honest we cannot even be 100% sure of the end position anyway.

So if we cannot analyse, what do we do now?


The general idea is that we run multiple experiments, called probes, and these need to be safe to fail.

The safe bit is quite an important distinction. In agile we often talk about failing fast; however, failing in a disastrous way is not what we want to be doing, regardless of speed. So, simply put, safe to fail means that the world won’t metaphorically end if the experiment fails. In development terms we might carry out a spike, perhaps in a development environment; that way, if it bombed, it wouldn’t matter.

For a new product you might use A/B testing; you might do the same for a new website. Amazon do this all the time.
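As an illustrative sketch only (the variant names and session outcomes below are entirely made up), an A/B test boils down to stably bucketing users into variants and comparing outcomes:

```python
import random

# Hypothetical probe: split traffic between two versions of a checkout page
# and compare conversion rates. Variant names and data are invented.
VARIANTS = ("current_checkout", "new_checkout")

def assign_variant(user_id: int) -> str:
    """Deterministically bucket a user into a variant (stable per user)."""
    random.seed(user_id)  # same user always lands in the same bucket
    return random.choice(VARIANTS)

def conversion_rate(outcomes: list[bool]) -> float:
    """Fraction of sessions that converted."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

# Simulated session outcomes per variant (fabricated for illustration)
results = {
    "current_checkout": [True, False, False, True, False],
    "new_checkout":     [True, True, False, True, False],
}

for name, outcomes in results.items():
    print(f"{name}: {conversion_rate(outcomes):.0%}")
```

The point, in Cynefin terms, is that either outcome is safe: if the new variant fails, only a slice of traffic saw it and we simply switch it off.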

But the key thing is that it’s about running multiple coherent probes at the same time, not just one.

Snowden talks about running different types of experiments, some conflicting, some naive, some oblique, but the basic idea is that we don’t want to put all of our eggs in one basket. We need to explore different ways of solving our problems without converging on a single solution too early; we need to keep our options open.

The coherent part is interesting. When we are working out what experiments / probes to carry out, we could get disagreement on the validity of one experiment over another. Coherent just says that the experiment has to make sense, not that it’s the right thing to do. This can defuse arguments, as we can usually agree something is coherent without having to compromise our “belief” about what is right or wrong.

One way we can decide if something is coherent is through ritual dissent, which is simply where an individual, or a representative of a group, putting forward the idea for a probe explains it to another group. The other group sits in silence, listens to the presentation, and then proceeds to critique the idea, or even rip it to shreds, basically saying why it isn’t coherent, why it won’t work. The presenter, however, turns their back and listens to the feedback in silence, so that the group cannot see their reaction, which might affect the feedback if people started to feel sympathy for the presenter.

We absolutely must be able to identify when a probe is working and when it’s failing. If it’s working, we want to carry on doing more of the same, called amplify; if it’s failing, we need to stop it quickly, referred to as dampen.
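To make the amplify/dampen idea concrete, here is a minimal sketch, assuming each probe reports some success signal between 0 and 1 (the probe names, signal values, and thresholds are all invented for illustration):

```python
# Hypothetical sketch: several probes run in parallel, and per probe we
# decide whether to amplify (it's working), dampen (stop it quickly),
# or keep watching. Thresholds are assumptions, not Cynefin doctrine.

AMPLIFY_THRESHOLD = 0.6  # assumed: signal at or above this means "working"
DAMPEN_THRESHOLD = 0.3   # assumed: signal at or below this means "failing"

def decide(signal: float) -> str:
    """Map a probe's observed success signal to an action."""
    if signal >= AMPLIFY_THRESHOLD:
        return "amplify"  # do more of the same
    if signal <= DAMPEN_THRESHOLD:
        return "dampen"   # stop it quickly
    return "watch"        # not clear yet; keep observing

# Fabricated signals for a portfolio of parallel probes
probes = {
    "spike_in_dev_env": 0.8,
    "a_b_test_checkout": 0.5,
    "naive_experiment": 0.1,
}

for name, signal in probes.items():
    print(f"{name}: {decide(signal)}")
```

The crucial part is agreeing what the signal is, and what amplification and dampening look like, before the probe starts, not after.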

Snowden’s Children’s Party story is fun and relevant.

Interestingly enough, here I think very much of the lean product development / lean startup movement: the whole concept of finding a customer product fit, doing as little as you can to get quick feedback and validated learning through an MVP, is a probe to my mind.

However, when I asked Snowden about this at the course he was really very negative about the whole concept, although to be fair he did acknowledge that this was based on the fact that Ries doesn’t, or didn’t, have a weight of academic research backing him up.

But for me Ries sits squarely here.

The outcome is that we identify one or more possible ways to resolve the situation, and from here we can wrap constraints around it and move it into the complicated domain.

In a development sense, you could say that wrapping constraints means writing the code, building the solution for this context.

In a product sense it may be identifying a product that will fit this market at this time (context) and building it.

Organisationally, it may be finding out what drives the workforce and motivates them at this time.

… but we must be ever so careful, because what works now might not work next time!