How can we take advantage of what Theory of Constraints teaches, and bring in thinking from other disciplines, in order to learn? Specifically, how do we learn from a single occurrence - an occurrence of something going awry? This was the question that Eli Schragenheim tried to answer in his talk this morning on "Learning from ONE event: A structured organizational learning process to inquire and learn the right lessons from a single event."
But before we get to the serious stuff, Schragenheim reminded us of a key line from The Producers (toward the end):
How could this happen? I was so careful. I picked the wrong play, the wrong director, the wrong cast. Where did I go right?
Wasn't he surprised?
Learning is about updating our understanding and knowledge. It's about updating the beliefs we have in causal connections. And in this case, it is about generalizing from a specific case to improve how we (the organization) do things in the future. It is not simply, "don't do THAT again."
My biggest takeaway from this talk was that Schragenheim has encapsulated a lot of what I have seen in the knowledge management community (and the education community) around learning from our experiences. Of course, ongoing learning is a key part of the continuous improvement world as well. The short form is: plan an activity; take the action; check whether the action created the desired results; if not, correct and try again.
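As a rough sketch, that plan-act-check-correct loop can be written out in code. This is only an illustration of the general pattern; all of the names and the toy numbers below are mine, not from the talk:

```python
def improvement_loop(plan, act, check, correct, max_iterations=5):
    """Repeat an activity until it produces the desired result.

    plan    -- returns the intended action and the expected outcome
    act     -- carries out the action and returns the actual outcome
    check   -- compares expected vs. actual; True means success
    correct -- adjusts the action (and possibly the expectation) based on the gap
    """
    action, expected = plan()
    for _ in range(max_iterations):
        actual = act(action)
        if check(expected, actual):
            return actual  # desired result achieved
        action, expected = correct(action, expected, actual)
    return None  # gave up after max_iterations


# Toy usage: the "action" is a number we tune until doubling it hits a target of 10.
result = improvement_loop(
    plan=lambda: (4, 10),
    act=lambda action: action * 2,
    check=lambda expected, actual: expected == actual,
    correct=lambda action, expected, actual: (action + 1, expected),
)
```

The key structural point - and the one Schragenheim builds on - is that the loop only works if you state the expected outcome up front, so that `check` has something to compare against.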
Schragenheim wasn't so much focused on the iterative loop itself as on learning after being surprised by the outcome of some activity. Surprise was a key aspect of this discussion: if you don't articulate what you expect to happen, it is difficult to know what a surprising result would be. And even when you are surprised, it can be hard to articulate exactly what the surprise is. Once surprised, go through a process of discovery. What is the gap between what was expected and what happened? What is the nature of the gap? Was it a failure in execution (of a project/event) or a failure in performance (of the product)?
Once the gap is articulated, the next step is to generate some hypotheses as to why it happened. Were the expectations flawed? Was the execution flawed? Or, rarely, was there a statistical fluke? These hypotheses are the starting point for the investigation that the team conducts. Check the logic of the hypotheses - a familiar activity for the TOC community - do the hypotheses describe a likely scenario? Do they need more detail behind them to understand the proposed cause and effect? The hypotheses also help the team focus on where to look for evidence and information, as opposed to a fishing expedition that collects every possible piece of data. There may be some iterations of asking "how come" or "why" to get at the hypotheses and information that explain what happened.
The next step is to identify the underlying causal relationships - the (small p) paradigm - that created the situation. Then take that understanding of what happened and propose the smallest possible change to prevent the same thing from happening again. Note this is unlikely to be a sweeping (large P) Paradigm shift. It's a small tweak in the way we do things. It's also not an opposite action from the way things happen now. Of course, along with the proposed change, the team should check that it won't create negative branches. AND they should work out the procedures for implementing the proposed change.
Not only should the team look at the specifics of preventing a repeat of the situation, but they should also ask the larger question about the organization: where else might the old paradigm be at play in the organization? How can we make this change more general to help the organization as a whole?
For people who pay attention to organizational learning, I don't know that there is anything surprising here. The addition from TOC is the specific reference to cause-and-effect logic and, to a lesser degree, the idea of allowing yourself to be surprised. But I hear that in other disciplines as well. I liked how Schragenheim put this together, though.