Joe Crumpler has an interesting discussion of the risk of exposure in Observable Work. The problem:
Observable Work has a flaw which is difficult to overcome. The general issue involves dealing with conflict in an open forum. The specific issue is the risk of exposure resulting from an admission in a post that might lead an influential third party to conclude that the author is to blame for a failure on a project. This fear paralyzes free expression and acts as a damper on clear communication.
And the proposed solution is to add boundaries to the "levels" of observability, which translates to allowing wider observability alongside a wider contextual understanding - a part of which is pictured here.
The statement in the opening about fear strikes again. When I bring up the topic of observable work with a new crowd, I often hear about the concerns (fears) people have with exposing raw thoughts and ideas to colleagues or to permanent storage, when those ideas aren't ready for prime-time viewing. The pushback has to be that people are already talking about these things; it's just happening in email (bad!) or in regular conversations over lunch or in private meetings (good!). The answer isn't to disable electronic communications, it's to make those electronic communications make sense.
In the past, I have argued that blogging and other social media could overcome some of these concerns, because more of the context of the discussion and idea generation is preserved. While this is true, it still doesn't cover some of the concerns that Joe describes in his piece. Here the worry isn't about half-formed ideas; it is about blame and about people misusing information that could be very helpful in the context of a team or small group.
One element that they didn't discuss is why there is a culture of fear in the first place. Can anything be done to raise the level of trust within the organization so that the fears prove to be unfounded? In the meantime, we are still working with human beings and need to give them mechanisms where they can feel safe.
Stepping out of the technology question for a bit, the proposed solution of keeping difficult discussions private and then opening up as more is learned - and more input is needed - makes a lot of sense in any human setting. Isn't that how we do it today?
[Thanks to the Twitter stream for pointing me toward this article and blog.]