Peer to peer knowledge flow

By Dave Snowden  ·  December 20, 2013  ·  Knowledge Management, Reflections

I promised yesterday to talk about peer-to-peer knowledge flow.  Like yesterday's post this has an ideological aspect as well as a practical one, and it's nice when they coincide.  I argued yesterday that people's own voice should be heard, and their own interpretation of that voice.  Removing mediating layers of interpretation reduces the dangers of misunderstanding; the same applies to knowledge flow.  Knowledge management has taken a route over the years of increasing summarisation to reduce the reading load of key knowledge workers.   I want to argue that while this has some utility it is not the whole picture, or for that matter the correct picture.  In doing this I want to draw on a mixture of sources and experiences, but in particular Fauconnier and Turner's work on the idea of Conceptual Blending.

Now in one memorable meeting in the Pentagon several years ago, I remember General Sorenson, then and now CIO for the US Army, saying that the only thing which worked in Iraq was platoon commanders blogging; no one paid attention to doctrine.  The point was that, in the field under fire, people wanted the narrative of other people's day-to-day experiences.  Those had contextual relevance, while doctrine had utility for training and background learning. In another case the best practice document about managing a hijacking in Singapore was simply thrown in the bin, not because it lacked utility but because it lacked immediacy.  There was no time to read it.  Even a superficial search will turn up many other such cases.

So picking up on Fauconnier and Turner's ideas: they argue that the brain assimilates fragmented data from both personal experience and, variously, through narrative.  It blends that (based on a contextual recall) with its current situation to come up with a unique and contextually appropriate form of action.  This is backed up (although not in the same formulation) by a wide variety of other sources.  We like fragmented, messy recall; it gives us evolutionary advantage and increases the chances of making abductive leaps.  In Cognitive Edge, and before that in IBM, I came into the whole area of narrative from the point of view of knowledge discovery.  That took me more down the route of using naturally occurring anecdotes than constructed or facilitated stories.   I also knew from my own experience, before I discovered the science, that my best insights came from blending experience with fragmented memories of things I had read or heard.

So when I started designing SenseMaker® post-IBM I wanted to replicate this natural process, using technology to augment but not replace human intelligence.  That capability is now there, and you might want to refresh on the signification concept in SenseMaker®, which uses deliberately abstract shapes such as triads.   If you want to experiment with this then download SenseMaker® onto your iOS or Android device and use activity code AgileAU2013 (a conference assessment and learning configuration), which will show you how it works.

Now we are increasingly moving towards continuous capture in journal form as an alternative to work reports, shift reports and the like.  That allows people to keep observations, suggestions and ideas in SenseMaker® as they do their job.  Typically we suggest that reporting requirements are reduced if people do this, and that generally provides sufficient incentive. Not having to complete a patrol report at the end of a stressful 12+ hour patrol is a significant incentive to keep your field notes up to date.  The same applies to field engineers, safety workers, social workers, health workers and many others.  A side benefit is real-time access to data across multiple sources, but we also, and critically for this post, get fragmented experience and contextually triggered commentary and ideas in the field.  Given that material is signified into a quantitative framework in real time, we also have the ability to provide for complex monitoring and recall.

But we can go further: we can take historical data and have experts signify that at a fragment or composite level.  We can take ideas and experience from related fields (the development sector for peacekeeping operations, for example) and signify those.  A field worker can then ask an ambiguous question and get fragments from multiple sources that they can conceptually blend with their current situation to come up with a unique form of action.  That action itself can be signified to build a body of knowledge that is constantly evolving and which is structured through human intelligence without the normal overhead of taxonomies and the like.   Doctrine or best practice documents can be linked into the same system, and we can create something I call Narrative Enhanced Doctrine, in which best practice documents are shorter and more summary in nature, with multiple HTML links to rich explanatory narrative that the reader can also select in turn.

Then a level beyond that, and one I am passionate about.  A child acting as a field ethnographer or citizen journalist as part of a project somewhere in the Philippines captures stories about something his parents have done on their farm to handle flooding.  He then searches for more stories like his and is connected to stories from someone in Africa on a related subject, as well as some drawings of basic paddles from Vietnam entered by an expert.  He then takes that highly visual and pragmatic material to his own parents and they adapt and adopt.  All of that is very different from those various experiences being synthesised by development workers in a major capital and then redistributed.

So we move to something that is peer-to-peer, but which incorporates material from expert and other sources; technology as augmentation, not imposition.