A long time ago I settled into a new strategic role within DataScience following the best part of a decade as consultant/programmer and then General Manager of the Decision Support Software business. Part of that new role involved being parachuted in to spend time with key clients who wanted a more intellectual conversation than the responsible account executive was willing or able to provide. In terms of corporate survival it is a useful reputation to acquire, but I had to put up with some gentle ribbing as a result. At one sales conference I was satirised (affectionately, I think) as the thinking woman’s thinking person. You have to be of a certain age and British to get that reference. One of those clients was John Taylor, then Chief Executive of the Dental Practice Board. We got on from the day we realised we were the only two people in the room who knew that Isaac Newton had considered his alchemy as, if not more, important than his scientific work. Worse still, we were the only two who even cared about it, or thought it was significant. It was for John that I created the first version of the known-unknown-unknowable model that was an early predecessor of Cynefin. In a modified form, and possibly as a parallel invention, the nomenclature became notorious and I’ve only recently started to use and develop it.
So when I was searching for an image for this follow-up to yesterday’s post that conversation came back to me. In consequence I chose An Alchemist’s Laboratory by Jan van der Straet, given that the word pseudo-science first came into use in the Eighteenth Century to describe alchemy. As a word it comes from the Greek root pseudo, meaning false, and the Latin scientia, meaning knowledge. So literally it means false knowledge and is thus a pejorative epithet. To apply it to something that uses true science, but does not make any wider claim, is a misnomer at best.
Now, while I know Simon was not being malicious in using the word, there is a clear need to make the distinction between something that is based in science and can be validated against its sources, and something that claims to be a science. Hopefully that is clearer.
Readers may want to skip what follows. I’ve written it up for the record given a link Simon made to a polemic that I am surprised he took seriously. But given he made the link and thereby gave it some credibility I decided to provide a response.
I suspect Simon just did a web search and found Tom’s post but didn’t investigate it further. If he had, he would have discovered that Tom was once an enthusiastic advocate of Cynefin, one of a group of over-enthusiastic advocates who have always made me slightly nervous. Tom decided at one point that I didn’t understand Chaos while he, as a self-nominated wizard, did. So he produced a new version of Cynefin. I politely pointed out that he was welcome to create his own framework and acknowledge his sources, but not to usurp the Cynefin brand. I had started with Boisot’s I-Space and said as much in my response to Tom. Either way, he became heated and said that I didn’t understand chaos as my only solution was to get the hell out of it. I then pointed to several references where I had clearly stated that this was the strategy if you accidentally fell into it, but that deliberate entry was a desirable thing for the purpose of innovation and distributed decision support. So his assertion was wrong and here was the evidence in peer-reviewed articles. That didn’t go down well and my comment with said references was deleted. My protest at this, along with the comment that he was using chaos where I used complex, resulted in a tirade in which he explicitly stated that he hadn’t read any of the articles I referenced and didn’t intend to, as doing so made him physically sick. At that point I realised he had a problem and might be better left alone, although I haven’t been able to resist the odd poke from time to time. I probably shouldn’t have pointed out that Tom has always self-published and never submitted his material to peer review. But given he was asserting a position of knowledge on what was or was not science, it seemed a fair enough point.
I’ve happily ignored Tom’s argument that Cynefin is a pseudo-science over the years, and Tom has deleted any comment I made on his posts. But as Simon has drawn attention to his views, I thought I would run through the pseudo-science criteria that Tom uses by way of expanding on what is or is not pseudo-science in respect of Cynefin. So here goes, with the criteria in italics.
- Isolation – failure to connect with prior and parallel disciplines
Anyone who has read any of the articles that describe Cynefin will see multiple references to the supporting sciences. The only article that doesn’t is the HBR one, as their style guide prohibits it. In the early articles I even reference people I disagree with, such as Stacey, to make sure that due acknowledgement is given to all sources. So I think we are OK there.
- Non-falsifiability – no means to invalidate hypotheses
Given that all the sources are clearly identified, they can be checked. Cynefin as such does not advance a hypothesis; it references and uses the verified hypotheses of others.
- Misuse of data – leveraging data out of context or beyond validity
If anyone can give me an example of this, please do. I’ve had a lot of peer review over the years on many articles and no one involved has accused me of that.
- No self-correction, evolution of thought – often centred round a single ‘thought-leader’
Easy to refute if you look at the evolution of Cynefin over time. I’ve constantly interacted with (and acknowledged) the intellectual contribution of people like Boisot and Juarrero. I’ve also co-authored papers with a range of people over time: Kurtz, for example, who worked for me in the IKM and did a lot of the method development around Cynefin; Boone on the HBR article; and more recently Prof Marks with two peer-reviewed articles on the use of Cynefin in health. I’m currently working on three other co-authored book chapters or articles. Yes, I’m the primary author in respect of Cynefin, but others have been involved, critically and to my benefit.
- Special-pleading – the claim that this is a special-case that can’t be measured in any other terms
I’ve examined my conscience on this and I can’t think of any such case. Again, the references are clear and can be found, read and checked. No special pleading has been made.
- Unfounded optimism – unrealistic expectations
That would apply to a claim of predictive power, which is not made; only a claim for sense-making is.
- Impenetrability – an over-dependence on complicated ideology and obfuscation, or bluster in place of debate
Given that the HBR paper won an Academy of Management award as the best practitioner paper, and my first significant paper was recently recognised as the third most cited paper of all time in Knowledge Management, I think we can refute that one. Many people have picked up and used Cynefin, and used it authentically, if maybe not the way I would have in the same circumstances. If the material is impenetrable to some, it is not to a significant number, and I’m fine with that.
- Magical-thinking – such as “the belief that good things will result from willpower alone”
I never really bought the willpower idea, and if this were the case nothing would have got through peer review.
- Ulterior motives – particularly ulterior motives of a commercial kind
Possible, but I’ve put the framework into the public domain and have never required attendance on a training course prior to use. So there is no dependency.
- Lack of formal training – including certification schemes that link back to #4
We have formal training, but we don’t certify and we don’t require certification or training to use the framework.
- Bunker mentality – such as complaints about being ‘misunderstood’ by others, and often linked to #5 and #7
Misunderstandings happen, but I am remarkably tolerant of people’s use. If I wasn’t, Cynefin wouldn’t have the level of citation and use it does. People have created games using it on which we get no royalty or control, and so on. My general policy is to live with four-domain versions of Cynefin (although I do point out there are five if people persist) and to retweet or reference most uses of the framework. I can think of only three occasions in over fifteen years where I have had to be firm when people have misrepresented the framework or, as in the case of Tom, hijacked the brand.
- Lack of replicability of results – especially replicability by others under controlled conditions
This is an interesting one and I largely covered it in yesterday’s post. The Cynefin framework is based on science, so if people check out the sources they should come to similar conclusions, or at least not incompatible ones. Methods taught within Cynefin can be tested and replicated, and people do.
Everything I’ve said above can be verified, so I’m surprised Simon gave Tom’s rantings credibility. But I respect Simon, so I’ve responded as promised.