
Expanding Horizons and Cognitive Computing

21 Nov

The cognitive computing software we are currently working with is methodologically based on systems thinking. A key influence was the work of the ecologist and cybernetics expert Frederic Vester. Vester (2007) argued that organisational systems fail because only a single aspect of a problem is ever explored, an approach he referred to as “symptom bashing”.

An organisation, like any aspect of life, is a network of interlocking relationships, interdependencies and features; the features can be referred to as nodes. The relationships between the various nodes of a network are crucial to an improved understanding of the wider system. Vester observed that organisational planning tends to over-emphasise the nodes, with very little attention paid to the relationships and the interdependencies that arise from them.

For example, in organisational change, a CEO wants to streamline the organisation by reducing the number of management positions. Each management position is a node in a network and can be identified clearly on a spreadsheet.

From this spreadsheet, the CEO can select which positions to retain, restructure or remove. What is not fully known to the CEO is the relationship each of these nodes has with the rest of the organisation. The formal rationale for restructuring, presented as the symptoms of too many managers, slow decision making and too much cost, makes sense, but the cost of change, which will only become apparent over time, is unknown.

In other words, the nodal management positions are known in theory (job descriptions, formal responsibilities etc.), but how this theory has been adapted in practice, and what relationships have formed around the delivery of key business areas, could be fuzzy at best. What occurs is an initial saving followed by the appearance of “unexpected events”, the latent effect of system neglect. Taking a systems approach does not make all risk knowable, far from it, but the risks that change can bring can be better prepared for, creating a more resilient restructure.
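To make the node and relationship distinction concrete, here is a minimal sketch in Python. The positions and relationships are invented for illustration; the point is only that a spreadsheet of positions shows the nodes, while the working relationships stay invisible until something is removed.

```python
# Illustrative sketch: the positions and relationships below are invented.

# The "spreadsheet" view: nodes only.
positions = ["ops_manager", "site_manager", "quality_lead", "delivery_lead"]

# The system view: working relationships (edges) that never appear on the spreadsheet.
relationships = {
    ("ops_manager", "site_manager"): "daily scheduling",
    ("ops_manager", "quality_lead"): "incident sign-off",
    ("site_manager", "delivery_lead"): "client handover",
}

def hidden_dependencies(position, relationships):
    """Relationships that disappear if this node is removed."""
    return {pair: label for pair, label in relationships.items() if position in pair}

# Removing "ops_manager" looks like deleting one row on the spreadsheet,
# but it severs two working relationships the CEO may not know exist.
print(hidden_dependencies("ops_manager", relationships))
```

The spreadsheet lists four rows; only the relationship map shows what is actually severed when a row is deleted.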

A lot of system neglect is human nature. We, as human beings, have a natural bias to jump to conclusions and then stick to them, explaining away any evidence or data which might contradict our initial assumptions (see Kahneman, 2011, for excellent examples). High-level expertise frequently presents as the ability to let go of conclusions and adapt quickly to new information (Klein, 2007). However, the conditions need to be supportive for this “letting go” to occur (Klein, 2014).

A practical method of providing this support, and so improving strategic decision making, forecasts and analysis, is to think about the system as opposed to the nodes, or, put more simply, to get a broader perspective. For example, research on affective forecasting (Wilson et al., 2003; Greening, 2014) shows that forecasts about the outcome of an event or a future state improve when the forecaster speaks to someone who has already lived that future.

Forecasting and decision making in small groups can become overly focused on the perceived resources, abilities and knowledge of the team to achieve a goal, the inside perspective (Kahneman, 2011). This type of planning leads to over-optimism and little consideration of what time and uncertainty will throw in the team’s direction.

If, however, the team speaks to someone who has made and lived through a similar decision and faced the unexpected events that followed, then the team is being exposed to the outside view. This takes the focus away from the nodes (the team’s own attributes) and broadens the perspective to the potential relationships of those nodes to the wider system, the outside perspective, over a period of time.

This is one of the things the cognitive computing software we are working with aims to achieve. It aims to collect a wide range of experiences, forecasts and anticipations from a variety of decision makers and make these available to a user, so that the user is able to view their environment more as a system and less as a series of nodes. The software is able to present information to a user in a time-critical format, so the user is able to see the latent effect of decisions and the emergence of potential unintended consequences.
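The software itself is not reproduced here, but the underlying idea of a time-critical format can be sketched. The toy example below is purely my own illustration, with invented field names and data; it simply attaches a time horizon to each contributed experience so that latent effects can be laid out in order.

```python
from dataclasses import dataclass

# Purely illustrative toy, not the software described above: fields and data are invented.

@dataclass
class Anticipation:
    topic: str               # the decision area, e.g. a restructure
    observation: str         # what a contributor saw happen
    months_until_felt: int   # how long the effect took to surface

experiences = [
    Anticipation("restructure", "initial payroll saving", 1),
    Anticipation("restructure", "decisions slowed as informal contacts were lost", 6),
    Anticipation("restructure", "key client relationship weakened", 12),
]

def timeline(topic, experiences):
    """Order contributed experiences by when their effects were felt."""
    relevant = [e for e in experiences if e.topic == topic]
    return sorted(relevant, key=lambda e: e.months_until_felt)

for e in timeline("restructure", experiences):
    print(f"+{e.months_until_felt} months: {e.observation}")
```

Laying contributions out along a timeline, rather than as a flat list, is what lets a user see consequences that only emerge months after the initial decision.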

Viewing a decision as part of an interconnected and changing system broadens the perspective. A broadening of perspective allows a decision maker to let go of initial assumptions and adapt to changes in the system. This can be achieved by accessing the knowledge and experience of people who have lived with the system in question over a period of time. For the CEO in the earlier example, this means a sharp analysis of the key relationships the managers have built up in their roles. The result is concrete knowledge of what the CEO will actually be restructuring, beyond the spreadsheet nodes, and the risks that brings. For a user of cognitive computing software, this can mean exploring the system in question through the eyes of potentially thousands of people who have already lived a similar future.

Reading

Wilson, T.D. & Gilbert, D.T. (2003) “Affective Forecasting”. Advances in Experimental Social Psychology, 35: 345–411

Kahneman, D. (2011) Thinking, Fast and Slow. Penguin

Gilbert, D. (2013) “Affective Forecasting … or … The Big Wombassa: What You Think You’re Going to Get, and What You Don’t Get, When You Get What You Want” (p. 45) in Brockman, J. (ed.) (2013) Thinking: The New Science of Decision-Making, Problem-Solving, and Prediction. Harper Collins

Klein, G. (2014) Seeing What Others Don’t: the remarkable ways we gain insight. Public Affairs.

Klein, G. (2007) The Power of Intuition: How to Use Your Gut Feelings to Make Better Decisions at Work. Currency

Vester, F. (2007) The Art of Interconnected Thinking: Tools and concepts for a new approach to tackling complexity. Malik Management (English language edition)

Human Judgement and Cognitive Computing

13 Sep

McKinsey have published an outstanding interview with Gary Klein and Daniel Kahneman. The interview is a reflection on Klein and Kahneman’s classic paper, Conditions for Intuitive Expertise: A Failure to Disagree (2009). Whilst the interview reflects on the two authors’ positions when it comes to intuitive decision making, the prime focus is on executive judgement: is intuition a good basis for top-level business decision making? In this article I’ll briefly reflect on some of the key points raised by Kahneman and Klein, and on how aspects of cognitive computing could potentially support some of the authors’ suggestions.

Continue reading

The Effect of Culture on Decision Making

16 May

In my early days researching organisations, culture was never high on my list of priorities. I was mostly focused on behaviour, cognition and decision making. This meant I was investigating how people make sense of their environment, use their environment as a resource, make choices and update choices (or not) based on environmental feedback. As much as this research tells you, it all plays out on a stage which influences both behaviour and cognition, and this stage could be described as culture. In other words, culture, as part of an environment, enables, influences and loads both behaviour and reasoning within an organisation.

I consider culture a vital part of the environment people interact with, use and are influenced by in their decision making in organisational settings.

Continue reading

Getting Answers from Unusual Places

30 Mar

Sometimes the answer to a problem comes from an unusual place that is right in front of us. This article covers how we can potentially transfer expertise from one discipline to another.

Many organisations contain expertise which stretches across multiple disciplines. For example, construction, engineering, and research and development all contain reservoirs of expertise particular to each discipline’s training, experience and cultural sense making. All these disciplines apply their expertise to design, deliver and problem solve during the completion of tasks. Problem solving, and dealing with tough non-routine cases, are generally the points where expertise becomes most active and innovation takes place (Taleb, 2012; Klein, 2014).

Continue reading

The Rise of Unstructured Data

17 Feb

IBM’s website features an interesting statistic: 80% of new data is unstructured. This means that new data is largely in the form of blogs, tweets, white papers and articles, amongst many other types. The vast amounts of unstructured data could have, and are already having, quite profound effects on the way people and organisations view and analyse “soft data”.

Continue reading

The Benefits of a Growth Mindset

3 Feb

Why do some people seem to improve at tasks whilst other people stay still? How do some people, teams, and entire organisations seem to bounce back from unexpected events whilst others seemingly never recover? And what provides some people with an accurate sense of “what is going to happen next?”

Philip Tetlock and Dan Gardner (2015) recently revealed the results of their multi-year forecasting tournament. The aim of the tournament was to identify people who could forecast world and local events accurately. Once these people were identified, the task was to assess what behaviours and methods produced this accuracy.

Continue reading

Increase Knowledge, Improve Health

22 Oct

In the previous article I made a case for health and well-being strategies aimed at prevention placing at least some focus on increasing a community’s requisite variety (Weick and Sutcliffe, 2007) and adaptive toolkit (Gigerenzer, 2008). To achieve these aims, an emphasis needs to be placed on knowledge acquisition. In this follow-up article I’ll unpack what I mean by knowledge acquisition in this context and the role it can play in strategy and evaluation.

Health and well-being strategies are so complex that it is easy for the teams who design strategies to become disconnected from the people who implement and receive them (Klein, 2007). When the aim of a strategy is to improve the prevention of poor health and well-being outcomes, I’ve argued that a key to delivering the strategy is a focus on knowledge acquisition. A quick recap: prevention improves by increasing the amount of related knowledge within a community. Improved knowledge improves the quality of decision making, and it also increases the number of coping mechanisms available to a person and a community, thereby increasing resilience. However, this knowledge counts for nothing if it is too complex, hard to understand and very difficult to apply. For example, if you have been introduced to a new technique and it is complicated, what are the chances you are instinctively going to apply it when placed under pressure? The chances are you will apply previously ingrained techniques; after all, they are familiar and intuitive, regardless of how dysfunctional they might be.

So, what are the sources of simple and effective knowledge? These sources are created and applied by the implementers and receivers of a strategy. When aims, objectives, procedures and processes hit reality they rarely maintain their theoretical form. They can become discarded or bypassed (a bad outcome) or they can become adapted and customised to meet a variety of challenges and applications (a good outcome). The problem is that when these highly effective customisations occur, they are rarely recorded and shared, because they are taken-for-granted events during the course of a day. When these same aims, objectives, procedures and processes are evaluated, particularly if there is distance between planners and implementers, they can be tidied up into data which sanitises the reality. Distance between planners and implementers can be created by culture and/or overly clinical and abstract feedback loops. In other words, the feedback loops provide only surface detail and fail to capture the reality of the strategy’s “ground truth”.

Anyone seeking to improve health and well-being through prevention should aim to tighten these feedback loops, not just between planners and implementers but also between partner service providers and across a community. A method of achieving this is to focus an evaluation and feedback strategy on the development of requisite knowledge and adaptive toolkits. Practically, the researcher should aim to collect the simple rules of thumb which people have used to solve challenges, avoid or recover from mistakes, or manage non-routine events. There are numerous methods which can be built into an evaluation and feedback strategy to gather this type of data, but I’ll outline an example below.

Tough Case Scenarios (TCSs) have been used extensively within decision making research (Crandall et al., 2006; Klein, 2007; Rugg, 2013 all provide good examples) to focus a practitioner’s mind on non-routine cases. Non-routine cases mean that a respondent can’t just apply routine; they have to apply expertise and experience. My colleagues, Professor Wilf McSherry and Adam Boughey, and I conducted a recent study in dementia care. We used a TCS to gather expert knowledge from health care professionals who had recently trained in dementia care. The TCS enabled us to pick up sense making and craft techniques which extended beyond basic procedures, allowing us to access the professionals’ expertise. Conducting a TCS across multiple groups gathers the requisite variety and adaptive toolkits which professionals and participants apply to handle tough situations.

The feedback TCSs supply consists of the rules of thumb and craft skills people use in operation. These lessons can then be shared and applied across service providers, and an entire community, increasing the adaptive toolkit and requisite variety. For planners, the feedback provides insight into how the strategy plays out in practice, on the ground. The overall effect is increased usable knowledge, strategic awareness and insight into how and when to adapt a plan.

Reading

Weick, K. E., & Sutcliffe, K. M. (2007). Managing the Unexpected: Resilient Performance in an Age of Uncertainty, Second Edition. San Francisco, CA: Jossey-Bass

Gigerenzer, G. (2008) Gut Feelings: Short Cuts to Better Decision Making. Penguin

Crandall, B. Klein, G. Hoffman, R. (2006) Working Minds: A Practitioner’s Guide to Cognitive Task Analysis. The MIT Press

Rugg, G. (2013) Blind Spot: Why We Fail to See the Solution Right in Front of Us. HarperOne. With D’Agnese, J.

Klein, G. (2007) The Power of Intuition: How to Use Your Gut Feelings to Make Better Decisions at Work. Currency

Phenomenology and Strategy Evaluation

1 Oct

In the novel Nausea by Jean-Paul Sartre, the protagonist, Roquentin, experiences the world around him phenomenologically. Objects in the life of Roquentin do not have any innate meaning. Objects exist independently of whatever meaning people give to them; they are simply things. People attach meaning to the world around them, but Roquentin experiences only things. The objects in the world won’t supply any form of meaning to us, and so it is up to us to create meaning. Roquentin has a realisation, staring at a chestnut tree, that unless he creates meaning in his life, he will remain alienated from the world around him.

Sartre leaves Roquentin realising that he must become responsible for himself and so is free to define himself. This freedom is both the basis for individual thinking and creativity, and the basis for problems in creating shared meaning. If we, providing you accept at least some of Sartre’s argument, supply the world with meaning, and we all have the capacity to define meaning individually, then “getting on the same page”, despite using the same phrases, words and subjects, is potentially a problem. These types of meaning problems can be fatal when it comes to strategy implementation, and so need to be evaluated with a degree of phenomenological understanding.

Any form of enquiry which involves investigating an existing problem, method or strategy has some degree of a phenomenological basis. So does an investigation which involves any new discovery. The reason for this is language. When something is named, the name it is given immediately attaches some form of meaning. A single name can carry many forms of meaning for many different people. For example, the word “cat” could evoke multiple images with numerous meanings attached. It could evoke “my cat, who I love very much” or “the next-door neighbour’s cat, who I can’t stand”. The names “Corbyn” and “Trump” currently evoke very strong feelings around the world. And these feelings are attached to the way an individual makes sense of the world; they are phenomenological experiences.

When you are asked to evaluate how something (let’s say a strategy) has worked, you are being asked to evaluate phenomenologically. You are being asked to test the assumptions and theories which produced a strategy against the assumptions and theories of the people who delivered, managed and experienced this strategy. In other words, when one group of people agree a meaning for the word “cat”, how well and for how long does this meaning endure when the word “cat” is shared outside of this group? How often does the meaning divide?

The above argument brings us to one of the key phenomenological limitations in executing a strategy effectively: intent (see Weick and Sutcliffe, 2007 and Klein, 2007). Person A provides person B with a task. Naturally, the communication of this task involves language. Many of the words in this communication carry a cache of potential meanings (see Croft and Cruse, 2004). Person A and person B may have left the room assuming they had understood each other perfectly, only to find out later, when the task has gone wrong, that they had attached very different meanings to key words. In one interpretation “person A had failed to communicate their intent to person B” and in another “person B had failed to deliver the task adequately”. If you are investigating the effectiveness and performance of a strategy then you need to understand it, at least to some degree, phenomenologically. Put another way, you need to understand the limits and varied interpretations of language, objectives, procedures and processes in relation to outcomes.

Reading

Klein, G. (2007) The Power of Intuition: How to Use Your Gut Feelings to Make Better Decisions at Work. Currency

Weick, K. E., & Sutcliffe, K. M. (2007). Managing the Unexpected: Resilient Performance in an Age of Uncertainty, Second Edition. San Francisco, CA: Jossey-Bass

Croft, W. Cruse, A. (2004) Cognitive Linguistics. Cambridge: Cambridge University Press

Sartre, J.P. (1949) Nausea. Penguin

Evaluation, Time, Money and Methods

16 Sep

Whenever you are tasked or contracted to carry out an evaluation of a project, particularly in health, there are nearly always two immediate challenges:

  • Investment: the amount of money and time available to carry out the evaluation
  • Access: the availability of relevant people and populations to provide data

These issues immediately restrict the capacity of a researcher to conduct fieldwork. Time spent out of the office interviewing respondents adds up quickly. Factor in the time taken to organise interviews and other data collection points, and very quickly you have either burned through the budget or are left with an inadequate amount of data as time runs out. Designing and conducting research methods at scale is always a problem, the gap between the “theoretically desirable and the practically possible”. But funders pay for results, so it’s a problem that requires solving.

A method of scaling projects is to train non-researchers from within target populations, or people who have easy access to the target populations, to collect the data. This works best with larger-scale projects, so you avoid the skews that come with small numbers (for example, only 10 respondents in total, all interviewed together in the same room). For similar reasons, the approach works best across multiple independent sites, because multiple sites (contexts) stand a better chance of locating genuinely recurring themes as well as insight specific to the local context. Below is a recent example, looked at from the method angle.

I’ve recently completed a project in which I was part of a team that collected qualitative data from over 250 respondents. The data was collected by non-researchers from across multiple sites, and the project was focused on assessing priority areas for strategic decision making. The design of the data collection helped us deliver to budget and on time and, importantly, produce analysis the funder was happy with.

When designing the methods for use by the non-researchers, we looked at other domains where good-quality data has to be collected rapidly, and not by research professionals. The domain we used was wildland firefighting, which I examined through the work of Weick and Sutcliffe (2007). The chaotic nature of a wildfire means firefighters have to debrief rapidly, pass on learning, identify vital cues and focus attention on what is most important. These outcomes were exactly what we were looking to achieve through our data collection. To meet these conditions, there exist interview schedules designed to collect data from firefighters as they leave the field.

I adapted one of the firefighter interview schedules (Weick and Sutcliffe, 2007) for the non-researchers in our project to use in both interviews and focus groups. The questions were structured to collect anticipation (what people think could and should happen next) and reflection (what has specifically gone wrong in the past). These types of questions are useful for separating needs from wants and setting priorities. Most of all, they enable strategic decision making to be improved via insight.
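The schedule itself isn’t reproduced in this post, but its two-part logic can be sketched. The question wording below is invented for illustration only; it is not the adapted Weick and Sutcliffe (2007) schedule.

```python
# Sketch of the two-part structure described above; question wording is invented.

schedule = {
    "anticipation": [   # what people think could and should happen next
        "What do you expect to happen next in this area?",
        "What should happen next if things go well?",
        "What early signs would tell you it is going wrong?",
    ],
    "reflection": [     # what has specifically gone wrong in the past
        "What has gone wrong here in the past?",
        "What did you do to recover, and what would you do differently?",
    ],
}

# Printed in order, the schedule gives a non-researcher a simple script to follow,
# while the two-section logic stays visible enough to adapt on the fly.
for section, questions in schedule.items():
    print(section.upper())
    for question in questions:
        print(f"  - {question}")
```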

For our team and project, the adapted interview schedule worked very well, better than we could have hoped. We had piloted it and stress-tested it before it went “live”, and this allowed us to make further adjustments. Most significantly, it was very easy for the non-researchers to understand the underlying logic of the schedule, to make adjustments independently depending on the context and flow, and to use it practically in the field. This should have been no surprise: the interview schedule we adapted was designed specifically to achieve these outcomes.

Reading

Weick, K. E., & Sutcliffe, K. M. (2007). Managing the Unexpected: Resilient Performance in an Age of Uncertainty, Second Edition. San Francisco, CA: Jossey-Bass

The Problem with Being Right

8 Sep

I’ll start with a general example. When you are involved in research, a lot of emphasis can be placed on defending your work, on trying to demonstrate you are “right”. This is especially the case during public presentations of research work, a PhD examination or a conference presentation for example. No aspiring researcher wishes to give in to rigorous examination and have to resubmit their PhD thesis, and no researcher wishes their research findings to look so flimsy that they withdraw their conclusions under questioning. Saying “I don’t know” is not expected in these situations; rigorously defending your work, on the other hand, is. And besides technical competence there is emotion involved: these events mean a lot to the participants, who are heavily invested. Being able to argue the technical rigour of research findings is all part of the skill set, but how far does this attitude stretch before it becomes a problem? Where is the line between technical competence and emotional investment?

Karl Popper (1994) argued that the challenge and focus of science is to prove theories wrong. In other words, effort should be directed at disproving hypotheses. This is the opposite of many of our instincts, as we (human beings) are frequently pulled toward proving ourselves right (Kahneman, 2011). We are more likely to dig in and defend current beliefs, and this can result in an emotional, political and cultural solidifying of positions. This solidifying was well observed and expressed by Thomas Kuhn, whose work on paradigms illustrated how a prevailing scientific norm is not simply replaced by a “better norm”; the replacement is the surviving result of a cultural, political and technical war between the “old” and the “new”. The investments which go into establishing a position are hard to walk away from, and the desire to be right is a psychological means of protecting that investment.

Holding onto positions is something most people will have seen, experienced or read about in business. Many of us have been in meetings in which someone refuses to let go of a project or strategy, even though the majority can see it’s time to move on. Frequently, there has been a large investment in this strategy or project, consisting of time, expertise, reputation and sheer belief. And at its inception, the architect of this project/strategy spoke eloquently of its merits, sold it to the room, and convinced everyone they were right. It’s understandable that someone who feels they now have so much to lose wants to hang on and prove they are right. Unfortunately, they are simply sinking more costs into what has become a bad deal.

The above psychology might be familiar to anyone who has had to walk away reluctantly, or be dragged kicking and screaming, from an investment gone bad, whatever the nature of that investment: a plan, a relationship, a business deal, anything which has meant something and is now being taken away. But where does this leave us, between the ability and skill to rigorously defend and sell our ideas to our peers and the perils of holding onto ideas just to prove we are right? This isn’t just arrogance being discussed; it’s the psychological concept of loss aversion (see again Kahneman, 2011). It’s the emotional difficulty of letting go of things, concepts and people which have meant a lot to us (I’m really trying to stress this point!). In my previous article on this site I discussed the book What I Learned Losing a Million Dollars (by Jim Paul and Brendan Moynihan, 2013), and within it the authors provide a useful framework for assessing when “proving” you are right has become toxic.

Paul and Moynihan identify the difference between the following activities and the psychology which drives each of them. I’ll summarise them below:

Betting: proving you are right

Gambling: enjoying the thrill of risk

Speculating: researching information and making informed decisions

Trading: looking to achieve a positive result in a relatively short amount of time

Investing: looking to achieve steady returns over a relatively long period of time

If you are trying to “prove” you are right, you are essentially saying you have mastered uncertainty: that “what we don’t know” won’t trouble your current position, and that your position will not turn out to be wrong. So, what might start off as speculating (researching and creating hypotheses) could turn into betting if the position solidifies and becomes emotional. In other words, if a contradiction to your current position starts to feel like an emotional attack on your sense of self (the investment feels like it is slipping away), then you are being flung into a betting state of mind. Although this is hard to avoid, Paul and Moynihan share a simple three-stage framework which can potentially dilute this effect. I’ve adapted and summarised this framework below:

Determine your conditions for making the original choice: why is it being done?

Determine the conditions which would make you change your mind: how could I be wrong?

Determine the level of loss you can endure: what is the worst-case scenario, and can it be contained? At what point of loss do you call it quits?

The above are conditions to put in place before a plan starts, or after a hypothesis has been generated. They are also useful for evaluating a strategy and determining which lessons could be applied to other domains, by establishing the conditions under which a plan could and could not work: what would need to go right, and could we handle what could go wrong?

The above framework could be used in conjunction with cognitive research methods such as prospective hindsight (Klein, 2007), which requires a decision to be imagined as having ended in catastrophe and the reasons for the catastrophe to be written down; this would help establish the conditions for changing your mind and the “stop-loss” condition relevant to the worst-case scenario.
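As an illustration only, here is how the three conditions, plus a prospective-hindsight note, might be captured as a simple pre-decision record. The field names and example content are my own sketch, not Paul and Moynihan’s wording.

```python
from dataclasses import dataclass, field

# Sketch of the Paul and Moynihan conditions as a pre-decision record, with a
# premortem field added in the spirit of prospective hindsight (Klein, 2007).
# Field names and example content are illustrative, not the authors' wording.

@dataclass
class DecisionRecord:
    rationale: str                   # why the original choice is being made
    change_my_mind_if: list          # conditions that would make us change our mind
    stop_loss: str                   # the level of loss at which we call it quits
    premortem: list = field(default_factory=list)  # imagined reasons the plan failed

record = DecisionRecord(
    rationale="Enter the new market to diversify revenue",
    change_my_mind_if=[
        "Pilot sales are under half of forecast after two quarters",
        "A key assumption about customer demand is contradicted by the pilot",
    ],
    stop_loss="Exit if cumulative losses reach the agreed contingency budget",
    premortem=["We underestimated local competition", "Key staff left mid-rollout"],
)

# Written down before the plan starts, the record lets the later question
# "am I speculating or betting?" be answered against conditions set in advance.
print(record.change_my_mind_if)
```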

This has been a very simplified argument about the problem with being right, but the framework offered by Paul and Moynihan is a useful, and potentially vital, method for stopping speculation from sliding into betting.

Reading

Karl Popper

Popper, K. (1994) Knowledge and the Body-Mind Problem: In Defence of Interaction (edited by M.A. Notturno)

https://en.wikipedia.org/wiki/Karl_Popper

Thomas Kuhn

Kuhn, T.S. (2000) The Road Since Structure: Philosophical Essays, 1970-1993. Chicago: University of Chicago Press

https://en.wikipedia.org/wiki/Thomas_Kuhn

Kahneman, D. (2011) Thinking, Fast and Slow. Penguin

Paul, J. Moynihan, B. (2013) What I Learned Losing a Million Dollars. Columbia Business School Publishing

Klein, G. (2007) The Power of Intuition: How to Use Your Gut Feelings to Make Better Decisions at Work. Currency