Tag Archives: organizational behaviour

Harvard On Forecasting

1 Dec

There is an outstanding, and extensive, resource available on the Harvard University website dedicated to one of my favourite research subjects, Affective Forecasting, by Wilson and Gilbert (the link appears below). I’ve covered Affective Forecasting multiple times on this blog, across various contexts. So, to provide a primer for anyone interested in exploring the topic further and clicking the link, I’ll briefly revisit Affective Forecasting from an organisational perspective.

One of the downsides of having large amounts of experience in a specific domain is that sense making can become fixed (see Klein, 2007 for examples). If an experienced decision maker has a run of success, their critical insight is put at risk. Success can diminish the capacity to draw a conclusion and then critically analyse it with questions such as “How could I be wrong? What else could be at play which I might have overlooked?”. The result is that the past and present are projected into the future, uncritically, and used to forecast an outcome or future condition. And this is affective forecasting, crudely expressed: a current emotional state is used to predict a future emotional state.

If time pressure and competing demands are added to the mix, then the past is more likely to become a proxy indicator of what will happen in the future (see Weick and Sutcliffe, 2007; Kahneman, 2011; Taleb, 2013 for examples). This has all sorts of consequences for decision making.

Experienced organisational decision makers should work hard to maintain critical insight despite current demands and pressures. For example, if a leader affectively forecasts and is operating with a non-critical team, then decision traps such as groupthink can easily take hold.

As the source material will reveal, methods such as encouraging critical reflection on organisational decisions before execution, and consulting someone (or a team) who has lived your intended future to broaden the frame of reference, can prove effective. I’ll leave Wilson and Gilbert to explain the rest.

Link to Wilson and Gilbert on Affective Forecasting


Kahneman, D. (2011). Thinking, Fast and Slow. Penguin.

Klein, G. (2007). The Power of Intuition: How to Use Your Gut Feelings to Make Better Decisions at Work. Currency.

Weick, K. E., & Sutcliffe, K. M. (2007). Managing the Unexpected: Resilient Performance in an Age of Uncertainty (2nd ed.). San Francisco, CA: Jossey-Bass.

The Psychology of Organizational Change

23 Aug

There’s an interesting article on Psychology Today addressing the topic of organisational change, entitled The Psychology of Organizational Change. I’ve quoted the conclusion from this article in full below:

“Traditional change-management tactics in organizations are based more on animal training than on human psychology and neuroscience. Leaders promise bonuses and promotions (the carrot) for those who go along with the changes, and punish those (the stick) who don’t with less important jobs or even job loss. This kind of managerial behavior flies in the face of evidence that shows that people’s primary motivation in the workplace is neither money nor advancement but rather a personal interest in their jobs, a good environment to work in and fulfilling relationships with their boss and colleagues.” Article available here

It’s difficult to argue with the logic of these conclusions. Evidence does clearly suggest that meaning, social interaction and a good environment are the most important workplace considerations for staff. However, the job of change is never done. The external environment changes jobs which once held so much meaning, people who mean so much move on, and revenue pressures can close the offices everyone enjoyed working in. So even successful organisational change is simply a point on a continuum, and never done.

The above point highlights the importance of leaders and planners staying in frequent touch with frontline workers. This is important because not only is change an ongoing process, a way of life too often left to chance in organisations, but human beings are notoriously poor at anticipating how much positive and negative change will affect their lives in the medium to long run (Kahneman, 2011; Wilson and Gilbert, 2003).

The consequence is that short-term data can show very positive attitudes towards change, and then, once adaptation kicks in, the data can look like a decline in satisfaction. The two key points are: 1) try to avoid declaring change complete or successful too soon, and 2) leaders should stay in frequent touch with frontline workers to monitor adaptation. Otherwise the gap between implementation and feedback can produce inaccurate interpretations, for example:

“This data shows a sharp fall in satisfaction; we need to review the effect of our change strategy.”

“Everyone has just adapted to the change; it’s only natural that it’s no longer seen as something significant.”

Both interpretations require accurate feedback data to back them up; otherwise what’s really going on is lost.

We’ve written about the above points here and here in more detail.


Wilson, T. D., & Gilbert, D. T. (2003). Affective Forecasting. Advances in Experimental Social Psychology, 35, 345–411.

Kahneman, D. (2011). Thinking, Fast and Slow. Penguin.


Transforming Expertise

28 Jan

McKinsey and Company recently published an interesting article entitled “Transforming Expert Organizations”. The article identifies an interesting “expertise paradox”: expertise builds up within an organization and is highly effective and transformational, but then becomes increasingly difficult for outsiders to understand and access. The authors of the article (Bollard et al., 2016) refer to this as an “expertise silo”.

Detecting Errors in Strategy

19 Sep

This is a quick introduction to the Spiral methodology we’ve designed, applied here to strategy error detection. When any organisation, policy unit or research team is putting together a strategy or plan aimed at realising future goals, there are four levels of potential error to be mindful of.

Level one is the acquisition of knowledge. The first port of call when making a decision on future direction is the collection of data. If you’re getting data from people, or from any complex system (a system which involves people), then you need to avoid potential sources of bias such as memory decay or data fixation (Wilson and Gilbert, 2003), and capture expertise (Hoffman et al., 2006). The Spiral seeks to compensate for bias by applying research methods for data collection from complex systems which are designed to minimise its presence. This wouldn’t be the primary means of data collection, but it would subject all data to assumption testing and support insight.

Level two is the analysis of this knowledge. What scripts are people using to make sense of it? There is always a script at work when reading data: the script of the individual (the CEO, for example) and/or the script of the organisational culture. The methodological goal here is to ensure that the data is read with a broad frame, one which considers other potential scripts, as opposed to a narrow frame, one which only considers and analyses through a single script, such as “the more complex the analysis, the less chance of blame”. Changing the perspective on data, by simply restating probabilities as natural frequencies for example, has incredibly powerful effects on decision making effectiveness (Kahneman, 2011; Gigerenzer, 2014).
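To make that last point concrete, here is a minimal sketch of what restating probabilities as natural frequencies looks like. The screening-test numbers below are hypothetical, chosen only to make the arithmetic visible; the point, in the spirit of Gigerenzer’s work, is that both framings yield the same answer, but the frequency framing is far easier to reason about:

```python
# Illustrative sketch: the same screening problem stated as conditional
# probabilities and as natural frequencies. All numbers are hypothetical.

base_rate = 0.01        # 1% of people have the condition
sensitivity = 0.90      # P(test positive | condition)
false_positive = 0.09   # P(test positive | no condition)

# Probability framing: Bayes' rule, easy to get wrong under time pressure.
p_positive = base_rate * sensitivity + (1 - base_rate) * false_positive
ppv_prob = (base_rate * sensitivity) / p_positive

# Frequency framing: imagine 1,000 concrete people instead of percentages.
population = 1000
with_condition = population * base_rate                            # 10 people
true_positives = with_condition * sensitivity                      # 9 test positive
false_positives = (population - with_condition) * false_positive   # ~89 test positive

ppv_freq = true_positives / (true_positives + false_positives)

print(f"Probability framing: PPV = {ppv_prob:.3f}")
print(f"Frequency framing:   9 of ~98 positives, PPV = {ppv_freq:.3f}")
```

Stated as probabilities, many readers intuitively guess the answer is close to 90%; stated as “9 of the roughly 98 people who test positive actually have the condition”, the correct figure of about 9% is almost self-evident. Same data, different script.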

Level three is the gap between planners and implementers. A simple way to illustrate this level is through communication. Within an organisation there is always formal, explicit knowledge: the guidelines and procedures, for example. There is also informal knowledge: how things really get done. Plans which only deal with the structural side of the organisation, yet expect appropriate changes in behaviour, are doomed to fail (see Klein, 2013; Weick, 2009).

These plans fail because expertise and insight become diluted by procedures and processes, and large gaps in reasoning open up between planners and implementers. The aim here is to deploy simple but effective research methods which close this gap and bring reasoning onto a similar level. The payoff is clear intent: the ability of people to achieve objectives independently, even when things go wrong. This increases innovation and reduces the cognitive load on managers by reducing the frequency of the question “What should I do next?”.

Level four is forecasting, the anticipation of future states. The other three levels all feed into forecasting, since perception is information based (Hoffman et al., 2006). Research suggests that forecasting is particularly susceptible to bias (see Wilson and Gilbert, 2003, and Kahneman, 2011, for some good examples). With this in mind, the goal is to use methods which reduce bias in anticipation, such as prospective hindsight (Klein, 2007). Not only can these methods make forecasts more realistic, they can also improve general risk management, resilience and adaptability.

This has been a very short introduction to the four levels where errors can occur in strategies and plans. In a future article I’ll provide more detail on some of the research methods we use to locate and address these errors.

Wilson, T. D., & Gilbert, D. T. (2003). Affective Forecasting. Advances in Experimental Social Psychology, 35, 345–411.
Kahneman, D. (2011). Thinking, Fast and Slow. Allen Lane.
Gigerenzer, G. (2014). Risk Savvy: How to Make Good Decisions. Allen Lane.
Klein, G. (2013). Seeing What Others Don’t: The Remarkable Ways We Gain Insights. Public Affairs.
Weick, K. E. (2009). Making Sense of the Organization, Volume 2: The Impermanent Organization. Blackwell.
Hoffman, R., Crandall, B., & Klein, G. (2006). Working Minds: A Practitioner’s Guide to Cognitive Task Analysis. Cambridge, MA: A Bradford Book.
Klein, G. (2007). The Power of Intuition: How to Use Your Gut Feelings to Make Better Decisions at Work. Currency.

A Role for Vision in Decision Making

22 Jan

I had the pleasure of speaking at a clinical reasoning conference yesterday, and a question from one of the delegates really stood out: how much is decision making affected when an organisation’s vision isn’t shared by senior managers and executives? The delegate was referring to how decision making by non-managerial professionals is affected, but the answer is relevant across all domains: it is affected significantly, and with potentially catastrophic results.

Quite often we hear that for a job or career to be rewarding, two things need to be in place: a level of autonomy and a sense of meaning. You can have a level of autonomy in your work, but without a sense of meaning, of being part of a bigger purpose, it’s possible to feel what the philosopher Michel Foucault observed: certain types of freedom can themselves be oppressive. Being able to answer the question “what’s the point of all this?”, at least to some degree, is essential to a sense of satisfaction in work. It’s also essential to decision making and innovation.

I’ve pointed out frequently in previous articles that a key difference between an expert and a novice in their decision making is the ability to assess risk, to identify what could go wrong. This could be simply expressed as anti-goals: things you don’t want to happen. Sometimes focusing solely on anti-goals is all that’s required to plan positive outcomes. But a keen sense of anti-goals only comes with a deep understanding of the domain in which you operate and of what success or good outcomes look like; only then can you identify what you want to avoid.

To understand what a good outcome or success looks like, you need to visualise it when faced with a challenging situation. You need to be able to draw from experience and see what the situation needs to become, how to get there and, most importantly, what to avoid on the way. This simple process allows you to adapt previous methods to the current situation, spot leverage points and innovate, and keep a plan on track in the face of distractions. None of this, however, is possible without vision.

It’s vision which provides the background of meaning; it allows people to understand the consequences of their actions in relation to achieving higher-level goals. It also provides them with a strong sense of what to avoid. When an organisation tries to cope without vision, people quite often turn to routine, custom and practice, and/or bureaucracy as a default mode of blame avoidance sets in. What goes out the window is the ability to make decisions swiftly by sizing up a situation or opportunity against the organisational vision, along with the ability to spot leverage points and innovate. This happens because everyone is fumbling about in the dark and so simply tries to stay safe. Everyone could feel autonomous in this situation, but it would feel like a straitjacket, with no one certain what to do with their freedom.

It’s far better to just turn the lights on.