Prediction, Tolstoy and New Technology

27 Mar

Way back in the 1800s, decision makers were wrestling with the notion that you could plan and predict outcomes involving people. Pivotal figures from Tolstoy to William James (regarded by many as the father of modern psychology) devoted some of their best work to explaining why the prediction of human outcomes was an unachievable goal. If you're not going to read Tolstoy's War and Peace, a masterpiece of decision-making critique, James provided an outstanding framework for pragmatic thinking when it came to planning in the face of uncertainty.

James argued that no plan or decision started out true, a fact; it simply became true or otherwise as the consequences of action unfolded. Uncertainty never gave you enough to predict; you simply had to think through the consequences of actions carefully and be ready to adapt. Like a scientific hypothesis, you explored whether your expectation stood up, and if it didn't, you adapted it.

Tolstoy's work uses literature to account for the arrogance of people who think they can predict and plan with certainty, and I often wonder what Tolstoy would have made of big data, new technology and the notion of prediction. I'll never know, but I do think the work of both Tolstoy and James is highly relevant to the technology-enabled decision makers of today.

In my mind, whether you were planning in the 1800s or you are in 2014, using technology for predictive purposes and making decisions on the back of those predictions is something to be approached cautiously. When you predict, especially with confidence, you expose yourself to fixation: the confirmation bias. When decisions resting on confident predictions encounter the complexity of human behaviour and are contradicted, the investment (psychological, resources and time) in that initial plan and decision cognitively pulls at you to avoid the loss of that investment. The consequence of this is loss aversion, resulting in the explaining away of contradictory data and fixation on the initial decision. So more data can actually accentuate biases by increasing subjective self-confidence.

A better use of big data is discovery, and I think this works well with a form of risk-primed decision making. To echo James, thinking through the consequences of a decision before it's implemented is a good starting point. However, I think this can be taken a step further: when thinking about the consequences of action, there is a tendency to frame and anchor consequences on the worst of the experienced past. This is good, but it doesn't prepare you or an organisation for the shock of a potential "even worse" scenario. A methodology to counter this is to take the worst historic event, which methods like stress tests focus on, and use data to construct even worse scenarios. There will always be a point in a crisis where the crisis hits a floor (in 2008 it was a bailout, for example); data can be used to model how bad things could have got if this floor hadn't materialised when it did.
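To make the idea concrete, here is a minimal sketch of that approach. It is purely illustrative: the loss history is synthetic, and the multipliers are hypothetical stand-ins for the question "what if the floor had arrived later, or not at all?"

```python
import random

random.seed(42)

# Hypothetical loss history (negative = loss, in millions).
# In practice this would come from real historical data.
historical_losses = [random.gauss(0, 1.0) for _ in range(1000)]

# A conventional stress test anchors on the worst observed outcome.
historical_worst = min(historical_losses)

# "Even worse" scenarios: remove the assumed floor by scaling the
# worst case with hypothetical multipliers, modelling how bad things
# could have become had no bailout or intervention materialised.
multipliers = [1.5, 2.0, 3.0]
even_worse_scenarios = [historical_worst * m for m in multipliers]

for m, loss in zip(multipliers, even_worse_scenarios):
    print(f"x{m}: modelled loss {loss:.2f}m")
```

The point of the exercise is not the numbers themselves but the scenarios they force you to imagine and plan for.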

This combination of data and imagination can prompt thinking along the lines of: how would I or we have dealt with that? What fire doors or communication links could be built in to avoid this? Imagining risks unlocks creativity and future problem solving, and enhances the longer-term adaptation of decisions. Placing risk first, as opposed to prediction, is a lesson from Tolstoy and James, but it's a lesson which can drive discoveries today.
