Visualisation, Big Data and the Halo Effect

18 Apr

Read anything on the subject of big data and the recurring theme is that more data will bring organisations better and faster decisions. But when you have methods that can crunch half a million or even a billion lines of data, you need a view of this information that makes sense, doesn't overwhelm and supports the decision-making process. This is where big data visuals come in: presenting vast amounts of data in a way that supports decision-making. As good as these visuals are, it is still human beings who actually make the decisions, and that is what I'd like to discuss: how much should we rely on data visualisation in our decision-making?

Big data visualisations are impressive. Not only do they provide methods to explore a billion lines of ever-changing data on a screen, but some of the best visualisation packages will match the selected data to the visual best suited to analysis and decision-making. The quality of some of the histograms, box plots and scatter plots is outstanding, capable of displaying multiple measures across millions of rows in a single, easy-to-understand view. But this is also the problem: they look great, they do so much for you, and they promise even more, so it is very easy to think you are seeing EVERYTHING you need on the screen in front of you. Great visuals can produce a halo effect, a form of decision-making bias.
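To make that concrete, here is a minimal sketch using numpy and matplotlib rather than any particular commercial package, and with entirely synthetic data: a million rows collapse into a single tidy box plot, exactly the kind of polished summary that can invite the halo effect.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
# Synthetic "revenue" figures: one million rows of skewed data
revenue = rng.lognormal(mean=3.0, sigma=0.8, size=1_000_000)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.boxplot(revenue)                     # a million rows, one clean picture
ax1.set_title("Box plot: 1,000,000 rows")
ax2.hist(revenue, bins=100)              # the same rows, binned
ax2.set_title("Histogram of the same data")
plt.tight_layout()
plt.show()
```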

The halo effect is a bias in which the perception of one quality spills over into the perception of another. For example, if a candidate arrives at an interview well presented, articulate and incredibly polite, there is a natural tendency for the interview panel to overweight these factors and overlook something more telling, such as a lack of sustained success across multiple challenges compared with other candidates. The visible qualities of the candidate, the ones immediately in front of the panel, can create blindness to the data that is out of sight. The psychologist Daniel Kahneman sums this up perfectly: "What you see is all there is."

When human beings see something visually appealing, they have a tendency to attach a range of other favourable qualities to it; we produce a coherent and consistent narrative to justify the belief that if it looks good it must be good. Data is not immune to this process: in previous blogs I discussed how the visual appeal of a bell curve, or Gaussian distribution, can lead to the "shutting out" of vital information.
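As a rough illustration of that point, the sketch below uses synthetic data in which 99% of observations are well behaved and 1% are extreme. The fitted bell curve drawn over the histogram looks reassuring, while the tail events that drive real risk are nearly invisible.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# 99% well-behaved observations, 1% extreme events (synthetic data)
calm = rng.normal(loc=0, scale=1, size=9_900)
shocks = rng.normal(loc=0, scale=8, size=100)
data = np.concatenate([calm, shocks])

# Fit a single Gaussian to everything, as a naive summary would
mu, sigma = data.mean(), data.std()
x = np.linspace(data.min(), data.max(), 400)
gaussian = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

plt.hist(data, bins=120, density=True, alpha=0.6, label="observed data")
plt.plot(x, gaussian, label=f"fitted bell curve (mu={mu:.2f}, sigma={sigma:.2f})")
plt.legend()
plt.title("The smooth curve hides the 1% of extreme events")
plt.show()
```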

With this in mind, it is wise to use big data visualisations with caution and avoid getting caught up in their aesthetic qualities. Current visualisation packages allow the user to zoom in on various parts of the data, compute local and global correlations and compare multiple measures. But this also allows the user to construct a good-looking story with numbers and graphs, which invites a second assumption: if the story is composed of graphs and numbers, it must be objective. Unfortunately, this is not always the case. With so much data to explore and so many methods available to explore it, the question remains: how is the user choosing the focus? The focus could very well be the product of a bias, and if that bias looks good, it stands a greater chance of taking hold. Nothing causes more dangerous risk-taking in business than believing a number has provided you with the answer; it is overconfidence in extremis.
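One concrete way the choice of focus can mislead is the gap between local and global correlations. The sketch below, again with synthetic data, shows a Simpson's-paradox-style reversal: zoom into any one group and the two variables look positively correlated; pool the data and the correlation flips sign. Whichever view the user happens to focus on makes an equally good-looking story.

```python
import numpy as np

rng = np.random.default_rng(1)
groups = []
for offset in (0, 4, 8):  # three subgroups with shifted baselines
    x = rng.normal(loc=offset, scale=1, size=500)
    # Positive trend within each group, but group means slope downward overall
    y = x - 2 * offset + rng.normal(scale=0.5, size=500)
    groups.append((x, y))

for i, (x, y) in enumerate(groups):
    print(f"group {i}: local correlation = {np.corrcoef(x, y)[0, 1]:+.2f}")

all_x = np.concatenate([g[0] for g in groups])
all_y = np.concatenate([g[1] for g in groups])
print(f"pooled:  global correlation = {np.corrcoef(all_x, all_y)[0, 1]:+.2f}")
```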

Does this mean avoiding big data visualisation methods in decision-making? No, it definitely does not: their ability to crunch and present data is outstanding. But it does mean maintaining perspective. Big data visualisation is a tool in the decision-making process, not the complete tool kit, and the human tendency toward the halo effect should not be underestimated. With this argument in mind, consider the following when analysing your data:

1. Once you have reached a conclusion in your data analysis, consider what you don't know and what could be missing, and write it down.

2. Ask whether the data is available to address these information gaps; this will give you an idea of where to look next in your data sets. If the data is not available to fill a gap, weigh how sensitive the decision outcome is to the missing data (a sketch of this check follows below); this will help with assessing the risk and the quality of the decision.

3. Re-assess your conclusion: has it changed as a result of the new data?

You can repeat this process until the information gaps are as small as time and resources allow, but make sure you still weigh the sensitivity of outcomes to any missing information.
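To show what weighing sensitivity might look like in practice, here is a minimal sketch; the function name, figures and bounds are all hypothetical. The idea is simply to recompute the decision under plausible bounds for the unknown value and check whether the conclusion flips.

```python
def project_profit(revenue, known_costs, unknown_cost):
    """Hypothetical decision metric: profit after a cost we have no data on."""
    return revenue - known_costs - unknown_cost

revenue, known_costs = 1_000_000, 700_000
decision_threshold = 0  # proceed only if projected profit is positive

# Plausible range for the missing cost (assumed bounds, not real data)
for label, unknown_cost in [("optimistic", 50_000), ("pessimistic", 400_000)]:
    profit = project_profit(revenue, known_costs, unknown_cost)
    verdict = "proceed" if profit > decision_threshold else "do not proceed"
    print(f"{label}: profit = {profit:,} -> {verdict}")

# If the verdict changes across the plausible range, the decision is highly
# sensitive to the missing data and the information gap is worth closing first.
```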

Essentially, the suggestions above are attempts to refute your initial conclusion and guard against bias. And so it is with the halo effect in all aspects of life: if something looks good, ask yourself how much evidence you have that it is likely to be good.
