Why I Hate Heat Maps

You know what a risk heat map is. It is a two-dimensional graph where one axis represents probability and the other represents impact. People then plot risks on the graph. And you know what is wrong with every heat map ever used to manage risk. You really do. But people continue to use them anyway. They are a cancer for risk management. Or a better metaphor: they are the cigarettes of risk management. They provide an easy, convenient fix of your favorite drug, but in the long run they damage your (corporate) health. In this case the drug is instant credibility, and the cost is the failure to manage risks properly.

Seriously, you know all of the things that are wrong with heat maps, because you are intelligent. But just like an intelligent smoker who will not quit that habit, you may be unwilling to quit the habit of using heat maps.

I often tell people what is wrong with heat maps (for example, during this talk) but few listen. So, once and for all, let me reiterate everything you already know is wrong with every heat map you have ever seen or used.

1. Risks can have upsides too.

A heat map is a two-dimensional graph. One axis is probability, the other is impact. But some risks have a potential upside. That means they may have an impact which is good. The upsides never get plotted on the heat map, because there is nowhere to plot them. So how are you supposed to properly judge how to respond to a risk, if you cannot differentiate between a risk that has a massive potential upside, and a risk which has no upside whatsoever?

2. Risks are a range, not a point.

Depending on where you live, earth tremors happen every so often. Occasionally there will be an earthquake. And very rarely there will be a monster quake that kills loads of people and makes news around the world. That is just one easy and obvious example of why most risks need to be understood by describing the potential range of outcomes. The earthquakes with less impact have higher probability, and the ones with higher impact have lower probability, and so on. So if we plotted earthquakes on our heat map, we would need to color in an area of the graph, not plot a single point. But that would soon make our graph look very messy. And it might reveal we simply did not bother to work out what the range was. So people plot points instead, as if every earthquake had exactly the same magnitude and would occur with exactly the same likelihood. Madness!
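The range-versus-point argument can be sketched with a few lines of Python. The figures below are entirely hypothetical, chosen only to illustrate the trade-off the text describes: for one and the same risk, the likelier outcomes are mild and the severe outcomes are rare, so no single point captures it.

```python
# Hypothetical illustration: one "earthquake" risk is a range of
# (annual probability, impact) outcomes, not a single point.
quake_outcomes = [
    (1.0,   1e4),  # tremors: roughly annual, ~$10k of damage
    (0.1,   1e6),  # damaging quake: once a decade, ~$1m
    (0.001, 1e9),  # monster quake: once a millennium, ~$1bn
]

for p, impact in quake_outcomes:
    # Each row is a different point on the heat map; a dot hides this spread.
    print(f"p={p:<6} impact=${impact:>13,.0f}  EV=${p * impact:>9,.0f}/yr")
```

Plotting any single row as "the" earthquake risk discards the other rows entirely; the heat map gives you nowhere to put them.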

3. No, you cannot plot the average; you really must plot the range.

I know what some tricky risk managers want to do! Plot the average, you say. Firstly, that is wrong. You cannot have an ‘average’ outcome when one of the dimensions is probability. There is no ‘average’ earthquake. There are once-a-year earthquakes and once-in-a-million-years earthquakes. You cannot ‘average’ these earthquakes by plotting the once-in-a-decade earthquake and pretending it covers all of the others.

And did you forget the bit about upside? How are you going to average upsides into your impact without netting them against the downside risks and hence defeating the point of the exercise? Suppose you had to plot the risk of betting on red whilst playing roulette. You win if the roulette ball lands on red, you lose if it lands on black or zero. If you tried to plot the average outcome it would say you are expected to lose a little every time. But the reality of the simple maths in this risk situation is you will win almost half the time, and lose slightly more often. You cannot net the probabilities or the amounts to be won and lost. It is the variance between possible outcomes that makes it necessary (and profitable) to manage risk.

Some people think of risks having a ‘shape’, and I think it can be useful to visualize the shape of a risk. For more on the ways one risk can have a different shape to another, I recommend this article from the blog of consulting business Causal Capital.

4. The numbers do not fit your stupid graph.

All graphs have a scale. Imagine a scale from 0 to 100. Now imagine plotting a co-ordinate point at 0.00001 on that graph. Pretty difficult, huh?

Now imagine a scale from 0 to a very very large number. Because big things happen too. They even happen to risk managers (if they are doing their job, and actually managing risk). You know what I mean. Tsunamis hitting nuclear reactors and causing meltdowns, all your top executives flying on the same airplane which crashes, and other things that go off the scale. Except they cannot go off the scale, because you have to plot them too.

The real world includes risks which have very high impact and very low probability. It also includes risks which have very high probability but very low impact. The world was not created so that every risk looks pleasing when plotted on your graph, with the risks neatly distributed around the rectangle and perfectly suited to the arbitrary scale you imposed. Ten billion dollars of disruption caused by a once-in-a-century earthquake is worth, statistically speaking, the same as computers making unpredictable one cent rounding errors on ten billion billed transactions each year. You could try to screw around with those numbers to make them neatly fit your graph, but that just proves the problem is with your graph, not the numbers.
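The claim that those two risks are statistically equivalent is just multiplication, and it checks out:

```python
# Two risks at opposite extremes of the heat map, same expected annual cost.
quake_ev = (1 / 100) * 10e9   # once-a-century quake, $10bn of disruption
rounding_ev = 10e9 * 0.01     # ten billion transactions x one cent each year

print(quake_ev, rounding_ev)  # both $100,000,000 per year
```

One point would sit in the top-left corner of the heat map and the other in the bottom-right, yet they carry identical expected values. Any scale compact enough to show both legibly has to be distorted.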

Some people solve these problems by not even bothering to have a proper scale. For example, instead of having a monetary scale for the impact they place them in bands 1, 2, 3, 4, or 5, based on how severe the impacts are. But why even try to rank and prioritize risks if you are making arbitrary decisions about how to define each band, and then judging the risks according to how plotted points look on a graph with no valid scale? Why not just use real numeric data (a Cartesian graph is just the geometric map of a two-dimensional array) and determine the expected values (the probabilities multiplied by the impacts) instead of messing around with meaningless bands?

5. EV shows why the graphs are irrelevant.

If you were paying attention to the last point, you know heat maps are just a bad way of plotting the expected value (EV) for a risk. So why use a graph to rank risks, whilst saying those plotted towards one corner are more severe than those plotted towards the opposite corner? If you know how to calculate the EV for every risk, why not simply calculate every EV, then order the list of risks from high to low?
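The EV ranking the paragraph describes takes about five lines. The risk register below is hypothetical, invented purely to show the mechanics: multiply probability by impact, then sort.

```python
# Hypothetical risk register: (annual probability, impact in $).
risks = {
    "data breach":     (0.05, 4_000_000),
    "server outage":   (0.30,   200_000),
    "key-person loss": (0.10,   900_000),
    "billing error":   (0.90,    50_000),
}

# Rank by expected value, highest first — no graph required.
ranked = sorted(risks.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for name, (p, impact) in ranked:
    print(f"{name:<16} EV = ${p * impact:,.0f}")
```

The output is an unambiguous priority order, which is what the corner-to-corner coloring of a heat map only gestures at.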

6. Except we know why people do not use EV. It is because they are using make-believe data.

The difficulty with using EV is that you have to determine (or guess) the probability and impact for each risk, and put them into actual numbers. But maybe you do not know. So you plot a graph instead. If you understood the point about the equivalence of a geometric map to a two-dimensional array, then you understand why you have the same problem either way. Running away from numbers by plotting points on a square of paper is not a solution if the underlying problem is not knowing what the numbers should be.

One reason why some people run from numbers is they get squeamish about certain topics. “Every human life is priceless!” Maybe so, but tell that to your life insurance company. Your business is going to spend a finite amount on protecting people. Any impact can be translated into a number, so just get over yourself and accept that even a human life can be described in terms of a financial value, whether it is the value the US government uses when judging how much taxpayers’ money will be spent on preventing road accidents, or whether it is the amount your company will spend on employee safety.

And once again – though it gets boring repeating myself – a Cartesian graph is no different to a two-dimensional array, so if you can plot ‘loss of a human life’ on a graph, then you could work out the value of a human life by deciding what financial loss you would plot at the exact same point. However you look at it, you are translating lives into dollars, so just get on with it instead of kidding yourself about what you are doing. Arbitrary bands will not avoid the equation; they just confuse people and prevent them from doing the maths properly.

7. Heat maps do not even look good.

Seriously. Green, yellow and red squares. If this is the best pictorial representation of risk we can come up with, then we surely deserve to fail.

Painfully Obvious Conclusions

Heat maps are just a terrible terrible terrible way to understand, communicate about, and decide how to respond to risks. They either mess up what you already knew, or they hide the fact you are too ignorant to make a rational decision.

Everything that can be done with heat maps would be done better with actual numbers. That leads me to one last hypothesis about why some people like heat maps: they are innumerate. But if you want to earn a six-figure salary by managing major risks, you should really invest in learning some basic statistics. Go to night school and refresh your understanding of mathematical concepts that the average 16-year-old could master, instead of pretending that the art of arbitrarily plotting points is as useful as the application of real science. And if you are numerate already, why would you want to demean yourself by using a technique that your innumerate rivals can also use (and abuse)? Is it because the rest of the executive team are innumerate? Then send them to night school too!

Like I said above, heat maps are the cigarettes of risk management. You light them up, and get your quick hit. But whilst you might feel good about using them as a way to discuss and manage risks, they are harming you, and your organization, by leading you astray. One day everybody will regard heat maps the same way they feel about the advice in old cigarette commercials…

Eric Priezkalns
Eric is a recognized expert on communications risk and assurance. He was Director of Risk Management for Qatar Telecom and has worked with Cable & Wireless, T‑Mobile, Sky, Worldcom and others.

Eric was lead author of Revenue Assurance: Expert Opinions for Communications Providers, published by CRC Press. He was a founding member of Qatar's National Committee for Internet Safety and the first leader of the TM Forum's Enterprise Risk Management team. Eric currently sits on the committee of the Risk & Assurance Group, and is an editorial advisor to Black Swan. He is a qualified chartered accountant, with degrees in information systems, and in mathematics and philosophy.

Commsrisk is edited by Eric.