Security and Risk: Too Much How-To, Not Enough How-Much

Last week there was a lot of commotion about Meltdown and Spectre, two security flaws that afflict chips made by Intel, AMD and ARM. In theory, the bugs meant that “virtually all” phones and computers could be compromised. Patches will be delivered in due course, though some fixes may take longer to develop than others, and there are concerns they will degrade processing speed. But despite the emotionally charged names that security researchers gave these flaws, there is no reason for the public to worry about bugs like these, even though headline writers were bound to squeeze the story for easy and plentiful clickjuice. That is because everything computerized and networked is potentially at risk all the time, whether we know about the specific risks or not. And even when we know about security flaws, nobody provides useful insights that might help us take sensible steps to choose the right behavior. Every scary story is turned into an excuse to tell people to do things they have already been told to do, without allowing them to judge if they should be doing something completely different instead. In other words, security experts scream ‘patch patch patch’ at us, when perhaps the answer is to switch the blasted computer off for a while, or to devise ways of working that place less reliance on networked devices.

A key problem with the reporting of Meltdown and Spectre is that there is no useful quantification of the risk to anyone. On one hand we are told about the huge number of devices that are affected. On the other hand, the BBC says:

First, let’s not panic. The UK’s National Cyber Security Centre (NCSC) said there was no evidence that the vulnerability had been exploited.

I could go on quoting all sorts of people who make contradictory assertions about the extent of the risk posed by these flaws. But that would be pointless. It is sufficient just to look at the words and phrases they use:

  • “everyone is at risk”
  • “the impact is quite severe”
  • “is a more general attack”
  • “the risk is arguably less significant”
  • “harder for hackers to take advantage of”
  • “probably one of the worst CPU bugs”
  • “absolute disaster”
  • “near zero risk”

Only the final example contains a number, and that number is zero, although surely the point is that the risk is non-zero, and hence we want to know how much higher than zero the risk really is! Otherwise, comment after comment after comment pretends to tell us how great the risk is, without making the slightest effort to genuinely quantify it. Is the risk of Spectre being exploited on my phone higher than the risk of my being hit by a meteor? Is it lower than the risk posed to me by global warming? Our language is crude, and this relativistic way of talking about risk is inadequate for rational decision-making. As a consequence, we must endure a useless pantomime that purports to give us information but actually gives nothing that might help us make an informed choice about how to respond to risks. Instead we are subjected to authoritarian instructions about how to protect our safety, when really we should be free to question if the people giving the advice are misleading us because they advocate reliance on systems that will always pose systemic risks that will never be adequately mitigated.
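The meteor comparison could at least be framed arithmetically. Here is a minimal sketch of what a quantified risk statement might look like, expressing each risk as an expected annual loss; every probability and harm figure below is invented purely for illustration, and none of them come from real measurements or from the experts quoted above.

```python
# Toy comparison of risks as expected losses.
# All probabilities and harm figures are hypothetical, chosen only
# to show the shape of a quantified statement, not to state facts.

def expected_loss(probability: float, harm: float) -> float:
    """Expected annual loss: the chance of the event per year
    multiplied by the harm suffered if it occurs."""
    return probability * harm

# Hypothetical annual probabilities and harms (harm in dollars).
risks = {
    "Spectre exploited on my phone": (1e-4, 5_000),
    "hit by a meteorite":            (1e-9, 1_000_000),
}

for name, (p, harm) in risks.items():
    print(f"{name}: p = {p:.0e}/yr, expected loss = {expected_loss(p, harm):.4f} dollars/yr")
```

Even a crude figure like this would let a reader rank risks and decide whether a response is worth the cost, which is precisely what phrases like “quite severe” and “near zero” cannot do.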

Nobody can do anything useful with the idea that one notional risk is higher or lower than another notional risk. What would be useful is knowing the chances of suffering harm, and how great that harm might be. We are supposedly talking about computer science after all, not the computer arts. Measurement is the cornerstone of science. However, when it comes to risk, we expect the ‘experts’ to start waffling in a horrendously unscientific fashion. Security experts should either present worthwhile stats about risk, or else admit they lack the data needed to properly answer the questions being put to them. Consider this priceless example from Professor Alan Woodward of the Department of Computer Science at the University of Surrey, as quoted by the BBC:

It is significant but whether it will be exploited widely is another matter

Woodward is a trained physicist and engineer, and a member of the Royal Statistical Society, yet this is the level of his contribution to the public understanding of risk? He might as well have said the bugs were: ‘serious but not very’ or ‘potentially catastrophic but maybe we’ll all be fine’.

When we discuss risk without using numbers and probabilities we allow ourselves to be led astray. However, it is not clear who is doing the (mis)leading. Are the experts letting themselves down by indulging in speculation when they should stick to science? Or are they simply kowtowing to the perception that the public is so innumerate that nothing would be gained by presenting hard facts as a series of figures? Whoever is to blame, the public will never gain an improved understanding of risk unless the experts actually make the effort to educate them. And because they rarely try, the average decision-maker lacks the basic skills to evaluate risk and incorporate it into their thinking. Should I jump out of the window when the house is on fire? Should I be less inclined to rely on phone apps which store sensitive data? Will a loan default? Will a customer complaint lead to an international scandal? Risk is everywhere, but the public discussion of risk occurs at a level suited to infants, and rarely includes simple, reliable numbers, even when the relevant data is plentiful and straightforward.

To make risk-based decisions for ourselves, we need more than instructions on how to behave; we need to understand how much risk is being incurred, and then decide how we want to react. The people giving instructions are not themselves at risk, and having no skin in the game, they may steer us toward the wrong outcome. Furthermore, they cannot put themselves into anyone else’s shoes. Their risk appetite is not your risk appetite is not my risk appetite. Nobody else can know how much risk I should take, because they do not know what I am trying to accomplish, or how much I am willing to lose in order to attempt to realize my goals. An astronaut accepts a great deal of risk to accomplish his or her ambition of travel into space, whilst another man or woman may choose never to fly to avoid the risk of dying in a plane crash. The best we can do is to advise what degree of risk is being taken when following one or other option, and then give people the freedom to decide if that level of risk is acceptable to them. So whilst Meltdown and Spectre may be serious and complex issues, the science and reporting surrounding these flaws reveals just how flippant and simplistic our discussion of risk can be.

Eric Priezkalns
Eric is a recognized expert on communications risk and assurance. He was Director of Risk Management for Qatar Telecom and has worked with Cable & Wireless, T‑Mobile, Sky, Worldcom and others. Eric was lead author of Revenue Assurance: Expert Opinions for Communications Providers, published by CRC Press. He was a founding member of Qatar's National Committee for Internet Safety and the first leader of the TM Forum's Enterprise Risk Management team. Eric currently sits on the committee of the Risk & Assurance Group, and is an editorial advisor to Black Swan. He is a qualified chartered accountant, with degrees in information systems, and in mathematics and philosophy. Commsrisk is edited by Eric.