Why Using Words to Discuss Probability Is Dangerous

People complain about fake news, but a lot of news is useless because it refers to matters that can only be properly described in numbers. Journalists, politicians, and even scientists often refuse to present the figures that supposedly support the words they choose. The absence of numbers in professional discourse encourages everyone to skip the discipline of justifying decisions with robust statistics derived from objective data. The rejection of numbers is especially acute in the realm of prediction, which is a shame because most important decisions are motivated by a desire to change the future, not the past. Sometimes the resistance to using numbers occurs because educated and knowledgeable people are unwilling to admit how much they rely on guesswork to do their jobs. At other times guesswork is unnecessary, but there is instead a desire to mislead by offering imprecise words in place of precise numbers.

Consider the following statements, all copied from a recent BBC News article about COVID-19 transmission rates.

These scenarios were possible

…a further surge in cases… is now unlikely

Waning immunity and increasing contacts… could easily lead to a rise. Although that could well be offset by…

A number of government modellers have told the BBC they expect to see another rise at some point

…there is likely to be lower immunity in the population than normal.

The worst of Covid may be behind us…

As a professional risk manager, I struggle to find useful predictions amongst most of the efforts to ‘inform’ the public about COVID-19. Public communication seems instead to be focused on playing mind games, or what would otherwise be called propaganda, except that COVID-19 leads the vast majority of us to be on the same side. The language used to describe the spread of the disease gets scarier when somebody decides the goal is to frighten people into being more careful, then relaxes when there is a countervailing fear that too much negativity can also encourage reckless behavior. However, every decision regarding this pandemic concerns probabilities: the likelihood of transmitting the disease, the likelihood of being hospitalized, and the likelihood of dying. We can only hope that the decision-makers who take such a grim view of the public’s ability to understand and react rationally to numbers are still using the most reliable numbers as the basis of their own decisions.

To illustrate why real decisions should be based upon numbers, suppose that I tasked you to become a well-paid investment fund manager, and I tossed you a copy of the Wall Street Journal or the Financial Times, but instead of these newspapers listing actual stock prices and foreign exchange rates they merely offered a series of opinions about whether it was ‘likely’ a company would rise in value or ‘possible’ that the dollar would fall. Any investor may draw the wrong inference from past and present data, but at least their decision is grounded in factual information and not how they feel about somebody else’s words. Or imagine that you and I sought to arrange a future meeting, but instead of using times and dates we could only say whether we felt the meeting should be sooner or later. Even if I know you are usually five minutes late, we still need to specify a time in order to make a meaningful decision.

My observation about the imprecision of the words used to discuss probabilities is hardly new, though it is worth repeating because we still live in a world where most people are happy to be imprecise about probability, and too many professional risk managers suffer the same defect. There have been past attempts to force people to use numbers instead of words when discussing probability. One that ultimately became famous, but which was secret at the time, was led by Sherman Kent, a Professor of History at Yale University who became known as the father of intelligence analysis because of his contribution to the Central Intelligence Agency (CIA). He observed that two CIA employees using the same words to describe the probability of an event might map those words to vastly different probabilities if they were described in numbers. Kent’s confidential CIA paper “Words of Estimative Probability”, which was declassified in 1993, includes an anecdote about analyzing the likelihood that Yugoslavia would be invaded in 1951.

A few days after the estimate appeared, I was in informal conversation with the Policy Planning Staff’s chairman. We spoke of Yugoslavia and the estimate. Suddenly he said, “By the way, what did you people mean by the expression ‘serious possibility’? What kind of odds did you have in mind?” I told him that my personal estimate was on the dark side, namely that the odds were around 65 to 35 in favor of an attack. He was somewhat jolted by this; he and his colleagues had read “serious possibility” to mean odds very considerably lower. Understandably troubled by this want of communication, I began asking my own colleagues on the Board of National Estimates what odds they had had in mind when they agreed to that wording. It was another jolt to find that each Board member had had somewhat different odds in mind and the low man was thinking of about 20 to 80, the high of 80 to 20. The rest ranged in between.

So here we have serious people, doing the serious job of evaluating the risk that a country will be invaded, and they all believe they have reached the same estimate even though they collectively used the word ‘serious’ to mean everything from a 1-in-5 likelihood to a 4-in-5 likelihood! It should go without saying, but I will reiterate it anyway, that if a group of sophisticated CIA employees who all speak English as their first language can exhibit such different understandings of the same English words then at least as much variation will be found when risk managers discuss probabilities in a business environment.
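The arithmetic behind Kent’s anecdote is simple: odds of ‘a to b’ in favor of an event correspond to a probability of a divided by (a + b). A minimal sketch, using only the odds quoted in the anecdote:

```python
def odds_to_probability(in_favor: int, against: int) -> float:
    """Convert odds of 'in_favor to against' into a probability."""
    return in_favor / (in_favor + against)

# The odds quoted in Kent's anecdote: the lowest Board member,
# Kent's personal estimate, and the highest Board member.
estimates = {"low": (20, 80), "Kent": (65, 35), "high": (80, 20)}
for name, (a, b) in estimates.items():
    print(f"{name}: odds {a} to {b} -> probability {odds_to_probability(a, b):.2f}")
# low: 0.20, Kent: 0.65, high: 0.80
```

Laid out this way, the disagreement hidden behind one shared phrase spans 60 percentage points.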

Kent’s concerns about imprecise use of words were followed up by Scott Barclay and his colleagues, the authors of a 285-page work entitled Handbook for Decision Analysis for the Defense Advanced Research Projects Agency (DARPA), an agency of the US Department of Defense. They presented a series of sentences to 23 NATO officers where only one phrase, concerning the likelihood of the outcome, was changed each time. For example, the sentence might include the words “almost certainly” or “probably not” or “highly unlikely”. The NATO officers were asked to convert the statement into a numerical probability. The following graph shows the range of answers they received, as presented in their handbook, with the small dots representing the answers given by the NATO officers, and the bars showing Kent’s recommendation of the probability range that should be assigned to each word.

The individual dots are quite hard to read from the original report, which is why the following version of the same graph, presented in Critical Thinking For Strategic Intelligence by Pherson and Pherson, makes it easier to see just how much the NATO officers differed in their interpretation of words.

Notice how much overlap can be found between different words. The same numerical probability could lead one NATO officer to describe the event as ‘probable’ whilst another would choose the word ‘improbable’!
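The overlap is easy to expose whenever you run a word-to-number exercise of your own. The sketch below uses invented numbers (not Barclay’s data) purely to illustrate how survey responses can be summarized to reveal it:

```python
# Hypothetical illustration: probabilities (as percentages) that imaginary
# respondents assigned to each phrase. The figures are invented, not Barclay's data.
from statistics import mean

responses = {
    "almost certainly": [85, 90, 95, 99, 80],
    "probable": [40, 70, 75, 55, 80],
    "improbable": [10, 20, 30, 45, 15],
}
for phrase, answers in responses.items():
    print(f"{phrase!r}: min {min(answers)}%, mean {mean(answers):.0f}%, max {max(answers)}%")

# Even in this small invented sample, one respondent read 'improbable' as 45%
# while another read 'probable' as only 40% -- the ranges overlap.
```

Printing the minimum, mean, and maximum per phrase is enough to show colleagues that their ‘shared’ vocabulary hides conflicting numbers.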

Old habits die hard, so I cannot expect every reader to ditch qualitative assessments of risk and switch to a fully quantitative approach, even though this article should have comprehensively demonstrated why an assessment based on words is sure to leave different recipients with conflicting views about the probability of the risks described. Perhaps I can encourage you to think more often about the danger of assuming a common understanding of words by suggesting that you and your colleagues play a game.

ProbabilitySurvey.com by Andrew and Michael Mauboussin is continuing the work of Kent and Barclay by giving you the opportunity to compare your interpretation of different words and phrases about probability with the interpretations submitted by many others, reproducing Barclay’s exercise with the NATO officers but on a much larger scale. Why not click on the link, submit your answers, and invite your colleagues to do the same? It will help your business to think about the dangers of making decisions based on probabilities that have only been described in words.

Eric Priezkalns
Eric is the Editor of Commsrisk. Look here for more about the history of Commsrisk and the role played by Eric.

Eric is also the Chief Executive of the Risk & Assurance Group (RAG), a global association of professionals working in risk management and business assurance for communications providers.

Previously Eric was Director of Risk Management for Qatar Telecom and he has worked with Cable & Wireless, T‑Mobile, Sky, Worldcom and other telcos. He was lead author of Revenue Assurance: Expert Opinions for Communications Providers, published by CRC Press. He is a qualified chartered accountant, with degrees in information systems, and in mathematics and philosophy.