Seeing the Big Picture for Benefits Analysis

Most of you will have heard of the US Government. As an organization, they are responsible for decisions like how much to spend on stopping other countries from invading the USA, how much to spend on preventing deaths from industrial pollution, and how much to spend on public infrastructure that will be used by private sector businesses and private citizens. As such, the US Government is intimately concerned with a topic that also interests the typical reader of this website: how to assess risk, and hence how to compare the costs of various current and potential activities to the uncertain benefits that will flow from them. Put like this, the US Government is the biggest risk management organization in the world. When considering the theory of risk management, I like to keep the US Government in mind as a practical example of how risk management works in practice. I do this for four reasons:

  • The US Government is big. They have a lot of money, spend a lot of money, and can do work to a degree of sophistication that is unrealistic for other, smaller, organizations.
  • The US Government is transparent, at least when compared to private sector businesses and many other governments.
  • The US Government is the culmination of democratic processes that permit its primary stakeholders (American citizens) to provide very extensive and open feedback about risk priorities.
  • The US Government makes lots of mistakes.

Do not get me wrong. Every organization makes lots of mistakes, so I have no intention of criticizing how well the US Government is managing risk in practice. To do that fairly, I would need as much data as the US Government has, and whilst I admit to being opinionated, and to having quite a good memory, I will not claim to have as much data as the entire US Government. Also, I could make only a negligible contribution to the total sum of criticism that the US Government receives all the time. I review their mistakes so I can learn from them, based on the principle that it is cheaper to learn from somebody else’s mistakes than from your own. Being so extensive in scope, the US Government deals with a much more extensive ‘universe’ of risk than any other organization, meaning that when their risk management suffers from a general and systematic error, it occurs at a level of generality that most closely approximates the level of generality found in international risk management standards like ISO31000 and COSO. After all, not many organizations have divisions as diverse as the Federal Reserve, which deals with mortgage underwriting risk (oops), NASA, which aims to send people into space and bring them safely back again (err…), and the Environmental Protection Agency, which many Americans accuse of doing too much.

Seen from this perspective, the US democratic process is the greatest human process for deliberating and deciding risk appetite. The resulting appetite for risk has global influence on the climate, world trade and security. So, given that I referred to mistakes, what are they doing wrong? Well, they suffer the same problem as everyone else. They suffer from bias. Or at least, they suffer from bias at recurring intervals; I will avoid commenting on whether the decisions reached today are more or less biased than decisions reached in the past, or which may be reached in future. It is sufficient to note that bias is evident in how the US Government assesses risk, simply because risks that seemed severe when one kind of politician was in charge are considered less severe when another kind of politician is in charge. Perhaps that is not surprising, but bear in mind that, at a fundamental level, the US Government follows the same ‘objective’ approach to compiling a cost-benefit analysis of risk as the paradigms presented in ISO31000 and COSO. The conclusion to be drawn is that following an ‘objective’ process is no guarantee of an objective decision. Even processes designed to promote objective thinking about risk will be prey to (conscious or unconscious) manipulation because of subjective judgements about what data is considered relevant and how to calculate its significance.

The point is illustrated by a recent article in The Economist. It gives examples of how, since Barack Obama became President, government agencies have performed the same kinds of calculation about risk, but have tended to include more benefits, and fewer costs, when evaluating whether to take action. Similar observations were made in a 2011 article for the New York Times. The latter focuses on the rise in the ‘value of a statistical life’, which is effectively the dollar value that US Government agencies use in their equations when deciding how much to spend on preserving a human life. Evidently your cost-benefit conclusions will differ depending on whether you value a life at $2M or $10M. So whilst the process can be considered objective – you gather data on costs, you gather data on benefits, then do a subtraction and see whether you are left with a plus or a minus – this superficial objectivity can easily be undermined by subjectivity in what data is used and how it is used. A cynic would observe that Republicans always reach objective conclusions on the value of a human life that lead to fewer regulations, whilst Democrats always reach objective conclusions on the value of a human life that lead to more regulations. I will not pick a side in a political debate, but merely observe that whenever people work backwards from the amount of regulation desired to the ‘objective’ data they select for the cost-benefit calculation, they have already biased that calculation and hence undermined the purpose of performing it.
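To make the arithmetic concrete, here is a minimal sketch of that subtraction. The regulation cost and lives-saved figures are invented for illustration; only the method mirrors the ‘objective’ process described above, and it shows how the $2M versus $10M valuation flips the conclusion.

```python
# Hypothetical cost-benefit test for a safety regulation.
# All figures are invented; only the 2M/10M values of a
# statistical life come from the discussion above.

def net_benefit(compliance_cost, lives_saved, value_of_statistical_life):
    """Benefits minus costs: positive says 'regulate', negative says 'do not'."""
    return lives_saved * value_of_statistical_life - compliance_cost

COMPLIANCE_COST = 50_000_000   # hypothetical cost of complying with the rule
LIVES_SAVED = 10               # hypothetical lives the rule would save

for vsl in (2_000_000, 10_000_000):
    result = net_benefit(COMPLIANCE_COST, LIVES_SAVED, vsl)
    verdict = "regulate" if result > 0 else "do not regulate"
    print(f"VSL ${vsl:,}: net benefit ${result:,} -> {verdict}")
```

With these invented inputs, a $2M life makes the rule a $30M net loss, whilst a $10M life makes the very same rule a $50M net gain: identical process, opposite ‘objective’ conclusions.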

If objectivity is not guaranteed by mechanically following a process, then we need to go beyond processes in order to deliver risk management that is fit for purpose. Honesty, transparency, and consensus all play a part in performing a reliable cost-benefit analysis. That is why I am so dismissive of cVidya’s ProactiV tool and the TM Forum model it was based upon. Put simply, it always concludes that the benefits of more software will outweigh the costs. Always. And they have so obscured this essential truth that Dr. Solotorevsky, of both cVidya and the TM Forum, actually claims the tool will help telcos to reduce risk. That is only true if you conveniently forget that businesses have multiple objectives and face multiple risks. No business has a primary and overriding objective to spend money buying software no matter how little benefit it delivers. People can kid themselves that using this tool helps them to objectively analyse risk, but I do not want to work for a company that will spend a million dollars for certain just to eliminate one cent of possible risk. There is no job security and little job satisfaction in working for a company that is wasteful, even if I work in the department that selfishly benefits from the waste. cVidya’s ProactiV tool is the metaphorical equivalent of a government department that says a human life is worth a million billion gazillion dollars and then demands tax rises to pay for all the ‘benefits’ it will deliver.

Just following a process or adhering to a standard is never sufficient to deliver objective risk management. To manage risks correctly, we must have a collective sense of priorities that allows us to address competing goals. The government weighs the risk of inadequate infrastructure investment against the risk of inadequate defence, and against the risk of stifling the private sector through excessive tax… and so on. The US Government is chosen by a democratic process that helps it arrive at the right balance; if a government’s risk appetite is badly out of line with the majority of voters, then the voters can and will replace it. An equivalent process occurs in business, though we must remember that an employee is not the metaphorical equivalent of a voter. The investor is our ‘voter’, and they vote with their money. The consensus we arrive at inside a business is like the consensus reached inside government. We should not confuse votes by political representatives with votes for political representatives, and nor should we confuse the interests of employees with the interests of the business itself. Employees have a stake, and that stake needs to be aligned with those of other stakeholders. Good alignment leads to mutual benefits. For example, employee safety is generally going to benefit longer-term investors. To be brutal, it costs money to replace and train people. Only a business that over-prioritizes quick returns to short-term investors will cut costs and corners at the expense of employee safety. Hence, it is possible to align the goals of different stakeholders to achieve mutual benefit. And the vehicle for this is the risk appetite statement, a public and transparent articulation of how the business weighs up and prioritizes the potential for variance between its various objectives and its actual performance.

To clarify the importance of understanding and setting priorities for variance, let me use a simple hypothetical example. A business may have two objectives: to generate $XM in profits, and to ensure no employees die as a result of accidents. What, then, if we find the business is under pressure and struggling to meet its profit objective? Cutting expenditure on safety may be a way to attain its profit goal, but this may come at a human cost. So a moral business would more readily accept a variance to its profit objective than to its safety objective.
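That ordering of priorities can be expressed very simply: list each objective with the variance the business tolerates, in priority order, and flag any objective whose actual variance exceeds its tolerance. A minimal sketch, with invented objectives and tolerance figures, of how the hypothetical business above would be assessed:

```python
# Hypothetical risk appetite statement: each objective has a tolerated
# variance, listed in priority order. All figures are invented.

APPETITE = [
    # (objective, tolerated variance)
    ("employee deaths from accidents", 0),   # zero tolerance for safety variance
    ("profit shortfall ($M)", 5),            # up to $5M below target tolerated
]

def breaches(actual_variances):
    """Return the objectives whose actual variance exceeds appetite."""
    return [name for name, tolerance in APPETITE
            if actual_variances.get(name, 0) > tolerance]

# The business under pressure: profit is $8M short, safety record intact.
print(breaches({"profit shortfall ($M)": 8,
                "employee deaths from accidents": 0}))
```

Here only the profit objective is flagged, which is the outcome the moral business accepts: a breach of the lower-priority profit objective is tolerated rather than traded against the zero-tolerance safety objective.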

The risk appetite statement acts like a manifesto, giving guidance to investors and to employees, so investors back the businesses that match their goals, and employees make decisions aligned to those goals. Like good government policy, the formulation of this statement will need to be supported by both an internal and an external discussion that both contribute to the formulation of an agreed mission with clear priorities.

And by now… most people would tell me this blog is too long. In a way it is, just like democratic processes can be exhausting in how long they take. Many voters will only take an interest in the weeks immediately before an election. Some government employees take only a sporadic interest in serving the public. Sporadic interest is also a problem insufficiently addressed by the textbook methods advocated in ISO31000 and COSO. A system that looks perfect on paper is of limited use if only occasionally followed in practice. Consider this: a company says it follows the standards and performs a cost-benefit analysis when deciding how to respond to risk. Fine. Then it must know the value it places on human life. How else could it calculate what to spend on safety? Of course, what really happens in some businesses is that they put in place some mechanics for calculating risk, but never finish the job. We end up with a peculiar, partial and superficial compliance with a risk standard, but not a genuine attempt to objectively analyse the costs and benefits of risk treatments across the full spread of properly prioritized risks. We end up with something that looks like an objective calculation by a government department, but which actually reverse-engineered its subjective selection of data from the conclusion it wanted to reach.

For all its failures, the US Government gets a lot right. At least we know what value is placed on a human life by US Government agencies. At least there is sufficient transparency that journalists can report on, and voters can learn about, how risk perceptions have changed and how this might be influenced by subjective factors. These decisions can be analysed for what they are, and are not lost because they are hidden from view or twisted into unrecognizable forms. And, thanks to transparency, I have a lot of data which helps me to improve the objectivity of decisions that people otherwise try to evade because they find them uncomfortable to make. Remember, not making a decision is still making a decision: the decision not to act now. Not making a reasoned decision about safety risks does not eradicate those risks, and nor does it mean there is no sense of risk priorities being manifest in practice. The same can be said of any type of risk. Mechanics, compliance, standards and process are not sufficient for good risk management, as demonstrated by arguments about the risk priorities of the US Government and its failures in practice. Honesty, transparency and a genuine desire to reach consensus are also important, especially if failings are to be identified and improvements made. Whilst the execution may be imperfect, the US Government illuminates the challenges of turning good risk management theory into good practice, and illustrates the moral characteristics that are essential for success.

Eric Priezkalns
Eric is the Editor of Commsrisk. Look here for more about the history of Commsrisk and the role played by Eric.

Eric is also the Director of the Risk & Assurance Group (RAG), a global association of professionals working in risk management and business assurance for communications providers.

Previously Eric was Director of Risk Management for Qatar Telecom and he has worked with Cable & Wireless, T‑Mobile, Sky, Worldcom and other telcos. He was lead author of Revenue Assurance: Expert Opinions for Communications Providers, published by CRC Press. He is a qualified chartered accountant, with degrees in information systems, and in mathematics and philosophy.