Why It Is Impossible to Measure All Leakage

Tomorrow I will publish yet another story about specialist auditors failing to identify regular, systematic overcharging errors by a British telco, though this one will stand out because they failed to protect hundreds of thousands of customers for 15 consecutive years. People with limited experience of the workings of a comms provider (which includes these auditors) will inevitably be shocked, but their reaction reflects their naivety. Preventing mistakes always seems simple when you do not have the burden of actually needing to understand why mistakes occur, or of finding ways to identify them in practice.

Mistakes can be found everywhere in life. Rockets explode, banking systems collapse, a journalist is seen trouserless on Zoom, and a dead Chinese bat somehow forced everyone to stay inside for a year. These outcomes were not intentional; they occur because people are fallible. Somebody in China is probably fuming at the suggestion that there were mistakes involving a dead bat, but this just illustrates how difficult it is to pinpoint errors and prevent them from recurring. If the world lacked the intelligence to prevent a killer virus from spreading globally then we should have realistic expectations about what might be accomplished through the half-assed audit of phone charges by people who spend more time staring at the bottom of tea cups than the figures on real bills. If a large telco and its auditors needed 15 years to identify mistakes that thousands of customers complained about, then what makes you think you can identify all the errors that go in the opposite direction, and which no customer ever complains about?

The poet Alexander Pope wrote “to err is human, to forgive divine”. Mistakes are so widespread that we have special names for them. Actors fluff their lines, software has bugs, publishers often find themselves printing an erratum, and anyone may commit a social faux pas. The special word used by telcos is leakage, which draws on a metaphor involving water escaping from pipes. The metaphor is imperfect: some of the errors that cause telco leakage can favor either the telco or its customers, whilst water companies never worry about a pipe delivering more water at its end than was pumped into its beginning. As ridiculous as it seems, it is sometimes necessary to remind people of these facts:

  1. People make mistakes
  2. We do not know what we do not know

Put those facts together and you may realize why you should refuse to follow the example of those tea-swilling auditors who spent 15 years promising that phone bills are accurate, and who will soon begin arguing that the identification of a 15-year-old mistake just proves that bills keep getting more accurate as a result of their work. Be sensible and do the opposite: admit that you will always fail to identify all errors. If you do that then you might get the resources needed to do a better job (instead of a budget for biscuits to go with the tea).

The remainder of this article will reiterate some messages that I first articulated in the early 2000s and during many of the years since. That means a few of you may feel you have heard this before. Then again, these messages never penetrated the thick skulls of my least favorite auditors, so it is still worth reiterating them. Perhaps younger generations of assurance and audit professionals will learn from the mistakes of the past, and do a better job as a result.

Collective enterprises often find it difficult to identify and measure mistakes. Customers may complain in large numbers, project costs may balloon, and profits may turn into losses, but the root causes of failures can remain elusive. Partly this is due to fear of the ‘blame game’: individuals become preoccupied with avoiding punishment. Even an organization like NASA can struggle with this. The investigation into the Space Shuttle Challenger disaster faced managers unwilling to admit to the failure of the O-rings in the Shuttle’s booster rockets, prompting Nobel Laureate Richard Feynman to demonstrate how brittle the O-rings were at low temperatures by putting the material in ice water on live television.

To learn from mistakes we must admit to human fallibility without discouraging individuals from talking honestly about failure. We might begin by setting an example: emphasizing that everybody should consider themselves ignorant, no matter how much people resist this notion for social reasons.

What You Know Is Less Than What You Do Not Know

US Secretary of Defense Donald Rumsfeld will always be remembered for one particular quote. When questioned about intelligence reports in relation to the Iraq War, Rumsfeld said:

There are known knowns. There are things we know we know. We also know there are known unknowns. That is to say, we know there are some things we do not know. But there are also unknown unknowns, the ones we don’t know we don’t know.

The Athenian philosopher Socrates said something similar roughly 2,400 years ago. He acknowledged the paradox that he knew more than other people even though he knew nothing, because at least he knew that he knew nothing.

Modern risk managers address the same topic through their conception of risk, which boils down to uncertainty about outcomes. Knowledge reduces uncertainty, and so reduces risk.
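For readers who prefer numbers, here is a minimal sketch in Python, with invented figures, of how that conception plays out for a leakage estimate: gaining knowledge may barely move the central estimate, but it collapses the range of plausible outcomes, and that narrowing of uncertainty is the reduction of risk.

```python
from dataclasses import dataclass

@dataclass
class LeakageEstimate:
    """A leakage estimate expressed as a range of plausible annual losses."""
    low: float   # best case
    best: float  # most likely value
    high: float  # worst case

    @property
    def uncertainty(self) -> float:
        """A crude risk measure: the spread between best and worst case."""
        return self.high - self.low

# Before any control exists, the figure is a guess with an enormous spread.
guesstimate = LeakageEstimate(low=100_000, best=2_000_000, high=10_000_000)

# After a control samples real bills, the central estimate barely moves,
# but the range of plausible outcomes collapses.
measured = LeakageEstimate(low=1_600_000, best=1_900_000, high=2_300_000)

print(f"Uncertainty before measurement: {guesstimate.uncertainty:,.0f}")  # 9,900,000
print(f"Uncertainty after measurement:  {measured.uncertainty:,.0f}")     # 700,000
```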

In my 2011 book, Revenue Assurance: Expert Opinions for Communications Providers, I illustrated the conundrum of not knowing what we do not know by referring to how we perceive an iceberg.

It is natural to focus on the visible portion of the iceberg, above the waterline. These are the ‘known knowns’ and ‘known unknowns’ that management spends much of its time dealing with. But beneath the waterline are unknown unknowns – mistakes and problems that are invisible because we do not think about them. We make more of the iceberg visible by thinking about what lies beyond our current field of vision.

How do we do this? One approach is to canvass opinions. These opinions might be recorded in risk registers and other documents, though we should never consider documentation to be the end goal of the exercise. We want to bring knowledge to the attention of people who should act upon it. To take the NASA example, the risk of an O-ring failure was known to some engineers. Engaging widely helps management to be aware of issues already known to staff.
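To put that last point in concrete terms, the following sketch uses hypothetical issue identifiers and descriptions; the mechanics are deliberately trivial. Documentation only becomes useful when somebody checks what staff already know against what the register already records, and escalates the difference.

```python
# Issues canvassed from staff, keyed by a short identifier.
# Every identifier and description here is invented for illustration.
canvassed = {
    "o-ring-cold": "Seals become brittle at low temperatures",
    "rating-gap": "Some call records never reach the rating engine",
    "tax-rounding": "Per-line tax rounding differs from the invoice total",
}

# Issues already captured in the risk register.
risk_register = {"tax-rounding"}

# The document is not the goal: anything staff know that the register
# does not yet reflect should be escalated to somebody who can act on it.
for issue_id, description in canvassed.items():
    if issue_id not in risk_register:
        print(f"Escalate to an owner: {issue_id} - {description}")
```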

Another method is to consciously avoid groupthink by giving individuals the freedom to express dissenting opinions. This principle can be extended to tasking a specific individual to think of and voice doubts about a project or proposal. In Greek mythology Cassandra foresaw the fall of Troy, but was ignored. A modern corporate Cassandra is a team member whose role requires them to speculate about potential causes of failure, testing the resilience of hidden assumptions. Effective business assurance professionals should be willing to play the role of Cassandra and educate the business about why this should be seen as helpful instead of being dismissed as obstructive.

Make Knowledge a Dimension of Measurement

Enterprises measure their performance, but do they measure enough? Another illustration from Revenue Assurance: Expert Opinions for Communications Providers addresses this question with a diagram of four quadrants.

On the y-axis we plot the value of leakage; this is what management typically spends a lot of time managing. On the x-axis we plot the completeness of our knowledge. A business with a Six Sigma philosophy may have many controls and measures in place. Other enterprises, especially new ones, will have fewer. Having fewer controls may not be wrong; a new business needs time to grow. However, we should try to be conscious of the gaps in our knowledge as well as dealing with the issues we already know about.

I labelled the four quadrants of the diagram as: complacency, fear, challenge, and delivery. There is an evolutionary path that businesses take which leads to superior measurement of leakage, as sketched in the code that follows this list.

  • Few issues are reported in the complacency quadrant… but that is because there are so few measures of performance. Businesses need to avoid getting ‘stuck’ in the complacency quadrant where it seems like they are doing well but only because they are ignorant of their failings.
  • A rough and imprecise measure is better than none, so one way to move out of complacency is to migrate to the fear quadrant, perhaps by asking for subjective opinions about risks and issues from experts both inside and outside the enterprise. Guesstimates generate a sense of fear about risks, but fear is better than ignoring risk.
  • The good thing about fear is that it helps everyone understand the benefits of controls and measures, leading to their implementation. Implementing those new controls and measures causes the business to gradually move into the challenge quadrant, where the enterprise learns some previous fears were exaggerated, but this is balanced against previous ‘unknown unknowns’ that are visible for the first time.
  • Fixing problems in the challenge quadrant leads the business to progress to the final and most mature stage, the delivery quadrant, where leakage is steadily and systematically reduced. Now the enterprise has a more complete understanding of itself.
  • However, the organization must continue to think of the future and address changes that introduce new leakage risks, or it will slide back towards complacency. This can occur because the controls and measures the business has are no longer the controls and measures that it needs.
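Here is the sketch promised above: a rough Python rendering of how a business might locate itself among the four quadrants. The 0.5 boundary on knowledge and the materiality threshold are my assumptions for the purpose of illustration; the original diagram draws no hard lines.

```python
def quadrant(knowledge: float, reported_leakage: float, materiality: float) -> str:
    """Locate a business within the four quadrants.

    knowledge: rough completeness of controls and measures, from 0 to 1,
        e.g. the fraction of known leakage categories that are monitored.
    reported_leakage: the annual value of leakage currently visible.
    materiality: the threshold above which visible leakage counts as high.
    """
    knows_much = knowledge >= 0.5        # arbitrary split for illustration
    leaks_much = reported_leakage >= materiality
    if not knows_much:
        return "fear" if leaks_much else "complacency"
    return "challenge" if leaks_much else "delivery"

# A business with few measures and little reported leakage only looks healthy.
print(quadrant(knowledge=0.1, reported_leakage=50_000, materiality=1_000_000))
# -> complacency
```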

There are various ways the organization can come to understand the quality and completeness of its controls and measures with a view to improving them. One way is to embed an objective analysis of the quality of the information used to evaluate risks within its risk registers. Guesstimates should be described realistically – a hunch is better than nothing, but it should also prompt the identification of cost-effective ways to improve decision-making through enhanced data.
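As a sketch of what embedding that analysis might look like, using an invented four-step evidence scale rather than any prescribed methodology:

```python
from dataclasses import dataclass

# An invented grading of evidence quality, from weakest to strongest.
QUALITY = ("hunch", "expert estimate", "sampled data", "full measurement")

@dataclass
class RiskEntry:
    name: str
    annual_value: float  # estimated leakage, in currency units
    evidence: str        # one of QUALITY

    def needs_better_data(self) -> bool:
        """A guesstimate described realistically should prompt the search
        for cost-effective ways to improve the underlying data."""
        return QUALITY.index(self.evidence) < QUALITY.index("sampled data")

register = [
    RiskEntry("Unrated international calls", 3_000_000, "hunch"),
    RiskEntry("Duplicate supplier invoices", 400_000, "full measurement"),
]

for entry in register:
    if entry.needs_better_data():
        print(f"Improve the data behind: {entry.name}")
```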

Another approach is to invest in a team of data scientists to serve internal customers within the enterprise. This allows specific functions in the business to focus on what they do best, whilst leveraging collective data resources and staff with specialized informatics skills.

A third method involves comparing separate organizations to each other. A leakage that is actively monitored by one business may have persisted as an ‘unknown unknown’ at another. Two organizations may both find they are estimating a leakage, but one of them may use a superior method to generate their estimate. Industry-wide comparison is the rationale for the RAG Leakage Catalog, the most comprehensive inventory of revenue and cost leakages for comms providers. The catalog was compiled and is maintained by crowdsourcing insights drawn from the assurance work performed by many comms providers around the world. It takes the contribution of just one person to turn an unknown unknown into a known unknown for everybody else who compares their company’s leakage coverage to the contents of the catalog.
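The comparison itself is simple enough to sketch in a few lines of Python. The identifiers below are invented, not the catalog's real categories; the point is that every gap between the catalog and a company's own coverage is an unknown unknown converted into a known unknown.

```python
# Hypothetical identifiers; the real RAG Leakage Catalog uses its own scheme.
catalog = {
    "unrated-usage", "suspense-aging", "duplicate-invoices",
    "tax-misconfiguration", "roaming-rate-errors",
}

# The leakages this company actively monitors.
our_coverage = {"unrated-usage", "duplicate-invoices"}

# Until the comparison was made, each of these gaps was an unknown unknown
# for this company; now each is at least a known unknown.
for leakage in sorted(catalog - our_coverage):
    print(f"Known unknown to investigate: {leakage}")
```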

Today’s live broadcast of RAG TV will involve two operators benchmarking their leakage coverage using the catalog. Kerry Evans of BT and Jayaprakash Devaraj of Vodafone Qatar will also be joined by Geoff Ibbett, the custodian of the RAG Leakage Catalog. Kerry and Jayaprakash will walk through the process of assessing their leakage coverage, with Geoff giving his advice. If you have not joined our comparative study then why not complete the self-assessment whilst watching the show? Geoff will also take questions from the live audience, with the broadcast starting at 8am Eastern, 1pm UK, 6.30pm India; click here to save it to your diary.

Anyone who completes the benchmark assessment will receive a personalized report that compares their company’s leakage coverage to industry norms. If you cannot watch the live stream then you can always watch the recording at your convenience, and you have until the end of March to complete the questionnaire and join the study.

The RAG Leakage Catalog is extensive, covering 25 categories of revenue and cost leakages, but the catalog will never be completed. There will always be unknown unknowns that have yet to be captured. This is because people will always keep finding new ways to be mistaken, and it takes time for these mistakes to be acknowledged. That is also why our measures will always understate real leakages. A wise professional should not concentrate on how much they already know, but should remain acutely aware of what they still need to learn.

Eric Priezkalns
http://revenueprotect.com

Eric is the Editor of Commsrisk. Look here for more about the history of Commsrisk and the role played by Eric.

Eric is also the Chief Executive of the Risk & Assurance Group (RAG), an association of professionals working in risk management and business assurance for communications providers. RAG was founded in 2003 and Eric was appointed CEO in 2016.

Previously Eric was Director of Risk Management for Qatar Telecom and he has worked with Cable & Wireless, T‑Mobile, Sky, WorldCom and other telcos. He was lead author of Revenue Assurance: Expert Opinions for Communications Providers, published by CRC Press.
