The Myth of 3% Leakage Has to End

Without a transformed approach to revenue and service assurance, over 50% of respondents admit that they will risk losing more than 3% of the revenues expected to come from digital services (i.e. TV services, etc) in 2017 alone.

Openet commissioned a new survey about revenue leakage, but I am not sure why they bothered. You already know the results. Everybody already knows the results. And that is because everybody knows what this profession’s mainstream opinion is, even though that opinion never seems to be supported by anything as straightforward as evidence or data. I would go further, and argue that people repeat consistent opinions because they have previously heard other people repeat the same opinions, whilst never hearing any contrary opinions. This is not a sign of a robust and healthy intellectual position. We admire scientists because they do experiments, gather data, and can refute each other when the data points to different conclusions. How is it that assurance, which should be done by people with skeptical and enquiring minds, has a community which reliably comes to consistent conclusions about the levels of leakage they face, even if its practitioners work in totally different telcos in totally different parts of the world? Does that not seem strange? And does it not seem even stranger when we consider that some assurance practitioners also see themselves as data scientists?

I know I will not win any popularity contests by making these observations. Perhaps that is the point. Perhaps there is safety in numbers. Perhaps we need to say leakage is 3 percent in order to persuade our bosses to invest in assurance. Perhaps the argument is strengthened by being able to say everybody else, wherever they work, agrees on the scale of leakage. But let me reiterate the flaw: nobody is presenting any evidence or data to support this assertion. The mainstream opinion that leakage is 3 percent is like a religious mantra. If you are inclined to believe, then repeating a dogma will strengthen your belief. But if you are a non-believer then the words sound like meaningless babble. Without hard facts to support our claims we might as well be repeating superstition or casting magic spells. My reason for protesting this approach is not because it cannot be true that leakage is 3 percent, but because I have dealt with plenty of non-believers. I have watched their eyes roll, their eyebrows rise and their mouths smirk as some low-confidence RA manager trots out “3 percent” as the justification for anything from discontinuing a flawed tariff to implementing a new database. Even if true, 3 percent represents the projected aggregate error of many separate errors with many separate causes. How can knowing the total help anyone make the argument for prioritizing investment where it will deliver the best return?

We should analyze the report, because I give its authors credit for asking one good and unusual question. But let us analyze it with our critical faculties switched on, in the same way we apply those faculties when looking for weaknesses and flaws in our own business, instead of just skimming for the messages that are most favorable to us. The report begins:

Digital transformation continues to dominate the telecoms industry as one of 2017’s single biggest trends. The term remains nebulous, and stakeholders across nearly every industry vertical market are gearing up for the next 12 to 18 months to bring era-defining change.

The most useful word in this paragraph is ‘nebulous’, which is another way of saying you are talking about something but cannot be sure what you are talking about. We can agree that the industry is changing – except that the industry is always changing, and every new era is described as the next new era. In short, if you have done a global survey of opinions about the leakage caused by the digital transformation of telcos, then it is sufficient to state that plainly. This report suffers from a lot of interpretation and loose words not backed by actual data.

… keeping ahead of the curve is of paramount importance. Services will change; indeed, they already are. Where voice, SMS and data once ruled supreme, mere connectivity is no longer considered enough. That’s not a surprise to anyone in the industry…

This is one excerpt but it represents the kind of empty guff that I could have copied from twenty other paragraphs in this report. What this says is: waffle, waffle, waffle. If there is a fact that comes as no surprise to anyone then it really does not need repeating! Let me skip ahead, ignoring most of this report’s pointless verbiage…

Perhaps of even greater consequence to the telecoms industry, beyond devising the underlying infrastructure required to even deliver these services in the first place, is the discussion around billing accuracy, revenue and network assurance in this new paradigm. Ensuring billing systems are up to the task of monitoring and appropriately charging for services rendered is arguably the most important consideration.

I suppose if you work in billing or revenue assurance you might want to feel that way. Feel free to boost your own ego, or pat yourself on the back. But if you actually made the argument to executives responsible for marketing or technology then you should expect to be treated with contempt. If there is no technology to deliver a service then there is nothing to bill. If there are no customers to buy a service then there is nothing to bill. So we should avoid this kind of nonsense about what is ‘arguably’ most important. In general, telcos need everybody that works for them. If you want to succeed at assurance, be holistic. Trying to be more important than other people will only make you enemies.

We interviewed 117 operator respondents from around the world by way of a 10 question survey.

At last! They tell us who was surveyed. Except they do not tell us what kinds of people were surveyed. I assume they were not surveying Chief Marketing Officers, but I might be wrong…

We began this survey by asking the audience what percentage of their company’s revenue they think will come from digital services in 2017.

And how do they define digital services? With a suitably nebulous definition…

This would include, for example, TV services, entertainment, IoT, healthcare and so on.

I always loved to assure revenues from ‘and so on’ services, writing up reports about the amount they leaked and what to do about it. Imagine the leakage report: “Last month Xtel made $3mn from and so on, but a further $240,000 was lost because of underbilling of and so on. The root cause of the revenue leakage was and so on. The recommended control to reduce this leakage is that we implement an automated and so on.”

… more than 30% of respondents indicating that more than 20% of operator revenues will come through these [digital service] channels.

Fair enough. So digital services matter, even if they have been vaguely defined to cover everything from low-price high-volume B2B M2M for smart meters to high-price low-volume B2C PPV live action sports! This finding was also skewed by including cable and satellite operators in the survey; they obviously receive most of their existing revenues from television.

So with such a significant proportion of money coming in through this channel, what does the audience consider to be the biggest challenge in protecting said revenue streams? Our next question sought to clarify just that. The top priority, given a rating of ‘important’ or ‘very important’ by 91.2% of the audience, is real-time assurance. Effectively, this is the shoring up of revenues generated by realtime data streaming services – such as video on demand or live television streaming. With data consumption trends shifting towards an ever increasing amount of video streaming, it is rightly being treated as a top challenge by operators the world over.

In case you did not spot it, this is the part of the report where they tried to trick the audience in order to engineer the conclusion they wanted all along. The finding that 91 percent want real-time assurance is the headline of the press release, but it bears all the hallmarks of a question that was designed to prompt that answer. We are told that this was the most important category, but the report does not present a table showing all the categories which respondents were asked to rate. It does not follow that having a lot of people say this is ‘important’ or ‘very important’ makes it the ‘top’ priority. Why not simply break out the scores for ‘important’ and ‘very important’ across every category, so we can see which ones scored highest for ‘very important’, and which received fewer ‘very important’ scores but were still thought ‘important’?
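The missing table is easy to sketch. Here is a minimal illustration in Python of why the breakdown matters; every category name and number below is invented, since the report never published its underlying data. The point is that combining ‘important’ and ‘very important’ can crown a different winner than counting ‘very important’ alone:

```python
from collections import Counter

# Hypothetical ratings from 117 respondents per category.
# None of these figures come from the actual report.
responses = {
    "real-time assurance":  ["very important"] * 35 + ["important"] * 60 + ["not important"] * 22,
    "usage reconciliation": ["very important"] * 55 + ["important"] * 35 + ["not important"] * 27,
}

breakdown = {}
for category, ratings in responses.items():
    counts = Counter(ratings)
    breakdown[category] = {
        "very important": counts["very important"],
        "important": counts["important"],
        "combined": counts["very important"] + counts["important"],
    }

for category, row in breakdown.items():
    print(category, row)
```

In this made-up example, real-time assurance ‘wins’ on the combined score (95 versus 90) even though usage reconciliation attracts far more ‘very important’ ratings (55 versus 35) – which is exactly the distinction the report glosses over by quoting only a combined percentage.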

Another red flag is raised by the use of the word ‘effectively’. Good survey technique means you show us the questions asked and the answers given; you do not tell us what was in the head of the people answering the questions because nobody knows that. Perhaps the average respondent was thinking about prepaid services so said real-time assurance was important for that reason. It need have nothing to do with ‘realtime data streaming services’, which have been poorly defined anyway. A live sports broadcast is indisputably ‘real time’ but video on demand is not. Data streaming occurs at the time it occurs, just like everything occurs at the time it occurs, so what is the meaning of ‘real-time’ when used in this context?

The second most-challenging element of revenue protection is more related to the individual network elements responsible for delivering data services to the end-user. 87.4% of the audience believe that ensuring revenue assurance processes for all transactions to reconcile 100% of usage data from network elements is of great importance to operator revenues.

Maybe the respondents were not conscious of the conflict, but the report writer’s interpretation of this data flatly contradicts their interpretation of the previous data. If I have genuine real-time assurance I do not need a batch-oriented reconciliation after the fact. Now I get the feeling that these answers are more like asking people if they want prepaid assurance – yep! – and if they want postpaid assurance – yep! – without anyone being asked to express any meaningful priorities about what they want.

The report goes on analyzing this question at length, but without presenting all the data. This is poor technique. Remember that we are supposed to be concerned with data. If you start quoting ‘facts’ like these at your bosses they are entitled to be skeptical, because you cannot show the detail that supports the interpretations chosen by the writers of this report. You cannot show it because you have not seen it!

So let us skip some questions and get to the good part of the survey.

It would appear as though the audience has a fear of underbilling, with more than one in five respondents suggesting that underbilling might increase by more than 5%; which would account for a significant amount of revenue loss if this transpires to be the case. Conversely, more than 20% of respondents seem to think there will be either no change or less than a 0.5% increase in underbilling – indicating that opinion is fairly divided on this subject.

This is an interesting question and finding, though still imperfect. One problem here is that the question is biased. Underbilling can go down, as well as up, so why only ask for predictions where the best-case scenario is that underbilling will stay the same? It is not impossible to imagine that a shift to digital services might reduce underbilling. For example, the trend towards digital services will lead some telcos to scrap usage charges and charge higher monthly fees instead. If the telco has a history of underbilling usage charges but always gets its monthly fees correct then underbilling will be reduced.
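A back-of-the-envelope sketch shows why the best case need not be ‘no change’. All figures below are invented for illustration: a telco that leaks 4 percent of its usage charges but always bills its flat monthly fees correctly, and which then shifts half of its usage revenues onto flat fees:

```python
# Hypothetical telco: usage charges leak, flat monthly fees never do.
usage_revenue   = 60_000_000   # billed via usage charges (leaky)
flat_revenue    = 40_000_000   # billed via flat fees (accurate)
usage_leak_rate = 0.04         # assumed 4% underbilling on usage charges

leak_before = usage_revenue * usage_leak_rate
print(f"underbilling before: {leak_before / (usage_revenue + flat_revenue):.1%}")

# After the shift to digital services: half the usage charges are
# replaced by (perfectly billed) higher flat fees; total revenue unchanged.
usage_after = usage_revenue * 0.5
flat_after  = flat_revenue + usage_revenue * 0.5
leak_after  = usage_after * usage_leak_rate
print(f"underbilling after:  {leak_after / (usage_after + flat_after):.1%}")
```

Under these assumptions underbilling falls from 2.4 percent to 1.2 percent of total revenue – a scenario the survey gave nobody the option to predict.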

One useful aspect of this question is that it prompted an interesting spread of answers. This shows that mainstream opinion about so-called average leakage is actually falling apart. One in five are sticking to the traditional hardline RA assertion that the sky is about to fall on their heads. A different one in five says they see no reason for pessimism. Clearly not everybody is trying to motivate their bosses in the same way.

The divergence of opinions may also explain the weasel words in this report – “suggesting that underbilling might increase by more than 5%”. Either the survey asked people to give a prediction or it did not. Which is it? Did the respondents give their best estimate of what will happen, or were they sharing the gloomy dread inspired by an especially dark and rainy morning when they were feeling a bit depressed?

With overbilling, the audience seems unconcerned by an apparent risk of overbilling, with exactly 66% of respondents suggesting that overbilling will increase by a maximum of less than 1%. Less than one in five responses hint at a fear of more than 1% overbilling.

This is the best question in the survey. For once, we have a survey that admits errors go both ways. And maybe the answer is true. Again, it seems nobody was given the option to say overbilling might fall.

To the extent that the respondents share the report writer’s complacency it is obvious they have not seen the newspaper headlines about big fines for telcos caught overbilling customers. Regular readers of Commsrisk will have noticed the trend for more telcos to be punished for overbilling.

We cannot tell if the survey results have been manipulated because the raw data is not made available, so we do not know what the estimated overbilling was for those telcos who predicted it would be over 1 percent. For all we know they predicted it would be over 5 percent but the result has been categorized in such a way as to create a skew.

What would have been really interesting is to present the combined results of the underbilling and overbilling questions for all operators. In other words, we should have seen how many telcos are worried about both overbilling and underbilling, how many are worried about neither, and how many are worried by one but not the other. This would be far more interesting than simply giving separate averages and concluding underbilling is a concern but overbilling is not. Think about it logically: some telcos must have weaknesses that could lead to either kind of error. Some will be so simple and perfect that every bill is always accurate. And some telcos and individuals only worry about underbilling – either because it is difficult to imagine errors that lead to overbilling, or because they are biased (and a bit foolish). Combining the results to show the range of concern would have illustrated the differences in opinion that undermine the mainstream message of so-called average leakage.
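The cross-tabulation I have in mind is trivial to produce for anyone holding the raw answers. A minimal sketch, using made-up per-respondent data since the report released none:

```python
from collections import Counter

# Hypothetical raw answers, one tuple per respondent:
# (worried about underbilling?, worried about overbilling?)
respondents = [
    (True, True), (True, False), (False, False), (True, True),
    (False, True), (True, False), (False, False), (True, True),
]

crosstab = Counter(respondents)
print("worried about both:    ", crosstab[(True, True)])
print("underbilling only:     ", crosstab[(True, False)])
print("overbilling only:      ", crosstab[(False, True)])
print("worried about neither: ", crosstab[(False, False)])
```

Four numbers instead of two separate averages, and suddenly you can see how opinion actually splits – which is precisely what would undermine the mainstream message of so-called average leakage.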

… 60.2% saying batch analysis is the current norm. One in four respondents say the majority of existing revenue assurance systems conduct proceedings in real-time; while a cautious 13.6% said they were unsure.

Although the survey asked reasonable questions about underbilling and overbilling, this excerpt exemplifies how poor it is, and why we should not rely upon it. If you do not even know if your revenue assurance system does real-time or batch analysis, why would I ask your opinion about leakage? And why are you answering questions about what revenue assurance systems you need instead?

Let us skip more of the poorly-worded questions and dodgy interpretations to cut to the analysis which supported a key headline for the press release, the aforementioned statement that:

Without a transformed approach to revenue and service assurance, over 50% of respondents admit that they will risk losing more than 3% of the revenues expected to come from digital services (i.e. TV services, etc) in 2017 alone.

So how did the report writers analyze this?

A statistic we didn’t reveal earlier in the paper, because it seems to aptly summarise the current scenario operators find themselves in, is that more than 50% of respondents believe they will risk losing more than 3% of digital services revenue if they don’t transform their approach to revenue and service assurance. We’ll just leave that stat there.

They literally offered no more explanation of this statistic. So let us look at the question they asked, per the graph which displayed the answers:

How much of your existing digital services revenues do you think could be lost if existing revenue assurance systems aren’t upgraded to cater for digital services, virtualised networks and real-time systems and processes?

This is as badly-worded a question as it is possible to imagine. What does ‘could’ mean in this context? Are we dealing with existing revenues, or the revenues that would have been earned after the imagined date for this upgrade? Does the question ask for the worst-case scenario? Is it a prediction of what will happen? Is it something in between?

If you want to avoid being hammered by skeptical executives then never use the word ‘could’. They want to know what will happen, not what might happen. Rational decision-makers base their judgements on forecasts, not fearcasts. With such loose language it is also true that my entire house is at risk, all of the time, no matter what I do! Think about it: an asteroid could fall out of the sky, my neighbor may set light to his house and burn mine down too, there could be a nuclear war, and my house might sink into a giant hole that suddenly opens up in the ground. None of these are especially likely but they could happen, and each would destroy my house in the process. So should I just leave that fact there, encouraging everybody to rave that every house is at risk? Of course not. Facts like these do not encourage sensible decision-making. They encourage cynicism and apathy.

Here we see the 3 percent myth has gone full circle, only to leave us chasing our own tails. A fact gets quoted, but nobody knows what it really means. Opinions are surveyed, but nobody knows what the opinions actually represent. Apparently some data was collected, but we are not trusted to see it. Some of the people who supplied data appear to know nothing about anything, but their data was not excluded. Suppose that sometime later these ‘facts’ get repeated to an executive, in the hope of persuading the exec to invest more in revenue assurance. Maybe the executive is already sympathetic to revenue assurance, so nothing is gained because the executive already agreed without wasting their time on pseudo-statistics. Or maybe the executive is hostile to assurance, so now they have an additional example of why the people doing it should be treated with scorn.

Oddly enough, the people who wrote this report call themselves Telecoms Intelligence. To download the report you need to provide personal details here. I just supplied them with a made-up name, phone number and other data; it seemed the appropriate thing to do.

Eric Priezkalns
Eric is the Editor of Commsrisk. Look here for more about the history of Commsrisk and the role played by Eric.

Eric is also the Chief Executive of the Risk & Assurance Group (RAG), a global association of professionals working in risk management and business assurance for communications providers.

Previously Eric was Director of Risk Management for Qatar Telecom and he has worked with Cable & Wireless, T‑Mobile, Sky, Worldcom and other telcos. He was lead author of Revenue Assurance: Expert Opinions for Communications Providers, published by CRC Press. He is a qualified chartered accountant, with degrees in information systems, and in mathematics and philosophy.

3 Comments on "The Myth of 3% Leakage Has to End"

  1. Always a pleasure to read your analysis of these surveys which look a lot like advertorials.
    As to why they claim 3%, see this thread, I think it’s as relevant as anything else on the topic and I really don’t think there’s more to it …

    • Great point! Psychological studies tell us that people behave the way that appears correct, not the way that is actually correct. That is why if you ask people for a string of ‘random’ numbers they may say ‘739246’ but not ‘111111’, even though both strings would be equally likely to occur at random. Hence you get the paradox that people choose some ‘random’ numbers much more often than others.

      I had a similar conversation the other day, showing what can go wrong with risk management too. I have an old story about visiting a financial advisor and he wanted to gauge my appetite for risk. He showed me an actual piece of paper with the numbers 1 to 5 printed along a line. He said that 5 means the most risk, 1 means the least risk. (I don’t know why he showed me the paper as well as describing the numbers, but he did both!) He then asked me to point at the line and pick the number which matches how much risk I wanted to take with my investments. I said and pointed at “3”. And then he said “that’s what people usually say.” Of course that is what people usually say! It means nothing. It’s not an objective measure. Everybody thinks “not too much risk” (=5) and “not too little risk” (=1). So they pick the number in the middle. But as ‘3’ is not an objective measure of anything, your idea of ‘3’ may be totally different to my idea of ‘3’ when it comes to what kind of risk we’re taking when making an investment. In other words, people are great at giving numerical answers to questions without having any kind of objective measure of anything! And risk management is often like this, with pseudo-scientific numbers being used to cover real gaps in our understanding.

      • Well at least this is a harmless survey. I would not be surprised at all if in real life, the actual risk budget allocated to projects is actually around 3% by default and the companies live under the illusion that they’ve done risk management.
