New RAG Docs Apply Maths and Logic to Revenue Assurance

There has always been a tension in revenue assurance between those with a technical focus and those oriented more towards people and processes. The former like to write code and SQL queries, mine data and find leakages by using technology, whilst the latter use analytical and project management skills to elicit information from people, and then seek to drive necessary change. The technical types can excel at detecting issues by finding anomalies in data, but they might not have a clear idea of how to address those issues afterwards, sometimes leading to the tragedy of a job spent watching the same problems recur again and again. Other skills are also very important to a successful RA team, but it is hard to make the argument for revenue assurance if it never detects leakages in the first place. RA teams need a range of skills, and not everybody will be good at everything. Allowing for that, I still believe many practitioners lack an adequate grounding in mathematics and logical thinking, and it is a severe weakness to expect only the ‘technical’ side of the RA team to be competent with numbers and the basics of data science. To encourage a more rounded view of education I recently submitted two papers on maths and logic to the library of the Risk & Assurance Group (RAG), and I am happy to say they have been approved following a review by RAG members.

In “Using Statistics for Precision Assurance Testing: A Worked Example” I show how to use statistics to calculate the size of a sample for testing. Part of my motivation for producing this paper is the realization that many intelligent and numerate practitioners choose to be lazy when it comes to making such calculations. The maths may not be that hard, but rather than perform the calculations it is easier to justify a poor decision by relying on prejudice and expecting nobody to argue against it. As a consequence, I have heard all sorts of ridiculous things said about sample sizes, including the following.

  • ‘A smaller sample size is best because we find too many issues already. We mostly get false positives so we would waste time if we took a larger sample.’
  • ‘If you only take a sample then all the issues will be with the calls you did not sample, so you have to test all of them to be sure.’

Both of these statements are nonsense. False positives are not a good justification for doing less work in total; misleading test results should be addressed by improving the accuracy of tests, not by doing fewer tests. On the other hand, I have heard some bald-faced liars insist that sampling is never adequate and that you always have to test the whole population. This is especially shocking when it comes from computer scientists, or from otherwise mathematically competent people who have chosen to use unscientific prejudice to sell more software. Anyone who really believes you need to test an entire population of millions or billions of transactions to form a reliable opinion about error rates desperately needs the remedial maths education they did not receive at school. The paper shows that even when setting yourself an unusually precise assurance goal – the worked example requires that fewer than 1 call in every 50,000 is charged incorrectly – you can attain a very high level of confidence with a sample of a few hundred thousand transactions. The efficiency saving is obvious when the choice is between testing a billion transactions and testing the few hundred thousand needed to gain statistical confidence that the error rate is very low in practice.
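
How few samples you need is easier to appreciate with the simplest possible construction: if you draw n transactions at random and find zero errors, the chance of seeing that result when the true error rate is actually p or worse is at most (1 − p)^n, so any n satisfying (1 − p)^n ≤ 1 − C gives confidence C that the rate is below p. The sketch below applies this zero-failure bound to the worked example’s target of 1 error in 50,000; it is my own minimal illustration of the arithmetic, not necessarily the exact method used in the paper.

    import math

    def sample_size(max_error_rate: float, confidence: float) -> int:
        """Smallest sample such that observing zero errors lets us assert,
        at the given confidence, that the true error rate is below
        max_error_rate (a zero-failure binomial bound)."""
        # Require P(zero errors | rate = p) = (1 - p)^n <= 1 - confidence
        return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - max_error_rate))

    p = 1.0 / 50_000  # the worked example's goal: under 1 bad call per 50,000
    for c in (0.95, 0.99, 0.999):
        print(f"{c:.1%} confidence -> sample of {sample_size(p, c):,}")
    # 95.0% confidence -> sample of 149,787
    # 99.0% confidence -> sample of 230,259
    # 99.9% confidence -> sample of 345,388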

Nothing in the statistics paper is difficult to understand. It can be followed by anyone willing to think methodically about how to work through a question. In practice we rarely seek such high levels of precision; when launching a new product we may be seeking quick assurance that errors afflict fewer than 1 percent of all related transactions. Hence the method used in the paper also illustrates how easy it is to apply a robust sampling technique when looking for rapid answers about risk exposure.
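
For contrast, the same zero-failure bound sketched above shows how little data the looser launch-time goal demands; this reuses my illustrative sample_size function, not the paper’s own workings.

    # Rapid check at launch: error rate below 1% with 95% confidence
    print(sample_size(0.01, 0.95))  # -> 299 transactions are enough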

The other paper is entitled “The Objectives of Revenue Assurance: Completeness, Accuracy, Validity and Timeliness”, though mostly it uses a simple formal logical system to discuss the difference between accuracy and validity. Unlike the statistics paper, this one does not demand that RA practitioners learn its techniques; nobody needs first order predicate logic to do this job, but anybody with a basic understanding of computer science should already be familiar with the same logical concepts, even if they use different words to describe them. Starting with a funny rant about a not-as-funny row I had with a very intelligent colleague, the paper shows how logic can help us to avoid confusion when discussing objectives and circumstances. Error does not stem solely from software bugs or mistaken configurations. It is also caused by people failing to understand each other, often because one person makes an ambiguous assertion and the listener then makes false assumptions. We can improve our performance by applying logical thinking and using logical descriptions when performing our work. The specific comparison between accuracy and validity illustrates that many revenue assurance practitioners have used the same words but given them contradictory meanings, leading them to talk at cross purposes, seemingly without knowing it.
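
Logic of this kind maps naturally onto assurance tests. As a hedged sketch – the record fields and the particular readings of ‘validity’ and ‘accuracy’ below are my own illustration, not the definitions adopted in the paper – the universally quantified statements ‘every billed call really occurred’ and ‘every billed call carries the correct charge’ become two distinct checks:

    # Toy billing records: (call_id, occurred, charged, expected_charge)
    records = [
        ("c1", True,  0.10, 0.10),
        ("c2", True,  0.12, 0.10),   # inaccurate: wrong amount charged
        ("c3", False, 0.10, 0.10),   # invalid: billed but never happened
    ]

    # Validity (one possible reading): for all r, billed(r) implies occurred(r)
    valid = all(occurred for _, occurred, _, _ in records)

    # Accuracy (one possible reading): for all r, billed(r) implies charged(r) = expected(r)
    accurate = all(charged == expected for _, _, charged, expected in records)

    print(valid, accurate)  # -> False False

Writing the two properties as separate quantified statements makes it impossible to conflate them, which is precisely the confusion the paper aims to dispel.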

RAG members can obtain both papers by logging on to the members-only section of the RAG website. Non-members can join RAG for free by providing a few details here.

Eric Priezkalns
http://revenueprotect.com

Eric is the Editor of Commsrisk. Look here for more about the history of Commsrisk and the role played by Eric.

Eric is also the Chief Executive of the Risk & Assurance Group (RAG), an association of professionals working in risk management and business assurance for communications providers. RAG was founded in 2003 and Eric was appointed CEO in 2016.

Previously Eric was Director of Risk Management for Qatar Telecom and he has worked with Cable & Wireless, T‑Mobile, Sky, Worldcom and other telcos. He was lead author of Revenue Assurance: Expert Opinions for Communications Providers, published by CRC Press.
