Obscure Data, Mysterious Calculations, Real Risks

Rules of thumb, if used appropriately, can be powerful tools for reminding and educating people about risk. Consider Murphy’s Law, which says “anything that can go wrong, will go wrong”. Hanlon’s Razor is one of my personal favourites, reminding me that I should “never attribute to malice that which is adequately explained by stupidity”. Nobody thinks these rules are genuinely universal, but they do illuminate the blind spots of our imagination. There will always be a tendency for some people – especially leaders – to be overoptimistic because they fail to consider all the pitfalls before they start a grand venture. Human beings get angry when things go wrong, and that anger leads us to suspect sabotage and conspiracy when our greatest foes are incompetence and laziness. Based on my observations over the last year, I am coming close to proposing a new rule of thumb for risk professionals, which might be called the cVidya Law: “whatever a cVidya employee says about risk, assume the opposite is true”.

At the moment, there is not enough data to insist that this theory be codified as law. But cVidya should make some effort to reverse course. However much they spend on promoting their brand, they risk permanently destroying it by pushing the kind of silliness and falsehoods that mislead the easily-fooled, but which get laughed at by the rest of us. Consider the video below – a damning piece of evidence about the dumbing-down of risk management. It was recorded by RCRTV at the TM Forum’s Orlando event, and stars Amir Gefen, who is Director of Industry Relations in the cVidya CTO’s Office.

To summarize the story so far: cVidya have a list of possible frauds and revenue leakages, and a questionnaire which helps to identify which of the listed frauds and leakages might be relevant to a particular telco. Note that this will never deliver facts – it will only ever produce informed speculation. A risk analysis like this helps a risk professional to use their professional judgement in deciding how to proceed with gathering further data; it does not lead to any conclusions. So, to summarize the summary: cVidya is touting a list, combined with a questionnaire. Excuse me for pointing out that whilst lists and questionnaires can be automated, this is hardly the most labour-saving software the world has ever seen. And, to clarify, the list is not even cVidya’s list – it is the TM Forum list (though it is hard to tell the difference between cVidya and the TM Forum at times). On top of the list and questionnaire, Gefen pours out many other words, like ‘innovative’, ‘advanced’, ‘assessment’, and ‘risk’. But underneath it all, we have just a list and a questionnaire.

You will have noticed that the video stops just at the interesting point, where the list of risks was about to be shown on screen. Wowzers. I wonder what was in this advanced list of LTE-specific risks? Luckily, there is a part two…

First, did you see how many revenue streams were analysed by this supposedly authentic demo? The two revenue streams were (1) fraud management, and (2) revenue assurance. Now, without going into the rather obvious point that neither of these is a revenue stream, let me make the following observation about the word ‘risk’: everybody already knows what a fraud is, and everybody already knows what a revenue leak is, so there is no need to define/abuse the word ‘risk’ to mean ‘frauds and revenue leaks’. The truth here is that the word ‘risk’ is being used to make it sound like something new is being offered when really this is the same old material that we have all seen before.

Then the video shows us the relevant frauds identified for this LTE service. Wowzers, three kinds of fraud are possible. Did you realize that so many kinds of fraud were possible? Of course you did. Maybe you know of a fourth, or even a fifth. The three types identified by this demo were: (1) violating service policies; (2) messing about with billing accounts; and (3) man-in-the-middle. In other words, the system identified three frauds for LTE networks that also happen to be frauds you might find on lots of other networks. So, this is more evidence for the cVidya Law: what was described as an LTE-specific demo has no special relevance to LTE.

And then we see how powerful the system can really be. Gulp. Gefen points out that the system can:

“…actually simulate what if I’m going to implement controls that are going to detect and prevent those type of risk, what’s actually going to be the financial impact.”

Wowzers. Simulation. That sounds pretty cool. It sounds all computational and algorithmic. Perhaps cVidya have some pretty fancy algorithms spinning underneath the hood of their system. On the other hand, maybe not. Maybe cVidya use the word ‘simulation’ as a synonym for ‘fake’. Speaking as a former auditor, I look for things that tell me when calculated answers are faked – it helps me to guess how much work needs to be done to verify the underlying logic of a system. One easy check is that calculations that aim to give real-life answers never give answers which are improbably round. In real life, leaks and frauds usually result in whacky, spiky, jagged numbers like $35431.16 being lost. They hardly ever result in cuddly, warm, rounded numbers like $30000 being lost. Round numbers mean a guess off the top of somebody’s head, not a calculation based on data. And what did the simulation calculate would be the benefit delivered by these controls for this telco?

$200000

That looks like a pretty round number to me. You can tell by the zeroes. It is so round that I can only assume it has come straight off the top of somebody’s head. But cVidya could easily prove me wrong by sharing the algorithms that were used to generate this number. I dare them to do that. There is no special ‘intellectual property’ at stake here. Either you trust an algorithm or you do not. To trust it, you must know how it works. So if cVidya wants people to trust how they calculate risks, they have to tell people how the calculations work, and cannot keep them secret. Showing me the number on a screen means nothing. Bernie Madoff used to show people numbers on screens (sadly, too many people were satisfied with that). Keeping the calculations secret makes no sense, as secrecy undermines an auditor’s confidence in relying upon the numbers.
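For illustration only, here is a minimal sketch of the crude ‘roundness’ test I have in mind when an auditor eyes a figure like this. The function name and the threshold of four trailing zeros are my own assumptions, invented for this example; they are not anything cVidya or the TM Forum have published.

```python
# Illustrative only: a crude 'roundness' check of the kind an auditor might
# apply to a reported loss or benefit figure. The function name and the
# threshold of four trailing zeros are assumptions made for this sketch.

def looks_like_a_guess(amount: float, min_trailing_zeros: int = 4) -> bool:
    """Return True if the amount is a whole number ending in several zeros."""
    if amount != int(amount):
        return False  # cents are present, so it at least pretends to be calculated
    digits = str(int(amount))
    trailing_zeros = len(digits) - len(digits.rstrip("0"))
    return trailing_zeros >= min_trailing_zeros

print(looks_like_a_guess(35431.16))  # False - jagged, plausibly calculated
print(looks_like_a_guess(200000))    # True - suspiciously round, smells like a guess
```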

If the algorithms are explained, publicly, then we can all see if we agree with the simulation, or if we think it is flawed. After all, if the simulation is based on experience of risk, it is based on the real historic data of real telcos – so this is a case of ‘what goes around, comes around’. Either you welcome working with telcos to share data, or you obstruct it. You cannot be in favour of sharing it (for free) in one direction, and then charging for it when it comes back again. That does not work because nobody can tell if there is any reliable data underneath, or if it was just made up, or else ruined by how it was manipulated into the answers presented on screen.

Gefen goes on to explain how a future version of the simulation will also simulate the costs of controls. He subtly points out how expensive it is to employ people, implying that it is best if other controls are implemented instead. Again, if cVidya wants to share some data with the world – like their price list – then maybe they should just send everybody a price list instead of embedding it in their ‘simulation’. After all, other vendors will have different price lists, so now the ‘simulation’ is definitely starting to break down as an effective tool for decision-making.

And then Gefen goes on to give the most definitive example of the cVidya Law that I have yet encountered:

The entire approach is really to look at a top-bottom process of risk management and instead of doing ad hoc and gut feel controls that I feel they should be there, let’s take a step up, let’s do a comprehensive risk analysis, let’s bring operational data, let’s see what are the results, let’s simulate the controls that we can do, let’s add what are the costs of the controls, when this is all said and done, you end up with a much more professional and quantifiable system that you can act and manage upon, on one hand, and then in addition to that…

Let me break into the quote at that point, because I feel confident that all the words so far are consistent with the cVidya Law. That means, if you want to know the truth, just reverse the meaning of everything that was said. But as the remainder of the quote reveals, there is something even worse than the cVidya Law… something a lot worse.

…this catalyst we are also working with Microsoft who are bringing their BI and visualization and we can export this data from those system to a C-level, up to a tablet device, and they can see all the dashboards, and the reports, and the ad hoc reporting, just in the comfort of their own tablet.

Got that? Not only do we not know how numbers are calculated, but we are meant to have such blind faith that we give these numbers to executives so they can use them when making important decisions. I cannot be the only person who spots the real risk here.

But maybe you feel confident about taking a chance on cVidya’s numbers. As Gefen points out in the video, the system is backed by the best practices standards of the TM Forum. Fine. Then show me where the TMF standards explain how to do these calculations, and which constants they use as generic estimates. Somewhere in the standard we should find calculations that look like X*Y*c, where X might be the number of customers, Y might be a financial number relating to the telco (like ARPU) and c is a constant, derived from the observed risk in other telcos and hence used to generalize an estimate for all telcos. If this is made public, then anybody can (a) verify the calculation is reliable, (b) do it differently and/or make the constants specific to their telco if they disagree with them, and (c) simulate risk without buying cVidya’s software. Fuzziness between supposedly accessible standards and proprietary intellectual property undermines confidence in both.
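To make the shape of what I am asking for concrete, here is a minimal sketch of such a published calculation. Every name, constant and figure in it is invented for illustration; this is not cVidya’s algorithm and not a TM Forum standard, just the kind of transparent sum that anybody could scrutinize.

```python
# A hypothetical sketch of the kind of transparent X*Y*c calculation described
# above. Every name, constant and figure is invented for illustration; this is
# not cVidya's algorithm and not a TM Forum standard.

def estimated_annual_leakage(customers: int, monthly_arpu: float, risk_constant: float) -> float:
    """X * Y * c: number of customers, times annual revenue per customer,
    times a leakage constant derived from observed risk at other telcos."""
    return customers * (monthly_arpu * 12) * risk_constant

# A generic constant supposedly generalized from other telcos' losses...
GENERIC_LTE_FRAUD_CONSTANT = 0.002    # 0.2% of revenue - an invented figure

# ...which a sceptical telco could replace with one derived from its own data.
MY_TELCO_CONSTANT = 0.0007            # also invented

print(estimated_annual_leakage(1_500_000, 25.0, GENERIC_LTE_FRAUD_CONSTANT))  # 900000.0
print(estimated_annual_leakage(1_500_000, 25.0, MY_TELCO_CONSTANT))           # 315000.0
```

Because the constant sits in plain sight, anybody can challenge it, or swap in a value derived from their own telco’s data – which is exactly what points (a), (b) and (c) above require.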

So come on, cVidya, I will be positive if you will be positive. I will tell the truth if you tell the truth. I know you are reading this. So respond to my sincere request, instead of just treating marketing as a one-directional push. Write something for talkRA that explains how your risk calculations work. If you do that, it will be published, in full, without any changes or edits. Show us how your equations really work. X*Y*c is what we want to see. Stop my cynicism, by doing the right thing. Be open and be honest. If you have solid maths underpinning your system, you have nothing to lose by being honest. If your sums are based on TMF ‘best practices’, then explain the relationship. Secrecy will get you nowhere because nobody can trust the results of a calculation that is performed in secret. Either X*Y*c is reasonable or not, but nobody will be able to have confidence in your result unless you show you are willing to explain your maths.

By now, I hope you appreciate that I am trying to do the industry a favour, whilst trying to do myself a favour at the same time. It does not help me to walk into a telco and have to deal with made-up nonsense that has been spread so thoroughly that even the C-level execs have started repeating it like it is a law of nature. Anyone can make up ‘laws’ just like I made up the ‘cVidya Law’. People could invent a law that says everybody always underestimates risk. People could invent that law – and I would argue against it.

For a more sophisticated debate about risk, and for examples of how real risk professionals should be debating real risk topics, I turn to this blog by Norman Marks. Marks, who is one of the leading voices in risk management, slams Robert Kaplan, amongst others, for a flawed understanding of the word ‘risk’. Let me reiterate: a VP of SAP who is also a former chief auditor and former CRO wrote on his regular blog for the Institute of Internal Auditors about the mistaken understanding of risk exhibited by a world-renowned strategy guru, Harvard Professor and co-creator of the balanced scorecard. So nobody should wet their panties when I question why we should encourage execs to make decisions based on numbers calculated by opaque software backed by vague claims about TMF ‘best practices’. In making his case, Marks brilliantly illuminates the ideal for risk management:

My own view is that risk management effectiveness is measured by its ability to influence decision-making. Better decisions, made with quality information, enable better performance.

Contrast the beautiful simplicity of Marks’ explanations with the video above. Both are attempts to influence people. But it is not enough to say you should be trusted; you must make some effort to show why you should be trusted. You have to show the quality of the information you push towards people. I respect Marks because, when I pick out a sentence like the one above, I know I am picking out conclusions that are backed by a wealth of detailed reasoning, and where that reasoning is always made open and available, so I can scrutinize it and form my own judgement. A willingness to explain your reasoning is an important element in building confidence.

Let me finish with one final rule of thumb, stated by another guy whose insights I greatly respect. If you want to understand how things really work, you cannot accept descriptions of the world just because they are offered to you. You need to see the connections between the surface explanations of the world and the deep-lying evidence beneath. Or as W.E. Deming so neatly put it:

In God we trust; all others must bring data

Eric Priezkalns
http://revenueprotect.com

Eric is the Editor of Commsrisk. Look here for more about the history of Commsrisk and the role played by Eric.

Eric is also the Chief Executive of the Risk & Assurance Group (RAG), an association of professionals working in risk management and business assurance for communications providers. RAG was founded in 2003 and Eric was appointed CEO in 2016.

Previously Eric was Director of Risk Management for Qatar Telecom and he has worked with Cable & Wireless, T‑Mobile, Sky, Worldcom and other telcos. He was lead author of Revenue Assurance: Expert Opinions for Communications Providers, published by CRC Press.


3 COMMENTS

  1. Eric,

    I hope we will hear from others on this point, and that cVidya responds directly to your challenge. I would like to hear candid comments from some of the RA and fraud managers out there. What do they think about this business of measuring RA risks? Are they doing it? What approaches are they using?

    Much of the disagreement on this issue, I feel, revolves around use of the term “risk”.

    I think the type of risk you are talking about is “enterprise risk” such as whether or not a new service should be launched. Here, there are negatives to consider, such as the cost of developing and marketing the service. And then there are positive things to weigh such as the likelihood of generating nice revenue. So (as a layman) it sounds to me that enterprise risk management is a balancing act, a weighing process for making good executive decisions.

    Now the type of risk cVidya is trying to measure is “revenue risk”, or risks associated with threats such as fraud, operational failures (RA), and security. And I think cVidya’s use of the term is consistent, for example, with how cybersecurity/fraud expert Mark Johnson talks about the problem. I read InformationWeek magazine and notice the IT guys have adopted “risk” as a synonym for “threat” too.

    cVidya has been upfront about saying that the measuring of revenue risks is an imprecise science. Read Shaul Moav’s column on Black Swan.

    Finally, what’s wrong with keeping the algorithm under wraps? Personally, if I was buying the software, I would want to know what the rationale was behind each formula, as you say. But that doesn’t mean cVidya should reveal those algorithms to everyone. And are the algorithms part of what TM Forum developed, or are they a cVidya value-add? You’ll have to talk to the TM Forum about that.

    Hopefully cVidya has built into its algorithms some rules of thumb based on actual operational experience. A software company has working relationships with customers and the result of those interactions is used to develop proprietary benchmarks and rules of thumb.

    Still, I agree, the key question is: what assumptions are behind the algorithm? And any prudent service provider will ask cVidya to open up its kimono before seriously investing in the product.

  2. @ Dan, to answer the specific point about why cVidya must share its algorithms, please consider the following scenario. You walk into a business meeting and somebody tells you they can deliver $X of value, if only you invest $Y into their project. You’re pretty darn sure you’ll be spending $Y… that’s the money they want from you. But do you take the promised returns, $X, on trust? That’s the situation you’re in, if you don’t understand how X was calculated. But how can you understand X if you don’t know how the salesman across the table has done his sums? What assumptions did he make? Do you agree they are the right assumptions for this specific project, or were they copied from a different and irrelevant project? How did he get from those assumptions to a specific number? This scenario is no different to how projects get pitched in many telcos. cVidya supplies a program which produces numbers that supposedly get presented to decision-makers. I’d expect those decision-makers to ask tough questions about hard numbers, before they approve any project. So I’m just anticipating the need to give clear answers about specifics, rather than taking numbers on faith.

    Of course you’re right that measuring risk is an imprecise science. But that doesn’t mean we should do it in a vague or lackadaisical way. Sadly, that’s a common confusion – people spend more time quantifying what is susceptible to precise quantification, and less time on what is hard to quantify, when really we need more effort focused where quantification is hardest. And it can’t be that cVidya has discovered some secret knowledge about how to do quantification. So the only way to judge their product is to know if the quantification is better, or worse, than could be delivered using alternate means. Again, that’s not something that can be determined based on faith. It can only be determined by being able to reproduce the figures, and to diagnose how reliable they are.

  3. Hi Eric,

    Norman Marks is one of my favorite references in risk management.

    I met him in KL last year. A lot can be learned from him. It would be interesting if other readers could try to find out what GRC (Governance, Risk Management, Compliance) and COSO are.

    Fairuz,

