Some RA Managers ‘Deluded’, says cVidya CTO

The UK is a few weeks from a general election. Just a short while ago, Israel had an election. The French had elections recently. The Japanese, the Americans, the Indians… they all keep having them. When people have elections, they also tend to have surveys, because people want to know the likely election result before the actual election is held. And whenever people conduct surveys, a lot of time is spent talking about how to minimize bias… unless the people running the survey want a biased outcome, and nobody is willing to argue against their bias. In other words, people admit that bias is a constant and unavoidable challenge… unless they are suffering from bias. We all know this. We are adults, and we know what other people are like. And we do not need to be data scientists or psychologists to know this, because we have all read press reports about biased opinion polls, because they happen all the time. This is the context I wanted to establish before discussing an extraordinarily biased LinkedIn post by Gadi Solotorevsky, CTO of Israeli RA vendor cVidya, and lifetime boss of the TM Forum’s RA team. In the post, Solotorevsky had the temerity to state that some revenue assurance managers are “deluding themselves”. How does he know they are deluded? Because their survey answers did not fit Solotorevsky’s expectations. Can there be a better example of somebody who sees bias only when he wants to?

This is what Solotorevsky wrote about a recent TMF revenue assurance survey:

About 8% of the participants estimated that their maturity is level “5-optimizing”; 21% said that their activities cover over 90% of their company revenues. These are troublesome figures. Since the survey was anonymous, there was no reason for the respondents to inflate the numbers. So they truly believe that these numbers are correct.

Firstly, it does not follow that anonymity means there is no reason to inflate the numbers. Those who anonymously submit answers to the TMF are likely to openly share the same answers with others inside their company. Some will have shared the answers with others inside their group. The anonymous nature of the TMF’s survey is irrelevant if the same answers are also shared in ways which are not so anonymous. Or is Solotorevsky suggesting that people are so cynical that they would give one set of answers to the TMF, and a different set of answers to their work colleagues?

Secondly, there is nothing the least bit ‘troublesome’ about these results – unless you already know what the results should be. But how can Solotorevsky know what the results should be, if he does not know which telcos responded? If this really is an anonymous survey, there may be many reasons why the sample is biased. Note the difference between a biased sample, and a biased individual. If I were to conduct a pre-election poll whilst standing outside one party’s campaign headquarters, I should hardly be surprised if the poll gives me biased results. But the blame for the bias lies with the sample that was taken, and the methods used for polling. The individuals who respond to the poll are not biased, because they can each give an honest, accurate answer without being responsible for the failure to survey a representative spread.

Assume for a moment that the numbers reflect reality, then I would estimate that their RA activities are not cost effective.

What justifies that extraordinary leap? Nothing was said about costs, or returns. So why assume that 90% coverage means the telco’s assurance efforts are not cost effective?

It is pertinent to point out that surveys and samples are also a great way to deliver cost effective assurance. Good pollsters survey a few thousand people, and from that they can get pretty reliable predictions of how millions will vote. Sampling transactions should be even more effective than polling people, because transactions do not change their mind. It should be perfectly possible to implement cost effective sample-based assurance techniques that cover 100 percent of revenues. All that is required is the intelligence to design the sample to give an appropriate level of confidence given the levels of risk involved. Or to put it another way: you can conduct good, unbiased polls, without going to the cost of getting everybody to vote in an election.
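To make the point concrete, here is a minimal sketch of the standard sample-size calculation for estimating an error rate in a transaction stream. The margins, confidence level and the idea of applying it to CDRs are my illustrative assumptions, not figures from the survey; the key property is that, for large populations, the sample needed depends on the precision you want, not on how many transactions you carry.

```python
import math

def sample_size(margin, z=1.96, p=0.5):
    """Records to sample to estimate an error rate to within +/- margin
    at ~95% confidence (z = 1.96), using the normal approximation and
    the worst-case proportion p = 0.5. For large populations the answer
    does not depend on the population size."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

# Whether a revenue stream carries a million or a billion transactions,
# roughly the same sample delivers the same confidence.
for margin in (0.05, 0.01, 0.005):
    print(f"+/-{margin:.1%} margin -> sample {sample_size(margin)} records")
```

Under ten thousand sampled records give a ±1% estimate at 95% confidence, however large the stream — which is why sampling every revenue stream can cost far less than 100 percent testing of a few.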

At this point, it should be clear who suffers most from bias: Solotorevsky has always been more biased than the telco RA managers who respond to TMF surveys. For example, Solotorevsky always refuses to believe that sampling can be an effective means of attaining assurance – and he has skewed every TMF document to endorse his point of view. For him, assurance is always all-or-nothing. You must review every datum, or none at all. It never occurs to him that it is more cost effective to do some testing of every revenue stream, than to do 100 percent testing of some streams whilst doing no testing of others.

Think for a moment of RA as insurance – if you buy insurance to mitigate every possible risk you’ll pay a lot of money, and it would not be cost effective. A reasonable insurance policy is to accept some level of risk, reducing the premium to an acceptable level.

This would be a good analogy, if Solotorevsky understood it. Insurance works because one person in several thousand will suffer an unfortunate outcome, so we all pool our risks. We all pay a premium, knowing that if we are the unlucky person, the collective pot of money will recompense us for our losses. Insurance does not work by making you decide between insuring your car, or insuring your house, or insuring your health. You can insure everything. You can mitigate every kind of risk, and do it cost effectively by sharing your risks with others. It all comes down to probability, and the cost effectiveness of insuring your house has nothing to do with the cost effectiveness of insuring your car. In the same way, the cost effectiveness of revenue assurance is determined by what you spend versus how much you reduce your risk. It makes no sense to say you cannot cost-effectively assure your postpaid bills and your prepaid charges, so you must choose which one to assure, or to insist you can assure fixed-line revenues or mobile revenues or pay-TV revenues, but not all three at the same time. There are always cost effective ways to assure what has not yet been assured. And so, there is no reason not to assure 100 percent of revenues in a cost-effective manner.

The same goes for RA if you are really at maturity level 5 and you really cover over 90% of your revenues in a sound way, then you are most likely not being cost effective. Yes, there are exceptions. Sometimes you may cover 100% of you revenues, be at maturity level 5 and be extremely cost effective, but these exceptions are rare.

This is bias, not data. Perhaps Solotorevsky finds these examples rare because he mostly speaks to telcos who are influenced by his biased view on how to do assurance.

Based on my experience 90% coverage and maturity level 5 are unlikely figures. This is a real cause for concern, because some RA managers may be deluding themselves that their situation is swell when the reality different. Very dangerous. It also makes me wonder if only those with maturity 5 were over optimistic, or if over-optimism runs across a large percent of the respondents.

This is pure bullshit. You cannot take a survey and conclude that all the results you like are reliable (earlier Solotorevsky commented that “everything is swell”) but the results you do not like are unreliable (“some RA managers may be deluding themselves”). What kind of data analysis is that? Solotorevsky accuses respondents of being biased, but appears ignorant of his own bias.

If there is evidence that respondents present biased survey results then it should be assumed that all the data suffers from bias. However, Solotorevsky will never admit that, as he needs the surveys to make himself seem important.

Every year, the survey results get better. Solotorevsky recognizes this fact, but does not think this is evidence of bias. Why not? One of the most obvious kinds of bias is that somebody who does the survey two years in a row will not want to say things have got worse. And yet, Solotorevsky does not conclude that the year-on-year improvement in scores has always been inflated by ‘delusion’. It suits him to keep saying things get better and better because he wants to take some of the credit. But he has to be careful, because if things get too good, then he becomes irrelevant.

Do the maths: if everyone is always getting better, then it is only a matter of time before everyone is excellent. That is why Solotorevsky and the TMF like to rig their surveys. The worst bias I see here is the bias of a multi-award-winning hero of the TMF telling us how the TMF knows best, and if telcos ever say different, then those telcos must be ‘deluded’. Telcos are allowed to say they are terrible. Telcos are allowed to say they are getting better, thanks to the TMF. Telcos can say they are better because they bought software from cVidya. But telcos are not allowed to say they are doing just fine. If they said that, then that would mean they do not need more from the TMF, and they do not need more software. In Solotorevsky’s survey, that answer is not permitted. Anyone who gives that answer must be ‘deluded’.

Let me finish by talking about another kind of bias. Gadi Solotorevsky was never elected to be lifetime boss of the TM Forum’s RA team. He was never elected to anything. I now believe these RA surveys have become systematically biased because of Solotorevsky’s never-ending influence over the TMF. Previously I wrote about the biased questions in the RA maturity model used for this survey. My company is a member of the TMF, so I complained officially as part of that model’s ‘approval’ process… and that complaint disappeared into a black hole like every other criticism that Solotorevsky does not want to hear. I have spoken to other TMF members amongst the people Solotorevsky describes as “30 enthusiastic RA practitioners who helped to build and analyze the survey”. They tell me they were unhappy with many aspects of the model, and that their views were ignored.

If I had a vote, I would vote for change. The only way to clean up systematic bias is to force the heads of TMF collaboration teams to be rotated on a regular basis. It is not healthy that the RA team has been run by the same guy for a decade. As with politicians, there should be a limit on the number of terms a TMF team leader can remain in office. And even when they are allowed to stand for re-election, there should still be an actual election, so we can all see how much real support the leader has. It cannot be right that one man has been put on a podium for ten years, as if he speaks for an entire worldwide industry, when nobody has the opportunity to debate him, never mind vote against him.

Democracy is the best antidote to systematic bias; the swings of the pendulum always ensure a periodic return to balance. If Solotorevsky has become so oppressive that he describes hard-working telco managers as ‘deluded’ for wanting to combine cost-effective assurance with over 90 percent coverage, it is time to replace him with somebody who is answerable to telco managers.

Eric Priezkalns
Eric is a recognized expert on communications risk and assurance. He was Director of Risk Management for Qatar Telecom and has worked with Cable & Wireless, T‑Mobile, Sky, Worldcom and others. Eric was lead author of Revenue Assurance: Expert Opinions for Communications Providers, published by CRC Press. He was a founding member of Qatar's National Committee for Internet Safety and the first leader of the TM Forum's Enterprise Risk Management team. Eric currently sits on the committee of the Risk & Assurance Group, and is an editorial advisor to Black Swan. He is a qualified chartered accountant, with degrees in information systems, and in mathematics and philosophy. Commsrisk is edited by Eric. Look here for more about Eric's history as editor.
  • Mark

    The only surprise here is that it has taken so long for this assessment to surface. I have long been sceptical of all RA surveys, whoever runs them.

    And if the RA Managers are deluded, then the TMF needs to look at how it promotes and explains the maturity model – as that is the source of the reference data, not the RA Managers who are merely interpreting those criteria inside their own organisations. My guess would be that they are reasonably confident about what their organisations do – so the problem has to be ‘ambiguity’ in the definitions used.

    However, in some ways I would agree with Gadi, in that I would expect most RA departments not to be cost-effective – in fact, I hope they are not. Once you have established the risks and implemented the necessary controls, have these reviewed on a regular basis, and have the buy-in and support of other departments, then errors are much less frequent, spotted more quickly and in many cases prevented before there is any loss.

    Note that I have said appropriate controls – that means agreeing an acceptable systemic error rate, designing controls around it, and having controls that spot causal errors. I am still somewhat bemused that an operator requests its applicants be familiar with Six Sigma, which implies a roughly one-in-a-million failure rate, whilst arguing that the regulatory targets are not attainable despite those targets permitting failure rates that are orders of magnitude higher.

    The debate inside RA for so long has been trying to get zero leakage – the ultimate aim for any system vendor. This fails on two significant counts – the first is that 100% inspection is not 100% effective (sorry, quality speak), and the second is that ‘planned revenue leakage’ through the actions of Sales, Marketing and Billing dwarfs the lost revenues found by RA. It is only when we adopt the concept of averted leakage that we start getting close to significant values, but these cannot appear on the bottom line.

    I agree with Eric, that change is needed. We should go back and ask what is the purpose of the survey and ensure that it meets the needs of the TMF membership, not the elite running the teams with their need to demonstrate to their employers that more money needs to be spent developing the next tool to be sold.

    Are RA Managers deluded? Without anything to compare how they are doing, there is always a risk that they over-estimate their performance against some ideal. But that is human nature – ask anyone and they will tell you they are a good driver – so who is it that causes all the accidents on the highway? Me, I know I am a bad driver – that is what keeps me alert, sharp and trying to improve.

    • Mark, because you are alert, sharp and trying to improve, you’d be an ideal candidate to take over the TMF’s RA team. Sadly, we both know that cannot happen, not least because the TMF uses its collaboration program as a way to generate revenue from everyone who attends – thus making it unfeasibly expensive to regularly participate, unless you’re backed by a vendor that has a clear marketing objective, or a big telco with a generous attitude to its budgets.

      As you raise so many good points, I should make a couple of observations about the changes that Gadi made to the maturity model. As you point out, it is perverse to complain about the answers given without first examining the survey itself. Gadi made changes which make it much easier to score 5 on the maturity scale – and justified these changes to the basic method of calculation by presenting non-sequiturs about new technologies or new products.

      Previously the telco’s overall maturity rating equalled the lowest score attained in any of the five scoring dimensions. That means a telco needed to attain a level 5 score in all 5 dimensions (without rounding up) to get a level 5 rating overall. Now the telco can get a level 5 score by averaging the scores across the dimensions, and then rounding up. Obviously this changes the model from one where you have to be perfect at all things to score at level 5, to one where you can be imperfect in many respects and still get the top possible rating. Put simply, the new calculation means a telco can have a level 3 score on one dimension and still be rounded up to level 5 overall, even though the old system would have said that telco is only at level 3.
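      The effect of that change is simple arithmetic. Here is a hypothetical sketch (the dimension scores below are invented for illustration, not taken from any real telco's assessment):

```python
import math

# Hypothetical scores across five maturity dimensions for one telco.
scores = [5, 5, 5, 5, 3]

# Old calculation: the overall rating equals the LOWEST dimension
# score, so a single weak dimension caps the whole result.
old_rating = min(scores)

# New calculation, as described above: average the dimensions and
# round up, so strong dimensions dilute the weak one.
new_rating = math.ceil(sum(scores) / len(scores))

print(f"old method: level {old_rating}; new method: level {new_rating}")
# old method: level 3; new method: level 5
```

      The same telco that the original calculation capped at level 3 now reports the top possible rating.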

      Gadi also reduced the number of dimensions. By deciding that the quality of the people in the organization is no longer a key standalone element of maturity, he made it possible for telcos to get a level 5 score under the new scoring system, when relatively poor investment in people (skills available, training etc) would have previously meant a lower overall score.

      It says a lot about Gadi that he consciously decided to make it easier for telcos to attain a higher score – despite my pointing out the consequences during the so-called ‘approval’ process – and then later complains that too many telcos are attaining high scores. Why make it easier to get the top score, unless you’ve consciously decided that too few telcos were attaining that score under the old system?

      Your point about cost-effectiveness is very well made. Gadi’s team has systematically skewed every piece of work to either ignore costs in the calculations (such as occurred in their so-called ‘risk’ model) or to deny any room to the statistical concept of confidence. The motive was obvious: 100 percent testing is obviously less cost-effective than intelligent sampling, but his company’s marketing pushes extremely unscientific misconceptions about the relationship between the number of tests performed and the degree of confidence attained. You neatly illustrate how his company’s marketing also relies on a conception of risk management which is not about the cost-effective reduction of risk, but is about choosing to be risk averse, incurring costs even when there is no longer a measurable benefit. In that respect, it is perverse that he now argues against improving coverage on the basis it is not cost effective, when he’s previously argued for 100 percent testing without regard for the extremely low return generated by the final 90 percent of tests compared to the initial 10 percent of tests.

      Please also allow me to make one final point. In complaining that the results are wrong, Gadi highlights that he never understood how the original maturity model worked. In the original model, it would not have been possible to argue the results were ‘wrong’, because the results should be based on experience, not theory. (In the same way, a dictionary definition is not right because it is written in the dictionary, but because it conforms to how people use the word in practice.) When we wrote the original 5-level model, we knew there were no telcos at level 5. In that sense, we created something theoretical as opposed to something real. However, the lower levels were designed to match actual experience of what telcos were doing in practice. Level 5 was considered an impossible aspiration – as if we were extrapolating from the curve to a degree of perfection only found at infinity.

      Perhaps Gadi no longer desired to make level 5 so unattainable. But however they arrived at the definition of level 5, it should still be an extrapolation from actual experience. If the team decided that, say, 5% of real-world telcos should be scored at level 5, 10% at level 4, 30% at level 3 and so on, they should have engineered the scoring system accordingly, in the same way we normalize results in school exams or employee appraisals. Did they do this? It seems not. It doesn’t seem to have even occurred to them. This is both stupid and infuriating.

      The original model was created in order to start collecting data. In the absence of data, it was bound to be a work of well-intended fantasy. The second version should have benefitted from the data that was collected, making it much easier to predict how many telcos would rank themselves at level 5, how many at level 4 and so on. If the normalization was off, it could be tweaked. But it seems that Gadi did such a lousy and unscientific job of constructing this model that they neither used the data previously collected, nor looked at other real-world data, nor even formed a conceptual model of what the normalization should look like. It just didn’t occur to them; they merely started with a blank piece of paper, filled out some questions, came up with answers and a scoring method, and waited to see what the results would be. And then they complained the results were wrong!

  • Rene Felber

    Eric, I agree with your view on cost effective high coverage strategies.
    As a passionate risk management/RA practitioner, I committed my time to lead and shape TM Forum’s global RA survey 2015 (freely available from TM Forum without registration).

    I find it important to stress/clarify a few things about the survey:
    – The survey was conducted in line with TM Forum’s values and processes, and with an acceptance of (our own) imperfection

    – The survey was conducted and improved (10 new questions, 20 questions revised) through collaborative efforts (direct input from 29 RA practitioners and 11 vendor representatives). The results in the final report were also commented on collaboratively (which took a bit longer than planned).
    – The anonymous respondents generally had all answer options available. Example question: How do you think the overall maturity of your Revenue Assurance function has developed compared to last year? Answer options: it has become much worse, it has become a little worse, it has stayed the same, it has become a little better, it has become much better, and I don’t know.
    – The current RA survey has not been aligned with TM Forum’s Revenue Assurance Maturity Model (RAMM). For example, the question about the overall maturity score was formulated as “Regardless of whether you use this model or not, how would you assess the overall maturity of your Revenue Assurance function?” 1. Initial 2. Repeatable 3. Defined 4. Managed 5. Optimizing
    – Gadi’s takeaways, like all the other experts’ takeaways in the report, were requested in order to balance the fact-based report with opinions.

    Finally, I can promise that we will discuss democratically within the RA community how to set the direction for future improvements, to make the RA survey more relevant for the customer.

    • Hi Rene, thanks for correcting my misconceptions. I must admit that this now leaves me even more perplexed by Gadi’s comments. If this survey was not aligned with the RA maturity model, then why ask people to rate their maturity without asking them to complete the actual model? And more importantly, why focus on their answer to this particular question – saying that if people give the ‘wrong’ answer, they are deluded? The maturity model is a questionnaire. Asking you to state your maturity without completing the questionnaire is like asking you to state your IQ, but without sitting an IQ test, or to state your blood group, without testing your blood. In many respects, it’s a trap. You’re obviously not doing what is needed to find out the actual maturity/IQ/blood group. Instead you’re learning how the spread of self-perceptions varies compared to the spread of reality. In other words, you might conclude that people’s perceptions are wrong, but you don’t reliably learn what their maturity/IQ/blood group really is.

      To be honest, I don’t know why your team issued a major survey that isn’t aligned to another major survey. Wouldn’t it make more sense to align them, and then correlate the data between them? I can see that asking even more questions at one time might discourage people from responding, but there are ways to allow people to retain their anonymity whilst being able to identify which responses come from the same source. If not, then how do you protect yourselves from the risk that the same telco might have submitted two inconsistent responses to the same survey?

      You refer to Gadi as an expert, so I feel compelled to respond to that. I do not consider him to be an expert. You don’t need to be an expert to arrange meetings, or to attend them. He has undoubtedly attended a lot of meetings, but I don’t remember him ever making any valuable intellectual contributions at any meeting I attended. I know I’m not alone in my dim assessment of his capabilities. His lack of intelligent opinions might be an asset for a chairman – which is what team members originally thought he was. It is not an asset for the ‘leader’ role he has latterly assumed. Mostly he seems to repeat what he hears others say. Of course, there is some safety in doing that. For example, politicians get very good at saying what others want to hear. On the occasions when Gadi makes an original argument – like when he flip-flopped on the definition of revenue assurance, or when he argued that revenue assurance couldn’t be done in a way consistent with the reality of Big Data (and so argued for a much more expensive approach), or this latest argument that you cannot attain a high degree of coverage in a cost-effective manner – I find he uses bizarre analogies that make little or no sense, and reaches conclusions that contradict what he’s said elsewhere. I find no evidence of a considered position or data to back up his arguments, as I would expect with a genuine expert.

      The TMF’s bosses love him because he makes money for them, but Gadi Solotorevsky is slowing the progress of RA. This latest incident proves my point: he says if you want cost-effective coverage for over 90 percent of revenues, then he does not just explain why that is hard to do. Instead, he says you must be deluded. His manner is deceptive; I find him to be manipulative, and language like this is designed to bully others into endorsing his point of view. If he wants to make an argument about the relationship between coverage and maturity then the correct way to do so is to gather all the reliable data on coverage and all the reliable data on maturity (noting that this survey doesn’t generate reliable data on maturity) and then to analyse that data! Why have lots of surveys, if the team leader doesn’t believe in properly analysing the data gathered over time? And there shouldn’t be any difficulty in finding the necessary data analysis skills in an RA team! The wrong way to have that argument is to present a bizarre analogy about insurance and to use emotive language to tell people they gave the ‘wrong’ answer when responding to the survey, whilst also ignoring the many other ways that survey results may be unrepresentative.

      I know he won’t go easily, because Gadi’s company is desperate to squeeze every last drop of marketing juice from his good fortune at occupying a useful role at a time when there was no viable competition for it. However, Gadi’s unscientific, data-free posturing on how to do RA has become a real and recurring barrier to making RA the data-driven science which it should be. For your sake, and for the rest of the RA community, I beg you to replace him as team leader with somebody more impartial and objective in their outlook.