Some RA Managers ‘Deluded’, says cVidya CTO

The UK is a few weeks from a general election. Just a short while ago, Israel had an election. The French had elections recently. The Japanese, the Americans, the Indians… they all keep having them. When people have elections, they also tend to have surveys, because people want to know the likely election result before the actual election is held. And whenever people conduct surveys, a lot of time is spent talking about how to minimize bias… unless the people running the survey want a biased outcome, and nobody is willing to argue against their bias. In other words, people admit that bias is a constant and unavoidable challenge… unless they are suffering from bias. We all know this. We are adults, and we know what other people are like. And we do not need to be data scientists or psychologists to know this, because we have all read press reports about biased opinion polls; they happen all the time. This is the context I wanted to establish before discussing an extraordinarily biased LinkedIn post by Gadi Solotorevsky, CTO of Israeli RA vendor cVidya, and lifetime boss of the TM Forum’s RA team. In the post, Solotorevsky had the temerity to state that some revenue assurance managers are “deluding themselves”. How does he know they are deluded? Because their survey answers did not fit Solotorevsky’s expectations. Can there be a better example of somebody who sees bias only when he wants to?

This is what Solotorevsky wrote about a recent TMF revenue assurance survey:

About 8% of the participants estimated that their maturity is level “5-optimizing”; 21% said that their activities cover over 90% of their company revenues. These are troublesome figures. Since the survey was anonymous, there was no reason for the respondents to inflate the numbers. So they truly believe that these numbers are correct.

Firstly, it does not follow that anonymity means there is no reason to inflate the numbers. Those who anonymously submit answers to the TMF are likely to openly share the same answers with colleagues inside their company, and some will have shared them more widely across their corporate group. The anonymous nature of the TMF’s survey is irrelevant if the same answers are also shared in ways which are not so anonymous. Or is Solotorevsky suggesting that people are so cynical that they would give one set of answers to the TMF, and a different set of answers to their work colleagues?

Secondly, there is nothing the least bit ‘troublesome’ about these results – unless you already know what the results should be. But how can Solotorevsky know what the results should be, if he does not know which telcos responded? If this really is an anonymous survey, there may be many reasons why the sample is biased. Note the difference between a biased sample and a biased individual. If I were to conduct a pre-election poll whilst standing outside one party’s campaign headquarters, I should hardly be surprised if the poll gave me biased results. But the blame for the bias lies with the sample that was taken, and the methods used for polling. The individuals who respond to the poll are not biased, because they can each give an honest, accurate answer without being responsible for the failure to survey a representative spread.

Assume for a moment that the numbers reflect reality, then I would estimate that their RA activities are not cost effective.

What justifies that extraordinary leap? Nothing was said about costs, or returns. So why assume that 90% coverage means the telco’s assurance efforts are not cost effective?

It is pertinent to point out that surveys and samples are also a great way to deliver cost effective assurance. Good pollsters survey a few thousand people, and from that they can get pretty reliable predictions of how millions will vote. Sampling transactions should be even more effective than polling people, because transactions do not change their mind. It should be perfectly possible to implement cost effective sample-based assurance techniques that cover 100 percent of revenues. All that is required is the intelligence to design the sample to give an appropriate level of confidence given the levels of risk involved. Or to put it another way: you can conduct good, unbiased polls, without going to the cost of getting everybody to vote in an election.
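To give a sense of the arithmetic, here is a minimal sketch of the standard zero-error attribute-sampling bound used in audit work; the 1 percent error threshold, the 95 percent confidence level, and the stream names are illustrative figures I have chosen, not recommendations from the survey or from anybody quoted here.

```python
import math

def sample_size_for_zero_errors(max_error_rate: float, confidence: float) -> int:
    """Smallest n such that, if a random sample of n transactions contains no
    errors, we can assert with the stated confidence that the stream's true
    error rate is below max_error_rate.
    Uses the attribute-sampling bound (1 - p)^n <= 1 - confidence."""
    alpha = 1.0 - confidence
    return math.ceil(math.log(alpha) / math.log(1.0 - max_error_rate))

# Illustrative only: 95% confidence that each stream's error rate is under 1%.
for stream in ["prepaid", "postpaid", "roaming", "pay-TV"]:
    n = sample_size_for_zero_errors(max_error_rate=0.01, confidence=0.95)
    print(f"{stream}: sample {n} transactions")  # roughly 300 per stream
```

Under this simple model, the number of transactions you need to inspect depends on the confidence you want, not on the size of the stream, which is why sampling every stream can be affordable.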

At this point, it should be clear who suffers the most from bias: Solotorevsky has always been more biased than the telco RA managers who respond to TMF surveys. For example, Solotorevsky has always refused to believe that sampling can be an effective means of attaining assurance – and he has skewed every TMF document to endorse his point of view. For him, assurance is always all-or-nothing. You must review every datum, or none at all. It never occurs to him that it is more cost effective to do some testing of every revenue stream than to do 100 percent testing of some streams whilst doing no testing of others.
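To make that last point concrete, here is a toy comparison using numbers and a detection curve I have invented purely for illustration: two revenue streams, one fixed testing budget, and diminishing returns within each stream because the first checks tend to catch the big systematic errors.

```python
import math

# Toy model, invented numbers: two revenue streams, each expected to leak
# 100,000 per year if never checked. Assume diminishing returns within a
# stream, so the share of leakage found at coverage c is 1 - exp(-3 * c).
LEAK_PER_STREAM = 100_000
STREAMS = ["postpaid", "prepaid"]

def fraction_found(coverage: float) -> float:
    return 1 - math.exp(-3 * coverage)

def residual_leakage(coverage_by_stream: dict) -> float:
    """Expected leakage left undetected across all streams."""
    return sum(LEAK_PER_STREAM * (1 - fraction_found(coverage_by_stream[s]))
               for s in STREAMS)

# The same total testing effort, allocated two different ways:
all_or_nothing = {"postpaid": 1.0, "prepaid": 0.0}  # everything on one stream
spread_evenly  = {"postpaid": 0.5, "prepaid": 0.5}  # some testing of every stream

print(round(residual_leakage(all_or_nothing)))  # ~104979 undetected
print(round(residual_leakage(spread_evenly)))   # ~44626 undetected
```

Under these assumptions, spreading the same effort across both streams leaves far less leakage undetected than exhaustively testing one stream while ignoring the other; the exact figures are artefacts of the toy model, but the shape of the argument is not.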

Think for a moment of RA as insurance – if you buy insurance to mitigate every possible risk you’ll pay a lot of money, and it would not be cost effective. A reasonable insurance policy is to accept some level of risk, reducing the premium to an acceptable level.

This would be a good analogy, if Solotorevsky understood it. Insurance works because one person in several thousand will suffer an unfortunate outcome, so we all pool our risks. We all pay a premium, knowing that if we are the unlucky person, the collective pot of money will recompense us for our losses. Insurance does not work by making you decide between insuring your car, or insuring your house, or insuring your health. You can insure everything. You can mitigate every kind of risk, and do it cost effectively by sharing your risks with others. It all comes down to probability, and the cost effectiveness of insuring your house has nothing to do with the cost effectiveness of insuring your car. In the same way, the cost effectiveness of revenue assurance is determined by what you spend versus how much you reduce your risk. It makes no sense to say you cannot cost-effectively assure your postpaid bills and your prepaid charges, so you must choose which one to assure, or to insist you can assure fixed-line revenues or mobile revenues or pay-TV revenues, but not all three at the same time. There are always cost effective ways to assure what has not yet been assured. And so, there is no reason not to assure 100 percent of revenues in a cost-effective manner.

The same goes for RA if you are really at maturity level 5 and you really cover over 90% of your revenues in a sound way, then you are most likely not being cost effective. Yes, there are exceptions. Sometimes you may cover 100% of you revenues, be at maturity level 5 and be extremely cost effective, but these exceptions are rare.

This is bias, not data. Perhaps Solotorevsky finds these examples rare because he mostly speaks to telcos who are influenced by his biased view on how to do assurance.

Based on my experience 90% coverage and maturity level 5 are unlikely figures. This is a real cause for concern, because some RA managers may be deluding themselves that their situation is swell when the reality different. Very dangerous. It also makes me wonder if only those with maturity 5 were over optimistic, or if over-optimism runs across a large percent of the respondents.

This is pure bullshit. You cannot take a survey and conclude that all the results you like are reliable (earlier Solotorevsky commented that “everything is swell”) but the results you do not like are unreliable (“some RA managers may be deluding themselves”). What kind of data analysis is that? Solotorevsky accuses respondents of being biased, but appears ignorant of his own bias.

If there is evidence that respondents gave biased answers, then it should be assumed that all the data suffers from bias. However, Solotorevsky will never admit that, as he needs the surveys to make himself seem important.

Every year, the survey results get better. Solotorevsky recognizes this fact, but does not think this is evidence of bias. Why not? One of the most obvious kinds of bias is that somebody who does the survey two years in a row will not want to say things have got worse. And yet, Solotorevsky does not conclude that the year-on-year improvement in scores has always been inflated by ‘delusion’. It suits him to keep saying things get better and better because he wants to take some of the credit. But he has to be careful, because if things get too good, then he becomes irrelevant.

Do the maths: if everyone is always getting better, then it is only a matter of time before everyone is excellent. That is why Solotorevsky and the TMF like to rig their surveys. The worst bias I see here is the bias of a multi-award-winning hero of the TMF telling us how the TMF knows best, and if telcos ever say different, then those telcos must be ‘deluded’. Telcos are allowed to say they are terrible. Telcos are allowed to say they are getting better, thanks to the TMF. Telcos can say they are better because they bought software from cVidya. But telcos are not allowed to say they are doing just fine. If they said that, then that would mean they do not need more from the TMF, and they do not need more software. In Solotorevsky’s survey, that answer is not permitted. Anyone who gives that answer must be ‘deluded’.

Let me finish by talking about another kind of bias. Gadi Solotorevsky was never elected to be lifetime boss of the TM Forum’s RA team. He was never elected to anything. I now believe these RA surveys have become systematically biased because of Solotorevsky’s never-ending influence over the TMF. Previously I wrote about the biased questions in the RA maturity model used for this survey. My company is a member of the TMF, so I complained officially as part of that model’s ‘approval’ process… and that complaint disappeared into a black hole like every other criticism that Solotorevsky does not want to hear. I have spoken to other TMF members amongst the people Solotorevsky describes as “30 enthusiastic RA practitioners who helped to build and analyze the survey”. They tell me they were unhappy with many aspects of the model, and that their views were ignored.

If I had a vote, I would vote for change. The only way to clean up systematic bias is to force the heads of TMF collaboration teams to be rotated on a regular basis. It is not healthy that the RA team has been run by the same guy for a decade. As with politicians, there should be a limit on the number of terms a TMF team leader can remain in office. And even when they are allowed to stand for re-election, there should still be an actual election, so we can all see how much real support the leader has. It cannot be right that one man has been put on a podium for ten years, as if he speaks for an entire worldwide industry, when nobody has the opportunity to debate him, never mind vote against him.

Democracy is the best antidote to systematic bias; the swings of the pendulum always ensure a periodic return to balance. If Solotorevsky has become so oppressive that he describes hard-working telco managers as ‘deluded’ for wanting to combine cost-effective assurance with over 90 percent coverage, it is time to replace him with somebody who is answerable to telco managers.

Eric Priezkalns
Eric is a recognized expert on communications risk and assurance. He was Director of Risk Management for Qatar Telecom and has worked with Cable & Wireless, T‑Mobile, Sky, Worldcom and others.

Eric was lead author of Revenue Assurance: Expert Opinions for Communications Providers, published by CRC Press. He was a member of Qatar's National Committee for Internet Safety and the first leader of the TM Forum's Enterprise Risk Management team. Eric currently sits on the committee of the Risk & Assurance Group. He is a qualified chartered accountant, with degrees in information systems, and in mathematics and philosophy.

Commsrisk is edited by Eric.