The Beta Generation (part one)

I am not a great fan of conspiracy theories. All conspiracy theories posit that a malign group of powerful people intelligently manipulates events to attain a particular goal. The conspiracy is kept secret from the rest of society. I think that is naive. Very few people are that intelligent. And if they were, why would they waste their time doing the things that conspiracy theories are usually about? Most importantly, intelligence is probably a serious obstacle to achieving power. Just take a look at some of the U.S. Presidents of the last 30 years: Ford, Reagan, and George W. Bush. Al Gore is clever enough to write books that are actually about things other than himself. He made a PowerPoint slide presentation about global warming that was so good it was turned into a movie that went on general release. Pretty smart, if you ask me. Yet Gore was always going to come second to a man like George W. Bush. Bush has no choice but to keep his messages simple, because Bush is simple. Gore instead came across as wooden and hard to like. The painfully obvious difference in intellect probably helped Bush more than Gore. Simplicity beats intelligence most of the time.

[But I suppose the conspiracy theorists would argue that Presidents and the like are not the ones really running the show. Better save that debate for another time.]

One of the biggest problems with conspiracy theories is that they are usually so convoluted that it takes lots of intelligence and persistence to get to the end. And after all that effort you realise it was just nonsense and you wasted your time showing any interest to begin with. So conspiracy theories are ultimately not very rewarding unless you want to suspend disbelief and live in a fantasy. All of this is going to make writing this post very hard, as it looks a bit like a conspiracy theory, but is not, although it is a complex theory. A very complex theory, about complexity. And about how to keep things simple. So to keep it simple, this post is part one of two. Part one makes sense on its own. But part two is the really interesting bit. You just have to read part one first to make any sense of part two. So here goes with part one…

You may have noticed that lots of businesses are offering you lots of software these days. For free. On one condition. You test it for them. They call it “beta testing”. Another way of describing it would be “not sure how well this works yet” testing. Microsoft is no longer sure that releasing betas in the traditional way works that well – see here. But most software businesses do it. And betas are very popular with customers. Most of the popular new communications software gets high-profile beta releases: Hotmail, Messenger, Skype, Googlemail. Everyone does it. But even when you buy software, the testing never really ends. There is only one difference between a customer doing beta testing on Microsoft software and a customer clicking the box to email Microsoft with an error report when their paid-for software crashes. The difference is that Microsoft does not want to call the latter “testing”, because supposedly the software has been tested already. But all Microsoft did was extend the idea of experimentation (aka testing) to its natural conclusion: treat all life as an experiment, treat all truths as contingent hypotheses, and then just get on with the real work of gathering as much data as possible to verify or falsify those hypotheses. I am sure the philosopher Karl Popper would have approved of this method. Continuously look for bugs on the assumption that even if the software looks like it is error-free, you never know for sure. This increased sophistication in allowing room for doubt is a positive thing, scientifically speaking. It is part of the reason why people started to talk about Einstein’s Theory of Relativity when they used to talk about Newton’s Laws.
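To make the Popperian point concrete, here is a toy sketch of my own devising (the function names and the deliberately planted bug are my inventions, not anything from Microsoft): the test never proves the rounding routine correct, it just keeps hunting for a counterexample.

```python
import math
import random

def billed_seconds(duration: float) -> int:
    """Supposedly rounds a call duration up to whole seconds."""
    return int(duration) + 1   # planted bug: wrong when duration is already whole

def hypothesis_holds(duration: float) -> bool:
    """The contingent hypothesis: the function matches a true round-up."""
    return billed_seconds(duration) == math.ceil(duration)

random.seed(1)
for trial in range(100_000):
    # Mix whole-second durations in with fractional ones so the rare case can occur.
    duration = random.choice([random.uniform(0.0, 600.0), float(random.randint(0, 600))])
    if not hypothesis_holds(duration):
        print(f"Falsified after {trial + 1} trials: duration = {duration}")
        break
else:
    print("No counterexample found; the hypothesis survives, for now")
```

Note the asymmetry: one bad input falsifies the hypothesis forever, but a hundred thousand passing trials would still leave the conclusion contingent.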

The troubling thing about needing to take a contingent approach to verifying software is this: if we cannot reach a definitive conclusion on whether software works correctly in a practical timeframe with a sensible level of resources, what chance do we have of verifying that anything else complicated works properly? Okay, so software code may be complicated, but ultimately it is finite and mathematical. A line of code does the same thing each time it is executed; it is perfectly predictable. There may be very many, but ultimately there are only a finite number of logical sequences that could be executed in software in a given period of time. You could, in principle, execute every possible sequence within a period of time and so verify with certainty that the software works correctly in all cases. But doing all that testing would be very slow. And costly and boring. So instead, testing involves having a reasonable go at checking that the main components work okay, then putting them together and seeing if everything works together okay for a while, and then letting the customer have a play to see if they can find something wrong.
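To get a feel for why exhaustive execution is impractical, consider a sketch of my own (the rating function is hypothetical): a routine with just three independent branches already has 2³ = 8 distinct paths, and the count doubles with every extra branch.

```python
from itertools import product

def rate_call(is_peak: bool, is_roaming: bool, is_weekend: bool) -> float:
    """A hypothetical per-minute rate with three independent branches."""
    rate = 0.10
    if is_peak:
        rate += 0.05
    if is_roaming:
        rate *= 3
    if is_weekend:
        rate *= 0.5
    return round(rate, 4)

# Exhaustive testing means executing every combination of branches.
paths = list(product([False, True], repeat=3))
print(len(paths))  # 8 paths for 3 branches; 30 branches would mean 2**30
for path in paths:
    print(path, rate_call(*path))
```

Thirty independent branches would mean over a billion paths, which is why real testers sample paths rather than enumerate them.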

The message for revenue assurance is pretty plain. Everybody who ever claims to measure revenue loss is wrong. And always will be. And estimating loss is no better than using folklore to predict the weather. To measure revenue loss with absolute certainty you would need to know you were monitoring the outcome of every possible sequence of logical paths that might be involved in processing the data in a transaction. That would mean effective checks relevant to the execution of every line of software in every device from the network to the bill. And then some. Because losses involve much more. They involve the interaction of the software between systems (are the rules by which data is output from one system actually consistent with the expectations for the input into the next?) and physical and environmental factors (what happens if someone cuts the power to one of the systems and there is no failover? what happens even if there is a failover?) and we should not forget that, in most cases, there is also some processing done by humans. At the very least a human being is going to be involved in typing in reference data (more than one person has got the decimal point wrong when entering a new rate) and in writing the words that explain the charges to customers (the calculations described by those words need to be mathematically identical to the calculations performed in practice). So the best any revenue assurance department could come up with is a contingent theory about loss. And that means the search for counter-examples must go on indefinitely. Which is rather a nuisance for revenue assurance people wanting a promotion ;)
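To show how cheaply that human error propagates, here is a trivial worked example of the decimal-point slip mentioned above (all figures invented for illustration):

```python
# A rate meant to be 0.15 is keyed into the reference data as 1.5.
intended_rate = 0.15   # price per minute promised by the tariff document
entered_rate = 1.5     # what a human actually typed into the billing system

minutes_billed = 120
correct_charge = intended_rate * minutes_billed   # 18.00
actual_charge = entered_rate * minutes_billed     # 180.00

print(f"One slipped decimal point overcharges the customer by "
      f"{actual_charge - correct_charge:.2f} on a single bill")
```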

I once wrote a paper explaining some theory and practice for metering and billing testing for T-Mobile UK. The people still working there must have forgotten about it, because the original version is still up on their corporate website unchanged – even the spelling error in the URL is the same (mistakes happen everywhere). I mention it because writing the paper was a mistake. I thought I was pointing out some obvious and useful things. For example, I wanted to point out that the only way you could really be sure a bill was accurate was to treat the whole business as a black box. You take the tariff documents that get published, then set up some services and make some calls. Finally you check that the bill is consistent with what the tariff document said and the services you received. Simple and fool-proof. And you do not need to know anything about how things work inside the business in order to do it. The difficulty with that approach is plain: it would be an awful lot of work to really get confidence this way. But if you executed all varieties of calls and services at all times and locations etc etc you would eventually execute all logic paths. I contrasted that certainty of conclusion with the likely compromises that most would make in testing bill accuracy – which is to break up the tests into piecemeal components. Breaking them up makes it easier to focus on certain kinds of possible problems, but only at the cost that you totally fail to capture some kinds of error through your testing. In other words, you end up like Microsoft – you trade certainty in exchange for being more cost-effective. You anticipate what might go wrong, and check for that. Sometimes you will miss something, but it is a lot less work overall. But writing the paper backfired. It backfired because (a) probably no customers ever download and read the document, and (b) it upset the firms supposed to independently audit things like bill accuracy on behalf of customers. Pointing out how much work would be involved to get certainty, and the real-life risks involved when deciding to compromise certainty for cost-effectiveness, only upset the audit firms profiting from the work, especially as it was their job to be the clever people who would understand how to avoid mistakes. So they wrote a guide for bill accuracy approval that just said the opposite of what my T-Mobile document did. In other words, it said that following a complicated approach relying on human intelligence was less likely to be flawed than taking a simple approach which minimises reliance on human intellect. Now the document has got the regulator’s name on the front, so doubtless customers can read it and rest assured that lots of super-intelligent people are protecting their interests and not making any mistakes whilst doing so. Much better than trusting an error-prone dullard like me, I am sure :(
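For anyone who wants the black-box idea spelled out, here is a minimal sketch of my own construction (the tariff rates and structure are invented, and this is not the method from the T-Mobile paper itself): price every call from the published tariff alone, then compare the total against the bill that arrives.

```python
from dataclasses import dataclass

@dataclass
class Call:
    minutes: int
    peak: bool

def tariff_price(call: Call) -> float:
    """An invented published tariff: 20p/min peak, 5p/min off-peak."""
    rate = 0.20 if call.peak else 0.05
    return round(rate * call.minutes, 2)

def check_bill(calls: list[Call], billed_total: float) -> bool:
    """True only if the bill matches the tariff for the calls actually made."""
    expected = round(sum(tariff_price(call) for call in calls), 2)
    return expected == billed_total

calls_made = [Call(minutes=10, peak=True), Call(minutes=30, peak=False)]
print(check_bill(calls_made, 3.50))  # 2.00 + 1.50 -> True
```

The appeal is that the check uses nothing but the tariff and the observed bill; the cost is that you only gain confidence for the combinations of calls you actually made.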

You can be pretty sure that if Microsoft eventually just gives up and hands over software for its customers to de-bug, then there is no revenue assurance team in the world that is not doing effectively the same thing. However, unlike Microsoft, many in revenue assurance are a bit silly. The responsibility is handed over to the customers, but then there is a failure to listen to the customers’ feedback. By feedback I of course mean the complaints the company gets about its accuracy. Most complaints may be nonsense, but if the revenue assurance department is not monitoring the valid ones, it is losing a vital source of data. But as I say, highlighting errors that revenue assurance missed only to be picked up by customers may not be the best way to get a promotion ;) This is another example of the supposed intelligence of the masses, in this case finding flaws that are not spotted by the “experts”. But it takes a certain kind of expert to be willing to learn from mistakes. Other experts might feel that accepting mistakes undermines their authority. Which is ironic for revenue assurance – a discipline that is itself a response to human fallibility.

To avoid mistakes you have to have an open mind about your own fallibility. In other words, you have to accept that you will make mistakes in order to reduce the chances of making mistakes. But believing in fallibility just means that any belief may be shown to be wrong. There is an old philosophical contradiction that illustrates the problem. It is best stated as a conversation between two people:

“Are you always right?”
“No.”
“So sometimes you believe things that turn out to be wrong?”
“Sure.”
“So which of your current beliefs are untrue?”
“I do not know…”

At one telco I was lumbered with responsibility for a new revenue assurance system. The purchase had been made just before I started working for the telco. After implementation the tool kept producing reports that said there were errors in how the bills were calculated. So I drew the obvious conclusion. When I told people my conclusion, the response was that they did not like it and I should change it. My conclusion was very simple. The revenue assurance tool was wrong. You can imagine how much the vendors of that system liked that conclusion. And it was not that popular with the rest of revenue assurance either. Nobody was keen to admit that lots of money had been spent on a tool that did not work properly. So there was a lot of pressure to chase around the business and try to validate whether the supposed errors were in fact real. But my reasoning was simple, so in my usual stubborn fashion I ignored what everyone was telling me to do. The revenue assurance tool was cheap, unproven, new, and had not been tested much. In contrast, the systems it was being used to test were expensive, long-established, proven, and would have been tested many, many more times, not just in our telco but in others too. So I put effort into finding out what was wrong with the revenue assurance tool, until the fault was found and corrected. Admitting to a faulty revenue assurance tool was inconvenient. But it would have been more inconvenient to admit the truth only after wasting a lot of people’s time chasing phantoms in other systems that were actually working fine. Of course, it would defeat the point of revenue assurance if you always assumed the revenue assurance test was flawed. But if you do not apply appropriate levels of scepticism you will waste a lot of time before you get to the truth, and it will be a lot more painful to admit the truth when you do eventually get there (if you ever do). So the message for revenue assurance teams is plain: doubt yourselves at least as much as everyone else. And make sure you keep on looking for evidence of your own failings. Some of the people who work in revenue assurance go into the role because they like to check on other people. It makes them feel superior. But the price of doing that job properly is self-doubt: they need to check on themselves just as much.
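That stubbornness was really just informal Bayesian reasoning. As a hedged sketch with made-up numbers (none of these probabilities come from the telco in question), the prior odds alone say where to look first:

```python
# All probabilities below are assumptions of mine, not data from the telco.
p_tool_faulty = 0.30       # prior: the RA tool is new, cheap, barely tested
p_billing_faulty = 0.01    # prior: billing is old, proven, tested everywhere

# Assume a faulty system is near-certain to generate discrepancy reports.
p_reports_given_tool = 0.95
p_reports_given_billing = 0.95

# Posterior odds that the tool, not billing, explains the error reports.
odds = (p_tool_faulty * p_reports_given_tool) / (p_billing_faulty * p_reports_given_billing)
print(f"The new tool is roughly {odds:.0f}x more likely to be the culprit")
```

With these assumed priors the tool is around thirty times more likely to be at fault, which is why investigating it first was the cheapest route to the truth.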

So did Al Gore learn from his mistakes? Probably. He was seen as overly wooden. His intelligence was as much a liability as an asset. So making fun of himself is positive and humanises his intelligence. What could be better for Gore than to appear on the TV cartoon series “Futurama” as a head in a fishtank, making fun of his own books and environmental beliefs? He then takes the cartoon explanation of global warming from that show and uses it in his (serious) documentary “An Inconvenient Truth”. And then Gore teams up with the Futurama crew to make a (funny) trailer for his (serious) film. The success of his film has even seemingly resurrected his prospects of standing for President again. This is a man who jokes that he used to be the next President of the United States. Maybe jokes like that will get the US electorate to laugh him all the way into the White House. And the ability to tell a joke at your own expense cannot hurt if the biggest challenger for the Democratic nomination is the humourless Hillary Clinton. A good example of learning from past mistakes, as well as keeping things nice and simple.

Okay, lecture over… for now. End of part one. I divided this into two posts because if you do not want to believe that people are fallible, that they are overwhelmed by the complexity and volume of data they receive, and that they sometimes lack the ability to be self-critical in a way that might counter this problem, then you sure as heck are not going to want to read what I put into part two. So those guys in the audit firms can stop here. But if you are as cynical as me, read on…

Eric Priezkalns
