Today I am trying to motivate myself to finally finish writing the enhanced revenue assurance maturity model I started working on in May last year. But motivating myself is proving a struggle. It is proving so hard that I may actually iron some shirts first, just to put it off a little longer. I might even write a very long blog entry to delay things further, so you have been warned….
Back in May last year, reworking the maturity model seemed so obvious. The idea for improving the model came at TeleManagement World Nice. I was there to speak alongside my good friend and colleague, Dr. Gadi Solotorevsky, who, as well as being Chief Scientist at cVidya, also heads up the TeleManagement Forum’s RA team. [To be precise, he has headed up both their teams, technical and catalyst, for the last 4 years, so probably deserves some kind of medal for services to revenue assurance.]
One of the topics we discussed in that 2006 presentation was the maturity model outlined in the original TMF technical overview for revenue assurance, also known as TR131. [By the way, this document is now supposed to be available free of charge to everyone, despite what the web page says, thanks to a recent change of rules by the TMF.] Before that conference, I was starting to wonder if the idea of maturity was as dead as a dodo. The original version had been written in 2003 whilst I was still at T-Mobile UK, but it simply had not taken off, at least as far as I could tell. Many people seemed to like the basic concept, so that was not the problem. The goal was to define a strategic evolutionary path for revenue assurance, and then assess the actual state of the business against that model.

I am not claiming that this idea was original or imaginative, because it was neither. On the contrary, it was a simple reworking of the model underpinning the Carnegie Mellon Software Engineering Institute’s Capability Maturity Model Integration (CMMI). As far as I was concerned, the connection between the original stimulus for the CMMI (producing better software) and the goal of revenue assurance (avoiding bugs that cause money to be lost) was pretty obvious. And as CMMI was effectively a practical extension of the ideas of thinkers like William Edwards Deming, who were trying to apply a scientific approach to improving processes, it had a huge appeal. If nothing else, it seemed obvious to me that revenue assurance was a good representation of Deming’s ideals about iteratively studying and improving performance.

However, in early 2006, as far as I could tell, nobody was interested in maturity. But I was wrong. At TeleManagement World Nice 2006, and similar events soon after, it was obvious the opposite was true. Lots of people were interested in maturity. The problem was the reverse of what I thought it was. The problem was not apathy about strategic thinking and paths to revenue assurance maturity. The problem was multiplicity and fragmentation of thinking. Because 2006 was the year when seemingly everyone developed and presented their own maturity model.
So, let us get something straight here. Everybody having their own maturity model achieves nothing. You might as well not have any maturity models. I do not say that because I am jealous about who has the better model. Imitation is the sincerest form of flattery, so if people were copying the idea of a maturity model from the work I did and published with the TMF, I am flattered. Perhaps people thought up the idea of maturity completely separately. That is fine too, though really people should do a little more homework before they reinvent the wheel. Reinventing the wheel is not a clever thing to do, and when the TMF publishes a document about revenue assurance maturity, it hardly takes a detective to find out about it. Perhaps people thought their models were just better. That is also fine, but they might as well drop the TMF a line to say so, instead of just working secretly on their own. So I do not really care who has the best model. All that matters is that somebody has a good model and that I can get to use it. The problem so far is that nobody has a model that is even remotely good, including the one I am working on, so squabbling about which model is better or worse would be a complete waste of time.
It does not take a lot to justify my statement that all the many revenue assurance maturity models are alike in one way: they are rubbish. To justify the statement, I just need to quote some people with big brains and the ability to calculate the value of their own work.
William Edwards Deming said:
“In God we trust; all others must bring data.”
The statistician George Box wrote:
“Essentially, all models are wrong, but some are useful.”
If the revenue assurance maturity model is to have some value, there has to be some data to support it. But so far, so nothing. I will be honest with you on this point. Even the one time that T-Mobile UK did an exercise based on gauging maturity, it was all subjective, in a way that would make it impossible to gauge improvement over time, meaning there was no useful ongoing collection of data. I flirted with the idea of using maturity as a benchmark for performance in C&W’s international operations, but again the exercise was stymied because there was inadequate data to genuinely gauge if the model was successful. However, I think I understand how you could collect data, from more than one telco, and use it to validate and refine a revenue assurance maturity model. But I struggle to see how most of the “maturity models” people talk about could ever be validated using data. Their problem is not that they fail to discuss maturity or strategy. They do discuss both. The problem is that they are an arbitrary snapshot of the author’s opinion. So, in essence, they are only useful to a telco if the author happens to say something relevant and useful to that telco, despite never having worked for it, knowing nothing about it, and having no data to support his opinions. There is no methodical way to improve or change such a model. In short, the problem with the average maturity model is not that it fails to discuss maturity. The problem with the average maturity model is that it is not a model.
The points made by Deming and Box are really pretty straightforward. Having a theory is nothing. Anyone can have a theory. Theories may sound good or sound bad, but you would be silly to trust a theory just because somebody says so. Some very plausible theories have been shown to be wrong. For example, the world is not flat, the earth is not the centre of the universe, human beings can travel faster than 30mph without dying, and women tend to be smarter than men. Columbus was looking for China, not America, and he was lucky that his error about the size of the planet was cancelled out by finding a continent he had not expected to be there. So theories are only useful if you compare the theory to the real world. Then you can modify the theory to better conform to what you actually observe. That is just the essence of taking a scientific approach. A model is a kind of scientific theory with a clear relationship to specific data.

So in 2006, after seeing all the various theories unsupported by data, it was obvious somebody needed to construct a mechanism that would make it easy to collate genuine data. There was really only one choice. First, it had to be driven by me, as I am about the only person daft enough to spend time constructing such a mechanism. Then, it had to be supported by the only organisation capable of collating that data: the TeleManagement Forum (TMF). The TMF is the only organisation which could be objective about a maturity model/theory (all the others were biased because they were selling something) and which had the resources, mission and infrastructure to bring together data from many telcos. So we formed a team in the TMF and set to work on creating a more detailed model which could be used for a meaningful level of data collection. And a meaningful level of data collection is not the same as somebody proclaiming they have reached the highest possible level of maturity just to enhance their own career. It means asking a series of detailed and specific questions where it would be straightforward to find and verify the answers, and where the questions could be applied to all kinds of telcos.
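To give a flavour of what I mean by questions with findable, verifiable answers, here is a toy sketch in Python. Everything in it is my own invention for illustration: the practice areas, the questions, the weights and the scoring are made up, and it is emphatically not the TMF questionnaire. The point is simply that answers gathered this way can be scored, aggregated across telcos, and compared against whatever a maturity model predicts.

```python
# A purely illustrative sketch, NOT the TMF questionnaire: the areas, the
# questions, the weights and the scoring scheme below are all invented for
# this blog entry. The point is only to show the shape of the thing:
# specific, verifiable yes/no questions, scored per area and aggregated
# across telcos, so that a maturity rating rests on data, not opinion.

from dataclasses import dataclass
from statistics import mean

@dataclass(frozen=True)
class Question:
    area: str      # practice area the question belongs to
    text: str      # a specific question with a verifiable answer
    weight: float  # contribution of this question to the area score

# Hypothetical questions: each can be answered True or False from evidence,
# not from a self-assessed feeling of "how mature do we think we are?".
QUESTIONS = [
    Question("Measurement", "Is switch-to-bill reconciliation run on a fixed schedule?", 1.0),
    Question("Measurement", "Are leakage figures reported in currency, not anecdote?", 1.0),
    Question("Process", "Is there a documented owner for each revenue stream?", 1.0),
    Question("Process", "Are root causes of leakage tracked through to closure?", 1.0),
]

def area_scores(answers: dict[str, bool]) -> dict[str, float]:
    """Score each area as the weighted fraction of 'yes' answers (0.0 to 1.0)."""
    achieved: dict[str, float] = {}
    possible: dict[str, float] = {}
    for q in QUESTIONS:
        possible[q.area] = possible.get(q.area, 0.0) + q.weight
        if answers[q.text]:
            achieved[q.area] = achieved.get(q.area, 0.0) + q.weight
    return {area: achieved.get(area, 0.0) / possible[area] for area in possible}

def benchmark(telco_answers: dict[str, dict[str, bool]]) -> dict[str, float]:
    """Average each area's score across telcos: raw data a model could be tested against."""
    per_telco = [area_scores(a) for a in telco_answers.values()]
    return {area: mean(scores[area] for scores in per_telco) for area in per_telco[0]}

if __name__ == "__main__":
    sample = {
        "TelcoA": {q.text: True for q in QUESTIONS},
        "TelcoB": {q.text: q.area == "Measurement" for q in QUESTIONS},
    }
    print(benchmark(sample))  # e.g. {'Measurement': 1.0, 'Process': 0.5}
```

Run it and you get a per-area average across the sample telcos. Real validation would mean collecting answers like these repeatedly, from many telcos, and checking whether the progression a maturity model predicts actually shows up in the numbers.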
So that was back in the middle of 2006. Now we are some way into 2007 and it is still a work in progress. Constructing the base questionnaire, at sufficient detail to get meaningful data, but also general enough to apply to many businesses, has been very tough. Maybe, just maybe, it will be finished soon. But it still will not be a model. It will only become a model after some real data has been collected, and given the difficulties in getting agreement on the questionnaire, I am sceptical about whether that will ever happen. After all, if it is easier to just call up a consultant, or listen to someone speak at a conference about their opinions on what is “best practice” in the industry, why go to the trouble of collecting data? There is only a motive for collecting data if you can distinguish between seeming to be good at something and actually being good at something. But in the absence of any real models, how do you distinguish the two? After all, Columbus came back to Europe from America still thinking he had landed in Asia. He died without realising he had found a new continent. If people can make mistakes like that, what confidence can anyone have in distinguishing worthless from worthwhile revenue assurance?
So this is my personal opinion of the state of revenue assurance maturity in the telecoms industry. It is pre-mature. To get useful science, you need to follow solid basic principles and you need to objectively collect data, then iterate over and over. In the absence of solid principles and solid data you get lots of opinion and debate – or people who agree with each other but who have no real idea if they are right or wrong and who do nothing to find out either way. In other words, you get philosophy, not science. So far revenue assurance is a philosophy, not a science. Its value will remain unprovable until it becomes a science. To become a science, some people will have to offer up objective data without being certain about the benefits. Their data may ultimately prove that their theories about good revenue assurance are all wrong. So it takes courage to go back to the real data. Drafting a model is just the first, easy, challenge. Populating the model with real data is the harder task. It is 4 years since we started down this path – I wonder how much longer it will be before we get to the end, if we ever do. Anyhow, it is late now and I will take the same approach to finishing the document that I suspect most will take when it becomes time to answer the maturity questionnaire and gather the data. I will do it tomorrow ;)
Epilogue: Yesterday I was asked to speak at IIR’s Telecoms Internal Audit and Risk Management Conference taking place between May 8 and May 11 in London. The topic? You guessed it: revenue assurance maturity. So I had better hurry up and finish the questionnaire after all. Wish me luck – I will need it if I am to collect any data by then….