I really have better things to do than to write a blog about everything that is wrong with the TM Forum’s new Revenue Assurance Maturity Model. I really do. Nothing I could write, or do, will influence the ‘best practice guidance’ issued by the TMF. The people running their RA team have a fixed agenda. That agenda is far more transparent than the supposedly collaborative process by which they issue their standards. In case you missed the press release, the agenda is most easily illustrated by the following:
- Israeli software firm cVidya leads the TMF’s Enterprise Risk Management group;
- Israeli software firm cVidya co-leads the TMF’s Fraud Management group; and
- Israeli software firm cVidya leads the TMF’s Revenue Assurance group.
Does this suggest that TMF guidance draws upon a wide-ranging and representative sample of industry opinion about how to manage risk, fraud and assurance? You can decide for yourself. I state facts. It is also a fact that, over the last five years, the TM Forum has repeatedly issued guidelines which understate the importance of the employees of the telco, in order to push sales of technology. When I assert this, I get a lot of criticism. Naturally the criticism comes from people (however impressive machines are, they still cannot advocate for themselves). And so, the theory goes, I must be biased (after all, I have a freebie blog!) whilst the people I argue against must be honest, decent, unbiased folk (who only need to generate millions of dollars of revenue by selling software). So let me make a few, very succinct points about why the new RA Maturity Model proves that the TM Forum evades transparency wherever possible, and always seeks to undermine the value of people in order to promote sales of software.
1. Nobody told you that the new RA Maturity Model was coming out.
If you are employed by a company that is a member of the TM Forum, you have the right to comment on the draft documents they issue, prior to formal ‘approval’. Approval itself is a mystery – it is not clear who actually decides what is approved or how they reached that decision. But at least you can comment, saying if you like or dislike what they produce. Or you could, if you knew that a new document was coming out.
My point here is simple. In a few weeks, expect a press release from cVidya which plainly states that the TMF has approved the new RA Maturity Model, and that cVidya has made it available as a software add-on to their existing product suite. What the press release will definitely not say is that the TMF has approved “GB941 Revenue Assurance Solution Suite 4.5” and that cVidya have updated their products accordingly. The press release will not use that language because nobody knows that “GB941 Revenue Assurance Solution Suite 4.5” is code for the new RA Maturity model. And that is why the TM Forum sends out notifications alerting its members to the release of GB941 Revenue Assurance Solution Suite 4.5, without bothering to mention what is in it.
To illustrate my point, last week I contacted one of the four telco employees listed as authors of the new RA Maturity Model, to ask if he knew when the document was being approved. He had no idea that the deadline for comments was only days away. He admitted he had not even read the draft document. If the supposed authors of the document do not read and approve it, then who does?
2. The old model stated that people were an independent and crucial dimension in the determination of maturity. The new model deletes this dimension.
The original TMF RA Maturity Model had 5 dimensions: Organization, People, Influence, Tools and Process. In order to score as a fully mature organization, the organization had to be fully mature in every dimension, without exception. This was a straight copy from the Software Engineering Institute’s original Capability Maturity Model, and drew on their empirical evidence. The new TMF RA Maturity Model has 4 dimensions: Organization, Process, Measurement and Technology. Spot the difference? There are still a few questions about people, now included under the Organization dimension. But no explanation is given to justify this radical change, which demotes the importance of people, whilst putting even more emphasis on technology. In fact, two of the four dimensions are now dominated by technology, because the new ‘measurement’ dimension only makes sense in the context of technology to provide measurement.
But to fully understand the way in which people have been demoted in the new model, you need to appreciate the following…
3. The old model said that the assurance chain is only as strong as its weakest link. The new model just takes an average.
The old model was built on a straightforward but important principle. When many parts have to work together, the weakest performing part sets the limit on the overall performance. Good organization but weak process will deliver weak performance, good tools but weak people will deliver weak performance, and so on. The old RA Maturity Model emphasized the importance of improving maturity across all the dimensions in a coordinated way, because spending a lot of money or effort to improve one dimension would be wasteful and ineffective, if the other dimensions were left far behind. The new model just takes the aggregate score across all the questions, and translates this into the overall level of maturity.
This means that in the new model, even if no effort is put into recruiting and developing staff, a high maturity score is still possible by simply putting more money into technology and the things that senior managers tend to do.
Once again, the new RA Maturity Model deviates from a key principle in the Capability Maturity Model, which was why it was adopted in the original RA Maturity Model. No justification is given for this fundamental change of approach. The document begins by suggesting reasons why a new version of the maturity model was needed, such as the increasing popularity of digital services, and a different ‘ideal’ for revenue assurance (whatever that means). However, these reasons cannot possibly explain the much more fundamental changes that have been made in practice, without showing any reasoning or data to support those changes.
4. The new model makes it too easy to attain the highest level of maturity.
In the old model, to attain the highest level of maturity, the organization had to achieve the highest level within each of the five dimensions. It was a simple idea, which expressed how difficult it should be to achieve the ‘ideal’. In effect, an optimal organization could only exist if an optimal answer was given to every single question. Is this not obvious common sense? How can the whole organization be optimal at anything, if some crucial elements are sub-optimal?
The new model not only brings in averages, but sets low expectations. To be scored at the highest level of maturity, the telco needs only to achieve 80% of the maximum possible score. That means a telco can completely fail to do some important tasks, like adequately training staff, or reviewing new products, and still be assessed as ‘optimal’ at revenue assurance.
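To make the contrast concrete, here is a minimal Python sketch of the two scoring philosophies. The dimension names and scores are invented for illustration and are not taken from either TMF document:

```python
# Hypothetical dimension scores on a 1-5 maturity scale; the names and
# values are invented for illustration, not taken from either TMF model.
scores = {"organization": 5, "process": 5, "technology": 5, "people": 2}

# Old-model principle: the chain is only as strong as its weakest link,
# so overall maturity is capped by the lowest-scoring dimension.
weakest_link = min(scores.values())

# New-model principle: overall maturity is the aggregate (average) score.
average = sum(scores.values()) / len(scores)

# With an 80%-of-maximum threshold, the top level is reached even though
# the 'people' dimension has barely been addressed.
reaches_top_level = average >= 0.8 * 5

print(weakest_link, average, reaches_top_level)  # 2 4.25 True
```

Under the old principle this telco would be rated a 2 overall; under the new one it averages 4.25 and clears the 80% bar for the top level, despite the weak ‘people’ score.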
5. There is obvious bias to the individual questions in the new model.
Consider the following questions, all taken from the new RA Maturity Model:
- Is appropriate budget made available for the supply of RA technology?
- Is appropriate budget made available for the deployment of RA technology?
- Is appropriate budget made available for the operation of RA technology?
- Is appropriate budget made available for the on-going support and maintenance of RA technology?
In contrast, there is only one question that might be interpreted as relating to another crucial aspect of a revenue assurance budget.
- Is the resource profile of the RA team reviewed periodically to ensure it is staffed appropriately?
There are many other examples of how the questionnaire is slanted, but this example neatly illustrates the main problem. It was written by people obsessed by using software, and indifferent to the alternatives.
6. And all the rest…
I could go on for much longer, and in much more detail, but people complain that I rant for too long. So I will not go into a lot more detail. My main point is made: the new RA Maturity Model deliberately places less importance on people in order to focus even more attention on software and the budget to buy it. But there are very many other flaws with this work.
The new model repeatedly confuses the revenue assurance maturity of the whole organization (the very clear purpose of the original maturity model) with the maturity of a nominal RA Department. It even talks about the ‘ideal’ RA function, as if all that matters is the function, and not how the rest of the business behaves. The goal of revenue assurance is holistic, making demands all across the telco, and the original model sought to empower RA managers and staff by making this clear. Also, the business should have the right to split up work between different departments in any way that best suits them. What matters is the overall result to the organization, not the ego of some guy with the job title of ‘Head of RA’.
The new revision was supposedly needed to keep up with technology, but its understanding of technology is backward-looking. Time and again it refers to ‘RA technology’ in ways that indicate this technology must be separate to other technology. RA is a goal, not a technology. There is no reason why the same technology might not satisfy multiple goals, including the goals of RA. As such, the new model takes no account of the impact of Big Data, and other trends towards mass aggregate use of data across the enterprise. In fact, it still has a prejudice against using data from ‘secondary’ sources, even whilst Big Data is making a nonsense of the idea that data can only be trusted if it comes from ‘primary’ sources.
The new model claims to be a simplification of the old model, but it is not. The old model had five answers to every question, a simple way to express how every answer to a question maps to one of five levels of maturity. By destroying this mapping, the new model is opaque, and does not represent maturity as a stepwise improvement that must go across all dimensions.
As is sadly typical of the TMF RA team leaders, the new model lacks transparency. This fits with its increasing complication, which is hidden from view and then misrepresented as simplicity. The new equations to calculate maturity are not visible to the user. The old assessment could be performed with pencil and paper, whilst the new one must be done in a Microsoft Excel spreadsheet, because of the equations hidden within. All the questions and answers in the original model were written out in full, so everybody could see them and implement them as they wished. Because the old model was transparent, telcos were free to tailor the model if they wanted to. There is some irony in this fact, because Gadi Solotorevsky often gave presentations about the ‘TM Forum RA Maturity Model’ in co-operation with Telefonica, even though Telefonica had very clearly changed the model to reflect their point of view. As the new document explicitly states, it would not be possible for a telco to change the new model, even if they wanted to, because of the way the equations have been implemented.
It should be noted that the new document claims to have improved on the old model because it has ditched the weighting scheme which was used in the original model. However, it is important to reflect on why the original model had such an inelegant weighting scheme. The reason was that the weightings were the result of many people’s contribution to the original model. If we surveyed the opinions of ten people about how important question A is, relative to question B, we might expect ten different answers. To get to an answer, the original model just totalled the weightings proposed by all the contributors, and used the average. It was not a perfect system, but it was clear and fair. The new model says it has improved upon this. However, I cannot work out how it would be possible to do this, unless just one or two people decided to impose their will on the work. As such, the new model must be much less of a collaborative team effort than the old model was.
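For what it is worth, the original model's weighting scheme is easy to sketch. The question names, contributor counts and proposed weights below are hypothetical, purely to show the mechanism:

```python
# Hypothetical example of the original model's weighting scheme: each
# contributor proposes a weight per question, and the agreed weight is
# simply the mean of the proposals. Questions and values are invented.
proposed_weights = {
    "question_A": [3, 5, 4, 4, 2, 5, 3, 4, 5, 4],  # ten contributors
    "question_B": [1, 2, 2, 3, 1, 2, 2, 1, 3, 2],
}

agreed = {q: sum(w) / len(w) for q, w in proposed_weights.items()}
print(agreed)  # question_A ends up weighted roughly twice question_B
```

Inelegant, certainly, but every contributor's opinion is visibly reflected in the result, which is the point being made above.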
Which leads me to my final point. When I was at WeDo’s user group event, I saw an excellent presentation by Daniele Gulinatti, VP of Fraud Management & Revenue Assurance at Telecom Italia. His presentation struck a chord with me, because it was all about real people, and getting the best from his team. They delivered great results by using imagination and good processes, irrespective of the limits on their technology budget. And some of his team were in the audience, and I can vouch that I could feel their enthusiasm from the other side of the hall. So I find it hard to reconcile Daniele’s effervescent humanity with the fact he is listed as one of the authors of this stilted, cold TMF document. On the one hand I see a manager who clearly understands that superior results can only come from a motivated team. On the other hand, I see a TMF document that treats people as inferior to, and more disposable than, machines. What can I do, but shrug my shoulders, and wonder how this is possible?
Perhaps this divergence is natural in human affairs. Many managers want official-sounding documents to show to their bosses, arguing they should have a higher budget. I was always conscious of this potential pitfall with the original RA Maturity Model. Even though it explicitly presented a strategic overview, there was always the prospect that it might be manipulated to give quick budget wins. That is why so many vendors and consultants copied the idea (but not the content) in the hopes of boosting their sales. Their versions of the RA Maturity Model soon disappeared. The original TMF RA Maturity Model has thrived, because it really was long-term, strategic, and built on solid foundations. And that means curbing bias (like the need to maximize this year’s software budget) in order to present a more balanced model that genuinely considers what is needed in the long-run (like a motivated team, which receives proper rewards for its successes).
But like barbarians, the ‘leaders’ of the TMF team are determined to wreck anything that does not immediately gratify them. Maybe I am in the minority. Perhaps the majority agrees with their approach. If so, I would accept the will of the majority. But we will never know, because whilst the original RA Maturity Model was written in 2006 with the involvement of just three telcos, the new RA maturity model has been written in 2014 with the involvement of just three telcos. Getting three telcos to contribute to an RA document in 2006 was a minor miracle. In 2014, it is a sign of apathy, or worse. After all, people have had 8 years to get used to the idea of an RA Maturity Model. Only a few of us understood the idea in the beginning. The TMF claims that half of the respondents to its RA surveys use the model. But despite that, they could only get MTN, Telecom Italia, and Telefonica Chile to contribute their conception of the new ‘ideal’ for revenue assurance. With the greatest respect to the people working in those telcos, why do they know the new ‘ideal’ for revenue assurance, more than all the other people who now work in telco revenue assurance? And based on the person I spoke to, what confidence is there that anybody currently working for a telco has actually read the whole document?
The team who wrote the original RA Maturity Model produced the questionnaire using a voting process. Questions were proposed, answers proposed, people voted on which ones made the cut, and which were rejected. And then, there was a vote on the weighting of the questions. If the TMF really wanted the opinion of telcos, why did it not run a survey on the content of the new RA Maturity Model? Such a thing would have been impossible in 2006. In 2014, the same task is incredibly easy. I believe it is because the whole point of their ‘collaborative’ process is to exclude the involvement of telcos, whilst making it appear that they invite their input. Everything is done to make it hard to participate or respond, from requiring people to fly around the world to attend meetings in person, to hiding equations in spreadsheets, to sending out notifications about “GB941 Revenue Assurance Solution Suite 4.5”. The TMF does lots of surveys about lots of things. Why not decide the new ‘ideal’ for revenue assurance by doing a survey? The only possible reason is that the answers might not support the leaders’ agenda.
The new RA Maturity Model is a broken product. But that is no surprise: it is the output of a broken process. The TMF has no interest in fixing one. It is beyond my abilities to fix the other. The only good thing about the new model is that it will die in a year or two, victim of its own failings. It says too little about people – and people often last longer than technology. It is too easy to reach the top level of maturity, meaning there will soon be calls for an upgrade. It does not promote the kind of balanced approach needed for long-run improvement. The equations are too complicated to understand, and have been hidden from view, meaning they cannot be fixed if they do not work. These fundamental flaws have doomed it to an implausibly short life for a supposedly ‘strategic’ model. But then, we should not be surprised. The real authors of this revised model are worried about this quarter’s sales figures, not about the next evolution of a mature strategy for business improvement.
Maturity is a nice concept, yet at high elevations the air is rather thin. To say, “ABC Telco is at a 40% maturity level” is meaningless by itself. It begs the questions: what services? what business units? which processes?
At the ground level, though — in targeted operational processes where the company truly needs to win — measuring and plotting a path of excellence is crucial.
I participate from time to time in a programmer’s on-line forum and great discussions happen there because very specific technical questions are posted and many experts can chime in.
The complexity and variety of RA functions makes it much harder to codify lessons learned. And this is precisely why consultants, software vendors, and outsourcers have found fertile ground in RA.
Wouldn’t it be great if RA was a visual practice?
Consider the business model behind Krossover, a startup (founded by, you guessed it, a 25-year-old ex-athlete) that provides analytics to high school basketball teams. A high school team pays $1,400 and gets a full season of analytics on every game. The local team simply has someone videotape each game and provide the roster of players. After that, automation and a busy team of guys in Bangalore create a complete on-line analysis for coaches and players.
Krossover is catching on like wildfire. My wife saw a story about it on Japanese TV. And the Japanese are as nuts about high school sports as the Americans and Canadians (the markets where Krossover is currently sold) are.
RA is in need of a Krossover. Any ideas out there?
Thanks for the comment. You raise some key points, as always.
My first observation is that sport is so easy to analyse because everybody knows how to keep score. Which team won the basketball game? Is it the one who threw the ball highest in the air? The one who scored most fouls? The one with the best uniform? The one with the happiest fans? No, of course not!!!! It’s the team who scored most points. But that’s where sport differs from business. Not every business has the same objective. And not every employee in a business has the same objective. That’s where the new RA maturity model gets things back to front. They create a situation where they’ve decided what the objectives will be, and hence know how to satisfy them. But they’re thinking small, and looking backward. The future of business assurance belongs to high-powered teams who help the business achieve its objectives – which begins by listening and understanding what those objectives are. The teams who try to dictate objectives from the bottom-up, by starting from the controls they want to implement, the technology they want to use, and then persuading the business it should pursue the related goals, are the ones who will get stuck in middle management hell. They’ll wonder why they don’t get more senior support, as if it’s the c-level’s job to justify the roles people want to adopt under them, and not vice versa.
Put it this way: every business needs someone to perform the payroll, because every business pays its employees. That task can be handled in-house, or much of it can be outsourced. But is payroll a source of strategic advantage? Of course not! At best, it’s basic hygiene. Nobody is ever going to get the execs excited about payroll, unless they’re suggesting a way to perform the same tasks more cheaply (which is why outsourcing is such a common move). And the same kind of future hell is envisioned by the new RA maturity model. They start by prescribing tasks, and end up being seen as hygiene, not as a source of strategic advantage. And so, they’ve immediately walked down the path where the main question is the cost. Arguing they add benefit will end up as mindless as saying payroll adds benefit (because otherwise employees will get fed up and leave!) – we can agree you’ve got a crappy business if you don’t do this right, but that we shouldn’t expect congratulations and rewards just for getting basics right.
I intend no offence to anyone working in Bangalore for Krossover, but they represent the opposite of what I think is the best possible future for business assurance. Instead of going down the route where we atomize every task to the point of making it brainless – at which point the only question becomes whether it is cheaper to outsource or to automate (or both) – we need to upskill business assurance.
Do you know who gets paid well in sports? Coaches. Why? Because players aren’t machines, team strategies aren’t mechanical, and success is proven by the end result, with coaches getting fired if they don’t get results. I’d like to see business assurance take a radically different path than that advocated by the software obsessives who dominate the TMF. That is the path of the elite coach, who does things that no machine can do, because there’s no mechanical way to get a team to grind out the best results. He’ll use all the available tools – including all the data he can find. But he’s more than a data cruncher. He’s a tactician, and a strategist, and a psychologist too. That’s what is missing from the TMF vision of maturity. Follow their path, and they’ll relegate the assurance practitioner to something akin to the waterboy, performing a mundane and repetitive task, which may be essential, but which anyone can be trained to perform. But to be a coach, it’s not enough to receive training. You have to transcend, to synthesize, and be able to inspire, in order to give training, even beyond the point when anyone has taught the coach how to train. That’s the advantage delivered by the best coach. That’s the advantage the best assurance should seek to give to business.
Very well said. The top down objectives need to drive RA priorities.
To tease talkRA readers a bit: last week I interviewed the CEO of a billing solutions vendor who has adapted his system to be top down. In a few days, see the Black Swan story.
I thoroughly agree with the “Coaching” analogy, too. That’s the missing link: a high quality RA Coaching Academy (virtual or live) to advance the art. And I’m also looking for ideas on how I can help move such an idea along.
In defense of those Krossover guys in Bangalore, we should think of them as extensions of the analytic delivery system. They are not basketball experts but are merely doing the mundane but necessary work to prepare the data for further analysis by coaches and players.
We all know the role of software in RA is great, but how does an organization either grow Moneyball guys like Peter Mueller or tap into external experts/consultants without creating a high-maintenance or vendor lock-in situation?
I love the concept of applying moneyball-type analysis, which is why I loved your interview with Peter Mueller and why I want to emphasize that I intend no disrespect to Krossover’s guys in Bangalore. The way I see it is this: the data analysis is there to help the coach. So if you want to pitch high-end data analysis, you must start by pitching the need for a high-end coach. The two aren’t in opposition. If you tried to manage a sports team by dispensing with the coach, by just applying some clever-clever algorithms to determine which team members to play, what tactics they should employ etc, that might work for a while, but you’d soon find that the rival coaches would ‘work out’ how to beat those algorithms. And then you’d suffer loss after loss.
The same thinking applies in fraud management. You can’t just set up some rules for how to do fraud management and then sit back, thinking you’ve solved the problem. The fraudster will keep evolving, even if you don’t. And whilst that’s easy to understand because there we can easily conceive of the fraudster as an intelligent actor, it’s also true that assurance challenges evolve even if there’s no specific ‘intelligence’ that we’re combatting. That’s because services change, technologies change etc, but just as importantly corporate priorities change, as often as not because of the changing competitive environment.
So I’m all in favour of powerful tools based on data analysis, and I can see why the Krossover guys would provide a really useful service to a coach. But in my mind, this isn’t a ‘chicken and egg’ scenario, where we can debate what comes first. When it comes to assurance, the creative intelligence and insight of the coach (i.e. the business assurance practitioner) comes first, as he coaches his team (i.e. the telco) on how to get the best results they can. That’s always been the way, dating right back to the original pioneers of business assurance. Give the coach the best tools, give the coach the best data, but the team succeeds or fails based on the coach’s insights into how to get the best from his team, not because you’ve copied some playbook which has been handed to every single coach. The data helps him understand the team, the playing environment, and the ‘opposition’ (where relevant). But it’s useless without a decent coach to synergize solutions from the available resources with a view to attaining goals. The coach needs to decide what tools and data he most requires, and what he will do with them. And the coach’s needs will vary from team to team.
People who try to invent systems of thought that dictate how to get the best results are like weak coaches who think they will win because a superior playbook has fallen into their lap. But they’re wrong. Firstly, nobody gives away the very best playbook, no matter how generous they are. Secondly, maybe it’s a fine playbook for some teams, but that doesn’t mean it suits your team. And finally, no matter how fine the playbook is, it needs to keep evolving. So as soon as you get too specific and detailed when writing a ‘maturity’ model, you end up with a glorified playbook, and as the competitive environment evolves, that playbook’s fixed prescriptions will ultimately doom your team to be losers.
And I suppose now I’m reiterating what’s so wrong with any ‘maturity’ model that thinks technology is a key dimension, but relegates people to a subordinate role. That’s like trying to win a race by designing and building a supercar, and then thinking you can put any monkey behind the wheel. Whilst the technology can deliver an advantage, it won’t even be useful if it’s not being steered by somebody who knows how to get the most from the technology, and who knows where he is going! In fact, in the wrong hands the supercar (i.e. the data) will do more harm than good.
A fool with a tool, is still a fool…
First, let me congratulate you on launching a great website, a good step forward from talkRA. I met you in 2013 at the WeDo User Group summit.
Now, on the TM Forum’s revised maturity model. The original maturity model was, I believe, not simple to implement because it was more subjective in nature, but the revised one is even more complex and mechanical, with over 850 questions, some of which do not seem consistent with one another. I enthusiastically tried to fill it in to assess our maturity level, but lost interest after completing about 25%, as I had lost the thread.
I understand that the TM Forum leaders tried to introduce more quantification into the measurement, but they lost sight of the objective by making it very lengthy and by putting more focus on technology than on people. Technology is very important, but people are more important, or at least equally so.
Ahmad Nadeem Syed
Thanks Ahmad! It’s always encouraging to hear from people who work in telcos, saying they benefit from reading Commsrisk. I don’t want to run this site if it’s not helping the hardworking people inside telcos.
I feel strongly that telcos do not pay enough to the people who work in risk, RA and fraud. Telcos sometimes try to cut corners, either by allowing avoidable mistakes to happen, or making unreasonable demands of the staff that are trying to protect the telco’s best interests, or by spending a lot on automating what cannot be automated. That was borne in mind when the original RA maturity model was written. The ‘people’ pillar was part of the CMM model that we copied, and if software engineering recognizes the crucial importance of people, then I saw no reason to assign any less importance to the people working in RA.
It saddens me that the team behind the new RA maturity model have swallowed the vendor hype. They have nobody who fights with passion on behalf of telco employees. I know that people from telcos attend the TMF meetings, but I question how much telcos can lead the work of the team when each telco representative usually attends one meeting and never returns – and based on how I’ve been mistreated and sidelined in the past. I can’t think of a single telco employee who has attended as many meetings as I did on behalf of the telcos I worked for, and I recognize I didn’t attend that many because it’s so hard to persuade your employer that taking part in the TMF is going to be a useful activity. And maybe my employers were right, given what happened whenever I complained about the work delivered by the TMF’s team!
We need a different way forward, where telco people are properly listened to and drive the direction of work, instead of being marginalized or treated as either customers or students, only fit to receive the tools and education supplied by the ‘experts’ who work for vendors. Take a look at my recent post about RAG, and you’ll get a sense of the ideas I’m exploring to make it possible for telco people to collaborate more, by reducing the cost and other obstacles to teamwork.
Hey Eric, still as succinct as ever, I see – but I see your point! I am trying to find some RA Benchmarks on the TMF Forum (which Lebara are members of) and seem to keep finding links to software … can someone point me towards good old rating/usage/billing industry benchmarks, please (we are attempting to climb that maturity ladder!)
Lol. You look for useful data and you find links to software instead – that’s the TMF for you. I don’t think they have anything useful any more, but I’d be happy to be proven wrong.
(Where’s the Global Billing Association when you need them? They had lots of useful benchmarks of the type you’re asking for. But then they got taken over… by the TMF.)