How People Were Downgraded in the TM Forum’s RA Maturity Model

RA practitioners who work for members of the TM Forum may be interested to hear that the new draft of the TMF’s Revenue Assurance Maturity Model questionnaire (cover sheet pictured above) is currently open for ‘member evaluation’. That means you can criticize it until December 12th, and somebody will pay attention to your criticism… in theory. In actual practice, they will probably just ignore it because restructuring a massive and really badly-designed spreadsheet would take a lot of work, and nobody wants the burden. So I do not recommend you waste your time responding to the member evaluation, but I have, with a criticism that regular readers will find familiar. In short, the new maturity model significantly downgrades the contribution made by human beings who work for RA departments and elsewhere in the telco.

I strongly disagree with the decision to reduce the importance of people when evaluating RA maturity. As complexity rises, the skills and motivation of employees are increasingly important. Sadly, that does not matter to the authors of this questionnaire, as demonstrated by the fundamental changes they made to the mathematical formula for evaluating maturity.

Before I go further, let me make an observation about auditing spreadsheets. If you have ever audited a complicated spreadsheet, you will know it is a tedious exercise. And yet, it can be necessary. I personally have dealt with bad people who rigged spreadsheets. They deliberately made their spreadsheets look like they worked one way, when really they worked a different way. These people were bold liars, and I suspect they often got away with it. They knew that whilst anybody might check spreadsheet formulae in theory, very few have the time, intellect, motivation and perseverance to do so. As such, the liars could simply misrepresent how the spreadsheet works, whilst pretending they were being open and transparent. The TMF is doing something similar with their new spreadsheet, which they call a ‘questionnaire’. It is not just a questionnaire – it is also a complicated sequence of calculations. But they are not being transparent about those calculations, or how those calculations have changed since the previous version of the maturity model. This is what the TMF says about the new model, with my added emphasis:

It includes changes to the wording, the weighting of questions and answer options, as well as removing redundant questions, and some minor reorganization of questions within sections.

That is a lie. When I explain how the calculations have changed, I am sure you will agree there has been a very fundamental change in the way questions are organized, and hence the way maturity is calculated. But before I do so, let me make another observation. The previous version of the questionnaire was a real questionnaire – it was not a spreadsheet. The method for calculating maturity was separately explained to users of the model, because they might want to do the maths themselves, instead of relying on a spreadsheet to do it for them. That method was really very simple. By conflating a questionnaire with a spreadsheet, the new model has created an opportunity to increase complexity, and to mislead people about how the method of calculation has changed. Sadly, from the words above, it is clear somebody made a conscious decision to deceive.

Let me present the new and old formulae for calculating RA maturity, so you can see the significance of the change in approach. To begin with, I must define some variables.

p = maturity score for the people who work for the telco
o = maturity score for the telco’s organizational attributes
t = maturity score for the telco’s technology
w = maturity score for the way the telco works (a.k.a. its processes)
i = maturity score for how RA concepts and results influence the business
m = maturity score for the telco’s measures of performance

Now let me define a function called min().

min(a, b, c, …) = the lowest number in the set (a, b, c, …)

So now I can contrast the old and new maturity formulae.

Old maturity score = min(p, o, t, w, i)

New maturity score = ((3p/7) + (4o/7) + t + w + m) / 4

Put like this, I struggle to see how anyone with a mathematical mind can think the new formula is a ‘minor reorganization’ of the old formula. It is both a fundamentally different approach, and it clearly reduces the significance of p. The maturity score for people used to be one of five equally important scores for five separate dimensions of maturity. The new method relegates people to being worth 3/7ths of the scores given to other attributes like technology and measurement.

To obtain the new formula, I had to dig into the spreadsheet to understand changes that have been disguised. For example, the old method of calculation was based on five clearly separate dimensions: people, organization, technology, way (a.k.a. process), and influence. Each dimension was equally important. The new spreadsheet has only four dimensions: organization, technology, process, and measurement. The authors pretend there are still plenty of ‘people’ questions in the new questionnaire, but they are now included as a subset of the ‘organization’ category. In fact, those people-related questions only represent 3/7ths of that category; the remainder of that category contains questions that previously would have belonged in a stand-alone organization dimension.

We can debate the virtues of substituting ‘measurement’ as a dimension that replaces ‘influence’. However, as a poker player, I consider this to be a tell – a behavioral clue which reveals what is in the mind of another player. My belief is that this new version of the model has been written to suit one particular international telecoms group, by people who always wanted to put more emphasis on data-driven reactive measures and less emphasis on proactive human activities like the review of controls before a new product is launched.

The original maturity model was open and transparent. Anybody could easily modify it any way they pleased. Some did alter it, to suit their business and their opinions. So why should this new version be tailored to suit the prejudices of a particular RA department in a particular group? And why should they dictate to the whole industry that a group head office activity like collating measures of leakages is more than twice as important as training staff in each local operating company?

I take the opposite view: measurement is a trivial exercise unless people do something in response to the measures. That is why the quality, skills and influence of human resources are at least as important as enumerating the differences found by an automated reconciliation.

We can also argue about the change in emphasis, from saying the maturity score is the lowest score in any of the separate dimensions, to making it the mean average of them all. But the impact of that change is clear. The new method means management can boost their RA maturity score without tackling their greatest weaknesses.

Consider a hypothetical telco with the following scores for each of the variables used in the formulae above: p = 1; o = 4; t = 3; w = 4; i = 3; m = 2. Per the original maturity model, the overall maturity score would be only 1, because p = 1. The message is clear and unequivocal: to boost performance, the telco must invest in its staff. In contrast, the new formula gives a much higher maturity of 2.93. And this new approach allows many ways to boost supposed performance, because management could try to lift t from 3 to 4, or raise m from 2 to 3. Either of these changes would boost the overall maturity score to 3.18. In comparison, raising the people score from 1 to 2 would only increase the new-style maturity rating to 3.04.
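For anyone who wants to check the arithmetic, here is a short Python sketch (my own illustration, not part of the TMF questionnaire) that computes the old and new scores for this hypothetical telco:

```python
# Hypothetical dimension scores from the example above.
p, o, t, w, i, m = 1, 4, 3, 4, 3, 2

# Old model: overall maturity is the weakest of five equal dimensions.
old_score = min(p, o, t, w, i)

# New model: a weighted average in which 'people' counts for only
# 3/7ths of the 'organization' category, and 'influence' is dropped.
def new_model(p, o, t, w, m):
    return ((3 * p / 7) + (4 * o / 7) + t + w + m) / 4

print(old_score)                              # 1
print(round(new_model(p, o, t, w, m), 2))     # 2.93

# Lifting technology (or measurement) by one point beats lifting people.
print(round(new_model(p, o, 4, w, m), 2))     # t: 3 -> 4 gives 3.18
print(round(new_model(2, o, t, w, m), 2))     # p: 1 -> 2 gives only 3.04
```

The sketch makes the perverse incentive explicit: under the new formula, a one-point improvement in technology or measurement raises the headline score by 0.25, while a one-point improvement in people raises it by less than 0.11.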

The new calculations are clearly biased towards technology, process and measurement, and they downgrade the value of human resources. Furthermore, the approach is wrong-headed. RA is only as strong as its weakest link. It does not matter if you have wonderful automated detective controls if you lack the people who can act to recover leakages and prevent them recurring. It does not matter if you have the most precise measures of leakage in the world, if you lack the influence to do anything about them.

I have asked individuals who say they were involved in developing the new maturity model to explain the changes in the way maturity is calculated, and to answer my questions in public. They sometimes say yes… but then beg off from actually doing it. Somebody, somewhere, has hidden motives for making important changes to the way maturity is calculated, but they have avoided scrutiny by putting the detail in a spreadsheet, and then pretending that the most significant change is “a minor reorganization of questions within sections.” Decide for yourself if this is how a ‘best practice standard setting’ organization should behave.

If you want to review and respond to the TMF member evaluation, you have until December 12th. But based on my previous experience, doing so would be futile. I do not believe there will be any serious consideration of my complaint within the TMF. They are unlikely to pay me the courtesy of replying directly, or even acknowledging the points I make. My main objective is to warn you of the consequences of this change in approach. After reflecting on whether an investment in people is really worth less than half of an investment in technology, you may decide that the new version of the maturity model is not a suitable tool for appraising your telco.

One major telco group appears to be using this draft questionnaire as if it has already been finalized and approved. I doubt that anyone will be willing to confront the real backers of the new maturity model about why they think the hard-working staff they employ are worth so much less than the software they run, or the numbers they report. The saddest irony is that the new questionnaire is more than 10 times longer than the original, and will create lots more burden for under-appreciated employees. My sympathies lie with the workers who tackle genuine leaks, and not those who play silly games within the unreal cells of a spreadsheet.

Eric Priezkalns
Eric is the Editor of Commsrisk. Look here for more about the history of Commsrisk and the role played by Eric.

Eric is also the Chief Executive of the Risk & Assurance Group (RAG), a global association of professionals working in risk management and business assurance for communications providers.

Previously Eric was Director of Risk Management for Qatar Telecom and he has worked with Cable & Wireless, T‑Mobile, Sky, Worldcom and other telcos. He was lead author of Revenue Assurance: Expert Opinions for Communications Providers, published by CRC Press. He is a qualified chartered accountant, with degrees in information systems, and in mathematics and philosophy.