Here’s a new thought: what exactly would we correctly term proactive Revenue Assurance? I have seen quite a few scenarios where an operator claims to have a proactive check system in place, but on further analysis we find that it isn’t truly proactive, merely reactive with a shorter window for detection and rectification.
It got me thinking about the nature of revenue leakage. When we take a step back and look at the big picture, we see that a leakage happens due to unforeseen eventualities, misconfigurations/omissions, poor system integrations or maybe even internal fraud. In none of the above cases (which are but a few of the reasons for revenue leakage) can we truly say that we are capturing all leakages. How would we go about proactively checking for integrity/completeness?
In my experience, we can, to a certain level, perform proactive checks as far as subscription data is concerned. Standard checks, like matching subscriber feature information between the HLR, Provisioning and Billing systems, would in some ways prevent revenue leakages proactively (if subscription data integrity is maintained, many of the common scenarios that would otherwise appear in usage data are also reduced). Issues in usage data, such as usage not being billed, can often be attributed to unsynchronized subscription information between the provisioning and billing systems. I have seen cases where a subscriber is provisioned for certain services at the switch, but the same set of services is missing in the billing system. In such a case the subscriber uses those services (e.g. voicemail) but never has to pay for them.
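As a minimal sketch of that kind of subscription check, the comparison of provisioned services against billable services could look like the following. The subscriber numbers, service names and dictionaries are purely illustrative stand-ins for real extracts from the network and billing systems:

```python
# Sketch: compare the services provisioned per subscriber (e.g. at the
# HLR/provisioning layer) against what the billing system can rate.
# All data below is illustrative, not from any real system.

def find_profile_mismatches(provisioned, billed):
    """Return {msisdn: services active in the network but not billable}."""
    mismatches = {}
    for msisdn, services in provisioned.items():
        missing = services - billed.get(msisdn, set())
        if missing:
            mismatches[msisdn] = missing
    return mismatches

provisioned = {
    "27820000001": {"voice", "sms", "voicemail"},
    "27820000002": {"voice", "sms"},
}
billed = {
    "27820000001": {"voice", "sms"},  # voicemail active at the switch, absent in billing
    "27820000002": {"voice", "sms"},
}

print(find_profile_mismatches(provisioned, billed))
```

Each mismatch is exactly the voicemail-style scenario described above: a service the subscriber can use but will never be charged for.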
For usage data (i.e. XDRs), I find that most proactive checks still require at least a minor amount of leakage to occur before an alarm is triggered. The only true way to perform proactive revenue assurance for usage data is either to ensure all root-cause possibilities are covered and alarms are set up at the core level rather than at the data output level (a massive task), or to have an RA test bed before launching any service into a production environment (we could scale down the total load to consider). Using the test-bed approach, we could effectively run RA metrics to verify the absence of leakages, as well as capture issues at an early stage. Once the RA department clears the new service/product, it could be launched into production. Even then, this would not give us 100% assurance of no revenue leakages.
Any other suggestions for Proactive RA?
My interpretation of reactive, active and proactive is: when, in relation to the activity which generates the revenue, is the assurance done? In reactive RA we would compare the service profile in the network to the profile on the billing system after service activation was done, or compare the in-and-out counts of CDRs between systems after the CDRs were transferred (say the next day, or at month-end, however the control was set up).
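The in-and-out CDR count comparison mentioned above can be sketched as a simple per-file reconciliation. The file names, counts and the optional tolerance are assumptions for illustration:

```python
# Sketch: reconcile per-file CDR counts between a sending and a receiving
# system after transfer (e.g. the next day). Data is illustrative.

def reconcile_counts(sent, received, tolerance=0):
    """Return {filename: (count_out, count_in)} for files whose counts differ."""
    discrepancies = {}
    for filename, count_out in sent.items():
        count_in = received.get(filename, 0)
        if abs(count_out - count_in) > tolerance:
            discrepancies[filename] = (count_out, count_in)
    return discrepancies

sent = {"cdr_20240101_001.dat": 15000, "cdr_20240101_002.dat": 14800}
received = {"cdr_20240101_001.dat": 15000, "cdr_20240101_002.dat": 14650}

print(reconcile_counts(sent, received))
```

Note that this is reactive by the definition above: the records were already transferred (and possibly lost) before the check runs.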
Active RA is doing the assurance during the activity and that comes down to QA. An example here would be verification done by a second party when rate tables are updated. For instance, take a copy of the table before the changes and one after the changes were uploaded or captured manually. Compare these two and if everything is cool, put the table back into production.
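The "copy before, copy after" rate-table verification described above could be sketched as a snapshot diff that a second party reviews before the table goes back into production. The zone names and rates are purely illustrative:

```python
# Sketch: diff two rate-table snapshots (before and after an update) so a
# second party can sign off only the intended changes. Data is illustrative.

def diff_rate_tables(before, after):
    """Return (added, removed, changed) rates between two snapshots."""
    added = {k: after[k] for k in after.keys() - before.keys()}
    removed = {k: before[k] for k in before.keys() - after.keys()}
    changed = {k: (before[k], after[k])
               for k in before.keys() & after.keys()
               if before[k] != after[k]}
    return added, removed, changed

before = {"intl_zone1": 1.20, "intl_zone2": 1.85, "local": 0.30}
after = {"intl_zone1": 1.25, "intl_zone2": 1.85, "local": 0.30, "intl_zone3": 2.10}

added, removed, changed = diff_rate_tables(before, after)
print(added, removed, changed)
```

If the diff shows anything beyond the changes that were requested, the table does not go back into production.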
I understood proactive to mean those controls we put in place to eliminate root causes for leakages such as RA reviewing system change specifications to ensure that existing controls/reports/extracts are not affected during the development life cycle of new products/services; RA being involved in the user acceptance testing of new systems and products/services launched; RA reviewing marketing plans and agreements with dealers or content providers. I don’t believe RA is responsible for margins but I have seen agreements with loopholes which resulted in the operator being exploited by the dealer or content provider. RA plays an objective role there and looks at the contract from a perspective that an operational person would not. This would not be active RA as the revenue generating activities (sales of new content) have not commenced yet.
Proactive could also mean doing additional checks in case you have missed something in the reactive or active controls. That implies, as the maturity model also indicates, that a certain level of maturity must have been achieved in the organisation. You can hardly say that you do proactive RA if you do not have adequate reactive and active controls in place. The ideal is in the mix of controls, but what does that look like exactly?
Is there a different way than the time relation to assurance to differentiate a reactive from a proactive control? Are there certain types of controls or techniques that are only suited to proactive RA? Good question.
I had exactly the same concern back when I was editing the first edition of the TMF TR131 Revenue Assurance Overview. The second edition will be released soon, but it keeps the same answer to your question. We also defined proactive, active and reactive based on time, which makes sense given what the words normally mean. There are two relevant points in time for categorizing revenue assurance activities: (1) when the error/fault/mistake that is the root cause of a leakage takes place, and (2) when the consequential leakage takes place. I know those definitions are quite simple, and you can further debate precisely what is meant by the root cause or the leakage, but I think these are good starting points for any universal definition. If you base your definitions on these two points in time, the meaning of proactive, active and reactive is straightforward. Reactive applies to activities that take place after the consequential leakage, such as checking a sample of real customer bills for evidence of errors. Proactive applies to activities that take place before the root cause occurs, which is the same as saying it is preventative and tries to stop mistakes before they happen. This would include activities like design and risk reviews, and pre-launch testing. Active applies to activities that take place after the root cause but before the consequence, so it is more likely to be relevant to postpaid services than prepaid services. An active test might be to look for call records that are falling into error logs on a daily basis. If identified and reprocessed quickly enough, they may still be included in the normal bill cycle, thus averting the leakage, although this approach does not address the root cause of why the records fell into the error log.
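That active control, scanning the error log daily and flagging records still young enough to make the normal bill run, could be sketched as follows. The field names, record structure and the 30-day bill cycle are assumptions, not anything defined in TR131:

```python
# Sketch of the "active" control described above: pull records that fell into
# the rating error log and flag those still within the current bill cycle for
# reprocessing. Field names and the cycle length are illustrative assumptions.
from datetime import date, timedelta

BILL_CYCLE_DAYS = 30  # assumed cycle length

def records_to_reprocess(error_log, today):
    """Return error-logged records young enough to make the normal bill run."""
    cutoff = today - timedelta(days=BILL_CYCLE_DAYS)
    return [r for r in error_log if r["event_date"] >= cutoff]

error_log = [
    {"cdr_id": "A1", "event_date": date(2024, 1, 10), "error": "unknown tariff"},
    {"cdr_id": "A2", "event_date": date(2023, 11, 2), "error": "bad duration"},
]
print(records_to_reprocess(error_log, today=date(2024, 1, 15)))
```

As noted above, this averts the leakage for individual records but does nothing about the root cause of why they fell into the error log in the first place.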
One of the main reasons for distinguishing proactive, active and reactive in this way was to recognize that telcos may adopt different approaches to improving the returns from RA. One approach would be to move activities from reactive to active – they focus on improved data analysis and monitoring that seeks to identify and address leaks before the leaks affect the business. An alternative and very different approach would be to move from reactive to proactive. This would involve taking the lessons learned from past failure and using the knowledge to anticipate the causes of leakage before they occur. The aim there is to implement new technology and products in such a way that the risk of errors is minimized.
The proactive-active-reactive definitions in TR131 fit well with accounting and controls terminology. Controls might be preventative or detective in nature. However, we were noticing that many people were using the phrase “proactive” to mean “faster detection”. Faster detection may be a good option in many circumstances, but using the word proactive this way blinds people to the alternative to detection, which is prevention. After all, some say “prevention is better than cure”. We were keen to avoid RA being seen as just being about detective controls. In the worst case, the RA department spends all its time dealing with symptoms, detecting individual errors and fixing them one by one, but spends no time on the cure, which is addressing the root causes of errors. Identifying and dealing with root causes signified by detection is not that different to trying to anticipate likely root causes, and an RA department that becomes good at one should be able to translate those skills so it can also act in a preventative mode. For example, if you detect errors and the root cause lies in reference data, the learning experience should also lead you to anticipate how to avoid problems with reference data in future. The majority of telcos do treat RA as being only a detective control function, but a significant minority run it as a preventative control function, or as a blend of both. I personally have a strong preference for a blended approach, as I believe that there will be some errors that are easier and cheaper to detect than prevent, and vice versa. Taking a “belt and braces” approach, using both preventative and detective controls for all kinds of errors, is the safest way to minimize leakage. It also helps to ensure RA spends its money where it is most effective. There is less need for expensive monitoring systems if you can prevent errors, and monitoring helps to ensure that your preventative tasks really do work, by capturing the cases where they do not.
In fact, I advocate what we called the PAR model for doing revenue assurance, where for each kind of leakage you identify and implement a combination of proactive, active and reactive controls. I have already argued for implementing both proactive and active controls. I also advocate using reactive controls as a final confirmation that the revenue assurance is working and is comprehensive. A preventative control cannot easily measure the benefits it delivers. An active control can measure the benefits it delivers, but the price of its increased speed is usually a narrower scope. A reactive control, like independently recreating bills and checking them against what was sent out, may have a very broad scope and be relatively cheap. It may also find errors that slipped past both the proactive and active controls. This provides the vital feedback necessary for prioritizing investment in further RA improvements, and also increases the reliability and completeness of RA measures of performance.
I have held two roles that were termed “Proactive” Revenue Assurance. The first role I held was called RA Development Consultant, and it basically required me to fully understand the data flows and system inter-dependencies throughout the operator’s infrastructure. The idea behind the role was that I sat on every new product/service project team and identified any flaws in the requirements/design that would lead to a revenue loss situation. In addition, I would also feed in any requirements the operational RA team had for data feeds to produce new reconciliations.
The second role I held followed on from the first and had the title RA Testing Consultant. This role required me to proactively test new products and services pre-release to ensure that no new revenue leakages occurred. In addition, I also had to ensure that all testing departments conducted full and accurate testing, i.e. that full integration testing took place.
Both of these roles enabled the operator to reduce their revenue loss exposure from new products and services. In my experience, OSS and BSS are fairly stable when first introduced; it’s only when new products and services come along and these systems are changed that significant revenue losses start to occur.
It’s good to hear of other operators who consider such an approach. I had a few questions regarding the activities performed in each role.
In the RA Testing Consultant role, what was your test bed for new products? I see that you mention that it required RA analysis pre-release. Does this involve comprehensive data analysis on the basis of perhaps a TCG, or maybe roll-out of the service in a test area? Or does it refer to validation of process flows, business rule analysis, etc.?
I am a supporter of the RA Testing Consultant role, as it would fit in nicely with my preferred approach to proactive RA (i.e. the test-bed approach). But I’m trying to understand the scope of such an activity from your experience. I wanted to understand if you felt that the testing was comprehensive, or if there was something that could be done to improve upon it (as inevitably there would be, given that the scope of RA is vast).
I fully agree with you that most RA issues crop up when we’re introducing new services into an existing OSS/BSS stack. However, there are RA checks and KPIs that we can introduce right from the start. Basic checks like data integrity and file/CDR generation correctness, checks for garbled or missing data, etc., have yielded benefits to the telco even in a new startup scenario.
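As a minimal sketch of the kind of day-one integrity check mentioned above, a field-level validation of raw CDRs might look like the following. The field names and the specific rules are illustrative assumptions, since mandatory fields vary by switch and format:

```python
# Sketch: basic integrity checks on raw CDR fields (missing or garbled data),
# of the kind that can run from day one. Field names are illustrative.

def validate_cdr(cdr):
    """Return a list of integrity problems found in a single CDR dict."""
    problems = []
    for field in ("calling_number", "called_number", "start_time", "duration"):
        if not cdr.get(field):
            problems.append(f"missing {field}")
    if cdr.get("duration"):
        try:
            if int(cdr["duration"]) < 0:
                problems.append("negative duration")
        except ValueError:
            problems.append("garbled duration")
    return problems

cdr = {"calling_number": "27820000001", "called_number": "",
       "start_time": "2024-01-01T10:00:00", "duration": "x3"}
print(validate_cdr(cdr))
```

Feeding the per-file failure rate into an alarm threshold gives exactly the sort of early-warning KPI that is cheap to set up before any service launches.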
Furthermore, ensuring that we have alarms set up right from the start will reduce pain areas when we introduce changes. Would you agree?
To answer your questions could take a full blog post in itself. So I’ll try to do that at a later date. In short though:
Generally speaking, I would try to borrow the integrated test environment specific to the new product/service. I was working for a large operator who could afford a fully duplicated production environment where new releases were tested.
RA Analysis Pre Release:
By this I refer to performing a one-off reconciliation, i.e. take data from the OSS/BSS that changes were being made to and perform reconciliations pre-release. This would then help the operational RA team to have reconciliations in place for the day of release.
Scope of Testing Consultant Role
Due to the number of new products being released by the operator I was working for, only a handful got access to the integrated test environment. Therefore the scope of my role quite often required me to ensure some form of integration testing took place, rather than testing the service myself, i.e. once system testing of element A was completed, ensure that its output is passed to the team in charge of system testing element B, and so on.
Further to this, for any project being performed, the RA development team would perform a risk analysis against all projects, and for those that were deemed RA-critical I would take the role of overall test manager, e.g. a billing system replacement, the launch of GPRS, etc.
Where there is a big system change or the introduction of a new product, quite often your existing RA solutions also need to be changed. By ensuring you are involved with all new developments within the company, you can ensure that your RA solution becomes a dependency on the final delivery. For smaller projects, your existing RA solutions can be effective; the only risk is that they may only spot issues on the day of release of a new service, by which point there is no turning back and revenue loss has already started to occur. The best bet is to always try to get data pre-release.
Hope this is of some help. If you would like a more in-depth discussion, please feel free to email me directly at email@example.com.