Raw Data, Workflows and Mr. RA Analyst

It’s been a while since my last post, and the reason is that I’ve been quite busy recently with some interesting developments in Revenue Assurance. I have started to notice a significant shift in operators’ attitudes toward leakage detection and correction.

There used to be a time (long, long ago in a Galaxy far, far away…) when the most important item for most RA teams was a set of crisp, clean dashboards that told them, “Hey Mr. RA Analyst, I’m working on dimensions and measures which tell me that you have 0.3511% leakage in product SuperSaver199 between mediation and billing.” That used to suffice: the RA analyst went forth armed with “0.3511%”, “SuperSaver199” and “Mediation vs Billing”, and reported the same to his network team. Naturally, Mr. Network Guy wanted a sample set of records so that his team could go about plugging the leakage. Unfortunately, Mr. RA Analyst worked with dimensions, measures and dashboards only… so he set forth on another activity (a quest for the Leakage Grail), trying to pull out the raw network records that corroborate his claim. While he proceeded on his quest for data, the leakage continued unabated. Furthermore, his view was a bit myopic: the leakage might have been analyzed from a data-transfer angle, but might not have been investigated from an “impact to customer” angle.

What I was trying to illustrate in the fictional (but all too familiar) anecdote above is the need for RAW DATA and established workflows. I am a great believer in getting down to the raw data, validating the actual data flows between interworking systems, and checking them against the expected business process flow. Recently, I have been interacting with operators who share the same view about working with raw data, as opposed to running an RA department on pretty dashboards and metrics from data warehouses.
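To make the idea concrete, here is a minimal sketch of what a raw-data reconciliation between mediation output and billed records might look like. It is illustrative only: the field names (record_id, charged_amount) and the CSV inputs are assumptions, not any operator’s real schema or any particular RA tool.

```python
# Illustrative sketch: reconcile raw mediation records against billed records.
# Field names (record_id, charged_amount) are assumed for the example only.

import csv

def load_records(path, key="record_id"):
    """Load raw records into a dict keyed by the record identifier."""
    with open(path, newline="") as f:
        return {row[key]: row for row in csv.DictReader(f)}

def reconcile(mediation_path, billing_path):
    mediation = load_records(mediation_path)
    billing = load_records(billing_path)

    # Records that left mediation but never made it to billing.
    missing_in_billing = [mediation[k] for k in mediation.keys() - billing.keys()]

    # Records present in both systems but with differing charges.
    mismatched = [
        (mediation[k], billing[k])
        for k in mediation.keys() & billing.keys()
        if mediation[k]["charged_amount"] != billing[k]["charged_amount"]
    ]

    # These discrepant records, not a single percentage on a dashboard,
    # are what the network team can actually act on.
    return missing_in_billing, mismatched
```

The point of working at this level is that the output is a set of sample records the network team asked for in the first place, rather than a figure like “0.3511%” that still has to be traced back to raw data.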

There is a visible paradigm shift in the way operators are setting up RA processes in the Asia Pacific region. The business process of leakage reporting has matured significantly: every issue needs to be reported with the associated “Proof From Network”, which usually consists of the discrepant/mismatched records. The network and IT teams even tell the RA team which fields they expect in the report for their cross-verification. I published a post earlier discussing the importance of leakage reporting AND tracking. We are now seeing cross-functional teams tasked with ensuring that issues raised are closed in a timely manner. When I say cross-functional, I’m talking about workflows that involve a first level of investigation by the RA team, impact analysis by a finance function, technical cross-verification by IT, corrective action by a network task force, and rectification corroboration by RA and IT (a rough sketch of such a workflow follows below). I like the approach primarily because it reflects a shift in the way RA is viewed: it is now more than just a “Finger-Pointer”.
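Below is a hedged sketch of how such a cross-functional issue-tracking workflow could be modelled. The stage names mirror the paragraph above; everything else (class names, fields, the idea of an audit trail) is an assumption for illustration, not a description of any operator’s actual tooling.

```python
# Illustrative model of a cross-functional leakage-tracking workflow.
# Stage names follow the text; class and field names are assumptions.

from dataclasses import dataclass, field
from datetime import date
from enum import Enum, auto

class Stage(Enum):
    RA_INVESTIGATION = auto()
    FINANCE_IMPACT_ANALYSIS = auto()
    IT_CROSS_VERIFICATION = auto()
    NETWORK_CORRECTIVE_ACTION = auto()
    RA_IT_CORROBORATION = auto()
    CLOSED = auto()

@dataclass
class LeakageIssue:
    issue_id: str
    description: str
    proof_records: list                      # discrepant/mismatched raw records
    raised_on: date
    stage: Stage = Stage.RA_INVESTIGATION
    history: list = field(default_factory=list)

    def advance(self, next_stage: Stage, owner: str, note: str = ""):
        """Move the issue to the next stage, keeping an audit trail for tracking."""
        self.history.append((self.stage, owner, note, date.today()))
        self.stage = next_stage
```

The audit trail is the part that matters here: it is what lets a cross-functional team demonstrate that each issue was not just reported, but tracked through to closure.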

The emphasis on being able to “drill down” to the raw data is critical to the success of an RA function, because here we are questioning the fundamentals of the underlying data, its behaviour across various complex network systems, and the impact from technical, financial, customer and service angles. The natural evolution of a multi-impact view of leakage analysis is growth from mere system health-check validation to something like Customer Centric Revenue Assurance. I read an interesting article recently about the audit process at Verizon. The RA team at Verizon has been working on some interesting approaches to RA, and at the link below Kathy Romano of Verizon (Head of the RA function) talks about the importance of solid workflows and questioning the underlying data.

http://www.technology-research.com/experts/romano1.php

Ashwin Menon
Ashwin Menon is the Head of Product at Subex. He has also been a consultant, and he began his foray into revenue assurance as an on-site implementation engineer.