In the course of my posts, I will try to challenge the accepted norms of Revenue Assurance. The primary driving force behind this is to question the way things are currently done, so that we can analyse current practice and arrive at better methods. The following is a discussion that Eric and I were debating earlier.
One common thing I have noticed in RA implementations is that the telco tends to gravitate towards accuracy of transference (xDRs from one network element being recorded in the corresponding downstream system) as the crux of revenue assurance. In my opinion, transference is simply a symptom rather than the disease itself (e.g. in usage data, a voice CDR in the MSC not being present in the downstream mediation platform might be caused by incorrect provisioning of a postpaid customer, where the CDR stamping at the MSC defines the call as prepaid). In accordance with the Drip-Tray model I can see the validity of performing transference checks, but I feel that Revenue Assurance as a discipline should begin to recognise the benefits of performing atomic-level checks instead of the macro-diagnosis model that is currently prevalent.
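To make the transference idea concrete, here is a minimal sketch of what such a completeness check boils down to: comparing the CDR identifiers seen at the upstream element against those recorded downstream. The field name `cdr_id` and the sample records are my own illustrative assumptions, not any particular vendor's schema.

```python
# Hypothetical transference (completeness) check: which CDRs seen at
# the MSC never arrived in the downstream mediation platform?
# All field names and sample data are illustrative assumptions.

def transference_gap(msc_cdrs, mediation_cdrs, key="cdr_id"):
    """Return CDR keys present at the MSC but missing downstream."""
    upstream = {cdr[key] for cdr in msc_cdrs}
    downstream = {cdr[key] for cdr in mediation_cdrs}
    return upstream - downstream

msc = [{"cdr_id": "A1"}, {"cdr_id": "A2"}, {"cdr_id": "A3"}]
mediation = [{"cdr_id": "A1"}, {"cdr_id": "A3"}]
print(transference_gap(msc, mediation))  # {'A2'}
```

Note that a check like this tells you only *that* A2 went missing, not *why* — which is exactly the symptom-versus-disease distinction above.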
What am I suggesting here? Simply put: check the health of the underlying system before we check its output. A simple example would be rating. In my opinion, there are two ways to perform checks on the rating engine. First, we can re-rate some sample xDRs and check whether the system rating is in line. Secondly, we could instead perform a reconciliation on the underlying rating tables, which form a critical part of the rating engine. As anyone involved in a rating function in a telco will tell you, maintaining a separate parallel rating framework to re-rate xDRs is a massive task. It would be so much simpler (and more cost-efficient) to validate the rating structure itself within the rating engine.
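The second option above can be sketched in a few lines: rather than re-rating xDRs, diff the tariff table loaded in the rating engine against the approved reference price plan. The destination names and rates below are invented for illustration; a real tariff table would of course carry many more dimensions (time bands, plans, units).

```python
# Minimal sketch of an atomic-level check: reconcile the rating tables
# themselves instead of re-rating sample xDRs. Destinations and rates
# are assumed example data.

def reconcile_tariffs(reference, engine):
    """Return {destination: (reference_rate, engine_rate)} for mismatches."""
    issues = {}
    for dest, ref_rate in reference.items():
        eng_rate = engine.get(dest)
        if eng_rate != ref_rate:
            issues[dest] = (ref_rate, eng_rate)
    # Tariffs configured in the engine but absent from the reference plan
    for dest in engine.keys() - reference.keys():
        issues[dest] = (None, engine[dest])
    return issues

reference = {"UK-mobile": 0.15, "UK-fixed": 0.05}
engine = {"UK-mobile": 0.18, "UK-fixed": 0.05, "Premium": 1.50}
print(reconcile_tariffs(reference, engine))
# {'UK-mobile': (0.15, 0.18), 'Premium': (None, 1.5)}
```

One drifted rate here would silently mis-rate every call to that destination, which is why validating the table once is cheaper than re-rating millions of its outputs.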
Macro-diagnosis does have its benefits (summaries help to give a bird’s-eye view of overall health), but I believe in the intrinsic value of performing atomic-level checks, as they are both more cost-efficient and more useful for root-cause analysis.
CDR counting between systems does tend to be a popular RA activity, I agree. Once a team starts down the reconciliation route, it can be very amusing observing how deep telcos want to go. Have you ever seen an operator attempt a CDR-level reconciliation? Grabbing the low-hanging fruit first is the intelligent approach.
Well Matt, interestingly enough, most operators would LOVE a CDR-level reconciliation approach… for all the millions of CDRs in their network!
The issue is not with supplying the solution. Yes, CDR-level reconciliation can be done. The problem is intelligently handling the huge output and identifying discrepancies within it.
Typically, an RA department in these parts of the world consists of 20 people. The ideal RA solution is one that follows the 80-20 approach. Even then, the solution should provide a strong workflow that ensures every identified issue is addressed and that the steps taken to close it are recorded.
Imagine trying to address 20 million CDRs’ worth of discrepancies! It would essentially mean trying to close 20 million cases, which in itself would make the RA analyst wonder what he did to deserve such punishment. A more feasible approach is to understand the underlying pattern (root-cause analysis).
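The pattern-based approach can be illustrated simply: instead of raising one case per discrepant CDR, group the discrepancies by shared attributes so that each distinct pattern becomes a single candidate root cause. The grouping fields (`switch`, `call_type`, `error`) and the sample records are purely hypothetical.

```python
# Illustrative sketch of root-cause grouping: collapse millions of
# individual discrepant CDRs into a short, ranked list of patterns.
# Field names and sample data are assumptions for the example.

from collections import Counter

def discrepancy_patterns(discrepancies, keys=("switch", "call_type", "error")):
    """Count discrepant CDRs per attribute pattern, biggest group first."""
    counts = Counter(tuple(d[k] for k in keys) for d in discrepancies)
    return counts.most_common()

discrepancies = [
    {"switch": "MSC01", "call_type": "prepaid", "error": "no_rate"},
    {"switch": "MSC01", "call_type": "prepaid", "error": "no_rate"},
    {"switch": "MSC02", "call_type": "postpaid", "error": "dup"},
]
for pattern, count in discrepancy_patterns(discrepancies):
    print(pattern, count)
```

Fixing the top pattern (here, unrated prepaid calls from MSC01) closes the bulk of the cases in one action — the 80-20 approach in practice.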
I believe that people who are accountable for, but not responsible for, the RA function in the organization put their faith in a CDR-level reconciliation as a good indication of the “completeness” of a control. If a completeness of 95% to 99%, even on a sample basis, can be reported to the audit committee or equivalent authority, management has survived another board meeting, and that is important in business.
Should these same people be responsible for performing such a reconciliation, it would be viewed differently. The effort that goes into such an exercise far outweighs the benefits as perceived from the lower ranks of employment. Add to this the operators who attempt to do it in MS Excel. Yes, been there, done that. Excel 2007 on steroids!
It would be interesting to get an auditor’s or CFO’s view on this question.
Assessing the underlying health of the system before putting effort into output analysis also serves a change-management objective. I have witnessed several occasions, at different operators, where business, IT and/or NWG would attribute missing usage to system processing errors or to “poor solution design and project implementation” of a new billing solution.
By addressing the perceived system errors through a formal system audit, you either find real funnies you were not aware of, or you remove the excuses for not taking ownership of accurate and complete data throughput.