I started writing a comment to Ashwin’s blog but as it was getting too long, I thought I should just start a blog.
From my perspective, there is little doubt that detailed analytics should be the primary activity of RA teams. Dashboards offer some value in identifying trends, but I suggest that any upward or downward movements in trend lines (such as the 0.3511% in Ashwin’s email) are almost impossible to justify as leakage. Let me explain my views:
First – what is the right baseline value to begin a dashboard with? Assume that every day for the last 3 years, 100 calls come into a mediation device and 70 come out. So you set a baseline at 70, and if this drops to 65 you decide to follow up with the mediation people. Great theory, but who says 70 is correct in the first place? It may well be that there is leakage in the 70 and that it should have been 80 all this time. The only way to know where to set a possible baseline, then, is by doing a detailed analytic.

So let’s now assume we did that and we know it should have been 80; we’ve fixed the mediation device and we start seeing 80 go through every day. Great stuff: we found 10 lost calls, fixed that issue (and hopefully got a pat on the back), and are now monitoring every day just in case this drops back to, say, 75, in which case we will be on the phone getting that problem fixed as well. Two weeks later the volume hits 70 and stays there for the next few days. You’re on the phone saying something is wrong, no-one believes you, and they ask you for the call detail to prove it. You do the analytic again and find that 70 is now the right volume: new products have been introduced and the mediation device treats them differently, or there was more of a particular call type in that period due to a marketing promotion, or… etc etc.

My point is that the reasons could be endless, and setting a good baseline dashboard value would only be possible in a stagnant, never-changing environment – not really well suited to telco then. And if, every time you called out “leakage!!!”, there was some other change behind it, you would be 1) busy trying to prove the leakage and 2) losing both your reputation for integrity and the time of the people you keep raising alarms to.
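To make the point concrete, here is a minimal sketch of the static-baseline alert described above. All the numbers and names are invented for illustration, not taken from any real RA system:

```python
# Hypothetical static-baseline check. BASELINE comes from a detailed
# analytic (the "should have been 80" finding); the threshold is invented.
BASELINE = 80
ALERT_THRESHOLD = 0.95  # flag any day below 95% of baseline

def check_day(calls_out: int) -> str:
    """Flag a day as possible leakage if volume falls below the threshold."""
    if calls_out < BASELINE * ALERT_THRESHOLD:
        return "ALERT: possible leakage"
    return "OK"

# Two weeks later a new product mix makes 70 the genuinely correct daily
# volume -- yet the dashboard fires every single day, with nothing leaking.
for volume in [80, 80, 79, 70, 70, 70]:
    print(volume, check_day(volume))
```

The rule itself is doing exactly what it was built to do; it simply has no way of knowing the baseline moved underneath it.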
Second – many revenue leakages sit in the less-than-1% bracket. Sure, there are some massive ones, and these are the ones we talk about at conferences or hear of in training, but the normal reality is that most are so small relative to the revenue pool that a dashboard would find them impossible to discern from other activity (similar to my point above). If you have a $100M product, then finding $1M is a good result – but finding it is likely going to need detailed analytics.
Third – only with some detailed data can you go back to the business area and tell them specifically what to consider addressing. Analysis may even tell them specifically what to fix. It can also tell you the extent of the issue, making quantification of the total impact more easily achieved, and that helps drive the prioritisation of resources to address it, relative to the many other competing demands across the company for money and people’s time.
Lastly (for now anyway) – the cost and effort to maintain an RA dashboard that adapts to all the business rules and changes that go on in a business would be very high. In fact, I would contend it would be almost as high as putting in the operational system changes it is meant to monitor. Why? Keeping it current with all the network and IT changes going on would not be insignificant; add in all the pricing changes, campaigns and what the competition is doing, and modelling how people will behave becomes practically impossible. To make the point: how many times have I heard stories of RA managers raising alarms at a 50% drop in traffic over a 2-hour period, only to later realise that the nation was absorbed by a football match (which the RA manager was probably watching on TV as well).
Do I think there is value in RA dashboards then? Well, yes but they have to have some key attributes:
1) data presented has to be summarised correctly from accurate underlying data (avoid garbage in – garbage out)
2) “alerts” have to drive an action (e.g. do analytic, phone billing)
3) more true positive alerts are generated than false alerts (the challenge of our fraud system friends as well)
4) data needs to be delivered in a timely manner (no point finding a leakage after everyone else has)
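Attribute 3) is worth a small sketch. One common way to keep true positives ahead of false ones is to suppress one-off blips and only alert when a drop persists; the window size and tolerance below are invented parameters, not a recommendation:

```python
# Hypothetical persistence rule: only alert when the drop lasts `days` days.
def persistent_drop(volumes: list, baseline: int,
                    tolerance: float = 0.05, days: int = 3) -> bool:
    """True if the last `days` daily volumes all sit below
    baseline * (1 - tolerance)."""
    if len(volumes) < days:
        return False
    floor = baseline * (1 - tolerance)
    return all(v < floor for v in volumes[-days:])

# A single bad afternoon (the football match) does not fire;
# a sustained drop does.
print(persistent_drop([80, 80, 40, 80], baseline=80))  # one-off blip
print(persistent_drop([80, 70, 70, 70], baseline=80))  # sustained drop
```

Note the trade-off against attribute 4): waiting three days to confirm a drop is exactly the kind of timeliness cost that has to be weighed case by case.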
Putting that into play, though, requires time, thought and effort – and that effort could instead be spent on doing a detailed analytic. Getting the balance right is the challenge!!!