It’s been quite some time since my last post on Commsrisk and so much has changed. Though I now spend most of my time in the world of data and analytics, this hasn’t tempered my interest in the worlds of telco risk and RA, so neatly knitted together on this site.
The area that always seemed elusive to me was prevention of leakage, and how it could be done effectively.
My early approaches to prevention would be better described as early detection. The RA team would be tasked with testing new products or services soon after launch. This is not ideal for a business used to expecting significant revenue uplift from RA, even if a strong case can be made that the lifetime benefits of early detection outweigh waiting for a leakage to grow large. Testing early also has a drawback: customers are not yet making full use of the services, so the sample tested will not represent everything that the services make possible.
After those early efforts I shifted towards pre-launch engagement with product and pricing managers to understand the constructs being developed, assess the requirements and look at the design before recommending controls to be developed. Of course, there are trade-offs between the speed to deploy and the cost of controls, but this was often of value.
Now, though, I might approach this differently, a perspective brought about by two observations from the world of data.
Firstly, I would accept that revenue errors (whether overcharges or leakages) are inevitable. I may not know where and I may not know when, but I would develop a capability to respond quickly, in a way that is proportionate to the loss. This response would be defined ahead of any identified loss, and while it might draw on a general framework to shape it, it would also be customised, documented and acknowledged across the business. Where did I steal this idea from? Watching organisations that acknowledge data and privacy breaches will affect them, and know they need to respond with speed to protect revenue and reputation.
Secondly, I would look at some of the changes underway in how IT solutions are delivered and consider how to embed RA thinking into them. I would look at how IT is breaking monolithic applications into microservices, understand how functionality is being modularised, and consider how RA microservices could be embedded within transactional flows as control points. I would look at how organisations are driving delivery velocity through the adoption of DevOps, recognising that this is as much a cultural change as a technical one. And I would look at how automated testing accelerates code from development to production, and assess how to embed automated RA test cases that are triggered every time code is checked in. In essence, I would want to drive better code quality at the point of development, not at deployment.
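To make the last idea concrete, here is a minimal sketch of what an automated RA test case might look like. The tariff values and the rate_call() function are entirely illustrative assumptions, not any real operator's rating logic; the point is that checks like these could sit in a test suite that a CI pipeline runs on every check-in, catching a rating error before it ever reaches production.

```python
from decimal import Decimal

# Hypothetical tariff: per-second billing at a per-minute rate,
# with a 60-second minimum charge. Illustrative values only.
PER_MINUTE_RATE = Decimal("0.10")
MINIMUM_BILLED_SECONDS = 60

def rate_call(duration_seconds: int) -> Decimal:
    """Rate a voice call: bill per second, subject to a minimum duration."""
    billed = max(duration_seconds, MINIMUM_BILLED_SECONDS)
    return (PER_MINUTE_RATE / 60 * billed).quantize(Decimal("0.0001"))

# Automated RA test cases, intended to run on every code check-in
# (e.g. from a CI pipeline) rather than after deployment.
def test_minimum_charge_applied():
    # A 10-second call must be billed the same as a 60-second call.
    assert rate_call(10) == rate_call(60)

def test_no_free_usage():
    # Any positive duration must produce a positive charge (no leakage).
    assert rate_call(1) > Decimal("0")

def test_rate_matches_published_tariff():
    # A 120-second call at 0.10 per minute should cost exactly 0.20.
    assert rate_call(120) == Decimal("0.2000")

if __name__ == "__main__":
    test_minimum_charge_applied()
    test_no_free_usage()
    test_rate_matches_published_tariff()
    print("all RA checks passed")
```

The same tests guard against overcharging and leakage alike: if a developer later changes the rating logic and breaks the minimum-charge rule, the check-in fails immediately, rather than the error being discovered in a post-launch audit.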
Many words, my own included, have been written proposing ways forward for RA practitioners – add this to the mix.