10 RAFM Issues: Will AI Help You or Replace You?

This is the penultimate post of a 10-week ‘reverse survey’ which seeks to rank the importance of some major revenue assurance and fraud management challenges by seeing which of 10 different articles generates the most interest. This week’s article is about artificial intelligence, or more specifically the rise in the usefulness and popularity of deep learning. But instead of focusing on the technology, this piece concentrates on the implications for human workers. In short, will this technology empower employees working in RAFM, or will it take over their jobs?

Finding patterns

I always find it a shame when people act like RAFM has no creative component. Such attitudes are manifest when practitioners demand a manual describing how to do their job, a checklist of the ‘top’ losses they should look for, or any long prescriptive list of tasks they should routinely perform. Such documentation can be very helpful, but it will never be exhaustive. Even a car mechanic needs imagination; when a car breaks down the mechanic must imagine what could have caused its failure. Their first guess may be wrong, so they will have to guess again. Creative thought is an advantage that humans have over machines. If we minimize the creative element of RAFM then we are creating work for machines, or for people who behave like machines.

There is a strong drive in RAFM towards systematizing work in a way that makes it more repetitive. This has advantages. Repetitious tasks are easier to measure and monitor, the act of repetition can lead to systematic improvement, and it is easier to ensure a consistent quality of performance for repetitious activities even when there is a change of personnel. Being systematic is fine, but there are limits to what it can achieve. It is easy to program a machine to perform in a perfectly predictable environment; humans have the relative advantage when there is a need to adapt to changing circumstances. A machine trained to understand my voice may not be able to understand yours. A robot designed to walk on a flat concrete surface may be tripped up by stairs or sand. But the technology of adaptation has improved greatly over time, and machines are increasingly good at learning by doing, just like humans. Machines are getting better at adapting, and better at coping with the unknown.

As soon as Donald Rumsfeld made his famous observation about known knowns and unknown unknowns I found myself repeating his quote to help my colleagues understand the challenges of risk and assurance. I even insisted that ‘unknown unknowns’ must be an explicit category in our leakage reports. Of course there was nothing to report under that category – we do not know what we do not know. The purpose of the category was to remind the reader that the rest of the report was incomplete, and we should never think otherwise. We monitored what we monitored, reported on what we knew about, and highlighted the gaps in our knowledge that we were conscious of. But there would always be reason to believe that there were other gaps in our knowledge which had not yet been considered. Tackling ignorance is part of our job, which is another way of saying that we must keep finding ways to learn.

By always reporting that we had unknown unknowns of indeterminate value we were communicating to executives the expectation that our assurance team must always spend some time and effort exploring the possibility of leakages that nobody had thought of before. We also set a foundation which permitted leakage to go up – because we had newly identified a cause of leakage – without that being seen as a failure of our previous work. In short, understanding the significance of unknown unknowns is necessary for RAFM work to be adaptable and expansive, ready to cope with change and able to close previously ignored gaps in our knowledge. Being aware of our own ignorance is a vital component of being versatile and resilient. We do not know what we do not know, but we do know we must anticipate our own ignorance and devise strategies to reduce it.

Humans always had the advantage when it came to dealing with ignorance. We have imagination, whilst a computer program written in an imperative language can only follow the fixed logical pathways that have been set for it. Deep learning changes that. If computers can see new patterns, and respond accordingly, they also have the ability to expand their field of activity, and to reduce their ignorance. That is why they call it ‘learning’. Searching through great swathes of data at tremendous speed, we have affordable computers that are increasingly capable of turning unknown unknowns into known unknowns, and then perhaps into known knowns. They can ‘see’ the pattern of a new fraud, or observe a data anomaly indicative of a leakage that nobody ever monitored before. Consider the possible impact on my old leakage report. Previously a human being had to imagine a leakage for it to be included in the spreadsheet that was used to capture and present the numbers. A human being had to think of what could go wrong, think of a name for it, and manually add a new row to the spreadsheet. Deep learning is a technology which could accomplish the same task. And because computers can absorb and process much more data than people can, deep learning may identify leakages that no human could.
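Deep learning proper is beyond the scope of a short sketch, but the underlying idea described above – a system flagging records that deviate from the patterns it has learned, without a human specifying what to look for – can be illustrated with a much simpler statistical detector. The function and the sample figures below are entirely hypothetical, invented for illustration; real RAFM tooling would work on far richer data and far more sophisticated models.

```python
from statistics import mean, stdev

def flag_anomalies(values, threshold=2.5):
    """Return the indices of values lying more than `threshold`
    standard deviations from the mean of the series."""
    if len(values) < 2:
        return []
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []  # no variation, nothing can stand out
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# Hypothetical daily gap between rated and billed revenue (in $m).
# Nobody defined a control for day 6; the data itself flags it.
daily_gap = [0.1, 0.2, 0.15, 0.1, 0.18, 0.12, 9.5, 0.14, 0.16, 0.11]
print(flag_anomalies(daily_gap))  # → [6]
```

The point of the sketch is the division of labor: no human had to imagine this particular leakage or add a row to a spreadsheet for it in advance. A learning system generalizes the same principle to patterns no simple z-score could capture.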

A career choice

I admit to bias when writing this article. In my own career, I always favored creativity over systematization. Other experts like Gadi Solotorevsky always wanted to solve problems by creating an algorithm that a machine could execute repetitively, not least because his employers sold software. Often he would deliberately define problems to exclude any unknown element which could not be systematized. In short, he skewed the entire program of the TM Forum towards problems which are best solved by machines, and away from the development of human beings and their careers. In his worldview, it would be foolish to include ‘unknown unknowns’ on the leakage report. He was not alone. Many want a job where they are given a manual, and told to follow it, without needing to make choices or be creative. But I always thought those people were making bad bets, in terms of the longevity of their careers. The irony was that they wanted jobs made possible by technology, but they showed little awareness of where technology was headed.

A lucky few might get promoted so quickly that they became senior managers – positions divorced from the specific tasks that most employees perform and which are essential to delivering the business’ service or product – and so escaped the need to be creative before anybody noticed that top management also needs creativity if the business is to excel compared to rivals. The rest risked falling into a giant trap: creating systematic and repetitive jobs for themselves that would be made redundant as technology closed the gap on their skills, and then overtook them.

Systematization is a good thing, but it is not the hardest thing. The technology of deep learning has been coming for a long time; neural networks were hardly a new idea when I learned about them at university, over 20 years ago. The prospect of machines making people redundant was obvious, especially in realms where people are a little bit more adaptable than machines, but otherwise do repetitive tasks. Software is better at comprehending different voices than ever before, which is bad for call center staff who have been paid to read from pre-determined scripts. Taxi drivers may try to protect their income by waging a war against Uber, but they will be in real trouble when driverless cars become common. Similar analogies can be applied to the careers of those working in RAFM.

If you want a job where you push buttons on a system that somebody else developed, and fill in numbers on a leakage report with a fixed number of rows because nobody ever thinks it necessary to investigate and add new kinds of leakage, then your job could be performed by a machine in future. Worse still, if you have not been exercising the skills needed to expand the remit of assurance and risk mitigation, you might be overtaken by a machine which identifies frauds and leakages before you do. They say prevention is better than cure, and generally people are still better at preventing leakages than a machine will be, because people can anticipate the future without waiting for data to process. People also need to ask themselves what they are doing to prevent their career being sidelined by automation. If they define their job as detecting problems, or following a manual, then their career is at risk.

Big bets

Things change, and the pace of change keeps increasing. That means that we are all gambling on the direction our careers may take. Those bets tend to be very big. Technology may be a threat to some career choices, but there are other factors too. Governments can intervene to protect careers, much like they have done when protecting some taxi drivers from competition with Uber. On the other hand, many governments are less keen on protecting telcos from being ravaged by OTT providers. My personal choice was to bet against the RAFM career model promoted by Gadi Solotorevsky, Rob Mattison and others. I thought they were training taxi drivers whilst choosing to ignore all the investment into driverless cars. As time goes on, I like my bet more and more.

The evidence of my bet can be found across my career. My revenue assurance book was an anti-manual which focused on telling practitioners how to think for themselves (and was my attempt to provide an antidote to the dogma that all we needed was more and more ‘standards’, even though I had been involved in the writing of some of those standards). My LinkedIn profile says I am retired, and I do not want to be bothered with a conventional job. I spend my time doing more creative activities than I was allowed to do inside telcos, such as managing this website. To use the parlance of stock markets, I have chosen to bet against the investments that most people have made in their careers. But I am not betting against RAFM. I see a strong continuing need for advanced risk mitigation. I am not even betting against systems; deep learning will help us find patterns, and the best deep learning systems will deliver the best results. My bet is against the systematization of RAFM jobs, specifically those inside telcos.

Will the number of RAFM jobs decline? Possibly not. Just as governments can intervene to prop up a job market, there are many other factors to consider. Executives and RAFM department heads may prefer to retain staff even though they purchase technology to perform tasks that people did before. Partly the decisions will depend on employment law in their country and wage rates, but ego and loyalty may also influence management thinking. Managers tend to get crudely ranked by the number of staff working for them, which is one reason they always seek to employ more. Certainly it is possible that RAFM functions will buy deep learning systems and keep the staff whose work will be taken over by those systems. The staff may be moved sideways, to chase other kinds of leakage, or to perform different kinds of role. But here it is important to reflect that much will depend on the creativity of the manager arguing for the need for those people to do different jobs. Some of the people who preferred to limit the horizons of RAFM, for the sake of a steady job, may find their career is saved by somebody else showing imagination and arguing for an investment in roles that never existed before. Suddenly there may be an urgency to discover and tackle problems which were previously neglected because they languished in the category of unknown unknowns.

Deep learning will stir the employment pot for many working in RAFM. In companies that see no reason to expand the remit of their RAFM teams, and who want to reduce operating costs, jobs will be at risk. Other telcos will opt to redeploy their staff and redefine their roles, rather than making people redundant. And so, whether they choose to or not, people working in RAFM will need to adapt, and may find their changed roles demand more imagination than before.

Eric Priezkalns
Eric is the Editor of Commsrisk. Look here for more about the history of Commsrisk and the role played by Eric.

Eric is also the Chief Executive of the Risk & Assurance Group (RAG), a global association of professionals working in risk management and business assurance for communications providers.

Previously Eric was Director of Risk Management for Qatar Telecom and he has worked with Cable & Wireless, T‑Mobile, Sky, Worldcom and other telcos. He was lead author of Revenue Assurance: Expert Opinions for Communications Providers, published by CRC Press. He is a qualified chartered accountant, with degrees in information systems, and in mathematics and philosophy.