Can the ROC be Lean?

Does Toyota have the equivalent of a Revenue Operations Centre (ROC)? It is a difficult thing to imagine, but a pertinent question to ask. Toyota is the most cited example of a lean business. In telecoms, the ROC is a concept that has become associated with, and sometimes dominates, the field of revenue assurance and revenue management. The ROC is often described as a component of a lean telco. But is the idea of the ROC consistent with being lean?

The term ROC was devised by Subex in order to help them market their products, though it could equally well be used to describe the range of products offered by some of their rivals. The concept is analogous to the Network Operations Centre (NOC). In a recent article in the Indian Business Standard, Subex CEO Subash Menon described the ROC as:

a framework of systems and processes that enables ‘operational assurance’, helping the service provider understand the impact of operational processes and outcomes on profit

The further description given by Subex on their website and in their literature greatly clarifies the definition:

ROC is a centralized and integrated operations infrastructure that…

  • Monitors, controls & ensures integrity of the revenue chain and control over costs through continual automated tracking of KPIs
  • Provides tools to ensure that the impact of operations on profit is understood and managed proactively
  • Captures & delivers relevant and timely data for upstream analytics and planning systems, ensuring that decisions and initiatives continue to foster Operational Dexterity

In other words, if the NOC is “mission control” for the network, with lots of data and alarms about what is happening on the network, then the ROC is the equivalent for the telco’s revenues, using automated extraction and analysis of data to highlight where the issues are.

That explains the ROC. Now, what does it mean to be “lean”? The term was popularized by Professor James P. Womack and consultant Daniel T. Jones in their book, Lean Thinking. Their work was based on the Toyota model. You can find out more about lean principles from the website of the Lean Enterprise Institute. The motivation for being lean came from the shortages of raw materials faced by Japanese car manufacturers after the Second World War. If they could eliminate waste, they would greatly reduce their costs. Womack and Jones identified five principles:

  • Specify the value desired by the customer
  • Identify the value stream for each product providing that value and challenge all of the wasted steps (generally nine out of ten) currently necessary to provide it
  • Make the product flow continuously through the remaining, value-added steps
  • Introduce pull between all steps where continuous flow is possible
  • Manage toward perfection so that the number of steps and the amount of time and information needed to serve the customer continually falls

The idea is very popular, and has now been used in many other kinds of business.

In telecommunications, the organization that has most passionately promoted the concept of lean operations is the TM Forum. Keith Willetts, Chairman and Founder of the TMF, has even given tutorials on the application of lean thinking to telcos. In one such slide pack, Willetts explains the history of lean thinking and his vision for how lean operators will come to dominate. He also mentions the plugging of fraud and revenue leakages as a tactic to gather ‘low hanging fruit’ in the early days of transforming an operator to a lean approach. Despite the mention of leakages, Willetts’ advocacy of fraud prevention and revenue assurance as a tactic is inconsistent with the strategic emphasis that Subex places on the ROC. Can these distinct visions be reconciled?

Although lean thinking was originally concerned with reducing waste in manufacturing, it is not hard to see how it can be applied to revenue maximization. Not collecting all the revenues for services that have been provided can be seen as a kind of “waste”, in the same way that a leaking pipe represents a waste of water. Not only are revenues wasted, but effort is also wasted in processing data which is ultimately faulty and useless. Worse still, bad data may lead to bad decisions. Following the lean principles, it also follows that data is a waste if it serves no purpose.

However, this is where we may start to question the applicability of lean principles to telecommunications. Lean principles are most obviously applied to operations where there is a physical or tangible sense of wastage, but can they be so easily applied to operations where the product is data? A manufacturer may save a lot of money by reducing the wastage of raw materials. Saving on raw materials may significantly reduce unit costs and hence increase profits. In addition, wasted raw materials are not a useful source of information, other than as a measure of how much material has been wasted. Contrast this with data. Unlike physical materials, the costs of data are unlikely to be readily correlated, or directly variable, with output. Data has some cost, but it is not obvious whether reducing that cost will make a significant difference to profitability. Furthermore, it may not always be obvious what data is needed to manage the business. There may be good reasons to build processes that generate or retain data which is unlikely to confer any clear benefit. Going one step further, the principles of being lean are often associated with gathering and using data to identify where processes can be made leaner. This idea underpins continuous improvement. From this perspective, being lean is not consistent with being skimpy with data. This suggests that a data-driven ROC can be a valid element of lean operations.

One key way to tackle waste is to reduce and ultimately eliminate the need to reject finished units because they fail quality control. A finished product that fails quality control wastes both the raw materials and the expense that went into manufacturing the unit. The antithesis of a batch-driven quality check mentality is the idea of ‘right first time’. It is an easy mantra to repeat, but few in telecoms genuinely aspire to it. In many ways, the business case for the ROC is not to make processes leaner, but to execute batch quality control. Instead of inspecting the quality of manufactured units, the ROC is used to inspect, via automated checks and reconciliations, the quality of data. In this regard, the ROC is actually a reversion to the batch quality model that lean businesses like Toyota sought to replace. Their goal was to build in quality, so that no units would fail. To build in quality, you must break a process down into its elements, and make every element as simple and fool-proof as possible, so there are no rejects later on. The micro-analysis of processes needed to get things right first time runs counter to the mission control macro-overview delivered by the ROC. Can the two be reconciled?

To be fair to the ROC, the overused term ‘proactive’ is often meant to signify a movement away from batch checking towards continuous or real-time checking. However, the way in which this is achieved often imposes a limit on how far it can go. If a check takes data from one system and compares it to data in another system, or somehow reprocesses the same data, the check still takes place after the fact. The delay before a batch check is executed may get shorter and shorter. The batches may become smaller and more frequent, to the point where each datum is checked individually. But fundamentally the objective still involves identifying and rejecting failed data, rather than preventing failure before it occurs.
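The point is easier to see in concrete terms. The sketch below illustrates the kind of reconciliation check described above, comparing usage recorded by the network against usage that was billed. The record layouts and identifiers are invented for illustration, not taken from any real ROC product; what matters is that the check can only run once both systems have already produced their data, so it detects leakage rather than preventing it.

```python
# Illustrative sketch only: a minimal after-the-fact reconciliation
# between two systems. Record layouts are hypothetical.

def reconcile(switch_records, billing_records):
    """Compare usage captured by the network against usage billed.

    Both inputs map a call identifier to billable seconds. Returns the
    calls that were carried but never billed, and the calls where the
    two systems disagree about duration.
    """
    unbilled = [cid for cid in switch_records
                if cid not in billing_records]
    mismatched = [cid for cid, secs in switch_records.items()
                  if cid in billing_records
                  and billing_records[cid] != secs]
    return unbilled, mismatched

# Example data: three calls carried by the switch, two billed.
switched = {"call-001": 120, "call-002": 45, "call-003": 300}
billed = {"call-001": 120, "call-003": 240}

unbilled, mismatched = reconcile(switched, billed)
print(unbilled)    # call-002 was carried but never billed
print(mismatched)  # call-003 was billed for the wrong duration
```

However finely the batches are sliced, even down to checking each record as it arrives, both the unbilled call and the underbilled call have already leaked revenue by the time this code flags them.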

Reconciling the ROC with lean processes depends on what you see as the ultimate goal of the ROC. If the business case for the ROC is to identify flaws, and to fix those flaws, its mission is to execute retrospective quality control. This can be done in batches or not; all that matters is that the control always comes after the error takes place. The more flaws identified by the ROC, the better the return on the investment made in the ROC. The logical conclusion is that a ROC, when viewed in isolation, generates greater returns if processes are flawed, not if they are fit to deliver data that is right first time. Conversely, if processes do consistently deliver data that is right first time, then the business justification of the ROC, from a revenue assurance perspective, cannot be to improve the bottom line in any methodical way. The justification must be demonstration of quality for the sake of demonstrating quality. Demonstration of quality may give peace of mind, but will not lead to any financial reward.

In a business with a genuine right first time philosophy, and a successful approach to implementing it, it would be hard to justify the expenditure on a ROC. Whilst it might be useful to have an independent function confirm the quality of all processes, that is only true if you assume that only an independent function could confirm the quality of processes. Otherwise, there would be no real advantage in centralized checks of quality across all processes when you could distribute the checks and the building-in of quality. Collating the quality control data in one place only adds value if it is safe to assume that mistakes will be made, and that nobody will correct those mistakes without the intervention of a supervisory body. If checks were built into all processes, and people responded to them appropriately, or if the processes were designed to be error-proof, then the further supervision of the ROC would add nothing.

Where the ROC could add value to a lean operator would be in driving down error rates. However, the people employed in the ROC would ultimately be aiming to put themselves out of work, by driving the redesign of processes to avoid errors in future. The benefits of retrospective centralized quality control over streams of revenue and cost data could also be attained by implementing ever more elegant and detailed checks within processes, or better still by redesigning processes to eliminate the risk of error. Understood this way, the use of a ROC can be consistent with lean operations, but only when viewed as a medium-term tactical approach to identifying flaws, educating the business about quality and decreasing waste. The ROC can only be a strategic end goal if the business believes batch quality control and irregular rework of faulty data is more cost-effective than making the investment to design and implement processes that will be right first time. Whether it is cheaper to find faults and fix them, or to eliminate faults before they happen, is not a matter of dogma or philosophy; it should be decided by empirical observation. Sadly, we lack the public data to know decisively either way. But we can say that there are problems with trying to integrate the idea of a permanent and strategic ROC within the concept of a genuinely lean telecoms operator, in the way that a business like Toyota is lean. So the strategic question for telcos can be put another way: is it leaner to design processes to prevent errors, or to allow errors to take place and fix them when they do?

Eric Priezkalns
Eric is the Editor of Commsrisk. Look here for more about the history of Commsrisk and the role played by Eric.

Eric is also the Chief Executive of the Risk & Assurance Group (RAG), a global association of professionals working in risk management and business assurance for communications providers.

Previously Eric was Director of Risk Management for Qatar Telecom and he has worked with Cable & Wireless, T‑Mobile, Sky, Worldcom and other telcos. He was lead author of Revenue Assurance: Expert Opinions for Communications Providers, published by CRC Press. He is a qualified chartered accountant, with degrees in information systems, and in mathematics and philosophy.