How Do You Calculate the ROI of Fraud Systems?

A recent article at fraudscoop.com outlined a method to calculate the return on investment (ROI) of a fraud system. Or did it?

You should probably take a look at the article for yourself. There is a lot to admire about it. A rigorous approach is encouraged, involving proper discounting per the company’s weighted average cost of capital (WACC) and a comprehensive analysis of all the costs involved in acquiring and running a fraud system. All of this is greatly superior to some of the back-of-the-envelope calculations I have seen deployed by RAFM managers who dreamt of bigger budgets. It is true that you cannot sincerely argue the benefits will outweigh the costs unless you are prepared to be honest about all of the costs. However, I was dubious about the proposed method of determining the benefits. The proposed calculations did not lack detail; if anything, the flaw was that they would be too detailed.
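To illustrate the kind of calculation the article encourages, here is a minimal sketch of a discounted net benefit computation that uses the WACC as the discount rate. The function and every figure in it are my own illustrative assumptions; none of them are taken from the article.

# A minimal sketch of discounting projected costs and benefits at the WACC.
# All figures are hypothetical and purely illustrative.

def discounted_net_benefit(benefits, costs, wacc):
    """Sum the per-year (benefit - cost) cash flows, discounted at the WACC.

    benefits, costs: lists of annual amounts, index 0 = year 1.
    wacc: weighted average cost of capital as a decimal, e.g. 0.10 for 10%.
    """
    return sum(
        (b - c) / (1 + wacc) ** (year + 1)
        for year, (b, c) in enumerate(zip(benefits, costs))
    )

# Hypothetical licence, integration and staffing costs versus the fraud losses
# the system is assumed to avert over a three-year horizon.
costs = [400_000, 150_000, 150_000]
benefits = [250_000, 450_000, 500_000]
print(f"Discounted net benefit: {discounted_net_benefit(benefits, costs, wacc=0.10):,.0f}")

If the discounted net benefit is positive at the WACC then, in principle, the project clears the company’s hurdle rate; the hard part, as discussed below, is where the benefit figures come from.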

Consider the following excerpt, which focuses on the benefits delivered.

“The returns of fraud detection systems depends on the amount of cases investigated, and the fraction of those that are effectively fraudulent. Remark that this fraction is a property of fraud detection systems, and depends on the power of the system to detect fraudulent cases.

“The optimal amount of resources to allocate to fraud investigation and as such the sample to investigate is defined as the amount of resources that maximized the total utility associated with inspecting a sample. This sample can be selected either as a top-fraction of most suspicious cases with the highest scores assigned by a detection model, or as a top-fraction of the cases with the highest expected fraud amount which is defined as the probability to be fraudulent times the estimated fraud amount.

“The utility of different outcomes is expressed as a net monetary value, either positive or negative, representing the costs and benefits to an organization (of any nature, both economic and non-economic, yet always expressed in monetary units) associated with the decision to inspect or not to inspect either a fraudulent or non-fraudulent case.”
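Before I explain my reservations, it may help to make the quoted approach concrete. What follows is a minimal sketch of the selection rule described above: rank cases by expected fraud amount (the probability of fraud multiplied by the estimated fraud amount), inspect a top fraction, and net off a cost per investigation. The probabilities, amounts and costs are hypothetical assumptions of mine, not figures taken from the article.

# A minimal sketch of selecting a top fraction of cases by expected fraud
# amount and estimating the net utility of inspecting them. All inputs are
# hypothetical.

def expected_utility_of_top_fraction(cases, fraction, cost_per_investigation):
    """cases: list of (probability_of_fraud, estimated_fraud_amount) tuples."""
    # Rank cases by expected fraud amount = probability * estimated amount.
    ranked = sorted(cases, key=lambda c: c[0] * c[1], reverse=True)
    # Inspect only the top fraction of the ranked cases.
    sample = ranked[: max(1, int(len(ranked) * fraction))]
    expected_recovery = sum(p * amount for p, amount in sample)
    # Net off an assumed flat cost for each investigation performed.
    return expected_recovery - cost_per_investigation * len(sample)

# Hypothetical cases scored by a detection model.
cases = [(0.9, 12_000), (0.6, 3_000), (0.3, 40_000), (0.1, 500), (0.05, 90_000)]
print(expected_utility_of_top_fraction(cases, fraction=0.2, cost_per_investigation=400))

The arithmetic is simple enough; my doubts concern where those probabilities and amounts would come from in practice.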

I think all of this is true, but useless. The returns will depend on how many fraud cases are investigated (or prevented?) and how many represent real frauds, as opposed to false positives (though financial benefits may be realized even when no fraud is found, because not every genuine issue identified by a fraud system need be a genuine fraud). However, what is gained by observing that if 20 percent of cases are investigated, and half of them are frauds, then I have received benefits from 10 percent of cases? I might assume these ratios in advance, but they would only be assumptions, and baseless assumptions will not justify significant expenditure. Or maybe I should apply this technique with a historical perspective, determining the ratios for frauds dealt with in the past. But if I did that, I might as well total the financial benefits received in actual practice, without going to the trouble of worrying about these ratios.

Similar difficulties arise when considering the second paragraph. Why are they telling us what would be optimal? The definition is circular and redundant; nobody would try to increase efficiency by beginning with maths like this. The ‘top’ fraction of cases should be reviewed… but what if you identify patterns where the same frauds recur in cases outside of that top fraction? If you can prevent that class of fraud, you will be preventing all of them, irrespective of whether they are found in the top fraction or not. And once again, what are we calculating here anyway? Are we meant to be speculating on the proportion of cases which will be reviewed in future, which is ultimately an exercise in guesswork? Or are we just applying unnecessary maths to the totaling of historic results?

Perhaps the final paragraph clarifies why the supposed ‘ROI calculation’ has morphed into a numerical analysis unsuited to real numbers. Clearly (positive) benefits can only be realized if you actually take an interest in a case. And doing something about fraud may involve (negative) costs which could outweigh the benefits realized in practice. But this just restates the obvious, by linking the return to each specific case (which may or may not be a fraud). The utility is hence related to how many actual cases of fraud the business may or may not suffer, and how severe they are. That is what drives the determination of whether an investment in a fraud system might deliver a positive return, but it must be specific to the business and grounded in concrete risks; it cannot be usefully abstracted.

Nevertheless, you should review the article and see what you can learn from it. At the very least, it should prompt some thought experiments for your business. How have you calculated the ROI for your existing fraud system(s)? How would you estimate the ROI for a future fraud system, whether it replaces an existing system or provides new functionality? What might you learn from one exercise that would help you to better perform the other?

Eric Priezkalns
http://revenueprotect.com

Eric is the Editor of Commsrisk. Look here for more about the history of Commsrisk and the role played by Eric.

Eric is also the Chief Executive of the Risk & Assurance Group (RAG), an association of professionals working in risk management and business assurance for communications providers. RAG was founded in 2003 and Eric was appointed CEO in 2016.

Previously Eric was Director of Risk Management for Qatar Telecom and he has worked with Cable & Wireless, T‑Mobile, Sky, Worldcom and other telcos. He was lead author of Revenue Assurance: Expert Opinions for Communications Providers, published by CRC Press.
