In my experience, a few truths about fraud remain as valid today as when I started out in telco fraud management in the late 1990s. Firstly, fraudsters look for the gaps that provide them with the maximum benefit at the lowest risk. Secondly, once they find such a gap they will exploit it until either the gap is closed or the risk/reward equation changes and they need, or choose, to look elsewhere.
As a result, the fraud manager’s response has been fairly uniform. When gaps are identified, either systems and/or processes are re-designed to close the gap, or detection mechanisms are enhanced so that the fraud is identified more quickly. This all makes sense – close the gap and the fraudster has to expend effort to find a new one, and that effort may be too much and they move on. Enhance your fraud management system and the benefits to the fraudster decline as the time available to profit from the exploit shrinks and, again, they find themselves looking elsewhere. But there is a third truth: the fraudster is also intelligent and adaptive, and will continue to innovate to maximise their return.
Perhaps this response needs some further consideration and challenge. Process and system re-design can be expensive, with no guarantee of success – especially if the control is process driven and relies on human intervention and judgement. Additional controls can also adversely impact the customer experience. Improving fraud detection can likewise take time and resources, especially if new data must be integrated and analysts trained in detection methods. And yet, despite these challenges, fraud teams around the world are often remarkably adept at protecting their organisation from emerging threats.
However, recall the third truth. Once fraudsters learn of the response made and the changes it introduces, they also adapt their behaviour. The game of cat and mouse is underway, and the pace accelerates as each new exploit is opened and then closed (even partially). Every time a telco responds, it provides crucial learnings and insight to the fraudsters: whether the action was even visible to the telco, how quickly the telco responded, what follow-up action was taken, and who took it. It enables the smart fraudster to understand not only the gaps but how to avoid suspicion and detection.
And so, I suggest, maybe telcos could seek to “defraud the fraudster” – to deceive the fraudster as to what the telco’s fraud management capabilities really are. Once a fraudster has been identified, slightly delaying the response may lead the fraudster to attribute detection to a later action of their own, rather than to the real trigger. That later action then becomes the behaviour the fraudster tries to change to avoid detection. When the telco is aware of the fraud occurring and can monitor and manage the risk at an appropriate level, it can, at least in part, seek to control and alter the fraudster’s understanding of its processes. To give a simplistic example: a telco has identified the CDR signature of a call-sell operation. Instead of shutting down the operation immediately when that signature is observed, perhaps a call is made to the “customer” asking about inconsistent account holder information, and this becomes the stated rationale for limiting service. The fraudster continues with the same call signature but invests their time in how they set up their fraudulent accounts. Just as fraudsters confuse the telco, telcos can confuse and frustrate the fraudster by quickly closing accounts for reasons the fraudster seems unable to bypass.
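The delayed, disguised response described above can be sketched as a simple decision rule. This is a hypothetical illustration only – the signature label, the delay window, the `kyc_inconsistent` flag and the “cover story” reasons are all invented for the example, not part of any real fraud system:

```python
import random
from dataclasses import dataclass

@dataclass
class Account:
    account_id: str
    cdr_signature: str      # summarised calling pattern from CDR analysis
    kyc_inconsistent: bool  # hypothetical flag from account-holder checks

# Hypothetical signature that analysts have linked to call-sell fraud.
CALL_SELL_SIGNATURE = "many-short-international-calls"

def respond_to_fraud(account: Account) -> dict:
    """Decide how to act on a detected fraud signature.

    Rather than suspending immediately (which reveals the real
    trigger), wait a short random period and cite an unrelated,
    plausible reason -- here, inconsistent account holder information.
    """
    if account.cdr_signature != CALL_SELL_SIGNATURE:
        return {"action": "none"}

    # A randomised delay masks the true detection trigger.
    delay_hours = random.randint(24, 72)

    # The stated reason is the "cover story"; the real trigger stays internal.
    stated_reason = (
        "inconsistent account holder information"
        if account.kyc_inconsistent
        else "routine account verification"
    )
    return {
        "action": "limit_service",
        "delay_hours": delay_hours,
        "stated_reason": stated_reason,   # what the fraudster is told
        "real_trigger": "cdr_signature",  # kept internal
    }

decision = respond_to_fraud(Account("A1", CALL_SELL_SIGNATURE, True))
print(decision["action"], "-", decision["stated_reason"])
```

The essential design point is the separation of `real_trigger` from `stated_reason`: the fraudster only ever observes the latter, so their cause-and-effect learning is directed at the wrong behaviour.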
Of course, the risk must be managed based on the organisation’s objectives, and a balance is needed – I would not want to recommend allowing frauds to run without remediation, as that sends a different and more dangerous message. But fraudsters learn quickly about cause and effect, and perhaps seeking to manipulate that understanding can help in what remains the ongoing battle to manage telco fraud.
It’s a very good point. Reminds me of the deception operations the British ran in World War II. At all costs they needed to protect the truth that they could actually read Germany’s Enigma cipher traffic – the intelligence known as ULTRA. In some cases, that meant they could not defend a city from a bombing attack because the intelligence about the impending attack could not be verified by other sources.
Winston Churchill once said — and I think it could equally be applied to the fraud cat and mouse game — “In wartime, truth is so precious that she should always be attended by a bodyguard of lies.”
Your comment reminds me that there is a long history of military deception, which shows us how strategy can be augmented by keen imagination. During the Siege of Mafeking, Colonel Baden-Powell (later founder of the Scout movement) had insufficient barbed wire, landmines and cannons to defend the town. So he pretended he had more, to fool the besiegers into keeping their distance. Instead of mines, his men buried boxes filled with sand in places the enemy would be watching. Occasionally he would use a little real dynamite to ‘test’ the fake mines. Because barbed wire is not visible at a distance, he had his men crawl under and over imaginary barbed wire, so the watching enemy would assume it was in place. And Baden-Powell made it seem he had many more cannons by having his men constantly move the guns between firings.
Similar thinking can be applied to fighting fraud. Instead of making defences seem stronger than they really are, allow them to appear weaker. Use the apparent weakness to gather evidence about fraudsters, with a view to imposing criminal and civil penalties upon them. If we think of fraud as a game of cat and mouse, sometimes the cat should capture the mouse, instead of chasing it away.