Communications Privacy Also Stops Crime

The launch of the Calyx Institute, driven by American privacy campaigner Nicholas Merrill, has prompted a lot of conversation amongst the talkRA authors. Merrill hopes to provide a range of communications services – internet, mobile and all – where privacy is guaranteed because even the supplier has no knowledge of its customers’ identities, and no way to spy on them. I began the conversation by asking if the USP of inviolable privacy might be worth more than the gains from exploiting customer data. In a measured response, Rob Chapman questioned whether an overly protective attitude to civil liberties undermines the efforts of law enforcement agencies. To continue the debate, I want to pose a question that is rarely given sufficient attention. We often hear how data about phone and internet users is an antidote to crime. But to what extent is data, and the inability to control and protect it, the root cause of crime?

Public debate about data gathering and communications surveillance has parameters that are well established. On the one hand, law enforcement agencies see increased access to data as a way to identify criminals and subversives, both before and after they do wrong. On the other hand, privacy campaigners question whether so much power in government hands is a threat to the rights of the individual. As a risk manager, I spend my life looking for bias, and trying to counteract it. In the real world, the most efficient solutions may have no advocate, because nobody receives a special benefit, or because nobody has the information to justify the solution. When listening to the recurring patterns of the public debate about communications privacy, I wonder if a kind of bias has taken over the arguments on all sides. Is it possible that really thorough privacy might also prevent some of the root causes of crime? And if so, has this benefit of privacy been properly weighed against the benefit of using surveillance to detect crime?

In his post, Rob argued that:

With only basic information as to who are the service’s users, I perceive the possibility of a fully encrypted service to be a serious risk and, whilst I admire and support the actions of Mr Merrill in his dealings with the FBI, I think that the pendulum has swung too far in the opposite direction to be healthy.

I cannot argue with the values that lead Rob to his conclusion. And there is little to say about whether more data would further empower the FBI. It would. But here I want to remind the audience of a position that I have taken for a long time, and have often repeated on talkRA: human nature leads us to underestimate the relative benefits of prevention, as compared to detection. Normally I discuss this principle in the context of preventing and detecting revenue leaks, but it applies to data leaks in general. Who, of the loudest voices in this public debate about surveillance, has asked whether the enablers of surveillance are also an important root cause of crime? And who has tried to evaluate the benefits of removing that root cause?

From the way I pose my questions, it should be clear which way I lean. That is not to say I have reached a conclusion – I think it is too soon to reach definitive conclusions. We need more facts, and less opinion. Unfortunately, most of the opinions expressed come from biased sources. And none has sufficient motivation to highlight the possibility that ‘prevention is better than cure’. To crudely stereotype the principal actors in this debate:

  • Law Enforcement Agencies: like most providers of public goods, law enforcement bodies receive greater rewards (budget, status, praise) when responding to a visible problem than when they are doing things that prevent a problem in the first place. Some efforts to prevent crime can be extremely productive; car crime has fallen significantly as manufacturers have improved the security features they install as standard. However, preventative efforts need to be backed by political will, especially when the public is not engaged and does not see the need for spending on prevention.
  • Communications businesses: apart from Nicholas Merrill’s innovative proposition, no telco boss has ever constructed a business model to make more revenue by having less data. Customer data is routinely lauded as a key weapon in the fight against declining margins. So all telcos start out with the same mindset when it comes to data – they want it. They also assert it should be protected, managed properly, subject to regulation and so forth, and this all has a cost which they seek to manage to reasonable levels. But none suggests they would be better off not collecting the data in the first place. Their cost vs. benefit argument has always been purely internal to the business. As technology costs have gone down, so the optimal balance has shifted. But gathering data also has an external cost, borne by society. It leads to crime. Now, I accept some will find this assertion to be inflammatory. “But if the data is properly and securely controlled then there is no crime.” Indeed. I ask not if security is possible, in theory, but if it is reasonable to expect it in practice. Look at the evidence all around. Security breaches keep happening. And when they do, they result in costs not just to the business, but to the rest of society too. To use an explosive analogy to make my point, consider the Fukushima nuclear plant. When building the reactors, nobody factored in the social cost of a tsunami causing a meltdown. If they had, the reactors would not have been built. Yet asserting that sufficient safety measures are in place does not make it so, and we know that human beings have a recurring bias towards optimism whenever they evaluate the downside cost of events perceived to be devastating but highly unlikely. The same is true of data breaches, which also lead to costs for wider society – in the form of crime – as well as costs for the business.
  • Privacy activists: the best way for an activist to get attention is to position themselves as defending the rights of the individual against the abuse of power. It can be a highly persuasive argument. Proponents of unfettered communication routinely point to examples like the Arab Spring, where electronic communication was seen as a vital enabler for organizing protest. The argument is very human, and very emotive. Even Nicholas Merrill benefits from presenting himself as a David who took on Goliath, after he spent many years fighting an FBI gagging order that was subsequently found to be unconstitutional. And, if we are being thoroughly cynical, criminals too would favour these kinds of arguments about battling oppressive governments. Kim Dotcom, the colourful character behind Megaupload, seeks to bolster his fight against prosecution by releasing a song including the lyrics: “we must oppose / those who chose / to turn innovation into crime” (see here). Presumably Dotcom is less keen to publicize his previous track record; he has been convicted of insider trading and embezzlement. However, whilst privacy advocates may not make the argument that privacy might reduce some forms of crime, that does not invalidate the argument. After all, it is difficult to imagine them having the resources to successfully calculate the cost to society that stems from poor data security. It is in the nature of this problem that only those with privileged information will be able to put a dollar value to the crimes enabled by data breaches.

Put into this context, we need to weigh how Nicholas Merrill’s approach to privacy can both inhibit law enforcement and remove a root cause of crime. No crime will ever be committed as a result of a data breach from Merrill’s proposed comms provider, because the provider will have no relevant data about its customers. The Calyx service provider will offer the metaphorical equivalent of Germany’s response to Fukushima: there is no chance of a nuclear meltdown if you stop having nuclear reactors. On the other hand, not all crime is caused by data breaches. Surveillance is a means to detect very many types of crime. So in weighing up the arguments, we must recognize that Merrill’s proposed privacy model, if adopted widely, would eliminate a root cause of certain types of crime, whilst inhibiting detection of other types of crime. A dispassionate analysis should fairly evaluate the potential net impact on crime, and not merely calculate the impact on crime detection. In fact, a balanced analysis of this kind would go some way to addressing fears about government motives. The right priority is to reduce crime, by whatever means. Inherently preferring detection, surveillance and control is the wrong priority, as that suggests authoritarianism is the real goal. However, this is an argument where the people who have a lot of useful data – the businesses – are unlikely to do the right thing for society as a whole. After all, the data gatherers cannot be expected to fairly forecast the chances and social costs of data breaches, any more than the nuclear industry can be expected to accurately forecast a Fukushima, or a Chernobyl, or a Three Mile Island.

Many arguments are hampered because they outline a schema for making a judgement, but provide insufficient data to reach a conclusion. On a big scale, most of the arguments between economists, and about the best policies for debt-riddled nations, are hampered because we can observe economic metrics, but we cannot get additional data through experimentation. On a small scale, most of the arguments in favour of addressing revenue leakage have explained what might go wrong, but lacked data to conclusively establish how much does go wrong – and I still believe we suffer from a lack of robust and public data about leakage, irrespective of the endless opinion polls that masquerade as research. My argument above suffers from the same lack of information. How much crime is detected by surveillance? How much more crime might be detected by increased surveillance? And what is the value of all the crimes caused by privacy breaches, which would be made impossible if all communications providers adopted the privacy model proposed by the Calyx Institute? I do not know. But for that reason, the Calyx Institute is a good thing. As well as being a business, it is also an experiment, and hence has the potential to provide important information that we can plug into our estimations of risk. We must let the experiment be run, in order to get the information we need.

In an article for Slate, Merrill has adopted some of the arguments about privacy being a way to reduce crime:

Merrill admits prospective funders of his latest project have expressed concerns that it could lead to a confrontation with powerful actors (“It’s challenging to go up against some of the forces that are trying to open up all communications to wiretapping,” he says). But he is trying to address this by showing that government and law enforcement agencies could themselves benefit from his technology. Cybersecurity and privacy are part of the same problem but framed differently, he believes. Both could be addressed at once by ubiquitous encryption of communications and data transfer — protecting user privacy while also helping prevent malicious hackers from stealing information.

The article says that Merrill has now raised USD70k from crowd-funding. This is well short of the USD1M he said was needed for launch, but does suggest a degree of popular support for his ambitions. And perhaps there is an obvious motivation for his neat argument that cybersecurity and privacy are two sides of the same coin…

Merrill says that he has held talks with a host of interested venture capitalists and a few “really big companies” apparently interested in partnering up or helping with financial support. Now the “surveillance-proof” software is in development, and he is on track to begin operating a limited service by the end of the year.

Financial backers are not going to throw their money behind someone threatening to undermine law and order, so it makes sense that Merrill is trying to co-opt the language of law enforcement. Before rushing to judge the risks, I would like to see if Merrill can persuade serious backers to support him, and then to see how his business would work in practice. As Merrill puts it:

“What we’re trying to do is re-envision how the telecommunications industry could work if privacy and encryption technology was built in from the beginning.”

I am curious to see that. Merrill is an innovator, and innovation always introduces risk. In this case, we need to see the results that are achieved in real life, before we rush to a judgement about the risks being taken. Merrill is an outsider, a tiny David, taking on some mighty Goliaths. Many innovators start out with crazy ambitions and a garage instead of an office. Most fail. Some give birth to Apple. I want to see Merrill pursue his dream, and to see what the results are. There are risks, but there are potential rewards too. If we are serious about finding the right balance between surveillance and privacy, and about providing customers with services they actually want, we need to see what Merrill can deliver.

Eric Priezkalns
Eric is a recognized expert on communications risk and assurance. He was Director of Risk Management for Qatar Telecom and has worked with Cable & Wireless, T‑Mobile, Sky, Worldcom and others.

Eric was lead author of Revenue Assurance: Expert Opinions for Communications Providers, published by CRC Press. He was a member of Qatar's National Committee for Internet Safety and the first leader of the TM Forum's Enterprise Risk Management team. Eric currently sits on the committee of the Risk & Assurance Group. He is a qualified chartered accountant, with degrees in information systems, and in mathematics and philosophy.

Commsrisk is edited by Eric.