Review of Ofcom Report on Data and the IoT

We all talk a lot about the impact of the Internet of Things (IoT). In almost every article about the topic, the author will include one sentence saying that personal data is important, and we need to think about privacy. Consider this recent article by Ali Durmus for Telcoprofessionals.com:

2015 – The year the wearable market explodes

…IDC estimates that 72.1 million devices will be shipped in 2015, up 173.3% from the 26.4 million units shipped in 2014. Fitbit continues to be the market leader with 34.2% market share in 1Q15, with Xiaomi coming second at 24.6%. Fitness, including activities such as how many steps a person has taken in a day along with running, hiking and multi-sport activities, continues to dominate the reasons why people are buying wearables…

…The use of wearables in a connected healthcare and lifestyle environment will become increasingly important, especially for the management of illnesses as well as for health predictions and for calculating things such as health insurance. There are the obvious security and privacy issues that need to be addressed with that.

I am sorry, but it is not good enough to write a long article about the huge revenue growth that will come with IoT, and include one bland sentence that says ‘obvious’ issues will ‘need to be addressed’. How? By whom? At what cost? What will be the penalties if they are not?

When it comes to the IoT, we are not talking about trivial or unimportant data about a human being, but there will be few easy answers to the questions we must face. Consider that a wearable medical device will capture lots of sensitive data, and will need to transmit it. We also want these devices to be light, unobtrusive, and to have long-lasting batteries – nobody’s life will be saved by a sensor that is switched off. But that immediately creates huge challenges. Encryption is a computationally intensive process, so the stronger the encryption, the worse the battery life! Who decides what is an acceptable trade-off between safeguarding a patient’s data and the chance of a flat battery?
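To make that trade-off concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it is an assumption invented for illustration – the battery capacity, idle drain and per-message crypto costs are not measurements from any real device – but it shows how quickly stronger cryptography can eat into the battery life of a constrained wearable.

```python
# Back-of-the-envelope model of the security/battery trade-off for a
# wearable sensor. ALL figures are illustrative assumptions, not
# measurements from any real device.

BATTERY_MJ = 100 * 3.6 * 3600   # 100 mAh cell at 3.6 V ~= 1.3 kJ, in millijoules
IDLE_MJ_PER_DAY = 500           # assumed baseline drain per day
READINGS_PER_DAY = 24 * 60      # one sensor reading per minute

# Assumed energy cost of securing and transmitting one reading,
# rising with the strength of the cryptography applied.
CRYPTO_COST_MJ = {
    "no encryption": 2.0,   # radio transmission only
    "lightweight":   2.5,   # a cipher designed for constrained devices
    "strong":        6.0,   # heavier handshake and cipher suite
}

for level, cost in CRYPTO_COST_MJ.items():
    daily_drain = IDLE_MJ_PER_DAY + READINGS_PER_DAY * cost
    print(f"{level:>14}: ~{BATTERY_MJ / daily_drain:4.0f} days of battery life")
```

Under these made-up numbers, moving from lightweight to strong cryptography cuts battery life by more than half – exactly the kind of trade-off someone has to own.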

This concern came up again and again when I talked to manufacturers at Wearable Tech 2015. To be fair to them, they acknowledged the scale of the challenge. But many of the manufacturers were start-ups in fierce competition with rival firms. We are in grave danger if we foster an approach that focuses on revenues now and deals with the issues later. And it would be irresponsible for a big comms firm to slap its established brand over a new service, then point fingers of blame when its manufacturing partner builds devices with inadequate security. To its credit, the industry knows we face problems. This was illustrated when I engaged in a RIMS Twitter chat about cybersecurity and the IoT. This was the comment that prompted the most responses.

We know that those IoT devices already in widespread use have failed to incorporate adequate protection of data. Smart meters face fewer technical challenges than wearables, but academics lambasted the poor cryptography of one of the most popular smart meter specifications.

Thankfully, one positive message I heard at Wearable Tech 2015 is that regulators are thinking about the risks, even if they do not yet have all the answers. Confirmation of that came recently with Ofcom, the UK comms regulator, commissioning a report by independent consultants WIK-Consult. The purpose of the report was to study recent academic research into the best ways to interact with customers when handling data gathered via the IoT. This report is not the end of a process; we need much more industry debate about these risks. However, I hope the report will inject some urgency into the pursuit of compromises that take all the risks into consideration. What follows are some of the key findings from that report.

The IoT multiplies the scale of the data and privacy challenge.

If expectations about the take-up of such connected devices are correct, online tracking of personal data is likely to become seamless across all areas of people’s lives. Besides the increase in the amount of data, one may also expect that data gathering, aggregation and analysis will become even more subtle as machines talk to machines with (almost) no human intervention. Thus, consumers will have even less opportunity to learn about data-gathering practices. In some cases, they may not even be aware that the device they are currently using is actually connected to the Internet.

…it is likely that the IoT will multiply the number and complexity of contractual relationships, which has to be reflected in the terms and conditions… Furthermore, it is likely that many connected devices will feature only very small screens or even no screens at all. This will also render attempts to make such agreements easier to read, for example by turning them into a label, largely futile… Finally, the IoT is likely to increase uncertainty about the consequences of consumers’ actions because as the complexity of interactions multiplies, so do potentially adverse effects of willingly or unwillingly disclosing personal data.

The need for informed consent is well established in privacy law.

Consent plays a central role in most data privacy laws in the world.

[In Europe] the stance on personal data and privacy is clear-cut. The right to the protection of personal data is a fundamental right in the European Union, given effect by the European Data Protection Directive of 1995, which is implemented by the member states. The directive entails specific requirements for the processing of personal data. Within that, informed consent plays a central role.

However, informed consent is rarely given in practice.

Consumers rarely read terms and conditions at all.

The signing-without-reading problem or, in the online environment, the clicking-without-reading problem is a well-documented phenomenon.

…consumer surveys consistently show that consumers do say they worry about their personal data and what happens to it. In practice, however, they show very little if any interest in engaging with terms and conditions, or more specifically with privacy policies. The seminal study in this area finds that only about 0.05% of agreements are actually accessed by consumers before they consent to them. Even access does not necessarily mean that consumers actually read the terms and conditions, as the average time spent viewing the content of the agreements was significantly below one minute. Understandably, this is not enough to grasp the meaning of the respective agreement.

…if one were to read all the terms and conditions of the websites one visits throughout a year, this would take up several weeks assuming a full 40 hours of reading time each week.
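A few lines of Python reproduce the order of magnitude behind that claim. The number of sites, policy length and reading speed below are illustrative assumptions of mine, not figures taken from the report:

```python
# Rough arithmetic behind the 'several weeks per year' claim.
# All inputs are illustrative assumptions, not figures from the report.

SITES_PER_YEAR = 1400      # assumed unique websites visited in a year
WORDS_PER_POLICY = 2500    # assumed average length of terms/policy
READING_SPEED_WPM = 250    # typical adult reading speed

hours_per_year = SITES_PER_YEAR * WORDS_PER_POLICY / READING_SPEED_WPM / 60
weeks = hours_per_year / 40
print(f"~{hours_per_year:.0f} hours per year, or ~{weeks:.1f} forty-hour weeks")
# With these assumptions: ~233 hours, i.e. roughly six working weeks.
```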

Cranor and McDonald found that significantly fewer than half of web users (40%) were aware that their emails may be scanned to enable targeted advertisements. This “discrepancy between attitudes and behaviors” is referred to as the “privacy paradox”. Furthermore, 29% of users in the same study did not believe that this was actually common practice, as they thought such practices would be unlawful. These results may be taken as an indication that consumers do not have the (ex-ante) knowledge to make informed decisions about privacy.

Commonly, the length of terms and conditions and the legalistic jargon are blamed for consumers not being able to understand them. In fact, even law students were found to have significant problems understanding them. Studies investigating the readability of terms and conditions consistently find that at least university-level reading skills are needed to understand them.

…it can also be argued that there is a contradiction between companies’ natural interest in building “liability shields” and consumers’ interest in being informed about the most important cornerstones of a specific privacy policy. Consequently, companies often use “long texts that are too legalistic and complicated” for privacy policies. Interestingly, from a legal perspective, information that is too detailed, technical and lengthy might not violate data protection regulations as such, but it can pose high risks to the privacy of users, as they consent to something that more or less deliberately hides its meaning from them. Representatives from the data protection authorities of each EU member state, the Article 29 Working Party, strongly disapprove of long privacy policies full of legalese. In a letter to Google Inc. they criticise the company’s new privacy policy and explicitly state that in general “Internet companies should not develop privacy notices that are too complex, law-oriented or excessively long”.

Businesses can do a better job of communicating the important facts to their customers.

There are signs that (in the UK at least) firms are becoming more proactive about communicating their privacy notices. For instance, they are moving towards “just in time” notices that pop up at appropriate times.

Standardization, perhaps led by governments, is one way to improve communication.

More harmonised information provisions may help reduce the burden on consumers of reading and understanding. Again, several researchers suggest using icons instead of text popups or other condensed information. These icons generate trust when they embody a certification scheme. Furthermore, privacy policies that reflect a consumer’s individual cultural background and preferences were found to contribute to better understanding. Other approaches shown to improve consumers’ understanding use automated information extraction from privacy policies. Warnings about unexpected terms in a privacy policy may serve as a means to help consumers become aware of unusual personal data and privacy practices.
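The “warnings about unexpected terms” idea is easy to picture with a toy sketch. The watch-list of phrases below is invented for illustration and is far cruder than the automated extraction techniques the research describes:

```python
# Toy sketch of 'warnings about unexpected terms': scan a privacy policy
# for phrases a consumer might not expect and surface them up front.
# The watch-list is a made-up example, not any standard or real product.

import re

UNEXPECTED_PATTERNS = [
    r"sell\w*\b[^.]*\bdata",                  # selling user data
    r"shared?\b[^.]*\bthird parties",         # sharing with third parties
    r"retain\w*\b[^.]*\bindefinitely",        # indefinite data retention
    r"scan\w*\b[^.]*\b(e-?mails?|messages)",  # scanning communications
]

def flag_unexpected_terms(policy_text: str) -> list[str]:
    """Return the sentences of a policy that match a watch-list pattern."""
    sentences = re.split(r"(?<=[.!?])\s+", policy_text)
    return [s for s in sentences
            if any(re.search(p, s, re.IGNORECASE) for p in UNEXPECTED_PATTERNS)]

policy = ("We value your privacy. Usage data may be shared with third "
          "parties for advertising. Server logs are retained indefinitely.")
for warning in flag_unexpected_terms(policy):
    print("Heads-up:", warning)
```

Even a crude filter like this illustrates the principle: pull the surprising clauses out of the legalese and show them before the consumer clicks “agree”.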

Some countries have already adopted laws governing how information on public administration websites needs to be presented. For instance, in 2011 Germany implemented a provision for barrier-free access to digital information. Since then, national government institutions have been obliged to offer digital information not only to disabled people but also in “easy-to-understand language”. Website users can easily switch from the elaborate wording to the simpler version.

Another way to improve communication is to encourage consumers to speak for themselves.

…new developments in digital media may also have a positive impact on consumer privacy, as consumers can enhance their position when negotiating privacy on general as well as specific issues. Via a video, blog entry, tweet or Facebook post, individual consumers are able to publish their opinions on privacy and to protest against, for example, changes to the terms of contracts. Consumers can generate more attention for a certain topic, network with other consumers, and eventually a provider might be more easily forced to react than in the offline world. Digital publication of information and arguments can result in enormous pressure on the “data collectors”. We would see information, and the potential response of the public, as one necessary but not sufficient tool of consumer empowerment.

We do not have all the answers.

It seems that a single solution for all – or at least most – issues is yet to be found.

However, one of the better answers involves raising consumer awareness in general.

…future research could perhaps address the phase of the consumer information process before consumers even come into contact with terms and conditions, namely when they become aware that there is an issue at all. Currently, there is a lot of uncertainty among consumers and experts alike regarding the potential effects of data collection. First, we have to be able to point to the specific (adverse) effects that may emerge from the tracking of personal data. Specific information about these effects is likely to raise awareness among consumers. This, in turn, is likely to motivate them to engage with the terms and conditions, and in particular the privacy policies, of the services and products they consume.

Low consumer awareness of the issues means consumers are not ready to debate the balance between risks and rewards. However, we need that debate.

…the key to agreement on an acceptable level of costs is likely to depend on… whether we can agree on the risks and benefits that consumers derive from using online services. Clinical research went through a decades-long painful process before finally reaching agreement on the importance of trial participants understanding both the risks and the benefits. As a result, clinical research has accepted significant transaction costs in order to ensure informed consent. This would suggest that, as a key first step in thinking about how such an approach could be adapted to online services, there would need to be agreement on the risks and benefits to consumers. Such agreement would need to reflect a consensus on socially and economically acceptable behaviour in the data-driven economy, especially with respect to data harvesting, forwarding, aggregation and analytics practices.

As we know that consumers are unaware of even relatively simple analytics techniques… it will be interesting to learn how consumers value more advanced techniques – and how they would express the potential of these techniques in terms of a risk/benefit trade-off. We are by no means implying that this trade-off is negative; what we want to emphasise is that the risks and benefits appear mostly unexplored today – which appears to be the obvious result of unaware consumers and the widespread use of data harvesting and processing practices that take place in secrecy. Delineating right from wrong will thus be an essential part of naming the risks and benefits to consumers. Valuing those risks and benefits will, in turn, be an essential part of defining how seriously we actually take informed consent as a requirement. This valuation – in combination with an assessment of whether other means to improve informed consent have the desired effect – will be an important consideration in determining whether providers need to take measures to ensure that consumers understand the risks and benefits, and under what circumstances providers might need to document that consumers understood.

We must learn from historical mistakes, or we will repeat them.

We refer… to the example of clinical research, where it was first necessary to establish the actual risks that arise when informed consent is not truly informed consent. It took a long list of – from today’s perspective – unethical and quite frightening cases coming to the attention of researchers in the domain, the courts and society at large before the domain could establish a notion of right and wrong, of what is ethical, and an acceptance that human trial subjects have rights after all. As much as this was a challenge for clinical research, it will be challenging for our domain to establish a widely acceptable understanding of the relevant risks and benefits to consumers. However, if informed consent is to be the benchmark, there is no way around this debate in our domain. If the goal truly is that consumers make informed decisions, we can learn from clinical research that we have to make consumers understand, and that we have to be able to specify the relevant risks and benefits for consumers when they choose to use a digital and data-driven product or service.

You can obtain the full WIK-Consult report for Ofcom from here.

Eric Priezkalns
Eric is the Editor of Commsrisk. Look here for more about the history of Commsrisk and the role played by Eric.

Eric is also the Chief Executive of the Risk & Assurance Group (RAG), a global association of professionals working in risk management and business assurance for communications providers.

Previously Eric was Director of Risk Management for Qatar Telecom and he has worked with Cable & Wireless, T‑Mobile, Sky, Worldcom and other telcos. He was lead author of Revenue Assurance: Expert Opinions for Communications Providers, published by CRC Press. He is a qualified chartered accountant, with degrees in information systems, and in mathematics and philosophy.