5 Reasons to Fear 2023

George Orwell wrote Nineteen Eighty-Four in 1948, and so he had to imagine the technology that might be available to a future authoritarian regime. The following excerpt was taken from the opening pages of that story.

Behind Winston’s back the voice from the telescreen was still babbling away about pig-iron and the overfulfilment of the Ninth Three-Year Plan. The telescreen received and transmitted simultaneously. Any sound that Winston made, above the level of a very low whisper, would be picked up by it; moreover, so long as he remained within the field of vision which the metal plaque commanded, he could be seen as well as heard. There was of course no way of knowing whether you were being watched at any given moment. How often, or on what system, the Thought Police plugged in on any individual wire was guesswork. It was even conceivable that they watched everybody all the time. But at any rate they could plug in your wire whenever they wanted to. You had to live — did live, from habit that became instinct — in the assumption that every sound you made was overheard, and, except in darkness, every movement scrutinized.

It may seem hard to imagine now, but televisions were still a novelty in 1948. That year there were around 350,000 TV sets in the USA, serving a population of 147 million. British broadcasting benefited from a denser population, with roughly 350,000 televisions shared among 49 million Brits. Television manufacturing ramped up over the next few years, with 5 million TV sets sold to Americans in 1950 alone, and 90 percent of US households owned a television by the end of the 1950s. Orwell correctly anticipated that televisions would become ubiquitous, but it took much longer to realize a future where centrally connected cameras and microphones would also be found in every home. Perhaps Orwell would have struggled to imagine them in every pocket, which is why Nineteen Eighty-Four depicts only wired connections. We no longer have to imagine that kind of technology; it exists today.

Orwell’s story still resonates because of what it says about human beings, whose weaknesses remain unchanged no matter how much technology improves. We do not need imagination to conceive of technologies that could be exploited by an Orwellian society; we only need enough imagination to remind ourselves what people are capable of. Technology itself is neither good nor bad, but it is easy to underestimate the harmful ways it will be used. The invention of the motor car gave people unprecedented freedom of movement, but it also enabled a series of violent bank robberies: gangsters like Bonnie and Clyde outran the police in cars with powerful engines, racing to the nearest state border because the absence of federal law enforcement left state police powerless to act outside their own jurisdictions. Science fiction writers like Arthur C. Clarke predicted a future where cheap, safe and plentiful nuclear power would be used to launch space stations and travel to other planets. Nobody has yet set foot on Mars, but we do live in a world where Vladimir Putin and Kim Jong-un use the threat of nuclear warfare to bully peaceful neighbors.

Networked technologies also present a threat to our safety and wellbeing, if the risks are not mitigated. Here are five threats that are rising sharply as we enter the new year.

1. COVID-19 Surveillance Technology Has Not Been Turned Off

A recent article by the Associated Press highlights how networked surveillance that was justified by the need to monitor and control the spread of disease has been retained, and is now used to pursue more questionable objectives.

Israelis had become accustomed to police showing up outside their homes to say they weren’t observing quarantine and knew that Israel’s Shin Bet security agency was repurposing phone surveillance technology it had previously used to monitor militants inside Palestinian territories.

Australia’s intelligence agencies were caught “incidentally” collecting data from the national COVIDSafe app. News of the breach surfaced in a November 2020 report by the Inspector-General of Intelligence and Security, which said there was no evidence that the data was decrypted, accessed or used. The national app was canceled in August by a new administration as a waste of money: it had identified only two positive COVID-19 cases that wouldn’t have been found otherwise.

At the local level, people used apps to tap their phones against a site’s QR code, logging their individual ID so that if a COVID-19 outbreak occurred, they could be contacted. The data sometimes was used for other purposes. Australian law enforcement co-opted the state-level QR check-in data as a sort of electronic dragnet to investigate crimes.

“When they see someone not wearing a mask, they go up to them, take a photo on their tablet, take down their details like phone number and name,” said B Guru Naidu, an inspector in Hyderabad’s South Zone.

Officers decide who they deem suspicious, stoking fears among privacy advocates, some Muslims and members of Hyderabad’s lower-caste communities.

“If the patrolling officers suspect any person, they take their fingerprints or scan their face – the app on the tablet will then check these for any past criminal antecedents,” Naidu said.

2. The Automation of Crime

Industrialization generates enormous improvements in standards of living because of the massive increase in productivity relative to the amount of human labor required. However, crime has generally remained labor-intensive. A pickpocket takes your wallet with his own hands, whilst fraudsters may spend hours making phone calls in the hope of socially engineering one employee in a target business. Email scams were an early example of crimes where simple automation allowed a massive change in the ratio of victims to criminals, but their repetitive nature meant the public could be warned about them. The spread of malware is harder to detect until it is too late. This has led to a surge of ransomware attacks, with Colonial Pipeline, JBS Foods, CNA Financial, the Washington DC Metropolitan Police Department, and Advanced, a supplier of IT services to the UK’s National Health Service, amongst the high-profile victims.

Despite all the additional surveillance of ordinary citizens, little can be done to tackle cybercriminals, partly because they are hard to identify, but mostly because they will base themselves in jurisdictions which allow them to commit crime on condition that only foreigners are attacked.

3. Politics by Other Means

Prussian general Carl von Clausewitz famously observed that “war is the continuation of policy with other means”. Given the blurring of boundaries between cyberwarfare and cybercrime, we might now conclude that systematic networked crime can also represent the continuation of a country’s foreign policy. Russian hackers prefaced the invasion of Ukraine with a series of attacks on Ukrainian government websites, and these were followed by attacks on the websites of Ukrainian banks. Later in the year, as Russia sought to exploit Western dependency on oil and gas imports, a cyberattack on a petroleum refinery in a NATO country was blamed on a Russian criminal group.

The skills deployed in recent Russian cyberattacks were honed through criminal endeavors, with the Russian authorities effectively licensing theft from foreigners as a way to build up the capabilities available for national operations. This is a modern analogy to the 16th-century English pirates who were licensed to raid Spanish ships, hindering the development of Spain’s navy while increasing the experience of English crews, who were then better able to defend the country when Spain unsuccessfully sought to invade England in 1588. The pirates of the cyberseas are not exclusively Russian, with North Korea earning a reputation for state-sponsored theft of cryptocurrency. Other recent cyberpiracy includes state-backed hackers from Iran installing cryptocurrency mining software on US government computers, and COVID-19 benefit fraud being tied to state-sponsored hackers for the first time when a Chinese hacking group stole from US pandemic relief funds.

4. Who Controls the Algorithms?

It would be foolish to imply that all Western leaders are selflessly pursuing the best interests of ordinary people. Despots like Putin have cruder methods of controlling the press, but Western politicians also seek to gain advantage by influencing what the population does and does not hear. This is becoming especially evident in the concern expressed by many Western leaders over how the internet can be used to communicate information they do not like, although they prefer words like ‘disinformation’ that skew perceptions before any specific examples are cited. This is a convenient tactic, given that a Western politician expressing these concerns is as likely to be a source of disinformation as anyone else.

As Putin showed in Russia, you do not need to immediately arrest all hostile journalists if you want to progressively skew public perceptions in your favor. It is better to exert incremental pressure over time, so that when it becomes necessary to use more brutal methods to silence dissenters there will be vocal support for this oppression from the majority of the press. A policy of steadily increasing censorship of the web and social media can be married to increased reliance on state funding for those journalists who receive government approval. Censorship will increasingly rely on algorithms rather than human censors, further obfuscating the influence exerted by powerful people, often through informal connections. These tactics were found in Putin’s playbook, but they can also be found in the policies pursued by Western governments.

Canada’s government is dangerously undermining freedom of speech by trying to juke algorithms to favor some content over other content, whilst simultaneously seeking to direct more money to mainstream press organizations that would already have collapsed were it not for existing subsidies. Germany’s traditional media businesses have consistently agitated for subsidies from tech companies that recycle snippets of their content, and they have used their influence over the opaque decision-making process within the European Union to keep drafting proposals that would benefit them financially. If politicians are not subject to scrutiny, but are willing to do favors for the press, we should not be surprised if this affects the coverage they receive.

New Zealand Prime Minister Jacinda Ardern generated plenty of headlines when she told the United Nations about the launch of her initiative to research social media algorithms, but none of that coverage queried why the leader of the world’s 126th-largest country is so much better known than most other heads of government. It seems the algorithms favor more positive coverage for Ardern outside of New Zealand than she receives within it. I would rather ask Ardern why she emphasizes the need for an independent foreign policy whenever asked about the influence exerted by China’s authoritarian regime, despite her insistence that only a multinational response can manage the use of social media by citizens of Western liberal democracies.

5. Even Google Fears ChatGPT

The ChatGPT chatbot engine launched by OpenAI in November has received a lot of attention because of how well it produces high-quality natural language text in response to seemingly any question put to it by users. According to the New York Times, the potential for ChatGPT to disrupt other businesses is so severe that an internal memo at Google says they have…

…upended the work of numerous groups inside the company to respond to the threat [of] ChatGPT

Google is concerned that ChatGPT does such a good job of synthesizing information it finds online that it will harm the revenues generated by Google’s search engine. But it is not just Google that needs to worry about code that produces human language better than the average human being. Chatbots exist because they are cheaper than employing staff to deal with customer queries, so a superior chatbot could put enormous numbers of people out of work. The transition has been made easier by Western businesses having already offshored so many contact center jobs to reduce costs. There are also many people employed as journalists and marketing copywriters whose roles will be threatened by ChatGPT. Many do little more than rearrange and polish some basic input, like a press release or a client’s outline, bolstering it with a few additional observations gleaned from the web. An AI that can turn a basic query into an immaculate essay, with no further need for proofing or editing, will pose a threat to those human writers too. The only defense against a machine that produces elegant and grammatical text is the human ability to engage the interest of an audience by offering an alternative point of view. However, so much professional writing consists of strictly adhering to mainstream opinions that ChatGPT will have an advantage, because it is less likely than a human author to introduce personal bias.

Thinking about the threats identified above, machines that create perfect natural language will also be a boon to criminals and state-sponsored hackers who want to engage in social engineering on a massive scale. A spam SMS message or scam email might be identified by recipients because of mistakes in the choice of words, or it might be automatically blocked because exactly the same string of words is used each time. A machine that never makes mistakes whilst producing thousands of different ways of expressing the same concepts will be much more effective at fooling victims and evading filters. It could also lead to the creation of bots that spread propaganda whilst remaining indistinguishable from human users. And those professional writers who currently demand more subsidies for their work will be under even more pressure to explain why they deserve to be paid every time their words are quoted by a search engine, if a more sophisticated software program can summarize the gist of what they have written without reproducing any part of it verbatim. Copyright has never been an adequate protection against plagiarism; ChatGPT is effectively the automation of such a refined form of plagiarism that no human will be able to identify the sources that the bot has paraphrased.
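The weakness of word-for-word filtering can be illustrated with a minimal sketch in Python. The blocklist, the example messages and the `is_blocked` function below are all hypothetical, chosen only to show why a filter that matches exact text catches a copied scam but not a machine-generated paraphrase of the same scam.

```python
# Hypothetical sketch: a naive spam filter that blocks only messages whose
# exact text has been reported before. Paraphrases of the same scam pass.
import hashlib

# Messages previously reported as spam (invented examples).
reported_spam = {
    "Your parcel is held at customs. Pay the fee at example.test/pay",
}

# Store hashes of known-bad messages for quick lookup.
blocklist = {hashlib.sha256(msg.encode()).hexdigest() for msg in reported_spam}

def is_blocked(message: str) -> bool:
    """Return True only if the message is identical to reported spam."""
    return hashlib.sha256(message.encode()).hexdigest() in blocklist

# The exact copy is caught...
print(is_blocked("Your parcel is held at customs. Pay the fee at example.test/pay"))
# ...but a reworded version of the same scam slips through.
print(is_blocked("A customs fee is due on your parcel; settle it at example.test/pay"))
```

Real filters use fuzzier matching than this, but the underlying arms race is the same: each unique rewording forces the defender to recognize meaning rather than strings, which is exactly what mass-produced machine paraphrasing exploits.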

And if you thought algorithms were dangerous because they can influence what you see and read, and hence how you think, consider what will happen when powerful people start trying to determine what you can be told by an AI chatbot that sounds just like a real person. At some point in the future somebody may use the text of a profound and forceful story like Nineteen Eighty-Four as an input to a machine that is tasked to rewrite it for a modern audience, but with an extra flourish here, and some details omitted elsewhere. The result would be a version of Newspeak that even Orwell would have found difficult to imagine.

Eric Priezkalns
Eric is the Editor of Commsrisk. Look here for more about the history of Commsrisk and the role played by Eric.

Eric is also the Chief Executive of the Risk & Assurance Group (RAG), a global association of professionals working in risk management and business assurance for communications providers.

Previously Eric was Director of Risk Management for Qatar Telecom and he has worked with Cable & Wireless, T‑Mobile, Sky, Worldcom and other telcos. He was lead author of Revenue Assurance: Expert Opinions for Communications Providers, published by CRC Press. He is a qualified chartered accountant, with degrees in information systems, and in mathematics and philosophy.