Why Massive Political Bias in Email Spam Algorithms Is Problematic for Free Speech More Generally

You have probably noticed all the recent fuss about Elon Musk taking over Twitter. So many people are disputing whether he will save free speech or be a threat to public safety that you have to wonder why so few have anything to say about Twitter’s current policy of allowing literal warmongers to rationalize mass murder. You are less likely to have noticed a recent study by academics at North Carolina State University which found that three of the most popular email providers exercised considerable political bias in the emails they redirected to spam. The researchers examined the impact of the spam filtering algorithms (SFAs) of Gmail, Outlook and Yahoo upon a wide range of emails sent by candidates during the 2020 US elections and found:

…all SFAs exhibited political biases in the months leading up to the 2020 US elections. Gmail leaned towards the left (Democrats) whereas Outlook and Yahoo leaned towards the right (Republicans). Gmail marked 59.3% more emails from the right candidates as spam compared to the left candidates, whereas Outlook and Yahoo marked 20.4% and 14.2% more emails from left candidates as spam compared to the right candidates, respectively.
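
Those percentages deserve a moment of thought, because “X% more emails marked as spam” can be read in more than one way: as a gap in percentage points between the two groups, or as a relative difference between their spam rates. The Python sketch below uses invented counts, chosen purely for illustration rather than taken from the study, to show both readings; consult the paper for the definition the authors actually used.

    # Illustrative only: invented counts, not data from the study.
    def spam_rate(marked_spam, total):
        """Fraction of a group's emails that the filter junked."""
        return marked_spam / total

    # Hypothetical placements for one provider, chosen so that the
    # percentage-point reading reproduces the quoted 59.3 figure.
    left_rate = spam_rate(marked_spam=100, total=1000)   # 10.0%
    right_rate = spam_rate(marked_spam=693, total=1000)  # 69.3%

    # Reading 1: gap in percentage points between the two groups.
    point_gap = (right_rate - left_rate) * 100           # 59.3

    # Reading 2: relative gap, i.e. how much higher one rate is,
    # expressed as a percentage of the lower rate.
    relative_gap = (right_rate - left_rate) / left_rate * 100  # 593.0

    print(f"percentage-point gap: {point_gap:.1f}")
    print(f"relative gap: {relative_gap:.1f}%")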

You might speculate that spam filters are sensitive to particular kinds of words, and that this leads to bias because politicians on one side of the divide will use those words more often than opponents on the other side. However, the researchers allowed for this by matching subsets of emails sent by politicians on both the left and the right in groups that should eliminate these effects; a rough sketch of the general idea follows the quote below. The shocking result was that each spam filter still exhibited a clear political bias, even though this could only relate to the source of the email and not the message being conveyed. As the academics put it:

They mark emails with similar features from the candidates of one political affiliation as spam while [they] do not mark similar emails from the candidates of the other political affiliation as spam.
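
As promised above, here is a minimal sketch of that matching idea, with every record invented: it groups hypothetical emails that share a coarse content fingerprint, then compares spam outcomes by political affiliation within each matched group, so any gap that remains cannot be explained by the features used for matching. The paper should be consulted for the authors’ actual procedure.

    from collections import defaultdict

    # Hypothetical records: (affiliation, content_fingerprint, marked_spam).
    # A real study would match on many features; a coarse key stands in
    # for "similar content" here.
    emails = [
        ("left",  "fundraising-deadline", True),
        ("right", "fundraising-deadline", True),
        ("left",  "fundraising-deadline", False),
        ("right", "fundraising-deadline", True),
        ("left",  "volunteer-signup",     False),
        ("right", "volunteer-signup",     True),
    ]

    # Group emails by content fingerprint so each group holds messages
    # that look alike, differing mainly in who sent them.
    groups = defaultdict(list)
    for affiliation, fingerprint, spam in emails:
        groups[fingerprint].append((affiliation, spam))

    # Within each matched group, compare spam rates by affiliation.
    for fingerprint, members in groups.items():
        for side in ("left", "right"):
            outcomes = [spam for aff, spam in members if aff == side]
            if outcomes:
                rate = sum(outcomes) / len(outcomes)
                print(f"{fingerprint:22s} {side:5s} spam rate: {rate:.0%}")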

I expect this serious problem will largely be ignored by politicians on both sides of the divide because:

  • voters are not influenced by problems they have not heard about;
  • most politicians are fine with bias so long as they can find ways to make it work in their favor; and
  • even if they wanted to fix this problem, politicians would not know how.

This last point is the most important for risk professionals working in electronic communications of all types. This study should influence our thinking about how algorithms bias the information we receive, because it is so much easier to study a very large number of emails than to do a similarly large study of other forms of communication. Email is not new, but this study demands attention because it has demonstrated the enormous scale of a problem that few have even considered so far. If businesses like Google and Microsoft have constructed algorithms which exhibit severe bias in the emails they decide users should see, what are the chances of unbiased filters being applied to other forms of communication, despite political demands for far more filtering than ever before? Google’s code of conduct says “don’t be evil” whilst Bill Gates has publicly commented on the challenge of keeping people open to contrasting points of view. However, neither Google nor Microsoft has delivered fair and neutral email filtering algorithms in practice.

The first email was sent in 1971. By 2011 there were 1.4 billion email users worldwide, sending 50 billion non-spam emails each day. There are now almost 4 billion users of email, of whom 1.7 billion use Gmail. We have plenty of data about email, and the creators of spam filters also have the advantage of being able to review the entirety of a message before deciding if it should be filtered or not. But Google is still not clever enough to realize that an email sent from my Gmail account to the same Gmail account should not be redirected to my junk folder, no matter how many times I attempt to train their algorithms to behave differently. We can comfort ourselves that even if an email is wrongly labeled as spam it can still be retrieved, although few people ever check the contents of their spam folder. We have no such comfort when algorithms choose to block a phone call, or a tweet, or one of the other countless ways we communicate electronically.
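
The self-addressed case is striking precisely because the rule it violates is trivial to express in code. No real filter is claimed to work the following way; this is a hypothetical sketch in which content_score is an invented stand-in for whatever analysis a production filter performs.

    SPAM_THRESHOLD = 0.8

    def content_score(message: dict) -> float:
        # Invented stand-in for a real filter's content analysis;
        # deliberately suspicious so the bypass below is what matters.
        return 0.9

    def should_junk(message: dict, account_owner: str) -> bool:
        """Return True if the message belongs in the junk folder."""
        # A message the account owner sent to themselves needs no
        # content analysis at all: the sender is fully trusted.
        if message["from"] == message["to"] == account_owner:
            return False
        return content_score(message) >= SPAM_THRESHOLD

    note_to_self = {"from": "me@gmail.com", "to": "me@gmail.com"}
    print(should_junk(note_to_self, account_owner="me@gmail.com"))  # False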

I began this article by mentioning Elon Musk because his name has become central to a debate about whether privately-owned businesses should be trusted to make decisions about which messages should be blocked. This debate is a smokescreen, designed to distract the public from considering a much more fundamental issue. Governments are not going to make decisions about which messages to block because they are not even capable of specifying detailed rules about what they want blocked. Only authoritarian governments will choose to act in their own name. All other governments will prefer to complain about the blocking algorithms implemented by others, whilst offering no constructive suggestions for how to improve those algorithms. Worse than that, even if it were possible to craft criteria specific enough to distinguish the good from the bad and the permissible from the prohibited, governments would not want to be so specific for fear that many voters would disagree with their decisions, one way or another. It is easier for governments to outsource the problem, and the blame, to the private sector or some other body that is not directly under government supervision. Governments in ‘liberal’ democracies have already started down this same path when it comes to decisions about blocking voice calls too.

Voice is the most immediate and urgent form of communication. If you have an emergency then you make a phone call; you do not send an email or compose an SMS message. But governments in countries like the USA appear blind to the risks whilst pushing telcos to block more voice calls than ever before. They want private enterprise to decide which calls should be blocked, and only offer the vaguest guidance instead of firm rules delineating good from bad. Politicians do this whilst simultaneously chirruping about Big Tech, Facebook, Elon Musk, and many of the decisions already made by the private sector about who can say what. Musk is a clever guy and is as trustworthy a potential owner of Twitter as anyone, but he is no more capable of delivering the perfect filtering algorithm than anyone else, because we live in societies that cannot agree what constitutes perfection, or even the best approximation to perfection we might realistically achieve in practice. The researchers at North Carolina State University have given everybody reason to recalibrate their expectations when imposing rules to automatically distinguish good communication from bad communication. The sad truth is that their research will largely be ignored because so few want to hear their expectations are unrealistic.

“A Peek into the Political Biases in Email Spam Filtering Algorithms During US Election 2020” by Hassan Iqbal, Usman Mahmood Khan, Hassan Ali Khan and Muhammad Shahzad of North Carolina State University can be found here.

Eric Priezkalns
http://revenueprotect.com

Eric is the Editor of Commsrisk. Look here for more about the history of Commsrisk and the role played by Eric.

Eric is also the Chief Executive of the Risk & Assurance Group (RAG), an association of professionals working in risk management and business assurance for communications providers. RAG was founded in 2003 and Eric was appointed CEO in 2016.

Previously Eric was Director of Risk Management for Qatar Telecom and he has worked with Cable & Wireless, T‑Mobile, Sky, WorldCom and other telcos. He was lead author of Revenue Assurance: Expert Opinions for Communications Providers, published by CRC Press.
