YouTube AI Flags 80% of Unsafe Videos

Over the past few years YouTube has relied on a combination of human intervention and technology to “flag” content that is considered inappropriate in light of YouTube’s community guidelines. In particular, content can be flagged by YouTube’s automated flagging systems, by members of the Trusted Flagger programme (which includes NGOs, government agencies and individuals), or by ordinary users within the YouTube community.

Google/YouTube has recently released a new transparency report. This combines reports on copyright, the right to be forgotten, and government requests.

Content is flagged if it is sexual, spam or misleading, hateful or abusive, or violent or repulsive. The statistics given here exclude copyright requests.

The report specifies that about 80% of the videos that violated the site’s guidelines in 2017 had first been detected by artificial intelligence (AI) systems. Furthermore, of the roughly 8 million videos removed between October and December, approximately 6.6 million (just over 80%) were first flagged through automated flagging systems.

Compared with human “flaggers”, AI systems enable YouTube to enforce its policies more quickly and accurately. Google states that:

These systems focus on the most egregious forms of abuse, such as child exploitation and violent extremism. Once potentially problematic content is flagged by our automated systems, human review of that content verifies that the content does indeed violate our policies and allows the content to be used to train our machines for better coverage in the future. For example, with respect to the automated systems that detect extremist content, our teams have manually reviewed over two million videos to provide large volumes of training examples, which improve the machine learning flagging technology.
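The passage above describes a human-in-the-loop pipeline: automated classifiers surface potentially violating videos, human reviewers confirm or reject each flag, and confirmed decisions are fed back as training data. Purely as an illustration of that loop (the names below, such as Video, ModerationPipeline, auto_flag and human_review, are hypothetical and do not describe YouTube’s actual systems), a minimal sketch might look like this:

from dataclasses import dataclass, field
from typing import List


@dataclass
class Video:
    video_id: str
    flagged_by_model: bool = False
    confirmed_violation: bool = False


@dataclass
class ModerationPipeline:
    # Confirmed violations collected here act as labelled training data.
    training_examples: List[Video] = field(default_factory=list)

    def auto_flag(self, video: Video, model_score: float, threshold: float = 0.8) -> bool:
        # Step 1: an automated classifier flags potentially violating content.
        video.flagged_by_model = model_score >= threshold
        return video.flagged_by_model

    def human_review(self, video: Video, reviewer_confirms: bool) -> None:
        # Step 2: a human reviewer verifies that the flagged video really
        # violates policy before it is treated as a confirmed violation.
        if video.flagged_by_model and reviewer_confirms:
            video.confirmed_violation = True
            # Step 3: confirmed decisions become labelled training examples,
            # improving the classifier's coverage over time.
            self.training_examples.append(video)


pipeline = ModerationPipeline()
clip = Video("example-id")
if pipeline.auto_flag(clip, model_score=0.93):
    pipeline.human_review(clip, reviewer_confirms=True)
print(len(pipeline.training_examples))  # 1 confirmed example queued for retraining

The point of the sketch is simply that removal follows human verification of an automated flag, and that each verified decision can then be reused to retrain the flagging model.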

The transparency report is interesting because it shows how central automated systems are to the way YouTube works. Discussions around notice and takedown, filtering and related mechanisms have been heating up, and there is plenty of argument about how to handle the vast amount of content made available and shared online every day, both in relation to the submission of takedown requests and the handling of such requests. The report also raises the question of how much of the material taken down actually stays down, but that may be a new chapter in the never-ending story of online rights enforcement…

The original version of this article was posted on the IPKat by Nedim Malovic. It has been reproduced under a Creative Commons CC BY 2.0 UK Licence.

IPKat
Launched in 2003 as a teaching aid for Intellectual Property Law students in London, the IPKat’s weblog has become a popular source of material, comment and amusement. IPKat covers copyright, patent, trade mark, info-tech and privacy/confidentiality issues from a mainly UK and European perspective.

The IPKat team is Neil J. Wilkof, Annsley Merelle Ward, Darren Smyth, Nicola Searle, Eleonora Rosati, Merpel and David Brophy.