Externalities from content moderation

Well, it has been a while since my last update, apart from sharing classroom notes. Since the beginning of this year, I have spent a lot of time putting out fires in some projects while trying to conclude the final courses of my law degree. As a result, I did not achieve the kind of posting regularity that is healthy for a blog. Still, the effort paid off, and I ended up moving across the Atlantic to join the European University Institute as a doctoral researcher.

It feels strange to have good news during the dumpster fire that 2020 has been in general, and even more so in Brazil. Still, this new start gives me the opportunity to use this blog not just for personal updates, but also as an instrument for thinking through ideas that are not yet mature enough for more formal venues. (Of course, the opinions presented in this blog are still mine and do not reflect the positions of my employer or other people, unless otherwise noted.)

As a starting point, I would like to mention something that caught my attention while reading Giovanni Sartor and Andrea Loreggia’s recent report on content moderation (Sartor and Loreggia 2020). The report itself will be relevant for those interested in the interfaces between law and technology,1 but what I want to explore here is a brief tangent that occurred to me while reading their text.

Sartor and Loreggia (2020, 35) highlight how measures taken to maintain a lawful and healthy online community, such as content moderation techniques, might end up restricting or otherwise affecting the proper exercise of civil liberties. For example, Sap et al. (2019) show how discourse by minorities can end up being improperly flagged as hate speech, thus curtailing the right of individuals from minoritised groups to express themselves in the virtual environment. Since this is a socially undesirable outcome, Sartor and Loreggia (2020, 35) point out that compliance with European Union law will continue to require the adoption of such measures, but also the balancing of their community-fostering purposes against the rights and interests of those affected by them.
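To make the mechanism behind such false positives concrete, here is a deliberately naive sketch of my own (it is not the classifiers studied by Sap et al., and the lexicon and example post are invented): a filter that simply counts hits against a fixed list of terms, with no notion of speaker, intent or in-group usage, flags a reclaimed term used innocuously just as readily as an abusive one.

```python
# Deliberately naive illustration, not Sap et al.'s method: a fixed-lexicon
# filter with no notion of speaker, intent or in-group usage.
FLAGGED_TERMS = {"reclaimedterm", "exampleslur"}  # hypothetical lexicon


def naive_flag(post: str, threshold: int = 1) -> bool:
    """Flag a post when it contains at least `threshold` lexicon hits."""
    tokens = post.lower().split()
    hits = sum(token in FLAGGED_TERMS for token in tokens)
    return hits >= threshold


# An in-group, non-hateful use of a reclaimed term is flagged just the same
# as an abusive one, so the burden falls on the group that uses the term.
print(naive_flag("we say reclaimedterm among ourselves all the time"))  # True
```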

Within the European fundamental rights system, limits to freedom of expression are only accepted if they are prescribed by law and necessary in a democratic society (ECHR Article 10). It follows from this requirement that content filtering systems are lawful only if they incorporate suitable technical and organisational measures for constraining the impact of the system on freedom of expression.2 Conversely, a system that properly accounts for this balancing exercise could be deployed within the reach of European Union law, and so one might expect that the acceptance3 of such systems would lead to the development of new techniques and systems for content moderation.
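What such technical and organisational measures could look like in practice is, of course, an open question. The sketch below is only my own minimal illustration, with made-up thresholds, of one family of safeguards: removing content only at high confidence, escalating borderline cases to human review, and recording a rationale so that decisions can be contested.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    PUBLISH = auto()
    HUMAN_REVIEW = auto()        # borderline content goes to a person
    REMOVE_WITH_APPEAL = auto()  # removal always keeps an appeal path


@dataclass
class Decision:
    action: Action
    score: float
    rationale: str  # recorded so that the affected user can contest it


def moderate(score: float, remove_above: float = 0.95,
             review_above: float = 0.60) -> Decision:
    """Map a classifier score to an action, intervening only when confident."""
    if score >= remove_above:
        return Decision(Action.REMOVE_WITH_APPEAL, score, "high-confidence match")
    if score >= review_above:
        return Decision(Action.HUMAN_REVIEW, score, "uncertain, escalated")
    return Decision(Action.PUBLISH, score, "below intervention threshold")
```

The point is not the specific thresholds, but that restraint, escalation and contestability are properties of the system’s design rather than afterthoughts.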

Given the complexity involved in meeting the technical and legal standards for filtering, many moderation systems will rely either on off-the-shelf components or, as Sartor and Loreggia (2020, 54) describe, on external providers of content-moderation-as-a-service. But the activities of the providers of such solutions are not necessarily restricted to European countries,4 which means that content filtering technologies developed in response to European requirements might be deployed elsewhere.
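To picture how this travels, imagine a thin client for a fictional content-moderation-as-a-service API (the endpoint, parameters and options below are invented for illustration): the safeguards discussed above end up as configuration that each deployer, in each jurisdiction, is free to set differently.

```python
from typing import Optional

import requests  # client for a fictional moderation-as-a-service endpoint


class ModerationClient:
    """Thin wrapper around a made-up provider API; the point is that the
    safeguards live in configuration controlled by whoever deploys it."""

    def __init__(self, api_url: str, escalate_uncertain: bool = True,
                 appeal_endpoint: Optional[str] = "/appeals"):
        self.api_url = api_url
        self.escalate_uncertain = escalate_uncertain  # human review can be switched off
        self.appeal_endpoint = appeal_endpoint        # or dropped entirely

    def check(self, text: str) -> dict:
        resp = requests.post(f"{self.api_url}/moderate", json={
            "text": text,
            "escalate_uncertain": self.escalate_uncertain,
        }, timeout=10)
        resp.raise_for_status()
        return resp.json()


# The same off-the-shelf component, two very different deployments:
eu_client = ModerationClient("https://moderation.example", escalate_uncertain=True)
elsewhere = ModerationClient("https://moderation.example",
                             escalate_uncertain=False, appeal_endpoint=None)
```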

While this reuse of technical solutions is, in general, a very positive aspect of technological development, technologies that are (mostly) harmless in their original contexts might produce severe consequences if deployed without suitable measures and safeguards. As authors such as Evgeny Morozov have argued for a long time, and as current events keep reminding us, authoritarian governments are quite adept at leveraging technological innovation in their attempts to consolidate their grip on the general population. But even within democratic(-ish) systems, differences between political and legal cultures may end up removing some of the safeguards that contain the harmful side effects of specific technologies, as shown by the Schrems II decision and how it has been read on both sides of the North Atlantic.5 In both cases, the development of technologies that are friendly to human rights in their original jurisdictions will adversely impact the human rights of people in other jurisdictions.

This is a problem that cannot be solved by purely technological approaches, as technical safeguards can be removed or sidestepped in response to political imperatives, legal requirements or even economic calculation, among other factors. A heavy-handed solution such as banning the use of certain technologies without their accompanying safeguards would be of little use, as suggested by the debates on export restrictions for cryptographic methods. As I see it, this is not really a problem that is unique to technology, but rather a consequence of the more general challenges of enforcing human rights at a global scale. Nevertheless, it worries me as somebody who is working on data protection by design.

References

Giovanni Sartor and Andrea Loreggia, The impact of algorithms for content filtering or moderation (European Parliament report, 2020).

Sap et al., “The Risk of Racial Bias in Hate Speech Detection” (2019) Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 1668.


  1. Since I have been working with Professor Sartor on some projects, you might want to take my recommendation cum grano salis, but I think this report manages to present technical issues and their legal relevance in an accessible way. ↩︎

  2. The language of technical and organisational measures is directly connected to the GDPR (Article 25, on data protection by design and by default), but the concept behind it has also been recognised by the ECtHR: in I v. Finland (Application No. 20511/03), the Court ruled that Finland had failed to adequately protect the private life (ECHR Article 8) of hospital patients by not adopting best practices for data protection. Therefore, failure to adopt suitable measures for protecting freedom of expression might also constitute a failure to adequately protect this fundamental right. ↩︎

  3. Or even an obligation, e.g. under Article 17 of the Copyright Directive. ↩︎

  4. In fact, given the current dynamics of AI technological development, it is highly likely that high-end technology providers will also be active in other markets, especially given the role that the United States and China play in global AI research. ↩︎

  5. As another example, the German NetzDG has been cited as a major influence for the Brazilian Fake News Bill, which has been criticised as providing backdoors for the suppression of political discourse. ↩︎
