General reading impressions - 2020 Week 5

This week’s post is a little late, as my house spent most of yesterday without power. So, here come the usual comments on texts.

Papers

Ann-Sophie Barwich. 2019. The Value of Failure in Science: The Story of Grandmother Cells in Neuroscience. Frontiers in Neuroscience 13.

Provides an interesting analysis of the concept of a grandmother cell and its empirical and conceptual shortcomings but, more importantly, makes a solid case for the importance of studying scientific failures. Focusing only on successful ideas and models, according to Barwich, can lead to distorted images of science, impacting scientific practice and communication. Furthermore, analysing modes of scientific failure may provide insights into our criteria for success, into the conceptual and evaluative foundations of practice, and, in the case of partial failures, into why a failed idea still managed to contribute something.

Christian List. 2011 [2005]. Group Knowledge and Group Rationality: A Judgment Aggregation Perspective. In: Social Epistemology: Essential Readings, edited by Alvin Goldman and Dennis Whitcomb.

Introduces the use of judgment aggregation techniques for the formation of group knowledge. By discussing two problems — how to generate consistent judgments and how to derive knowledge from those judgments — the author shows that results from judgment aggregation theory may be useful for understanding and designing institutional structures for epistemic agents.
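
List’s two problems can feel abstract, so here is a minimal sketch of the classic discursive dilemma that motivates the consistency problem. This is my own toy Python example, not code from the paper: three judges hold individually consistent views on two premises and their conjunction, yet proposition-wise majority voting produces an inconsistent group position, while a premise-based procedure restores consistency.

```python
# Toy illustration of the discursive dilemma in judgment aggregation.
# Each judge holds a logically consistent view on p, q, and (p and q).
judges = [
    {"p": True,  "q": True,  "p_and_q": True},
    {"p": True,  "q": False, "p_and_q": False},
    {"p": False, "q": True,  "p_and_q": False},
]

def majority(prop):
    """Majority judgment on a single proposition."""
    return sum(j[prop] for j in judges) > len(judges) / 2

# Proposition-wise majority: every proposition is decided independently.
prop_wise = {prop: majority(prop) for prop in ("p", "q", "p_and_q")}
print(prop_wise)
# {'p': True, 'q': True, 'p_and_q': False} -- an inconsistent group view

# Premise-based procedure: vote only on the premises,
# then derive the conclusion logically.
premise_based = {"p": majority("p"), "q": majority("q")}
premise_based["p_and_q"] = premise_based["p"] and premise_based["q"]
print(premise_based)
# {'p': True, 'q': True, 'p_and_q': True} -- consistent by construction
```

The premise-based procedure guarantees consistency, but it overrides the majority view on the conclusion; trade-offs of this kind are precisely what make the design of epistemic institutions non-trivial.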

Lucas Miotto. 2021. What Makes Law Coercive When It Is Coercive? Archiv für Rechts- und Sozialphilosophie, forthcoming.

A necessary-and-sufficient-conditions account of what is coercive about typical legal systems. Since this paper is still a preprint, and it explicitly asks that the draft not be cited, I will try to keep my comments general, but anybody interested in analytical accounts of coercion should read this one. I like that the paper provides some tools for bridging the gap between abstract considerations about the law and empirical legal studies, as the strawman versions of each field tend to be dismissive of one another.

While I am not really convinced by the author’s starting premises regarding the nature of coercion, his proposals for describing the mechanisms of coercion in typical legal systems are illuminating, especially in their emphasis on the systemic nature of coerciveness. However, while the author maintains that the consistent use of coercive sanctions and enforcement mechanisms is too weak a condition for establishing the coerciveness of a legal system, I am convinced that it is actually too strong a demand: a belief in selective enforcement might increase, rather than decrease, citizens’ perceptions of the authorities’ disposition to enforce legal mandates, especially among those groups that are systematically targeted by that selective enforcement. Nevertheless, this is an insightful paper about how legal coercion operates.

Bart van der Sloot. 2017. Decisional privacy 2.0: the procedural requirements implicit in Article 8 ECHR and its potential impact on profiling. International Data Privacy Law 7(3), pp. 190–201.

An overview of how the idea of decisional privacy — the right to make decisions about one’s private life without undue interference — has been incorporated into the case law of the European Court of Human Rights, shaping the interpretation of Article 8 ECHR.

Jeroen van den Hoven et al. 2019. Privacy and Information Technology. The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta.

A good introduction to how information and communication technologies present new opportunities and challenges for our conceptions of privacy. It surveys the existing currents of thought (a survey that can be supplemented by the main SEP article on Privacy), the various technological developments that affect privacy, and the possibilities raised by privacy-enhancing technologies.

FAT* ’20

FAT* is a major conference on fairness, accountability, and transparency in algorithmic systems. Unfortunately, I was not able to attend this year’s edition, which took place a few days ago in Barcelona, but the proceedings are freely available online. So, I shall present here my favourite papers from it.

Chelsea Barabas, Colin Doyle, J B Rubinowitz, and Karthik Dinakar. 2020. Studying up: reorienting the study of algorithmic fairness around issues of power.

A call to action, urging data scientists to “study up”, that is, to direct their attention to the most powerful groups in society rather than uncritically accepting those groups’ framings of situations. That would entail shifting focus away from questions that disproportionately affect vulnerable populations and towards alternative framings — such as eschewing studies of recidivism rates in favour of metrics for evaluating judge behaviour, which could, at least in theory, support better decisions. As the article’s case studies show, any attempt at studying up will face technical and non-technical challenges, but this change of perspective allows for a broader view of what a fair system actually is.

Elettra Bietti. 2020. From Ethics Washing to Ethics Bashing: A View on Tech Ethics from Within Moral Philosophy.

Recapping from a previous post:

Much of the discourse about ethics in artificial intelligence has been co-opted (or designed from the outset) by actors interested in avoiding substantial regulation of AI technology. Despite claims to the contrary, that does not mean that AI ethics is necessarily a tool for ethics-washing corporate interests; still, enough instances of that phenomenon have been observed that the field as a whole has attracted active criticism, an attitude this paper describes as ethics bashing.

Faced with those two extremes — ineffective or even harmful discourse with a veneer of ethical legitimacy versus claims that AI ethics is always harmful — Bietti outlines their respective modes of failure and makes an elegant case for not throwing the baby out with the bathwater. The resulting image of AI ethics shows how the field can be adequately scrutinised while still providing insights about the impact of technology on society and what should be done about it.

Bogdan Kulynych, Rebekah Overdorf, Carmela Troncoso, and Seda Gürses. 2020. POTs: protective optimization technologies.

Approaches the limits of fairness from a computer science perspective. A key contribution of this paper is to highlight that an emphasis on algorithms alone is misleading, as it casts aside many possible sources of harm, such as those arising from the organisational context in which such systems are used.

The case studies provided by the authors are enlightening, but they raise one key question that warrants further discussion: who is to be protected by POTs? In one of the cases, the authors mention the development of technical solutions for keeping Waze routes from overloading traffic in small towns. This approach empowers a set of stakeholders whose interests were neglected by Waze’s product, at some cost to drivers. This POT, as is usually the case, had to make a choice about how to weigh the interests of different groups, and truly fair AI will require mechanisms for evaluating and adapting that balance of interests, as some situations will inevitably produce conflicts with no easy solution.

Maranke Wieringa. 2020. What to account for when accounting for algorithms. A systematic review on algorithmic accountability.

By analysing 242 English-language papers on topics related to algorithmic accountability, published between 2008 and 2018, the author maps the various treatments of the subject onto the definition of accountability proposed by Mark Bovens and its five salient elements:

  1. the accountable actors;
  2. the forum for accountability;
  3. the relationship between actors and forums;
  4. the content and criteria of the account;
  5. the possible consequences of the account.

With that mapping, it is possible to see both the trans-disciplinary nature of work in algorithmic accountability and the areas that are currently underdeveloped.