General reading impressions - 2020 Week 3
Once more unto the breach… this week has not been very productive for my reading goals: I spent most of my time writing, and re-reading the material I needed for writing, so there was little new.
Why did Foucault dedicate an entire course to what is usually called “neoliberalism”? After all, his analysis of the ordo-liberal and Chicago School thinkers has led detractors to outright label him a neo-liberal, and it somewhat bothers most of his readers, who occasionally try to explain away any appearance of Foucault finding merit in positions labelled as neo-liberal.
De Lagasnerie shows, in his brief monograph, how Foucault perceived neoliberalism not as ideological clothing for an atomised society, made uniform by market relations. Instead, The Birth of Biopolitics reveals how neoliberalism — by enshrining markets as the only social structures capable of coping with the plurality of knowledge, interests, and modes of life — provides a critique of governmentality which aims to constrain both sovereignty and the psychological and psychiatric forms of control.
During his exposition, de Lagasnerie shows how Foucault leveraged such positive claims about neoliberalism as a means of showing how radical action can constitute itself against the current order by actually producing something new, rather than becoming a reactionary proposal longing for pre-capitalist societies. Given that I do not belong to the political groups to which this alternative is directed, I cannot comment on that aspect of things. However, the book surely offers a moment of reflection for neo-liberals, especially after the resounding failure of “fusionist” approaches, and shows how a non-conservative neoliberalism might be possible and useful.
Giorgia Lupi and Stefanie Posavec, two data visualisation experts, exchanged a postcard per week, each time with a different graph showing an aspect of their lives: how much time they spent on certain tasks, how many clothes they owned, and so on. To compile the results of their year-long project, they published this book, including the beautiful visualisations, some preparatory material, and tips on how to build great graphs.
It is a short, pleasant read that will be useful for anybody who works with data visualisation or wants a study ($n = 2$) of the datafication of life. From a more instrumental perspective, I think this book does a great job of presenting three aspects of the information society:
- how many aspects of our lives can be datafied;
- how much this quantification of life can reveal about us, either to ourselves or third parties; and
- how data analytics can still be meaningful even without massification.
Of course, the book does not mean to present general answers to how we, as a society, should deal with data. But it is an elegant reminder of how, under the right conditions, quantification does not entail losing touch with beauty and the human condition in general, and, by extension, of how our relationships with technology might be (re)built along different lines than our current arrangements.
Guido Napolitano. 2012. Guida alla ricerca per i giovani giuristi. IRPA.
This brief e-book (83 pages) offers some advice on legal scholarship and how to write well. It covers what one could call the research pipeline, from identifying your first research theme to getting a job as a professor in the Italian system, including along the way instructions on how to conduct research, how to write a readable and interesting text, how to seek and deal with feedback, and how to navigate the social aspects of the research profession.
I started reading it to practice my Italian, as I had never read anything in the language longer than a couple of pages, and I found the book accessible at my (currently not that high) level of linguistic competence. Furthermore, some of the advice was actually useful — such as how to outline a sustainable research programme in law, and how to engage with proofreaders. Much of it aligned with what I take to be generally applicable best practices, while only a small part felt specific to the Italian or legal research contexts.
Elettra Bietti. 2020. From Ethics Washing to Ethics Bashing: A View on Tech Ethics from Within Moral Philosophy. Proceedings of ACM FAT* Conference (FAT* 2020).
Much of the discourse about ethics in artificial intelligence has been co-opted (or designed from the outset) by actors interested in avoiding substantial regulation of AI technologies. While, despite claims to the contrary, this does not mean that AI ethics is necessarily a tool for ethics-washing corporate interests, enough instances of that phenomenon have been observed that the field as a whole has attracted active criticism, which this paper describes as ethics bashing.
Faced with those two extremes — ineffective or even harmful discourse with a veneer of ethical legitimacy versus claims that AI ethics is always harmful — Bietti manages to outline their modes of failure, and makes an elegant case for not throwing the baby out with the bathwater. The resulting image of AI ethics shows how the field can be adequately scrutinised and, at the same time, still provide insights about the impacts of technology on society and what should be done about them.
Abeba Birhane and Jelle van Dijk. 2020. Robot Rights? Let’s Talk about Human Welfare Instead. 2020 AAAI/ACM Conference on AI, Ethics, and Society (AIES’20), forthcoming.
This paper makes two main points. First, it offers a post-Cartesian framework, based on phenomenological approaches and embodied cognition, showing that the idea of “robot rights” rests on a distorted account of how humans relate to such artefacts. Building on that, the second part shows how the genuinely relevant issues in AI ethics, which an emphasis on robot rights can crowd out, are very human matters: the violation of human rights through the use of AI, machine bias and discrimination, and the human labour that happens “under the hood” of AI systems.
I am very sympathetic to the authors’ position of treating “robot rights” as a non-issue, but I must admit that I remain unpersuaded by their argument for that claim, especially when it comes to the use of the Milgram experiment (Subsection 2.3). That is not to say that the argument is uninteresting: it presents an attractive framing of human-technology relations, and highlights the gender, race, and geographical frames that shape AI ethics (in particular, see Section 2.2).
Finally, even if one does not buy their argument regarding the impossibility of robot rights, Section 3 provides a persuasive discussion of how current AI practices threaten human welfare, with special attention to precarious working conditions in the training of AI systems. With its powerful portraits, Section 3 does a much better job than Section 2 of showing how the discussion of robot rights is not an emancipatory theme, but may in fact contribute to the de-humanisation of actual human beings. Even so, both parts contain relevant insights for anyone interested in AI ethics.
Marie Hicks. 2019. Hacking the Cis-Tem: Transgender Citizens and the Early Digital State. IEEE Annals of the History of Computing, January–March 2019.
An analysis of a historical case: how the ongoing digitisation of the British welfare state during the 1960s created obstacles for transgender people seeking to correct the gender listed on their government-issued IDs. The paper shows clearly how software design processes, both by deliberate choice and by thoughtless replication of existing biases, can create hardships for people, turning complex realities into coarse-grained distinctions for the sake of computational simplicity and bureaucratic convenience. A reminder that algorithmic bias has been with us for longer than usually considered.
C. Estelle Smith et al. 2020. Keeping Community in the Loop: Understanding Wikipedia Stakeholder Values for Machine Learning-Based Systems. CHI 2020, forthcoming.
The paper provides an initial mapping of convergent values within the Wikipedia community and, based on that, recommendations for ML implementation, which were then validated with Wikipedia editors. An elegant example of how value-sensitive design can be used to build machine learning systems that are compatible with a community’s demands.
Barry Strauch. 2018. [Ironies of Automation: Still Unresolved After All These Years](https://ieeexplore.ieee.org/document/8013079). IEEE Transactions on Human-Machine Systems 48(5), pp. 419–433.
Strauch presents a tribute to Lisanne Bainbridge’s classic 1983 paper, Ironies of Automation (a must-read for anybody working on automation), exposing the context in which the paper was produced, how it was received, and further developments since then. Thanks to that, Strauch’s paper is an excellent review of the automation literature, which is essential for understanding the use of AI for automated decision-making.