General reading impressions - 2020 Week 1

Happy new year, everybody! As part of my efforts to blog more than once in a decade, I have decided to try to write weekly posts about my long reads (usually books or academic papers, but possibly also journalism and the like). Each text gets a short description of its contents and some impressions, but particularly thought-provoking pieces might warrant posts of their own.

With any luck, my lists will provide interesting suggestions to readers with similar interests. At the very least, they will give me a clearer picture of what I am reading; in 2019, for instance, I barely read any fiction.

Books

Samuel Beckett. 2003. Últimos trabalhos [Last Works]. Independente.

A bilingual edition of Beckett’s last three works — Worstward Ho, his final prose piece Stirrings Still, and his poem What Is the Word? — including translations into European Portuguese by Miguel Esteves Cardoso. Of the three, Worstward Ho is the most impressive (though definitely not a good introduction to Beckett), and I also enjoyed the poem. The translation itself felt a bit stiff; for more on that, I recommend my fiancée’s paper comparing the Brazilian and European Portuguese translations of Worstward Ho with the source material, which will be published soon.

Gilles Deleuze. 2018 [1963]. A Filosofia Crítica de Kant [La philosophie critique de Kant]. São Paulo: Autêntica. Available from: Amazon BR.

Deleuze’s fascinating reading of Kant’s Critiques, centred on the evolving relationships among the faculties across the three books. It was published before many of Deleuze’s most famous works, especially his collaborations with Guattari, but he already has a very clear voice, treating Kantian themes with his own questions in mind while remaining connected to the source texts. The Brazilian Portuguese translation was very readable, a bit more so than the European Portuguese version from Edições 70, though I admit it was a little odd to see Kant quoted in translations made from Deleuze’s French quotations rather than from the original works.

Richard Earl. 2019. Topology: A Very Short Introduction. Oxford: Oxford University Press. Available from: Amazon BR, Amazon US.

It is an accessible read, but not a trivial one. As somebody with no previous study of topology, I felt that the book conveys a good sense of the relevance and beauty of the field. A reader with little mathematical experience can probably follow it if they are willing to put in the work, but the concepts and proofs will demand some effort even from those who already have some exposure to mathematics. This is not the author’s fault, as he clearly presents what one must know to face the challenges posed by the book, but it is still a lot of content to cover within the limits of a Very Short Introduction. Even a reader who engages only lightly with the book, though, should come away with a nice panoramic view of topology.

Papers

Theo Araujo, Natali Helberger, Sanne Kruikemeier, and Claes H. de Vreese. 2020. In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI & Society, Open Forum.

The authors conducted a survey-based experiment ($N$ = 958) with a representative sample of the Dutch population, concluding that people are concerned about the risks of AI and hold mixed opinions about its fairness and usefulness — but, within the domains of health, justice, and journalism, respondents judged automated decision-making more favourably along those lines than they judged the use of AI in general. The experimental design accounts for individual factors such as a person’s confidence in managing their own privacy (higher confidence was associated with greater perceived usefulness and lower perceived risk) and their belief in the need for economic equality (stronger belief, rather surprisingly, was associated with greater perceived fairness and usefulness of AI).

The authors offer a sober analysis of their results and of the limits of what one can infer from the experiment. Beyond the experiment itself, the paper’s literature review will also be of interest to those working with automated decision-making, thanks to tidbits such as the discussion of how algorithmic aversion and algorithmic appreciation, both diagnosed in previous studies, can coexist, even though the paper itself focuses on algorithmic appreciation. A must-read for anyone interested in the social and legal aspects of AI.

Andrew Feenberg. 1992. Subversive Rationalization: Technology, Power, and Democracy. Inquiry 35 (3–4), pp. 301–322.

Feenberg responds to economic and technological determinism, claiming instead that processes of “subversive rationalization” may detach technological development from the industrial model it currently supports. Through historical and sociological analyses, Feenberg shows that technology’s role in reinforcing authoritarian, hierarchical structures is not a necessary feature of how technology is constituted; that contingency may instead be transformed by extending democratic practices to technology — involving users and other stakeholders rather than relying solely on centralised design — and, therefore, to the social domains where technology plays a mediating role. Escaping the rationality imposed by modern technology is thus not only necessary but also possible without eschewing technology altogether, even if it requires substantial change in sociotechnical configurations.

Even though Feenberg defends social changes that go well beyond my own political positions, I find his points about the contingent nature of technology persuasive; that position avoids both the naïve idea that technology is neutral and the Panglossian and Luddite determinisms. As Cristiano Cruz puts it, it thus becomes possible to hold a “critical optimism” about the sociotechnical capabilities that could be developed, without denying the evident problems of the current technological framework, such as the environmental crisis or the discriminatory effects of facial recognition technologies.

Roman Frigg. 2006. Scientific Representation and the Semantic View. Theoria 55, pp. 49–65.

Here, Frigg argues that the semantic view of scientific theories fails to explain what a scientific representation is. For him, any such explanation must satisfy two requirements:

  1. Showing how one can learn, through models, about the things that they represent;
  2. Showing how models may incorrectly represent something.

Semantic accounts of representation rely on (partial) isomorphisms, but isomorphisms have properties that representation does not, and things may be isomorphic without being representations of one another. Furthermore, an object might be represented by multiple models (though here Frigg dismisses a bit too quickly the Suppesian claim that models represent data models rather than objects, which has been used, e.g. by Valter Alnis Bezerra, to provide other descriptions of the representation relation). Those problems are present not just in strictly structuralist versions of the semantic view, but also in more relaxed views such as Giere’s similarity account of the model–target relation.
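
To make the mismatch concrete, here is my own gloss of the standard logical point (not a quotation from the paper). Isomorphism is an equivalence relation:

  $A \cong A$ (reflexivity);
  $A \cong B \Rightarrow B \cong A$ (symmetry);
  $(A \cong B) \wedge (B \cong C) \Rightarrow A \cong C$ (transitivity).

Representation has none of these features in general: a model $M$ represents its target $T$, but $T$ does not thereby represent $M$, and nothing represents itself in any interesting sense. So representation cannot simply be (partial) isomorphism.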

Since his requirements for scientific representation seem reasonable to me, this paper is interesting as a bar-setting exercise for any theories that aim to explain it.

Susan Haack. 2019. “Scientific Inference” vs. “Legal Reasoning”? — Not So Fast! Problema. Anuario de Filosofía y Teoría del Derecho 13, pp. 193–213.

Haack contrasts the enterprises of science and law, tracing the differences between them not to a “scientific method” unique to the inferences made by scientists, but to the different purposes, constraints, and cultural contexts of each practice. According to the author, a better understanding of those differences would help address the various issues that arise at the interface of law and science — e.g., the debate on the validity of statistical evidence. I tend to agree with that claim, but I think that a proper mapping of this gap, especially in Civil Law countries, will need to look not just at legal practice but also at the role played by legal scholarship in shaping that practice.

Remco Heesen, Liam Kofi Bright, and Andrew Zucker. 2019. Vindicating Methodological Triangulation. Synthese 196(8), pp. 3067–3081.

This paper presents a defence of methodological pluralism as a form of triangulation: if one is uncertain about the reliability of a method, as one usually is, then reaching similar results with different methods gives us more confidence in those results. The authors provide a formal model of methodological triangulation, produced by abstraction from a triangulation approach used by W. E. B. Du Bois, which does not require some of the presuppositions criticised by triangulation sceptics, such as full independence between the methods, shared presuppositions beyond applicability to the problem at hand, or the application of all the methods by the same researchers.
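
As a toy illustration of the underlying intuition, here is a minimal Bayesian sketch of my own; unlike the authors’ model, it does assume that the methods err independently, so it should be read as a simplification rather than as their framework:

    # Toy Bayesian sketch of triangulation: agreement between unreliable
    # methods raises our confidence in the shared result. Assumes the
    # methods are conditionally independent, which the authors' actual
    # model does not require.
    def posterior(prior, reliabilities):
        """P(hypothesis | every method reports it true), where each
        method reports correctly with its given reliability."""
        like_true, like_false = 1.0, 1.0
        for r in reliabilities:
            like_true *= r        # method says "true" when H is true
            like_false *= 1 - r   # method says "true" when H is false
        return prior * like_true / (prior * like_true + (1 - prior) * like_false)

    print(posterior(0.5, [0.7]))       # one 70%-reliable method: 0.7
    print(posterior(0.5, [0.7, 0.7]))  # two agreeing methods: ~0.84
    print(posterior(0.5, [0.7, 0.6]))  # a weaker second method: ~0.78

Note that even the weaker second method raises the posterior above what the first method achieves alone, which is the core of the triangulation intuition.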

As the authors point out, it remains to be seen whether methodological triangulation would be beneficial if widely adopted (rather than used only by individual researchers in specific circumstances), and what the effects of correlated error between methods would be, but this seems to be a promising framework for further exploration.

Drew McDermott. 1976. Artificial Intelligence Meets Natural Stupidity. SIGART Newsletter 57.

This brief paper (six double-column pages) describes three ways in which artificial intelligence research ends up overselling itself: the use of mnemonics and terminology that promise more than is actually implemented, an uncritical understanding and attempted emulation of natural language, and the belief that identifying the issues in a preliminary version of a program amounts to designing the new, improved version. While the actual practices of artificial intelligence have changed in the 40+ years since this paper — for example, with the substantial role now played by machine learning — those issues persist to a greater or lesser extent, and they show that outsiders are not the only ones with unclear notions of what artificial intelligence can or cannot do.
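
The first of those pitfalls, which McDermott calls “wishful mnemonics”, is easy to illustrate with a toy sketch of my own (the function below is not an example from the paper):

    # A "wishful mnemonic": the name promises understanding, but the
    # body only performs a shallow keyword match.
    def understand_sentence(sentence):
        """Despite the name, this merely tests for known keywords."""
        known_concepts = {"block", "pyramid", "table"}
        return any(word in known_concepts for word in sentence.lower().split())

    print(understand_sentence("Put the block on the table"))    # True
    print(understand_sentence("Colourless green ideas sleep"))  # False

McDermott’s suggested discipline, if I remember the paper correctly, is to use deliberately neutral names (such as his G0034) until the program has earned the grander label.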

Ubaldus de Vries. 2013. Kuhn and Legal Research: A Reflexive Paradigmatic View on Legal Research. Law and Method 3(1).

De Vries describes the modern legal research community in terms of an established research paradigm and then attempts to identify anomalies that would suggest the need for a paradigm shift. The author acknowledges the difficulties involved in directly applying the Kuhnian model outside the natural sciences, proposing instead to use the description of a legal research community as a reflexive tool.

His treatment of candidate anomalies in legal research is somewhat more normative than I am comfortable with — especially considering that, as Kuhn aptly describes, not all anomalies end up dooming a paradigm. And, given De Vries’s reflexive aims, it would have been interesting to pay more attention to Kuhn’s description of normal science and its disciplinary matrices. Still, his description of a paradigm centred on methodological nationalism, even if somewhat coarse-grained, shows that approaches from the philosophy of science can be cognitively useful for understanding legal scholarship, even as one recognises the substantial differences between the practices of law and science.
