Should one learn to program before discussing AI regulation?

Arrigo Sacchi is a famous football coach who led Milan to back-to-back European titles and took the Italian national team to the 1994 World Cup Final.1 Unusually for a world-class manager, Sacchi had never played football professionally, which led unsympathetic outsiders to question his credentials. His response to this line of criticism, however, became famous in footballing circles: “A jockey has never been a horse, either”.

This one-liner comes to mind every time I read arguments along the lines of “Everybody should learn to program”. The general version of this idea has been critiqued elsewhere, but, as somebody with a tech background who is now working on law, I occasionally see good lawyers shying away from deeper engagement with AI regulation because they lack a technical perspective on the subject.

I would be lying if I said that my computer science background does not help me in my legal studies, especially when it comes to understanding how general propositions might or might not be feasible in practice. However, many relevant legal discussions on AI do not rest upon technical nuances,2 and there is a lot of pertinent, non-technical work that must be done to improve our understanding of AI systems as technologies that are embedded in, and given meaning by, their social and organisational contexts.3 In fact, one might say that familiarity with software development can easily lead one to the politician’s fallacy of implementing technical “solutions” that do not address the actual problems at hand.

So, whenever someone asks me whether they should learn to program, I always encourage them to see if they enjoy the experience of programming.4 However, the skills that one develops as a programmer are not necessarily the ones needed for an informed understanding of how AI works. That is why I would like to highlight some materials that might be useful for acquiring a solid, external perspective on AI.

My go-to recommendation is a course by Reaktor and the University of Helsinki, Elements of AI. This course requires no programming or mathematical background, as it aims to explain the concepts used to build intelligent systems.5 By focusing on a conceptual approach, the course makes the ideas behind AI systems accessible to a general audience. It also provides knowledge that will remain useful even after current technologies become outdated, since the fundamental concepts will still apply.

If one wants to try their hand at programming, but with the aim of participating in the legal and policy debates on AI, then CS50 for Lawyers can be a good course. It adopts a top-down approach, discussing socially relevant issues with a heavy technical component and then introducing the computer science methods needed to understand the subject at hand.

Finally, I would like to point towards three resources that do a good job of meshing technical and external perspectives. Playing with the data, by David Lehr and Paul Ohm (2017), is a law review (i.e. huge) paper that discusses the development cycle of machine learning systems in order to show the legal relevance of various technical factors and choices. Brkan and Bonnet’s (2020) paper on the right to an explanation follows a similar line, leveraging technical discussions about algorithmic transparency for the purposes of the GDPR. And Bryson (2020) provides an overview of technical matters that are relevant to the debates on AI ethics.

If I were a bit less lazy, I would say that I plan to update this post as I come across other materials that might be useful for non-programmers dealing with AI. But it is my belief that these resources provide a good starting point, not just for understanding the technical dimensions of AI debates, but also for building bridges with technologists and enabling a beneficial division of labour.


  1. As a personal aside, that Final and the Brazil v Sweden game in the semifinals are among my earliest memories. ↩︎

  2. There are, however, valuable lines of research on the application of artificial intelligence to legal issues. Working on those research questions, and their applications, requires more direct engagement with technical matters than the sort of regulatory questions I consider in the main post. ↩︎

  3. See, e.g., Feenberg’s Subversive rationalization (1992) and Ananny and Crawford’s Seeing without knowing (2018). ↩︎

  4. While there is a lot of beauty in programming, not enjoying this activity is perfectly reasonable. ↩︎

  5. My wife, who comes from a Literary Theory background, enjoyed the course and found its approach accessible. ↩︎
