Privacy and Data Protection Constraints to Automated Decision-Making in the Judiciary


Competent authorities in many countries are deploying automation tools in judicial decision-making processes. The scope of application of automated judicial systems, often based on artificial intelligence (AI), is broad, ranging from the improvement and acceleration of organisational or office-based court tasks to the automation of substantive judicial decisions. Ongoing and future judicial automation in the European Union (EU) requires in-depth legal analysis from the perspectives of data protection and privacy law, both of which raise crucial legal issues in the judicial automation landscape. This paper therefore aims to identify the constraints that EU data protection and privacy law impose on the use of automated decision-making within judicial proceedings. It first describes two models of judicial automation, distinguished by the degree of automation and the level of human involvement in the decision-making process: fully automated decision-making and partially automated decision-making, the latter comprising two sub-models. The paper then discusses the legal bases for both models of automated personal data processing in the judiciary, with particular emphasis on the French national regulation, based on Article 22 of the GDPR, which imposes the sharpest restriction on judicial automation within the European Union. Even where lawful grounds for judicial automation exist, legal requirements constrain and limit the operation of any automated decision-making system. Acknowledging this fact, the paper concludes by analysing three perspectives that introduce limits to automation (data protection regulation, privacy law, and technological constraints), as well as the safeguards that each provides for the rights and interests of those affected by judicial automation.

PLSC-Europe 2020 (Cancelled due to Covid)