Human in the loop of automated decision-making

Marco Almada, BSCS, MCompEng

Researcher, Lawgorithm

LL.B. student, USP Law School

www.marcoalmada.com


Overview

  • An outline of automated decision-making
  • How automation impacts tax decision-making
  • Information about automated decisions
  • Effective control of automated decisions
  • Technological and legal tools

Why is automated decision-making relevant in the tax domain?

  • Taxpayers rely more and more on automated systems for:
    • Taxable events: e.g. High Frequency Trading
    • Compliance with tax obligations
  • States also (could) make use of such technologies:
    • Tax authorities: machine learning for oversight
    • Judges: tools for case management (e.g. the Socrates project)

The decision-making loop

Partial or full automation can change the steps that lead to a decision.

  • Human in the loop
  • Human on the loop: human override
  • Human out of the loop
  • Human under the loop: humans conform to automated decisions
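
A minimal Python sketch of the first three configurations as control-flow patterns; every name and the toy risk rule are hypothetical, not drawn from any real system. "Human under the loop" is a social pattern rather than a control-flow one, so it appears only as a comment.

    def automated_decision(case):
        # Stand-in for any automated decision system (toy risk rule).
        return "issue notice" if case["risk"] > 0.8 else "accept"

    def ask_human(case, suggestion):
        # Stand-in for a human decider; here it simply confirms the suggestion.
        return suggestion

    def human_in_the_loop(case):
        # The human takes the decision; automation merely suggests.
        return ask_human(case, automated_decision(case))

    def human_on_the_loop(case):
        # Automation decides first; a human may override afterwards.
        decision = automated_decision(case)
        return case.get("human_override") or decision

    def human_out_of_the_loop(case):
        # Fully automated: no human checkpoint at all. ("Human under the
        # loop" inverts this: a checkpoint exists on paper, but in practice
        # the human conforms to whatever automation outputs.)
        return automated_decision(case)

    case = {"risk": 0.9, "human_override": "accept"}
    print(human_in_the_loop(case))      # "issue notice", confirmed by a human
    print(human_on_the_loop(case))      # "accept", after human override
    print(human_out_of_the_loop(case))  # "issue notice", no human involved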

Humans and the decision loop

  • Even in fully automated systems, humans still play roles
    • Designing systems
    • Setting goals
    • Those roles might themselves give rise to responsibility
  • So, what does it mean to have a human in the loop?
    • Informational: what is going on?
    • Supervisory: human intervention

The right to an explanation

  • Explanation of what is going on
    • Doctrinal construction (statutory basis in FR, HU)
  • Information must be provided to data subjects
    • Model parameters
    • How results are produced
    • How results are used
  • Purpose: allow data subjects to seek redress for harms
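
As a purely illustrative sketch of such an information duty, a transparent linear scorer can surface all three items in a single "explanation payload"; the feature names, weights, and threshold below are hypothetical, not a real tax model.

    # Hypothetical parameters of a transparent linear risk score.
    WEIGHTS = {"income_gap": 2.0, "late_filings": 0.5, "sector_risk": 1.0}
    THRESHOLD = 2.5  # hypothetical: higher scores flag a return for audit

    def explain(features):
        contributions = {k: WEIGHTS[k] * v for k, v in features.items()}
        return {
            "model_parameters": WEIGHTS,        # model parameters
            "how_produced": contributions,      # per-feature contributions
            "result": sum(contributions.values()),
            "how_used": f"flagged for audit if result > {THRESHOLD}",
        }

    print(explain({"income_gap": 1.2, "late_filings": 2, "sector_risk": 0.3}))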

Limits and alternatives to explanation

  • Challenges to information-rich approaches
    • Provided information might not make sense to a non-expert
    • Many decisions -> too much information
    • Keeping the attention of data subjects
  • Alternative constructions
    • Impact Assessment and Certification (Edwards & Veale 2017)
    • Right to reasonable inferences (Wachter & Mittelstadt 2019)

Intervention in automated decisions

  • Possibility of changing outcomes
    • Pointwise fix: corrects a single decision, not the system as a whole
    • Requires the power to effect actual change in the outcome
  • Justification
    • Legal obligation: e.g. powers reserved to tax authority officials
    • Legal liability
    • Correcting errors and biases: e.g. Knight Capital, COMPAS

Automated decision-making in the LGPD

  • Data protection law provides new remedies
    • Right to review (art. 20, caput)
    • Right to clear and adequate information (art. 20, § 1)
    • Audits (art. 20, § 2)
  • Scope: processing of personal data
    • Does not cover all relevant operations
    • Could provide useful analogies and inspiration for non-personal data

Technology and the human in the loop

  • Explainable Artificial Intelligence
  • Contestability by Design (see Almada 2019; a sketch follows this slide)
    • Systems are created with intervention in mind;
    • Controls over decisions and interventions.
  • Implementation of both approaches can be expensive
  • LGPD remedies as parts of a broader ecosystem
    • Limits to judicial and administrative activity
    • Ancillary tax obligations
    • Technological standards (see ANPD)
  • What, then, is the role of intervention?
    • Intervention as a dispute resolution tool
    • Intervention as the result of dispute resolution
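
A minimal sketch of what contestability by design could look like in code; the API below is hypothetical, not drawn from Almada 2019 or any real system. The point is structural: each decision is stored with its rationale, and interventions are logged instead of silently overwriting the record.

    from dataclasses import dataclass, field

    @dataclass
    class DecisionRecord:
        subject_id: str
        outcome: str
        rationale: str                      # information owed to the data subject
        interventions: list = field(default_factory=list)

        def contest(self, reviewer, new_outcome, reason):
            # Log the intervention; the original outcome stays auditable.
            self.interventions.append(
                {"reviewer": reviewer, "from": self.outcome,
                 "to": new_outcome, "reason": reason})
            self.outcome = new_outcome

    record = DecisionRecord("taxpayer-42", "audit", "risk score above threshold")
    record.contest("reviewer-7", "no audit", "score driven by a data-entry error")
    print(record.outcome, record.interventions)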

Concluding remarks

  • Automated decision-making may be used by tax authorities, taxpayers, and judging authorities;
  • The use of AI in decision-making can be relevant for taxation
    • New taxable events
    • Compliance/avoidance
    • Oversight
    • Judgment
  • Existing controls may be combined with new techniques to ensure fair automated decisions.

Thank you!

www.marcoalmada.com

Decisions based solely on automated data processing

Clear-cut case: automated decision-making

  • No humans present in the decision loop
  • Automated decision-making, however, should be understood as a shorthand and not an exhaustive description
  • Some decisions can be based solely on automated data processing even if there are humans involved

Human decisions based solely on automated data processing

  • Example: rubber-stamping (see, e.g. Brkan 2017)
    • Algorithm might provide information and choices to a human
    • That human decider simply chooses the best-ranked option
  • Excluding this sort of decision from the scope of the right to human intervention would open space for loopholes
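
A toy sketch of why rubber-stamping stays "solely automated"; the option names and scores are hypothetical. The model's ranking fully determines the outcome, because the "human step" always takes the top option.

    def rank_options(options, scores):
        # Stand-in for an automated ranking model.
        return sorted(options, key=lambda o: scores[o], reverse=True)

    def rubber_stamp(ranked):
        # The human "decision": always confirm the first suggestion.
        return ranked[0]

    scores = {"accept": 0.1, "audit": 0.7, "fine": 0.2}
    print(rubber_stamp(rank_options(["accept", "audit", "fine"], scores)))
    # -> "audit": the outcome was fixed by the model, not the human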

A more complicated case

  • Instead of rubber-stamping, the human decider now makes a deliberate choice between scenarios, using their knowledge.
  • Is this still a decision based solely on automated data processing?
    • Yes, if the decider only relies on factual information from the algorithm
    • The decider cannot alter the content of a decision: a "choose your own adventure" situation.

Issues stemming from automated decision-making

  • How to detect undesirable effects of automation:
    • Harmful decisions: e.g. undue infraction notices
    • AI as an enabler of undesirable outcomes: e.g. avoidance
  • How to inform stakeholders about relevant decisions
  • How to address, preventively or not, decision effects

Who should be held responsible for a decision?

  • In the foreseeable future, it makes no sense to hold machines legally responsible for their actions.
    • Not just technical challenges, but legal ones (Brennan-Marquez & Henderson 2019).
  • Even if that were technically possible, AI legal responsibility could be misused (Bryson et al. 2017)
  • Solutions:
    • Keep a human in the loop.
    • Human intervention in automated decision-making.

Requesting human intervention

To request an intervention, data subjects must:

  • Know that they are affected by an automated decision
  • Know how they are being affected
  • Have adequate means for requesting intervention

Design approaches might be used to achieve these goals

Replacing machines with humans

  • In many cases, a trustworthy, competent human would likely produce better results than an automated system.
  • How to avoid a biased or incompetent intervenor?
    • Short run: individual liability
    • Long run: holding human intervenors to the same standards applied to automated decisions

Keeping humans informed about automated decisions

  • Most modern AI systems are opaque (Burrell 2016):
    • Based on models with inherent complexity;
    • Technical communication can be difficult;
    • Organisational secrecy (intentional opacity)
  • Information needed for seeking remedies
    • Know that there is a relevant decision
    • Know what is going on

Human in the tax loop?

  • Current legislation requires a human in the loop only for decisions affecting data subjects.
    • Taxpayers who are natural persons might have an effective tool against tax authorities;
    • No equivalent duties for intervention against harms to other subjects.
  • Taxpayer loops: human as overseer/responsible
  • State loops: intervention as a tool for administrative/judicial control

As translated by Ronaldo Lemos et al.:

Art. 20. The data subject has the right to request for the review of decisions made solely based on automated processing of personal data affecting her/his interests […]

Human intervention in the law

  • GDPR Article 22(3) establishes a right to human intervention.
  • Brazilian legislation opts for a more restricted right to review.
    • Limited to ex post intervention;
    • Current legislation does not require that the review be carried out by a human.
  • In both cases, intervention is motivated by a (potential) harm to the rights and legitimate interests of a data subject.

Is it always desirable to keep a human in/on the loop?

  • Keeping a human in/on the loop is crucial for accountability and quality control.
  • However, some tasks would not be feasible with humans in the loop: e.g. the scale of the digital economy.
  • In other cases, human intervention could be sub-optimal
    • Costs involved in bringing a human into the loop
    • Human biases and prejudices
  • Intervention itself may fail to achieve its desired goals.

Modes of failure for human intervention

  • Data subjects might not be able to request intervention
    • Lack of information (see Ohm 2018)
    • Lack of means
  • Intervention failures
    • Ineffective intervention
    • Harmful intervention

Contestability by design (CbD) and privacy by design (PbD)

  • Designing contestable systems cannot be subsumed into Privacy by Design
    • PbD directly protects a value
    • CbD establishes an instrument
    • CbD may benefit from PbD
    • …but they may also clash