Concerns about “black box” machine learning algorithms have led many data protection laws and regulations to establish a right to human intervention in decision-making supported by artificial intelligence. Such intervention is presented as a means to protect the rights, freedoms, and legitimate interests of data subjects, whether as a bare minimum requirement or as a central norm governing decision-aiding artificial intelligence. In this paper, I claim that contestability-by-design approaches can address two kinds of issues with current legal implementations of the right to human intervention. The first is uncertainty about which decisions this right should cover: while a narrow reading of rules such as GDPR Article 22(3) would include all sorts of fully automated decisions, I show how a broader interpretation can provide more effective protection for data subjects against the side effects of automated decision-making. The second stems from the practical effects of the right to intervention, or the lack thereof: even within a clear conceptual framework, data subjects might lack the information needed to exercise their right in practice, or the biases and limitations introduced by human intervention might leave them worse off than they were under the purely automated decision. After discussing how these effects can be identified and measured, I explore how the rights of data subjects can be properly protected only if contesting decisions based solely on automated processing is not an afterthought, but instead a requirement at each stage of an artificial intelligence system’s lifecycle.