Research | Journal Articles
Smart Justice? Making Sense of the Rise of Algorithm-Based Pre-trial Risk Assessment in Criminal Justice Through ‘Legal Models’
With the increased use of algorithmic tools embedded in software by state actors, scholars from criminology and criminal justice, as well as from data science, have analyzed the recent wave of ‘smart on crime’ politics in the US and the political dynamics underpinning this movement. However, while reforms of the US bail system have been studied extensively, we know little about how these reforms, including the recent embrace of digital risk prediction tools, reflect shifting commitments to underlying principles of the criminal justice system. This article therefore interprets the waves of US bail reform through the application of three legal-theoretical models: ‘retributive justice’ (RJ), ‘actuarial justice’ (AJ) and ‘preventive justice’ (PJ). This conceptual lens enables us to illuminate how the increased use of pre-trial risk assessment tools based on big data can be understood in legal-theoretical terms. Empirically, we find a shift away from censure and retribution towards crime prevention and the use of risk assessment tools, which both the AJ and PJ models can accommodate. However, while our analysis demonstrates that these models help draw into sharper focus the principles and values which animated US bail reforms, it also reveals several limitations owing to these models being products of the time in which they were developed.
From ‘wild west’ to ‘responsible’ AI testing ‘in-the-wild’: lessons from live facial recognition testing by law enforcement authorities in Europe
Although ‘in-the-wild’ technology testing provides an important opportunity to collect evidence about the performance of new technologies in real-world deployment environments, such tests may themselves cause harm and wrongfully interfere with the rights of others. This paper critically examines real-world AI testing, focusing on live facial recognition technology (FRT) trials by European law enforcement agencies (in London, Wales, Berlin, and Nice) undertaken between 2016 and 2020, which serve as a set of comparative case studies. We argue that there is an urgent need for a clear framework of principles to govern real-world AI testing, which is currently a largely ungoverned ‘wild west’ without adequate safeguards or oversight. We propose a principled framework to ensure that these tests are undertaken in an epistemically, ethically, and legally responsible manner, thereby helping to ensure that such tests generate sound, reliable evidence while safeguarding the human rights and other vital interests of others. Although the case studies of FRT testing were undertaken prior to the passage of the EU’s AI Act, we suggest that these three kinds of responsibility should provide the foundational anchor points to inform the design and conduct of real-world testing of high-risk AI systems pursuant to Article 60 of the AI Act.
Defending the Rule of Law from Threats Posed by AI-Enabled Surveillance Systems in the Hands of Law Enforcement Authorities
This article concludes and reflects upon the contributions collected in this volume, taken as a whole. It argues that they reflect a troubling turn towards authoritarianism, and it calls on scholars to take forward the task of critical inquiry to facilitate urgently needed democratic debate, helping our communities decide on the boundaries of the establishment and use of lawful and legitimate recognition technologies.
Beyond ‘AI boosterism’
‘AI boosterism’ has characterised British industrial policy for digital and data-enabled technologies under successive Conservative administrations, intended to ‘turbocharge’ artificial intelligence (AI) sector growth. Although former prime minister Rishi Sunak believed that public trust in AI was essential, as evidenced by his initiatives championing AI safety (such as the AI Safety Summit at Bletchley Park in November 2023), he retained an unwavering belief that existing laws, complemented by voluntary cooperation between industry and government, would address AI’s threats and harms via technical fixes.
How do ‘technical’ design choices made when building algorithmic decision-making tools for criminal justice authorities create constitutional dangers? (Part I)
This two-part paper argues that seemingly ‘technical’ choices made by developers of machine-learning-based algorithmic tools used to inform decisions by criminal justice authorities can create serious constitutional dangers, enhancing the likelihood of abuse of decision-making power and the scope and magnitude of injustice. Drawing on three algorithmic tools in use, or recently used, to assess the ‘risk’ posed by individuals in order to inform how they should be treated by criminal justice authorities, we integrate insights from data science and public law scholarship to show how public law principles, and the more specific legal duties rooted in them, are routinely overlooked in algorithmic tool-building and implementation. We argue that technical developers must collaborate closely with public law experts to ensure that, if algorithmic decision-support tools are to inform criminal justice decisions, those tools are configured and implemented in a manner that is demonstrably compliant with public law principles and doctrine, including respect for human rights, throughout the tool-building process.
How do ‘technical’ design choices made when building algorithmic decision-making tools for criminal justice authorities create constitutional dangers? (Part II)
This two-part paper argues that seemingly ‘technical’ choices made by developers of machine-learning-based algorithmic tools used to inform decisions by criminal justice authorities can create serious constitutional dangers, enhancing the likelihood of abuse of decision-making power and the scope and magnitude of injustice. Drawing on three algorithmic tools in use, or recently used, to assess the ‘risk’ posed by individuals in order to inform how they should be treated by criminal justice authorities, we integrate insights from data science and public law scholarship to show how public law principles, and the more specific legal duties rooted in them, are routinely overlooked in algorithmic tool-building and implementation. We argue that technical developers must collaborate closely with public law experts to ensure that, if algorithmic decision-support tools are to inform criminal justice decisions, those tools are configured and implemented in a manner that is demonstrably compliant with public law principles and doctrine, including respect for human rights, throughout the tool-building process.