‘Technical’ design choices are not neutral: algorithmic decision-making in criminal justice settings

Algorithmic prediction models are informing life-changing decisions in a wide variety of contexts, including estimating the risk of offending, identifying potential social welfare fraudsters, and offering some individuals opportunities to participate in rehabilitation programs. Although these decisions are often “rights-critical” (as opposed to “safety-critical”), these algorithmic tools are being developed and deployed largely in ignorance of public law principles and the legal duties to which they give rise.


In the few short years that such systems have been operational, legal principles and constitutional values have been trampled at every step of their design, testing and deployment. These include the presumption of innocence, the rights to liberty and privacy, and the right to contest decisions.


For example, algorithmic tools are widely used in the US to decide whether a person who has been arrested and taken into police custody should be released on bail or detained in custody prior to trial. Many of these tools purport to assess the probability that an individual will commit a criminal offence if released.


However, such tools are often built on ‘arrest’ data, with tool-developers treating arrest as a ‘proxy’ for the commission of a crime. Yet arrest data is a highly misleading indicator of crime: not all arrested persons are charged; those who are charged may not be convicted; and many crimes are committed for which no arrest is ever made (including those that are unreported or never detected). Predictions generated by tools trained on arrest data therefore cannot offer meaningful indications of future ‘crime’ and may be ‘legally irrelevant’ to the matter the public official is legally required to decide.
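The proxy problem described above can be illustrated with a toy simulation. All numbers below are invented for illustration and are not drawn from any real dataset or tool; the point is only that when arrest and offending diverge, a label built from arrest records systematically disagrees with the behaviour the decision-maker actually needs to predict.

```python
import random

random.seed(0)

# Hypothetical rates, chosen only to illustrate the proxy gap:
# many offences lead to no arrest (unreported or undetected crime),
# and some arrests involve people never charged or convicted.
P_ARREST_GIVEN_OFFENCE = 0.4
P_ARREST_GIVEN_NO_OFFENCE = 0.1
P_OFFENCE = 0.3

population = []
for _ in range(10_000):
    offended = random.random() < P_OFFENCE
    p_arrest = P_ARREST_GIVEN_OFFENCE if offended else P_ARREST_GIVEN_NO_OFFENCE
    arrested = random.random() < p_arrest
    population.append((offended, arrested))

# A model trained with `arrested` as its target learns that label,
# not `offended` -- here the two disagree for roughly a quarter of records.
mismatches = sum(1 for offended, arrested in population if offended != arrested)
mismatch_rate = mismatches / len(population)
print(f"arrest label disagrees with actual offending for {mismatch_rate:.0%} of records")
```

Under these assumed rates, the expected disagreement is 0.3 × 0.6 + 0.7 × 0.1 = 25% of the population, which is the gap a decision-maker inherits when an ‘arrest predictor’ is presented as a predictor of crime.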


Yet it is on the basis of arrest data, for example, that the HART tool was developed and used by the Durham police. The tool is described as generating predictions about whether an individual is likely to ‘commit a criminal offence’ within a two-year period. At the very least, such tools should be accurately described as ‘arrest predictors’ rather than mislabelled and misrepresented as predictors of future criminal offending.


