Research | Book Chapters

“Can risks to fundamental rights arising from AI systems be ‘managed’ alongside health and safety risks? Implementing Article 9 of the EU AI Act”

Yeung, K., ‘Can risks to fundamental rights arising from AI systems be “managed” alongside health and safety risks? Implementing Article 9 of the EU AI Act’, in Ifeoma Ajunwa and Jeremias Adams-Prassl (eds), Oxford Handbook of Algorithmic Governance and the Law (Oxford University Press, 2026, forthcoming).

This chapter examines whether the risks to fundamental rights posed by AI systems can be effectively ‘managed’ alongside health and safety risks.

With the EU AI Act’s provisions governing ‘high-risk’ AI systems shortly entering into force, providers of high-risk AI systems will be obliged to establish and maintain a ‘risk management system’ that reduces risks to ‘health, safety and fundamental rights’ to a level ‘judged acceptable’.

But what does this mean? And how can these obligations be operationalised on the ground?

Although safety risk management systems are well established for managing the risks of safety-critical products, fundamental rights establish moral boundary markers to safeguard respect for the dignity of each and every person.

I show that it is theoretically possible to devise and maintain a single integrated risk management system that encompasses risks to health, safety and fundamental rights in order to comply with Article 9. But this is a complex exercise. While Article 9 is rich with opportunities to develop integrated cross-disciplinary methods to manage risks and threats to health, safety and fundamental rights in collaboration with affected stakeholder groups, I argue that there are real dangers that Article 9 compliance will be merely performative: appearing to take health, safety and fundamental rights seriously even as the deployment of AI technologies further undermines respect for the individual dignity and freedom upon which democracy depends.


The European Union’s AI Act: beyond motherhood and apple pie?

Smuha, Nathalie A. and Yeung, Karen, The European Union’s AI Act: beyond motherhood and apple pie? (June 24, 2024).

Watch keynote lecture here.

In spring 2024, the European Union formally adopted the AI Act, aimed at creating a comprehensive legal regime to regulate AI systems. In so doing, the Union sought to maintain a harmonized and competitive single market for AI in Europe while demonstrating its commitment to protect core EU values against AI’s adverse effects. In this chapter, we question whether this new regulation will succeed in translating its noble aspirations into meaningful and effective protection for people whose lives are affected by AI systems. By critically examining the proposed conceptual vehicles and regulatory architecture upon which the AI Act relies, we argue there are good reasons for skepticism, as many of its key operative provisions delegate critical regulatory tasks to AI providers themselves, without adequate oversight or redress mechanisms. Despite its laudable intentions, the AI Act may deliver far less than it promises.


Lost in translation: the troubling logics underpinning the embrace of governmental machine-learning based prediction tools for ‘citizen scoring’

Karen Yeung, ‘Lost in translation: the troubling logics underpinning the embrace of governmental machine-learning based prediction tools for “citizen scoring”’ (2023), to be published in Dimitri Van Den Meerssche, Gavin Sullivan and Fleur Johns (eds), Global Governance by Data: Infrastructures of Alg

Machine learning (ML) applications referred to as ‘predictive analytics’ or ‘big data analytics’, now ubiquitous in retail, entertainment and logistics, are increasingly common in public sector contexts too. Applications include algorithms that claim to estimate an individual’s ‘risk’ of specific behaviours, such as an offender’s likelihood of reoffending or the probability that a child will be subject to abuse or neglect. This chapter asserts that the embrace of these data-driven ‘citizen scoring’ systems is underpinned by a set of promises, assumptions, beliefs and rationalities (collectively referred to as ‘logics’) that seek to replicate the commercial success of ML in the public sector with no regard for the fundamental differences between the two contexts. I critique three specific claims that have encouraged the adoption of commercial ML techniques by the state: (a) that ML produces more accurate predictions, (b) that these predictions offer valuable ‘actionable insight’ for public authorities, and (c) that ‘early intervention’ based on such actionable insight is desirable. I argue that although it may be legitimate for profit-seeking firms to use probabilistic estimates derived from algorithms to inform low-stakes decisions (such as identifying which web ads to display to users to encourage more clicks), far more significant state interventions, such as denying the early release of a prisoner due to their perceived risk of reoffending or taking a child identified as ‘at risk’ into care, cannot be justified on the same terms. Yet, thanks to the uncritical adoption of commercial ML methods in the public sector, power and authority are being illegitimately, and sometimes unlawfully, redistributed in ways that produce injustice without public awareness or democratic debate, to the detriment of some of the most vulnerable members of society.


AI Governance by Human Rights-Centred Design, Deliberation and Oversight: An End to Ethics Washing

Yeung, K., Howes, A. & Pogrebna, G., Jul 2020, The Oxford Handbook of Ethics of AI. Oxford University Press

This chapter argues that international human rights standards offer the most promising basis for developing a coherent and universally recognized set of standards that can be applied to meet any of the normative concerns currently falling under the rubric of AI (artificial intelligence) ethics. It then outlines the core elements of a human rights–centered design, deliberation, and oversight approach to the governance of AI. This approach requires that human rights norms are systematically considered at every stage of system design, development, and deployment, drawing upon and adapting technical methods and techniques for safe software and system design, verification, testing, and auditing in order to ensure compliance with human rights norms. The regime must be mandated by law and relies critically on external oversight by independent, competent, and properly resourced regulatory authorities with appropriate powers of investigation and enforcement. However, this approach will not ensure the protection of all ethical values adversely implicated by AI, given that human rights norms do not comprehensively cover all values of societal concern. As such, a great deal more work needs to be done to develop techniques and methodologies that are robust—reliable yet practically implementable across a wide and diverse range of organizations involved in developing, building, and operating AI systems.


Algorithmic regulation: an introduction

Yeung, K. & Lodge, M., 12 Sept 2019, Algorithmic Regulation. Yeung, K. & Lodge, M. (eds.). Oxford University Press, pp. 1-18.

This book gathers together, under the broad notion of ‘algorithmic regulation’, the interdisciplinary perspectives on algorithms that can be observed in contemporary debates. This introductory chapter develops the book’s argument in a number of steps. First, it explores the meaning of ‘algorithmic regulation’. Second, it reflects on how debates regarding algorithmic regulation relate to wider debates about the interaction between technology and regulation. Third, it considers algorithmic systems as potentially novel types of large technical systems. In doing so, it highlights the growing consensus regarding critical challenges for the regulation and governance of algorithmic systems, but also notes the lack of agreement regarding potential solutions. It concludes with an overview of the subsequent chapters.


Why worry about decision-making by machine?

Yeung, K., 12 Sept 2019, Algorithmic Regulation. Yeung, K. & Lodge, M. (eds.). Oxford University Press, 28 p.

This chapter poses a deceptively simple question: what, precisely, are we concerned with when we worry about decision-making by machine? Focusing on fully automated decision-making systems, it suggests that there are three broad sources of ethical anxiety arising from the use of algorithmic decision-making systems: concerns associated with the decision process, concerns about the outputs thereby generated, and concerns associated with the use of such systems to predict and personalize services offered to individuals. The chapter examines each of these concerns, drawing on analytical concepts that are familiar in legal and constitutional scholarship, often used to identify various legal and other regulatory governance mechanisms through which the adverse effects associated with particular actions or activities might be addressed.
