
AI Equality by Design, Deliberation and Oversight

Funder: Responsible AI UK (UKRI)

Principal Investigator: Prof Karen Yeung

Project dates: 2023-2025

Project Background

Despite the dazzling capabilities of AI systems, public mistrust of them persists, fuelled by fears about their adverse and unwanted impacts as they rapidly expand into many domains of contemporary life. One well-known concern arises from the tendency of many AI systems to discriminate unfairly between persons and groups on the basis of race and gender. Will the EU’s AI Act, the world’s first ‘horizontal’ AI regulation, provide effective safeguards against unlawful and unfair discrimination? Its stated aim is to “improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection for health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union, and supporting innovation”.

The primary vehicle through which equality and other fundamental rights will be protected is the obligation on providers of ‘high risk’ AI systems to comply with the Act’s ‘essential requirements’. But those essential requirements take the form of broad principles, and their operationalisation will rely heavily on detailed technical standards currently being drafted by CEN/CENELEC’s JTC 21 working groups. Once those standards are approved, AI providers who voluntarily comply with them will benefit from a legal ‘presumption of conformity’ with the Act’s essential requirements. But unless the resulting standards include effective, demonstrable equality protection measures (reflecting ‘equality by design’ (EbD) thinking), the AI Act will fail: it will offer a veneer of legal protection while unfair and unlawful AI-generated discrimination continues unabated.

Moreover, the legitimacy of standards produced by European standards organisations rests on unstable foundations. Because these organisations assert copyright protection over the content of their standards, the standards are available only to those who purchase them, rather than being openly published. Only national standards bodies that are members of CEN/CENELEC are entitled to participate in and vote on standards. One of the most serious shortcomings inherent in the regulatory architecture of the AI Act is its misplaced assumption that technical standards produced by CEN/CENELEC will offer meaningful protection of fundamental rights, yet technical experts typically lack expertise in fundamental rights protection. This serious deficiency, which this project seeks to address, demands urgent attention before the window for participation in drafting and agreeing European AI standards closes.

Project Aims

The overarching aim of this project is to develop and embed a governance approach called ‘equality by design, deliberation and oversight’ into European AI standards (and, in turn, British and international AI standards), helping to ensure that they are properly interpreted, understood and applied to provide effective protection against AI-generated unfair discrimination across Europe. To do so, we will undertake three core activities.

Co-Investigators: 

Equinet

Beyond Reach Consulting Limited

Patricia Shaw

Participating partners: 

University of Birmingham

Equinet

Beyond Reach Consulting Limited

Project Team

Prof Karen Yeung (Principal Investigator and Lead Negotiator), Patricia Shaw (Senior Negotiator), Milla Vidina (Equinet Senior Lead), Dr James Maclaren (Project Operations Lead), Dr Aaron Ceross (Chief Data Scientist), Fabian Lutz (Civil Society and Tech Stakeholder Lead), Prof Sandy Fredman (Oxford University, Consultant Advisor)

