
AI ‘assurance’ via standardisation and certification systems

Project name: European Lighthouse on Safe and Secure AI, Network of Excellence

Funder: European Union and UKRI 

Website: https://www.elsa-ai.eu/

Joint Principal Investigator: Prof Karen Yeung

Project Information

Robust technical standards will not deliver safe and secure AI in Europe unless and until they are embedded within legitimate and effective governance architectures that provide meaningful human oversight, demonstrably in accordance with core European values: namely, respect for democracy, human rights (including the protection of safety and security) and the rule of law. While EU policy-makers have produced several legal oversight regimes, including the GDPR, the EU Medical Device Regulation and the proposed EU AI Act, it remains unclear, untested and unknown whether the resulting governance architecture, and the mechanisms through which compliance with these goals will be achieved and assured, can deliver on these values.

Cross-disciplinary research, incorporating the insights of both technical and legal/governance experts, is vital to integrate and embed technical methods within ethical and governance regimes that can provide meaningful and evidence-based assurance (‘AI Assurance’).

The aim of this project is to identify architectures, mechanisms, and methods capable of generating the kind of meaningful and evidence-based assurance necessary to secure and maintain the safety and security dimensions of AI systems. Through interdisciplinary investigation between technical experts working in close collaboration with those from the fields of law, ethics and governance, it will evaluate existing and emerging methods and mechanisms, and develop new approaches, for establishing and certifying safe and secure AI with effective human oversight.

The project of which I am principal investigator will critically explore the extent to which third-party certification and accreditation (‘TPCA’) mechanisms constitute legitimate and effective techniques for generating AI Assurance. It will produce new insight by identifying the strengths and weaknesses of TPCA in relation to safety, on the one hand, and fundamental rights compliance on the other, seeking to identify the conditions under which TPCA can provide meaningful and effective public assurance that underlying regulatory objectives have been met.
