US and UK sign bilateral agreement to develop safety tests for AI models

The UK and US yesterday (Monday 1 April) signed a Memorandum of Understanding which will see them work together to develop tests for the most advanced artificial intelligence (AI) models, following through on commitments made at the AI Safety Summit last November.

Signed by Technology Secretary Michelle Donelan and US Commerce Secretary Gina Raimondo, the partnership will see both countries sharing information about the capabilities and risks associated with AI models and systems, as well as fundamental technical research on AI safety and security. The Financial Times reports that it will evaluate private AI models built by the likes of OpenAI and Google.

This is the world’s first bilateral agreement on AI safety cooperation. The UK and US AI Safety Institutes have laid out plans to build a common approach to AI safety testing and to share their capabilities so that the risks posed by AI models can be tackled effectively. They say they intend to perform at least one joint testing exercise on a publicly accessible model, and to tap into a collective pool of expertise by exploring personnel exchanges between the Institutes.

The partnership takes effect immediately, and as the two countries strengthen their cooperation on AI safety, they have also committed to developing similar agreements with other countries to promote AI safety across the globe.

Donelan said: “This agreement represents a landmark moment, as the UK and the United States deepen our enduring special relationship to address the defining technology challenge of our generation. We have always been clear that ensuring the safe development of AI is a shared global issue. Only by working together can we address the technology’s risks head on and harness its enormous potential to help us all live easier and healthier lives.”

Raimondo added: “AI is the defining technology of our generation. This partnership is going to accelerate both of our Institutes’ work across the full spectrum of risks, whether to our national security or to our broader society. Our partnership makes clear that we aren’t running away from these concerns – we’re running at them. Because of our collaboration, our Institutes will gain a better understanding of AI systems, conduct more robust evaluations, and issue more rigorous guidance.

“By working together, we are furthering the long-lasting special relationship between the U.S. and UK and laying the groundwork to ensure that we’re keeping AI safe both now and in the future.”

The new partnership follows the passing of the EU’s Artificial Intelligence Act, Europe’s attempt to create a uniform legal framework for developing and deploying AI technologies. Although the Act is EU legislation, its effects may be felt more widely. Speaking to Legal IT Insider just after the Act was passed, Axiom account executive Dr Maryam Salehijam said: “The Act’s reach may extend beyond EU borders. Similar to the GDPR, corporations with a global presence, particularly those operating in the European market or whose AI systems process data of EU citizens, might need to comply with the Act’s provisions.”