UK and US sign pact to develop AI safety tests
The UK and US have signed a landmark agreement to collaborate on developing rigorous testing for advanced AI systems, representing a major step forward in ensuring their safe deployment.
The Memorandum of Understanding – signed Monday by UK Technology Secretary Michelle Donelan and US Commerce Secretary Gina Raimondo – establishes a partnership to align the scientific approaches of both countries in rapidly iterating robust evaluation methods for cutting-edge AI models, systems, and agents.
Under the deal, the UK’s new AI Safety Institute and its forthcoming US counterpart will exchange research expertise with the aim of mitigating AI risks, including how to independently evaluate private AI models from companies such as OpenAI. The partnership is modelled on the security collaboration between GCHQ and the National Security Agency.
“This agreement represents a landmark moment, as the UK and the United States deepen our enduring special relationship to address the defining technology challenge of our generation,” stated Donelan. “Only by working together can we address the technology’s risks head on and harness its enormous potential to help us all live easier and healthier lives.”
The partnership follows through on commitments made at the AI Safety Summit hosted in the UK last November. The institutes plan to build a common approach to AI safety testing and share capabilities to tackle risks effectively. They intend to conduct at least one joint public testing exercise on an openly accessible AI model and explore personnel exchanges.
Raimondo emphasised the significance of the collaboration, stating: “AI is the defining technology of our generation. This partnership is going to accelerate both of our Institutes’ work across the full spectrum of risks, whether to our national security or to our broader society.”
Both governments recognise AI’s rapid development and the urgent need for a shared global approach to safety that can keep pace with emerging risks. The partnership takes effect immediately, allowing seamless cooperation between the organisations.
“By working together, we are furthering the long-lasting special relationship between the US and UK and laying the groundwork to ensure that we’re keeping AI safe both now and in the future,” added Raimondo.
In addition to joint testing and capability sharing, the UK and US will exchange vital information about AI model capabilities, risks, and fundamental technical research. This aims to underpin a common scientific foundation for AI safety testing that can be adopted by researchers worldwide.
Despite the focus on risk, Donelan insisted the UK has no plans to regulate AI more broadly in the short term. By contrast, President Joe Biden has taken a stricter stance on AI models that threaten national security, and the EU has adopted tougher rules through its AI Act.
Industry experts welcomed the collaboration as essential for promoting trust and safety in AI development and adoption across sectors like marketing, finance, and customer service.
“Ensuring AI’s development and use are governed by trust and safety is paramount,” said Ramprakash Ramamoorthy of Zoho. “Taking safeguards to protect training data mitigates risks and bolsters confidence among those deploying AI solutions.”
Dr Henry Balani of Encompass added: “Mitigating the risks of AI, through this collaboration agreement with the US, is a key step towards mitigating risks of financial crime, fostering collaboration, and supporting innovation in a crucial, advancing area of technology.”