Team Blitz India
The United States and the United Kingdom have taken a huge step towards regulating and ensuring the safe deployment of artificial intelligence (AI) systems. The two countries signed an agreement to collaborate on developing tests for advanced AI models.
The agreement, signed on April 1, is a direct result of commitments made at the Bletchley Park AI Safety Summit last year. It aims to tackle the potential risks associated with AI while leveraging its benefits. Effective immediately, the agreement facilitates the sharing of crucial information between the two nations regarding the capabilities and risks of AI models and systems.
Additionally, both countries will collaborate on fundamental technical research on AI safety and security, as well as work towards aligning their approach to safely deploying AI systems. The partnership entails aligning scientific methodologies and accelerating the development of robust evaluation suites for AI models, systems, and agents. The US and the UK AI Safety Institutes have outlined plans to establish a common framework for AI safety testing and to share their expertise to effectively mitigate risks.
As part of their commitment to advancing AI safety globally, the US and the UK have pledged to establish similar partnerships with other countries, as outlined in a press release by the US Department of Commerce.
Meta and OpenAI, two leading AI players, have offered their perspectives on AI model openness. While Meta emphasises the importance of open-source innovation, OpenAI advocates a balanced approach, highlighting the benefits of both open-weights releases and commercial product-based releases.
Meanwhile, amid rapid innovation in the private sector, lawmakers worldwide are grappling with AI regulation to mitigate its potential downsides. The European Union recently reached a deal on its AI Act, which includes safeguards on AI use and empowers consumers to raise complaints against violations.