Google DeepMind, Meta, OpenAI among top AI firms sharing safety policies ahead of UK summit

Leading AI companies, including Google DeepMind, Anthropic, OpenAI, Microsoft, Amazon, and Meta, have publicly shared their safety policies in response to a request from the UK Technology Secretary. The move aims to enhance transparency, promote best practices, and address AI risks, with the firms also committing to collaborate with governments, civil society, and academia and to apply AI to critical global challenges.

Leading AI companies publicly shared their safety policies on Friday, 27 October, in response to a request made last month by the UK Technology Secretary. Google DeepMind, Anthropic, OpenAI, Microsoft, Amazon, and Meta have all published their AI policies, a move seen as a way to boost transparency and encourage the spread of best practices within the AI community. The publications underline the companies’ renewed commitment to managing the risks posed by AI, including those to safety, security, and trust.


The companies agreed to share information on managing AI risks across the industry and with governments, civil society, and academia, covering approaches to safety, attempts to bypass safeguards, and opportunities for technical collaboration. For example, Microsoft and OpenAI outlined how they will advance responsible AI, including by implementing the voluntary pledges they and other companies made at the White House in July.
The tech giants also committed to developing and deploying advanced AI systems to help address humankind’s most pressing challenges, such as mitigating climate change and preventing cancer.

Why does it matter?

The UK Government’s Frontier AI Taskforce has been hiring top names from across the AI ecosystem to advise on the risks and opportunities of AI. These safety policies and voluntary commitments are a significant step towards responsible AI innovation. Leading AI firms will engage independent specialists to examine their models for vulnerabilities and will disclose the findings to one another, governments, and researchers.


The news comes one day after Prime Minister Rishi Sunak confirmed the UK would establish the world’s first AI Safety Institute to advance knowledge, carefully evaluate new types of AI models, and better understand their capabilities. Sunak made the announcement ahead of the 1–2 November international AI safety summit at historic Bletchley Park.
