US secures voluntary commitments from leading AI companies

The companies have committed to thorough testing of AI technologies before release, to prioritising transparency and information sharing to reduce risks, and to investing in cybersecurity.

The Biden-Harris Administration has secured voluntary commitments from leading AI companies to manage AI risks, emphasising safety, security, accountability, and trust. The pledge is part of the administration's broader effort to ensure responsible AI development and to protect Americans from harm and discrimination. To reinforce it, the administration will develop an Executive Order and pursue bipartisan legislation for enhanced safety measures. The companies making the pledge include OpenAI, Alphabet, and Meta Platforms, as well as Anthropic, Inflection, Amazon.com, and Microsoft.

The voluntary commitments include the following:

  1. Ensuring safety before public release:
    • Conducting internal and external security testing of AI systems prior to release, including input from independent experts.
    • Sharing information on managing AI risks with governments, civil society, and academia to establish best practices and technical collaboration.
  2. Putting security first:
    • Investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights, the crucial components of AI systems.
    • Facilitating third-party discovery and reporting of vulnerabilities in their AI systems to promptly address issues post-release.
  3. Earning the public’s trust:
    • Developing robust technical mechanisms, such as watermarking systems, so that users can identify AI-generated content, allowing creativity to flourish while reducing fraud and deception.
    • Publicly reporting AI systems’ capabilities, limitations, and appropriate and inappropriate uses, addressing security risks and societal concerns like fairness and bias.
    • Prioritising research on the societal risks posed by AI, including harmful bias, discrimination, and threats to privacy, and ensuring AI systems are designed to mitigate these dangers.
  4. Addressing societal challenges:
    • Developing and deploying advanced AI systems to help tackle critical challenges, such as cancer prevention and climate change, in order to advance prosperity, equality, and security.

The US administration is also working to establish an international framework for the development and use of AI. Through consultations, it aims to collaborate with allied countries, including Australia, Canada, France, India, Japan, and the UK, to align on shared principles for AI governance.