European Commission faces pressure as deadline nears for AI Code of Practice
The Commission’s workshop marks a final attempt to secure support for the Code of Practice before the self-regulatory initiative becomes operational. However, its success will depend on whether large AI firms ultimately decide to sign on.

As the 2 August deadline approaches for AI companies to sign on to the European Union's Code of Practice for general-purpose AI (GPAI), the European Commission is hosting a final workshop this week in Brussels to rally support for the initiative. The 2–3 July event will present the completed version of the Code, including provisions on copyright that were not disclosed during the nine-month drafting process.
The Commission's efforts come amid persistent resistance from major technology firms, which have voiced concerns over the voluntary rules. The tension is amplified by broader transatlantic disagreements over AI regulation. Critics warn that recent concessions could undermine the EU AI Act's intent to ensure accountability and transparency in high-risk AI systems.
On 25 June, a coalition of more than 40 signatories, including Nobel laureates Geoffrey Hinton and Daron Acemoglu, UC Berkeley professor Stuart Russell, and the Center for AI and Digital Policy (CAIDP), sent a letter to Commission President Ursula von der Leyen. The letter urges the EU to uphold its regulatory stance and resist lobbying efforts that would dilute the effectiveness of the GPAI framework. The authors warn that several large GPAI model providers have significantly scaled back transparency and safety-testing practices, even as the capabilities and risks of their systems continue to grow.
The coalition argues that the draft Code, particularly the March version, weakens key safeguards by making fundamental-rights impact assessments optional. It cautions that this shift would place disproportionate regulatory burdens on startups and SMEs while allowing major providers to avoid accountability.
In response, the letter outlines three central recommendations:
- Mandatory third-party testing – External audits should be required for GPAI models with systemic risk potential. Independent assessment can verify that safety mitigations are effective and guard against internal business pressure to accelerate deployment timelines.
- Robust review mechanisms – The Code must include provisions for continuous updates in response to emerging risks and evolving technical standards. A dedicated process for emergency amendments is also proposed to address imminent large-scale threats.
- Expanded enforcement capacity – The coalition advocates scaling the AI Safety Unit (DG CNECT A3) to 100 staff and increasing the broader AI Office team to 200, matching the scale of other regulatory teams such as the one overseeing the Digital Services Act. It also emphasises recruiting global experts in AI safety and security.
The letter describes the Code as a proportionate and carefully crafted instrument, shaped by input from more than 1,000 stakeholders. It notes that most obligations would apply to a relatively small number of companies, between five and fifteen, and would overlap with existing industry practices. The proposed framework is positioned as a minimum baseline for GPAI risk management, intended to reduce liability, support legal certainty, and foster broader AI adoption across sectors.