EU urges stronger AI oversight after Grok controversy
European Union prepares new compliance guidelines for AI developers as Grok chatbot prompts debate on transparency and systemic risk safeguards.

A recent incident involving Grok, the AI chatbot developed by xAI, has reignited European Union calls for stronger oversight of advanced AI systems.
Comments generated by Grok drew criticism from policymakers and civil society groups, sparking fresh debate over AI governance and the role of voluntary compliance mechanisms.
The chatbot’s responses, which circulated earlier this week, included antisemitic language and references to Adolf Hitler. In response, xAI said the content had been removed and that technical steps were being taken to prevent similar outputs in the future.
European policymakers said the incident highlights the importance of responsible AI development. Brando Benifei, an Italian lawmaker who co-led the EU AI Act negotiations, said the event illustrates the systemic risks the new regulation seeks to mitigate.
Christel Schaldemose, a Danish member of the European Parliament who led the Parliament’s negotiations on the Digital Services Act, echoed those concerns. She emphasised that such incidents underline the need for clear and enforceable obligations for developers of general-purpose AI models.
The European Commission is preparing to release guidance aimed at supporting voluntary compliance with the bloc’s new AI legislation. This code of practice, which has been under development for nine months, is expected to be published this week.
Earlier drafts of the guidance included provisions requiring developers to share information on how they address systemic risks. Reports suggest that some of these provisions may have been weakened or removed in the final version.
A group of five lawmakers expressed concern over what they described as the last-minute removal of key transparency and risk-mitigation provisions, arguing that strong guidelines are essential for fostering accountability in the deployment of advanced AI models.
The incident has also drawn attention to the Digital Services Act and its enforcement, as X, the social media platform on which Grok operates, is currently under EU investigation for potential violations related to content moderation.
General-purpose AI systems, such as OpenAI’s GPT, Google’s Gemini and xAI’s Grok, will be subject to additional requirements under the EU AI Act from 2 August. Obligations include publishing summaries of the data used to train models, ensuring compliance with EU copyright law, and mitigating systemic risks.
While these requirements are mandatory, their implementation is expected to be shaped by the Commission’s voluntary code of practice. Industry groups and international stakeholders have voiced concerns over regulatory burdens, while policymakers maintain that safeguards are critical for public trust.
The debate over Grok’s outputs reflects broader challenges in balancing AI innovation with the need for oversight. The EU’s approach, combining binding legislation with voluntary guidance, seeks to offer a measured path forward amid growing public scrutiny of generative AI technologies.