EU tech chief Virkkunen opens door to AI Act implementation delays

As the August 2025 deadline approaches, the Commission faces a critical decision that will shape not only the future of AI regulation in Europe but also the broader trajectory of EU digital policy. The ultimate choice between maintaining the current timeline and implementing delays will likely depend on the progress made in developing essential technical standards and the continued pressure from stakeholders across the AI ecosystem.

The European Union’s Executive Vice-President for Tech Sovereignty, Security and Democracy, Henna Virkkunen, has confirmed that postponing certain provisions of the landmark AI Act remains a viable option amid mounting industry pressure and implementation challenges. Her comments come as the world’s first comprehensive AI regulation faces its most critical implementation phase, with key deadlines approaching in August 2025 and concerns growing over the readiness of essential technical standards and guidance documents. The ‘stop the clock’ mechanism, which would pause application deadlines when compliance tools are not ready, represents a potential shift in the EU’s approach to AI Act enforcement.

During a meeting with EU digital ministers in Luxembourg on Friday, 6 June 2025, Virkkunen said: ‘If we see that the standards and guidelines … are not ready in time, we should not rule out postponing some parts of the AI Act’, acknowledging the practical challenges facing the regulation’s implementation. The statement follows months of increasing pressure from both industry stakeholders and some EU member states seeking greater implementation flexibility.

Virkkunen’s comments are particularly significant as the most recent and most definitive statement from EU leadership on the question of delays.

Virkkunen’s stance has evolved over several months of her tenure as Tech Commissioner, beginning with early commitments to simplification and cutting red tape during her confirmation hearings. Throughout her first 100 days in office, she engaged extensively with industry representatives, conducting 29 meetings with external companies, many of which focused on AI-related discussions. These interactions appear to have informed her understanding of the practical challenges facing AI Act compliance.

The call for implementation delays has gained support from multiple stakeholders across the EU ecosystem. Poland has emerged as a key advocate for the ‘stop the clock’ mechanism, formally requesting that application deadlines be frozen pending the completion of implementing regulations. This proposal was included in a note discussed at the 6 June Telecom Council meeting, reflecting broader input from member states and industry on simplifying digital regulation.

The Polish initiative extends beyond just the AI Act, proposing similar pauses for other digital legislative packages, including the Data Act and Digital Services Act, when essential technical standards remain incomplete. 

Industry pressure has intensified in recent weeks, with companies expressing concern about meeting compliance obligations without adequate guidance. The business community’s lobbying efforts have been particularly focused on the technical standards gap, arguing that meaningful compliance is impossible without clear, finalised guidance on how to meet the AI Act’s requirements. This pressure has been amplified by concerns about competitive disadvantage, particularly given the rapid pace of AI development in other jurisdictions, notably the United States.

The implementation challenges

The core issue driving calls for implementation delays centres on the incomplete state of critical technical standards and guidance documents necessary for AI Act compliance. The most significant gap exists in the code of practice for general-purpose AI models, which remains unfinished despite the August 2025 deadline for major AI model providers. This document is crucial for companies developing and deploying large language models and other advanced AI systems, as it will provide specific guidance on how to meet the Act’s requirements for risk assessment, testing, and documentation.

The European Commission has launched a consultation and call for evidence specifically aimed at identifying ‘further measures that are needed to facilitate a smooth, streamlined and simple application of the AI Act’. This consultation process is designed to inform the Commission’s broader simplification efforts and potentially feed into revision proposals for the legislation itself. 

The technical standards development process has proven more complex and time-consuming than initially anticipated, involving multiple stakeholders, including industry experts, academic researchers, and regulatory bodies. The challenge lies not only in creating technically sound standards but also in ensuring they are practical for implementation across the diverse landscape of AI applications covered by the Act. This complexity is compounded by the rapid evolution of AI technology, which can outpace the standards development process.
