Meta faces legal risk over AI training as noyb issues cease and desist letter

European privacy group noyb, led by Max Schrems, has issued a formal cease and desist letter to Meta Ireland. Under the EU Collective Redress Directive, this move is a first step toward a possible EU-wide injunction. If Meta does not change course, noyb may pursue collective legal action to seek damages for affected users, potentially amounting to billions of euros.

The dispute centers on Meta’s decision to use personal data from Facebook and Instagram users in the EU to train its AI systems, starting 27 May 2025. Instead of asking users for explicit opt-in consent, Meta invokes ‘legitimate interest’ as the legal basis for processing this data, leaving users only the option to object. According to noyb, this violates the General Data Protection Regulation (GDPR), which requires consent to be freely given, specific, informed, and unambiguous.

Noyb argues that Meta’s approach undermines core privacy rights and sets a dangerous precedent. The group also warns that Meta’s reliance on opt-out mechanisms, combined with limited transparency toward users, could make it difficult to uphold other GDPR rights, such as the right to erasure or rectification, especially once personal data has been embedded in AI models.

Schrems argues that Meta’s justification lacks credibility, noting that the European Court of Justice has already ruled against similar uses of ‘legitimate interest’ for advertising. He also points out that many successful AI developers, including OpenAI and France’s Mistral, do not rely on personal data from social networks and yet deliver better-performing models.

Beyond noyb’s action, other European groups are also preparing legal challenges. In Germany, the consumer protection organisation VZ NRW has already initiated proceedings, and more national cases are expected to follow. If any court grants an injunction, Meta could be forced not only to stop using EU data but also to delete AI models trained on it. This could create further liabilities if it turns out that illegally obtained data was mixed into existing AI systems.

Meanwhile, national data protection authorities have taken a largely passive stance. Instead of enforcing compliance, many have simply advised users to opt out individually. According to noyb, this shifts the burden onto citizens and weakens the role of regulators.

At the heart of the case is a fundamental question: should companies ask for permission before using personal data, or should they be allowed to take it by default? Schrems argues that Meta is choosing the latter to avoid a drop in training data volume, despite legal and reputational risks. With over 400 million monthly active users in Europe, even modest per-user compensation could result in one of the largest privacy-related payouts in EU history.