Meta faces legal challenge over use of European user data for AI training
The company cites a ‘legitimate interest’ in processing the data, a legal basis privacy advocates say falls short of the GDPR’s requirement for informed, opt-in consent.

Consumer rights organisation Verbraucherzentrale North Rhine-Westphalia has formally demanded that Meta halt its plan to use European users’ personal data from Instagram and Facebook to train AI systems. The demand comes ahead of Meta’s scheduled rollout on 27 May 2025, under which all public posts would be included in AI training datasets without explicit user consent.
Meta claims it has a ‘legitimate interest’ in processing the data, but privacy advocates argue this violates the EU General Data Protection Regulation (GDPR), which requires informed, opt-in consent for such uses. The move follows prior GDPR complaints filed by privacy NGO noyb, which led the Irish Data Protection Authority to temporarily block Meta’s plans in 2024. Despite this, Meta is moving forward, prompting fresh legal threats.
The Verbraucherzentrale warns that once user data is fed into AI models, it becomes virtually impossible to extract, making immediate action critical. If Meta does not voluntarily comply with the cease-and-desist letter sent on 30 April, further legal proceedings may follow.
Why this matters for civil society
This issue strikes at the heart of digital rights in Europe. Civil society depends on strong data protection frameworks to safeguard individuals from commercial overreach. Human rights advocates argue that Meta’s approach bypasses user autonomy and undermines trust in digital platforms. If left unchallenged, such practices could weaken the enforcement of the GDPR and set a dangerous precedent for how personal data is treated in AI development.