Brazilian authorities issue joint recommendation to X over Grok AI
Brazilian public authorities have issued a joint recommendation to X Brasil Internet Ltda. concerning the operation of the Grok artificial intelligence system, following reports that the tool enables the creation of non-consensual sexualised synthetic content involving identifiable individuals, including children and adolescents.
The recommendation was issued jointly by the Ministério Público Federal (MPF), the Autoridade Nacional de Proteção de Dados (ANPD), and the Secretaria Nacional do Consumidor (Senacon). It follows multiple representations submitted to the authorities and builds on a prior technical assessment conducted by the ANPD.
According to the document, the Grok system, integrated into the X platform, has been used to generate and edit synthetic images and other media depicting real individuals in sexualised contexts without their consent. The authorities stress that the risks are heightened where such content involves children and adolescents, citing potential violations of constitutional protections, data protection law, consumer law, and child protection legislation.
The recommendation calls on X to immediately implement technical and governance measures to prevent Grok from generating sexualised synthetic images, videos, or audio involving minors, or involving identifiable adults without their authorisation. It also sets out additional steps, including the removal of existing content of this nature, stricter enforcement of X’s own platform policies, the establishment of effective user complaint mechanisms, and the preparation of a dedicated data protection impact report covering Grok’s image-generation features.
The authorities set a deadline of 27 January 2026 for X to confirm compliance with the initial measures and to provide a timeline for implementing the remaining recommendations. The document notes that failure to comply may lead to administrative or judicial action under applicable Brazilian law.
The joint recommendation is framed as a preventive and supervisory measure rather than a final enforcement decision, and forms part of broader regulatory scrutiny of generative AI systems and their impact on data protection, consumer rights, and the protection of children online.
