
The European Commission has opened a formal investigation into X under the Digital Services Act (DSA). The probe will assess whether X properly evaluated and mitigated risks associated with Grok, its AI tool deployed on the platform in the EU since 2024.
Grok enables users to generate text and images and to provide context for posts. The investigation follows reports of illegal and harmful content produced with the tool, including manipulated sexually explicit images, child sexual abuse material, antisemitic content, and non-consensual deepfakes of women.
History of Regulatory Scrutiny on X
The Commission is examining whether X met its obligations as a very large online platform (VLOP) under the DSA, including assessing systemic risks and protecting users from harms posed by its services.
This extends an earlier inquiry launched in December 2023, which looked at X’s recommender systems, notice-and-action mechanisms, mitigation measures against illegal content, and issues such as advertising transparency and data access for researchers. In December 2025, X was fined €120 million for non-compliance in these areas.
Commission Statement
Thomas Regnier, Spokesperson for Tech Sovereignty, Defence, Space & Research, said:
“Today the Commission is opening an investigation into Grok under the Digital Services Act. We believe that X may have breached the DSA. We’ve seen over the last weeks and months antisemitic content, non-consensual deepfakes of women, and child sexual abuse material.
“In Europe, no company will make money by violating our fundamental rights. Such output has no place in Europe, and we need to protect our citizens from potential future harms. This investigation into Grok will allow us to look deeper into the matter, to protect our women, our children, and our citizens.”
The statement emphasizes the Commission’s focus on systemic risks and fundamental rights in the EU.
Scope of Investigation
The investigation will examine whether X:
- Properly assessed and mitigated systemic risks from Grok, including illegal content, gender-based violence, and threats to physical and mental well-being.
- Conducted and submitted an ad hoc risk assessment report for Grok prior to deployment.
- Evaluated the impact of its switch to a Grok-based recommender system on systemic risks.
Potential violations could involve Articles 34(1) and (2), 35(1), and 42(2) of the DSA.
Coordination and Enforcement
The investigation is coordinated with Coimisiún na Meán, Ireland’s national Digital Services Coordinator, under Article 66(3). Formal proceedings allow the Commission to:
- Take enforcement actions, including a non-compliance decision.
- Accept commitments from X to remedy identified issues.
To gather evidence, the Commission may send information requests, conduct interviews, and carry out inspections; it may also adopt interim measures. With the opening of formal proceedings, national authorities are relieved of their enforcement powers regarding these suspected infringements.
Individuals affected by AI-generated content, including child sexual abuse material or non-consensual images, can seek assistance at the national level and file complaints with their Member State’s Digital Services Coordinator.
Outlook and Next Steps
The Commission will continue gathering evidence and assessing risks, including how Grok and X’s recommender systems function. The investigation will determine whether X’s measures were sufficient, and it may result in binding enforcement decisions or commitments to remedy the issues identified.
Speaking on the investigation, Henna Virkkunen, Executive Vice-President for Tech Sovereignty, Security, and Democracy, said:
“Sexual deepfakes of women and children are a violent, unacceptable form of abuse and degradation. With this investigation, we will determine whether X has met its legal obligations under the DSA, or whether it treated the rights of European citizens – including those of women and children – as collateral damage of its service.”
