The European Commission has opened a formal investigation into X under the Digital Services Act (DSA) over the deployment of its AI tool Grok within the EU.
This latest regulatory move escalates scrutiny of X’s compliance as a designated very large online platform. The investigation examines whether X properly assessed and addressed the risks of introducing Grok into its services, including concerns about the dissemination of illegal material such as manipulated sexually explicit content, potentially including child sexual abuse material.
Broader examination of recommender systems and AI deployment
In parallel, the Commission has broadened its earlier inquiry into X’s recommender systems, opened in December 2023. The expanded review will examine in greater depth whether X has adequately identified and mitigated systemic risks in how content is prioritized and distributed, particularly in light of the platform’s shift to a Grok-based recommender system.
The Commission will scrutinize whether X conducted an appropriate targeted risk assessment for Grok before its deployment. This includes evaluating compliance with obligations to prevent the dissemination of illegal content, manage risks linked to gender-based violence, and safeguard users’ physical and mental well-being.
Should the Commission find breaches in these areas, they could amount to violations of several DSA provisions on systemic risk management and transparency. Notably, the opening of formal proceedings does not prejudge the final outcome.
The Irish media regulator Coimisiún na Meán, the national Digital Services Coordinator for X’s EU establishment, is cooperating in the investigation. The Commission reserves the right to request additional information, conduct inspections, or adopt interim measures if necessary.
This case follows earlier enforcement action against X, including a non-compliance decision issued in December 2025 that resulted in a EUR 120 million fine for breaches related to advertising transparency, data access for researchers, and interface design practices.
Commission representatives stressed that the aim is to ensure adequate protection of European users, particularly women and children, as AI-driven features are deployed.