
Ofcom Investigates X Over Grok Sexualised Imagery

Ofcom News Centre
Filed January 12th, 2026
Detected February 7th, 2026

Summary

Ofcom has launched a formal investigation into X (formerly Twitter) concerning allegations that its Grok AI chatbot was used to create and share sexualised imagery, including child sexual abuse material. The investigation will determine if X has complied with its duties under the UK's Online Safety Act to protect users from illegal content and harm.

What changed

Ofcom, the UK's online safety regulator, has initiated a formal investigation into X regarding reports of its Grok AI chatbot being used to generate and distribute undressed images of individuals and sexualised images of children. This action falls under the UK's Online Safety Act, and Ofcom will examine whether X has met its legal obligations, including assessing risks, preventing illegal content, swift takedown of such content, and protecting users from privacy breaches and child sexual abuse material.

This investigation signifies a critical enforcement action by Ofcom, demanding X demonstrate compliance with its duties. While X has reported implementing measures to prevent further misuse of the Grok account, Ofcom's investigation remains active to ascertain what went wrong and ensure corrective actions are sufficient. Companies operating online services in the UK must be prepared for scrutiny regarding illegal content and child protection measures, with potential consequences for non-compliance under the Online Safety Act.

Source document (simplified)



Ofcom launches investigation into X over Grok sexualised imagery



Online safety | Illegal and harmful content | Protecting children | News and updates

Published: 12 January 2026
Last updated: 15 January 2026

Update, 15 January 2026

X has said it has implemented measures to prevent the Grok account from being used to create intimate images of people.

This is a welcome development. However, our formal investigation remains ongoing. We are working round the clock to progress this and to get answers about what went wrong and what’s being done to fix it.

Original release, 12 January 2026

The UK’s independent online safety watchdog, Ofcom, has today opened a formal investigation into X under the UK’s Online Safety Act, to determine whether it has complied with its duties to protect people in the UK from content that is illegal in the UK.

Our initial assessment

There have been deeply concerning reports of the Grok AI chatbot account on X being used to create and share undressed images of people – which may amount to intimate image abuse or pornography – and sexualised images of children that may amount to child sexual abuse material (CSAM). [1]

As the UK’s independent online safety watchdog, we urgently made contact with X on Monday 5 January and set a firm deadline of Friday 9 January for it to explain what steps it has taken to comply with its duties to protect its users in the UK.

The company responded by the deadline, and we carried out an expedited assessment of available evidence as a matter of urgency. [2]

What our investigation will examine

Ofcom has decided to open a formal investigation to establish whether X has failed to comply with its legal obligations under the Online Safety Act – in particular, to:

  • assess the risk of people in the UK seeing content that is illegal in the UK, and to carry out an updated risk assessment before making any significant changes to their service;
  • take appropriate steps to prevent people in the UK from seeing ‘priority’ illegal content – including non-consensual intimate images and CSAM; [3]
  • take down illegal content swiftly when they become aware of it;
  • have regard to protecting users from a breach of privacy laws;
  • assess the risk their service poses to UK children, and to carry out an updated risk assessment before making any significant changes to their service; and
  • use highly effective age assurance to protect UK children from seeing pornography. [4]

Ofcom’s role

The legal responsibility is on platforms to decide whether content breaks UK laws, and they can use our Illegal Content Judgements Guidance when making these decisions. Ofcom is not a censor – we do not tell platforms which specific posts or accounts to take down.

Our job is to judge whether sites and apps have taken appropriate steps to protect people in the UK from content that is illegal in the UK, and protect UK children from other content that is harmful to them, such as pornography.

Ofcom’s investigation process

The Online Safety Act sets out the process Ofcom must follow when investigating a company and deciding whether it has failed to comply with its legal obligations. [5]

Our first step is to gather and analyse evidence to determine whether a breach has occurred. If, based on that evidence, we consider that a compliance failure has taken place, we will issue a provisional decision to the company, which will then have an opportunity to respond to our findings in full, as required by the Act, before we make our final decision.

Enforcement powers

If our investigation finds that a company has broken the law, we can require platforms to take specific steps to come into compliance or to remedy harm caused by the breach. We can also impose fines of up to £18 million or 10% of qualifying worldwide revenue, whichever is greater. [6]
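The "whichever is greater" rule means the cap scales with a platform's size: the fixed £18 million figure only binds when 10% of qualifying worldwide revenue falls below it. A minimal sketch of that arithmetic (the function name and revenue figures are illustrative, not from the Act):

```python
def penalty_cap(qualifying_worldwide_revenue: float) -> float:
    """Maximum fine under the Online Safety Act: the greater of
    GBP 18 million or 10% of qualifying worldwide revenue."""
    FIXED_CAP = 18_000_000  # GBP 18 million floor
    REVENUE_SHARE = 0.10    # 10% of qualifying worldwide revenue
    return max(FIXED_CAP, REVENUE_SHARE * qualifying_worldwide_revenue)

# For a company with GBP 50m revenue, 10% is GBP 5m, so the fixed
# GBP 18m cap applies; at GBP 1bn revenue, the 10% share dominates.
```

So for the largest platforms the effective cap is revenue-linked rather than the headline £18 million figure.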

In the most serious cases of ongoing non-compliance, we can make an application to a court for ‘business disruption measures’, through which a court could impose an order, on an interim or full basis, requiring payment providers or advertisers to withdraw their services from a platform, or requiring internet service providers to block access to a site in the UK. The court may only impose such orders where appropriate and proportionate to prevent significant harm to individuals in the UK. [7]

UK jurisdiction

In any industry, companies that want to provide a service to people in the UK must comply with UK laws. The UK’s Online Safety Act is concerned with protecting people in the UK. It does not require platforms to restrict what people in other countries can see.

There are ways platforms can protect people in the UK without stopping their users elsewhere in the world from continuing to see that content. [8]

Suzanne Cater, Director of Enforcement at Ofcom, said: “Reports of Grok being used to create and share illegal non-consensual intimate images and child sexual abuse material on X have been deeply concerning. Platforms must protect people in the UK from content that’s illegal in the UK, and we won’t hesitate to investigate where we suspect companies are failing in their duties, especially where there’s a risk of harm to children.

“We’ll progress this investigation as a matter of the highest priority, while ensuring we follow due process. As the UK’s independent online safety enforcement agency, it’s important we make sure our investigations are legally robust and fairly decided.”

We will provide an update on this investigation as soon as possible.

Notes to editors

  1. It is illegal in the UK to share non-consensual intimate images or child sexual abuse material. See section 66D, subsections 5 to 7, of the Sexual Offences Act, as inserted by the Online Safety Act, for the definition of intimate image abuse; Schedule 6 explains the child sexual exploitation and abuse offences that are priority offences under the Act.
  2. We also received a response from xAI on Friday 9 January. We are assessing whether there are potential compliance issues with xAI – in connection with the provision of Grok – under the Online Safety Act that warrant investigation. We have sought urgent clarification from xAI on the steps it is taking to protect users in the UK.
  3. Ofcom’s illegal harms codes of practice set out safety measures providers can implement to comply with their duties, such as: having user reporting and complaints processes for illegal content that are easy to find, access and use; adequately resourcing and training content moderation teams as appropriate to deal with illegal content; and having content moderation systems that are designed to take down illegal content swiftly when they become aware of it.
  4. We have published guidance here and here setting out age assurance methods that we consider are capable of being highly effective at correctly determining whether or not a user is a child.
  5. Our Online Safety Enforcement Guidance can be found here.
  6. Since duties came into force less than a year ago, we have made use of a range of our powers under the Online Safety Act, including:
  • Launching investigations into more than 90 platforms.
  • Issuing six fines for non-compliance, including against an AI nudification site for not having robust age checks in place.
  • Issuing the first £1 million fine under the Online Safety Act against porn company AVS Group.
    As a result of our enforcement action to date:

  • Porn providers have introduced new age checks.

  • Services used to distribute CSAM have deployed hash-matching technology.

  • Some high-risk sites are no longer available to UK IP addresses.
    A full list of our enforcement actions under the Online Safety Act is available here.
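The hash-matching approach mentioned above works by comparing a fingerprint of each uploaded file against a list of fingerprints of known illegal material. A heavily simplified sketch of the idea follows; note that production systems use perceptual hashes (such as PhotoDNA), which match visually similar images rather than byte-identical files, and draw their hash lists from bodies such as the Internet Watch Foundation. The function names and example data here are purely illustrative:

```python
import hashlib

def file_digest(data: bytes) -> str:
    """Fingerprint a file's exact contents. Real deployments use
    perceptual hashes, not cryptographic ones like SHA-256."""
    return hashlib.sha256(data).hexdigest()

def is_known_match(data: bytes, blocklist: set[str]) -> bool:
    """Check an upload's fingerprint against a set of known-bad
    fingerprints supplied by a trusted hash-list provider."""
    return file_digest(data) in blocklist

# Illustrative blocklist containing one known-bad fingerprint.
blocklist = {file_digest(b"known-bad-example")}
```

The design trade-off is that a cryptographic hash like the one above only catches exact copies; perceptual hashing tolerates resizing and re-encoding, which is why it is the standard in this domain.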

  7. While we will not hesitate to use these powers where it is appropriate and proportionate, it would be a significant regulatory intervention due to the impacts it has on the availability of services and information online for people in the UK. We recently put the provider of a suicide forum on notice that we are prepared to apply to a court for business disruption measures swiftly after the period for making representations on our provisional decision has elapsed, if any non-compliance we may identify in our provisional decision continues.
  8. More information on jurisdiction is available here.

Related content

### Ofcom issues update on Online Safety Act investigations

Ofcom has today provided an update on our enforcement activity under the Online Safety Act.

### Protecting people in the UK from illegal online content – regardless of its origin

The Online Safety Act introduces new rules for providers of online user-to-user, search and pornography services, to help keep people in the UK safe from content which is illegal in the UK, and to protect children from the most harmful content such as pornography, suicide and self-harm material.

### Enforcing the Online Safety Act: Platforms must start tackling illegal material from today

From today, online platforms must start putting in place measures to protect people in the UK from criminal activity.

Source

Analysis generated by AI. Source diff and links are from the original.

Classification

Agency
Office of Communications
Filed
January 12th, 2026
Instrument
Enforcement
Legal weight
Binding
Stage
Final
Change scope
Substantive

Who this affects

Applies to
Technology companies
Geographic scope
UK

Taxonomy

Primary area
Data Privacy
Operational domain
Compliance
Topics
Child Protection, Data Privacy
