UK Prime Minister Sir Keir Starmer has warned that Elon Musk’s X could lose the “right to self regulate” after its Grok AI tool was linked to the creation and circulation of illegal sexualised imagery, prompting a formal Ofcom investigation and an accelerated UK government response.
Background
The controversy centred on X, formerly Twitter, and its AI chatbot Grok, developed by xAI. In early January 2026, multiple reports and user complaints highlighted that the Grok account on X had been used to generate and share digitally altered images of real people, including women being undressed or placed into sexualised scenarios without their consent. Some of the reported material involved sexualised images of children, raising concerns that the content could meet the legal definition of child sexual abuse material.
In several cases, individuals said large volumes of sexualised images had been created using the tool, with content spreading rapidly once posted. Campaigners argued that the integration of AI image generation directly into a social platform significantly increased the speed and scale at which this form of abuse could occur.
The issue fed into a wider debate about AI-generated intimate image abuse, sometimes referred to as nudification or deepfake sexual imagery. While the sharing of such material has long been illegal in the UK, ministers argued that generative AI had transformed the threat by lowering the technical barrier to abuse and increasing the likelihood of mass distribution.
The Warning
The political response escalated on Monday 12 January 2026, when UK Prime Minister Keir Starmer addressed Labour MPs at a meeting of the Parliamentary Labour Party. During that meeting, Starmer warned that X could lose the “right to self regulate” if it could not control how Grok was being used. He said: “If X cannot control Grok, we will – and we’ll do it fast, because if you profit from harm and abuse, you lose the right to self regulate.”
The warning came on the same day that Ofcom confirmed it had opened a formal investigation into X under the Online Safety Act, citing serious concerns about the use of Grok to generate illegal content.
On 15 January, Starmer reinforced his position publicly on X. In a post shared from his account, he wrote: “Free speech is not the freedom to violate consent. Young women’s images are not public property, and their safety is not up for debate.”
He added: “I welcome that X is now acting to ensure full compliance with UK law – it must happen immediately. If we need to strengthen existing laws further, we are prepared to do that.”
The timing was deliberate, as the warning coincided with mounting pressure on the government to demonstrate that recently passed online safety laws would be enforced decisively, including against the largest global platforms.
Why Grok Became A Regulatory Flashpoint
Grok’s image generation capability was not unique in the AI market, but its deployment inside a major social platform raised specific risks. Because Grok was embedded directly into X’s interface, images could be generated and shared within the same environment, reducing the friction between creation and publication and increasing the likelihood that harmful material could circulate widely before being detected or removed.
Ofcom said it made urgent contact with X on 5 January and required the company to explain what steps it had taken to protect UK users by 9 January. While X responded within that deadline, the regulator concluded that the situation warranted a formal investigation.
Ofcom said there had been “deeply concerning reports” of the Grok account being used to create and share undressed images of people that may amount to intimate image abuse, as well as sexualised images of children that may constitute child sexual abuse material.
What Losing The Right To Self Regulate Would Mean
Losing the right to self regulate would carry serious consequences for X.
Under the Online Safety Act, platforms are expected to assess the risks their services pose and put effective systems in place to prevent users in the UK from encountering illegal content. Ofcom does not moderate individual posts and does not decide what should be taken down.
Instead, its role is to assess whether a platform has taken appropriate and proportionate steps to meet its legal duties, particularly when it comes to protecting children and preventing the spread of priority illegal content.
Starmer’s warning made clear that if X is judged unable or unwilling to manage those risks through its own systems, the government and regulator are prepared to intervene more directly, shifting the balance away from platform-led oversight and towards formal enforcement.
In practical terms, that could mean, for example, Ofcom imposing specific compliance requirements, backed by legal powers, rather than relying on X’s own judgement about what safeguards were sufficient.
Under the Act, Ofcom can issue fines of up to £18 million or 10 per cent of qualifying worldwide revenue, whichever is greater. In the most serious cases of ongoing non-compliance, it can apply to the courts for business disruption measures.
These measures can include requiring payment providers or advertisers to withdraw services, or requiring internet service providers to block access to a platform in the UK.
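To make the “whichever is greater” penalty arithmetic concrete, here is a minimal sketch in Python; the revenue figures are hypothetical and used purely for illustration:

```python
def max_osa_fine(qualifying_worldwide_revenue_gbp: float) -> float:
    """Penalty ceiling under the Online Safety Act: the greater of
    £18 million or 10 per cent of qualifying worldwide revenue."""
    return max(18_000_000.0, 0.10 * qualifying_worldwide_revenue_gbp)


# Hypothetical revenue figures for illustration only.
for revenue in (100_000_000, 2_500_000_000):
    print(f"Revenue £{revenue:,} -> ceiling £{max_osa_fine(revenue):,.0f}")
# Revenue £100,000,000 -> ceiling £18,000,000 (10% is £10m, so the £18m floor applies)
# Revenue £2,500,000,000 -> ceiling £250,000,000 (10% exceeds £18m)
```

In effect, the £18 million figure acts as a floor for smaller services, while the 10 per cent measure dominates for the largest global platforms.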
What Is Ofcom’s Investigation Examining?
Ofcom said its investigation would examine whether X had complied with several core duties under the Online Safety Act. These include whether X had adequately assessed the risk of UK users encountering illegal content, whether it had taken appropriate steps to prevent exposure to priority illegal content such as non-consensual intimate images and child sexual abuse material, and whether it had removed illegal content swiftly when it became aware of it.
The regulator is also examining whether X properly assessed risks to children and whether it used “highly effective age assurance” to prevent children from accessing pornographic material.
Suzanne Cater, Ofcom’s Director of Enforcement, said: “Reports of Grok being used to create and share illegal non-consensual intimate images and child sexual abuse material on X have been deeply concerning.”
She added: “Platforms must protect people in the UK from content that’s illegal in the UK, and we won’t hesitate to investigate where we suspect companies are failing in their duties, especially where there’s a risk of harm to children.”
While Ofcom acknowledged changes made by X, it said the investigation remained ongoing and that it was working “round the clock” to establish what went wrong and how risks were being addressed.
The Response
X and xAI (Elon Musk’s AI company behind Grok) reportedly responded by tightening controls around Grok’s image generation features and publicly setting out their compliance position.
For example, X said it had introduced technical measures to stop the Grok account on the platform from being used to edit images of real people to depict them in revealing clothing, including swimwear. These restrictions apply globally and cover both free and paid users.
The company also said it had limited image creation and image editing via the Grok account on X to paid subscribers only, arguing this would improve accountability where the tool is misused.
In addition, X said it would geoblock the ability to generate images of real people in underwear or similar attire in jurisdictions where such material is illegal. xAI confirmed it was rolling out comparable geoblocking controls in the standalone Grok app.
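X has not published implementation details, but the controls it described amount to a policy gate keyed on content category and user jurisdiction, layered on top of the paid-subscriber restriction. Below is a minimal illustrative sketch of that idea in Python; the function names, category labels, and jurisdiction list are hypothetical assumptions, not X’s or xAI’s actual code:

```python
# Hypothetical sketch of a jurisdiction-based content-policy gate.
# All names, categories, and the blocked-jurisdiction set are
# illustrative assumptions, not X's or xAI's actual implementation.
RESTRICTED_CATEGORY = "real_person_underwear_or_similar"

# Jurisdictions where generating this category is treated as illegal
# (illustrative; a real system would rely on maintained legal guidance).
BLOCKED_JURISDICTIONS = {"GB"}


def is_generation_allowed(category: str, user_jurisdiction: str,
                          is_paid_subscriber: bool) -> bool:
    """Apply the publicly described controls in order: paid-only access
    to image tools, then per-jurisdiction geoblocking of the restricted
    category."""
    if not is_paid_subscriber:
        # Image creation and editing limited to paid subscribers.
        return False
    if category == RESTRICTED_CATEGORY and user_jurisdiction in BLOCKED_JURISDICTIONS:
        # Geoblocked where the material is illegal.
        return False
    return True


print(is_generation_allowed("real_person_underwear_or_similar", "GB", True))  # False
print(is_generation_allowed("landscape_photo", "GB", True))                   # True
```

Even in this simplified form, the sketch illustrates why critics view such controls as narrow: the gate only blocks what it can classify, so requests that evade the category labels pass through unchecked.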
Alongside these changes, X was keen to say it has zero tolerance for child sexual exploitation and non-consensual intimate imagery, and that accounts found to be generating or sharing such content would face enforcement action, including permanent suspension.
At the same time, Elon Musk criticised the UK government’s response, suggesting it amounted to an attempt to restrict free expression. UK ministers rejected that characterisation, maintaining that the action was about enforcing criminal law and protecting people from serious harm, not limiting lawful speech.
The Government’s Legal And Policy Response
The regulatory pressure on X was matched by swift legislative action from the UK government. Liz Kendall, the Technology Secretary, told MPs that the Data (Use and Access) Act had already created an offence covering the creation or request of non-consensual intimate images, but that the offence had not yet been brought into force.
She said the offence would be commenced that week and would also be treated as a priority offence under the Online Safety Act. Kendall described AI-generated sexualised images as “weapons of abuse” and said the material circulating on X was illegal.
She also said the government would criminalise the supply of tools designed specifically to create non-consensual intimate images, targeting what she described as the problem “at its source”.
Kendall rejected claims that the response was about limiting lawful speech, saying it was about tackling violence against women and girls.
Wider Implications For Platforms, AI Tools, And Users
This case has become one of the most high-profile tests of the Online Safety Act since its duties came into force. For X, the risks include financial penalties, enforced changes to how Grok operates in the UK, and long-term reputational damage if the platform is seen as unsafe or slow to respond.
For other platforms and AI providers, the episode is also likely to send a clear signal that generative tools embedded into social systems will be scrutinised under UK law, regardless of where the technology is developed.
For businesses that use X for marketing, customer engagement, or recruitment, the dispute raises questions around brand safety, platform governance, and the risks of operating on a service under active regulatory investigation.
Also, at a regulatory level, the case shows that Ofcom is prepared to pursue major global platforms and to use the full range of powers available under the Online Safety Act where serious harm is alleged.
Challenges And Criticisms
Despite the technical changes and the government’s legislative response, this episode has exposed a number of unresolved challenges and points of criticism. One of the clearest tensions is between political pressure for rapid enforcement and the need for legally robust regulatory processes. Ministers have urged Ofcom not to allow investigations to drift, while the regulator has repeatedly stressed that it must follow the formal steps set out in the Online Safety Act.
There are also questions about the effectiveness of narrowly targeted technical controls. For example, critics have pointed to Grok’s earlier design choices, including permissive modes that encouraged provocative or boundary-testing outputs, as contributing to misuse. From that perspective, restricting specific prompts or image categories may address symptoms rather than the underlying incentives built into generative AI tools.
Age assurance, i.e., the methods used to verify whether a user is a child or an adult, also remains a significant area of concern. Ofcom has highlighted the need for “highly effective” protections for children, but deploying such systems at scale continues to raise questions around accuracy, privacy, and user trust.
What Does This Mean For Your Business?
The dispute around X and Grok has clarified how far the UK government is prepared to go when online platforms are judged to be falling short of their legal duties, particularly where new AI tools are involved. The warning issued by the Prime Minister was not just rhetorical; it underlined a willingness to move beyond cooperative regulation if a platform cannot demonstrate that it understands and controls the risks created by its own systems.
For UK businesses, the case is a reminder that platform risk is no longer just a reputational issue but also a regulatory one. Organisations that rely on X for marketing, customer engagement, recruitment, or public communication should know that they are now operating on a platform under active regulatory scrutiny. That raises practical questions around brand safety, governance, and contingency planning, especially if enforcement action leads to service restrictions or further operational changes.
Also, the episode sets a precedent for how AI features embedded within digital services are likely to be treated under UK law. Ofcom’s investigation, alongside the government’s decision to accelerate legislation, signals that generative AI will be judged not only on innovation but on real-world impact.
For platforms, AI developers, regulators, and users alike, the expectations are now clear. Companies rolling out generative AI tools are expected to build in safeguards from the outset, respond quickly when misuse occurs, and show regulators that risks are being actively managed, not simply acknowledged after the fact.