The UK government has condemned Elon Musk’s platform X for merely restricting its AI image-editing tool Grok to paid subscribers rather than removing the capability, calling the move ‘insulting’ to victims of sexual violence and threatening a ban if the company fails to comply with online safety laws.
X implemented the restriction after Grok was used to generate non-consensual sexualized images, including deepfakes that digitally undressed individuals without their consent. The tool, which can be tagged in posts to edit images, had been exploited to alter photos of women and children, leading to widespread outrage and calls for action from politicians and advocacy groups.
Technology Secretary Liz Kendall stated that sexually manipulating images of women and children is ‘despicable and abhorrent,’ and said she would support the regulator Ofcom if it decided to block access to X in the UK. Under the Online Safety Act, Ofcom has the power to seek court orders to disrupt services that refuse to comply with UK law, though such measures remain largely untested.
Elon Musk responded on X, accusing the UK government of using the outcry as an ‘excuse for censorship.’ He questioned why other AI platforms are not being scrutinized, highlighting the ongoing tension between tech companies and regulatory bodies over content moderation and free speech.
Victims and experts have expressed concern over the psychological impact of such AI-generated imagery. Dr. Daisy Dixon, a lecturer who experienced harassment via Grok, welcomed the change but called for a complete redesign of the tool with ethical guardrails. Similarly, the Internet Watch Foundation noted that limiting access does not undo the harm already caused.
The controversy has prompted cross-party political condemnation, with Prime Minister Sir Keir Starmer describing the use of Grok for such purposes as ‘disgraceful,’ while Reform UK leader Nigel Farage criticized the idea of banning X as an attack on free speech. Meanwhile, some Labour MPs have urged the government to stop using X for official communications due to safety concerns.
Ofcom is conducting an expedited assessment of X’s compliance and has set deadlines for the company to explain its actions. A spokesperson for the regulator confirmed that urgent contact had been made with X, with further updates expected shortly as regulatory pressure mounts.
Looking ahead, the episode underscores the challenge of balancing innovation with safety in the AI era. As governments worldwide grapple with deepfake legislation, the incident may accelerate calls for stricter regulation and ethical standards in AI development, potentially prompting broader reforms in how tech companies handle harmful content.
