UK presses Ofcom to act on xAI’s Grok image tool
UK Technology Secretary Liz Kendall has called for swift regulatory action after xAI’s Grok was found to still generate or edit intimate deepfake images despite overnight changes. In a government statement on 9 January, she said Ofcom should provide an update “in days, not weeks” and reminded xAI that the Online Safety Act allows the regulator to seek UK service blocks if a platform refuses to comply. (gov.uk)
X’s response remains unsettled. Reuters reported that image generation and editing on Grok were limited to paying subscribers on X. The Verge and Ars Technica, however, showed that free users could still access editing through other pathways, and TechCrunch noted the standalone Grok app remained available at the time of publication. (reuters.com)
For UK‑facing platforms, the regulatory stakes are high. Ofcom can levy fines of up to £18m or 10% of global revenue and, in serious cases, apply to the courts for access restriction orders to block services via ISPs or app stores. (cnbc.com)
The regulator has already set expectations for protecting women and girls online. Ofcom’s guidance, finalised on 25 November 2025, outlines product‑level measures to reduce abuse and improve reporting, signalling how it expects firms to design safer services. (ofcom.org.uk)
Criminal law is shifting too. The government has pledged to ban ‘nudification’ tools in the Crime and Policing Bill and has previously announced plans to criminalise the creation of sexually explicit deepfakes and other intimate images without consent. (gov.uk)
Enforcement is not theoretical. Since the first Online Safety codes became enforceable in March 2025, Ofcom says it has launched 21 investigations across 69 services, issued a £20,000 fine for non‑cooperation, and is scrutinising sites that geoblock the UK to ensure they do not promote circumvention. (ofcom.org.uk)
The reputational cost is visible. The Commons Women and Equalities Committee has paused use of X over Grok‑related abuse, a warning sign for advertisers and investors assessing brand safety and governance on the platform. (theguardian.com)
More duties are arriving. As of 8 January, platforms must block unsolicited sexual images under new rules linked to the Online Safety Act, a reminder that proactive design, not paywalls, is the standard being set in the UK. (reuters.com)
For product and policy teams, this is a short‑term risk checklist: suspend or sandbox high‑risk image features where safeguards are weak; complete and publish an illegal‑harms risk assessment; instrument robust auditing and appeals; and make abuse reporting obvious and fast. Those steps reduce regulatory exposure and rebuild user trust. (ofcom.org.uk)
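The first two items on that checklist, suspending a risky feature while keeping an auditable record of every decision, can be sketched in a few lines. This is a minimal illustration, not any platform’s real API: the names `ImageEditRequest`, `Gatekeeper`, and the suspended‑feature set are all hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ImageEditRequest:
    user_id: str
    feature: str   # e.g. "image_edit" (hypothetical feature name)
    region: str    # ISO country code, e.g. "GB"

@dataclass
class Gatekeeper:
    # Features suspended pending a completed illegal-harms risk assessment.
    suspended_features: set = field(default_factory=lambda: {"image_edit"})
    audit_log: list = field(default_factory=list)

    def allow(self, req: ImageEditRequest) -> bool:
        decision = req.feature not in self.suspended_features
        # Every decision is logged, allowed or refused, so the record
        # is auditable after the fact.
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "user": req.user_id,
            "feature": req.feature,
            "region": req.region,
            "allowed": decision,
        })
        return decision

gate = Gatekeeper()
allowed = gate.allow(ImageEditRequest("u1", "image_edit", "GB"))
print(allowed)              # False: the feature is suspended platform-wide
print(len(gate.audit_log))  # 1: the refusal itself is still recorded
```

The point of the sketch is the separation of concerns: the kill switch is a single data change (membership in `suspended_features`), while the audit trail is written on every call regardless of outcome, which is what makes enforcement responses demonstrable.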
For investors, the downside is quantifiable: up to 10% of global turnover and potential UK blocking. Platforms that can prove real‑time detection, swift takedown, and auditable logs will protect revenue better than those trying to offset risk behind a paywall. (cnbc.com)