Deepfake intimate image offences start 6 February 2026
The law has moved. From 6 February 2026, it will be a criminal offence in England and Wales to create a synthetic (“purported”) intimate image of an adult without consent, or to ask someone to make one. The change arrives via the Data (Use and Access) Act 2025, after the Ministry of Justice signed the fifth commencement regulations on 15 January. For operational teams, the headline is simple: prompts alone can now trigger criminal liability. (legislation.gov.uk)
What exactly changed? Section 138 of the 2025 Act inserts new sections 66E to 66H into the Sexual Offences Act 2003. Section 66E covers creating a purported intimate image of an adult without consent (or reasonable belief in consent). Section 66F criminalises requesting such an image, whether the request is general or specific. Both are summary offences carrying up to six months’ custody (rising to 51 weeks when sentencing changes take effect) and/or a fine. (legislation.gov.uk)
The request offence is broader than many expect. A “request” includes doing something that could reasonably be taken as a request - for example, agreeing to an offer or clicking through conditions. It applies regardless of whether the image is ever created and regardless of where the person making or receiving the request is located. In short: wording inside a chat, DM, email or job brief can be enough. (legislation.gov.uk)
The law also defines what counts as an “intimate state”. It spans sexual acts, behaviour a reasonable person would consider sexual, exposed genitals, buttocks or breasts (including through wet or transparent clothing or covered only by underwear), and certain toileting or related personal care acts. This definition already sits in section 66D of the 2003 Act and is now imported for deepfakes. (legislation.gov.uk)
Prosecutors have longer to act. Under new section 66H, a case may be brought in a magistrates’ court within six months of sufficient evidence coming to the prosecutor’s knowledge, subject to an overall limit of three years from the date of the offence. That raises the bar for governance: audit trails and retention policies become more important, not less. (legislation.gov.uk)
Courts also gain clearer powers to seize images and devices. Amendments to the Sentencing Act 2020 and Armed Forces Act 2006 allow deprivation orders tied to these offences, meaning phones, laptops or drives containing the images can be taken on conviction - a risk that extends into the Service Justice System. (publications.parliament.uk)
Two scope points matter for policy. First, the new offences relate to adults; abuse involving child images is already captured elsewhere in law. Second, existing Online Safety Act duties remain live: Ofcom treats intimate image abuse as a priority area, with significant fines for platforms that fail to assess and mitigate risks. (gov.uk)
Why this matters for AI startups: your product can be the scene of the crime even if you are not the perpetrator. A user who prompts a model to produce a non‑consensual intimate image may be committing the 66F offence at the moment of asking. You cannot be charged with “encouraging or assisting” that requesting offence under the Serious Crime Act 2007 (the Act disapplies those inchoate offences for 66F), but you still face Online Safety Act duties and reputational risk if your tooling enables harm. Build prompt classifiers for sexualised requests, require explicit consent artefacts when real people are referenced, and log denials with minimal personal data to evidence reasonable steps. (legislation.gov.uk)
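To make the “classify, check consent, log minimally” pattern concrete, here is a minimal Python sketch. Everything in it is illustrative: the keyword regexes stand in for a trained classifier, has_consent_artefact and the JSONL denial log are hypothetical design choices, and a production gate would verify consent against signed records rather than tokens in the prompt.

```python
"""Minimal sketch of a prompt gate for non-consensual intimate imagery.

Illustrative only: keyword lists, function names and the log schema are
hypothetical; a real system would use a trained classifier and verified
identity/consent records, not regexes and plain strings.
"""
import hashlib
import json
import re
from datetime import datetime, timezone

# Crude signal lists -- placeholders for a real classifier.
SEXUALISED = re.compile(r"\b(nude|naked|undress|topless|nsfw|explicit)\b", re.I)
REAL_PERSON = re.compile(r"\b(photo of|picture of|deepfake of|my (ex|colleague|boss))\b", re.I)


def has_consent_artefact(prompt: str, consent_records: set[str]) -> bool:
    """Hypothetical check: has the depicted person lodged verified consent?

    consent_records here is a set of opaque consent tokens the user must
    supply with the prompt; real systems would validate signed records.
    """
    return any(token in prompt for token in consent_records)


def log_denial(prompt: str, reason: str, log_path: str = "denials.jsonl") -> None:
    """Record a refusal with minimal personal data: a hash, not the text."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "reason": reason,
    }
    with open(log_path, "a") as fh:
        fh.write(json.dumps(entry) + "\n")


def gate(prompt: str, consent_records: set[str]) -> bool:
    """Return True if generation may proceed, False if refused."""
    if SEXUALISED.search(prompt) and REAL_PERSON.search(prompt):
        if not has_consent_artefact(prompt, consent_records):
            log_denial(prompt, "sexualised request referencing a real person, no consent artefact")
            return False
    return True


if __name__ == "__main__":
    print(gate("a nude photo of my colleague", consent_records=set()))  # False
```

Hashing the prompt rather than storing it verbatim is the point of the sketch: the denial record proves a refusal happened and can be matched against a prompt later produced in evidence, without the log itself becoming a store of abusive text or personal data.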
For social platforms and image generators, the direction of travel is clear. Ministers have indicated they will treat the deepfake creation and request offences as priority offences under the Online Safety Act, and regulators are already scrutinising services after the Grok incident on X. Expect enforcement that focuses on risk assessments, proactive detection and speed of removal - with penalties of up to £18 million or 10% of qualifying worldwide revenue, whichever is greater, for systemic failures. (gov.uk)
For UK employers, HR and legal teams should update codes of conduct, acceptable‑use policies and disciplinary procedures. A staff member asking a colleague or a third‑party contractor to make a deepfake of another adult could now commit a criminal offence regardless of whether an image is produced. Internal chat tools, BYOD devices and procurement of creative AI services should be covered by clear rules and training. (legislation.gov.uk)
Compliance heads should note the time limits and evidence rules. Because a prosecution can be brought up to three years after the offence (and within six months of evidence reaching the prosecutor), reasonable retention of safety logs, moderation decisions and consent checks becomes a defensive asset. Keep processes proportionate to UK GDPR and the 2025 Act’s privacy changes, but ensure you can demonstrate your steps to verify consent and block abuse; a class-based retention schedule along the lines sketched below is one way to square the two. (legislation.gov.uk)
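The sketch below shows one way to key retention to the limitation window. The record classes and periods are hypothetical choices for illustration, not legal advice: safety-relevant records outlive the three-year section 66H window with a margin, while verbatim prompt text is held only briefly, consistent with data minimisation.

```python
"""Hypothetical retention schedule keyed to the section 66H limitation window.

Periods and record classes are illustrative assumptions, not requirements.
"""
from datetime import datetime, timedelta, timezone

# Illustrative retention periods per record class.
RETENTION = {
    "denial_log": timedelta(days=3 * 365 + 90),        # 3-year window plus margin
    "moderation_decision": timedelta(days=3 * 365 + 90),
    "consent_artefact": timedelta(days=3 * 365 + 90),
    "raw_prompt_text": timedelta(days=30),             # keep verbatim prompts briefly, if at all
}


def is_expired(record_class: str, created_at: datetime, now: datetime | None = None) -> bool:
    """True once a record has outlived its class's retention period."""
    now = now or datetime.now(timezone.utc)
    return now - created_at > RETENTION[record_class]


# Example: a denial logged on the commencement date is still held even if a
# prosecution begins near the end of the three-year window.
logged = datetime(2026, 2, 6, tzinfo=timezone.utc)
print(is_expired("denial_log", logged, now=datetime(2029, 2, 6, tzinfo=timezone.utc)))  # False
```

Separating record classes like this lets the privacy-sensitive material (raw prompts) expire quickly while the low-risk evidential trail (hashes, decisions, consent checks) survives long enough to matter in proceedings.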
Finally, context. The Government has framed this step as part of a wider clampdown on intimate image abuse, following earlier changes that criminalised sharing and threatening to share such content, and as scrutiny of major platforms intensifies. For operators, the practical takeaway is to treat intimate deepfakes like any other high‑risk, illegal content category: prevent at source, document the rationale, and act quickly when signals fire. (legislation.gov.uk)