


UK adds self-harm encouragement, cyberflashing to Online Safety priority offences

The UK has approved an update to the Online Safety Act’s list of “priority offences”, elevating serious self-harm encouragement and key intimate image crimes. Once in force, content that amounts to these offences must be identified and tackled proactively by in‑scope services, not just removed after user reports. Ministers said both Houses have signed off the regulations.

The statutory instrument was made on 18 December 2025 and takes effect 21 days later, which puts commencement on 8 January 2026. It applies across England, Wales, Scotland and Northern Ireland. That narrow window means product, policy and trust‑and‑safety teams will need to fold the new offences into illegal‑content workflows immediately after the holiday period.
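
For teams planning around the break, the commencement arithmetic is easy to verify. A trivial check, using the dates reported above:

```python
from datetime import date, timedelta

made = date(2025, 12, 18)                 # the instrument was made
commencement = made + timedelta(days=21)  # takes effect 21 days later

print(commencement.isoformat())  # 2026-01-08, i.e. 8 January 2026
```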

What’s changed in law is straightforward. The offence of encouraging or assisting serious self-harm (section 184 of the Online Safety Act 2023) is added to Schedule 7 as a priority offence. At the same time, cyberflashing (section 66A of the Sexual Offences Act 2003) is added, while the offence of sharing or threatening to share an intimate photograph or film (section 66B) remains listed. In practical terms, cyberflashing now joins the already-listed intimate image offence and the new s184 offence as harms platforms must actively prevent users from encountering.

For context on the intimate image offences: section 66A covers intentionally sending a photograph or film of genitals with intent to cause alarm, distress or humiliation, or for sexual gratification while being reckless about the impact; it carries up to two years’ imprisonment. Section 66B addresses sharing or threatening to share intimate photos or films. These provisions were inserted into the Sexual Offences Act by the Online Safety Act.

Priority status matters because it switches on stronger duties under Part 3 of the Act for regulated user‑to‑user services and search services. Providers must have proportionate systems to prevent people encountering content amounting to these offences, supported by risk assessments, clear reporting and rapid takedown. Ofcom’s codes of practice remain the reference point for what “proportionate” looks like in different service types and sizes.

Most services have already completed their first illegal‑content risk assessment under Ofcom’s initial codes, with safety duties live from mid‑March 2025. The immediate job is to revisit those assessments and controls so that s184, 66A and 66B are explicitly modelled: where and how such content might arise, how it’s detected, and how exposure is minimised. Expect Ofcom to signpost any code updates once the new offences take effect.
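
What “explicitly modelled” could look like in a risk register is sketched below; the structure, field names and example controls are illustrative assumptions, not Ofcom’s templates or any official schema.

```python
from dataclasses import dataclass

@dataclass
class PriorityOffence:
    """One Schedule 7 priority offence as modelled in a risk register."""
    ref: str               # statutory reference
    label: str             # plain-English description
    surfaces: list[str]    # where the content could arise on the service
    detection: list[str]   # how it would be detected
    mitigation: list[str]  # how user exposure is minimised

# Illustrative entries for the three offences discussed above.
register = [
    PriorityOffence(
        ref="OSA 2023 s184",
        label="Encouraging or assisting serious self-harm",
        surfaces=["posts", "comments", "group chats"],
        detection=["keyword triggers", "user reports"],
        mitigation=["proactive review queue", "crisis-resource signposting"],
    ),
    PriorityOffence(
        ref="SOA 2003 s66A",
        label="Cyberflashing",
        surfaces=["direct messages", "image uploads"],
        detection=["nudity classifier on unsolicited images", "user reports"],
        mitigation=["blur-by-default on first contact", "sender friction"],
    ),
    PriorityOffence(
        ref="SOA 2003 s66B",
        label="Sharing or threatening to share intimate images",
        surfaces=["direct messages", "posts"],
        detection=["hash matching against known images", "user reports"],
        mitigation=["rapid takedown", "law-enforcement referral workflow"],
    ),
]

for offence in register:
    print(f"{offence.ref}: {offence.label}")
```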

For small and mid‑sized platforms, this doesn’t have to mean expensive new tooling on day one. Start by updating incident taxonomies and moderator playbooks to distinguish between illegal encouragement of self-harm and allowed content such as recovery forums or education. Tighten prompts and user flows to deter image abuse in private messaging, add friction to unsolicited image sending where feasible, and refresh terms so users understand that cyberflashing and intimate image threats are illegal and will be referred to law enforcement. These are low‑cost, high‑signal adjustments that align with Ofcom’s proportionate approach.
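
One low-cost way to encode the self-harm distinction is a routing table in the moderation playbook; the labels and actions below are hypothetical internal names, not statutory or Ofcom categories.

```python
# Illustrative routing of self-harm-related labels in a moderation queue.
# Label and action names are hypothetical internal shorthand.
ROUTING = {
    "sh_encouragement": "remove_and_escalate",  # likely illegal under s184
    "sh_instruction":   "remove_and_escalate",
    "sh_recovery":      "allow",         # recovery forums, lived experience
    "sh_education":     "allow",         # awareness and support content
    "sh_ambiguous":     "human_review",  # context needed before acting
}

def route(label: str) -> str:
    """Return the playbook action for a label, defaulting to human review."""
    return ROUTING.get(label, "human_review")

assert route("sh_recovery") == "allow"
assert route("unlabelled") == "human_review"  # fail safe, not fail silent
```

Defaulting unknown labels to human review keeps the system conservative while the taxonomy is still settling.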

Search services should map queries and result types that risk returning illegal content, adjust ranking and removal processes, and ensure user‑reporting routes are prominent. Where automated detection is used, apply conservative thresholds and human review for edge cases to avoid suppressing legitimate support content about self‑harm. Document the trade‑offs and rationale; regulators will ask to see them.
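
As a sketch of what “conservative thresholds with human review” can mean in practice, the two cut-offs below are placeholder assumptions; real values would be tuned per service and, as noted, the rationale documented.

```python
# Placeholder thresholds, to be tuned and documented per service.
SUPPRESS_ABOVE = 0.95  # act automatically only on very confident scores
REVIEW_ABOVE = 0.60    # send the uncertainty band to a human reviewer

def triage(score: float) -> str:
    """Route a classifier score for potentially illegal content."""
    if score >= SUPPRESS_ABOVE:
        return "suppress"      # high confidence: demote or remove
    if score >= REVIEW_ABOVE:
        return "human_review"  # edge case: protect legitimate support content
    return "serve"             # low score: leave results untouched

assert triage(0.97) == "suppress"
assert triage(0.70) == "human_review"
assert triage(0.10) == "serve"
```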

Governance also steps up. Ofcom expects each provider to name a senior person accountable to the board for the illegal‑content and complaints duties and to keep auditable records of decisions, escalations and enforcement actions. Non‑compliance can draw penalties of up to £18m or 10% of global turnover, whichever is greater, so boards should treat this as a financial risk as much as a policy obligation.
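
The exposure is easy to quantify, since the statutory maximum is the greater of the two figures:

```python
def max_penalty_gbp(global_turnover_gbp: float) -> float:
    """Statutory maximum: the greater of £18m and 10% of global turnover."""
    return max(18_000_000.0, 0.10 * global_turnover_gbp)

# A firm with £500m global turnover faces up to £50m, not the £18m floor.
print(f"£{max_penalty_gbp(500_000_000):,.0f}")  # £50,000,000
```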

One more moving piece to track: the government has signalled, via the Criminal Justice Bill, that it intends to replace section 184 (for England and Wales) with a broader offence covering encouraging or assisting serious self‑harm by any means, not just communications. If Parliament enacts that change, providers should expect further technical updates, but today’s step makes self‑harm encouragement a priority offence under the current framework.

Finally, the government notes that no full Impact Assessment was prepared for this instrument, on the basis that impacts fall below the de minimis threshold. For operators, the real cost will come from re‑running risk assessments, training moderators, tightening product flows and documenting decisions before 8 January. Those are manageable tasks if teams start now.
