The UK government is introducing pivotal amendments to the Crime and Policing Bill that will require technology companies to remove nonconsensual intimate images, including AI-generated deepfakes, within 48 hours of their being reported. Failure to comply within this window could bring severe financial penalties of up to 10% of a firm’s global annual revenue, or see the service blocked entirely within the United Kingdom. The crackdown targets what Prime Minister Keir Starmer describes as a “culture of permission” that has allowed such content to proliferate unchecked.
A central component of this strategy is the empowerment of Ofcom, the UK’s communications regulator, which is expected to take over enforcement by the summer of 2026. The new “flag once, remove everywhere” system is designed so that a victim only needs to report an image once to trigger a wider alert, preventing the exhausting cycle of reporting the same content as it resurfaces across different sites. Additionally, the government is pushing for the adoption of digital watermarking and advanced “hash matching” tools to automatically detect and block known abusive imagery before it can be re-shared.
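The “hash matching” approach described above can be sketched in a few lines. The function names and the use of an exact cryptographic digest here are illustrative assumptions: production systems (PhotoDNA-style tools) rely on perceptual hashes that survive resizing and re-encoding, which a plain SHA-256 does not.

```python
import hashlib

def image_hash(data: bytes) -> str:
    """Return a hex digest identifying this exact byte sequence.

    Illustrative only: real deployments use perceptual hashes that
    tolerate re-encoding; a cryptographic hash keeps this sketch
    self-contained.
    """
    return hashlib.sha256(data).hexdigest()

def should_block(data: bytes, known_abuse_hashes: set[str]) -> bool:
    """Block an upload if its hash matches a shared denylist entry."""
    return image_hash(data) in known_abuse_hashes

# A victim's single report adds the image's hash to a shared denylist...
denylist: set[str] = set()
reported = b"reported-image-bytes"
denylist.add(image_hash(reported))

# ...and any later upload of the same bytes, on any participating
# service, is caught before it can be re-shared.
print(should_block(reported, denylist))        # True
print(should_block(b"other image", denylist))  # False
```

The “flag once, remove everywhere” idea corresponds to sharing that denylist across services, so one report propagates to every participant rather than requiring the victim to re-report on each site.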
The move also responds to the rise of generative AI tools, such as X’s chatbot Grok, which recently faced scrutiny over its ability to create explicit images. By designating the creation and distribution of such images a “priority offence” under the Online Safety Act, the UK effectively elevates image-based abuse to the same legal severity as terrorism and child sexual abuse material. Starmer framed the change as a moral imperative as much as a policy shift, stating that women and girls must no longer be treated as a “commodity to be used and shared.”