British Tech Firms and Child Protection Officials to Examine AI's Capability to Create Abuse Images

Technology companies and child protection agencies will be granted permission to evaluate whether AI tools can generate child abuse images under new British legislation.

Substantial Rise in AI-Generated Illegal Material

The announcement came as a child protection monitoring body revealed that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

New Regulatory Framework

Under the changes, the government will permit approved AI developers and child safety groups to examine AI systems – the foundational models behind conversational AI and image generators – to ensure they have sufficient safeguards to prevent them from creating images of child sexual abuse.

"This is ultimately about preventing abuse before it occurs," declared Kanishka Narayan, adding: "Experts, under strict conditions, can now identify the danger in AI models promptly."

Tackling Legal Challenges

The amendments have been introduced because it is against the law to create and possess CSAM, meaning that AI developers and others cannot generate such images as part of a testing process. Until now, officials could only act after AI-generated CSAM had been published online.

The legislation aims to avert that problem by helping to stop the creation of such images at the source.

Legislative Framework

The government is introducing the changes as amendments to criminal justice legislation, which also brings in a prohibition on possessing, creating or distributing AI systems designed to create child sexual abuse material.

Practical Consequences

Recently, the minister toured the London headquarters of Childline and listened to a simulated call to counsellors involving an account of AI-based abuse. The call depicted an adolescent seeking help after facing extortion over a sexualised deepfake of themselves, created using AI.

"When I hear about young people facing extortion online, it is a source of extreme frustration for me and rightful anger amongst families," he said.

Alarming Statistics

A leading internet monitoring organization stated that instances of AI-generated abuse content – such as online pages that may contain numerous files – had significantly increased so far this year.

Instances of category A content – the most serious form of abuse – increased from 2,621 images or videos to 3,086.

  • Female children were overwhelmingly targeted, making up 94% of illegal AI depictions in 2025
  • Portrayals of newborns to two-year-olds rose from five in 2024 to 92 in 2025

Industry Reaction

The law change could "constitute a crucial step to ensure AI products are secure before they are released," stated the chief executive of the internet monitoring foundation.

"AI tools have made it so victims can be targeted all over again with just a few simple actions, giving offenders the capability to create possibly limitless quantities of sophisticated, photorealistic child sexual abuse material," she added. "Content which additionally exploits survivors' trauma, and makes children, particularly female children, less safe both online and offline."

Counseling Interaction Information

Childline also released details of counselling sessions where AI has been referenced. AI-related risks discussed in the conversations include:

  • Using AI to rate weight, physique and looks
  • AI assistants dissuading young people from talking to trusted adults about abuse
  • Being bullied online with AI-generated material
  • Online blackmail using AI-manipulated images

Between April and September this year, the helpline delivered 367 support sessions in which AI, conversational AI and related terms were discussed, significantly more than in the equivalent period last year.

Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy applications.

Robert Stephens