British Technology Companies and Child Safety Agencies to Examine AI's Ability to Generate Abuse Images

Tech firms and child protection agencies will receive authority to assess whether AI tools can produce child abuse material under new UK legislation.

Significant Increase in AI-Generated Illegal Material

The announcement coincided with findings from a safety monitoring body showing that cases of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

New Legal Framework

Under the changes, the government will permit designated AI developers and child safety organizations to examine AI models – the foundational technology for chatbots and visual AI tools – and verify they have sufficient safeguards to prevent them from creating depictions of child exploitation.

"This is ultimately about stopping abuse before it occurs," declared Kanishka Narayan, adding: "Experts, under strict conditions, can now detect danger in AI systems promptly."

Tackling Regulatory Challenges

The changes have been introduced because it is against the law to produce and possess CSAM, meaning that AI developers and other parties could not generate such images as part of a testing regime. Previously, authorities could act only after AI-generated CSAM had been published online.

This law aims to avert that problem by enabling the production of such material to be halted at source.

Legal Framework

The government is introducing the amendments as revisions to criminal justice legislation, which also implements a prohibition on possessing, producing or distributing AI models designed to create child sexual abuse material.

Practical Impact

Recently, the minister visited the London headquarters of Childline and listened to a simulated call to counsellors involving a report of AI-based exploitation. The interaction portrayed an adolescent seeking help after being blackmailed with an explicit deepfake of himself, created using AI.

"When I learn about young people experiencing extortion online, it is a source of extreme anger for me and justified anger amongst parents," he said.

Concerning Statistics

A leading online safety organization stated that instances of AI-generated abuse content – such as webpages that may each contain numerous files – had more than doubled so far this year.

Instances of category A content – the most serious form of exploitation – rose from 2,621 visual files to 3,086.

  • Female children were predominantly targeted, accounting for 94% of illegal AI images in 2025
  • Portrayals of newborns to toddlers increased from five in 2024 to 92 in 2025

Sector Response

The legislative amendment could "represent a crucial step to ensure AI tools are safe before they are released," commented the head of the online safety organization.

"Artificial intelligence systems have made it possible for survivors to be victimised repeatedly with just a few clicks, giving offenders the capability to make potentially endless quantities of sophisticated, lifelike exploitative content," she continued. "Material which additionally exploits survivors' suffering, and makes children, especially girls, more vulnerable on and off line."

Support Session Information

The children's helpline also published details of counselling sessions where AI has been referenced. AI-related risks discussed in the conversations include:

  • Using AI to evaluate body size, shape and appearance
  • Chatbots discouraging children from consulting safe adults about abuse
  • Being bullied online with AI-generated material
  • Online blackmail using AI-faked images

Between April and September this year, Childline delivered 367 counselling sessions in which AI, conversational AI and associated topics were discussed, significantly more than in the same period last year.

Fifty percent of the mentions of AI in the 2025 sessions related to mental health and wellness, including using chatbots for support and AI therapy apps.

Alyssa Silva