UK Technology Firms and Child Protection Officials to Test AI's Ability to Generate Abuse Content
Technology companies and child protection agencies will receive authority to evaluate whether AI tools can produce child exploitation material under recently introduced UK legislation.
Significant Increase in AI-Generated Illegal Content
The announcement coincided with findings from a protection monitoring body showing that reports of AI-generated child sexual abuse material have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.
New Regulatory Framework
Under the amendments, the government will permit approved AI companies and child safety organizations to inspect AI models – the foundational technology behind conversational AI and image-generation tools – and verify that they have sufficient safeguards to prevent them from producing images of child sexual abuse.
The measures are "fundamentally about preventing abuse before it happens," said the minister for AI and online safety, adding: "Specialists, under rigorous conditions, can now identify the risk in AI systems early."
Addressing Legal Challenges
The amendments address a legal obstacle: because producing and possessing CSAM is against the law, AI developers and other parties could not generate such images even as part of a testing regime. Previously, authorities could not act until AI-generated CSAM had already been uploaded online.
The legislation aims to prevent that problem by helping to stop the production of such material at source.
Legislative Framework
The government is adding the amendments as revisions to the Crime and Policing Bill, which also establishes a ban on possessing, creating or distributing AI models designed to generate child sexual abuse material.
Real-World Impact
This week, the minister visited the London headquarters of a children's helpline and listened in on a simulated call to counsellors featuring a report of AI-based exploitation. The call depicted an adolescent seeking help after being blackmailed with a sexualised deepfake of himself, created using AI.
"When I learn about children experiencing blackmail online, it is a source of intense frustration for me and of rightful concern among families," he said.
Concerning Statistics
A leading online safety organization stated that instances of AI-generated abuse content – each report of which can cover a webpage containing numerous images – had more than doubled so far this year.
Cases of the most severe category of material rose from 2,621 images or videos to 3,086.
- Girls were predominantly targeted, accounting for 94% of illegal AI images in 2025
- Depictions of children aged from birth to two years increased from five in 2024 to 92 in 2025
Industry Reaction
The law change could "represent a vital step to guarantee AI products are secure before they are launched," stated the chief executive of the online safety organization.
"Artificial intelligence systems have made it possible for survivors to be victimised repeatedly with just a few clicks, giving offenders the capability to create potentially endless amounts of sophisticated, photorealistic exploitative content," she continued. "Material which further exploits victims' trauma and renders young people, especially girls, more vulnerable both online and offline."
Support Session Data
Childline also released data from counselling sessions in which AI was referenced. AI-related risks discussed in the sessions include:
- Employing AI to evaluate weight, physique and appearance
- AI assistants discouraging children from talking to trusted adults about abuse
- Facing harassment online with AI-generated content
- Digital blackmail using AI-faked images
Between April and September this year, Childline conducted 367 counselling sessions in which AI, conversational AI and related terms were mentioned – significantly more than in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy applications.