UK Tech Firms and Child Protection Agencies to Test AI's Ability to Generate Exploitation Images
Tech firms and child safety agencies will be given the authority to test whether AI tools can produce child abuse material under newly introduced British laws.
Substantial Increase in AI-Generated Harmful Material
The announcement came as a safety monitoring body published findings showing that reports of AI-generated CSAM have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
New Regulatory Framework
Under the amendments, the government will allow approved AI developers and child safety groups to inspect AI models – the technology underlying chatbots and image-generation tools – and verify that they have sufficient safeguards to stop them from producing images of child exploitation.
The measure is "fundamentally about preventing abuse before it occurs," stated Kanishka Narayan, noting: "Experts, under rigorous conditions, can now identify the danger in AI models early."
Addressing Regulatory Obstacles
The changes have been introduced because it is against the law to create and possess CSAM, meaning that AI developers and others could not generate such images as part of an evaluation process. Until now, authorities had to wait until AI-generated CSAM was uploaded online before addressing it.
This legislation is designed to prevent that issue by helping to halt the creation of such material at source.
Legal Framework
The changes are being introduced by the government as amendments to criminal justice legislation, which will also establish a ban on possessing, producing or sharing AI systems designed to create child sexual abuse material.
Practical Consequences
Recently, the minister toured the London headquarters of a children's helpline and listened to a mock-up call to counsellors featuring an account of AI-based exploitation. The scenario portrayed an adolescent seeking help after being blackmailed with a sexualised deepfake of themselves, created using AI.
"When I learn about young people facing blackmail online, it is a source of extreme anger in me and justified anger amongst families," he said.
Concerning Statistics
A prominent online safety organization reported that instances of AI-generated abuse content – such as online pages that may contain numerous files – had more than doubled so far this year.
Cases of category A material – the most serious form of abuse – increased from 2,621 visual files to 3,086.
- Girls were predominantly victimised, accounting for 94% of prohibited AI images in 2025
- Depictions of newborns to two-year-olds rose from five in 2024 to 92 in 2025
Industry Reaction
The law change could "represent a vital step to guarantee AI products are safe before they are launched," commented the head of the internet monitoring foundation.
"AI tools have made it so survivors can be victimised all over again with just a few clicks, giving offenders the capability to create potentially limitless quantities of sophisticated, photorealistic exploitative content," she continued. "Material which further commodifies victims' suffering, and makes children, especially girls, more vulnerable both online and offline."
Counseling Session Information
The children's helpline also released details of counselling sessions in which AI was mentioned. AI-related risks discussed in the sessions include:
- Using AI to rate body size, physique and appearance
- AI assistants dissuading young people from talking to safe guardians about abuse
- Facing harassment online with AI-generated material
- Online extortion using AI-manipulated pictures
Between April and September this year, Childline conducted 367 counselling sessions in which AI, chatbots and related terms were discussed – four times as many as in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.