UK Tech Firms and Child Protection Agencies to Test AI's Capability to Create Exploitation Images
Technology companies and child safety organizations will receive authority to evaluate whether artificial intelligence systems can generate child abuse material under recently introduced British laws.
Significant Rise in AI-Generated Harmful Content
The declaration coincided with revelations from a safety monitoring body showing that reports of AI-generated CSAM have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.
New Regulatory Framework
Under the amendments, the government will allow approved AI developers and child protection organizations to examine AI models – the underlying systems behind chatbots and image generators – to ensure they have sufficient safeguards to prevent them from creating depictions of child sexual abuse.
"Ultimately about preventing exploitation before it happens," stated the minister for AI and online safety, noting: "Experts, under strict conditions, can now identify the risk in AI systems promptly."
Tackling Legal Challenges
The changes have been introduced because it is illegal to create and possess CSAM, meaning that AI developers and other parties cannot generate such images as part of a testing regime. Until now, authorities had to wait until AI-generated CSAM was published online before they could act.
This law is designed to prevent that problem by helping to halt the production of such images at their source.
Legal Framework
The changes are being added by the government as modifications to the crime and policing bill, which is also establishing a ban on owning, creating or distributing AI systems developed to create exploitative content.
Practical Impact
Recently, the official visited the London headquarters of a children's helpline and listened to a simulated call to advisers featuring a report of AI-based exploitation. The role-play depicted an adolescent seeking help after being blackmailed with a sexualised deepfake of himself, created using AI.
"When I learn about children experiencing blackmail online, it is a source of intense anger in me and justified concern amongst parents," he said.
Alarming Data
A leading online safety foundation reported that cases of AI-generated abuse content – such as webpages that may contain multiple images – had significantly increased so far this year.
Cases of category A material – the most serious form of abuse – rose from 2,621 images or videos to 3,086.
- Girls were overwhelmingly victimized, making up 94% of prohibited AI images in 2025
- Portrayals of newborns to two-year-olds rose from five in 2024 to 92 in 2025
Sector Reaction
The law change could "represent a crucial step to ensure AI products are safe before they are released," stated the head of the online safety organization.
"Artificial intelligence systems have made it so victims can be targeted all over again with just a simple actions, providing criminals the ability to create possibly endless amounts of advanced, photorealistic child sexual abuse material," she continued. "Material which additionally exploits survivors' trauma, and makes young people, particularly female children, less safe on and off line."
Counseling Session Information
The children's helpline also published details of counselling sessions in which AI was mentioned. AI-related harms raised in the sessions include:
- Using AI to rate body size, physique and looks
- AI assistants dissuading young people from talking to safe adults about harm
- Facing harassment online with AI-generated material
- Online blackmail using AI-faked pictures
Between April and September this year, Childline delivered 367 support sessions in which AI, chatbots and related terms were discussed, significantly more than in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.