The UK government has announced four new laws to combat the growing threat of child sexual abuse material (CSAM) created using artificial intelligence (AI).
These measures aim to close legal loopholes and strengthen protections for children online.
What Will Change?
The UK will become the first country in the world to make it illegal to own, create, or distribute AI tools designed to generate child abuse images. Offenders could face up to five years in prison.
Additionally, possessing AI-generated manuals that teach people how to exploit AI for sexual abuse will also be made illegal.
Those caught with such materials could be jailed for up to three years.
Speaking on the BBC’s Sunday with Laura Kuenssberg, Home Secretary Yvette Cooper said AI was “supercharging online child abuse” and warned that these laws may need to be expanded further in the future.
Other major changes include:
Running websites that allow child abusers to share content or provide grooming advice will become a criminal offence, punishable by up to 10 years in prison.
Border Force officers will have new powers to inspect the digital devices of individuals suspected of posing a risk to children when they enter the UK. Depending on the severity of the content found, offenders could face up to three years in prison.
How Is AI Being Misused?
AI-generated CSAM can include completely computer-created images or manipulated real photos where children’s faces are swapped into explicit content.
Some AI tools can even replicate real children’s voices, leading to further victimization.
These fake images and videos are often used to blackmail and exploit children, pushing them into further abuse.
Experts warn that this technology makes it harder to distinguish between real and fake abuse material, making enforcement even more challenging.
The Scale of the Problem
The National Crime Agency (NCA) reports that 800 people are arrested each month for online child abuse offences in the UK.
An estimated 840,000 adults pose a risk to children, representing 1.6% of the adult population.
The Internet Watch Foundation (IWF) has also recorded a 380% increase in AI-generated child abuse images, with 245 confirmed cases in 2024, up from just 51 in 2023.
Some dark web sites have been found hosting thousands of such images.
Calls for Even Stronger Action
While experts welcome these new laws, some believe more action is needed.
Professor Clare McGlynn, an expert in online abuse laws, argues that the government should ban AI-powered “nudify” apps and address the sexualization of young-looking actors in pornography, which she describes as “simulated child abuse videos.”
The IWF’s interim chief executive, Derek Ray-Hill, warns that AI-generated abuse material encourages offenders and puts real children at greater risk.
He supports the new measures but stresses that more needs to be done to prevent AI from being exploited.
Lynn Perry, CEO of children’s charity Barnardo’s, urges tech companies to take responsibility for keeping children safe online.
She emphasizes that the Online Safety Act must be enforced effectively to prevent further harm.
Next Steps
These new laws will be introduced as part of the Crime and Policing Bill, which is expected to be brought to Parliament in the coming weeks.
If passed, they will mark a significant step in protecting children from AI-generated sexual abuse and holding offenders accountable.