Europol announced Friday that a global operation had led to 25 arrests over child sexual abuse material generated by artificial intelligence and distributed online.
The Hague-based agency said that “Operation Cumberland was one of the first cases to involve AI-generated material relating to child sexual abuse, which made it extremely challenging for investigators because there is no national legislation dealing with these crimes.”
Most of the arrests took place Wednesday during a global operation led by Danish police that involved law enforcement agencies across the EU and in Australia, Britain, Canada, and New Zealand. Europol said U.S. law enforcement agencies did not take part in the operation.
The main suspect, a Danish national who ran an online platform distributing AI-generated material he had created, was arrested last November.
Europol reported that after a “symbolic payment online, users around the globe were able to obtain a password to access the platform to watch children being abused.”

The agency has warned that online child sexual exploitation is one of the most dangerous manifestations of cybercrime within the European Union.
The agency said that “continued arrests are expected as the investigation continues,” adding that combating such material is one of the top priorities for law enforcement agencies, which are dealing with an ever-increasing volume of illegal content.
Europol said Operation Cumberland targeted a platform and the individuals who shared content created entirely with AI. But “deepfakes,” AI-manipulated images, have also proliferated online; these images often depict real people, including children, and can have devastating effects on their lives.
One report found more than 21,000 deepfake pornographic videos or images online in 2023, a 460% increase over the previous year. Internet users are being bombarded with manipulated content as legislators in the U.S. and other countries race to pass new laws.

The U.S. Senate recently passed a bipartisan bill, the “TAKE IT DOWN Act,” which, if signed into law, would criminalize the “publication” of non-consensual intimate images (NCII), including AI-generated NCII (“deepfake revenge pornography”), and would require social media platforms and other websites to implement procedures for removing such content within 48 hours of notification from a victim.
Some social media platforms appear unable or unwilling, for now, to clamp down on the spread of sexualized AI-generated deepfake images, including those of celebrities. Meta, the owner of Facebook and Instagram, announced in mid-February that it had removed more than a dozen sexualized images of famous female athletes and actors after an investigation found AI-manipulated images to be prevalent on Facebook.
Meta spokesperson Erin Logan said in a press release, “This is an industry challenge, and we are constantly working to improve our technology for detection and enforcement.”