Child Sexual Abuse Material generated through Artificial Intelligence (AI CSAM) is a growing area of concern. Attorneys general from all 54 U.S. states and territories urged lawmakers in a September 5, 2023 letter to create a commission dedicated to studying the impacts of AI on child exploitation.
The letter expresses deep concern about the potential misuse of artificial intelligence (AI) to exploit children, particularly through the generation of child sexual abuse material (CSAM). The signatories highlight various ways AI could be used to harm children:
- Location tracking: AI tools can scan and track images of children across the internet, potentially allowing malicious actors to approximate or even anticipate a child's location. This poses a significant physical safety risk.
- Voice mimicking: AI can study short voice recordings, such as those from voicemail or social media posts, and create convincing imitations of a person's voice. The letter mentions that this technology has already been used by scammers to fake kidnappings, which could cause severe emotional distress to families.
- Deepfake images: AI can generate highly realistic "deepfake" images by:
  - Studying real photographs of abused children to create new images showing those children in sexual positions.
  - Overlaying faces of unvictimized children onto bodies of abused children in existing abuse images.
  - Combining data from photographs of both abused and non-abused children to create new, realistic sexualized images of children who don't actually exist but may resemble real children.
- AI-generated CSAM: Even when the AI-generated images don't depict real children, they're still problematic because:
  - They may be based on source images of abused children.
  - They often resemble actual children, potentially harming otherwise unvictimized children.
  - They support the growth of the child exploitation market by normalizing child abuse.
  - They're quick and easy to generate using widely available AI tools.
The attorneys general urged Congress to establish an expert commission to study AI's potential for child exploitation and to propose solutions. The letter also called for expanding existing CSAM restrictions to explicitly cover AI-generated content, emphasized the urgency of acting to protect children from these emerging AI-related threats, and requested that Congress prioritize this issue alongside other AI-related concerns such as national security and education.
The Internet Watch Foundation (IWF), the UK organization responsible for detecting and removing child sexual abuse imagery from the internet, reported it had found nearly 3,000 AI-made abuse images that broke UK law. The IWF’s latest report showed an acceleration in the use of the technology, with its chief executive stating that they are “seeing criminals deliberately training their AI on real victims’ images who have already suffered abuse. Children who have been raped in the past are now being incorporated into new scenarios because someone, somewhere, wants to see it.” The IWF has also seen evidence of AI-generated images being sold online.
“Our worst nightmares have come true. Earlier this year, we warned AI imagery could soon become indistinguishable from real pictures of children suffering sexual abuse, and that we could start to see this imagery proliferating in much greater numbers. We have now passed that point.”
--Susie Hargreaves OBE, IWF Chief Executive
The IWF report, “How AI is being abused to create child sexual abuse imagery” (October 2023), focused on a single dark web forum dedicated to child sexual abuse imagery. In a single month:
- The IWF investigated 11,108 AI images which had been shared on a dark web child abuse forum.
- Of these, 2,978 were confirmed as images that breached UK law – meaning they depicted child sexual abuse.
- Of these images, 2,562 were so realistic that the law would need to treat them the same as if they had been real abuse images.
- More than one in five of these images (564) were classified as Category A, the most serious kind of imagery which can depict rape, sexual torture, and bestiality.
- More than half (1,372) of these images depicted primary school-aged children (seven to 10 years old).
- In addition, 143 images depicted children aged three to six, and two images depicted babies (under two years old).
Among other findings highlighted in the report:
- Perpetrators can legally download everything they need to generate these images, and can then produce as many images as they want – offline, with no opportunity for detection. Various tools exist for improving and editing generated images until they look exactly like the perpetrator wants.
- Most AI CSAM found is now realistic enough to be treated as ‘real’ CSAM. The most convincing AI CSAM is visually indistinguishable from real CSAM, even for trained IWF analysts. Text-to-image technology will only get better and will pose more challenges for the IWF and law enforcement agencies.
- There is now reasonable evidence that AI CSAM has increased the potential for the re-victimization of known child sexual abuse victims, as well as for the victimization of famous children and children known to perpetrators. The IWF has found many examples of AI-generated images featuring known victims and famous children.
“We’re seeing AI CSAM images using the faces of known, real, victims. We’re seeing the ‘de-aging’ of celebrities and AI CSAM using the likeness of celebrity children. We’re seeing how technology is ‘nudifying’ children whose clothed images have been uploaded online for perfectly legitimate reasons. And we’re seeing how all this content is being commercialized.”
--Susie Hargreaves OBE, IWF Chief Executive