AI-generated images of child sexual abuse are spreading. Authorities are racing to stop them

WASHINGTON (AP) — A child psychiatrist who altered a first-day-of-school photo he saw on Facebook to make a group of girls appear naked. A U.S. Army soldier accused of creating images showing children he knew being sexually abused. A software engineer charged with generating hyper-realistic sexually explicit images of children.

Law enforcement agencies across the United States are cracking down on a worrying spread of child sexual abuse images created using artificial intelligence technology, from manipulated photographs of real children to computer-generated graphic depictions of children. Justice Department officials say they are aggressively pursuing offenders who exploit AI tools, while states are racing to ensure that people who create “deepfakes” and other harmful images of children can be prosecuted under their laws.

“We have to signal early and often that this is a crime, which will be investigated and prosecuted when the evidence supports it,” Steven Grocki, who heads the Justice Department’s Child Exploitation and Obscenity Section, said in an interview with The Associated Press. “And if you’re sitting there thinking otherwise, you’re fundamentally wrong. And it’s only a matter of time before someone holds you accountable.”

The Justice Department says existing federal laws clearly apply to such content, and recently brought what is believed to be the first federal case involving purely AI-generated images, meaning the children depicted are not real but virtual. In another case, federal authorities arrested a U.S. Army soldier stationed in Alaska in August on charges of running innocent photos of real children he knew through an artificial intelligence chatbot to make the images sexually explicit.

Trying to catch up with technology

The prosecutions come as children’s advocates work urgently to curb the misuse of the technology and head off a flood of disturbing images that officials fear could make it harder to rescue real victims. Law enforcement officials worry that investigators will waste time and resources trying to identify and locate exploited children who do not actually exist.

Meanwhile, lawmakers are passing a series of laws to ensure local prosecutors can bring charges under state laws for AI-generated “deepfakes” and other sexually explicit images of children. Governors in more than a dozen states have signed laws this year to crack down on digitally created or altered images of child sexual abuse, according to a review by the National Center for Missing and Exploited Children.

“We’re trying to catch up as law enforcement with technology that, frankly, is moving much faster than we are,” said Ventura County, California, District Attorney Erik Nasarenko.

Nasarenko pushed for legislation, signed last month by Governor Gavin Newsom, that makes clear AI-generated child sexual abuse material is illegal under California law. Nasarenko said his office could not prosecute eight cases involving AI-generated content between last December and mid-September because California law had required prosecutors to prove that the images depicted a real child.

AI-generated child sexual abuse images can be used to groom children, law enforcement officials say. And even if they do not suffer physical abuse, children can be deeply affected when their image is transformed to appear sexually explicit.

“I felt like a part of me had been taken away, even though I was not physically violated,” said Kaylin Hayman, 17, who starred in the Disney Channel show “Just Roll with It” and helped push the California bill after becoming a victim of “deepfake” images.

Kaylin Hayman, 17, poses in front of Ventura City Hall in Ventura, California, on October 17, 2024. (AP Photo/Eugene García)

Hayman testified last year in the federal trial of a man who digitally superimposed her face and those of other child actors onto bodies performing sex acts. He was sentenced in May to more than 14 years in prison.

Criminals are known to favor open-source artificial intelligence models that users can download to their own computers and then train or modify to produce explicit depictions of children, experts say. Abusers trade tips in dark web communities on how to manipulate AI tools to create such content, officials say.

A report last year by the Stanford Internet Observatory found that a research data set used to train leading AI image generators, such as Stable Diffusion, contained links to sexually explicit images of children, contributing to the ease with which some tools have been able to produce harmful imagery. The data set was taken down, and the researchers later said they removed more than 2,000 web links to suspected child sexual abuse images.

Major tech companies, including Google, OpenAI and Stability AI, have agreed to work with Thorn, an organization that fights child sexual abuse, to combat the spread of child sexual abuse images.

But experts say more should have been done early on to prevent misuse before the technology became widely available. And the steps companies are taking now to make it harder to abuse future versions of AI tools “will do little to prevent” offenders from running older versions of the models on their own computers “undetected,” a Justice Department prosecutor wrote in recent court documents.

“There was no time spent making the products safe, as opposed to efficient, and it’s very hard to undo that after the fact, as we’ve seen,” said David Thiel, chief technologist at the Stanford Internet Observatory.

AI images become more realistic

Last year, the National Center for Missing and Exploited Children’s CyberTipline received about 4,700 reports of content involving artificial intelligence technology, a small fraction of the more than 36 million total reports of suspected child sexual exploitation. As of October this year, the group was receiving about 450 reports per month of AI-related content, said Yiota Souras, the group’s legal director.

However, those figures may be an undercount, since the images are so realistic that it is often difficult to tell whether they were generated by AI, experts say.

“Investigators spend hours trying to determine whether an image actually represents a real minor or is generated by AI,” said Ventura County Deputy District Attorney Rikole Kelly, who helped draft the California bill. “There used to be some really clear indicators…with advances in AI technology, that’s no longer the case.”

Justice Department officials say they already have the tools provided by federal law to pursue offenders for such images.

The Supreme Court of the United States in 2002 overturned a federal ban on virtual child sexual abuse material. But a federal law signed the following year prohibits the production of visual depictions, including drawings, of children engaged in sexually explicit conduct that is considered “obscene.” That law, which the Justice Department says has been used in the past to charge cartoon images of child sexual abuse, specifically notes that there is no requirement “that the minor depicted actually exists.”


The Justice Department brought that charge in May against a Wisconsin software engineer accused of using the artificial intelligence tool Stable Diffusion to create photorealistic images of children engaged in sexually explicit conduct. He was caught, authorities say, after he sent some of the images to a 15-year-old boy through a direct message on Instagram. The man’s attorney, who is seeking to have the charges dismissed on First Amendment grounds, declined to comment further on the allegations in an email to the AP.

A spokesperson for Stability AI said the man is accused of using an older version of the tool released by another company, Runway ML. Stability AI says it has “invested in proactive features to prevent the misuse of AI for the production of harmful content” since taking over sole development of the models. A spokesperson for Runway ML did not immediately respond to a request for comment from the AP.

In cases involving “deepfakes,” when a photo of a real child has been digitally altered to make it sexually explicit, the Justice Department is bringing charges under the federal “child pornography” law. In one case, a North Carolina child psychiatrist who used an artificial intelligence application to digitally “undress” girls posing on the first day of school in a decades-old photo shared on Facebook was convicted on federal charges last year.

“These laws exist. They will be used. We have the will. We have the resources,” Grocki said. “This won’t be a low priority that we ignore because there isn’t a child involved.”

__

The Associated Press receives financial assistance from Omidyar Network to support coverage of artificial intelligence and its impact on society. AP is solely responsible for all content. Find AP’s standards for working with philanthropic organizations, a list of supporters, and funded coverage areas at AP.org
