Soldier’s arrest highlights growing issue of AI-created explicit material

Investigators say a soldier created and spread explicit images of children generated by artificial intelligence, but it's unclear what federal or military law, if any, covers them.
An Alaska soldier was arrested last month on charges related to possessing child sexual abuse material and allegedly creating thousands of those images using artificial intelligence. Military lawyers say that the Department of Defense can use "catch-all" charges to hold service members accountable. Getty Images photo.


An Alaska soldier was arrested last month on charges related to possessing child sexual abuse material and allegedly creating thousands of those images using artificial intelligence. There is no federal or military law that outright criminalizes the use of AI to create explicit materials, but military justice lawyers say the Department of Defense has other methods to hold service members accountable.

Spc. Seth Alan Herrera, 34, a motor vehicle operator stationed at Joint Base Elmendorf-Richardson in Anchorage, Alaska, was arrested in August on federal charges for the alleged transportation, receipt and possession of child sexual abuse material, also known as "CSAM," some of which was real and some AI-generated, according to the Department of Justice.

Herrera remains in civilian confinement but is on active duty status, according to the Army. In court documents, officials said Herrera is currently “surrounded by minors” on the Alaska base and that six children live within his fourplex. They also noted that as an Army heavy vehicle driver, he regularly drives supplies from Anchorage to Fairbanks, extending “his access to military families and their children.” 

If Herrera is convicted, he could face a maximum of 20 years in prison.

Army officials did not indicate there would be separate Uniform Code of Military Justice actions against Herrera, but depictions of children in sexually explicit materials have previously come up in the military justice system.

Air Force Staff Sgt. Remington Carlisle was charged with possessing and viewing anime porn depicting childlike characters, Stars & Stripes first reported. A judge ruled that the videos and images depicted “fictional cartoon characters,” “not persons,” “not human beings,” and therefore did not fall under definitions prosecutable under Article 134. But Air Force criminal appeals judges unanimously ruled in May 2024 that the original judge erred in his decision and called for a trial.

“They argue that these anime videos and images do meet the definition of child pornography in accordance with Article 134, UCMJ,” the appellate judges wrote in their decision. “We hold that whether the videos and images meet the definition of child pornography as set forth by the President is a factual question to be resolved by the fact-finder at trial.”

While there is no UCMJ article that specifically criminalizes possessing or distributing child sexual abuse material, prosecutors have opted for Article 134, which allows troops to be court-martialed for committing non-capital federal civilian offenses.

“Why do we have it underneath 134? Because Congress never bothered to get around to enumerate it,” said Rachel VanLandingham, a former Air Force judge advocate and director of the National Institute of Military Justice.

VanLandingham said Article 134 is considered a “catch-all provision” that allows the government to criminalize conduct that is either service discrediting or prejudicial to good order and discipline.

A defense official told Task & Purpose that Article 134 could be used to deal with the issue of AI-generated explicit content in UCMJ cases.

The defense official also said that UCMJ Article 117a, which prohibits broadcasting or distributing explicit and intimate images of someone without their consent, could be applied “to images made or altered by means of artificial intelligence.”

Brian Ferguson, a lawyer who regularly represents service members in the military justice system, said a deepfake case involving a service member would be handled either under the UCMJ or by the federal government, never both, because of double jeopardy protections. If a service member violates state law, officials sometimes bring an Article 92 charge for disobeying a general order to follow state laws, he said.

“You can’t get court-martialed and taken to federal court for the same crime, because that’s the feds in both cases,” Ferguson said. “But the state and the feds can prosecute you for the same thing, because it’s considered two different sovereigns.” 

There are no UCMJ articles that specifically forbid service members from using AI to create sexually explicit content depicting adults or minors. Guy Womack, a lawyer who specializes in military legal cases, said that civilian defense teams have argued that “AI-created images cannot be considered child pornography” and that case outcomes have gone “both ways.”

The difficulty of prosecuting AI-generated sexually explicit images and videos is playing out in civilian courts because of the lack of applicable laws. There is no federal crime that outright prohibits using AI to create ‘deepfake’ sexually explicit content, but some states are passing laws that criminalize deepfake pornographic content or give victims the ability to sue those who create images and videos using their likenesses.


The White House issued an October 2023 executive order on AI signaling future policies that “protect against unlawful discrimination and abuse” in the justice system. Earlier this year, Congress introduced the bipartisan Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024, or the DEFIANCE Act, which would allow victims of nonconsensual, sexually explicit “deepfake” images and videos to sue those who create or distribute them. In July, the bill passed the Senate and was sent to the House, where it has yet to be heard.

Alaska case

Homeland Security investigators said they found thousands of images depicting violent sexual abuse of infants and children on three Samsung Galaxy cellphones belonging to Herrera. The Alaska soldier allegedly used AI to create CSAM depicting children he knew and “to whom he had access in 2022 and 2023 outside of Alaska,” according to court documents. Herrera also allegedly saved “surreptitious recordings” of minors undressing in the home he shared with his wife and daughter.

Officials said Herrera allegedly used encrypted messaging apps to join groups devoted to trafficking CSAM, created his own public Telegram group to store CSAM, and sent himself video files that included “screaming children being raped,” according to court documents filed in support of his pretrial detention.

“The Defendant poses a serious risk to his minor daughter, who remains in his care, and the broader community. He was seeking out CSAM specifically in the age range of his young daughter,” the government said in its filing.

The judge ordered Herrera’s detention on Aug. 27.

Herrera was assigned to the 17th Combat Sustainment Support Battalion, 11th Airborne Division. He joined the Army in November 2019, and arrived in Alaska in August 2023.

According to court documents, Herrera had allegedly possessed images of child sexual abuse since August 2023, over a year before his arrest.

Officials from Homeland Security Investigations and the Army Criminal Investigation Division are investigating the case.

Explicit ‘deepfake’ content

Though new AI technologies receive significant media coverage, the issue of fake pornographic material has already been before the Supreme Court at least once in a 2002 case, Ashcroft v. Free Speech Coalition. 

The court sided with an adult entertainment association that challenged the Child Pornography Prevention Act of 1996 for restricting free speech. The association argued that the law’s use of language like “appears to be” and “conveys the impression” regarding children in artistic media was too broad. The government argued that this type of media could harm children indirectly and warned that technological advances could make such material increasingly difficult to prosecute.

“Technology may evolve to the point where it becomes impossible to enforce actual child pornography laws because the Government cannot prove that certain pornographic images are of real children,” Justice Clarence Thomas wrote in his concurring opinion.

AI-generated pornography is a growing issue that advocates have raised alarms about, though governments have offered few clear accountability mechanisms. The issue gained momentum in the U.S. after superstar Taylor Swift spoke out when her likeness was used in explicit images and videos that circulated online.

According to the Government Accountability Office, deepfake technology can be used by influence campaigns to erode public trust but has mainly been used to create nonconsensual pornography, which “disproportionately victimizes women.”
