All 50 states call on Congress to address AI-generated CSAM



Attorneys general from across the US are urging lawmakers to create a commission dedicated to studying the impacts of AI on child exploitation.


The attorneys general from all 50 US states want lawmakers to establish a commission dedicated to investigating the impact of AI on child exploitation, as reported earlier by The Associated Press. In a letter to Congress, the attorneys general say that the proposed commission should come up with solutions to prevent the creation of AI-generated child sexual abuse material (CSAM).

As outlined in the letter, the attorneys general point out that bad actors can train an AI using images of abused and non-abused children to create deepfakes while also animating “new and realistic sexualized images of children who do not exist, but who may resemble actual children.” The letter adds that readily available AI tools make this process “easier than ever.”

The initiative, led by South Carolina Attorney General Alan Wilson, includes signatures from the attorneys general of all 50 states and four territories. Each chief prosecutor asked that Congress establish a commission to “study the means and methods of AI that can be used to exploit children,” as well as to expand “existing restrictions on CSAM to explicitly cover AI-generated CSAM.”

The US government has already begun evaluating some of the risks related to AI. After the Biden administration rolled out a plan to promote the ethical use of AI in May, the Senate held a remarkably friendly hearing on AI regulation. There are still no concrete plans in the US to implement sweeping laws governing the use of AI, something the European Union is already pursuing.

“While we know Congress is aware of concerns surrounding AI, and legislation has been recently proposed at both the state and federal level to regulate AI generally, much of the focus has been on national security and education concerns,” the letter reads. “And while those interests are worthy of consideration, the safety of children should not fall through the cracks when evaluating the risks of AI.”
