... “This project was intended to make abundantly clear that you don’t need to throw up your hands,” said Rebecca Portnoff, vice president of data science at Thorn. “We want to be able to change the course of this technology to where the existing harms of this technology get cut off at the knees.” ...
When Thorn approached AI companies, it found that while some already had large teams focused on removing child-sexual-abuse material, others were unaware of both the problem and potential solutions. There is also a tension between the imperative to safeguard these tools and business leaders’ push to move quickly to advance new AI technology. ...
Today’s watermarks are removable, and AI companies are still looking for ways to mark AI-generated images permanently, said Ella Irwin, senior vice president of integrity at Stability AI, the company behind the open-source image-generation model Stable Diffusion. ...
See the full story here: https://www.wsj.com/tech/ai/ai-developers-agree-to-new-safety-measures-to-fight-child-exploitation-2a58129c