Several leading artificial intelligence companies, including OpenAI, Microsoft, Google, and Meta, have made a joint commitment to prevent their AI tools from being used to exploit children or create child sexual abuse material (CSAM). The initiative was spearheaded by the child safety organization Thorn and All Tech Is Human, a non-profit focused on promoting responsible technology.
Thorn stated that the pledges “set a groundbreaking precedent for the industry and mark a significant step forward in protecting children from sexual abuse as generative AI technology evolves.” The initiative’s primary objective is to stop the production of sexually explicit material involving children and to remove such material from social media platforms and search engines. According to Thorn, more than 104 million files of suspected child sexual abuse material were reported in the US in 2023 alone. Without collective action, generative AI could exacerbate the problem and overwhelm law enforcement agencies already struggling to identify real victims.
Thorn and All Tech Is Human recently published a paper titled “Safety by Design for Generative AI: Preventing Child Sexual Abuse,” which offers strategies and recommendations for developers of AI tools, search engines, social media platforms, and hosting companies to prevent generative AI from being misused to harm children.
One recommendation urges companies to carefully vet the data sets used to train AI models, excluding those that contain instances of CSAM or adult sexual content, since generative AI may conflate the two. Thorn also calls on social media platforms and search engines to remove links to websites and apps that facilitate the sharing of images of child nudity, which can fuel the creation of new AI-generated CSAM. A flood of AI-generated CSAM would make it even harder to identify genuine victims of child sexual abuse, because law enforcement agencies would have far more content to sift through.
Rebecca Portnoff, Thorn’s vice president of data science, emphasized the project’s importance in steering the technology away from these harmful uses. Some companies have already begun separating child-related content from adult content in their data sets to prevent their models from combining the two. Others are applying watermarks to identify AI-generated content, although this approach is not foolproof, as watermarks and metadata can be easily removed.