Meta is changing its policies on manipulated and AI-generated content ahead of the upcoming fall elections. An independent body overseeing the company’s content moderation found the previous policies “incoherent and confusing” and recommended that they be reconsidered.
These changes were prompted by the Meta Oversight Board’s review earlier this year of a highly edited video of President Biden that circulated on Facebook. The video had been altered to create a misleading impression, showing Mr. Biden inappropriately touching his adult granddaughter’s chest.
The Oversight Board determined that the video did not violate Meta’s policies because it was not manipulated using artificial intelligence (AI) and did not depict Mr. Biden saying or doing anything he did not actually do. However, the board criticized Meta’s current policy as lacking justification and focusing too much on how content is created rather than the specific harms it aims to prevent.
As a result, Meta will begin labeling AI-generated content in May and will revise its policies so that manipulated media receives informational labels and added context rather than being removed outright for violating community standards. This broader approach covers a wider range of manipulated content and aims to give users more information and context.
The Oversight Board praised Meta’s commitment to these changes, acknowledging the evolving landscape of AI technology and the need to address manipulation in various forms of media. As AI tools make it easier to alter videos and audio clips, concerns about misinformation and voter suppression in elections are growing.