Meta will adjust its policies on manipulated and AI-generated content and begin labeling such content ahead of the fall election, after an independent body that oversees the company’s content moderation found that its previous policies were “incoherent and confusing” and said they should be “reconsidered.”
The changes stem from recommendations the Meta Oversight Board issued earlier this year in its review of a highly edited video of President Biden that appeared on Facebook. The video was manipulated to make it appear that Biden repeatedly and inappropriately touched his adult granddaughter’s breast.
In the original video, made in 2022, the president places an “I Voted” sticker on his granddaughter after voting in the midterm elections. But the video under review by the Meta Oversight Board was looped and edited into a seven-second clip that critics said left a misleading impression.
The Oversight Board said the video did not violate Meta’s policies because it had not been manipulated with artificial intelligence (AI) and did not show Mr. Biden “saying words he did not say” or “doing something he did not do.”
But the board added that the company’s current policy on the matter was “incoherent, lacking persuasive justification, and inappropriately focused on how content is created rather than the specific harms it aims to prevent, such as disruption of electoral processes.”
In a blog post published on Friday, Meta’s vice president of content policy, Monika Bickert, wrote that the company would begin labeling AI-generated content in May and would adjust its policies to label manipulated media with “informational labels and context,” rather than removing videos based on whether the posts violate Meta’s community standards, which include prohibitions on election interference, intimidation and harassment, and violence and incitement.
“The labels will cover a broader range of content beyond the manipulated content that the Oversight Board recommended labeling,” Bickert wrote. “If we determine that digitally created or altered images, videos or audio create a particularly high risk of materially misleading the public about an important matter, we may add a more prominent label so people have more information and context.”
Meta agreed with the Oversight Board’s assessment that the social media giant’s approach to manipulated videos was “too narrow” because it only covered those “that are created or altered by AI to make a person appear to say something they did not say.”
Bickert said the company’s policy was written in 2020, “when realistic AI-generated content was rare and the general concern was about videos.” She noted that AI technology has evolved to the point that “people have developed other types of realistic AI-generated content, like audio and photos,” and agreed with the board’s advice that it is “important to address manipulation that shows a person doing something they did not do.”
“We welcome these commitments, which represent significant changes to how Meta treats manipulated content,” the Oversight Board wrote on X in response to the policy announcement.
The decision comes as AI and other editing tools make it easier than ever for users to alter or manufacture realistic-looking video and audio clips. Before the New Hampshire presidential primary in January, a fake robocall impersonating President Biden encouraged Democrats not to vote, raising concerns about misinformation and voter suppression in the November general election. AI-generated content about former President Trump and Mr. Biden has also circulated online.