The head of the Federal Communications Commission on Wednesday unveiled a proposal that would require political advertisers to disclose when they use AI-generated content in TV and radio ads.
The proposal, if adopted by the commission, would add a layer of transparency that many lawmakers and artificial intelligence experts have called for, as rapidly advancing generative AI tools produce realistic images, videos and audio clips that threaten to mislead voters in the upcoming US elections.
But the FCC, the nation’s top telecommunications regulator, only has authority over television, radio and some cable providers. Any new rules would not cover the explosive growth of advertising on digital and streaming platforms.
“As artificial intelligence tools become more accessible, the commission wants to ensure that consumers are fully informed when the technology is used,” FCC Chairwoman Jessica Rosenworcel said in a statement Wednesday. “Today, I shared a proposal with my colleagues that makes it clear that consumers have a right to know when AI tools are being used in the political ads they see, and I hope they act quickly on this issue.”
This is the second time this year that the commission has taken significant steps to combat the growing use of artificial intelligence tools in political communications. Previously, the FCC confirmed that AI voice cloning tools in robocalls are prohibited under existing law. That decision followed an incident in the New Hampshire primary election in which robocallers used voice cloning software to imitate President Joe Biden in an effort to dissuade voters from going to the polls.
If adopted, the proposal would ask broadcasters to check with political advertisers whether their content was generated using AI tools — such as text-to-image creators or voice cloning software. The FCC has authority over political advertising on broadcast channels under the Bipartisan Campaign Reform Act of 2002.
But commissioners would still have to discuss several details, including whether broadcasters would have to disclose AI-generated content in an on-air message or just in the television or radio station’s political files, which are public. They will also be tasked with agreeing on a definition of AI-generated content, a challenge that has become complicated as retouching tools and other AI advancements become increasingly embedded in all types of creative software.
Rosenworcel hopes to have the regulations in place before the election.
Jonathan Uriarte, a spokesperson and policy adviser for Rosenworcel, said she intends to define AI-generated content as content generated using computing technology or machine-based systems, “including, in particular, AI-generated voices that sound like human voices, and AI-generated actors that appear to be human actors.” He said her draft definition will likely change during the regulatory process.
The proposal comes at a time when political campaigns have already experimented heavily with generative AI, from building chatbots for their websites to creating videos and images using the technology.
Last year, for example, the RNC released an entirely AI-generated ad intended to show a dystopian future under another Biden administration. It employed fake but realistic photos showing boarded-up storefronts, armored military patrols in the streets, and waves of immigrants creating panic.
Political campaigns and bad actors have also used highly realistic images, videos, and audio content to deceive, mislead, and disenfranchise voters. In India’s elections, recent AI-generated videos misrepresenting Bollywood stars as critical of the prime minister exemplify a trend that AI experts say is emerging in democratic elections around the world.
Rob Weissman, president of the advocacy group Public Citizen, said he was pleased to see the FCC moving to proactively address “threats from artificial intelligence and deepfakes, including especially to election integrity.”
He has urged the FCC to require on-air disclosure for the benefit of the public, and chided another agency, the Federal Election Commission, for its delays in deciding whether to regulate AI-generated deepfakes in political ads.
Rep. Yvette Clarke, a Democrat from New York, said it’s time for Congress to act on the spread of misinformation online, over which the FCC has no jurisdiction. She has introduced legislation that would set disclosure requirements for AI-generated content in online advertisements.
As generative AI has become cheaper, more accessible and easier to use, several bipartisan groups of lawmakers have called for legislation to regulate the technology in politics. With just over five months to go until the November elections, no bills have yet been approved.
A bipartisan bill introduced by Sen. Amy Klobuchar, a Democrat from Minnesota, and Sen. Lisa Murkowski, a Republican from Alaska, would require political ads to carry a disclaimer if they are made or significantly altered using AI. It would also require the Federal Election Commission to respond to violations.
Uriarte said Rosenworcel realizes the FCC’s ability to act against AI-related threats is limited, but she wants to do what she can before the 2024 elections.
“This proposal offers the maximum transparency standards that the commission can enforce under its jurisdiction,” Uriarte said. “We hope that government agencies and lawmakers can build on this important first step in establishing a transparency standard on the use of AI in political advertising.”