Generative artificial intelligence could threaten election security in November, intelligence agencies warned in a new federal bulletin.
Generative AI uses images, audio, video and code to create new content, such as so-called “deepfake” videos, in which a person appears to say something they never said.
Both foreign and domestic actors could leverage the technology to create serious challenges in the 2024 election cycle, according to analysis compiled by the Department of Homeland Security and sent to law enforcement partners across the country. Federal bulletins are infrequent messages designed to draw law enforcement partners’ attention to specific threats and concerns.
“Several threat actors will likely attempt to use generative artificial intelligence (AI) – augmented media to influence and sow discord during the 2024 US election cycle, and AI tools could potentially be used to boost efforts to disrupt the election,” says the bulletin, which was shared with CBS News. “As the 2024 election cycle progresses, generative AI tools will likely provide domestic and foreign threat actors with greater opportunities for interference by worsening emerging events, disrupting election processes, or attacking election infrastructure.”
Director of National Intelligence Avril Haines also warned Congress about the dangers of generative AI during a Senate Intelligence Committee hearing last week, saying AI technology could create realistic “deepfakes” whose origin could be hidden.
“Innovations in AI have enabled foreign influencers to produce seemingly authentic and personalized messages more efficiently and at greater scale,” she testified, while insisting that the US is better prepared than ever for an election.
One example DHS cited in the bulletin was a fake robocall impersonating President Joe Biden’s voice on the eve of the New Hampshire primary in January. The fake audio message urged call recipients to “save their vote” for the November general election rather than participate in the state’s primary.
The timing of election-specific AI-generated media can be as critical as the content itself, the bulletin stated, since it may take time to counter-message or debunk false content once it permeates online.
The memo also noted the persistent threat abroad, adding that in November 2023, an AI-generated video encouraged voters in a southern Indian state to vote for a specific candidate on election day, leaving authorities no time to discredit it.
The bulletin goes on to warn about the potential use of artificial intelligence to target election infrastructure.
“Generative AI could also be leveraged to augment conspiracy attacks if a threat actor, namely a violent extremist, attempted to target U.S. election symbols or critical infrastructure,” the bulletin said. “This may include helping to understand U.S. elections and associated infrastructure, examining Internet-facing election infrastructure for potential vulnerabilities, identifying and aggregating a list of election targets or events, and providing new or improved tactical guidance for an attack.”
Some violent extremists have even experimented with AI chatbots to fill gaps in tactical and weapons guidance, DHS said, although the department noted that it has not yet observed violent extremists using this technology to supplement election-related targeting information.