Moody’s Warns: Election Deepfakes Could Undermine Institutional Credibility

Rapid Evolution of AI in Political Advertising Raises Concerns About U.S. Election Integrity

As election season intensifies and artificial intelligence continues to advance, AI-driven manipulation of political advertising has become a growing concern for market and economic stability. In a report released Wednesday, Moody’s warns that generative AI and deepfakes pose significant risks to the credibility of U.S. institutions.

“The election is likely to be closely contested, increasing concerns that AI deepfakes could be deployed to mislead voters, exacerbate division, and sow discord,” wrote Moody’s assistant vice president and analyst Gregory Sobel and senior vice president William Foster. “If successful, agents of disinformation could sway voters, impact the outcome of elections, and ultimately influence policymaking, which would undermine the credibility of U.S. institutions.”

Government Efforts to Combat Deepfakes

In response to these threats, the government has intensified efforts to combat deepfakes. On May 22, Federal Communications Commission (FCC) Chairwoman Jessica Rosenworcel proposed a new rule that would require political TV, video, and radio ads to disclose whether they use AI-generated content. The proposal reflects growing concern about AI in political advertising, particularly the potential for deepfakes and other manipulated content to mislead voters.

While social media falls outside the FCC’s regulatory scope, the Federal Election Commission (FEC) is also considering AI disclosure rules that would extend to all platforms. However, the FEC has encouraged the FCC to delay its decision until after the elections, arguing that disclosure requirements that apply to broadcast ads but not uniformly to digital political ads could confuse voters.

Social Media Platforms’ Self-Regulation

Some social media platforms have proactively adopted AI disclosure measures ahead of regulation. Meta, for instance, requires AI disclosure across its advertising and will ban new political ads in the week leading up to the November elections. Google mandates disclosures on political ads containing modified content that depicts real or realistic-looking people or events, though it does not require AI disclosures on all political ads.

This proactive stance aims to prevent the spread of misinformation and aligns with major brands’ concerns about being associated with misinformation during a pivotal election cycle. Google and Facebook are expected to capture 47% of the projected $306.94 billion in U.S. digital advertising spend in 2024.

Challenges of Policing AI-Manipulated Content

Despite these self-regulation efforts, AI-manipulated content continues to appear on platforms without labels, owing to the sheer volume of content posted daily. “The lack of industry standards and rapid evolution of the technology make this effort challenging,” said Tony Adams, senior threat researcher at the Secureworks Counter Threat Unit. Nonetheless, platforms report successes in policing the most harmful content through technical controls, often themselves powered by AI.

The accessibility and affordability of generative AI tools have made creating sophisticated deepfakes easier than ever. Moody’s assistant vice president Abhi Srivastava noted, “With the advent of readily accessible, affordable Gen AI tools, generating a sophisticated deepfake can be done in minutes. This ease of access, coupled with the limitations of social media’s existing safeguards against the propagation of manipulated content, creates a fertile environment for the widespread misuse of deepfakes.”

Protective Measures and Legislative Action

Moody’s highlights the decentralized nature of the U.S. election system and existing cybersecurity policies as protections against deepfakes. States and local governments are enacting measures to block deepfakes and unlabeled AI content, although free speech laws and concerns over technological advancement have slowed progress in some state legislatures.

As of February, 50 pieces of AI-related legislation were being introduced per week in state legislatures, focusing on deepfakes and election interference. Thirteen states have laws addressing these issues; eight of those laws were enacted since January.

Impact on Public Confidence

Moody’s emphasizes that even the perception that deepfakes have influenced political outcomes can undermine public confidence in the electoral process and in government institutions. That erosion of trust poses a significant credit risk, potentially heightening political and social risks and compromising the effectiveness of government.

“The response by law enforcement and the FCC may discourage domestic actors from using AI to deceive voters,” said Secureworks’ Adams. “But there’s no question that foreign actors will continue to meddle in American politics by exploiting generative AI tools and systems. To voters, the message is to keep calm, stay alert, and vote.”