FCC proposes mandating AI disclosure in political ads to safeguard election integrity

This move aims to enhance transparency and protect voters from deceptive practices.


Amid the U.S. political primary season and growing concerns about the potential misuse of artificial intelligence (AI) to influence elections, the Federal Communications Commission (FCC) on Wednesday unveiled a proposal that would mandate the disclosure of AI use in campaign advertising.

The rapid advancements in AI technology have raised significant concerns about its potential impact on political advertising. Historical instances of misinformation in political campaigns, coupled with the rise of deepfake technology, have underscored the urgent need for regulatory measures. The current political primary season has heightened these fears, prompting the FCC to take action.

FCC Chair Jessica Rosenworcel announced the proposal, emphasizing the need for consumers to be fully informed when AI tools are used in political ads. The proposal includes several key measures:

Disclosure Requirements: On-air and written disclosures in broadcasters’ political files when AI-generated content is used in ads.

Scope of Application: The disclosure rules would apply to both candidate and issue advertisements, affecting broadcasters, cable operators, satellite TV and radio providers.

Definition and Identification: Seeking comment on a specific definition of AI-generated content to ensure clarity and enforceability.

The proposal has been lauded by advocacy groups and stakeholders:

Common Cause: Ishan Mehta, Media and Democracy Program director, praised the rulemaking, highlighting the threat that deceptive AI and deepfakes pose to democracy.

Public Citizen: Robert Weissman emphasized the importance of prominent, real-time disclosure in protecting voters from being deceived and defrauded.

The potential misuse of AI and deepfake technology in political advertising poses a serious threat to election integrity. Transparency in political ads is crucial for maintaining democratic processes. However, defining and identifying AI-generated content presents technological and legal challenges that must be addressed.

Advocates are urging Congress and the Federal Election Commission (FEC) to follow the FCC’s lead, arguing that comprehensive regulations and swifter action are critical to safeguarding future elections. The FCC’s proposal is a significant step, but additional measures from other regulatory bodies will be necessary.

Past regulatory actions in political advertising offer valuable lessons, and international approaches to AI and deepfakes in political ads provide potential best practices the U.S. could adopt. Learning from these examples is essential to developing effective regulatory frameworks.

“As the proposal is honed and finalized, the FCC should require advertisers to disclose the use of AI in the ads themselves, not just require a note to files maintained by broadcasters,” asserted Robert Weissman. “Prominent, real-time disclosure is the essential standard to protect voters from being deceived and defrauded.”


Alexis Sterling is a seasoned War and Human Rights Reporter with a passion for reporting the truth in some of the world's most tumultuous regions. With a background in journalism and a keen interest in international affairs, Alexis's reporting is grounded in a commitment to human rights and a deep understanding of the complexities of global conflicts. Her work seeks to give voice to the voiceless and bring to light the human stories behind the headlines. Alexis is dedicated to responsible and engaged journalism, constantly striving to inform and educate the public on critical issues of war and human rights across the globe.