AI-generated deepfakes of Black Trump supporters ignite calls for ad ban

The push for regulatory action has gained bipartisan support in the Senate.

SOURCE: NationofChange

Deepfake images of Donald Trump posing with Black supporters are circulating online, reigniting debate over the use of artificial intelligence (AI) in political campaigns and prompting fresh calls for regulatory action against AI-generated content in political advertisements.

The BBC recently spotlighted a series of deepfakes created by supporters of the former U.S. president, including an image by Mark Kaye, a right-wing Florida radio host, which depicted Trump surrounded by smiling Black women. The authenticity of these images quickly crumbles under scrutiny, with anomalies like missing fingers and garbled text revealing their artificial origins. Kaye, unapologetic about the deceptive nature of his creation, stated, “I’m not claiming it’s accurate. I’m a storyteller.”

The ease with which these images have spread across platforms like X (formerly Twitter) and Facebook has raised concerns about the potential for misinformation and targeted intimidation of Black voters. One such deepfake, portraying Trump with young Black men, was flagged on X as AI-generated but not before amassing over 1.4 million views. The deceptive nature of these posts, often accompanied by misleading captions, has led to widespread calls for action from racial justice groups like Color of Change.

In response to this alarming trend, Color of Change has demanded stringent measures to combat AI-generated misinformation in political ads, advocating for an outright ban on AI in political advertising, mandatory disclosure of AI use in all other content, a prohibition on deepfakes, and the reinstatement of rules against spreading falsehoods about election legitimacy.

The push for regulatory action has gained bipartisan support in the Senate, with Senators Amy Klobuchar (D-Minn.), Chris Coons (D-Del.), Josh Hawley (R-Mo.), and Susan Collins (R-Maine) introducing legislation aimed at curbing the use of AI-generated content that falsely depicts political candidates. Moreover, the Federal Communications Commission recently took steps to combat AI-generated robocalls featuring fake endorsements from political figures.

Despite these initiatives, the Federal Election Commission has been criticized for its delayed response to public demands for regulation of deepfakes, leaving a gap in the safeguarding of electoral integrity. This has prompted state-level action, with at least 13 states passing laws governing the use of AI in political ads.

Tech companies have also responded to the rise of deepfakes, with Google announcing requirements for the disclosure of AI use in political ads and Meta banning political campaigns from using its generative AI tools. OpenAI, maker of the popular ChatGPT chatbot, has said it will not allow its tools to be used to create content for political campaigns and plans to embed watermarks in images generated by its DALL-E tool.

The use of deepfakes in political campaigns has far-reaching implications, not just for the integrity of elections but for the very fabric of democratic discourse. As Cliff Albright, co-founder of the Black Voters Matter campaign, points out, these deepfakes are part of a “very strategic narrative” aimed at wooing African American voters, a demographic crucial to electoral outcomes.

With Trump’s support among Black voters showing a modest increase from 8% in 2016 to 12% in 2020, and Biden’s support among African Americans dropping from 92% in the last election cycle to 71% today, the impact of misinformation on voter preferences cannot be overstated. As the 2024 election approaches, the need for vigilance and regulatory action against the misuse of AI in political advertising becomes increasingly urgent to preserve the sanctity of the electoral process and protect the rights of all voters.

In the words of Mark Kaye, reflecting on the influence of AI-generated content, “If anybody’s voting one way or another because of one photo they see on a Facebook page, that’s a problem with that person, not with the post itself.”

Alexis Sterling is a seasoned War and Human Rights Reporter with a passion for reporting the truth in some of the world's most tumultuous regions. With a background in journalism and a keen interest in international affairs, Alexis's reporting is grounded in a commitment to human rights and a deep understanding of the complexities of global conflicts. Her work seeks to give voice to the voiceless and bring to light the human stories behind the headlines. Alexis is dedicated to responsible and engaged journalism, constantly striving to inform and educate the public on critical issues of war and human rights across the globe.
