
Could AI disrupt the American elections?

@fyinews team

30/10/2024

fyi:
  1. As the rivalry between superpowers intensifies in an increasingly multipolar world, many believe that the dominant force will be the one that develops the most effective AI.
  2. The EU is the first institution in the world to establish a comprehensive AI regulation, which offers a human-centered and ethically grounded framework despite its flaws and gaps.
  3. What we need, however, are applications developed ethically, trained on data collected with citizens’ consent, and created by diverse and inclusive teams of developers.

by Petros Karpathiou

Nearly two years have passed since the sweeping arrival of ChatGPT and other AI tools, along with the anxiety that these models might empower malicious players to produce fake content, directly impacting democratic processes like elections.

This year, those concerns intensified, with billions of people voting or set to vote in over 70 countries. Yet, so far, these fears don’t seem to have been well-founded. As I write this, I’m reminded of past articles I’ve written, and the well-known GIF of Homer Simpson disappearing into the bushes comes to mind.

A few months have passed since Donald Trump used AI-generated images to imply that Taylor Swift supported him, and posted a fake photo of Harris with P. Diddy on X. Shortly afterward, Swift publicly endorsed Kamala Harris, and numerous clips surfaced of Trump claiming a friendship with the rapper, who was facing legal issues at the time.

Now, let’s look at what’s happened in recent European elections. In the UK, only 16 cases of deepfakes or misleading AI-generated content went viral. Across both the European and French elections, there were only 11 such cases combined. In total, fewer than 30 instances were recorded across these three major election events. And how many had a decisive impact on the results? None.

What’s truly concerning—and likely to demand much more of our attention in the future—is that people are finding it increasingly difficult to distinguish between real and fake content.

According to analysts, AI-generated content is seen as ineffective propaganda since most people who view and share it are already convinced by its messages. And how do we know this? Content analysis studies show that those sharing such content had already expressed similar views beforehand. This means that AI-generated content is more likely to reinforce existing beliefs rather than influence undecided voters. Why do you think Trump and Harris have allocated 25% of their ad budgets to Georgia, the notorious swing state, if ChatGPT could sway the undecided?

States hostile toward the West seem to be sticking with more rudimentary tactics, like social bots that flood comment sections and sow division.

With superpower rivalries heating up in today’s increasingly multipolar world, plenty of people believe the winning edge will go to the country that develops the most effective AI. The EU has become the first institution worldwide to establish a comprehensive AI regulation—a framework that, while not without flaws, is both human-centered and ethically grounded. This stands in contrast to many other countries developing and exporting AI without similar constraints. The U.S. still lacks a comparable framework, and the upcoming election’s outcome could be pivotal in determining which direction the world’s leading power will take.
The real challenge, then, is regulating AI. The technology is here to stay, but what we truly need are applications developed with ethics in mind, trained on data gathered with citizens’ consent, and built by inclusive and diverse teams.

And why does this matter? Do you remember life before social media? I don’t—and it’s not because of age. We’ll probably experience something similar with AI. Mustafa Suleyman, CEO of Microsoft AI and co-founder of DeepMind and Inflection AI, said in the Washington Post two months ago that he believes “everyone in the world will eventually have their own personal AI.” Personally, I’d like mine to be reliable, ethical, and respectful of human rights and core values.

 
