In 2024, nearly half of the world’s population will participate in elections.
The year will see presidential races in 30 countries and parliamentary votes in 20 others. Major democracies, including India, Mexico, and the United States, will hold pivotal elections that shape the future distribution of power and give citizens a decisive voice.
However, the integrity of these elections faces challenges from modern technologies, especially generative AI. Deepfakes, which blur the line between reality and fabrication, pose a significant threat, eroding trust in information and political processes. Their rising use makes addressing these threats urgent in this super-election year.
This article examines how numerous fake social media profiles are actively disseminating misinformation via deepfakes. These profiles seek to distort political narratives and sway voter opinions, underscoring the importance of staying vigilant and critically assessing the news we encounter.
Impact of Deepfakes on Elections
Deepfakes subtly influence elections by amplifying existing issues like deceptive narratives, voter suppression, and attacks on election workers.
Despite notable instances, such as the AI-generated audio that circulated days before Slovakia’s 2023 parliamentary election, the tangible impact of deepfakes on outcomes is hard to quantify. The real danger may lie in how they contribute to the manipulation of political narratives, fostering doubt and cynicism. Addressing these challenges requires a holistic approach that considers the complex interplay of media, technology, and trust in modern elections.
Influencing Voter Decisions
Fake news often exploits emotions rather than logic, using sensational headlines and fabricated stories to trigger strong reactions like fear, anger, or sympathy. This emotional appeal can bypass rational thinking, leading to quick, uninformed decisions and widespread sharing.
Confirmation bias further fuels the spread of fake news, as people tend to accept and share information that aligns with their preexisting beliefs, even if it’s false. A Google DeepMind study revealed that swaying public opinion is the most common malicious use of generative AI, with deepfake media being nearly twice as prevalent as other forms of AI-driven misinformation, like text-based online content.
Undermining Confidence in Information
The rise of deepfakes threatens the democratic process by undermining public access to the accurate information that informed voting depends on. More concerning is the “liar’s dividend”: because the public knows fabrication is possible, anyone can dismiss authentic content as fake, fostering widespread mistrust. The rapid growth of AI tools such as large language models and text-to-speech and text-to-video software exacerbates the problem by accelerating the creation of synthetic content. This erosion of trust in election information demands a coordinated, cross-sector response to protect the integrity of democratic processes.
In recent years, members of the public have themselves become targets of deepfakes and AI-cloned voices in scams.
Polarization
Misinformation deepens societal divisions by targeting specific groups, reinforcing biases, and creating polarized perceptions of reality. This tailored misinformation leads to ideological divides, making it harder for different groups to find common ground.
While most U.S. adults can distinguish real political news from fake, a significant minority, particularly among older generations and less-educated individuals, still fall for misinformation. Social media algorithms worsen this issue by creating echo chambers that reinforce existing beliefs, further entrenching views and widening the divide between political and social groups.
Technology’s Contribution
The technology sector is at the forefront of tackling deepfakes and AI-generated misinformation in elections, focusing on detection, mitigation, and content verification. AI itself offers tools that can help identify and flag false information, but striking the right balance between automated enforcement and legitimate expression is crucial.
Verification of Facts
Fact-checking will play a crucial role in reducing the impact of deepfakes in future elections. In response to the rise of AI-driven misinformation, several companies have rolled out verification measures. Meta, for instance, now requires disclosure of AI-generated content in political ads on its platforms, and Google has introduced SynthID, a tool that embeds a digital watermark directly into an image’s pixels: imperceptible to the human eye, but detectable by software.
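To make the idea of pixel-level watermarking concrete, here is a minimal Python sketch using least-significant-bit (LSB) encoding. It illustrates only the general technique, not SynthID’s actual method, which is proprietary, deep-learning-based, and designed to survive edits; the tag string and function names below are assumptions for the example.

```python
import numpy as np

WATERMARK = "AI-GENERATED"  # hypothetical tag; real watermarks are not plain text

def embed(pixels: np.ndarray, tag: str = WATERMARK) -> np.ndarray:
    """Hide the tag in the least significant bit of the first pixels."""
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    flat = pixels.flatten().copy()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(pixels.shape)

def detect(pixels: np.ndarray, tag: str = WATERMARK) -> bool:
    """Read back the LSBs and check whether the expected tag is present."""
    n_bits = len(tag.encode()) * 8
    bits = pixels.flatten()[:n_bits] & 1
    return np.packbits(bits).tobytes() == tag.encode()

# A 64x64 RGB array of random noise stands in for a generated image.
image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
marked = embed(image)
print(detect(image))   # False: no watermark in the original
print(detect(marked))  # True: watermark recovered from the marked copy
```

Unlike this toy scheme, which a single re-save or crop would destroy, production watermarks such as SynthID are built to remain detectable after common transformations like compression, resizing, and color filtering.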
Critical Media Awareness
Media literacy is a useful defense against the malicious use of deepfakes because it teaches audiences how to evaluate content critically and make informed judgments. It acts as a preventive measure against the damage deepfakes inflict on society, trust, and the democratic process.
Final Thoughts
Governments must take an active role in shaping policies that educate the public about content provenance, critical evaluation of information, and what to look for when consuming political information online.
Unless audiences are equipped with solid media literacy skills or tools that verify whether AI was used, it will be difficult to devise a lasting solution to the threats posed by deepfakes and misinformation.