
How to Spot Deepfakes and Disinformation in the Final Days Before the 2024 Election

With just over a week until the November 5th election, security experts and campaign officials are raising urgent alarms about a new and pressing threat to democracy: the possibility of sophisticated A.I.-generated deepfakes designed to disrupt the electoral process. 

The concern isn’t just theoretical — we’ve already seen A.I.-generated robocalls mimicking President Biden’s voice earlier this year, and more recently Russian propaganda groups have attempted to smear the Harris-Walz campaign with deepfakes and disinformation.

Election officials warn that the final days before the election could create a perfect storm where a convincing deepfake could shake public confidence without leaving enough time for proper verification and debunking.

This threat is particularly acute because traditional fact-checking often takes days or even weeks — time we simply don’t have. As both campaigns and security experts have warned in recent weeks, a well-timed deepfake released in the last 72 hours before Election Day could create chaos that influences voter turnout or election results before it can be definitively proven false.

In this environment, where artificial intelligence can clone voices, create hyper-realistic videos, and generate convincing images with a few clicks, your ability to spot manipulation has never been more crucial. The good news? You’re about to learn practical tools to protect yourself and your community from disinformation during this critical period.

The Rising Tide of Digital Deception

Remember when seeing was believing? Those days are rapidly fading. Today’s A.I. “deepfakes” — artificial images, videos, or audio recordings — look and sound remarkably real. These aren’t just pranks or entertainment; they’re increasingly being weaponized to spread false information, particularly during election seasons.

Also concerning is what experts call the “liar’s dividend” — when bad actors claim real evidence of their misconduct is fake, exploiting growing public skepticism about digital media. A perfect example came in August 2024, when Donald Trump falsely claimed that a photo showing thousands of supporters at a Kamala Harris rally was A.I.-generated, despite abundant photographic and video evidence of the crowd’s authenticity.

This strategy is particularly insidious because it leverages growing public awareness of A.I. capabilities to cast doubt on inconvenient truths. When confronted by reporters about his lie, Trump doubled down with a telling response: “Well, I can’t say what was there, who was there.” 

This approach allows him to dismiss authentic evidence as fake. As UC Berkeley professor Hany Farid notes, “This is an example where just the mere existence of deepfakes and generative A.I. allows people to deny reality.” 

The tactic is especially dangerous because it doesn’t just undermine specific pieces of evidence; it erodes the very foundation of shared reality that democracy requires to function. 

As Senator Bernie Sanders warned, if Trump can convince supporters that thousands of people at a televised rally don’t exist, it becomes much easier to convince them that election returns in key states are “fake” and “fraudulent.”

Your Digital Defense Toolkit

That’s why, in these final days before the November 5th election, your ability to spot manipulation isn’t just about personal protection — it’s about safeguarding democracy itself. 

What’s more, we can’t rely solely on institutions to separate fact from fiction. Each of us needs to become our own first line of defense. 

Think of the following tools as your personal authentication kit. Just as a bank teller learns to spot counterfeit bills by studying authentic currency, you can train yourself to detect digital manipulation by understanding what to look for. These aren’t abstract tips — they’re practical habits you can start using immediately.

Here’s your essential toolkit for navigating the digital landscape in these final critical days:

1. Master the Art of Deepfake Detection

When viewing videos or images, pay attention to these tell-tale signs:

  • Watch the Eyes: Look for natural blinking patterns and consistent eye movement. A.I. often struggles to maintain natural eye behaviors, particularly in longer videos (see the blink-counting sketch after this list).
  • Check the Face: Notice any unnatural smoothing or aging inconsistencies between skin and features. Pay special attention to how the skin texture changes during movement.
  • Study Body Language: Does the person’s head movement match their speech patterns? Watch for the coordination between gestures and words — they should flow naturally together.
  • Observe Lighting: Look for consistent shadows and reflections, especially on glasses. A.I. often fails to accurately render complex lighting interactions.
  • Examine Details: Pay special attention to hands, teeth, and hair — A.I. often struggles with these elements. Count fingers (yes, really!) as A.I. frequently generates incorrect numbers of digits.
  • Listen Carefully: In audio deepfakes, pay attention to breathing patterns and voice modulation. A.I. voices often maintain unnaturally consistent tone and volume.
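
If you’re comfortable with a little code, one of these checks can even be roughed out programmatically. The sketch below counts blinks in a video clip using the eye-aspect-ratio heuristic from face-analysis research; it assumes the third-party opencv-python and mediapipe packages, and the FaceMesh landmark indices and 0.21 threshold are commonly cited values rather than anything official. Treat it as an illustration, not a detector.

import cv2
import mediapipe as mp
from math import dist

# Commonly cited FaceMesh indices for the left eye: corner, two upper
# lid points, corner, two lower lid points. Worth double-checking.
LEFT_EYE = [33, 160, 158, 133, 153, 144]
BLINK_THRESHOLD = 0.21  # EAR below this is treated as a closed eye

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    # Ratio of eye height to eye width; it drops sharply during a blink.
    return (dist(p2, p6) + dist(p3, p5)) / (2 * dist(p1, p4))

def count_blinks(video_path):
    face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)
    cap = cv2.VideoCapture(video_path)
    blinks, closed, frames = 0, False, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        h, w = frame.shape[:2]
        result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not result.multi_face_landmarks:
            continue
        lm = result.multi_face_landmarks[0].landmark
        ear = eye_aspect_ratio(*[(lm[i].x * w, lm[i].y * h) for i in LEFT_EYE])
        if ear < BLINK_THRESHOLD:
            closed = True
        elif closed:
            blinks += 1  # eye reopened: one complete blink
            closed = False
    cap.release()
    return blinks, frames

blinks, frames = count_blinks("suspect_clip.mp4")
print(f"{blinks} blinks in {frames} frames")
# People blink roughly 15-20 times per minute; a long clip with almost
# no blinks is one weak signal worth a closer look, never proof by itself.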

2. Build Your Information Verification Habits

  • Check Multiple Sources: Don’t rely on a single platform or outlet for news. Cross-reference information across different reputable news organizations.
  • Verify Before Sharing: Take a moment to fact-check before amplifying content. Ask yourself: “Would I stake my reputation on this being true?”
  • Use Trusted Fact-Checking Sites: Resources like PolitiFact, FactCheck.org, and Snopes are valuable allies. Bookmark them for quick reference.
  • Question Your Biases: Be especially skeptical of content that perfectly aligns with your existing beliefs. Our own biases can make us more susceptible to manipulation.
  • Follow the Source: Trace information back to its original context. Be wary of screenshots or clips that don’t provide full context.
  • Check Publication Dates: Disinformation often recycles old content with new dates to make it appear current and relevant (see the metadata sketch after this list).
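
For readers who want to automate the last two habits, here is a minimal sketch that pulls standard Open Graph metadata (outlet name, title, publication date) from an article URL so you can compare it against what a shared post claims. It assumes the requests and beautifulsoup4 packages and that the site publishes these tags; many don’t, so treat it as a starting point.

import requests
from bs4 import BeautifulSoup

def article_metadata(url):
    # Fetch the page and read the standard Open Graph / article tags
    # that most news sites embed for social-media previews.
    html = requests.get(url, timeout=10,
                        headers={"User-Agent": "Mozilla/5.0"}).text
    soup = BeautifulSoup(html, "html.parser")

    def meta(prop):
        tag = soup.find("meta", attrs={"property": prop})
        return tag["content"] if tag and tag.has_attr("content") else None

    return {
        "site": meta("og:site_name"),
        "title": meta("og:title"),
        "published": meta("article:published_time"),
        "canonical_url": (soup.find("link", rel="canonical") or {}).get("href"),
    }

# Placeholder URL: substitute the article you are checking.
for key, value in article_metadata("https://example.com/some-article").items():
    print(f"{key}: {value}")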

3. Protect Yourself from Voice Scams

With A.I. voice cloning becoming increasingly sophisticated, establish authentication methods with family and friends (a toy challenge-response illustration follows the list):

  • Create personal code words or phrases that would be known only to you and close contacts
  • Ask specific questions about recent shared experiences that an A.I. couldn’t know
  • Be extremely wary of urgent money requests, even if the voice sounds familiar
  • Set up alternative verification methods (like text messages or video calls)
  • Establish family protocols for financial requests, such as requiring multiple forms of verification
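
To see why a pre-arranged secret beats trusting your ears, here is a toy challenge-response illustration using only Python’s standard library. The “family secret” and the short derived answer are stand-ins for whatever you actually agree on; in practice a simple code phrase, or a callback to a known number, serves the same purpose.

import hmac
import hashlib
import secrets

# Shared in person, never over the phone or online.
FAMILY_SECRET = b"the phrase we agreed on at dinner"

def make_challenge():
    # The person receiving the call reads this random challenge aloud.
    return secrets.token_hex(4)

def response(challenge):
    # Both sides derive the same short answer from the challenge and the
    # shared secret, so the secret itself is never spoken aloud where a
    # scammer could record and replay it.
    return hmac.new(FAMILY_SECRET, challenge.encode(), hashlib.sha256).hexdigest()[:6]

challenge = make_challenge()
print("Read to the caller:", challenge)
print("Answer only they can give:", response(challenge))

The point is not the cryptography; it is that verification should rest on something a cloned voice cannot reproduce.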

To learn more about how to detect and guard against A.I. disinformation and deepfakes, we encourage you to consult the fact-checking resources mentioned above, such as PolitiFact, FactCheck.org, and Snopes.

The Broader Impact

The proliferation of deepfakes and disinformation isn’t just about individual deception — it threatens the very fabric of our shared reality. When we can’t trust what we see and hear, it becomes harder to maintain the informed citizenry essential for democracy. This is especially true in these final days before the election, where the timeline for verification becomes compressed and the stakes couldn’t be higher.

Security experts are particularly concerned about several scenarios in the coming days:

  • Fake videos showing candidates making inflammatory statements or endorsements
  • A.I.-generated audio of election officials providing false voting information
  • Synthetic videos purporting to show election fraud or misconduct
  • Fabricated “breaking news” about candidate behavior or policy positions

What makes these scenarios particularly dangerous is the “fog of war” effect in the final days before an election. With news cycles moving at breakneck speed and everyone’s attention focused on the vote, even sophisticated media organizations might struggle to verify content quickly enough. By the time a deepfake is debunked, the damage to public trust could already be done.

The impact also extends far beyond election cycles. A.I.-generated content is becoming increasingly prevalent in various aspects of our lives:

  • Financial fraud through voice cloning of trusted individuals
  • Manipulation of business communications and stock markets
  • Creation of fake evidence to harass or discredit individuals
  • Generation of false narratives about current events

These technologies are developing at a pace that often outstrips our ability to create effective safeguards, and the potential for misuse presents a significant societal challenge. Now more than ever, we need to develop both technological solutions and social resilience to navigate this new landscape.

The challenge we face isn’t just about detecting individual pieces of false content — it’s about protecting our ability to conduct meaningful democratic discourse in an era where the very nature of evidence and truth is being called into question. As we approach this critical election, our response to this challenge will help determine the future of how we as a society establish and maintain shared truth in the digital age.

Looking Forward: Real Solutions for Real Problems

While the challenges are significant, there are reasons for optimism. Here’s what’s being done and what you can do:

Individual Action

  • Stay informed about new developments in A.I. and digital media
  • Support digital literacy initiatives in your community
  • Report suspected deepfakes to platform authorities
  • Engage in constructive discussions about media literacy with friends and family
  • Participate in local efforts to combat disinformation
  • Share knowledge about detection techniques with vulnerable populations

The Role of Technology

Advanced detection tools are being developed to help identify manipulated content, but they’re not perfect. That’s why human judgment and critical thinking remain our best defenses. Some platforms now include automated warnings about potentially A.I.-generated content, but these systems shouldn’t be relied upon exclusively.
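
To make this concrete, here is a rough sketch of how such automated detectors are typically wired up: a classifier assigns a score to a frame or photo. The model id below is a hypothetical placeholder, not a real checkpoint, and the sketch assumes the transformers and Pillow packages; real platform systems are far more elaborate.

from transformers import pipeline

# Hypothetical model id; substitute an actual deepfake-detection
# checkpoint before running.
detector = pipeline("image-classification", model="some-org/deepfake-detector")

for result in detector("frame_from_suspect_video.jpg"):
    # The pipeline returns {"label": ..., "score": ...} dicts. Treat the
    # score as one signal among many, never as a verdict on its own.
    print(f'{result["label"]}: {result["score"]:.2%}')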

Conclusion: Your Role in Protecting Democracy

As we approach this crucial election, the responsibility to combat disinformation falls on all of us. While technology companies and institutions play their part, individual vigilance and critical thinking are our strongest weapons against digital deception.

Remember: The goal isn’t to become cynical about all media, but rather to develop healthy skepticism and strong verification habits. By staying informed, sharing responsibly, and teaching others, you become part of the solution.

The future of our digital information landscape — and our democracy — depends on our collective ability to navigate this new reality with wisdom and discernment. Start applying these tools today, and help build a more resilient, informed society for tomorrow.