In the U.S., General Election Day has arrived. Adding to the political drama (is the U.S. populace idealistic or materialistic?) and the risk of post-election violence, there is an undercurrent of cybersecurity risks, scams and fraud.
These risks are perhaps best encapsulated by deepfakes and misinformation, both of which are on the rise. The U.S. FBI has warned the public about two videos falsely claiming to be from the FBI on election security.
Robert Prigge, Jumio’s CEO, has shared insights on how U.S. society can uphold the democratic process throughout this pivotal day and afterwards.
Prigge explains to Digital Journal: “On Election Day, voters face an ongoing challenge in discerning the integrity of digital content. An overwhelming 72 percent of American consumers are concerned about the potential for AI and deepfakes to influence upcoming elections.”
The issue is not confined to the U.S., as Prigge pointed out: “With over 50 countries holding elections in 2024, these concerns don’t stop at U.S. borders.”
In terms of the actual threat, Prigge clarifies: “Deepfakes are a global issue affecting democracies around the world. From audio that convincingly mimics a politician’s voice to videos showing individuals saying or doing things they never did, these deceptive tools can spread misinformation and amplify distrust.”
Regarding the election itself, he adds: “This isn’t a hypothetical risk, as we have seen deepfake incidents falsely depicting Kamala Harris and Donald Trump and manipulating their messages.”
Prigge explains how this translates into real risks: “The real danger of deepfakes lies in how easily the technology can now be accessed. Previously, creating deepfakes required advanced expertise and resources, but today, they can be produced with minimal skill due to online tools and platforms. The widespread availability of deepfake technology has paved the way for large-scale manipulation, creating serious risks for the integrity of information in democratic processes.”
There are measures that can be taken, however. Prigge addresses these: “To address the risks posed by AI-powered deepfakes, we need equally advanced detection technology including biometric analysis, which can be employed to identify and mitigate the risks posed by deepfakes before they cause significant harm. For online platforms and media outlets, adopting these verification technologies is essential to ensure that election-related content remains credible. With AI-driven identity verification, organizations can maintain a trust-based digital environment, preventing malicious actors from spreading damaging, falsified narratives.”
As to where responsibility lies, Prigge divides this between the public and private sectors: “The responsibility to curb deepfake risks falls on both governments and private platforms, as prioritizing verification technologies can uphold electoral integrity amidst AI’s growing influence on misinformation. In today’s digital age, investing in advanced identity verification is not just about protecting users — it’s about protecting the democratic process.”