A fake moustache and trenchcoat isn’t a convincing disguise, right? But a digitally altered video that makes your face identical to someone else’s?
That’s a different story.
Deepfakes are artificial images or videos that imitate a person’s likeness so convincingly that it can be nearly impossible to recognize they’re fake. Hackers use them to impersonate people’s faces and voices. The impact can be monumental: one undisclosed company lost $25 million in a single deepfake scam.
Even with all the money a company spends on voice authentication and facial biometrics, it can all be in vain if a deepfake hacker manages to fool those systems.
Gartner explores the impact of deepfakes on organizational policy, and we’ll share some risk management considerations to address the trend.
30% of organizations can’t rely on facial recognition software and biometrics
Biometric systems rely on presentation attack detection (PAD) to assess a person’s identity and liveness. The problem is that today’s PAD standards don’t protect against injection attacks using AI-generated deepfakes. Once considered a bulletproof security strategy, biometrics are now inefficient for 30% of companies surveyed by Gartner.
“These artificially generated images of real people’s faces, known as deepfakes, can be used by malicious actors to undermine biometric authentication or render it inefficient,”
— Akif Khan, VP Analyst at Gartner
The solution is a demand for more innovative cybersecurity tech. Gartner advises organizations to update the minimum requirements they set for cybersecurity vendors to include all of the following:
- PAD
- Injection attack detection (IAD)
- Image inspection
On top of that, you can beef up security with:
- Device identification: Numerical values or codes to identify a user’s device
- Behavioural analytics: Machine learning algorithms to detect any shifts in day-to-day online behaviour
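The behavioural analytics idea above can be sketched with a simple baseline check: keep a history of a user’s normal behaviour (here, hypothetical login hours) and flag readings that deviate sharply from it. This is an illustrative z-score sketch, not a production anomaly detector, and the threshold and data are assumptions:

```python
from statistics import mean, stdev

def is_anomalous(history, new_value, threshold=3.0):
    """Flag a reading that deviates sharply from a user's baseline.

    `history` is a list of past measurements (e.g. daily login hours);
    `new_value` is the latest observation. A z-score above `threshold`
    marks the behaviour as a shift worth reviewing.
    """
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > threshold

# A user who normally logs in around 9 a.m. suddenly appears at 3 a.m.
baseline = [9, 9, 8, 10, 9, 9, 8, 9]
print(is_anomalous(baseline, 3))   # True — flagged as a behavioural shift
print(is_anomalous(baseline, 9))   # False — consistent with the baseline
```

Real systems feed many more signals (device, location, typing cadence) into machine learning models, but the principle is the same: learn what “day-to-day” looks like, then alert on deviations.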
So, how can you account for deepfakes risks and mitigation in practice? Here are a few more tips to consider:
- Educate employees: Hold monthly or quarterly meetings with experts in the field to help your employees identify common signs of deepfakes, including blurred or pixelated areas in a person’s video, or distorted audio. Greater awareness of what to look out for allows employees to flag suspicious content.
- Don’t rely on one authentication process: Multi-factor authentication requires two or more pieces of evidence to verify a user before admitting them into a network. Include email, phone, or voice verification in addition to biometrics.
- Invest in deepfake detection software: Consider a subscription to Sensity AI, Deepware Scan, Truepic, or Microsoft Video Authenticator.
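The multi-factor principle above can be sketched as a gate that opens only when enough independent factors verify. This is a minimal illustration, assuming hypothetical factor names and boolean results from each verification step rather than any real authentication API:

```python
def authenticate(factor_results, minimum=2):
    """Grant access only when at least `minimum` independent factors
    (e.g. biometrics, an email code, a phone code) succeed.

    `factor_results` maps factor names to the boolean outcome of each
    verification step.
    """
    passed = [name for name, ok in factor_results.items() if ok]
    return len(passed) >= minimum

# A deepfake fools the facial biometrics, but the attacker cannot
# produce the email or phone codes — access is still denied:
checks = {"facial_biometrics": True, "email_code": False, "phone_code": False}
print(authenticate(checks))  # False — one factor alone is not enough

checks["phone_code"] = True
print(authenticate(checks))  # True — two independent factors passed
```

The design point is that a deepfake defeats only one factor; requiring a second, independent channel means the forgery alone cannot open the door.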
Gartner plans to share more findings and research on deepfakes at their security and risk management summits taking place in various countries around the world.
Read more about those summits and see the news release here.
