Billy Joel’s latest video, for the just-released song “Turn the Lights Back On,” features him in several deepfakes, singing the tune as himself but decades younger. The technology has advanced to the point where it’s difficult to distinguish the fake 30-year-old Joel from the real 75-year-old of today.
This is tech being used for good. But when it’s used with bad intent, it can spell disaster. In mid-February, a report revealed that a clerk at a Hong Kong multinational had been hoodwinked by deepfakes of senior executives on a video call, resulting in a $35 million theft.
Deepfake technology, a form of artificial intelligence (AI), can create highly realistic fake videos, images, and audio recordings. In just a few years, these digital manipulations have become sophisticated enough to convincingly depict people saying or doing things they never actually did. Before long, the tech will be readily available to the layperson, with few programming skills required.
Legislators are taking note
In the US, the Federal Trade Commission has proposed a ban on the use of deepfakes to impersonate people, the greatest concern being their power to fool consumers. The Feb. 16 proposal also noted that a growing number of complaints have been filed over “impersonation-based fraud.”
A Financial Post article reported that Ontario’s information and privacy commissioner, Patricia Kosseim, feels “a sense of urgency” to act on artificial intelligence as the technology improves. “Malicious actors have found ways to synthetically mimic executives’ voices down to their exact tone and accent, duping employees into thinking their boss is asking them to transfer funds to a perpetrator’s account,” the article said. Ontario’s Trustworthy Artificial Intelligence Framework, on which Kosseim consults, aims to set guidelines for public sector use of AI.
In a recent blog post, Microsoft stated its plan to work with the tech industry and government to foster a safer digital ecosystem and tackle the challenges posed by AI abuse collectively. The company also said it’s already taking preventative steps, such as “ongoing red team analysis, preemptive classifiers, the blocking of abusive prompts, automated testing, and rapid bans of users who abuse the system,” as well as using watermarks and metadata.
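To make the watermark-and-metadata idea concrete, here is a minimal Python sketch of content provenance in the spirit of signed-manifest schemes such as C2PA. It is an illustration, not Microsoft’s implementation: the issuer name and signing key are placeholders, and a real system would use an issuer-held asymmetric key rather than a shared HMAC secret.

```python
import hashlib
import hmac
import json
import time

# Placeholder only: a real scheme would sign with an asymmetric key
# held by the content issuer, not a shared secret.
SIGNING_KEY = b"issuer-secret-key"

def attach_provenance(media_bytes: bytes, issuer: str) -> dict:
    """Build a signed manifest binding the media's hash to its origin."""
    manifest = {
        "issuer": issuer,
        "issued_at": int(time.time()),
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return manifest

def verify_provenance(media_bytes: bytes, manifest: dict) -> bool:
    """Recompute the hash and signature; any edit to the media breaks both."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    if hashlib.sha256(media_bytes).hexdigest() != claimed["sha256"]:
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

video = b"...raw media bytes..."
manifest = attach_provenance(video, issuer="studio.example.com")
assert verify_provenance(video, manifest)             # untouched: passes
assert not verify_provenance(video + b"x", manifest)  # tampered: fails
```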
That prevention will also include enhancing public understanding of the risks associated with deepfakes and how to distinguish between legitimate and manipulated content.
Cybercriminals are also using deepfakes to apply for remote jobs. The scam starts with fake job listings posted to collect information from candidates, then uses deepfake video technology during remote interviews to steal data or unleash ransomware. More than 16,000 people reported being victims of this scam to the FBI in 2020, and in the US this kind of fraud has resulted in losses of more than $3 billion USD. Where possible, the agency recommends holding job interviews in person to avoid these threats.
Catching fakes in the workplace
There are detector programs, but they’re not flawless.
When engineers at the Canadian company Dessa first tested a deepfake detector built with Google’s synthetic videos, they found it failed more than 40% of the time. The Seattle Times noted that while that particular problem was eventually fixed, the underlying issue remains: “a detector is only as good as the data used to train it.” And because the tech is advancing so rapidly, detection will require constant reinvention.
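The training-data point is easy to demonstrate. The toy Python sketch below (synthetic numbers, not a real detector) trains a classifier on fakes from one hypothetical generator, then scores it on fakes from an unseen one; accuracy collapses on the fakes it was never trained against.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy stand-ins for detector features (e.g., compression artifacts).
# Fakes from generator A leave a strong statistical fingerprint;
# fakes from a newer generator B leave a much weaker one.
real        = rng.normal(0.0, 1.0, size=(500, 8))
fakes_gen_a = rng.normal(2.0, 1.0, size=(500, 8))
fakes_gen_b = rng.normal(0.3, 1.0, size=(500, 8))

X_train = np.vstack([real, fakes_gen_a])
y_train = np.array([0] * 500 + [1] * 500)
clf = LogisticRegression().fit(X_train, y_train)

# Near-perfect on the generator it was trained against...
print("gen A accuracy:", clf.score(fakes_gen_a, np.ones(500, dtype=int)))
# ...but it misses most fakes from the unseen generator.
print("gen B accuracy:", clf.score(fakes_gen_b, np.ones(500, dtype=int)))
```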
There are other detection services, which often trace blood flow in the face or errant eye movements, but these could lose their edge once attackers figure out what raises red flags.
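For a sense of what such cues look like in code, below is a simplified sketch of one published blink heuristic, the eye aspect ratio from Soukupová and Čech (2016). It assumes six eye landmarks per video frame from some external face-landmark model, an assumption of this sketch rather than a detail from the article, and as noted above, attackers can learn to defeat cues like this.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """Eye aspect ratio from six landmarks (Soukupova & Cech, 2016).

    `eye` is a (6, 2) array of (x, y) points ordered around the eye:
    corners at indices 0 and 3, upper lid at 1-2, lower lid at 4-5.
    The ratio collapses toward zero when the eye closes.
    """
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_series, closed_thresh=0.2):
    """Count blinks as dips of the per-frame ratio below a threshold."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < closed_thresh and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= closed_thresh:
            closed = False
    return blinks

# Humans blink roughly 15-20 times a minute; an implausible blink
# count over a long clip is one (defeatable) red flag.
```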
“As deepfake technology becomes more widespread and accessible, it will become increasingly difficult to trust the authenticity of digital content,” noted Javed Khan, owner of Ontario-based marketing firm EMpression. He said a focus of the business is to monitor upcoming trends in tech and share them in simple terms with entrepreneurs and small business owners.
To preempt deepfake problems in the workplace, he recommended regular training sessions for employees. A good starting point, he said, would be to test them on MIT’s eight cues the layperson can use to spot a deepfake, such as unnatural blinking, overly smooth skin, and inconsistent lighting.
Businesses should proactively communicate the risks of deepfake manipulation through newsletters, social media posts, industry forums, and workshops, he told DX Journal, to “stay updated on emerging threats and best practices.”
To stay ahead of possible attacks, he said, companies should establish protocols for “responding swiftly” to a deepfake incident, including issuing public statements or taking corrective action.
How can a deepfake attack impact business?
The potential to malign a company’s reputation with a single deepfake should not be underestimated.
“Deepfakes could be racist. It could be sexist. It doesn’t matter — by the time it gets known that it’s fake, the damage could be already done. And this is the problem,” said Alan Smithson, co-founder of Mississauga-based MetaVRse and investor at Your Director AI.
“Building a brand is hard, and then it can be destroyed in a second,” Smithson told DX Journal. “The technology is getting so good, so cheap, so fast, that the power of this is in everybody’s hands now.”
One possible solution is for businesses to use a code word when communicating over video, as a way to determine who’s real and who’s not. But Smithson cautioned that the word shouldn’t be shared near cell phones or computers because “we don’t know what devices are listening to us.”
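One way to harden the code-word idea, sketched below as an illustration rather than anything Smithson prescribed, is to replace the static word with a short-lived code derived TOTP-style (RFC 6238) from a secret exchanged in person. Nothing secret is ever spoken aloud, and an eavesdropping device only ever hears codes that expire within a minute.

```python
import hmac
import struct
import time

# Illustrative sketch, not a vetted protocol: both parties load a
# shared secret exchanged in person (never spoken near a device),
# then derive a short code that rotates every 60 seconds.
SHARED_SECRET = b"exchanged-on-paper-in-person"

def verification_code(secret: bytes, interval: int = 60) -> str:
    """Standard HOTP/TOTP truncation over a time-based counter."""
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(secret, counter, "sha1").digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

# On a video call, each side reads out the current code; a deepfake
# without the secret cannot produce it, and a replayed code from an
# earlier call is useless.
print(verification_code(SHARED_SECRET))
```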
He said governments and companies will need to employ blockchain or watermarks to identify fraudulent messages. “Otherwise, this is gonna get crazy,” he added, noting that Sora, the new AI text-to-video program, is “mind-blowingly good” and in another two years could be “indistinguishable from anything we create as humans.”
“Maybe the governments will step in and punish them harshly enough that it will just be so unreasonable to use these technologies for bad,” he continued. Yet he lamented that foreign actors in hostile countries would not be deterred by any one country’s laws, a downside he said will always be a sticking point.
For now, two defence mechanisms appear to be the best answer to the growing threat posed by deepfakes: legal and regulatory responses, and continuous vigilance and adaptation to mitigate risks. The question remains, however, whether safety will keep pace with the speed of innovation.