Brands need to wake up to the dangers of deepfake attacks

By Tim Sandle     Feb 8, 2020 in Business
Companies need to wake up to the dangers of deepfake brand attacks. In the deepfake era, brands also have a heightened responsibility for how they use and protect consumer data.
Brands need to take an ethical approach to deepfake technology, avoiding practices like manipulating images of customers to help increase hype around a product, service or special event. Brands also need to strengthen their cybersecurity.
READ MORE: As creepy new deepfakes emerge, should brands worry?
For instance, brands developing multi-channel authentication methods to deliver a consumer-first customer experience need to pay attention to data security measures that stop deepfakes and bots. Doing so requires new technological solutions.
ALSO READ: YouTube toughens up stance on deepfake videos
To gain an insight into the issue, Digital Journal caught up with Gil Cohen, General Manager, Multichannel Recording, NICE.
Digital Journal: To start with, what are ‘deepfakes’?
Gil Cohen: “Deepfakes” are usually videos that have been edited using an algorithm to replace the person in the original video with someone else in such a way that the video looks authentic. Deepfakes use AI-based technology to aggregate and alter video/audio content to present a distorted version of captured events – a version that never occurred. These videos are created by applying deep learning techniques and using synthesized voices. Unfortunately, deepfakes are used to impersonate people and steal identities, create reputational harm, or even assume ownership of bank accounts.
DJ: So, how sophisticated are these deepfakes becoming?
Cohen: Advancements in machine learning technologies have dramatically improved the sophistication of deepfake videos from their roots in voice synthesizing technology. Using techniques like deep learning and deep neural networks, deepfakes make synthesized voices sound much more authentic and natural while requiring less audio to create, making them easier to generate. Opus Research indicates 14,698 deepfake videos were identified on the internet in June and July 2019, up from 7,964 in December 2018 — an 84% increase in just seven months. Clearly, this indicates the beginnings of a potentially more widespread fraud challenge that could impact a wide range of industries and businesses.
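The growth figure cited from Opus Research can be checked with simple arithmetic:

```python
# Deepfake video counts cited from Opus Research in the interview.
dec_2018 = 7_964   # videos identified in December 2018
mid_2019 = 14_698  # videos identified in June/July 2019

increase_pct = (mid_2019 - dec_2018) / dec_2018 * 100
print(f"Increase: {increase_pct:.1f}%")  # ~84.6%, i.e. the "84% increase" cited
```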
DJ: How easy is it for consumers to be fooled by deepfakes? Are there any notable cases you can provide?
Cohen: Deepfake videos can look very real and mislead viewers. That said, advanced technology that detects and stops deepfakes is available. We've seen fraudsters attempting to use deepfake technology to mislead contact center agents or IVR systems in industries such as finance, healthcare and telecom. These fraudsters attempt to overcome security measures by impersonating a legitimate contact center customer using a deepfake of their voice. Contact center agents aren't typically trained to detect fraud. This is where our technology comes to the rescue. Powered by AI and machine learning, our voice biometrics solution detects an anomaly and alerts the agent in real time. Its self-improving algorithms continuously leverage Deep Neural Networks (DNN) to detect new deepfake software, ensuring that we provide consistent protection for our customers.
DJ: What are the dangers of deepfake brand attacks?
Cohen: Contact center agents are trained to provide excellent service, rather than fight fraud, and therefore often find themselves as the main target of security attacks. Using deepfake technologies, fraudsters trick agents into skipping parts of the authentication process or revealing personal information that they later use to take over the account or to steal the identity of the account holder. The customer isn't the only party at risk. A simple inquiry with an agent can lead to a breach that can cause substantial damage to the brand, both in reputation, resulting in a significant drop in customer loyalty, and in direct losses estimated at billions of dollars annually.
DJ: What responsibility do brands have in terms of how they use and protect consumer data?
Cohen: Brands invest significantly in securing their databases and protecting customer data, as well as in securing the digital channels that enable customers to contact them. Sadly, brands often don't focus enough on safeguarding the gateway to the consumer – the contact center. Many brands still use seemingly secure authentication methods like security questions, passwords and pin codes, which have proven insufficient. In reality, it is the brand's responsibility to offer more advanced authentication methods, such as AI-based voice biometrics, to detect fraud in real time while ensuring a secure and seamless customer experience.
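As a rough illustration of the voice-biometric authentication Cohen describes, the sketch below compares a caller's live voiceprint against one captured at enrollment. The vectors and threshold are illustrative placeholders; a real system would derive voiceprints from a trained deep-neural-network embedding model, not hand-written lists.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two voiceprint vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify_caller(enrolled_print, live_print, threshold=0.8):
    """Accept the caller only if the live voiceprint is close enough
    to the voiceprint captured at enrollment."""
    return cosine_similarity(enrolled_print, live_print) >= threshold

# Toy voiceprints: a genuine caller and a mismatched (possibly synthetic) voice.
enrolled = [0.9, 0.1, 0.4]
genuine = [0.85, 0.15, 0.42]
imposter = [0.1, 0.9, 0.2]

print(verify_caller(enrolled, genuine))   # True
print(verify_caller(enrolled, imposter))  # False
```

The design point is that verification is continuous-valued: the threshold trades off false rejections of legitimate customers against false acceptances of impersonators, which is why such systems pair the score with real-time agent alerts rather than a hard block alone.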
DJ: Which types of technologies can assist brands in reducing deepfake threats?
Cohen: AI-powered voice biometrics solutions like NICE Real-Time Authentication virtually eliminate deepfake threats. Using deep learning and deep neural networks (DNN), this solution detects deepfakes in real time, blocks the call and alerts the agent continuously during the call. This means that even if the fraudster uses a deepfake at the beginning to overcome the authentication process in the IVR and then continues the call with an agent using their own voice, our voice biometrics solution detects that the speaker changed. Moreover, machine learning and behavioral analytics capabilities make it possible to analyze in real time what the caller is trying to do and detect a fraudulent pattern.
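The mid-call speaker-change scenario Cohen describes can be sketched as comparing each call segment's voiceprint to the voice that passed authentication at the start. This is only a conceptual illustration with made-up embeddings and threshold, not NICE's actual algorithm:

```python
import math

def cosine(a, b):
    """Cosine similarity between two voiceprint vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def detect_speaker_change(segment_embeddings, threshold=0.7):
    """Flag the first call segment whose voiceprint diverges from the
    voice that passed authentication at the start of the call."""
    reference = segment_embeddings[0]  # voice that passed IVR authentication
    for i, segment in enumerate(segment_embeddings[1:], start=1):
        if cosine(reference, segment) < threshold:
            return i  # index of the segment where the speaker changed
    return None  # no change detected

# Toy call: a synthetic voice passes authentication, then the fraudster's
# own voice takes over at segment 2.
call = [
    [0.9, 0.1, 0.3],    # segment 0: synthetic voice (authenticated)
    [0.88, 0.12, 0.3],  # segment 1: still the synthetic voice
    [0.1, 0.8, 0.5],    # segment 2: fraudster's own voice
]
print(detect_speaker_change(call))  # 2
```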