
Q&A: As creepy new deepfakes emerge, should brands worry?

By Tim Sandle     Jun 29, 2019 in Business
Bill Bronske, from Globant, discusses the good, the bad and the ugly that comes from using deepfake technology. Bronske has been at the forefront of developing AI-driven technologies that are helping the public and private sector identify deepfakes.
Deepfake technology presents ethical implications for brands that are considering using deepfakes to market their products. For instance, it could be only a matter of time before a brand attempts to manipulate an image of a well-known figure like LeBron James or Kylie Jenner to promote a product. Beyond this, the technology could be used to manipulate images of customers to help build hype around a product, service or special event.
READ MORE: Deepnude app shut down after protests
To understand more about the implications of deepfake technology for businesses, Digital Journal spoke with Bill Bronske, Senior Solutions Architect at Globant. Bronske is a new type of consultant, one who specializes in next-wave technology.
Digital Journal: Why are brands grappling with concerns surrounding deepfakes now?
Bill Bronske: I believe it is the combination of two things coming together. The first is higher-quality video composition tools, which are now more widely available than ever before. The second is the realization that we've become a society trained on sound bites. Our attention spans have decreased. Entire narratives are developed from a single image or a couple of seconds of audio. Although we'd like to see ourselves as empowered and informed, this collision represents our collective vulnerability to manipulation. We generally understand that context may be missing, text can be misquoted and photos can be photoshopped; however, we don't immediately recognize that video or audio can be just as easily synthesized.
DJ: Is deepfake technology a tool for the good or for the bad?
Bronske: The entertainment industry has been using video composition techniques and CGI for years. Seamless face-swap techniques are used when a stunt person stands in for an actor. Synthesized renderings of people are used to memorialize deceased singers and actors for cameo appearances. A few months ago, David Beckham appeared in a video ad for Malaria No More, a U.K. charity, in which he appeared to deliver his appeal to help fight malaria in nine languages.
Countering deepfakes as an enterprise is a different discussion, and it will likely need to be approached by continuing to build a foundation of trust for your enterprise. Attacks on an enterprise's values can be thwarted by a long-term trust journey with high levels of transparency. Those who create or publish maliciously faked material should face appropriate penalties.
READ MORE: Samsung brings Mona Lisa 'to life' with deepfake AI
DJ: Should the public be worried about deepfakes? What are the potential ethical concerns?
Bronske: If there is agreement to the use and the cause, there is little concern. In each of the cases above, the individual, or the estate of the person whose likeness is forged, has fully agreed to participate. Therefore, legalities and copyrights have been respected. As a potential tool for the enterprise, always begin with the question "should we?" This enables conversations of empathy and provides a meaningful guide for responding ethically.
Remember that using the likeness of a person who is not participating in the process can misrepresent the individual's views and damage the person's reputation. Follow the best practices of the entertainment industry, including performer consent. Inform audiences. And, of course, ensure that any use of these techniques fully complies with all applicable laws, including copyright law. Transparency with your consumers and all parties involved will serve you well.
DJ: How can deepfakes be detected?
Bronske: Snapchat, Instagram, and others have increased consumer demand and expectations for video rendering tools. Higher-resolution screens and video delivery formats will only improve the resulting quality. Detecting deepfakes reminds me of the "Spy vs Spy" comic, in which two agents are engaged in stereotypical espionage activities. In each new strip, the two spies alternate between victory and defeat.
By definition, constructing deepfakes with generative adversarial networks (GANs) means that, over time, they will become less and less susceptible to detection. Therefore, video and audio detection techniques alone will yield only short-lived victories. Longer term, we need to explore other options, such as the use of ledgering, certification, and checksums. These will necessitate complex processes and tooling similar to those used for electronically signing legal documents.
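To make the checksum idea concrete: one building block of the approach Bronske describes is publishing a cryptographic hash of an original video so that any later copy can be checked for tampering. The sketch below is a hypothetical illustration, not a description of any product Globant or Bronske has built; the function names are invented for this example.

```python
import hashlib

def media_checksum(path: str, algorithm: str = "sha256") -> str:
    """Compute a hex digest of a media file, reading it in chunks
    so that large video files don't need to fit in memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_published_digest(path: str, published_digest: str) -> bool:
    """True only if the file still matches the digest its publisher
    certified; any edit to the file changes the hash."""
    return media_checksum(path) == published_digest
```

In practice this only proves a file is unchanged since certification; the harder problems Bronske points to, distributing the trusted digests (the "ledgering") and binding them to a verified publisher (the "certification"), are what make real tooling complex.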