Tech & Science

Rise in deepfakes will cause new security concerns

We are in a world where state actors will have the capacity to deploy deepfakes in their security and intelligence operations

AI Yoon's creators believe he is the world's first official deepfake candidate - Copyright AFP/File Jim WATSON, Grigory DUKOR

Deepfakes have caused a number of issues across social media, especially those used to defraud others or for political purposes. As the technology becomes more sophisticated, the challenges are set to grow.

To address the challenges that will arise, a new report from Northwestern University and the Brookings Institution outlines recommendations for defending against deepfakes. To support these recommendations, the researchers have developed deepfake videos in a laboratory, finding they can be created ‘with little difficulty’.

In setting out the warning, the researchers write: “The ease with which deepfakes can be developed for specific individuals and targets, as well as their rapid movement — most recently through a form of AI known as stable diffusion — point toward a world in which all states and nonstate actors will have the capacity to deploy deepfakes in their security and intelligence operations… Security officials and policymakers will need to prepare accordingly.”

To structure the output of their inquiries, the researchers have developed TREAD (Terrorism Reduction with Artificial Intelligence Deepfakes). This is a new algorithm that researchers can use to generate their own deepfake videos. By creating convincing deepfakes, researchers can better understand the technology within the context of security.

To test out the capability, the researchers used TREAD to create sample deepfake videos of deceased Islamic State terrorist Abu Mohammed al-Adnani. Although the resulting video looks and sounds like al-Adnani (down to realistic facial expressions and audio), he is actually speaking words by Syrian President Bashar al-Assad.

The researchers created the lifelike video within hours. The process was relatively straightforward, to the extent that the researchers say that militaries and security agencies need to assume that rivals are capable of generating deepfake videos of any official or leader within minutes.

The researchers' key recommendation is that the U.S. and its allies develop a code of conduct for the responsible use of deepfakes.

The researchers predict the technology is on the brink of being used much more widely, and this could include targeted military and intelligence operations. The concern is that deepfakes could help fuel conflict by legitimizing war, sowing confusion, undermining popular support, polarizing societies, discrediting leaders and more.

Other recommendations made by the researchers designed to counterbalance the rise in deepfakes include educating the general public to increase digital literacy and critical reasoning.

The associated research report, “Deepfakes and international conflict,” has been published by Brookings.

Written By

Dr. Tim Sandle is Digital Journal's Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism. He is additionally a practising microbiologist and an author, and is also interested in history, politics, and current affairs.
