

Media advisory: AI makes it harder to spot deep fakes — awareness is key

To combat disinformation, Myers says there are two main safeguards – ourselves and the AI companies.

Image: — © AFP

Artificial intelligence programs continue to develop, and access to them is easier than ever. One darker consequence is that it is becoming more challenging to separate fact from fiction.

As an example of such misuse, in May 2023 an AI-generated image of an explosion near the Pentagon made headlines online and even briefly impacted the stock market before it was quickly exposed as a hoax.

Cayce Myers, a professor in Virginia Tech’s School of Communication, has been studying this ever-evolving technology. Myers is especially interested in the future of deep fakes and how citizens can spot them.

Myers sees this as a growing challenge: “It is becoming increasingly difficult to identify disinformation, particularly sophisticated AI generated deep fakes. The cost barrier for generative AI is also so low that now almost anyone with a computer and internet has access to AI.”

Myers thinks humanity will experience far more disinformation – both visual and written – over the next few years. Spotting the resulting sounds and images will require users to develop greater media literacy and savvy in examining the truth of any claim.

The trickery should not be thought of as simply an extension of existing software such as Photoshop. Here Myers draws the parallel: “Photoshop allows for fake images, but AI can create altered videos that are very compelling. Given that disinformation is now a widespread source of content online, this type of fake news content can reach a much wider audience, especially if the content goes viral.”

To combat disinformation, Myers says there are two main safeguards – ourselves and the AI companies. While we can examine sources and learn to recognize the warning signs of disinformation, this alone is probably not going to be enough.

Myers says: “Companies that produce AI content and social media companies where disinformation is spread will need to implement some level of guardrails to prevent widespread disinformation from being spread.”

Myers adds that AI technology has developed so fast that any mechanism to prevent the spread of AI-generated disinformation is unlikely to be foolproof.

Whilst attempts to regulate AI are under way in the European Union and in the U.S. at the federal, state, and even local level, there are doubts about how successful they will be. As part of the legal deliberations, lawmakers are weighing a variety of issues including disinformation, discrimination, intellectual property infringement, and privacy.

As a result, politicians face a dilemma: creating a law too quickly can stifle AI’s development and growth, while creating one too slowly may open the door to a host of problems. Striking a balance will be a challenge.

Written By

Dr. Tim Sandle is Digital Journal's Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism. He is additionally a practising microbiologist and an author. He is also interested in history, politics, and current affairs.
