
New technology challenges fake news and disinformation

Most fake content requires a human to detect and report it, new technology could change this.

Reading news on the go. — © Digital Journal

Scientists are seeking to automatically differentiate between original and fake multimedia content, in order to improve trust online and to find a means to detect fake content. This is an important consideration for those seeking to improve what is shared on social media.

Through advances in technology, those wishing to alter content can take audiovisual files and develop ‘deepfakes’, forming montages that look like real footage. This adulterated footage ranges from the amusing to digital media that sets out to cause harm.

At the moment, most fake content requires a human to detect and report it. Across platforms like Facebook and Twitter, this is an enormous task. While some forms of machine learning can assist, technology has not been sufficiently advanced to be wholly reliable.

To help address these concerns, the researchers are seeking to combine techniques from digital content forensics analysis, watermarking and artificial intelligence.

While it is a work in progress, this development forms part of a continuing battle as photographic and video editing and artificial intelligence tools become more sophisticated. The solution being proposed uses artificial intelligence and data concealment techniques to help users to automatically differentiate between original and adulterated multimedia content.

The development is being led by the Universitat Oberta de Catalunya (UOC). According to lead researcher, Professor David Megías: “The project has two objectives: firstly, to provide content creators with tools to watermark their creations, thus making any modification easily detectable; and secondly, to offer social media users tools based on latest-generation signal processing and machine learning methods to detect fake digital content.”

While the development is progressing, there is unlikely ever to be a single solution, and detection will need to be carried out with a combination of different tools. It is on this basis that the researchers opted to explore the concealment of information (watermarks), digital content forensics analysis techniques (based on signal processing), and machine learning.

Taking one of these as an example, digital watermarking uses data concealment techniques that embed imperceptible information in the original file, so that a multimedia file can be verified automatically.
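To illustrate the general idea rather than the UOC team's actual scheme, the sketch below embeds a verification bit string into the least significant bits of an image's pixels. The change is visually imperceptible, but any later modification of the marked pixels breaks the extracted watermark. The function names and the LSB approach are illustrative assumptions, not the project's method.

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: list[int]) -> np.ndarray:
    """Hide one bit per pixel in the least significant bit (LSB).

    A toy example of data concealment: changing the LSB alters each
    pixel value by at most 1, which is imperceptible to a viewer.
    """
    flat = pixels.flatten().copy()
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | (b & 1)  # clear LSB, then set it to b
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, n: int) -> list[int]:
    """Read back the first n embedded bits for automatic verification."""
    return [int(p) & 1 for p in pixels.flatten()[:n]]
```

If an edited copy of the file yields a different bit string than the one the creator embedded, the modification is detected automatically; real watermarking schemes use far more robust embedding (e.g. in frequency-domain coefficients), but the verification principle is the same.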

The new technology will leverage signal processing technology to detect the intrinsic distortions produced by the devices and programs used when creating or modifying any audiovisual file.
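The intuition behind such forensic analysis can be sketched simply: subtracting a denoised version of an image from the original leaves a noise residual that carries traces of the capturing device and of any subsequent processing. The code below is a minimal illustration of that residual computation using a median filter; the actual system's signal processing pipeline is not described in this article, so this is an assumption for illustration only.

```python
import numpy as np

def noise_residual(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Return image minus its k x k median-filtered version.

    The residual isolates high-frequency noise patterns, which
    forensic methods compare against known device or editing
    fingerprints to spot inconsistencies in a file.
    """
    h, w = img.shape
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    denoised = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            denoised[y, x] = np.median(padded[y:y + k, x:x + k])
    return img.astype(float) - denoised
```

A region whose residual statistics differ from the rest of the image (for example, because it was pasted in from another source or regenerated by an editing tool) stands out against the device's consistent noise fingerprint.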

The technology to date is set out in a research paper in the ARES-Workshops proceedings, titled “Architecture of a fake news detection system combining digital watermarking, signal processing, and machine learning.”

Written By

Dr. Tim Sandle is Digital Journal's Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism. He is additionally a practising microbiologist and an author, with interests in history, politics and current affairs.
