At this time of year we are warned about fake news, including AI scams that sell fake goods through AI-generated websites, news articles, and adverts. But how savvy are people when it comes to spotting fake news?
A survey commissioned by Netskope looked at the U.K. population and found that 50 percent of respondents were unable to identify AI-generated content, despite 84 percent of the cohort having claimed beforehand that they could spot fake news. A similar survey found that the U.S. public performed even worse, with just 44 percent of people answering the question correctly.
To capture the data, Netskope created an online quiz to assess the knowledge of 501 U.K. respondents.
The three most susceptible regions with the highest fail rates were:
- Greater London.
- East Midlands.
- Northeast.
In contrast, the Southeast emerged as the savviest region, with 37.1 percent unable to identify fake online content.
In the test set presented to participants, two of the top stories featured AI-generated fake news, while the third was fake news about AI itself.
- An image of Pope Francis wearing an oversized white puffer coat – This racked up over 20.8 million social views and was covered by 312 publications.
- AI images of Donald Trump being arrested in Downtown Washington DC in March this year – The convincing images, pre-empting a dramatic arrest, went viral on X (Twitter), with over 10 million views, and were covered by 671 publications.
- The reported simulation of an AI drone killing its human operator – The misinformation was covered by the highest number of publications, with 1,689 pieces of coverage.
When presented with a potential fake news story, it is good practice to try to find the original source. If the image or video is on social media, the comments may hold information about where it originated.
For image-based stories, it can be useful to enlarge the image and check for errors. Making the image bigger will reveal poor quality or incorrect details – both tell-tale signs of an AI-generated image. It is also useful to check the image’s proportions. A frequent mistake in AI images is in the proportions and quantities of body parts and other objects: hands, fingers, teeth, ears, and glasses are often deformed or appear in the wrong numbers.
For video-based stories, it is useful to consider whether the video image is too small. As with images, a small video frame suggests ‘deepfake’ AI software has been used that can only deliver a fake video at a very low resolution. Another clue is oddly placed subtitles: in fake videos, subtitles are often strategically positioned to cover faces, making it harder for viewers to notice the unconvincing deepfake, where the audio often does not match the lip movements.
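The low-resolution clue above can even be checked programmatically. As a rough illustration only, the sketch below reads the pixel dimensions straight out of a PNG file's header (for example, a frame exported from a suspect video) and flags suspiciously small images; the function names and the 512-pixel threshold are illustrative choices, not an established detection standard, and a small image is only a heuristic signal, never proof of a deepfake.

```python
import struct

def png_dimensions(data: bytes) -> tuple[int, int]:
    """Read width and height from a PNG file's IHDR chunk."""
    # A PNG starts with an 8-byte signature, then the IHDR chunk:
    # 4-byte length, the 4 bytes b"IHDR", then width and height
    # stored as big-endian 32-bit integers.
    if data[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    if data[12:16] != b"IHDR":
        raise ValueError("missing IHDR chunk")
    width, height = struct.unpack(">II", data[16:24])
    return width, height

def looks_low_resolution(data: bytes, min_side: int = 512) -> bool:
    """Heuristic: flag images whose shorter side is below min_side pixels.

    Illustrative threshold only -- small frames make deepfake
    artefacts easier to hide, but a small image is not proof of one.
    """
    width, height = png_dimensions(data)
    return min(width, height) < min_side
```

For example, a 300x200 frame would be flagged by `looks_low_resolution`, while a 1920x1080 frame would pass; either way, the result should only prompt a closer look, not a verdict.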
There is a wider concern that cybercriminals are using fake AI-generated images and content to trick people into handing over personal information and company secrets.
