
Ways to safeguard against a new wave of AI-enhanced scams

Scams enhanced by artificial intelligence (AI) have the potential to reach a new level of deception.

Investigators and researchers are still learning of the scope of the cyberattack which has hit US government agencies and other victims around the world - AFP

Scams enhanced by artificial intelligence (AI) have the potential to reach a new level of deception with the introduction of features such as ChatGPT 4o, which allow users to create convincing, photorealistic images, including fake documents, as well as realistic deepfake voices.

To consider how the industry should respond, a panel of Virginia Tech experts, including computer ethics educator Dan Dunlap, digital literacy educator Julia Feerrar, and cybersecurity researcher Murat Kantarcioglu, discussed the implications of this ever-advancing technology. The panel has provided key discussion points to Digital Journal.

The panel cautioned against relying solely on the safety measures built into AI tools to avoid scams, and explained ways to stay vigilant and protect data, including the potential use of blockchain.

Dan Dunlap: Educating the public about fraud detection

According to Dunlap: “Scams using AI are certainly newer and more widespread, and the increasing scale and scope are immense and scary, but there is nothing fundamentally new or different about exploiting available technologies and vulnerabilities for committing fraud. These tools are more accessible, easier to use, higher quality, and faster, but not really fundamentally different from previous tools used for forgery and fraud”.

He adds: “There is a constant need to educate the public and update detection and policy as criminals use the available tools. Computer science professionals have a moral obligation to help in both educating the public and developing tools that help identify and protect all sectors.”

“Unfortunately, disseminating knowledge can also help to exploit the weaknesses of the technology,” Dunlap concludes. “Powerful, available, and accessible tools are destined to be co-opted for both positive and negative ends.”

Julia Feerrar: Watching for telltale signs of scams

“We have some new things to look out for when it comes to AI-fuelled scams and misinformation. ChatGPT 4o’s image generator is really effective at creating not just convincing illustrations and photo-realistic images, but documents with text as well,” Feerrar indicates. “We can’t simply rely on the visual red flags of the earliest image generators.”

“I encourage people to slow down for a few extra seconds, especially when we’re unsure of the original source,” she said. “Then look for more context using your search engine of choice.”

Feerrar continues: “Generative AI tools raise complex questions about copyright and intellectual property, as well as data privacy. If you upload your images to be transformed with an AI tool, be aware that the tool’s company may now claim ownership, including to further train the AI model.”

“For receipts or documents, check the math, the address — basic errors can be telling. Large language models struggle with basic math. However, know that a committed scammer can likely fix these kinds of issues pretty easily. You should also be asking how this image got to you. Is it from a trusted, reputable source?” she adds.
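Feerrar’s “check the math” tip can even be automated. The short Python sketch below is purely illustrative (the function name and the receipt figures are invented for this example, not drawn from the panel); it simply re-adds the line items and compares them with the printed totals.

```python
# Illustrative sketch: flag a receipt whose printed totals don't match its own numbers.
# All receipt values below are hypothetical example data.

def check_receipt_math(line_items, claimed_subtotal, claimed_tax, claimed_total, tolerance=0.01):
    """Return a list of discrepancies between printed and recomputed amounts."""
    problems = []
    computed_subtotal = round(sum(line_items), 2)
    if abs(computed_subtotal - claimed_subtotal) > tolerance:
        problems.append(f"Subtotal mismatch: items sum to {computed_subtotal}, receipt says {claimed_subtotal}")
    computed_total = round(claimed_subtotal + claimed_tax, 2)
    if abs(computed_total - claimed_total) > tolerance:
        problems.append(f"Total mismatch: subtotal plus tax is {computed_total}, receipt says {claimed_total}")
    return problems

# Example: a fabricated receipt whose printed total does not add up.
for issue in check_receipt_math(
    line_items=[12.99, 8.50, 23.00],
    claimed_subtotal=44.49,
    claimed_tax=3.56,
    claimed_total=49.05,  # correct arithmetic gives 48.05
):
    print(issue)
```

As Feerrar notes, a committed scammer can fix such errors easily, so arithmetic checks are only one signal among several.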

Feerrar concludes: “Basic digital security and anti-phishing advice applies whether a scammer uses generative AI or not. Now is also a great time to set up 2-factor authentication. This kind of decision-making is a key part of what digital literacy and AI literacy mean today.”

Murat Kantarcioglu: Using blockchain to prove files are unaltered

“It’s very hard for end users to distinguish between what’s real versus what’s fake,” Kantarcioglu states. “We shouldn’t really trust AI to do the right thing. There are enough publicly available models that people can download and modify to bypass guardrails.”

“Blockchain can be used as a tamper-evident digital ledger to track data and enable secure data sharing. In an era of increasingly convincing AI-generated content, maintaining a blockchain-based record of digital information provenance could be essential for ensuring verifiability and transparency on a global scale,” Kantarcioglu continues.
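Kantarcioglu describes the blockchain approach only in general terms. As a rough sketch of the underlying idea (the class, field names, and sample data below are purely illustrative and not taken from any specific system), a hash-chained ledger of content digests makes later alteration detectable:

```python
# Minimal sketch of a tamper-evident, blockchain-style ledger of content digests.
# Illustrative only; not a production provenance system.
import hashlib
import json
import time

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ProvenanceLedger:
    def __init__(self):
        self.entries = []  # each entry links to the previous one via its hash

    def record(self, content: bytes, source: str) -> dict:
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        body = {
            "content_hash": sha256_hex(content),  # fingerprint of the file or image
            "source": source,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        body["entry_hash"] = sha256_hex(json.dumps(body, sort_keys=True).encode())
        self.entries.append(body)
        return body

    def verify_chain(self) -> bool:
        """Recompute every entry hash and link; any edit breaks the chain."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "entry_hash"}
            if entry["prev_hash"] != prev_hash:
                return False
            if sha256_hex(json.dumps(body, sort_keys=True).encode()) != entry["entry_hash"]:
                return False
            prev_hash = entry["entry_hash"]
        return True

    def matches(self, content: bytes) -> bool:
        """Check whether a file's current hash appears anywhere in the ledger."""
        digest = sha256_hex(content)
        return any(e["content_hash"] == digest for e in self.entries)

# Usage: register an original document, then check a received copy against it.
ledger = ProvenanceLedger()
ledger.record(b"original press photo bytes", source="newsroom camera")
print(ledger.verify_chain())                          # True: ledger is internally consistent
print(ledger.matches(b"original press photo bytes"))  # True: content unchanged
print(ledger.matches(b"doctored photo bytes"))        # False: content was altered
```

Because each entry embeds the hash of the one before it, editing either a recorded file or an earlier ledger entry breaks the chain, which is what makes the record tamper-evident.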

He also offered a simple but powerful low-tech solution, noting: “A family could establish a secret password as a means of authentication. For instance, in my case, if someone were to claim that I had been kidnapped, my family would use this password to verify my identity and confirm the situation.”

Written By

Dr. Tim Sandle is Digital Journal's Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism. He is additionally a practising microbiologist and an author. He is also interested in history, politics and current affairs.
