Tackling financial fraud has become more difficult than ever in recent years, thanks to the growing role of artificial intelligence (AI) in fraudsters' toolkits. A recent report from Signicat highlights how deeply AI has penetrated the murky world of financial fraud, suggesting that it now accounts for 42 percent of all financial fraud attempts.
Set against this, just 22 percent of firms have AI defences in place. This disconnect is worrying but, sadly, it's nothing new.
AI has lowered the barrier to entry for fraudsters, which has in turn driven up overall fraud incidence. Signicat's report also found that the volume of fraud attempts is rising rapidly, with total attempts up 80 percent over the last three years. This is partly due to the role AI plays in making financial fraud schemes easier to execute, but is also attributable to external factors.
Digital Journal has considered the most common forms of AI-fuelled financial fraud, with input from Stuart Wilkie, Head of Commercial Finance at Anglo Scottish Finance.
Synthetic identity fraud
The majority of AI-aided financial fraud can be categorised as synthetic identity fraud. Under this scam, fraudsters use AI to create fake identities that combine real and fabricated information, then use them to apply for loans, lines of credit or even benefits.
AI's ability to quickly identify patterns within large datasets lets fraudsters create realistic profiles that align with demographic trends. Generative AI is also used in the identity creation process to simulate a plausible credit history. The resulting profiles are near-impossible to distinguish from real people under standard verification checks.
A report from the U.S. Government Accountability Office (GAO) estimates that more than 80 percent of new account fraud can be attributed to synthetic identity fraud – underscoring the vital importance of stronger security measures.
Deepfaking
The growing adoption of biometrics as a security measure has reduced our reliance on passwords. For many people, it’s made life easier – there’s less pressure to remember umpteen different passwords, knowing that your face or your fingerprint is enough to sign into your mobile banking or social media.
However, generative AI has made it easier for fraudsters to bypass these mechanisms through deepfakes: images, audio or video, edited or generated with AI, that depict real or non-existent people.
When combined with other identifying details – such as an individual's National Insurance number or the first line of their address – deepfakes are increasingly finding gaps in financial institutions' security measures, giving fraudsters access to bank accounts.
Fake customer service
As well as helping scammers impersonate banking customers to gain access to their accounts, generative AI is helping them target customers directly by impersonating customer service representatives. In days gone by, fraudulent text messages or emails were typically easier to spot – they might contain spelling mistakes or grammatical errors, or be written in a tone of voice that didn't match your bank's.
Now that scammers are using generative AI chatbots, however, producing an email that sounds exactly like your bank is far easier – the tools can match the corporate tone with ease and will never make a spelling mistake.
This side of financial fraud extends far beyond emails, too – there have been numerous instances of scammers building entire fake websites with AI-generated content, designing pages that mimic those of a trustworthy bank.