
Automating the fraud management process will be a game changer

By Tim Sandle     Nov 13, 2020 in Business
The growth of digital services has led to a growth in fraud cases. One solution is automation. According to Mark Goldspink, automating the fraud management process will be a game-changer in 2021.
To learn more about digital and automated fraud solutions, Digital Journal spoke with Dr Mark Goldspink, CEO of The ai Corporation (ai). The company's new flexible AutoPilotML™ orchestration technologies can be applied to many business processes that require detailed consumer insight, including credit management, marketing (attrition) and pricing.
Digital Journal: How pressing is the problem of fraud today?
Mark Goldspink: Take PwC’s Global Economic Crime and Fraud Survey 2020, in which they quizzed more than 5,000 respondents across 99 territories about their experience of fraud over the past 24 months. Nearly half had suffered at least one fraud – with an average of six per company. The total cost of these crimes is estimated at US$42 billion. That is cash taken straight off a business’s bottom line. Some 13% of those who had experienced fraud reported losing more than US$50 million.
The most common types of fraud were customer fraud, cybercrime, and asset misappropriation. There was a roughly even split between frauds committed by internal and external perpetrators, at almost 40% each, with the rest being mostly collusion between the two.
DJ: How important is it for companies to be proactive when it comes to fraud detection?
Goldspink: I would say it is imperative that organisations take proactive steps to prevent fraud, not just because of the tangible bottom-line financial costs, but also because of the intangible brand reputation damage that fraud attacks cause.
Over the years, there have been many examples I could cite, but the one I will always remember with some horror is the case of a bank that launched a pre-paid card programme. Unfortunately, only as an afterthought did they decide that a fraud system was needed to support the programme, and the organisation I worked for at the time (not The ai Corporation (ai)) was selected to provide the solution.
As part of the first fraud review, we went in to see the bank, but just ahead of the review meeting, our Head of Fraud announced (and I have still not forgiven him) that if he turned the fraud system on, 98% of the bank’s transactions would be declined. As you can imagine, the review meeting was not easy, particularly as the bank had invested about $2.5m into the pre-paid card programme.
DJ: Does the current remote working atmosphere increase the difficulty for companies seeking to prevent fraud?
Goldspink: ai, like many businesses, has business continuity programmes in place to ensure we can work remotely, which is very important for managing the current COVID-19 lockdown. Remote working makes things a little more challenging, but organisations with a heavy dependency on IT understand the need to be able to work remotely. All single points of failure need to be removed, and risk management and fraud systems are no different.
To answer the question directly, yes, remote working does create more of a challenge for risk management experts, but it is not insurmountable, and I am pleased to see how well our teams have adapted. If I were to pinpoint one challenge that working from home adds, it would be the lag in decision-making and the lack of opportunity to quickly ask a colleague, who would usually sit next to you, for their advice. Two sets of eyes on an issue make things easier, and being apart means that things must be caught, more formally, through virtual stand-up meetings at the start and close of the day.
DJ: What can companies do to mitigate any difficulties involved with the current remote working atmosphere?
Goldspink: As well as integrating Machine Learning capabilities within an enterprise risk management system, it is crucial that sophisticated rules engines are deployed. One of the primary reasons for implementing a rules engine is to be able to create a series of risk management policies, for both fraud and AML, that are designed to trigger alerts if and when policy rules are broken.
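The core of such a rules engine can be sketched in a few lines: each policy is a named predicate over a transaction, and an alert fires if and when a policy rule is broken. The rule names and thresholds below are hypothetical illustrations for this sketch, not ai's actual policies.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str                       # policy name, e.g. a fraud or AML check
    broken: Callable[[dict], bool]  # predicate: True means the policy is violated

# Hypothetical fraud/AML policy rules, for illustration only.
RULES = [
    Rule("high_value", lambda tx: tx["amount"] > 10_000),
    Rule("high_risk_country", lambda tx: tx["country"] in {"XX", "YY"}),
    Rule("rapid_repeat", lambda tx: tx["txns_last_hour"] > 5),
]

def evaluate(tx: dict) -> list[str]:
    """Return the names of all policy rules this transaction breaks."""
    return [rule.name for rule in RULES if rule.broken(tx)]

alerts = evaluate({"amount": 25_000, "country": "XX", "txns_last_hour": 1})
# alerts == ["high_value", "high_risk_country"]
```

In practice each triggered rule name would feed an alerting pipeline rather than just a list, but the structure – a library of independently maintained policy predicates evaluated per transaction – is the same.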
To optimise any risk management system and to facilitate remote working globally, the system must have a customer engagement capability. For example, ai’s customer engagement system, SmartAlerts™, currently sends SMS alerts both internally and externally in more than 36 countries. These are two-way alerts, so a call to action can be acted upon, either by a customer or by our internal team of risk management experts.
In addition to the SMS functionality, the customer engagement function can send e-mails and is accessible through social media channels. These features ensure the enterprise risk management system is more accessible, which is critical to providing 24/7 customer service support.
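A two-way alert flow of the kind described can be sketched as a simple state machine: an outbound message is recorded against the transaction, and the customer's reply drives the call to action. SmartAlerts™ internals are not public, so the function names and message wording here are assumptions for illustration.

```python
# Hypothetical sketch of a two-way alert flow; not ai's actual implementation.
PENDING: dict[str, dict] = {}  # alert_id -> transaction awaiting a reply

def send_alert(alert_id: str, tx: dict) -> str:
    """Record the pending alert and return the message body to deliver
    over any channel (SMS, e-mail, or social media)."""
    PENDING[alert_id] = tx
    return (f"Did you make a payment of {tx['amount']} at {tx['merchant']}? "
            "Reply YES or NO.")

def handle_reply(alert_id: str, reply: str) -> str:
    """Act on the customer's reply: release the transaction or block it."""
    PENDING.pop(alert_id)
    if reply.strip().upper() == "YES":
        return "confirmed"   # customer recognises the payment; release it
    return "blocked"         # escalate to the risk team and block the card
```

The same handler serves every channel because only the delivery mechanism differs; the pending-alert state and the reply semantics are shared.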
DJ: How can machine learning or other types of AI be utilised to assist with fraud detection?
Goldspink: Machine Learning is not new in the world of payment fraud. One of the pioneers of Machine Learning is Professor Leon Cooper, Director of Brown University's Centre for Neural Science, which was founded in 1973 to study animal nervous systems and the human brain. If you follow his career, Professor Cooper’s Machine Learning technology was adapted to spot fraud on credit cards and is still used today by many financial institutions around the world to identify payment fraud.
Machine Learning technologies improve when they are presented with more and more data. Since there is a lot of payment data around today, payment fraud prevention has become an excellent use case for AI. To date, we have seen Machine Learning technologies used mainly by banks, but more and more merchants, including retailers and telecommunications companies, are taking advantage of this technology to help automate fraud detection.
Rules-based systems have a role to play in fighting fraud, but they can be prone to generating false positives, i.e. transactions that might look suspicious at first glance but which are, in fact, genuine. Traditionally, these transactions would have to be reviewed manually, requiring significant manpower as the rate of online payments continues to accelerate. Machine learning can take on large parts of this extra work, not only by providing a more accurate view of what counts as suspicious (ML differs from traditional rules by typically incorporating a broader range of data points), but also by automating existing manual processes, such as the creation and submission of suspicious activity reports.
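The rules-plus-ML triage described here can be illustrated with a minimal sketch: a rule hit is only sent for manual review if a model, scoring a broader set of data points, also rates it as risky; otherwise it is auto-released, cutting false positives. The feature names and hand-set weights below are illustrative assumptions; a real model would learn its weights from labelled historical payment data.

```python
import math

# Illustrative hand-set weights; a production model learns these from data.
WEIGHTS = {"amount_z": 1.2, "new_merchant": 0.8, "night_time": 0.5}
BIAS = -3.0

def fraud_score(features: dict) -> float:
    """Logistic score over a broader range of data points than any single rule."""
    z = BIAS + sum(w * features.get(name, 0.0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

def triage(rule_hit: bool, features: dict, threshold: float = 0.5) -> str:
    """Route a transaction: only rule hits the model also flags go to analysts."""
    if not rule_hit:
        return "approve"
    return "manual_review" if fraud_score(features) >= threshold else "auto_release"
```

Transactions a rule flags but the model scores low (likely false positives) are released automatically, so analysts only see the cases where both signals agree.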
DJ: Are there any benefits or potential drawbacks to its deployment?
Goldspink: In terms of the benefits, from an application point of view, Machine Learning technologies improve when they are presented with large quantities of accurate data. They need a data signal. If the data set that is being used is good enough, then the technology can provide a strong return on investment.
From an infrastructure point of view, more and more organisations are using a Software-as-a-Service (SaaS) model rather than implementing technology on-premises. It is clear the SaaS model provides more favourable economies of scale and is easier to deploy through standard APIs.
As for the potential drawbacks: from an application point of view, a Machine Learning solution is only as good as the data given to it. If the data available is partial or inaccurate, it will generate misleading results. Some organisations may not have high-quality data readily available to ensure their system is optimised and providing the value that is required.
From an infrastructure point of view, machine learning systems require high levels of monitoring and auditing to ensure that decisions being made by machines remain focused and unbiased. This can add an additional burden to on-premises deployments.
Explainability is central, especially in financial services, a sector not known for adopting new technologies quickly, least of all black-box solutions. Any fraud solution must be transparent about how its decisions are made and must be fully auditable. Most importantly, vendors need to educate key stakeholders on how best to use the technology; efforts to ease the burden of adoption rely heavily on education.
DJ: What can AutoPilotML™ do to help ease the burden on companies seeking to fight fraud?
Goldspink: Until now, the fraud detection industry has focussed on detecting fraud reactively; it has not focussed on proactively evaluating the impact of automation on the whole end-to-end fraud management process. Clearly, the interdependencies between these two activity streams are significant, so the question remains: why are fraud prevention suppliers not considering both?
Fraud is increasing, so at what point do we recognise that the approach of throwing budget and increasing the number of analysts in our teams is not working, and that we need to consider automating more of the process? Machines do not steal data, people do, so why are the manual processes/interventions not attracting more attention?
It is not a stretch to imagine most, if not all, of the fraud risk strategy process becoming automated. Indeed, ai’s new flexible AutoPilotML™ re-usable orchestration technologies are doing just that. Instead of today's expanding teams performing the same manual tasks continually, those same staff members are spotting enhancements in customer insight. This frees analysts to thoroughly investigate complex fraud patterns that a machine has not identified, or to assist in other tasks outside of risk management that provide added business value.
Process automation is continuing to innovate and provide increased efficiency and profit gains in the places it is implemented. The automation revolution is not coming, it is here.