8th October 2019

AI versus the fraudsters

How can trading standards harness the power of artificial intelligence and machine learning to tackle rogue traders in an online world?


By Mansoor Iqbal
Freelance writer for JTS

No longer the stuff of science fiction, artificial intelligence has become an everyday reality. We might look at self-driving cars being pioneered in the US, algorithmic trading in the financial sector – or, closer to many people’s homes, Siri and Alexa virtual assistants.

Artificial intelligence technology also plays a crucial role in the ever-evolving battle against fraud. UK Research and Innovation recently awarded funding to a research project from Intelligent Voice, Strenuus, and the University of East London, which will see voice recognition and AI combined to assess the veracity of insurance claims (insurance fraud cost the UK £7bn in 2017).

Research led by the University of Warwick, with funding from the Engineering and Physical Sciences Research Council and the Economic and Social Research Council, aims to address ‘rom-con’ online dating scams by identifying fake dating profiles used to cheat users out of their money. Such scams cost 3,000 Brits £41m collectively in 2017.

Further afield, the Dubai Consumer app, launched in 2018, uses AI to protect consumer rights. The app allows consumers in the Emirate to report queries or complaints around the clock, then processes the claim based on current regulations using artificial intelligence. Consumers receive an ‘empowerment letter’ based on their claim or query, details of which are also sent to the merchant in question, which is legally obliged to resolve the issue.

According to Mike Andrews, Lead Co-ordinator of the National Trading Standards eCrime Team, “We’ve been moving towards a much more intelligence-led way of tackling issues. What we’re trying to look at is where we can use technology to get ahead of the game. It’s a sad fact that we’re probably always going to be a step or two behind the criminals but what we want to try to do is get as close to them as possible. So we’re looking at whether artificial intelligence can be used as a means to flag websites that have indicators that they could be fraudulent.

“Of course, it’s just a machine making that assessment, but if it can flag it up at an early stage, then a human can look at it and say ‘yes, we’ve reviewed it and it has all the tell-tale signs of being a particular fraud that we’re familiar with’, which means we’ll potentially be able to intervene at a much earlier stage.”
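
As a rough sketch of the kind of triage Andrews describes, a system might score each site against a set of known fraud indicators and queue anything above a threshold for a human analyst. The indicators, weights and threshold below are invented for illustration; a production system would more likely learn them from labelled examples.

```python
# Hypothetical indicator-based triage of e-commerce websites. A weighted
# score stands in for the model; anything over the threshold is queued
# for manual review by a human analyst. All names and weights are invented.

INDICATOR_WEIGHTS = {
    "domain_age_under_30_days": 0.35,  # very new domains are a common tell
    "no_contact_address": 0.20,
    "prices_far_below_market": 0.25,
    "copied_product_images": 0.10,
    "mismatched_company_number": 0.10,
}

REVIEW_THRESHOLD = 0.5  # above this, a human takes a look


def fraud_score(features: dict[str, bool]) -> float:
    """Sum the weights of the indicators present on a site."""
    return sum(
        weight
        for name, weight in INDICATOR_WEIGHTS.items()
        if features.get(name, False)
    )


def triage(sites: dict[str, dict[str, bool]]) -> list[str]:
    """Return flagged URLs, highest score first, for analyst review."""
    scored = [(url, fraud_score(f)) for url, f in sites.items()]
    flagged = [(url, s) for url, s in scored if s >= REVIEW_THRESHOLD]
    return [url for url, _ in sorted(flagged, key=lambda item: -item[1])]


sites = {
    "shop-a.example": {
        "domain_age_under_30_days": True,
        "prices_far_below_market": True,
    },
    "shop-b.example": {"no_contact_address": True},
}
print(triage(sites))  # ['shop-a.example'] goes to a human for review
```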

Fraud on the rise

While being able to call on AI is no doubt a great boon in the battle against fraud, as the above examples might suggest, the necessity of its use hints at the increasingly far-reaching threat of fraud. This is particularly the case in the online domain.

“A few years back we were mostly dealing with teenagers in basements, trying to impress their friends,” says Daniel Kornitzer, Chief Business Development Officer at online payments organisation Paysafe. “The threat now involves hackers, rogue nations, organised crime, hacktivists, even internal fraud. We’re facing a much more serious level of threat.”

To give an idea of the scale of the problem, the World Federation of Advertisers has suggested that online ad fraud alone (an often amazingly sophisticated operation) will become the biggest form of organised crime outside of the drug trade within the next decade.

This is, of course, a threat that primarily affects businesses. For consumers, one of the key risks is ‘card-not-present’ fraud, says Matthew Attwell, Risk & Client Services Director at the AI Corporation, an organisation specialising in online payment fraud detection. The strengthening of card-present payment mechanisms, such as chip and PIN, and the relative anonymity of online payment fraud have resulted in a significant increase in the latter.

While cyber fraud levels as a whole fell in the UK between May 2017 and May 2018, card and account fraud increased by 40%, to a total value of £2bn. Around one in 10 UK adults is affected, with 27% of victims unaware of how they were hacked, according to data from Compare the Market.

Three of the most common methods, says Attwell, are: ‘compromised detail fraud’, where fraudsters gain access to consumers’ email accounts or mobile phones in order to get around security measures such as 3D Secure or phone contact, making it very hard to detect; ‘man-in-the-middle fraud’, where fraudsters create a convincing replica of a legitimate business’s site in order to harvest information from consumers who believe they are completing a transaction with the business proper; and ‘synthetic customer fraud’, which circumvents the traditional basis of fraud detection by creating synthetic accounts at financial institutions or synthetic customers at retailers, and is likewise very hard for businesses to detect.

The increasing prevalence of e-commerce, observes Attwell, exacerbates this danger – particularly with the rise in pervasive computing. “As card-not-present retail becomes more common, we are certainly seeing an increase in risk,” he says. “Retailers are offering ever-increasing channels for non-physical retail to take place – traditional PC, mobile, wearable, even voice assistant channels.

“A great emphasis is often placed on gaining a holistic ‘omnichannel’ view of these channels from a sales and marketing perspective to ensure a joined-up customer experience. Unfortunately, this holistic view is not carried through to the risk management of these channels. As such, gaps in the armour of a business’s fraud approach emerge with different monitoring and investigation at different channels. Fraudsters prey on these gaps and exploit them to maximise their gain.”
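
The design point Attwell is making can be shown in miniature. In the hypothetical sketch below, three modest purchases look unremarkable when each channel is monitored in isolation, but consolidating events per customer exposes the cross-channel pattern; the schema and figures are invented.

```python
# Invented illustration of channel-siloed versus consolidated monitoring.
# Three modest purchases by one customer, each on a different channel.

from collections import defaultdict

events = [
    {"customer": "c7", "channel": "web",    "amount": 40.0},
    {"customer": "c7", "channel": "mobile", "amount": 45.0},
    {"customer": "c7", "channel": "voice",  "amount": 50.0},
]

# Siloed view: each channel sees a single modest purchase, nothing odd.
per_channel = defaultdict(float)
for e in events:
    per_channel[e["channel"]] += e["amount"]

# Consolidated view: the same customer is active across three channels in
# one window, which a cross-channel velocity check could flag.
per_customer = defaultdict(float)
for e in events:
    per_customer[e["customer"]] += e["amount"]

print(dict(per_channel))   # {'web': 40.0, 'mobile': 45.0, 'voice': 50.0}
print(dict(per_customer))  # {'c7': 135.0}: one joined-up risk signal
```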

AI joins the fight

While the volume of data held on consumers may seemingly increase the risk of fraud, it can also be harnessed to help in the fight against crime.

“It’s all about the depth and consistency of data,” says Kornitzer. He references a McKinsey study that aimed to use machine learning to tackle synthetic identity fraud, which McKinsey estimates to be the fastest-growing type of financial fraud in the US. As mentioned above, this kind of fraud is hard to detect, so there is a shortage of confirmed cases to feed into an artificial intelligence system.

Manual checks carry a high risk of false positives – and the concomitant negative consumer experience. The solution was to enrich the data held by a bank with data from other sources: while synthetic accounts may be able to build up a good credit history, they do not leave the same rich and consistent historical data trail as a real person.

In all, nine data sources yielded 150 measures of a profile that could be applied to a test pool of 15,000 profiles. Parsing this data using AI showed that 85% of the profiles had the full level of depth and consistency expected of a real human being, with a further 10% falling just outside. The remaining 5% were problem profiles, and required further manual checks.
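
The article gives no detail of the study’s actual model, but the underlying idea, scoring each profile for the depth and consistency of its trail across enriched data sources and then bucketing the results, can be sketched as follows. The sources, caps and cut-offs are assumptions made for illustration.

```python
# Illustrative depth-and-consistency bucketing for synthetic-identity
# screening. Sources, caps and cut-offs are assumptions, not the
# methodology of the study described above.

from dataclasses import dataclass


@dataclass
class Profile:
    profile_id: str
    # Years of history held by each external source (credit file,
    # utilities, electoral roll, telecoms, ...) for this applicant.
    source_years: dict[str, float]


def depth_consistency_score(p: Profile, expected: list[str]) -> float:
    """Score 0..1: how complete and long-lived the data trail is.

    A real person tends to leave a long, consistent trail across many
    sources; a synthetic identity is typically thin and recent.
    """
    total = 0.0
    for source in expected:
        years = p.source_years.get(source, 0.0)
        total += min(years / 5.0, 1.0)  # cap each source at five years
    return total / len(expected)


def bucket(score: float) -> str:
    if score >= 0.8:
        return "clear"        # the full depth expected of a real person
    if score >= 0.6:
        return "borderline"   # falls just outside
    return "manual review"    # the small remainder of problem profiles


sources = ["credit_file", "utilities", "electoral_roll", "telecom"]
applicant = Profile("A-1042", {"credit_file": 1.0, "telecom": 0.5})
score = depth_consistency_score(applicant, sources)
print(f"{score:.2f} -> {bucket(score)}")  # thin trail: manual review
```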

This screening, however, considerably streamlines the process. “It’s a further level of diligence,” says Kornitzer. “It leverages technology to fight fraud and to achieve great results. It can provide convenience for the consumer, doesn’t add any major cost for business, and fights fraud – the holy grail.”

With that quality of data in place, AI can serve as a reliable buttress against fraud – one that, crucially, evolves to take into account new dangers.

“Criminals change their modus operandi regularly,” reflects Kornitzer. “By definition, legacy systems are not adaptive. Systems can only be manually updated after the fraud has already taken place. The beauty of AI is that it learns on the fly, and adapts.”
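
A minimal sketch of the contrast Kornitzer draws might look like the following, using scikit-learn’s SGDClassifier: where a static rule set must be rewritten by hand after each new fraud pattern, an online model can be updated incrementally via partial_fit as cases are confirmed. The features and figures are invented.

```python
# Invented sketch of "learning on the fly": an online classifier updated
# with each newly confirmed fraud case, in contrast to a static rule set
# that must be rewritten by hand after the fraud has taken place.

import numpy as np
from sklearn.linear_model import SGDClassifier

# Features per transaction (illustrative): amount in GBP, transactions in
# the last hour, whether the device is new. Labels: 1 fraud, 0 genuine.
model = SGDClassifier(loss="log_loss", random_state=0)

X_history = np.array([[20.0, 1, 0], [950.0, 6, 1], [35.0, 2, 0]])
y_history = np.array([0, 1, 0])
model.partial_fit(X_history, y_history, classes=[0, 1])

# Analysts confirm a new modus operandi (small amounts, high velocity);
# the model adapts incrementally, with no manual rule rewrite.
model.partial_fit(np.array([[15.0, 9, 1]]), np.array([1]))

# Probability that a similar incoming transaction is fraudulent.
print(model.predict_proba(np.array([[18.0, 8, 1]]))[0, 1])
```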

Attwell echoes Kornitzer’s point: “AI technology can deploy and review risk strategies quicker than corresponding human fraud analysts. The AI Corporation worked with a regional telecoms company that was struggling to manage the fraud risk on its online retail channels. It was attempting to manage risk at a channel-specific level and was repeatedly overwhelmed in its efforts by fraudsters, who were rapidly changing their modus operandi and causing unsustainable losses.”

As well as hugely reducing levels of loss for the company, the use of AI here had a significant effect in lowering false positives. This is important for those who worry that AI might have the opposite effect and penalise innocent people who have made honest errors.

Technological arms race

That is not, however, to suggest that all uses of technology are benign. Indeed, fraudsters are able to perpetrate digital fraud precisely because they have recourse to some extremely sophisticated technology of their own.

“There is little doubt that fraudsters are becoming more technologically savvy and are starting to leverage the same AI technologies that institutions are deploying to target their activity and carry out fraud,” says Attwell. One of the key ways in which we might hope to address this is the simple addition of old-fashioned human expertise.

“The key to a successful fraud prevention strategy is to marry the strengths of the machine approach with the existing capabilities of your fraud experts. A mixture of latest-generation technology and deep human understanding of the business and the risks it faces is a potent defence against would-be fraudsters.”
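
One loose way to picture that marriage (this reflects nothing of the AI Corporation’s actual product) is a decision layer in which analyst-authored rules can overrule a machine-learned score in either direction, with a middle band routed to human review. Every rule and threshold here is hypothetical.

```python
# Hypothetical decision layer marrying a machine-learned score with
# analyst-authored rules. Every rule and threshold is invented.

from typing import Optional


def expert_rules(txn: dict) -> Optional[str]:
    """Hard overrides written by fraud experts; None defers to the model."""
    if txn.get("card_reported_lost"):
        return "decline"  # absolute rule, whatever the model says
    if txn["country"] in txn.get("home_countries", []) and txn["amount"] < 10:
        return "approve"  # waive friction on low-value domestic payments
    return None


def decide(txn: dict, model_score: float) -> str:
    ruled = expert_rules(txn)
    if ruled is not None:
        return ruled
    if model_score > 0.9:
        return "decline"
    if model_score > 0.6:
        return "review"   # route to a human fraud analyst
    return "approve"


txn = {"amount": 250.0, "country": "GB", "card_reported_lost": False}
print(decide(txn, model_score=0.72))  # 'review'
```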

Kornitzer agrees: “You need great tech, you need great expertise, and you need great processes.” A team of top students equipped with the latest technology, or a team of experienced professionals with one-year-old software, he says, would not do the job.

The last point is pertinent. It is an arms race, and keeping up is key. “We need to make use of all the tools available. It’s not a nice-to-have. Merchants that are not good at fraud detection will attract fraud. It is a must – in terms of compliance as well as risk.”

As Mike Andrews observes, “The key point of AI is that the algorithm behind it is constantly evolving, which requires collaboration and sharing, not just with other enforcement agencies, but also in terms of how we work with academia.

“One of the projects we’re working on around AI is with academics, which means we need to look at how we allow them access to certain bits of data and intelligence to help them develop their understanding of the problem.”

The UK research projects mentioned above involve both public and private sector actors. The AI Corporation is also currently involved in a research partnership with the University of Southampton into the use of machine learning to tackle fraud.

The same spirit prevails across the tech industry, says Kornitzer. “Stopping fraud is top of mind. We cooperate with organisations, local authorities and thought leaders. Companies share data, even with competitors. We all want to collaborate to stop fraudsters, not let our competitors get killed.”

As Andrews puts it, “A trio of parties needs to be involved: academia, law enforcement and the private sector.”

While the looming global threat of digital fraud may be a terrifying prospect, this collaborative spirit surely bodes well for the running battles we can expect to fight against digital criminals for the foreseeable future.
