8th July 2023

AI and the rise of synthetic scams

The speed of innovation in artificial intelligence has played into the hands of fraudsters, making it increasingly difficult to protect against ever more sophisticated and evolving scams


By William Henry
With generative AI scams, we are relying on interaction from communities to actually tell us what’s going on

Businesses, government agencies and organisations around the world are all racing to figure out how to make the best use of artificial intelligence (AI). However, just as more AI applications come to market and become readily available, the technology and its remarkable possibilities are already sitting comfortably in the hands of scammers and fraudsters. Generative AI is powerful and compelling despite still being in its nascency; fraudsters have been quick to see its potential, and enforcers and regulators are struggling with the new challenge.

So far, generative AI has been linked to everything from phishing scams to cloned audio imposter scams, ‘whaling scams’ targeting high-level executives, live voice identity scams, deepfake videos, fake positive reviews and much more. Especially alarming is the fact that voice cloning software is becoming much more accessible and in many instances requires just a three-second audio sample to simulate a voice.

Scams targeting celebrities are making headlines: one German newspaper published quotes attributed to a chatbot that realistically impersonated racing driver Michael Schumacher, while another AI engine wrote and performed a full pop song that synthetically mimicked performer Drake. In both cases, audiences were completely duped, illustrating the technology’s ability to deceive.

While cases of this sort have led to a range of worrying questions, the publicity they have produced has helped bring consumer attention to the level of sophistication that AI makes possible, and allowed enforcers to alert the general public to be wary of clone scams.

“We’re very, very fortunate at the moment because we’ve got quite a good relationship with the media,” says CTSI Lead Officer for Scams and Doorstep Crime, Katherine Hart. “Dare I say we do rely on the media to put that information out and then for people to react to it. We have to make people take responsibility and be cautious.”

Hart believes the best kind of protection is raising awareness: circulating information about common fraudulent practices, encouraging people to look for warning signs, and spreading the word about what consumers should do if they become the victim or target of fraud. There are, however, significant hurdles, such as limited budgets and the difficulty of locating the scammers, many of whom are based overseas. Significantly, many consumers do not report incidents, making it all but impossible to get an accurate picture of the scale of the problem.

“We really have to encourage people to report incidents. What we have found about doorstep crime and scams is that they are often very underreported. People are too embarrassed to speak up,” says Hart. “With generative AI scams, we are relying on interaction from communities to actually tell us what’s going on.”

Equal opportunity fraud
Another major issue in chatbot-activated fraud is that it does not target particular demographics. With most other scam types, fraudsters target those perceived as particularly vulnerable, preying on those susceptible to falsehoods and impersonation. However, the sophistication of generative AI and the fact that it is in use across online ecosystems – from social media to bank websites – means that everyone is potentially a target. That makes communicating the different and unique risks much more difficult.

It is important to acknowledge, however, that not all AI is being used for nefarious purposes. Businesses are finding ways to automate repetitive tasks, improve sales and manage projects. At the consumer end, AI is becoming normalised, with customers growing more comfortable engaging with chatbots for advice and suggestions.

In terms of regulation, rule-makers are attempting to get to grips with the immediate and longer-term consequences of AI. As in many areas, however, different regulatory bodies are approaching the phenomenon in different ways, and solutions are likely to arrive at different times, meaning regulatory arbitrage is probable.

At the European Union level, for instance, research is being carried out in various industries to attempt to create a thoughtful approach to rules, while member states – such as Italy – have taken a firmer approach and banned certain AI applications altogether, pending review. Similarly, in the US, the Federal Government is assessing the technology’s impact while certain states are looking to draft rules quickly, wary of the speed with which the technology is developing. A key hurdle for regulators will be in data use: Europe’s General Data Protection Regulation (GDPR) and a handful of US states (California and New York among others) have created rigid consumer data protection regimes over the past few years that should factor prominently in any new rules. With the management and sharing of data being the backbone of AI, a host of issues must be addressed.

In the UK, the Government has proposed a number of core principles for developers and end users: AI must be used safely, be technically secure and function as designed; there must be appropriate transparency, an identified legal person responsible for the AI, and its use must comply with guidance – yet to be defined – from regulators. As part of its National AI Strategy, the Government’s plan is for regulatory bodies assigned to each individual sector to issue specific guidance. Within financial services, the Financial Conduct Authority (FCA) has conducted research and requested comment – a standard approach before draft rules are considered. The body has stated that it has “carried out extensive work to understand AI and consider its regulatory implications, as well as utilising AI methods to identify bad actors.” Other bodies – such as the Medicines and Healthcare products Regulatory Agency (MHRA) – have stated the need for “clear requirements to ensure healthy and productive AI use.” Aside from this, little guidance has been put in the hands of enforcement officers.

Growing problem
What is worrying now will soon become alarming. The UK had the highest number of cybercrime victims per million internet users at 4,783 in 2022 – up 40% on 2020 figures. Last year alone, £4bn was lost to cyber fraudsters in the country.

For CTSI, continual engagement with the police and national fraud agencies is paramount. According to Hart, social media platforms should be held responsible for illegitimate actors on their sites, as well as provide enforcers with open lines of communication to consumers in order to warn of the risks and issue best practice advice. She adds that consumers must also take responsibility when the threat of fraud arises. “As consumers, we’re so busy and we just want to get where we’re going online as quickly as possible. Consumers need to have a little bit of responsibility too,” she says.

