15th January 2024

Opinion: Fake it till you make it

Artificial intelligence looks set to change the world as we know it, with innovative new approaches to enforcement, business — and scams


By Duncan Stephenson
Director of Policy and Public Affairs, CTSI

I’d like to use my column in this issue to dive into a fascinating topic that’s been in the media a lot recently — the impact of artificial intelligence (AI) on the UK’s regulatory landscape and how it will affect our profession, consumers and business.

AI is no longer a sci-fi dream. It's here, it's real, and it's about to unleash significant changes in society and the wider economy. From supporting more effective e-commerce and helping to prevent underage sales through facial recognition, to driving a digital transformation in legal metrology, AI will undoubtedly bring benefits to the Trading Standards profession. However, it also brings unprecedented ethical, legal and practical challenges.
At its heart, AI is a branch of technology that enables a computer to think or act in a more human way. A computer's ability to learn rests on access to data and on algorithms that find patterns in it: so-called 'machine learning'.
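To make that definition concrete, here is a minimal sketch of 'machine learning' in Python: the program is not given a rule for spotting a scam message, it infers one from labelled examples. The library (scikit-learn), the two toy features and the data are my own illustrative assumptions, not anything drawn from the article.

```python
# A minimal 'machine learning' sketch: the model learns a rule from example
# data rather than being programmed with one. Features and data are invented
# purely for illustration.
from sklearn.linear_model import LogisticRegression

# Toy training data: [message length, number of links] -> 1 = scam, 0 = genuine
X = [[50, 0], [60, 1], [300, 8], [250, 6], [40, 0], [280, 7]]
y = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)                    # the 'learning' step: fit a rule to the data
print(model.predict([[270, 5]]))   # classify an unseen message; likely [1] (scam)
```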

Reliance on data is one area where AI is causing a stir, particularly in relation to privacy and data protection. Our personal information is a hot commodity, and AI systems thrive on this data to learn and improve. The UK General Data Protection Regulation (UK GDPR) has given us more control over our data, but challenges remain: how do we regulate AI's use of personal data without stifling innovation?

Then there’s the issue of bias in AI algorithms. These smart systems learn from the data they’re fed. But what if that data is biased? Imagine AI in hiring processes, mortgage approvals, or criminal justice systems making decisions based on biased data — it’s a recipe for disaster. The UK is grappling with how to ensure fairness and prevent discrimination in AI-driven decision-making.

Going phishing
From a consumer protection point of view, one of the darker sides of AI is the massive contribution it could make to scams. As this technology evolves, so do the strategies and capabilities of scammers, making it easier for them to exploit unsuspecting consumers.

One prominent area where AI's impact on scams is evident is phishing attacks. Scammers use AI-powered tools to craft highly personalised and convincing emails or messages that replicate the tone, language and even writing style of a person known to the recipient, making it difficult to distinguish genuine communications from fraudulent ones.

AI-driven chatbots and voice synthesis technologies have made significant strides in replicating human conversation patterns and voices. Scammers can exploit these advances to create fraudulent customer support bots or to mimic voices in phone scams, making it harder for individuals to discern the authenticity of the interaction.

AI algorithms are also being employed to analyse and exploit consumer behaviour. They can collect and process immense amounts of personal data from social media profiles, online transactions and other sources to create highly targeted and convincing scams tailored to specific individuals. This data-driven approach allows scammers to craft more believable narratives, increasing the success rate of their fraudulent activities.

MoneySavingExpert founder Martin Lewis recently highlighted concerns about the rise of deepfake technology. Deepfakes are AI-generated manipulations of audio, video or images that appear remarkably realistic and authentic. Scammers can use deepfakes to create misleading content, such as videos impersonating individuals, or to alter the context of a conversation, enabling fraud or damaging reputations.

Playing catch-up
The fast-paced nature of AI development poses a challenge for regulatory bodies. Scammers adapt quickly to new technologies, exploiting loopholes and weaknesses before regulations or security measures can address them. This agility allows fraud to evolve rapidly, and authorities and law enforcement agencies face a constant struggle to keep up.

Consumer awareness and education about these evolving scamming techniques will be crucial, but responsibility also falls on technology companies, policymakers, regulators and enforcement agencies to implement robust security measures, develop AI-driven fraud detection systems, and establish effective regulations to curb the rise of AI-enabled scams.
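As an illustration of what such AI-driven fraud detection might involve, the sketch below trains a simple text classifier to score incoming messages for phishing risk. The corpus, model choice and output format are assumptions made for illustration; a real system would be trained on thousands of labelled messages and combined with other signals.

```python
# A hedged sketch of AI-driven phishing detection: a text classifier scores
# how likely an incoming message is to be fraudulent. Illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented corpus; 1 = phishing, 0 = genuine
messages = [
    "Your account is suspended, verify your details now at this link",
    "Urgent: confirm your password to avoid losing access",
    "Hi Sam, attaching the minutes from yesterday's meeting",
    "Lunch on Thursday? Let me know what time suits",
]
labels = [1, 1, 0, 0]

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(messages, labels)            # learn word patterns that mark phishing

incoming = "Please verify your account details immediately via the link"
risk = detector.predict_proba([incoming])[0][1]  # probability of phishing
print(f"Phishing risk: {risk:.0%}")
```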

The UK, like many other countries, is trying to strike the right balance between fostering innovation and protecting the public interest. It’s like walking a tightrope — maintaining a thriving tech ecosystem while ensuring ethics, fairness and accountability are considered.

I recently attended an AI summit organised by the British Standards Institution (BSI), and it was clear that at this early stage, the development of globally recognised standards is the main area of focus. While regulation and an accompanying regulator or watchdog will also be necessary, the challenge for regulators will be keeping on top of this rapidly advancing area.

Elsewhere, the EU appears to be ahead of the curve. The EU's Artificial Intelligence Act, on which political agreement was reached in December 2023, sets out a framework for regulating the development and use of AI. The Act outlines rules intended to govern AI systems and their applications across various sectors, and categorises AI systems by their level of risk. It distinguishes between unacceptable-risk AI systems (such as those used for social scoring by governments); high-risk AI systems (like those used in critical infrastructure, healthcare and law enforcement); and lower-risk AI systems.

The focal point of the Act is the regulation of high-risk AI systems: applications deemed to have a considerable impact on individuals, society or specific sectors. For these systems the legislation imposes strict requirements covering conformity assessments, data quality, documentation, human oversight, transparency and accountability.
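To make the tiering easier to follow, the sketch below expresses the Act's broad risk categories as a simple lookup, populated with the example systems mentioned above. It is a reading aid built on my own paraphrase of the tiers, not a legal classification.

```python
# Illustrative only: the Act's broad risk tiers as a lookup table, using the
# example systems named in the text. Not a legal classification.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, documentation and human oversight required"
    LOWER = "lighter transparency obligations"

EXAMPLES = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "AI in critical infrastructure": RiskTier.HIGH,
    "AI in healthcare": RiskTier.HIGH,
    "AI in law enforcement": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LOWER,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```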

The Act stresses the importance of transparency and explainability in AI systems. It mandates that users be told when they are interacting with AI, so that people can make informed decisions about the systems they are dealing with. It also emphasises the need for human oversight in high-risk AI applications.

Like any complex legislative framework, the Act also faces challenges and debates. Critics argue that the regulations might stifle innovation and place a heavy burden on businesses, especially startups and smaller enterprises. However, given the huge risks presented by AI, it is only a matter of time before the UK’s regulatory landscape catches up.

We are still at an early stage, but the expectation is that use of this transformative technology will grow a thousand-fold over the next five years. We will be holding a plenary session on this very topic at CTSI's annual Conference in June 2024.

Now, if you are still reading: to illustrate the impact of AI, around 70% of the article above was created using generative AI. I have edited it and peppered it with a few additional facts and figures, but it demonstrates how helpful AI can be in supporting us to do our jobs. The question is to what extent that assistance could ultimately replace the human it is intended to support.
