The rapid development of AI tools exposes consumers to new scams, phishing campaigns, and misinformation, a report by the UK's antitrust watchdog has warned.
The Competition and Markets Authority (CMA) said the rapid evolution of large language models (LLMs) and other AI tools increases the risk of already-existent online threats and undermines consumer trust in businesses that use them.
It stated that bad actors could use AI to create fake reviews on e-commerce sites at scale, and that LLM chatbots such as ChatGPT could be used to craft more personalised and convincing phishing campaigns.
The CMA also said that consumers could be easily manipulated by information shared by chatbots, citing examples of a chatbot fabricating medical notes and making false allegations against individuals.
“There remains a real risk that the use of AI develops in a way that undermines consumer trust or is dominated by a few players who exert market power that prevents the full benefits being felt across the economy,” said CMA chief executive Sarah Cardell.
“The CMA’s role is to help shape these markets in ways that foster strong competition and effective consumer protection, delivering the best outcomes for people and businesses across the UK.
“In rapidly developing markets like these, it’s critical we put ourselves at the forefront of that thinking, rather than waiting for problems to emerge and only then stepping in with corrective measures.”
“We can’t take a positive future for granted”
The CMA’s report arrived amid growing concerns over the rapid advancement of generative AI – technology that can create text, images and video barely distinguishable from human output.
Regulators worldwide are stepping up their scrutiny of these technologies, with the EU already in the final stages of making its landmark AI Act law to protect society from the risks of AI and ensure fair competition across the continent.
“The speed at which AI is becoming part of everyday life for people and businesses is dramatic. There is real potential for this technology to turbocharge productivity and make millions of everyday tasks easier – but we can’t take a positive future for granted,” Cardell said.
She believes there is a risk that the use of AI will be dominated by a few players who exert market power that prevents the full benefits from being felt across the economy.
“That’s why we have today proposed these new principles and launched a broad programme of engagement to help ensure the development and use of foundation models evolves in a way that promotes competition and protects consumers,” she said.
The CMA estimates about 160 foundation models have been released by a range of companies, including Google, Meta and Microsoft, as well as new AI firms such as OpenAI.
It warned that if competition is weak or developers fail to adhere to consumer protection law, people and businesses could be harmed through exposure to significant levels of misinformation and AI-enabled fraud.
Regulating AI development
The CMA said it will begin discussions with AI stakeholders in the UK and globally around developing its principles further and working with those groups on developing AI markets further.
Today, our CEO, Sarah Cardell, has launched our initial report into #AI Foundation Models, which proposes new principles to support competition and protect consumers.
Find out more: https://t.co/dLANtGBOZu
— Competition & Markets Authority (@CMAgovUK) September 18, 2023
“While I hope that our collaborative approach will help realise the maximum potential of this new technology, we are ready to intervene where necessary”
Gareth Mills, Partner at City law firm Charles Russell Speechlys, welcomed the CMA’s report, commenting that the CMA has shown a “laudable willingness” to engage with the rapidly growing AI sector and to ensure that its competition and consumer protection agendas are engaged at as early a juncture as possible.
“The principles contained in the Report are necessarily broad and it will be intriguing to see how the CMA seeks to regulate the market to ensure that competition concerns are addressed,” Mills said.
“The principles themselves are clearly aimed at facilitating a dynamic sector with low entry requirements that allows smaller players to compete effectively with more established names, whilst at the same time mitigating the potential for AI technologies to have adverse consequences for consumers.”
The CMA said it would publish an update on its principles, and how they have been received, in 2024. The UK government will host a global AI safety summit in early November, where governments and tech companies from around the world will aim to establish shared principles for AI development.