UK AI Summit: Why Transparency is Key to AI Regulation

When UK PM Rishi Sunak announced that the UK would be hosting the world’s first global AI safety summit, he promised big. But experts say the summit will need to be transparent about the risks of AI if it is to succeed in establishing global regulations on the technology.

First announced in June, the AI summit will be the first global summit centred on ensuring the safety of AI, especially generative AI foundation models and the large language models (LLMs) behind tools like OpenAI’s explosively popular chatbot ChatGPT.

The UK government says the AI summit will “bring together key countries, leading tech companies and AI researchers” as they agree on safety measures to protect society against “significant risks of AI.” 

It is set to be held in November this year as part of Sunak’s mission to make the UK the “geographical home” of global AI safety and a global “AI superpower” by 2030.

"We’re already a leading nation when it comes to artificial intelligence—and this summit will help cement our position as the home of safe innovation," said technology secretary Michelle Donelan. "By leading on the international stage, we will improve lives at home.”

Attendees of the summit are likely to include leading AI developers such as OpenAI, Google DeepMind and Anthropic, which have all issued statements in support of the UK’s plans.

But industry experts question whether the summit will be able to establish concrete global laws for AI safety – especially with the attendance of big tech companies and the absence of major world governments such as China’s.

“Having a global summit on the safety and trust of AI is a good idea and Mr Rishi Sunak’s initiative to host it in the UK should be cherished, especially when that’s held at the legendary and ominous location of Bletchley Park,” wrote Marcel van der Kuil, business analyst at Erasmus MC, on LinkedIn.

Not everybody has been invited yet, however, which risks making the summit seem like a ‘red carpet’ event for tech leads to high-five once more – a concern flagged by Dame Wendy Hall, Regius Professor of Computer Science at the University of Southampton.

Finding common ground

The UK is undoubtedly a leader in AI development, trailing only China and the U.S. in investment, and with a total tech industry valued at more than $1 trillion.

But while it also positions itself as a leader in AI regulation and ethics, it has so far taken a limited, pro-innovation approach to AI safety.

And a report published last month by the Ada Lovelace Institute examining AI regulation in the UK suggested that its plans will fail to curb "an already-poor landscape of redress and accountability."

"The credibility of the UK’s AI leadership aspirations rests on getting the domestic regime right, and this should be a core element of the Taskforce’s work programme,” the report read

International agreements, it argued, are unlikely to be effective in making AI safer and preventing harm unless they are underpinned by robust domestic regulatory frameworks that can shape corporate incentives and, in particular, developer behaviour.

The UK is also trailing far behind the EU, which is in the process of finalising the world’s first AI Act – legislation that promises to control AI systems and introduce an outright ban on biometric surveillance, emotion recognition and predictive policing.

One of the biggest challenges for the UK government is therefore managing differing global expectations of what AI regulation should look like, especially while it still lags behind in establishing its own guardrails on AI.

“If the AI summit can reach a mutual agreement on safeguarding individuals and maintaining human control over these advancing technologies, it would be a groundbreaking achievement,” said Kevin Bocek, VP of ecosystem and community at Venafi.

"Having a shared vision around regulations that can help to contain the risks of AI, while encouraging exploration, curiosity, and trial and error, will be essential."

Openness and transparency

Industry, academia and civil society also need to be involved in the discussions at the AI summit, each of which is likely to have different opinions about how AI should be regulated.

“For these discussions to be effective, industry leaders – both large and small – civil society and academia must be around the table,” said Sue Daley, director for tech and innovation at the industry body techUK.

Carlos Eduardo Espinal MBE, managing partner at Seedcamp, wrote in a LinkedIn post that the conversation must also involve startups and early-stage investors, rather than just large AI companies, to ensure every voice is heard.

“I sincerely hope that early-stage investors are also included in these discussions, not limiting participation solely to growth-stage companies and investors,” wrote Espinal.

“I am of the opinion that any guidance formulated during the summit should be lucid in its directives on how it should impact and be integrated by startups, and at which developmental stages. 

“Insights gained from these conversations might provide guidance on where to invest to overcome challenges currently not being addressed.”
