University of Cambridge Calls for AI Hardware Regulation

The University of Cambridge has released a new report singling out AI hardware as the area most in need of regulation.

Chips and datacentres, the ‘compute’ driving the AI revolution, could be the most effective targets for risk-preventing policies and regulations, according to the University of Cambridge’s Computing Power and the Governance of Artificial Intelligence paper, released yesterday (February 14).

A global registry tracking the flow of chips needed for AI supercomputers is just one of the policy suggestions in the report, which aims to help prevent AI misuse.

Other proposals in the report include ‘compute caps’ - hard-wired limits on the number of chips each AI chip can connect with.

“Researchers argue that AI chips and datacentres offer more effective targets for scrutiny and AI safety governance,” reports the University website, “as these assets have to be physically possessed, whereas the other elements of the ‘AI triad’ – data and algorithms – can, in theory, be endlessly duplicated and disseminated.”

Later in the report, the experts point out that the hardware necessary for such technology is controlled by a handful of companies via a highly concentrated supply chain, making it a strong intervention point for regulation.

“Artificial intelligence has made startling progress in the last decade, much of which has been enabled by the sharp increase in computing power applied to training algorithms,” said Haydn Belfield, co-lead author of the report from Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI).

“Governments are rightly concerned about the potential consequences of AI, and looking at how to regulate the technology, but data and algorithms are intangible and difficult to control.

“AI supercomputers consist of tens of thousands of networked AI chips hosted in giant data centres, often the size of several football fields, consuming dozens of megawatts of power,” said Belfield.

“Computing hardware is visible, quantifiable, and its physical nature means restrictions can be imposed in a way that might soon be nearly impossible with more virtual elements of AI.”

Meanwhile, some tech experts argue that focusing solely on chips is not the answer. Victor Botev, CTO of Iris.ai, says: “While we agree with the University of Cambridge’s proposals on AI hardware governance…there are other alternatives to the hardware always playing catch-up with the software.

“We need to ask ourselves if bigger is always better. In the race for ever bigger large language models (LLMs), let's not forget the often more functional domain-specific smaller language models that already have practical applications in key areas of the economy. Fewer parameters equals less compute power, meaning there’s more compute resource available to benefit society.”

Computing Power and the Governance of Artificial Intelligence is authored by nineteen experts and co-led by three University of Cambridge institutes – the Leverhulme Centre for the Future of Intelligence (LCFI), the Centre for the Study of Existential Risk (CSER) and the Bennett Institute for Public Policy – along with OpenAI and the Centre for the Governance of AI.
