
We asked a government advisor about 2 key problems (and solutions) in AI regulation

Big Think spoke with AI expert Nick Jennings about the future of regulating fast-evolving AI.
Key Takeaways
  • Governments are often designed to move slowly, carefully, and reversibly. Technology, especially around AI, is the polar opposite of this. How can legislative bodies keep up?
  • To answer this question, we reached out to Professor Nick Jennings, an AI expert and advisor to the UK government on AI.
  • Jennings outlined two major problems with AI regulation, suggesting that one key solution is to work on broader, foundational principles rather than specifics.

Democracies are intentionally slow-moving. Millions of people vote for hundreds of representatives, who then spend months debating a single law change. The strength of democracy is that it moves carefully, after great thought, and usually in a reversible way should things turn out badly. For most of history, this has more or less worked out, with technology and science moving slowly enough for democratic governments to keep up. But then came large language models. Say what you want about ChatGPT, but it has not moved slowly.

At the start of 2022, only tech-curious journalists, AI researchers, or Silicon Valley insiders would have known what a large language model was. Today, we’re talking about the robot apocalypse or sci-fi utopia, depending on your predilections. At the very least, most people agree that AI has developed to the extent that it will irreversibly change how society works, which poses a huge problem for governments: How can they make informed decisions on issues that will be, by tomorrow, old news? How can a democracy keep up with OpenAI?

To talk about some of the problems in regulating AI, Big Think reached out to Professor Nick Jennings, Vice-Chancellor and President of Loughborough University and an internationally recognized authority in AI, autonomous systems, cyber-security, and agent-based computing. Professor Jennings is also an advisor to the UK government on the issue of AI as part of their AI Council.

Here are two problems and two potential solutions to the issue of regulating AI.

Problem: The genie is out of the bottle

“When I give talks on AI,” Professor Jennings said, “I challenge the audience and say, ‘Think of an area where you don’t think AI can be applied’. And, you know, I’ll almost always give a counterexample.” Jennings’ point is that AI is everywhere. It had been commonplace in many sectors for decades before OpenAI and LLMs burst onto the scene; the difference now is the scale, speed, and impact AI will have on society. Like it or not, the AI genie is out of the bottle. As Adam Smith argued centuries ago, markets move far faster than any government can hope to keep pace with, and Jennings agrees. Industries have already adopted AI and will continue to use it in ways that most people reading this cannot even imagine.

As Jennings told Big Think, “If a regulator says, ‘I want to regulate large language models,’ for example, they’re barking up the wrong tree. By the time you’ve got any form of regulation, that technology will have moved on.”

Another problem lurks in the sheer pervasiveness of AI’s applications: Different sectors require different regulations. Data protection in health, for example, has far broader and stricter rules than those found in social media terms and conditions. So, a government doesn’t simply have to “regulate AI,” but regulate AI in health, social media, defense, government, transport, policing, and so on.

Solution: Broader principles

If regulating individual technologies or sectors is barking up the wrong tree, an alternate strategy is to develop broader umbrella principles that will capture all or most iterations of a technology — general rules that can apply to any fast-changing innovation.

We asked Jennings to provide two examples of what these principles might look like. The first centers on accountability. Autonomous vehicles serve as a good point of comparison: They run on AI systems, and many countries now regulate their use in some form. For a long time, the legislative question has been: Who is legally responsible in case of a crash? With an autonomous vehicle, you have a piece of hardware manufactured by a certain company in a certain country, so legal responsibility is debatable but easier to isolate. The problem is harder for purely software-based AI such as OpenAI’s large language models. Who is responsible for an LLM’s output? Is it the user and the quality of their prompts? Or is it the company, its training set, and the invisible limitations it has placed on the model? For Jennings, one of the first principles is establishing legal culpability for software use.

The second example concerns transparency. “We should have an idea of the kinds of data that [an LLM] has been trained on,” Jennings said. “Even if it can’t list every single website, article, and artifact that it’s used, they at least should be giving some degree of transparency around that.” LLMs give all sorts of answers — often brilliant, creative, and useful — but we often have no idea where those answers have come from. Transparency on that is a key second principle.

Problem: Regulatory capture

Regulatory capture occurs when a government’s regulatory or legislative body, like a congress or parliament, is steered or manipulated by a single, often minority, interest. In other words, the regulators have been “captured” by the vested interests of a few, even when the democratic majority might be better served by a different direction.

The worry is that this is happening with AI regulation. Not only are tech companies multi-billion-dollar, multi-national entities, but they are the ones in charge of releasing data about AI in the first place. Everything we know about AI is either a press release or a product release. What’s actually happening takes place behind closed doors and under NDAs. There’s a risk of tech companies grading their own work. As Jennings put it, “I can speak most clearly about the UK, and I observe that the tech companies are exceedingly influential in what goes on in UK politics and the UK narrative around AI. The UK used to have an AI council that was an independent voice with many independent people on it… The whole Bletchley Park event and the AI Safety Institute that’s [replaced it] are very much driven by a few individuals rather than greater governance and perhaps consensus.”

Solution: Global consensus

The answer, according to Jennings, is not to rely on any one country’s legislative body. To do so would be pointless anyway. AI software and technology are borderless. “However big and powerful an individual country’s AI systems and companies might be, those companies will all want to operate outside individual countries as well,” Jennings said. The world is currently dealing with many global issues that require multilateral policy decisions: terrorism, global warming, the energy crisis, and so on. AI is just one addition to the agenda for intergovernmental organizations to grapple with.

It has been done before

Still, Jennings is not an AI doomer. “AI will overall be a net benefit to society for individuals, society, and the planet. I think the promise and the things that we’re going to use it for will be a net benefit,” he said. But that doesn’t mean it won’t be used for great harm. We will still need to regulate, ban, and control AI use in certain areas. The problem is not fundamentally different from the problems we’ve faced with other emerging technologies throughout history. Jennings noted that we should not “try to treat AI differently to other types of software… there’s often that exceptionalism that we have to do something because this is an AI software system as opposed to a non-AI software system.”

