‘We do not understand how these systems work’: Expert Stuart Russell on why AI systems need urgent regulations

Stuart Russell, Professor of Computer Science at University of California, Berkeley, on why guard rails are necessary for the development of AI

Stuart Russell, Professor of Computer Science at the University of California, Berkeley, who has been an AI researcher for 45 years, tells BT that while artificial intelligence can be hugely beneficial, it also has the potential to disrupt the world in a bad way if guard rails are not put in place. Edited excerpts:

 

BT: Let’s talk about the open letter to halt the development of AI systems more powerful than GPT-4 and develop guardrails. What prompted you to sign this letter?

We’re calling for a halt on the deployment of large language models (LLMs) that are more powerful than the ones that have already been released. And the reason is simple: we do not understand how these systems work.

So, what is an LLM? It’s a computer program that predicts the next word, given a sequence of preceding words. And with that system, you can have a conversation. The way these systems are built, there’s a large amount of training data. In the case of GPT-4, we think [it is around] 20-30 trillion words of text—approximately the same amount we have in all the books that the human race has ever written.
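To make the idea of next-word prediction concrete, here is a minimal sketch in Python using a toy bigram frequency table. Everything in it (the corpus, the `predict_next` function) is a hypothetical illustration: GPT-4 performs the same task with an enormous neural network trained on trillions of words, not a lookup table.

```python
from collections import Counter, defaultdict

# Toy bigram model: for each word, count which words follow it in the
# training text, then "predict the next word" by picking the most
# frequent follower. Real LLMs do this same next-word task with a
# neural network, not a frequency table.

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count follower frequencies for every word in the corpus.
followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent next word seen after `word` in training."""
    if word not in followers:
        return "<unknown>"
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat' (ties broken by first occurrence)
print(predict_next("sat"))  # -> 'on'
```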

And then we start from what you might think of as a blank slate, an enormous circuit with about a trillion parameters or more. And then, through the process of making about a billion or a trillion small random perturbations to those parameters, the system is gradually improved, as is its ability to predict the next word. The result is something that, when you converse with it, has the appearance of an intelligent entity.
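A cartoon of the training loop Russell describes might look like the sketch below, which improves a tiny parameter vector by keeping small random perturbations only when they reduce prediction error. This is random hill-climbing under an invented four-parameter setup; real LLM training uses gradient descent, which computes the direction of improvement rather than guessing, but the "gradually improved by vast numbers of small updates" shape is the same.

```python
import random

random.seed(0)

# Pretend these are the "right" weights the model should learn.
TARGET = [0.2, -0.7, 1.5, 0.3]

def loss(params):
    """Squared error, standing in for next-word prediction error."""
    return sum((p - t) ** 2 for p, t in zip(params, TARGET))

params = [0.0, 0.0, 0.0, 0.0]        # the blank slate
for step in range(100_000):          # "billions" of updates, scaled down
    i = random.randrange(len(params))
    candidate = params.copy()
    candidate[i] += random.gauss(0, 0.01)   # a small random perturbation
    if loss(candidate) < loss(params):      # keep it only if it helps
        params = candidate

print([round(p, 3) for p in params])  # close to TARGET after many steps
```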

BT: The letter calls for a six-month halt on developing these tools. Do you think that’s enough since the genie is already out of the bottle?

I agree that, to some extent, the systems that are out there are already capable of causing problems. The petition is asking that we not release systems that are even more capable of causing problems. So, six months is not enough. What we’re asking for is [to] develop reasonable guidelines that a system has to satisfy. If you can’t build an airplane that doesn’t fall out of the sky, you don’t get to put passengers on it. This is common sense. We’re simply asking that common sense be applied in the case of these extremely powerful AI systems... I think AI’s potential to benefit the world is unlimited. But if we have a Chernobyl... Chernobyl destroyed the nuclear industry… We do not want to have that [for AI].

BT: The concern for the everyday consumer is: will this eventually replace me and take away jobs?

It’s quite likely that we’ll see a significant impact. I’ll give a couple of examples. One is in the area of computer programming. You might find it surprising that advances in technology are going to make computer programmers redundant. But the numbers I’ve seen suggest that using these tools, you can write software five to 10 times faster than unaided. And in many cases, you simply say what you want the program to do, and the software just writes it for you. It seems to me unlikely that the world needs five or 10 times as much software. So that means we’re going to need somewhat fewer computer programmers.

If you think of a person who works in a company as sort of a node in a network, [then] what comes into that node? It’s language—emails, phone calls from the boss, requests from customers. What goes out? It’s language—documents, sales invoices and reports for the boss. It’s all language. So, any one of those jobs, in principle, could be replaced.

But we don’t trust those jobs to psychotic six-year-olds who live in a fantasy world. So, unless you’re a psychotic six-year-old who lives in a fantasy world, I don’t think your whole job is immediately at risk. We can’t trust these systems to tell the truth because they hallucinate… they just want to sound plausible, and they have no idea what’s true and false. But there are thousands of companies that are working to fix those problems… so that they can be used in important applications. So, the next generation, I think, will have a much bigger impact on employment.

BT: Can AI hallucinate and spread misinformation if fed with malicious code or training data?

Absolutely. You can simply ask it to generate misinformation; you can say, ‘Write me a letter that will persuade somebody that the earth is flat,’ and it will do a pretty good job of that. Although they’ve tried to impose some kind of constraints… people have found it’s quite easy to ask the question in a different way. And eventually you can get it to give you the answers you want because that information is in the training set.

So I think that the level of unpredictability of these systems is beyond anything we’ve ever seen with AI software and this is only a year or two into it. We need to get a handle on what’s going on. And I think, honestly, we need to start pursuing different avenues for designing AI systems.

BT: A lot of countries have started talking about regulating AI. When we’re talking about tools like ChatGPT, Google Bard and the like, do you think regulation is the way forward? Is it even possible?

The European Union AI Act is expected to be passed by the end of this year. And I’ve been working with the drafters of the legislation and with the [European] Parliament and the [European] Commission for several years now, trying to make sure that it makes sense and that it’s not going to be obsolete before it’s even passed. And as far as I can tell, systems like ChatGPT would probably not be legal to use in any high-stakes application. The Act defines high-stakes applications as systems that can have a significant effect on people. And it asks that steps be taken to show that the system behaves safely, in a predictable fashion, that it’s accurate, it’s fair, it’s not racially biased, etc. I don’t think there’s any way to show that these LLMs meet those criteria. Interestingly, OpenAI’s own webpage for GPT-4 recommends that you probably should not use these systems in high-stakes applications.

BT: Elon Musk, a fellow signatory to the open letter, has been vocal about AI and the way the AI revolution is shaping up. Is Musk right in his assessment?

Basically, yes. The point Elon is making is that until we figure out how to control systems that are more powerful than ourselves, we face a very serious risk that we will develop AI systems that are very powerful, and we won’t know how to control them. And it’s not as if this kind of thing has never happened. When we look at what’s happening with climate change, for example, we developed a system called the “Fossil Fuel Corporation”, which happens to have some human components. But basically, it’s an algorithm that’s maximising its objective—quarterly profits for shareholders. And that algorithm is destroying the world. And we can’t control it. So that’s a miniature version of the kinds of problems that we’re going to face with AI systems in the future.

 

Published on: May 15, 2023, 2:23 PM IST
Posted by: Arnav Das Sharma, May 15, 2023, 2:21 PM IST