‘AI will explode in the next 5-10 years,’ says Soumitra Dutta, Dean of Oxford University’s Saïd Business School

Soumitra Dutta, Dean of Oxford University’s Saïd Business School, in a conversation with Kalli Purie, Vice Chairperson of the India Today Group, talks about the impact of AI and what the future holds for this disruptive technology


Artificial intelligence is one of the biggest disruptions the world is grappling with right now. From businesses to jobs to social behaviour, AI’s impact can be felt everywhere. At the Business Today India@100 Summit held recently, India Today Group Vice Chairperson Kalli Purie caught up with AI expert Soumitra Dutta, who has a PhD in the technology and has been tracking it for three decades, to discuss a range of issues that this disruption gives rise to. In a session called ‘Managing with AI’, Dutta, the Peter Moores Dean and Professor of Management at Saïd Business School, University of Oxford, spoke extensively about how AI will affect jobs and skill sets, the need for regulations, and the concept of responsible AI. Edited excerpts:

Q: You’ve talked about the Moore’s Law of computer processing power and how it doubles every 18 months and that there’s a tipping point for technology. You have a very good visual related to Lake Michigan to explain this. Please explain what that means and where you think we are right now.

A: We all know that we are living in exponential times, and often that is substantiated by saying that technology is increasing exponentially, both in terms of computing power, and in terms of data and other similar aspects... But what exactly does it mean in terms of trends over time? So this is a small visual… [that] shows the filling up of a lake in the US, Lake Michigan, whose volume is roughly the capacity of the human brain in computations per second. And it fills with water at a Moore’s Law cadence. So you begin with one fluid ounce; in 18 months, it doubles; in another 18 months it doubles, and so on. On the top right, [there is] a time scale… [that] goes from 1940 to 2025… so the law carries on doubling for 70 years. And then, in 2010, suddenly it explodes. And that’s where the hockey stick part of the exponential curve kicks in... Moore’s Law [for] modern microprocessors started sometime in the 1960s. So in 2023, we are roughly 63 years into the modern microprocessor age... which means what we have seen in the last 5-10 years is nothing compared to what we will see in the next 5-10 years… it’s going to explode.

And one reason why we are seeing all these great applications of AI right now is because of the additional computational power, additional data that’s available right now. So what will happen in the next 5-10 years, it really remains for us to discover and, at the same time, be amazed by.
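The back-of-the-envelope arithmetic behind the Lake Michigan analogy can be sketched in a few lines of Python. The lake volume and the unit conversion below are assumed figures for illustration, not numbers from the interview:

```python
# Sketch of the Lake Michigan doubling analogy: start with one fluid
# ounce of water and double it every 18 months (Moore's Law cadence).
# The lake volume (~4,900 km^3) is an assumed approximate figure.
import math

LAKE_VOLUME_KM3 = 4_900        # approximate volume of Lake Michigan (assumption)
FL_OZ_PER_KM3 = 3.3814e13      # US fluid ounces per cubic kilometre
DOUBLING_YEARS = 1.5           # one doubling every 18 months

lake_fl_oz = LAKE_VOLUME_KM3 * FL_OZ_PER_KM3   # ~1.66e17 fl oz

# How many doublings does 1 fl oz need to reach the full lake?
doublings = math.ceil(math.log2(lake_fl_oz))
years_to_fill = doublings * DOUBLING_YEARS

print(doublings, years_to_fill)   # → 58 87.0
```

Roughly 58 doublings, or about 87 years, which lines up with the visual's 1940-2025 timescale. The "hockey stick" follows from the same arithmetic: the final doubling alone adds half the lake, more water than all the previous doublings combined.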

Q: So just taking on from there, humans are more linear, they think linearly, while technology is moving exponentially. So we’re on two different curves, right? How do we even compete with technology when it starts moving at this rate?

A: I don’t know whether you can compete with technology, per se, but what you can try and do is help to create better lives for people, help to create better organisations and help to become more competitive in whatever we are trying to achieve in the organisation.

I think the challenge of any kind of digital/AI transformation... is essentially combining two things. The first part is building forwards. So all of us have a business... an organisation... [and] you have to apply technology to improve what you’re doing today.

What is much harder, at the same time much more exciting, is creating the future. Now, what will happen in five years’ time, 10 years’ time? We don’t know... what you can do is try to experiment with ideas, with concepts of business models and different kinds of situations, and learn from it and work backwards. So you have to do experiments and then try to design your organisation so that if it shows signs of success, you move towards that in some reasonable path. So this duality, the dual challenge of building forwards and creating backwards is the challenge out here.

But creating backwards is very hard. Because sometimes it means you have to come up with experiments that are disruptive, that go against the current business model... people might fail in some of the models... How do you deal with the culture of failures? Let’s assume you take two-three great people in your organisation, say, ‘Well go and test this idea. I’ll give you two years; I’ll give you X amount of money.’ And guess what? Maybe they fail... How do you take that failure and learn from it? But more important, how do you make sure that the people who took the risks are welcomed back? The risk management of failure, acceptance of failure becomes a very important part of creating a future. And that is something we are not very good at. That’s the challenge.

Q: If you’re running a business, you probably want to know if AI is going to save costs. And if you are part of a business, you want to know, ‘Am I going to lose my job?’ Is AI going to cause job losses?

A: It will certainly make transitions in jobs more acute, sometimes more frequent. But AI is a long journey. And today AI is just the next phase in the digital transformation that we all are going through. So what do I think is going to happen? There are three axes of impact or change by AI. One axis is you can apply AI in what we’re doing today, and get all the benefits of cost, quality, and time. To take a simple example, in finance or in banking, if you wanted to get your wealth managed professionally before, you had to have a certain wealth level to be able to afford the time of a professional banker or manager. Today all of that is handled by automated algorithms, AI-based systems, which do it much more cheaply for people with smaller volumes of wealth... That’s taking things you’re doing today and doing it better, cheaper, faster.

Then there’s another important area, broadly termed consumerisation. And that is using technology to make the lives of the consumer simpler, easier, hassle-free and friendlier. And the best example of that is how we today interact with Netflix or Amazon or similar systems; we have got used to the ease of use of these systems. And that similar ease of use is something other sectors can learn from, whether it’s healthcare, government services, banking, and so on. So making life simpler for customers and others is an early win we can focus on.

The much more challenging part is creating a new business model, which is really creating backwards. How do you create the new business models? How do you test them out? That’s where you need a lot of new skills, you need a lot of new experimentations. And I think combining all these three things, is an essential challenge... When we start looking at things in the future, it might require major changes in your capabilities, your processes, your skills, and a number of other factors.


Q: Some of the big AI companies are saying, ‘Why are you worried about job losses? This is the end of the workweek; AI will increase productivity, increase GDP; you will have a universal basic income, you can go to work two days a week and be in the Bahamas the rest of the time.’ You know, this is utopia...

A: So some of it is true, and some of it is not. Keep in mind the bigger context of what is happening in the world and tech. Right now, there’s a huge backlash against tech...especially [in] the countries that dominate tech—the US and China. There has been a huge backlash in society, from government, and they’re trying to basically control the power of the tech companies. The tech companies are doing all they can to ease the hype, to reduce the stress, reduce the fear, and basically tell people, ‘Don’t get alarmed, don’t worry, nothing is going to change, or it’ll be just making things better and easier for you.’

I think that’s disingenuous. And it’s not really correct. Yes, it will make things easier. But at the same time, it will change things quite dramatically in some sectors. And it is up to us to be able to decide how we plan for the changes. If you take, for example, sectors like law, a lot of what young lawyers do is basically go through old cases, identify precedents, build some arguments for current cases... A lot of that today can be done effectively, sometimes even better than humans, by AI-based systems. If you take a little bit more physical domain, let’s say truck drivers... In America, on many of the highway routes, some companies are essentially using automated trucks; the driver’s still there for safety reasons right now (but not for long)... and usually takes over in the first few miles and in the last few miles. As the number of truck drivers is reduced—and you see this already in the army and air force, where there aren’t as many fighter pilots [as earlier], as it’s all drone-based—what do you do with a 40-year-old truck driver? You might say there are plenty of jobs in looking after human beings… How easy is it to take a 40-year-old male truck driver and make this person a childcare minder? These are huge social transition issues and I don’t think we have the answers for that. These are important questions that businesses have to grapple with. And at the same time, the government has to also step in and provide some support, some regulatory guidance, and in some cases, even direct support for the transitioning of skills. I think skills transition is a major issue ahead of us.

Q: I want to go back to something you mentioned early on in the answer about this backlash that tech companies are facing. And so they’re trying to soften the blow. But you have talked about the difference between computing and AI. Computing is more co-pilot—it assists you, whereas this new set of generative AI is learning, is using you to learn, and then eventually it is going to be better than you...

A: It’s very important to understand where we are with AI skills. How good is AI today? If you look at the traditional cognitive tasks—typically tasks linked to recognition of patterns in data, whatever kind of patterns they might be, whether it’s medical images, machine data, pictures of anything—any kind of pattern recognition in data, which is a large part of what human beings do, is today done better by machines. You see this, for example, in many countries right now, where immigration is done on facial recognition. Why? Because the machine can recognise a face better than a human being. Machines have done this better than humans since 2017.

Cognitive and human reasoning—that’s the part where human beings typically did better, because of all the common sense knowledge that machines have difficulty understanding. Let’s say I give you the sentence, ‘the old man’s glasses were broken’; we as human beings automatically interpret the glasses to be reading glasses. It could be drinking glasses, but we have a lot of common sense knowledge—old age, poor eyesight, its correction and so on... We do that kind of interpretation very naturally, which machines have a difficult time [doing] without that world knowledge. But that is changing now with ChatGPT-like systems—the gap is becoming smaller and smaller.

Now, if you look at, for example, creativity, people say, ‘Machines are not creative because they use data from existing stuff.’ But that’s not true. Today, increasingly, industrial design, fashion design, musicians—they are using AI-based systems to suggest ideas, to suggest songs, suggest music tones, music patterns. And increasingly, creativity is getting automated. And then people say, ‘Well, maybe in the creative side, AI can assist humans. But [on] the empathy side, the relationship side, human beings are the best at it. And machines can never do empathy.’ Again, that’s not true... In 2014, Microsoft China released a bot called Xiaoice, and then spun it off into a separate company. And today, Xiaoice is the girlfriend of 600 million Chinese males… Just as many tech companies figured out how to create addictive behaviours… they have figured out how to manipulate the emotions of people to make it [AI] sort of bond emotionally. The only reason they haven’t rolled it out in all our traditional digital assistants is because of the backlash they’re worried [about]. Technically you have systems that are being used in specialised cases, for example, chatbots are used for counselling teenagers…

You combine the three things—cognitive, creative, and emotional—[and] you’re looking at very powerful capabilities. We don’t know where these frontiers will go. And I go back to that vertical curve, the Lake Michigan [example]. So when people say, ‘Well, you know, ChatGPT did not do this. ChatGPT failed that.’ Yes, the technology is only three-four years old. What will happen in five years’ time? Given that kind of vertical rise, we don’t know... I think the issues are quite complicated. And you need governments to come in and look at these things very seriously. At the same time, having said this, there are lots of early gains in the process to be made.

Q: There’s a lot of talk by AI and big tech companies about responsible AI. But it depends on which side you’re looking at it from. One company’s responsible AI is another’s worst nightmare…

A: It’s not just responsible AI but responsible use of technology. And that’s where the regulation part comes in… the regulation part is very important, because we have to somehow help society use technology in a responsible manner. It is the same reason why, for example, any technology can have positive things and negative things. And we have to [as] society be more oriented towards the positive side and be more careful about preventing the negative side... that’s an area that is evolving, and the regulations and societal norms are very important.

My biggest concern is that the world of AI and digital technologies is getting split into two halves—the American half and the Chinese half. And the two halves don’t talk, and their ability to talk is decreasing year by year. So do we have any hope for a global system? The chances of that are decreasing. I don’t want to appear pessimistic, but if the two major powerhouses in the world of technology don’t want to talk to each other, don’t want to agree to common norms and principles and regulations, it’s a cause of concern.

Q: And they could have very different views on personal IP. My concern is how do we protect ourselves because anyone could easily make an AI avatar of you and say whatever it wanted to, and if you complain, they say, ‘Oh, well, he has blue eyes and you don’t.’ Where do you draw the line?

A: That’s a very important and hard question, with no easy answers. Today, you’re already seeing challenges in cases where you can use generative AI to learn from an artist’s style of paintings and create a painting in a similar style. And the question is, is that fair use of IP? Today, the challenge is that, again, we don’t have rules and regulations that govern this very well. It’s all very fluid and very different across geographies. And what we have is that, in some sense, the technology will evolve to a point at which it’ll, on the one hand, make it very easy to manipulate these things; on the other hand, hopefully, it will also give us better tools to control it from our individual point of view. By which I mean, today, largely, we are in a Web 2.0 world. And the Web 3.0 world is being created; the technology stack for the Web 3.0 world is not yet complete... And the hope is that when the technology stack for Web 3.0 is done and rolled out and more widely available, individual citizens like you and I will have more control over our own data.

So till we have that Web 3.0 [stack] commercially widely available—and it could be 5-10 years from now—we are at the mercy of the tech companies, and to some degree the government regulations around them. And that’s one reason why you’re finding the tech companies collecting as much data as possible. Often the data being collected is not directly being used right now, but they are collecting it in the hope that maybe sometime in the future, they’ll find ways to use it... the tech companies who are the leaders in the space have to adopt their own internal standards, internal ethics, in guiding some of their own actions. Some of it is happening, and I hope [it will] become stronger, but a lot more needs to be done in this area.

Q: Another side of life that affects all of us is love. Do you think with the amount of data we have, will we algorithmically be able to find our soul mate?

A: You’re asking me all the tough questions. I hope it’ll allow us to find love not just in terms of the person we choose to love, but also in terms of things we love to do. Because I really think that AI will allow us more possibilities, will give us more skills—I might be a terrible musician, but with some AI maybe I can compose better music. So what I’m saying is that it’ll help us to do more of the kind of activities we love, and maybe love and find the right people also. But hopefully, that is a positive note to end on. It’ll help us to become more loving and more loved and hopefully become better individuals and better human beings.

Published on: Sep 11, 2023, 12:26 PM IST
Posted by: Arnav Das Sharma, Sep 11, 2023, 11:10 AM IST