
Sarvada Vartalap 1: AI & Policy Regulation - The Indian Approach

Artificial Intelligence (AI) regulation is a pressing topic that sparks debate worldwide. Whether AI should be regulated, and if so, to what extent and by what method, remains a key concern. In this Vartalap, we delve into the theme of "Artificial Intelligence and Policy Regulation: The Indian Approach". This discussion aims to shed light on what India's perspective on AI regulation could be. We offer our insights and perspectives on this crucial subject, discussing a wide variety of topics, from deepfakes to the affixation of liability. We also discuss whether there should be any safe harbour protection.

EPISODE CONTRIBUTORS

Abir Roy, Co Founder & Partner, Sarvada Legal

Aman Shankar, Advocate, Sarvada Legal

Kumudavalli Seetharaman, Advocate, Sarvada Legal

Vivek Pandey, Advocate, Sarvada Legal

EPISODE TRANSCRIPT

ABIR ROY : Since a machine is perhaps something like a human, should it be regulated?


AMAN SHANKAR : We need to first also identify functionally what the AI is doing and then assess if at all there is any regulatory vacuum.


KUMUDAVALLI SEETHARAMAN : Something that at least comes to my mind is something very prevalent at the moment, you know: impersonation, deepfakes, personality rights.


VIVEK PANDEY : The company is deploying the technology the user is using. Therefore, naturally, the company will be liable.


AMAN SHANKAR : I think we are at a juncture where we should shed our traditional thoughts of regulating any sector or any technology.


ABIR ROY : There are multiple issues which are coming up but the tools may be sharpened rather than getting a new axe.


AMAN SHANKAR : We always think that there have to be some prohibitions, and only then can we regulate.


ABIR ROY : We should not be living in a world where we say humans are 100% accurate and machines are not.


NARRATOR : Understanding is incomplete without a conversation. The best way to have clarity is to start having good conversations. Sarvada Vartalap is one such effort of Sarvada Legal where we discuss, debate, ideate on emerging questions and more importantly remain curious about law and policy.


ABIR ROY : Guys, don't be alarmed by what I'm about to say. We are all surrounded by algorithms. And this is not a recent phenomenon; I think we have been surrounded by algorithms since 2000. Now what makes me say that? Just a simple example from my personal life. You all know I love Suits. I must have seen that show 10 to 20 times. So now Netflix tells me: you love Harvey Specter, I'm sure you will love Mickey Haller from The Lincoln Lawyer. So they'll pop up all these recommendations and I end up watching them. And then I end up watching more shows because they recommended them. So what is that? It's a machine which is learning about you, thinking about you and giving you recommendations. Netflix is one example. Another example is that we all have iPads on our table right now. We have Siri, Alexa...


AMAN SHANKAR : Don't call it. It will just open up.


ABIR ROY : ...these are all what? Personal assistants. And then what happens is they, again, learn more about you, your family, your colleagues, everything, and then they throw up recommendations. So this has been happening since, what, the 2000s? Since around the turn of the century. But what is now more stark, when we talk about artificial intelligence or machine learning, is the last two years, when this entire thing has exploded. Why has it exploded? Because there is an impression that now machines can think like us, which is the entire concept of generative AI.


So today I was thinking that we should have a discussion amongst ourselves on the implications of AI and law. Then I asked one of the generative AI tools whether the four of us, with our expertise, should do a round table on AI. It said, okay, fine, but first define AI. So I needed to look at some definitions of AI. I looked at multiple definitions, and the thing that struck me was that some definitions are so esoteric that it becomes a very complicated thing. And then I was reminded of the great movie '3 Idiots', which I'm sure all of you have seen. If you recall, when Aamir Khan goes into the engineering class, he is asked to define a machine. And he innocently says that whatever makes our life easy is a machine. If my memory serves me right, he says, "pant ki zip se pen ki tip tak sab machine hai. Humne switch on kiya, pankha chal gaya, machine hai sir." (Translation: "From the zip of your pants to the tip of your pen, everything is a machine. We switched on the fan and it started, that's a machine, sir.")


It was such a simple thing. And then obviously the professor was not happy, and he asked Chatur Ramalingam, who gave a very esoteric definition of a machine, which is obviously right, and the professor said, yes, that's what I wanted. What did Aamir Khan say? "Sir, matlab bhi to samajh mein aana chahiye na sir." (Translation: "Sir, we should be able to understand the meaning too.") So then I was looking at various definitions. You have the EU's AI Act making a definition. You have Indian authorities also taking a crack at a definition. You have various jurisdictions grappling with this, because whenever you regulate something, you must first know what you are regulating. What intrigued me was a definition by an EU commission which really appealed to me, the way Aamir Khan's definition of a machine appealed. This definition says AI "refers to systems that display intelligent behavior by analyzing their environment and taking actions with some degree of autonomy to achieve specific goals." So this is something that really appealed to me. And then this entire explosion which has happened in the last two years because of generative AI has really propelled this question among all of us: since a machine is perhaps something like a human, should it be regulated? And if so, what approach should the Indian authorities take to regulate AI?


KUMUDAVALLI SEETHARAMAN : I feel like we are jumping the gun. Maybe we should take a step back and think about what the issue is that we need to regulate, right? What is the problem statement, as we call it? If there is an issue, okay, yes, we can regulate it. But right now, just because something has come up, do we need to regulate it? Where is the issue, in that aspect?


VIVEK PANDEY : Ultimately, AI is just a new technology. It's a medium to do certain things which were being done through different technologies or different mediums earlier. So, to your question... see, one way is the route the EU has taken: you frame sector-agnostic, horizontal regulations to regulate the technology and its usage. But there are some problems with that. Like you mentioned, unless we identify the problems, what do we have to solve?


So let's say there's an issue in the telecom sector; TRAI will be the best authority to take any decision. For example, over the last couple of years there were a lot of spam calls and unsolicited commercial communications. To address the problem, TRAI came up with certain amendments and in fact directed the telecom operators to use AI and machine learning to detect such spam calls, trace their source and address the issue. Coming back to the original point: AI is just a technology, and whether a sector-agnostic, horizontal regulation approach should be taken remains to be seen.


AMAN SHANKAR : I echo your thoughts completely, because right now AI is just a technology, or a medium, as you mentioned. And we are trying to say that law has to anticipate something which even the technologists can't predict, because it's an evolving technology. Looking at the problem statement or the harms: for example, we have sector-agnostic laws like competition law. Traditionally, we have seen it deal with cartel matters or traditional abuse-of-dominance matters. Then came the evolution of digital markets. The law, although it has its own pitfalls, was in theory sufficient to deal with those harms. So what we really need to address is: are there any eventualities or theories of harm that may not be covered? We need to first identify functionally what the AI is doing, and then assess if at all there is any regulatory vacuum.


For example, there may be multiple use scenarios. AI can act as an agent; it may act as a data processor, or a fiduciary in that case; it may be an independent service provider; or it may even be a hub. For example, there was an incident in 2017 where a child was playing with Alexa, and Alexa ordered some cookies because, while the child was playing, some word inadvertently came out. The parents received the package and said, okay, we didn't order this. But ultimately it was delivered. Now, whose liability is it? Because Alexa might be an agent. There may be a hub situation, like the cases we have seen of AI algorithms working for price fixing; not in India yet, but the US has had some cases. Autonomous cars: is that an independent service provider? So those are the use cases and functionalities which we need to assess, and then see if at all there is any regulatory vacuum or not.


ABIR ROY : So what you're trying to say is that you have to look at what the AI is doing. Perhaps it's too soon to regulate; at least let's identify the issue first.


KUMUDAVALLI SEETHARAMAN : Okay, so if we are thinking of identifying the issue, the first thing that comes to my mind is something which is very prevalent at the moment: impersonation, deepfakes, personality rights, basically. And when you think of these sorts of situations, think of celebrities; let's take Amitabh Bachchan or Anil Kapoor, for that matter. Both of them, and a lot of others, have gone to the courts to protect their personality rights, in terms of voice, personality, image, everything. These are cases that have taken place in our judiciary. So celebrities have been affected, but it's not only celebrities, right? There are cases in India where a whole slew of video calls went to people: "I'm from the police, your package is missing," this, that and the other. Now, I don't know if that is definitely an AI deepfake sort of thing, but these are the sorts of issues that everybody, across economic strata, is facing. So how do we look at all of this?


VIVEK PANDEY : Coming back to the primary point of whether these issues are covered under the existing framework or not: impersonation is covered under the BNS (Bharatiya Nyaya Sanhita), and it is covered under the IT Act. So I would say that the issues are already addressed in the existing law.


ABIR ROY : Now coming back to your point: you spoke about how the BNS already covers this, you spoke about deepfakes, and I think you also spoke about what the AI is doing, whether it is an agent, a tool or an independent service provider. I think the best way to sort this out would be to take an example, one that was itself generated by AI. So what I did was prompt: "Give me a good AI case study which we can discuss here." We have all spoken in the abstract; let's test this in a live situation.


So listen to this example very, very carefully. And I'll not take names, although the AI example took names; I don't want another deepfake issue. So let's name that person Mr. X. It says that in 2025, Mr. X, who's a financial entrepreneur, started a proprietary AI chatbot; say we name it the X chatbot. Because they give financial services, they used to take personal data from the individuals they serve, like Aadhaar, under the KYC norms. Now this chatbot, through APIs, is integrated into the stock exchange and other trading platforms. Over a period of time, this X chatbot was trained on harmful data. Then, because they have the identity cards of every person they are advising, it started creating sham accounts and used those sham accounts to launder money...


AMAN SHANKAR : So it is doing all of this.


ABIR ROY : Yes, all of those things. Then what happens is that the users of the chatbot are obviously not aware of this. Suddenly they get a notice from the police saying that you have been engaged in illegal trading and money laundering, and now the authorities are investigating the entire matter. So tell me, we have discussed all of this: how would existing law deal with such a situation? Because there are quite a few issues here. The first is obviously impersonation, like both of you mentioned. Then, is this chatbot an agent? Because if it is an agent, then the principal is responsible. Or is the developer of the AI chatbot, who is training it on illegal data, the person responsible?


AMAN SHANKAR : In the US there is an ongoing case where, basically, what the system was doing was "price coordination". The system was designed in such a way that it took autonomous decisions and went beyond its mandate. So the company that developed it, and the other parties who had licensed the system, went to the court saying that the case shouldn't proceed, that it lacked merit. The court said: although the AI was giving the inputs, you were accepting those inputs, so there was a meeting of minds amongst all of you.


ABIR ROY : But in this case, there's no meeting of minds as such. The users of that platform are not even aware; it is a deepfake issue. In that US case, there actually was a meeting of minds.


AMAN SHANKAR : So basically, in that case, it is the developer who has trained the AI in such a scenario, because the developer must be aware of what capabilities the AI holds, right? So the developer can be the person on whom liability is affixed, either under the common law principles applicable to an independent service provider, or in a situation wherein strict liability principles come into play, right?


ABIR ROY : Because there's a crime involved. Yes. Understood.


VIVEK PANDEY : Perhaps we can also see it from the consumer perspective: consumer loss, deficiency in service. See, this is very similar to the Air Canada case, where a user was using Air Canada's chatbot. That chatbot was using AI tools, and it gave a response which contradicted the company's own policy. The case went to the court, but the fact that the AI was acting in an autonomous way did not come to the company's rescue, and ultimately it was held that the company was responsible. This is in concurrence with the settled jurisprudence: the company is deploying the technology the user is using, so naturally the company will be liable.


KUMUDAVALLI SEETHARAMAN : Isn't this like a classic example of principal and agent under contract law? If I am working on your behalf under the authority that you have given me, that means I am your agent, and you as the principal are responsible for whatever I do. Unless, of course, I do some criminal or illegal activity; that's a separate thing altogether.


ABIR ROY : So in the example that I gave, you have deepfakes, which obviously can be dealt with under criminal law. You have money laundering, and obviously there are very strong anti-money-laundering laws. The biggest thing we are discussing on the civil side, I think, is the affixation of liability: who is liable? Now, Aman, I think you made a fundamental point here: what role is the AI playing in that particular fact situation or transaction chain?


So I guess one has to really look at the contracts these companies entered into when deploying those AI tools, to see who is liable in case the tools go beyond what they are supposed to do; in this case, that clearly happened. Secondly, I think the time has come to look at the existing tools. The common echo coming out of our discussion is the problem statement: there is some avenue within the existing laws; perhaps you need to sharpen the axe.


AMAN SHANKAR : I think we are talking about a lot of permutations and combinations of what laws can apply. My personal favorite, I would say, is the common law principles; they have stood the test of time, right? Contract law was derived from those very principles. We have tort law, which has been interpreted by the courts from time to time and applied in all kinds of civil cases. And this is nothing but a classic civil case example, right? Take negligence, for example. The principles still hold true; courts have interpreted them time and again. In this AI scenario also, if you juxtapose the principles of negligence, what do they say? A, you have a duty of care; B, there is a breach of that duty; and then there is causation, right? The Air Canada example that we discussed, and the hypothetical scenario that you discussed, all fit into this category in some way or other. Then you have the principles of strict liability that can come into play, as I pointed out. Then you have the principle of volenti non fit injuria. For example, let's say I use a generative AI model and ask where I should invest, in mutual funds or the stock market. It gives an appropriate disclaimer that the information it is providing may not be accurate and true, and that I should exercise my own caution. If I nevertheless proceed with whatever recommendation it has given and incur a loss in the market, can I sue the AI provider? No.


ABIR ROY : Good that you mentioned this. I was just reading one of the news items that came out. An author in the US actually filed a defamation case against an AI engine, and the court ultimately ruled "no case made out", because when the AI responded, it stated that, A, the output is AI-generated, and B, it may not be a hundred percent accurate. So where is the loss of reputation, right? So your point about exercising caution stands, and I guess that's why the deployment has to carry those disclaimers. From a deployment perspective, you need to say it is generated by AI, because yes, accuracy is an issue. And that is a problem with all humans also. We should not be living in a world where we say humans are 100% accurate and machines are not. It's part of the learning process.


VIVEK PANDEY : From the perspective of these companies, these AI platforms, they also need to self-regulate, because I see that as the way forward for now. They are guided by community guidelines, safety protocols, and so on. Let's say I ask an illegal question; it will straight away say, I can't provide you an answer because this is against our community guidelines and safety protocols. If something is operating smoothly, don't fix what isn't broken. Take the example of "dark patterns". One can argue that unfair trade practices were already part of the law, but perhaps the lawmakers felt there was a need to bring in those guidelines because of sheer scale. If a deceptive practice is happening at that big a scale, it needs to be addressed at the outset, instead of taking it to court and waiting for the outcome, because that may lead to chaos. So the thing is, let's wait and see. If some conduct, say deepfakes as you mentioned, becomes so rampant tomorrow that there is a need to regulate, then perhaps soft-touch regulation, like the EU AI Act's requirement to disclose that a video is AI-generated, which a lot of companies already follow today, may be the way forward for now.


KUMUDAVALLI SEETHARAMAN : As we stand today, most of these generative AI options, if not all, have their own community guidelines and their own safety protocols, like he mentioned, and it's clear. Because if today I ask, and I know it sounds morbid, how do I kill Vivek, or how do I make a bomb using things in my house, it's not going to help me.


ABIR ROY : It'll actually ask you to visit a doctor.


KUMUDAVALLI SEETHARAMAN : It will tell me to visit a mental health specialist, that everybody is with you, all of that. So this is the kind of self-regulation, the soft-touch model, that he is talking about: self-regulation in the sense that I, as a service provider, know that there could be these sorts of harms, and I have a protocol to deter people from using the service in an illegal manner.


AMAN SHANKAR : I think we are at a juncture where we should shed our traditional thoughts of regulating any sector or any technology. We always think that there have to be some prohibitions, and only then can we regulate. But the time has come, for a technology like AI which is bound to innovate and is at a very nascent stage, to discuss what positive obligations we can have in the law rather than negative obligations. For example, rather than the law saying do not discriminate, do not do XYZ, the thought process, which also syncs with MeitY's recent AI governance guidelines report, should be that it is the responsibility of the AI system or the company to ensure fairness: the system should not have any bias which ostracizes a certain community, certain people or a certain group, and it should have fairness and accountability built in. Those things can help the entire sector grow, and it will also in some way sync with the broad ideals we have in India, that is, the IndiaAI Mission, right? In 2024, when it was announced and communicated to the public, it was with the lofty ideal that India will become "Atma-nirbhar Bharat" (Translation: "Self-reliant India") in the tech sector. So if you want to achieve that, a positive-obligation framework, or an AI governance framework to that extent, should be considered, rather than just putting in some blanket prohibitions or stipulations.


ABIR ROY : I think MeitY did a commendable job in recommending an inter-ministerial approach. They said: look at the existing laws. And like we are all saying, I think there is a common consensus. Is there an issue? Definitely there is an issue of deepfakes; there is an issue of people getting misled. There are multiple issues coming up, but the tools may be sharpened rather than getting a new axe. Even MeitY said this: wait and watch, as you mentioned. Dark patterns, like you mentioned: it is not as if they came out with dark-pattern guidelines on day one. They watched the market, concluded that these were dark patterns, and the lawmakers in their wisdom thought, okay, fine, we need to give some guidance that this is a dark pattern; and that came after years and years of research.


AMAN SHANKAR : And that's also to bring certainty to the business.


ABIR ROY : Yes, and certainty is very important.


VIVEK PANDEY : In fact, taking your point forward, it's still at a very nascent stage. I would argue that we have to give them some exemption, like the intermediary rules that provide safe harbour for platforms. The idea is that once your due diligence is complete, you should not be held responsible for conduct that someone else engages in using your technology, because your part is done: you have taken sufficient and reasonable steps to ensure it is not misused.


ABIR ROY : I have a slightly different take, though I hear where you're coming from. In fact, I was reading an excellent book by Anu Bradford, where she compares the American model, the EU model and the China model. The American model is market-based, whereas the EU model is rights-based. And one of the things she argues is that there has been so much exponential growth because of safe harbour, like you're talking about. Obviously our IT Act has it; the US had it way back in '96, and that promoted the entire ecosystem. But one of the reasons, one of the thought processes, behind safe harbour is that intermediaries were mere conduits of data: the classic intermediaries, the e-commerce intermediary, or, for example, the matrimonial intermediary. We just saw a case where the Supreme Court stayed an arrest involving an intermediary, because you cannot hold a promoter of a company liable when intermediary protections apply. But in the AI case, while I agree with the notion that some kind of protection should be there, it is not, quote unquote, "an intermediary", because they're acting on the data; they're throwing out outputs. Like I mentioned in the beginning, I'll tell you what I did. I said I want to do a round table: we have Vivek, who is a litigation expert; we have Kumudavalli, who is a policy expert; you are the legal and tech expert; and I'm managing all the experts. And it threw up a result, a set of things we should discuss. Now, an intermediary won't do that; something learning from the process will. But I agree with the point that there has to be some form of safe harbour. What form? The lawmakers should decide, but yes, I think some amount of protection, self-regulation...


AMAN SHANKAR : Do you also mean to say that there should be some undertone of those principles? Because ultimately, what is safe harbour? It simply says that if you are upfront about your activities, if you come with "clean hands", as the principle goes, and if you have done all your due diligence, then when certain eventualities happen, we have to introspect and see whether it was in your control or not. That's where, for example, the IT Act comes in. What does it say? If, let's say, today there's a deepfake on a social media platform, the intermediary, once it receives actual knowledge, either through a court order or from an authorized government agency, has to take down the content within 36 hours.


ABIR ROY : Yes, that's content moderation.


AMAN SHANKAR : So ultimately the principle is fundamental: you have to self-regulate, you have to be upfront about your activities, and you have to be responsible and accountable to the public you are providing your services to. That's the underlying principle.


ABIR ROY : Yeah, understood. Fair enough.


KUMUDAVALLI SEETHARAMAN : So now we've spoken quite a bit about the IT Act and how it plays with AI. There are other ripe issues, right? There are intellectual property rights, there's competition. So there are lots of other angles we can look at when talking about AI.


ABIR ROY : So see what you have said: IPR, copyright, competition. You missed privacy. Privacy. These are three or four things which are each a topic in themselves, and there's so much debate happening on them. So we'll discuss them individually in our subsequent episodes.


KUMUDAVALLI SEETHARAMAN : Great.
