00:00 Hello and welcome to NDTV Profit.
00:02 The government has released an advisory
00:04 that requires AI platforms to seek permission
00:07 before they launch a product in India.
00:09 That essentially sets up guardrails,
00:11 which some have equated to the License Raj
00:13 of pre-liberalization India.
00:16 All AI-generated content has to be labeled or embedded
00:19 with permanent, unique metadata to identify the creator,
00:22 the advisory says.
00:24 That's aimed at preventing misinformation
00:26 and the threat of deepfakes that come with such platforms.
00:29 To talk to us more about this advisory
00:32 is Srikanth of Fractal AI,
00:35 who has been in the space for more than two decades,
00:38 almost since the internet itself was in its infancy.
00:41 So he's, I think, best placed to take us
00:44 through what the advisory means.
00:46 Srikanth, first of all, welcome to NDTV Profit.
00:48 Thank you, Dushyant, great to be here.
00:51 So just talk us through this AI advisory.
00:53 I'm sure you have seen a copy of it.
00:55 What does it mean for startups like yours,
00:58 for creators, as well as for us, the viewers,
01:01 as far as building AI content is concerned?
01:06 Let's first start with the intent behind this advisory,
01:12 which could become a law.
01:14 The intent, which is very good,
01:16 is that we should protect consumers
01:19 from harmful content.
01:21 And if you think about AI research,
01:24 you can divide it into two things.
01:25 One is pure research,
01:27 which is happening in terms of developing new ideas
01:31 around AI.
01:32 And second is what that research translates into: products,
01:36 which all of us can experience, use,
01:38 and then benefit from, or not.
01:41 Now, when it comes to regulating AI,
01:44 regulating research is very hard and possibly pointless
01:48 because so much is going on that we can't even control it.
01:52 And unless it is actually in the public domain,
01:54 where people are experiencing it,
01:56 I think research should be allowed to do what it's doing.
01:59 So that is the thought behind the guidance as well:
02:02 regulate the products.
02:05 I think the intent here is to require prior approvals,
02:08 which I have some problems with, in the sense that
02:10 AI is everywhere these days.
02:13 Almost every technological product
02:15 either already has AI embedded in it
02:18 or will soon have a lot of AI embedded in it.
02:21 So to say that every product that has AI in it
02:25 should get prior approval will become unsustainable.
02:29 So the intent is good in terms of not regulating research
02:33 but regulating products,
02:35 but the execution of that may need more thinking through.
02:37 Now, let's take a pharma example.
02:40 In the case of the pharmaceutical industry,
02:42 you can research any number of drugs
02:44 and then you have to go through
02:46 a whole set of clinical trials
02:48 and eventually have to take regulatory approvals
02:50 before launching a pharma product
02:53 because it's extremely powerful.
02:55 There could be good effects,
02:56 there could be negative side effects,
02:58 and it could be harmful as well.
02:59 Because it is so powerful, we want it to be regulated.
03:03 In a similar way,
03:04 AI should be regulated if it is extremely powerful,
03:07 not all AI.
03:09 And that's really the nuance
03:12 that we need to understand as we process this regulation.
03:14 - Got it.
03:15 Yeah, that's a very fair point that you make, Srikanth.
03:19 So essentially you're saying that we have to realize
03:21 where to place those guardrails
03:23 and not clamp down entirely on the products.
03:25 Am I correct in that assessment?
03:27 - Yes.
03:29 So it's good to have regulation,
03:32 but if it's blanket and it covers everyone
03:34 and all AI products,
03:36 it effectively covers all technology
03:38 and the entire technology sector,
03:40 and it becomes unsustainable.
03:41 If you take drugs as an example,
03:44 or medicines as an example,
03:45 there are people who are equipped to handle
03:48 and understand this data
03:49 and then approve or reject the drugs.
03:51 We don't have a similar system right now
03:53 anywhere in the government
03:55 that can approve or reject technology products.
03:57 So it would be very hard to even implement.
04:00 And secondly, not all tech is all-powerful.
04:03 You can't regulate all tech.
04:05 You have to regulate only the tech that is so powerful
04:07 that it can create harm.
04:10 So are there such technologies available right now?
04:13 Maybe a handful;
04:14 there will be a few of them.
04:16 So my suggestion would be to regulate
04:19 either products that are in use
04:21 by millions of people every day,
04:23 like Google Search, which is used every day
04:26 and is entirely AI-driven now,
04:28 or if you think of GPT-4 or ChatGPT
04:31 or other AI products,
04:34 which have hundreds of millions of users,
04:37 there I think regulating,
04:39 making sure that they follow a certain set of
04:41 no-harm principles, is useful.
04:44 - Okay.
04:45 - Either very highly used products
04:47 or highly accurate products.
04:49 Let's imagine that somebody built a new system.
04:50 Maybe it's a small product, it's a new product,
04:53 but it beats GPT-4 in accuracy,
04:56 or ChatGPT in accuracy,
04:58 across a set of benchmarks.
05:01 Then yes, I think it might make sense
05:03 for it to be regulated.
05:04 But regulating all tech will become unsustainable.
05:08 And I saw that the minister also
05:11 issued some clarifications earlier today.
05:13 And the clarification that the minister has issued
05:17 is that it's only going to apply to big companies
05:20 and not startups.
05:21 - Okay.
05:22 - I would say that the distinction is not so much
05:24 between big companies and startups
05:26 as between
05:29 high-usage, high-accuracy products and all other products.
05:33 If something has high usage and high accuracy,
05:35 then I think we might have to regulate
05:38 in terms of what it produces.
05:40 - Yeah.
05:41 So you actually preempted my question
05:42 about the clarification that came today.
05:44 So how do we define what is large?
05:47 How do we define what is significant?
05:49 Mr. Rajeev Chandrasekhar said
05:51 that the advisory applies to significant
05:54 and large platforms and not startups.
05:56 But I think your understanding is very clear:
05:57 look at which products are being used more;
05:59 look at that front as well.
06:01 A couple of other things he talked about:
06:03 the advisory is aimed at stopping untested AI platforms
06:06 from deploying on the Indian internet,
06:08 and the process of seeking permission, labeling,
06:10 and consent-based disclosure to users
06:12 about untested platforms is an insurance policy.
06:15 So all the guardrails,
06:17 Mr. Chandrasekhar said, are like an insurance policy.
06:19 What is your understanding of this clarification,
06:22 these two bits: untested AI platforms
06:24 and this insurance policy that he's talking about?
06:26 - Yes.
06:27 So number one, in terms of usage,
06:28 if any product has more than, let's say, a million users,
06:31 then you can see that it has a huge impact
06:34 because a million people and more are using it.
06:36 So I think by keeping a threshold of a million users,
06:40 we are essentially ensuring
06:42 that no major harm can take place from these products.
06:46 And so that is one.
06:47 Secondly, in terms of "untested,"
06:49 all AI products are almost by design tested
06:52 before they're launched.
06:53 There's an extensive amount of model building,
06:55 training, and testing.
06:56 So I don't understand what "untested" might mean.
07:00 It's definitely helpful for every AI product
07:02 to provide some advisory right up front
07:05 that answers could be wrong.
07:08 Sometimes AI doesn't get the facts straight.
07:10 Sometimes the pictures produced could be harmful.
07:14 All attempts may have been made to reduce the harm,
07:18 but it cannot be 100% eliminated.
07:21 So this kind of warning, a content warning
07:25 or a notice that something could go wrong, is helpful.
07:30 I think every AI product should have it.
07:34 And then when it comes to things like deepfakes
07:37 and videos, et cetera, having a very clear policy
07:40 that these are labeled as AI-generated
07:44 and that watermarks are put in place,
07:46 these kinds of guidance are actually helpful.
07:48 But taking pre-authorization from the government
07:51 for every product, I think, would be very unsustainable.
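
To make the labeling guidance concrete, here is a minimal sketch in Python of the two safeguards described above: a visible AI-generated label tiled across an image, plus permanent creator metadata embedded in the file. This is an illustration only, not Fractal's pipeline or the advisory's prescribed method; the function name, label text, and metadata keys are assumptions.

    # Minimal sketch: tile a visible "AI-GENERATED" label across an image
    # and embed creator metadata in the PNG, matching the advisory's twin
    # asks. Illustrative only; not any vendor's actual implementation.
    from PIL import Image, ImageDraw
    from PIL.PngImagePlugin import PngInfo

    def label_ai_image(in_path: str, out_path: str, creator_id: str) -> None:
        img = Image.open(in_path).convert("RGBA")
        overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
        draw = ImageDraw.Draw(overlay)
        # Tile the label so it cannot be cropped out of one corner.
        for x in range(0, img.width, 200):
            for y in range(0, img.height, 100):
                draw.text((x, y), "AI-GENERATED", fill=(255, 255, 255, 96))
        img = Image.alpha_composite(img, overlay).convert("RGB")
        # Permanent, unique metadata identifying the creator.
        meta = PngInfo()
        meta.add_text("ai_generated", "true")
        meta.add_text("creator_id", creator_id)
        img.save(out_path, pnginfo=meta)

Note that text chunks in a PNG can still be stripped by re-encoding the file, which is one reason the visible, tiled watermark matters as a second layer.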
07:55 - All right.
07:56 Finally, you have launched a product called Kalaido,
07:59 which is a text-to-image AI platform.
08:02 So what have you done there to protect users and creators,
08:06 to protect them from deepfakes and the misinformation
08:09 that may stem from deepfakes?
08:11 So what is the kind of self-regulation,
08:13 if I may say so, that you have put in place
08:16 so that we are protected?
08:17 - It's a great point, Tushar.
08:19 Self-regulation is key here.
08:21 So very powerful products can be regulated by the regulator,
08:26 but all other AI products should go through
08:29 what is called a responsible AI filter.
08:32 Now, what is a responsible AI filter?
08:34 Essentially, there are a bunch of principles
08:36 like transparency, explainability, fairness,
08:40 lack of bias, and what kind of data has been used
08:44 to build the models.
08:46 This kind of information has to be looked at
08:48 and certified by internal methods: look,
08:52 we have taken adequate precautions,
08:54 and it passes the test for being responsible AI.
08:58 And only once it is certified can it be launched.
09:01 So self-certification might be helpful.
09:04 Now, I'm part of NASSCOM, and at NASSCOM
09:07 a whole host of companies have come together
09:10 to create responsible AI guidelines that India can use.
09:14 So the government could potentially take those guidelines
09:19 and ask every company
09:21 that's releasing AI-driven products
09:23 to use these guidelines and self-certify
09:26 before launching a product.
09:28 I think that would be a very welcome measure.
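
As a rough sketch of what such a self-certification gate could look like in practice, the checklist below encodes the principles Srikanth names. The structure, class names, and principle list are illustrative assumptions, not the NASSCOM guidelines themselves.

    # Sketch of a pre-launch "responsible AI filter": every principle must
    # be certified internally, with evidence, before the product can ship.
    # Assumed structure for illustration; not the NASSCOM guidelines.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Principle:
        name: str            # e.g. "transparency", "fairness"
        certified: bool = False
        evidence: str = ""   # internal report backing the sign-off

    @dataclass
    class ResponsibleAIChecklist:
        principles: List[Principle] = field(default_factory=lambda: [
            Principle("transparency"),
            Principle("explainability"),
            Principle("fairness"),
            Principle("lack of bias"),
            Principle("training-data provenance"),
        ])

        def certify(self, name: str, evidence: str) -> None:
            for p in self.principles:
                if p.name == name:
                    p.certified, p.evidence = True, evidence
                    return
            raise KeyError(f"unknown principle: {name}")

        def ready_to_launch(self) -> bool:
            # Launch is allowed only once every principle is certified.
            return all(p.certified for p in self.principles)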
09:31 Now, coming to Kalaido.ai:
09:33 Fractal has released this product called Kalaido.ai,
09:36 spelled K-A-L-A-I-D-O dot ai.
09:39 It produces high-resolution HD images
09:42 from just text prompts
09:44 across a whole host of Indian languages,
09:46 whether it's Telugu or Marathi,
09:50 or Urdu, or a whole host of other languages.
09:54 It can take those languages and produce realistic,
09:57 high-definition images, and they're quite interesting.
10:00 And we launched it recently.
10:02 It has scaled up nicely.
10:04 One thing we have made sure of is that
10:06 any celebrity picture it produces
10:08 is adequately labeled,
10:10 and the watermarks are all over the picture
10:12 in such a way that they cannot be removed.
10:15 So that's one thing we have done.
10:16 Secondly, for all pictures,
10:19 if a request comes in for harmful content,
10:22 which is what we call NSFW,
10:26 or not safe for work,
10:29 or various other kinds of harmful requests like those,
10:31 we automatically detect those requests
10:34 and we block them.
10:37 And we tell the user that this may violate
10:42 our AI safety norms,
10:44 and therefore we will not be able to produce that image.
10:46 So we are using AI to detect such harmful requests
10:51 and automatically block them.
10:53 That's something that we do as well.
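
A minimal sketch of that screening step, assuming a simple keyword gate in place of the AI moderation models Srikanth describes: the term list, safety message, and function names are all placeholders.

    # Sketch: screen the prompt *before* any image is generated, and refuse
    # with a safety message if it looks unsafe. A production system would
    # use a trained moderation model, not a keyword list.
    UNSAFE_TERMS = {"gore", "nudity", "violence"}  # illustrative stand-in

    SAFETY_MESSAGE = ("This request may violate our AI safety norms, "
                      "so we will not be able to produce that image.")

    def is_unsafe(prompt: str) -> bool:
        words = set(prompt.lower().split())
        return bool(words & UNSAFE_TERMS)

    def handle_request(prompt: str) -> str:
        if is_unsafe(prompt):
            return SAFETY_MESSAGE
        return render_image(prompt)  # stand-in for the text-to-image call

    def render_image(prompt: str) -> str:
        # Stub standing in for the actual generation pipeline.
        return f"<image for {prompt!r}>"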
10:55 Overall, the idea has been to make sure
10:57 that it is used for its intended purposes and not beyond.
11:01 Now, this cannot be 100% guaranteed,
11:03 but we are using various kinds of AI models
11:06 to certify the quality of the pictures
11:08 that are coming out in the first place.
11:11 - All right, that's a very insightful chat, Srikanth.
11:14 Thank you so much for joining us today.
11:16 I'm sure it helps our readers and viewers
11:19 to understand what this advisory means.
11:21 Thank you so much.
11:22 (upbeat music)
11:25 (dramatic music)