The AI Centre at IIT Madras has been studying the government guidelines and has taken steps to support safe and responsible AI adoption.
Transcript
00:00 In the AI governance guidelines that were released last month, we have taken a slightly different approach than just talking about the harms and risks of AI. Half of the guidelines are actually about enabling AI adoption, and the other half is about risk mitigation. On the enabling side, we have talked about what needs to be done in terms of providing more resources for AI developers and AI research, and also capacity building and skill development for AI solutions.
00:42 That is one part of it. We also talk a lot about AI awareness: not just AI developers and AI scientists, but almost everybody who can potentially use AI, including government servants and IAS officers, and also people in the legal profession, lawyers and judges, should understand the impact of AI on their own domain and discipline. This awareness should also reach the general public at large, because they are obviously exposed to a lot of AI and AI decisions. This kind of AI awareness building is also something we have recommended.
01:29 In the second set of guidelines, which is about AI risk mitigation, the first main recommendation we have made is that, depending on which sector AI is being used in, the existing regulatory framework can already cover a lot of the harms that can be caused by AI. For example, people talk about bias as one possible harm from AI, but in many disciplines where bias is a potential risk, there are existing regulations that say even a human being's decisions should not be biased; you should not use gender for making a loan decision, for instance. We already have these kinds of regulations, and obviously any system that uses AI has to satisfy those regulations as well.
02:17 So the first thing we need to do is look at the existing regulatory landscape in India and figure out where AI harms can be controlled by these regulations, where the regulations will be challenged by AI, and whether any modifications to the existing regulations are needed so that they can take the impact of AI into account. After we have done this, if we still feel there are harms of AI that would need new regulation, then the government should go ahead and work on new AI regulations to be passed separately. This is one major recommendation we have made.
02:58 In order to make sure this kind of broad coordination happens, we have also recommended setting up an AI governance group, which will consist of representatives from multiple ministries, from both the central government and the state governments, along with some external experts, and which will continually monitor the development and deployment of AI systems. It will be advised by multiple groups: a technology advisory group, and also an AI Safety Institute that will provide both technical and application input to the governance group. The governance group will then guide AI regulation as well as any other investments the government has to make in the larger AI space.
03:49 These are the main recommendations we have made in the guidelines. And like I said, these are guidelines, so the government still has to accept them and then decide how to operationalize the different parts of the recommendations. Some of the things around capacity building and providing more AI resources have already been put into operation by the central government and some of the state governments; because of the necessity of the times, they were adopted without waiting for these guidelines.
04:18 The guidelines are essentially saying that more investment should happen, for example that we should provide more compute resources and more free access to data for Indian developers, not necessarily letting the data go globally. All of these things have to happen. On the other side, we are recommending that the governance group be set up first, and hopefully that will happen shortly; after that, it will guide the adoption of the other parts of the ecosystem.
04:56 From the viewpoint of what we are expecting from industry and developers, the first thing is to come up with some kind of voluntary commitments that will help safer development and deployment of AI systems. For example, many of the AI companies have already agreed to label AI-generated content explicitly as AI generated: if you do a Google search, you will find that AI-generated answers are clearly labeled as such, and likewise many image generation tools display a notice telling you very clearly that an image is AI generated.
05:26 The challenge is that the technology is not at a point where one can authoritatively say whether a piece of content is genuine or generated by AI without some support from the software, or in this case from the companies that are building those AI systems. So we have to be careful about claiming that content will be automatically verified, or that a third party can automatically verify whether content is genuine; we cannot build a foolproof mechanism yet. We are also encouraging the development of technology that would allow us to do this in a more technology-driven manner, but to begin with we are saying that companies should agree to these voluntary commitments.
06:14 The second thing we are asking for is a more extensive AI incident reporting mechanism, so that whenever a developer or a user of an AI system becomes aware of some new AI risk that was not discovered earlier, they report it to a central repository. This way the community at large can be made aware of it, and these incidents can perhaps also be reported globally through international reporting. We are recommending that such a reporting ecosystem be set up very quickly and that the companies participate in it.
06:54 One of the things we are really concerned about is the rapid enablement of access to AI systems for the general public. Even if you go to a village, as long as you have a Jio connection or some other data connection, you have access to some of the quote-unquote cutting-edge AI systems. But these general-purpose AI systems have essentially been trained on text from the internet, and they are answering questions based on what is out there on the internet. So you should attach the same level of reliability to the output coming from these AI models that you would attach to a Google search result.
07:43 You are not going to start a new course of treatment just because you found a page through a Google search that tells you to; likewise, you should not take decisions that can affect your health or your life based on the recommendations coming out of these AI models. You should always get them verified by domain experts. If the AI tells you that you have to take a certain medicine for your current health condition, go talk to a doctor about it: "Look, my AI said this; do you think I should take it seriously or not?" It has actually happened that the doctor had not thought in that direction, and the AI actually helped them come to a better decision. But you are not the judge of whether that decision is better or not; the doctor has to determine that. So please do not rely only on the AI's decisions for these kinds of life-and-death questions; always go to a professional who can give you a much more informed opinion.