Master Copilot and AI Agents in this essential lecture! Explore how Microsoft Copilot and AI agents work, learn to adopt generative AI in your business, and understand principles of responsible AI. Dive into AI limitations and hallucinations to use these tools effectively. Perfect for boosting your tech skills!

Explore our other Courses and Additional Resources on: https://skilltech.club/

Stay updated with real-world tech skills
Follow us on other social media
LinkedIn: https://www.linkedin.com/company/skilltechclub
Instagram: https://www.instagram.com/skilltech.club/
Facebook: https://www.facebook.com/profile.php?id=61572039854470
YouTube: https://www.youtube.com/@skilltechclub
Reddit: https://www.reddit.com/user/skilltechclub/
X: https://x.com/skilltechclub


#Copilot #GenAI #MachineLearning #AI #ChatGPT #SkillTechClub #AzureAIFoundry #OpenAI #DNN

Category: 🤖 Tech
Transcript
00:00 Now, after all this, let's focus on our main topic, which is Copilot. So, guys, up
00:14 to this point you have understood that we have language models, generative AI, machine
00:19 learning, and all those things. Now, if you ask me what Copilot is, or if you ask this question online,
00:25 Microsoft is going to say that Microsoft Copilot is a generative AI app that is integrated into a
00:31 wide range of Microsoft applications and user experiences. And we already know this,
00:36 because I have already shown you the flavors of Copilot which are available in various tools.
00:41 So it is going to help you in all of those tools. But then the question is: how exactly
00:46 are you going to use it? There is one common question which people in organizations ask me:
00:51 at what level should we adopt Microsoft Copilot and generative AI?
00:57 Well, the answer is here on screen. If you are looking at adopting
01:01 generative AI in your business, in your organization, you have to focus on the three levels of
01:07 adoption which are available. Obviously, the first level of adoption is that you can use
01:13 off-the-shelf generative AI applications such as Microsoft Copilot in your organization.
01:19 In this case you don't need to worry about model training or customization;
01:24 you can use Microsoft Copilot to optimize the way you work in your tools, in your everyday,
01:30 day-to-day work. It is going to empower your employees to become more creative: they will spend less
01:36 time on time-consuming tasks, and
01:42 they can focus on high-impact activities instead. This is the first level of adoption, and it is
01:47 one of the easiest. There are a number of organizations which are right now at stage
01:52 one, and that is what they are doing when they incorporate Copilot into their organization.
01:57 The second option is that you can extend Microsoft Copilot for your business by integrating it with
02:03 business-specific tasks, your business processes, and your organization-specific data. You can integrate
02:08 those things with Microsoft Copilot and, based on that, get actionable insights from your
02:14 own data. It is going to help you improve productivity with the productivity
02:19 apps in your organization, and that extension is level two. Other than
02:26 this, you can also go with the third option, which is to build your own Copilot-like agents.
02:32 AI-agent development is one of the most important developments of recent times.
02:38 If you want to develop Copilot-like agents yourself, you can integrate generative AI into your
02:43 custom workloads and applications, and create compelling customer experiences and commercial
02:49 products for your organization and your customers. This is the third and final option,
02:55 in which you have full control over the design and development. Remember, the
03:00 third option gives you full control over everything, but it is also the most complex,
03:05 time-consuming, and expensive one, because you are going to need a dedicated team to
03:10 develop this kind of Copilot-like agent. The easiest option is obviously level one, and the toughest,
03:17 or the one that gives you the maximum control, is the third one. Which one should you
03:21 use in your organization? That obviously depends on your requirements, but I would say that
03:27 organizations almost always start with the first level and then eventually move
03:31 from one level to the other if they need to. Okay, now, before we proceed to use prompt engineering
03:38 techniques with Microsoft Copilot, the one important topic remaining is the principles
03:44 of responsible AI. This is undoubtedly one of the most important topics in artificial intelligence, guys.
03:53 Anyone who is using AI should use it responsibly, and that is what every organization,
03:59 including Microsoft, and every country, including India, the US, and the countries of Europe, is
04:05 focusing on very seriously. Now, there are six principles of responsible AI, which are
04:10 on screen right now; you need to be very aware of them, and you need to use AI wisely
04:16 following these principles, and that is the reason you need to understand them properly.
04:20 As you can see, there are six principles. Fairness says
04:25 that your AI system should not be biased. They give an example for each principle
04:30 on the right side: maybe you have developed a loan-approval system, and
04:37 if you have trained it on biased data, then it is going to give you results
04:42 which are also biased. They give the example of a loan-approval model which
04:46 discriminates by gender due to a bias in the data on which it was trained. If that kind of thing happens,
04:51 it is not a good AI system. In the same way, you have to take care of the second principle, which is reliability
04:57 and safety. There are chances that your AI system can make some kind of error, and because
05:03 of those errors it can cause harm to a human being. For example, an autonomous vehicle
05:09 experiencing a system failure can cause a collision. If that kind of thing happens, it is
05:14 a bad thing, so you have to make sure you do all the reliability and safety checks
05:18 before you give the system to your users. The third one is privacy and security. Obviously, data privacy is a
05:24 very important thing, especially in certain domains like healthcare and finance. They give an
05:30 example of a medical diagnostic bot which is trained using sensitive patient data that is stored
05:36 insecurely; if that kind of thing happens, again, that is a bad thing. The fourth one is inclusiveness. Maybe
05:42 your solution does not work for everyone; if that is the case, you have to make sure
05:47 that your AI system empowers everyone. For example, let's say you have created
05:53 a predictive app which provides no audio output for visually impaired users; if that is the case,
05:59 then blind people will not be able to use that application. The fifth one is transparency: users
06:06 must be able to trust your complex system. AI systems are most of the time very complex, and if a system is too complex,
06:11 people will not believe in it, people will not trust it, and if that is the case, that is a bad thing again.
06:17 For example, maybe you have an AI-based financial tool which is making investment recommendations.
06:22 When that kind of thing is there, you have to be very clear about what those recommendations are based
06:27 on: basically, what data you have used to train that model. You have to be very specific and
06:32 crystal clear about that; only then will people or users trust it. Last, but the most important one:
06:39 accountability. Let's say you develop an AI application, or you are using an AI application, and you are taking
06:46 care of the first five principles; still, something can go wrong. If anything goes wrong with an AI system,
06:52 who is accountable for that? The company that developed the application? The government that
06:57 allowed the application? Who is responsible? For example, an innocent person is convicted of a crime
07:03 based on evidence from facial recognition which was faulty. If that happens, who is responsible
07:09 for it? These six principles are the principles which Microsoft, and every other organization
07:14 that is using or developing AI agents, takes very seriously. For example, right now I am
07:21 showing you a URL, microsoft.com slash US AI principles and approach. If you go to this
07:29 particular URL, it talks about those six principles which I have told you about:
07:34 fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. And not
07:40 only the six principles: it also shows you real-world use cases, videos, blogs,
07:46 articles, and white papers in which they try to highlight the right
07:51 ways to use AI. Now, let's say I focus on reliability and safety, and I say I want
07:57 to see reliability and safety in action: it takes me to a page where I can see some videos,
08:02 and it shows me certain examples of how I can take care of reliability and safety in my applications.
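To make the fairness principle concrete, here is a minimal sketch of a demographic-parity check for a loan-approval model like the one in the example above. The records, the group labels, and the 0.1 threshold are all made up for illustration; they are not from any real system.

```python
# Hypothetical demographic-parity check for a loan-approval model.
# The records and the 0.1 threshold below are made-up illustration values.

def approval_rate(records, group):
    """Share of applicants in `group` whose loan was approved."""
    subset = [r for r in records if r["gender"] == group]
    return sum(r["approved"] for r in subset) / len(subset)

def parity_gap(records, groups=("female", "male")):
    """Absolute difference in approval rates between two groups."""
    rates = [approval_rate(records, g) for g in groups]
    return abs(rates[0] - rates[1])

# Toy predictions from a (hypothetical) trained model:
predictions = [
    {"gender": "female", "approved": 1}, {"gender": "female", "approved": 0},
    {"gender": "female", "approved": 0}, {"gender": "female", "approved": 0},
    {"gender": "male",   "approved": 1}, {"gender": "male",   "approved": 1},
    {"gender": "male",   "approved": 1}, {"gender": "male",   "approved": 0},
]

gap = parity_gap(predictions)
print(f"approval-rate gap: {gap:.2f}")   # 0.25 vs 0.75 -> gap 0.50
if gap > 0.1:                            # the threshold is a policy choice
    print("Warning: model may be biased by gender")
```

In practice you would run a check like this on a held-out evaluation set before shipping the model, and investigate any gap above whatever threshold your organization's policy sets.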
08:09 These principles of responsible AI are the most important things which you have to
08:15 keep in mind before you actually use generative AI in your organization, and that is why I strongly
08:21 recommend everyone go through this website at least once and check the six principles
08:27 in depth before using any generative AI tool. Now, let's say you understand responsible AI, but not
08:34 everyone in your organization is familiar with it. That is the reason you have to take two
08:39 steps in your organization before using any generative AI tool. Step number one: you have to
08:44 establish a system for AI governance. There are multiple ways to do this. Let's say your
08:49 organization decides to hire a chief ethics officer who will
08:55 be part of the organization. This chief ethics officer is a centralized way to
09:00 take decisions about responsible AI: basically, this will be one person who is accountable
09:06 for all AI-specific decisions and usage in the organization. Or maybe you want to go the
09:12 distributed way: you can set up an ethics office or an ethics committee in your organization.
09:17 An ethics office has dedicated ethics teams for different levels of the organization;
09:23 basically, these people will be internal team members only, but they will focus on
09:27 ensuring that the ethical principles are being followed by all the employees in the organization.
09:32 Or maybe you can go with an ethics committee, in which even external people can be part of the
09:37 committee. They provide perspectives from people with a wide range of diverse backgrounds
09:43 and expertise, unbiased opinions from external members, and buy-in from senior leaders across the
09:48 company. No matter which way you go, whether you have a chief ethics officer, an ethics
09:54 office, or an ethics committee, whichever way you focus on, you also have to
10:00 take certain actions for AI governance in your organization. Let's say the
10:06 first action is to make resources available for your employees and your customers:
10:11 maybe a handbook, maybe a manual, or maybe training sessions for them, so that they
10:17 become familiar with AI governance and responsible-AI guidelines. Or maybe you can share my video with them,
10:23 so that they can understand all those things. You should have a centralized repository for a couple of
10:29 AI-related inventory mechanisms: maybe a document of all AI models and what is associated with them, so
10:35 that everyone in the organization is able to check those things and understand them very
10:39 well. You also need to develop certain tools which can help you automate the
10:45 monitoring of AI governance: any time anyone is not following the AI governance guidelines,
10:51 that reporting tool should have notifications, some alert mechanism by which it
10:57 notifies the chief ethics officer or the ethics committee about the compliance issues
11:02 caused by generative AI. If you do these kinds of actions and this kind of setup in your
11:08 organization, then you can say that your organization is ready for generative AI tools. Now, before we proceed
11:17 further, I just want to add some of the limitations of AI systems. Now remember, the first
11:23 limitation of AI is what is known as AI hallucination. AI hallucination is the particular issue
11:30 because of which AI systems can give you false or misleading responses. For example, if I go to
11:36 copilot.microsoft.com, this is a URL where I can get Copilot inside the browser. As you can
11:42 see right now, even on the home page, where it shows me "What's on your mind today, Maruti?" and where
11:48 I can put in a prompt, if you look at the bottom of this particular page, it shows
11:53 "Copilot may make mistakes". So remember, even Microsoft, and everyone who is developing a generative AI
12:01 application, knows that generative AI can have certain limitations and issues; AI hallucination is one of
12:07 them. Now, if you ask me what kinds of causes can lead to AI hallucination: well, first of all, remember,
12:14 there are chances that the models on which the AI systems are based have incomplete
12:20 data or outdated data. When that is the case, the AI is going
12:25 to fill the gaps with guesses: if it does not have proper data, it may try to imagine things, or try
12:32 to create things which never actually happened. Also, it always gives you answers
12:37 based on pattern-based output; basically, it has no true understanding
12:42 behind them. And there is no fact-checking done by the AI: you need at least a human being
12:48 to do the fact-checking on the AI-generated answers, because there are no internal fact-checking mechanisms
12:54 in generative AI applications. Because of this, there are so many examples in
13:00 history where people used generative AI but did not get correct answers. There
13:06 were even a couple of fake research papers which people created for their PhD theses: they used
13:11 generative AI to generate those PhD-thesis-style white papers, and they
13:17 contained "facts" which were not real facts at all; it was all imaginary, hallucinated content
13:23 generated inside them. There are also so many cases in which incorrect historical or
13:29 scientific facts appeared, and these are the common issues with generative AI. Because
13:35 of all these limitations of generative AI, which can arise from knowledge
13:39 gaps, accuracy issues, or certain ethical risks, we should
13:45 be familiar with these things, because misinformation, privacy issues, and even deepfakes are some of the common
13:51 problems of AI. Now, remember, how can we mitigate all this? Well, the first and
13:58 most important thing you have to do is set up a fact-checking mechanism with
14:02 human oversight: only a human is going to be able to understand what kinds of things are realistic and
14:08 what are not. That is the reason, when starting this topic, I told you that Copilot is a
14:13 co-pilot, not the main pilot; you are the main pilot. So you have to make sure that whatever
14:19 you are using that is generated by Copilot is actually valid and actually appropriate for the audience,
14:25 customers, or employees of your organization. And do not forget: you have to use AI responsibly.
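The human-oversight idea above can be sketched as a simple review gate, where nothing AI-generated is published until a human approves it. The class, the example answers, and the approval callback are all hypothetical stand-ins for a real reviewer workflow.

```python
# Minimal human-in-the-loop sketch: AI-generated answers are queued
# for review, and only human-approved answers are published.
# The approval callback stands in for a real reviewer's judgment.

from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)
    published: list = field(default_factory=list)

    def submit(self, answer: str):
        """Every AI-generated answer starts as unverified."""
        self.pending.append(answer)

    def review(self, approve_fn):
        """approve_fn stands in for the human reviewer's decision."""
        for answer in self.pending:
            if approve_fn(answer):
                self.published.append(answer)
        self.pending.clear()

queue = ReviewQueue()
queue.submit("Paris is the capital of France.")
queue.submit("The Eiffel Tower was built in 1720.")  # hallucinated fact

# The human reviewer rejects the fabricated claim:
queue.review(lambda a: "1720" not in a)
print(queue.published)  # only the verified answer survives
```

The point of the design is that publication is impossible without passing through `review`; the AI output never reaches the audience directly.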
14:32 Keeping all of this in mind, I want to add one more thing, which is the different types of models that are
14:36 available now. I'm sure you have heard about something called DeepSeek-R1. DeepSeek-R1 is a
14:42 reasoning model. As of now, we have two different types of models available in the market:
14:48 we have chat models and we have reasoning models. Chat models are going to help you with
14:53 natural-language conversation, because they are best at responding in natural language.
14:58 On the other hand, reasoning models are going to help you perform
15:03 structured reasoning, logic, and problem-solving kinds of mechanisms. Some examples of chat
15:09 models are ChatGPT, Bard, Copilot, and so on, and some examples of reasoning models are the
15:14 o1 model, DeepSeek-R1, or symbolic-AI kinds of models. Now remember, both kinds of models
15:21 have pros and cons, and that is the reason modern AI agents should use both of them; they should use
15:26 a combination of the two. Even the latest version of Copilot is an AI agent
15:33 which uses a chat model powered by reasoning models; it is using both of them
15:37 internally. Now, chat models are good at predicting the next word based on patterns; because of
15:44 that, they are fluent, context-aware, and give more creative responses. Basically, they are going to be more creative
15:50 when they give you responses. On the other hand, that creativity is not there in reasoning models:
15:56 reasoning models always use logical inference, structured rules, and mathematical reasoning,
16:01 and that is where reasoning models are going to give you high accuracy.
16:06 This shows me that it is my choice: do I want to go for more creativity, or for
16:12 high accuracy? Now, obviously, more creativity comes with issues like hallucination, lack of deep logic, and
16:20 inconsistency, while high accuracy comes with issues of its own: it is always going to be
16:25 rigid, and it lacks flexibility in handling natural conversation. That is the reason, depending on the
16:31 scenario, you can use one model or the other, or maybe combine both of them if required.
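The choice between the two model types can be sketched as a small routing function. The model names and the keyword heuristic below are made up for illustration; a real agent would use a proper classifier, or let the platform decide which model to route the request to.

```python
# Sketch of routing a request to a chat model or a reasoning model.
# REASONING_HINTS is a made-up heuristic, not a real product feature.

REASONING_HINTS = ("prove", "calculate", "step by step", "solve", "debug")

def pick_model(prompt: str) -> str:
    """Route logic/maths-style prompts to a reasoning model,
    open-ended conversation to a chat model."""
    lowered = prompt.lower()
    if any(hint in lowered for hint in REASONING_HINTS):
        return "reasoning-model"   # e.g. an o1- or DeepSeek-R1-style model
    return "chat-model"            # e.g. a GPT/Copilot-style chat model

print(pick_model("Write a friendly welcome email"))    # chat-model
print(pick_model("Solve this equation step by step"))  # reasoning-model
```

Combining both, as the latest Copilot is described as doing, simply means the agent sends creative, conversational prompts to the chat model and structured, accuracy-critical prompts to the reasoning model.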