Discover a three-step framework for transforming your company with generative AI, based on real-world cases of companies that are already getting value from this technology.

Transcript
00:00Welcome to our webinar, Scaling Gen.AI: Get Big Value from Smaller Efforts.
00:13I'm Abby Lundberg, Editor-in-Chief at the MIT Sloan Management Review, and I'll be moderating the event.
00:21Today's program is sponsored by Celonis.
00:24You know, despite two years of a lot of experimentation, most companies aren't seeing the large-scale Gen.AI transformation they first envisioned.
00:35Our speakers today have developed a three-step framework to help organizations generate real value at different levels of investment,
00:43while effectively managing the risks that Gen.AI innovations can bring.
00:46In today's webinar, you'll learn how successful companies like McKinsey, CarMax, and Morgan Stanley are getting value today from small-t transformations,
00:58and how to move from pilot to enterprise scale while managing data, security, and compliance.
01:05Our speakers have written about this for MIT Sloan Management Review.
01:09You'll find that article in the handout section for download, and we'll also post the link in the chat.
01:14So let me introduce our speakers.
01:18Melissa Webster is a Senior Lecturer in Managerial Communication at the MIT Sloan School of Management.
01:24Her research focuses on how organizations communicate and implement new technologies
01:30with a particular focus on digital transformation and AI adoption strategies.
01:35George Westerman is a Senior Lecturer at the MIT Sloan School and co-author of the award-winning book,
01:43Leading Digital, Turning Technology into Business Transformation.
01:48He's a leading expert on digital and AI transformation, advising numerous Fortune 500 companies on their digital strategies.
01:57Welcome, Melissa and George.
01:58Hi, Abby, and hi, everybody.
02:04I'm really glad to be here to share this research.
02:06Hopefully, it can be useful for you as you go about making your own strategies happen.
02:12So if you can't tell, I'm George.
02:14Melissa is the other person.
02:16And let's talk about this research here.
02:19So this is the big question.
02:20When will we see the major transformations with generative AI?
02:23We set out to write this article about the major transformations that were happening with generative AI.
02:29And to our dismay, we found very few.
02:36And when we sat back and looked and said, why is that?
02:40We actually found there was a much more interesting and, I think, much smarter approach that companies were taking
02:45than just jumping into the big things.
02:47And that's what we want to share today.
02:49So here's what we heard when we talked to leaders around the world about what they're doing.
02:58It's easy to do the proofs of concept, but bringing it to the right level of trust in a large group of users is much more difficult.
03:04Another leader said it much more succinctly.
03:07She said that, you know, that low-hanging fruit, it's not really so low.
03:11And so that was the question.
03:13How are people approaching it when they see this challenge?
03:16So what we want to do in the next hour is these three things.
03:21Number one, just setting the scene.
03:23I'll set the scene and then we'll talk about this growth slope.
03:26And Melissa will take over from there and then we'll go back and forth.
03:30And we'll end with what's next, where we think things are going.
03:34So just to set the scene, here's the challenge.
03:38It's a huge opportunity from generative AI.
03:41We've all been talking about that, but there are also huge risks and concerns in implementing these things, especially if you start talking about taking the human out of the loop.
03:54So what is the right strategy?
03:55How fast do we want to go?
03:57Where do we want to use it?
03:58And how do we handle the capabilities that we need to develop in between there?
04:05So this is the article that Abby talked about.
04:08And this is what we're going to go through in the next hour or so.
04:12So what we found is not that companies are either staying away from the big changes or jumping right into the big changes.
04:22What we found is that the smart leaders are taking a much more measured and systematic approach to getting to the large things that they want to implement.
04:31And we saw that really in three stages, these things are happening.
04:35Number one, they're starting with creating a safe environment for individuals to do their own productivity work.
04:43And we call that level one.
04:45They're then taking on specific, well-defined tasks and roles and starting to implement Gen AI improvements in those areas.
04:56And those are setting the stage to take on these bigger challenges.
05:00But what they're doing is that every step of the way, they're learning how to manage risk.
05:06They're learning about their tools.
05:08They're building up capabilities in the technology people and the rest of the employees to be able to move forward to those bigger opportunities later.
05:18So having said these three things, we want to ask you a poll.
05:21And I'll go back again to the picture in just a second.
05:24But the poll is this.
05:25When you think about these three things I just talked about, the individual productivity, changing some specialized roles such as coding and call centers or customer support, maybe in some marketing efforts, or if you think about the greater autonomy in the bigger things, what's the highest peak that your organization has climbed so far?
05:48So, Sam, can you launch that poll?
05:52There we go.
05:53So people are already starting to fill out this poll.
05:56And let's keep going and see who wins.
06:01What's happening so far is we see that the red, the individual productivity, is in the lead.
06:05And this is what we're seeing in companies also, that this is a good place to start.
06:11It's relatively low risk.
06:13The cost doesn't have to be too high.
06:15And it's an opportunity to get in there and just get people comfortable and also get the technology people comfortable.
06:23We do see some piloting in level two.
06:26And Melissa is going to talk about that a lot more, too.
06:31A little bit of piloting in level three.
06:33And we do have a few intrepid souls that are putting level three in production.
06:38And that'll be interesting.
06:39We'll come back to that in a bit.
06:43Those of you who are doing none, it's probably a good chance to start moving into level one as soon as you can, because your employees are doing it.
06:51I would say your employees are doing it whether you know it or not.
06:53So what do you think, Melissa?
06:56Should we take it from here?
06:58Yeah, I was just going to comment that these numbers look similar to other groups that I speak with as well.
07:06So everyone, you're in good company.
07:12On to level one.
07:14What we're going to do now is we're just going to go through each of the levels and talk about them in more detail.
07:18So you can think about what you are doing and what else you might do as you're in these levels.
07:24OK, so level one, individual productivity, which is where the greatest number of your organizations are.
07:33This will probably sound somewhat familiar, but let's visit it anyway.
07:39And this was my team's research in the spring that these were very common use cases among the advanced users that we were talking to.
07:50So you can see inbox management, meeting support, calendar optimization, briefing prep.
07:57A couple use cases I'll call out that I hear less commonly, but might be interesting.
08:03One is for written communication, or even planning spoken communication: asking the LLM to help you put it in a different voice or adapt it to different cultural norms.
08:17So I remember someone in the audience from Europe saying, oh, yes, I recently started working with American teams and I've been asking the LLM to translate my English into a more American style of business writing.
08:33So as a communication professor, I definitely appreciate that use.
08:37All right. So on to look at a few specific things.
08:39And let's actually start with the right hand side, the company specific tools.
08:43Those tend to be a little bit later in the adoption, depending on the organization.
08:50McKinsey and many other companies have built similar tools that allow employees to access
09:01often hundreds of thousands of pages of internal sources and draw from those to work more effectively.
09:11You can see that within a year, 75% of employees were using it actively, with up to 30% time savings and improved quality as well.
09:21And a couple of notes about external tools.
09:25The black logo that may be a little less familiar to you: that was a product manager in big tech using Superwhisper to dictate, summarize, and clean up their thoughts during performance reviews.
09:39And that flexibility around voice capability is another one that I see more advanced users doing, such as talking with ChatGPT on their way home during the day to pull their thoughts together,
09:54synthesize the day's happenings, and then be able to reference that transcript later.
09:59And Copilot and Gemini, for organizations that are Microsoft houses or Google Workspace houses, they get a lot of the benefits of McKinsey's Lily, for instance, of being able to access and draw on the existing files and systems.
10:20And lastly, I'll mention Perplexity, which is one my team uses a fair amount in its research.
10:28That was one where a product manager in a big tech firm that already had authorized LLMs to use was also using Perplexity to cross-reference information.
10:42And so employees making choices alongside, or maybe even outside of, your guidelines is a very real thing, sometimes called shadow AI.
11:03So to summarize here, two things to keep in mind.
11:09On one hand, a lot of this may seem familiar by now.
11:13And if it doesn't, as George encouraged you, time to jump in.
11:19On the other hand, mid-level managers and executives that I meet with could be doing more here.
11:27The common use cases tend to be around writing, summarizing, maybe travel, but there is so much more that you can do.
11:36So I encourage you to keep expanding in level one, even as you're looking at where to go in level two or three.
11:45All right.
11:46So on to level two, and this is where, as George mentioned, we're looking at these specialized roles and tasks.
11:54We'll look at a few of those now.
11:55So starting with coding and data science; these were among the early adopters of Gen.AI.
12:03You are seeing developers using it for code, finding useful libraries, et cetera, and a lot of stats in terms of more productivity, as well as, and I find this an interesting one,
12:17a lot of developers finding that they're happier about the work, that now they have a partner developer that they get to work with.
12:25They don't stay stuck on things.
12:28And they are able to address tickets that may have been sitting around for a couple of years because there wasn't a good way to solve it before.
12:37And now, with help, they're able to do that.
12:40One quote from our research in the spring, Gen.AI can mirror a junior analyst or engineer that works 24-7 and improves with better prompting.
12:52So that speaks to the impact on this data systems architect's coding workflows.
13:01And we're noticing this, of course, in headlines and in jobs around the world in terms of AI tools reshaping the coding workforce and, in some ways, bringing a lot of uncertainty with it.
13:19Some other early uses, customer service and sales, being ones we were seeing a lot, as well.
13:29So Amazon Pharmacy using it, using LLMs to support customer care representatives.
13:36CarMax for summarizing reviews.
13:39And they found that for those reviews, I think it was more than 5,000 reviews, instead of taking multiple worker-years to go through them, it took just a few hours.
13:54Cisco generating personalized scripts for sales calls.
13:58And then John Hancock with the chatbot assistants handling common queries.
14:04And then the more complex ones being routed to human agents.
14:10And that's a theme in general that you see in Level 2 when working with LLMs: it's humans and AI together, finding the places where the AI can support the humans, and having the humans oversee the work of the AI.
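As an aside for readers who want to see the shape of that "humans and AI together" pattern, here is a minimal, hypothetical sketch in Python: routine queries get an automated answer, and anything outside the model's comfort zone is escalated to a person. None of this comes from the companies named above; the keyword-based classifier and the escalate() stub are illustrative assumptions only.

```python
# Illustrative Level 2 routing: automate the routine, escalate the rest.
# The keyword classifier stands in for whatever intent model or LLM an
# organization would actually use.

ROUTINE_TOPICS = {"password reset", "order status", "store hours"}

def classify(query: str) -> str:
    """Rough stand-in for an intent classifier."""
    q = query.lower()
    return "routine" if any(topic in q for topic in ROUTINE_TOPICS) else "complex"

def escalate(query: str) -> str:
    # In a real deployment this would hand the conversation to a human agent.
    return f"[human queue] {query}"

def handle(query: str) -> str:
    if classify(query) == "routine":
        return f"[bot] Automated answer for: {query}"
    return escalate(query)

print(handle("I need a password reset for my account"))
print(handle("My claim was denied and I want to appeal"))
```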
14:33And then this research by my colleagues here at MIT and elsewhere found that in a call center, the tool increased productivity by 14% on average, including 34% for novice and low-skilled workers, which, with the turnover in call centers, is really helpful for getting folks up that learning curve.
15:00All right, then finance and auditing. I was not expecting to see a lot in this area, but we did see that adoption is beginning.
15:12So one international energy company using it in the auditing group to suggest mitigations, help rewrite an audit report.
15:21And then an interesting piece in the Wall Street Journal about Amazon and the finance function, the uses there, leading to improved performance and a shift of focus to work that involves more critical thinking.
15:35And then this one, I know George loves to talk about this, would you take this one?
15:44So what Dentsu is doing is a really interesting example, and it really combines level one and level two.
15:51It also really talks about something that's coming up in the chat, which is how do you orchestrate the fast movers and the slow movers?
15:57You know, you can think about orchestrating those from unit to unit, but we can think about doing it also within particular organizations.
16:06So Dentsu is an advertising and creative company, and they do the kinds of things you'd expect: a lot of short projects and long projects where you've got to plan those projects, a lot of writing and graphics, and a lot of iterating to come up with something that works for the client, not just for themselves.
16:21And they found an opportunity to use generative AI in many different places.
16:27You can imagine this might have felt threatening to many of the people in this world.
16:31And what they did is they started off in a way to say, let's not make this threatening.
16:34Let's make it better for people.
16:36So what they did, first of all, is they just did level one.
16:39They created a set of tools and a set of practices that they said, have fun with this.
16:43Go do whatever you feel like is the right thing.
16:45But what we want you to do is get together on a weekly basis to share what you're learning so that if you learn something cool, tell other people about it.
16:56Now, at the same time they did that, the technology people were also building on capabilities and things that could happen.
17:02But what they learned from these office hours were two things.
17:05Number one, you didn't need to wait.
17:07You could already do things to make yourself productive.
17:09Number two, though, some of these are really good ideas.
17:12So if they saw two or three different approaches to the same problem, now the technical people could come up with a common approach that they could make as an easy task for people to do.
17:21So the level one work that they used to get people comfortable, to reduce some of the fear, then moved into level two things that started to transform the way the organization operated.
17:32So, for example, now with budgets, you push a button, your budget happens, and then you tweak it.
17:37Same thing with schedules, proposals, these kinds of things.
17:43And the other thing they learned is the ideating and visualizing in a room, they no longer had to say, hey, come back in a week.
17:50We're going to have some more creative stuff for you.
17:52They said, let's do this together, and let's figure out what works, completely changing the process with the clients.
17:57That worked better for the clients and also for the people in the organization.
18:00So this is an idea that level one led into level two that's leading to really interesting things for the organization.
18:08So let's go on to the chat.
18:10What are you doing in level two in your organization?
18:14I'll pull up the chat screen here.
18:16While we're waiting for folks to enter their responses about implementations in level two, I noticed a question in the chat about the difference between generative AI and regular AI.
18:32And that's a great one to touch on. AI is the largest category; within that you have machine learning, within that you have deep learning, and within that is generative AI. With generative AI, the key word there is generative.
18:49So you'll see the LLM generating text, generating images, generating voice, video, etc.
18:56The challenge is that now people are using the terms very interchangeably.
19:04So you really have to pay attention when someone is talking about AI.
19:08Are they referencing Gen AI specifically?
19:10And the other term you'll hear people using is large language models or LLMs, such as ChatGPT, Gemini, Claude.
19:18And sometimes they may mean Gen AI.
19:22Sometimes they might mean a combination of them.
19:24And yes, so it does take a little attention.
19:29We are primarily talking about Gen AI, but sometimes you'll hear us talking about companies using a mix.
19:35George, anything you want to add on that?
19:38I thought that was great.
19:42The key point that I want to reinforce that Melissa said is just because we're all saying the word AI doesn't mean we all mean the same thing.
19:49So it's worth it to develop some shared understanding, make sure you're talking about the same things.
19:54Thanks, everyone, for entering in the chat what you are doing in level two.
20:00So we see things from cybersecurity assessments, very nice, and dashboard creation, agents handling customer service calls.
20:13Perfect.
20:13So these are examples we definitely saw in our research.
20:16Some things that we did not see is, for example, vehicle service maintenance invoice analysis.
20:22That's very interesting, Alexander.
20:24Thanks for sharing that.
20:27Right now I'm in the middle of a car repair,
20:29and this would have been very nice instead of calling three different people and getting three different answers.
20:33Yeah, I also see some folks mentioning agents.
20:38So stay tuned on that.
20:40We'll have another question later about agents.
20:44So and then we have Bart automatically processing invoices.
20:47Now, it's interesting you said without losing the human touch.
20:50So if you could, Bart, say a little bit more about how you're balancing replacing people versus keeping the human touch in there.
20:58As you're talking about writing user stories and not only helping in the coding process, but actually writing the story cards, too, which is very interesting.
21:07Thank you for sharing these.
21:11So let's jump on to level three, and please keep sharing these things in the chat, because we're all learning from these examples.
21:20So level three, we move to the higher risk, more worrisome kinds of things that are happening.
21:31And these can require a whole lot of capability development.
21:35They can require a whole lot of risk management.
21:38And that's why companies are taking a careful approach to get up to this stage.
21:43Now, we do see companies doing things in level three.
21:46Gen AI features in products and services is one place we're seeing a lot of this.
21:51We're starting to see more direct engagement.
21:53And, you know, the question is whether it's the very simple,
21:56"OK, I need to change my password," we know how to do that,
21:58or the more complicated engagements that are happening here.
22:01And we're starting to see, but not yet as much, the real transformation of the underwriting process or the regulatory process.
22:08It's starting to happen, but we're not seeing as much of it yet.
22:11So just to give you some examples here, here's Pentti again, who we saw earlier.
22:17What he's being really clear about, and our other informants are being very clear about it also, is that we're not going to see a Gen AI-only solution for these level three problems.
22:27What we're going to see is level three solutions that include Gen AI and other things in there: technologies, tools, and people.
22:35You're going to look at the process and figure out, among those tasks, where is Gen AI a good opportunity, a good solution?
22:41Where are the other things?
22:42And also, how are we going to integrate across?
22:44And as Melissa gets into the agentic piece, that integration also becomes much more possible than it was before.
22:52Remember what the numbers are saying: the last numbers I saw from ISG were that only 15% of successful bench pilots ever scale.
23:02So that low-hanging fruit is maybe not as low as it looks.
23:07These are tough challenges and companies are making progress, but it's not happening everywhere yet.
23:12So let's look at what Adobe's doing.
23:14One of the things we are seeing a lot is software vendors adding this to their tools.
23:20And this is really good to see because, Gen AI can do amazing things in that case.
23:24So Adobe, for example, they can help with rapid content creation.
23:29And the kinds of things you can do to customize the graphics and other elements that you're doing, it's really phenomenal.
23:37Adobe and other tools.
23:38But then they start to move towards level three.
23:41Can we help use this to help manage your content?
23:44Find, reuse, and share content with governance rules applied.
23:50Can they start to manage your campaigns for you?
23:53Can they deploy the right assets?
23:55These kinds of things start looking much more like level three.
23:57And we're seeing this in more and more software companies.
24:00So, for example, SAP, they now have chatbots that can do a lot for you.
24:05And they're moving beyond the chatbots to actually make decisions and do work for you.
24:11Same thing in Workday.
24:13So if you have certain tools that you're already using, one opportunity you have is to start activating the level three features that they have.
24:25Now you want to do it safely, but at least they've handled part of the problem for you.
24:30And where you can go is things like this.
24:33Lemonade sells insurance.
24:34And they're selling 98% of their policies.
24:38And they're paying out 50% of their claims from start to finish without any human intervention.
24:43Now, Lemonade has a business model that is more amenable to this kind of work.
24:49But we're starting to see this in many companies, especially the simpler work being done just automatically.
24:55This idea of not only taking the information from a form, but actually making decisions and paying out claims.
25:02We're starting to see more of that in organizations.
25:07So, and what's happening, for example, is organizations that are starting to use agents, and Melissa will talk more about that, where the agents can start to say, okay, we can figure out whether this is an easy problem or not.
25:22Let's look at underwriting insurance policies or something like that.
25:26And where we agree that this is right, we can take action.
25:30We'll recommend the action.
25:31And where we don't agree it's right, we'll make sure a human continues to be part of that loop.
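As a hedged illustration of that "act where we agree, keep a human in the loop where we don't" idea, here is a toy decision gate. The threshold, the amount cutoff, and the model_score field are all hypothetical, not anything Lemonade or the other companies have described.

```python
from dataclasses import dataclass

# Illustrative policy: act automatically only on small, high-confidence cases.
AUTO_APPROVE_SCORE = 0.95
AUTO_APPROVE_LIMIT = 1_000.0

@dataclass
class Claim:
    claim_id: str
    amount: float
    model_score: float  # upstream model's confidence that the claim is valid

def decide(claim: Claim) -> str:
    """Take action only where the model is confident; otherwise keep a human in the loop."""
    if claim.model_score >= AUTO_APPROVE_SCORE and claim.amount <= AUTO_APPROVE_LIMIT:
        return "auto-approve"
    return "route to human adjuster"

print(decide(Claim("C-001", amount=240.0, model_score=0.98)))    # auto-approve
print(decide(Claim("C-002", amount=8_500.0, model_score=0.98)))  # route to human adjuster
```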
25:36So this leads on over to the level beyond, the going-forward part.
25:41And this is where, Melissa, you get to take over again.
25:45Great.
25:45All right.
25:46So going forward, you may have heard the word agents this year.
25:53And Jensen Huang of NVIDIA is optimistic that AI agents will become the next big thing for artificial intelligence.
26:01"Agents" is predicted to be the word of the year.
26:05But five days ago, Andrej Karpathy, co-founder of OpenAI, said it's more like the decade of agents, that it will take that long to really implement and see the full effects of agents.
26:23Okay.
26:23So we're going back to you for the chat in terms of what you're doing for agents.
26:27I know some people named agents in level two, but feel free to rename them or bring ones you didn't mention into the chat now.
26:40I'd love to hear what you have implemented.
26:49Meanwhile, I'm catching up with all the comments on level two.
26:52It's going to be fun to read these later, too.
26:54Thank you for all that.
26:55Yeah, I really like to see the engagement here.
26:59We're trying to respond to a few of them, but we really look forward to digging into them later.
27:11And we also have some people that say we're still trying to get off the block, and that's totally fine.
27:16You know, there's nothing wrong with that.
27:19Just make sure you get off the block.
27:20Yeah.
27:21Yeah, there's a company I'm talking to right now about doing a session for their C-suite and direct reports, and they are looking at the block.
27:33They have not really launched yet.
27:36So I look forward to being part of that.
27:38So let's take a look at some of the level threes, and I'm trying to remember what we asked.
27:44Oh, "planning my holidays."
27:45Okay, here we go.
27:47Complete automation of a core marketing team, copywriter, all those people.
27:52This is really interesting.
27:53And one of my students in a class on AI leadership has done that very same thing.
28:01He has a small 40-person marketing organization, and they are really automating most of their stuff so that they can get more value out of the human elements that happen in there.
28:12And he's finding that people are happier that way.
28:15Yeah.
28:15All right, shall I move us along?
28:20We see a lot of "not really" and a lot of "none" at this stage.
28:25And that's kind of what we were hearing when we did our dozens of interviews also.
28:30Yeah.
28:31For me, my team, it was a distinct shift between the spring research and the summer research, that in the summer, agentic was something that people were paying attention to a lot more.
28:41I have a stat from that in a moment.
28:45And please, by all means, continue in the chat.
28:48I encourage you also to reply to each other.
28:52Meanwhile, I'll point out that the big vendors have released agentic capability within the LLM.
28:59So within Claude, for instance, and within ChatGPT. And Manus, that was certainly very much in the news, very talked about when Manus came out, the agent capabilities there.
29:11So you see them within the LLMs as one option.
29:16Just to back up for a minute here in terms of what do we mean by agent?
29:21So autonomous task execution.
29:24So to take it to a human model for a moment, think about maybe back in the day when you worked with a travel agent, a human travel agent.
29:34And so you would call up this travel agent and tell them the trip that you were interested in, your budget, your timing, various preferences that you had.
29:43And then the travel agent would go off by themselves and do all of that exploration for you, depending on your particular request and how familiar they were working with you.
29:56They might come back to you to check on some of those preferences, or they might just go ahead and make the bookings for you, presenting the whole trip to you when you're done.
30:07All right. So that is acting agentically, acting as an agent for you.
30:13And so technology also, when is it able to act autonomously on your behalf?
30:20So the one that I use the most is actually in ChatGPT's Deep Research, and I give it my research request, my research specifications, much like I would give them to one of my human research assistants.
30:36And it goes off, looks things up, checks them out, decides what else to look up without me having to look every step of the way.
30:44I can open it up and see how it's doing as it's in that process, but it is acting agentically on my behalf.
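For readers who want to see what "acting agentically" looks like in code, here is a toy plan-act-observe loop in the spirit of a deep-research assistant. The llm() and web_search() functions are stubs standing in for whatever model and search tool an organization has authorized; this is a sketch of the pattern, not any vendor's implementation.

```python
# Toy agent loop: the model, not the user, decides the next lookup.

def llm(prompt: str) -> str:
    # Stand-in for a call to an authorized model; returns canned text here.
    return "DONE" if "look up next" in prompt else "Draft report based on notes."

def web_search(query: str) -> str:
    # Stand-in for a web or knowledge-base search tool.
    return f"Findings for: {query}"

def research_agent(request: str, max_steps: int = 5) -> str:
    notes = []
    for _ in range(max_steps):
        next_query = llm(
            f"Request: {request}\nNotes so far: {notes}\n"
            "What should we look up next? Reply DONE if finished."
        )
        if next_query.strip() == "DONE":
            break
        notes.append(web_search(next_query))
    return llm(f"Write a report answering: {request}\nUsing notes: {notes}")

print(research_agent("Summarize how competitors price their entry-level plans"))
```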
30:54All right. So multi-agent systems or agentic AI, and these terms do get sort of used interchangeably.
31:02I have noticed people using AI agents.
31:06I've noticed them using that term pretty broadly to apply to a lot of things.
31:09So again, another area for potential confusion.
31:12So with a multi-agent system, you have an AI manager overseeing specialized agents.
31:19So if we go back to your vacation, you don't want to just book your vacation.
31:26You also need to take care of maybe some shopping to do in advance.
31:29You need to arrange pet care.
31:31You need to arrange somebody to come water the plants.
31:34Whatever else needs to happen in order for you to take this trip, those are all things that need to be done.
31:40And with a multi-agent system, it is using individual agents for those different tasks.
31:47So an agent handling travel for you, an agent handling shopping for you, an agent handling the arrangement of pet care.
31:54I don't know if this exists yet as a virtual AI multi-agent system to make everything for your trip happen, but I suspect it's coming.
32:08So to give you that analogy on the human side.
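Continuing the vacation analogy, a minimal multi-agent sketch might look like the following: a coordinator (the "AI manager") splits the goal into subtasks and hands each to a specialized agent. The specialist functions here are trivial placeholders; in practice each would wrap a model plus the tools it is permitted to use.

```python
# Toy multi-agent system for the vacation analogy (illustrative only).

def travel_agent(task: str) -> str:
    return f"Travel agent booked: {task}"

def shopping_agent(task: str) -> str:
    return f"Shopping agent ordered: {task}"

def petcare_agent(task: str) -> str:
    return f"Pet-care agent scheduled: {task}"

SPECIALISTS = {
    "travel": travel_agent,
    "shopping": shopping_agent,
    "petcare": petcare_agent,
}

def coordinator(plan: dict) -> list:
    """The manager agent: route each subtask to the right specialist."""
    return [SPECIALISTS[kind](task) for kind, task in plan.items()]

for result in coordinator({
    "travel": "flights and hotel, two weeks in June",
    "shopping": "sunscreen and power adapters",
    "petcare": "cat sitter, June 1-14",
}):
    print(result)
```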
32:10Okay, then this is the data from my research among our interviewees.
32:19Yes, a small number, 30 interviewees this summer, and 37.5% of them were doing some form of agentic or semi-autonomous use.
32:32Again, this was a big jump from the spring when the word rarely came up.
32:37And that contrasts to 62.5% really staying with bounded use.
32:45And as I mentioned, this was a big step up in terms of the experimentation we're seeing.
32:51And it is quite a range.
32:52So looking here at an illustration from Gartner and seeing agentic AI as a continuum.
33:02So on the left-hand side, you see the rigid, reactive, static at one end of the spectrum.
33:08On the right-hand side, the proactive, independent, evolving.
33:13So that spectrum and AI assistants on the lower end of that, but then AI agents at the higher end.
33:21And that when people are talking about AI agents, they really could be talking anywhere along that spectrum.
33:28So certainly when it comes to implementations that you're considering, you need to be more specific about exactly what your needs are.
33:36I will say that the AI assistant end of the spectrum is a great place to be experimenting.
33:43The deep research, for instance, or other easily accessible options for you.
33:48ChatGPT's agent mode, for instance, is another one where you can have it do research, create documents, many steps along a process acting independently for you autonomously.
34:03All right. So one of our interviewees, a sales VP at a big tech company: "I do these things with essentially an assistant where it uses agents in the background to invoke SAP SuccessFactors processes, transferring employees, providing increases, stock awards, and so on."
34:21And so starting to see some of that internal process transformation that George talked about in level three happening in this organization.
34:32And one more to share with you, everybody's trying to crack automated research, we might literally have a machine that generates ideas soon.
34:41This from a portfolio manager at a hedge fund.
34:45So certainly plenty of enthusiasm, and also a certain amount of skepticism at the same time.
34:53And my advice: don't rush, don't rush, but do, and this is our overall message, wherever you are, be looking to that next mountain.
35:05What is the growth that you can move into next?
35:07So if you're just in level one, maybe the AI assistants would be the exploration area, but build up the competency before you go for the larger swings.
35:19So a couple of key things to think about: look closely at the work, how standard is the work, how varied is the work, and think about whether agents or something else will be suited to it.
35:35So thinking about the work and the needs.
35:37Yes, this is potentially the year of the agent, or even the decade, but the solution for what you need might actually be simpler than an agent.
35:50So in the fervor of agents, do remember to look still at rules-based automation, at predictive analytics, and at LLMs.
36:00So which one of those forms of AI to use is something to consider.
36:06And lastly, McKinsey, in some coverage on this, talked about companies thinking about the benefits of reusable agents.
36:15So avoiding very single task, single process agents, and looking for agents that could plug in in multiple ways for the company.
36:26All right, so taking us to the wrap-up.
36:30So I want to come back to this quote, because I'm working with a lot of companies, interviewing a lot of people about what they're doing here and what they're doing with agentic AI.
36:39And the answer I hear very often now is, oh, yeah, we're doing that.
36:44We will have something for that soon.
36:47And so just to highlight: it may be the year of agents or the decade of agents, and everybody's going after it, but really getting it scalable is a challenge.
36:59And people will continue to work on those challenges.
37:02So as a wrap-up, really, there's a gold rush mentality going on with generative AI right now.
37:10And that's a serious push with the hype that's happening to get something done now and have something that the company can talk about.
37:18And Prem Natarajan from Capital One said it very clearly here.
37:25Building the right strategy doesn't go hand-in-hand with this gold rush mentality that we have to mine it now.
37:31Figure out instead how to do it thoughtfully and responsibly.
37:35Build the scaffolding to bring everybody along.
37:37And that's really what we found in the research and why we published this framework,
37:42is this framework can give you the scaffolding to build up, each step of the way, the employee engagement, the risk management, and the tools,
37:51and also the answers to move along to each step of the way.
37:55So don't feel bad that you're not yet at level three,
37:58but certainly think about how are you going to start climbing those mountains to get to the goals that you want to go for.
38:04I do want to remind everybody that also this is not a hammer looking for nails.
38:09Look for problems that need to be solved that this tool might be a useful tool for,
38:15because some of the problems to be solved may be better solved by a different tool,
38:19or even by leaving them as manual efforts.
38:22So back to what Pentti Tofty said,
38:25the answers to these bigger problems will be combinations of Gen AI, other AI, traditional IT,
38:31even non-IT tools, and people executing the right tasks in the right way, synchronized the right way.
38:37So as we think about building your generative AI strategy, think about it this way.
38:42First of all, think about some key pioneers.
38:45Who is already doing this, whether they're telling you or not,
38:48and make it safe for them to tell you and to share their ideas.
38:53Number two, think about where you are on the slope and how you're going to move forward on there.
38:57You do want to build some management buy-in.
38:59You know, this is risky.
39:02You want to make sure that the managers above you understand what you're doing
39:06and how you're doing it in a safe way while still trying to make progress.
39:12Then start these small T transformations.
39:14Start to climb those mountains, and then think about how you're going to climb the next mountain.
39:18And the way to start that is not to force it on everybody.
39:21Find people who want to work with you and help them do better for themselves and for the organization.
39:28But you do want to be careful in your promises and your activities not to get ahead of your capabilities.
39:34That idea that 15%, only 15%, only one out of six successful bench pilots actually make it to scalability
39:43is a strong warning that maybe what we need to do is make sure that we've got some capability development,
39:49not just a cool bench test.
39:51And that's where the green comes in.
39:53What are you doing for your data investments, your skill investments, the models you're building?
39:58How are you moving through the steps?
40:01And the other thing to be really clear about, and it came up in the comments also,
40:05some parts of your organization may be in a better position to move quickly than others.
40:09And that's an opportunity because you can learn, you can get around the idea of this won't work here,
40:15and you can also make some mistakes there so the people who are a little more nervous
40:20or need to go more slowly can learn from that.
40:23And then finally, don't forget, we're talking about a mix of Gen.AI and other methods,
40:27not a full Gen.AI solution at level three.
40:30So we do have this, the Gen.AI strategy and governance toolkit that we put together
40:37with MIT Sloan Management Review, and this is available.
40:40It's got some good frameworks and some good articles and other things to really help you
40:45figure out not only what is your strategy to get started, but how do you think about
40:49the organization-wide governance and the capability building you need to do
40:53to reach those more scalable, more level three AI applications when you are ready?
41:04We would ask you also this one.
41:06Please stay in touch.
41:07We love talking to people.
41:09We love hearing from you, and hopefully we can help you also.
41:12So these are ways to get in touch with us on LinkedIn.
41:16Over to Abby.
41:19Melissa and George, thank you so much.
41:21So much good information, a lot to absorb, and we're getting a lot of great questions
41:26from the audience.
41:27Not surprising.
41:28We've got a terrific audience.
41:30So I'm going to start with this one that sort of gets to the fundamental question
41:35of your model.
41:38And so this is from Daniel.
41:41He says, can you give an estimate of how great the productivity and other gains are
41:46at each of the three levels?
41:48You don't have to answer that yet because the real question, I think, that comes out of
41:54this comment is more about the model itself.
41:56So it says, I see and recommend exactly the same steps as you brought up, but I also think
42:01there is some merit to Sangeet Paul Chowdhury's statement that if you just focus on individual
42:06productivity gains, you will only cement and work along current workflows rather than reinventing
42:11them.
42:11And I think the question that's implied in that statement is, are companies missing out
42:17in some way by focusing on small T transformations?
42:20So, Melissa, okay if I jump on this one and take the answer?
42:23So I think there are two things you want to think about.
42:25Let's take the second one first.
42:27So if you think about the opportunity space as a very rugged landscape where you're climbing
42:32hills and trying to reach the best peak that's out there, certainly what Sangeet said and
42:37what we all say as researchers on innovation is just climbing the slope, taking steps on
42:43the slope that you're on is not enough because there may be another whole mountain over there
42:48you don't even see.
42:49And so that's why in this three-step process we've talked about, individual is only step
42:54one, and it's just an idea of building capability, helping your people get comfortable, but step
43:00two is not looking at individual tasks.
43:03Step two is looking at reinventing roles, reinventing well-defined tasks that we can manage.
43:11And then step three is even better, just transforming the whole thing.
43:14So certainly Sangeet is right.
43:16He's a really smart guy, but that's not what we're saying here.
43:20That first peak is just the start.
43:22So you can do the exploration and the exploitation that academics like to talk about.
43:26Now, the second one is, do we have ideas on how to measure the performance gains in each
43:31of these tasks?
43:34Every time an innovation comes out and every time a technology comes out, people say, tell
43:39me their ROI.
43:40And the answer that I've seen in the research that I've done is always the same.
43:45The ROI will vary depending on what task you're trying to improve and how you're improving
43:51it.
43:51So it's hard to say, here's a general ROI.
43:54Now, we certainly had the Microsoft stuff on how the general ROI on coding earlier.
44:02But the way I would think about it is, what are you trying to improve?
44:06Let's look at the before and the potential after, and there's the ROI you're going to get.
44:10Great.
44:13So Mohit raises the question, sort of linked to what you just said, about the, you know,
44:20there's this data point from MIT that everybody cited when it first came out, about 95% of
44:28Gen AI initiatives don't deliver.
44:31Can you speak to that?
44:34You know, do you think that we're in this Gen AI bubble?
44:38And to what extent is it a bubble?
44:41How much of it is real?
44:43And do you think that this bubble can disrupt markets just like the dot-com bubble did?
44:51A couple, two questions in there, I think.
44:53Yeah.
44:53Yeah.
44:54I think, so we have both the usage question and the markets question.
44:57And in terms of usage, so one thing to remember is the Gartner hype cycle.
45:04So to refresh for folks, if we think about the expectations around a technology, and then
45:10we think about time, that once a technology comes out, it rises up to the peak of inflated
45:17expectations.
45:18Then it goes into this trough of disillusionment.
45:20But after the trough of disillusionment, you continue on to the plateau of productivity
45:25and then on to your real measures.
45:32And so that can be going on.
45:36And what I have found is where people think we are on that curve depends on the person and
45:42depends on the organization.
45:43So some have really come through that trough of disillusionment and are working up that
45:51slope of enlightenment toward the plateau of productivity.
45:55There we go.
45:56Slope of enlightenment, plateau of productivity; they are moving up that.
46:00So your perspective will change depending, your view on that will change depending on your
46:06experience, your organization.
46:07And in terms of the study, one, the headline that was taken from the study was one piece
46:15of it.
46:16And it was, I think, well chosen to be a very attention-getting headline.
46:21The study overall was actually a pretty small number of interviews, not that many more than
46:28I've done in my research this year.
46:30And also, it was based on reports, annual reports from companies, and did they mention AI or not?
46:42So I think it's much more nuanced than the headline would suggest.
46:47My interpretation is that there is a percentage of failed pilots.
46:52They are, it's nothing as high as 95%.
46:55George would have a better sense of kind of what is a normal percent with any new technology
47:00innovation and that the experimentation required here is going to mean that there is a failure
47:07rate.
47:08And a failure rate does not mean that you're not getting to good things.
47:12It means you're figuring out where the good things are.
47:14You're finding all of the needles in the haystack.
47:18So George, additional thoughts here?
47:22I agree with Melissa.
47:24You know, what you want to do when you look at these numbers is look at what is their source
47:29of data, who are they talking to, are those people, what kinds of roles do those people
47:35play, and how are they interpreting the idea of a successful or failed initiative?
47:42And so for the methodology that that team used, their numbers are fine.
47:51Other people use other methodologies, other informants, and they get different numbers.
47:55What we do find, though, is there's a large number, right?
47:5870, 80, 85, in that case, 95%.
48:01There's a lot of organizations are not getting the value that they want, and that's very typical
48:06early in the hype cycle.
48:09So one thing you could think about is, are they making good choices on what to try?
48:16Because that's where we go next, is thinking in the governance angle, how do we choose opportunities
48:21that are both high in value and high in feasibility, and then you're more likely to be successful?
48:27The second is a change management, of course.
48:29Are we doing this as a bunch of technology people saying how we're going to change how
48:33you market, or are we doing it with the marketing people to help them change their own processes?
48:38Great.
48:39So Yaroslav asked, you know, so getting specific about some of your research, he asked, are
48:48the examples you give, particularly in fraud detection or finance, in wide production?
48:54And if so, what is the error rate for these, and what are the resolution mechanisms when
48:59and if mistakes do happen?
49:03Oh, I wish I knew.
49:05I was going to say.
49:06You know, maybe that's the next study we'll do.
49:09Certainly, you know, one thing to keep in mind here is that these models do make mistakes.
49:14Now they're making fewer mistakes as you put in RAG and other opportunities to make sure
49:20that the hallucinations are less.
49:23But on the other hand, human workers make mistakes too.
49:28And the way we handle human workers making mistakes is we put the right level of oversight,
49:32review, and controls in place so that very few human mistakes turn around to hurt
49:39the company.
49:40And we want to think the same way with the AI.
49:44How do we have the controls and the reviews in place to avoid those mistakes?
49:48Even if it's better, there is a typical desire among us as humans to really play up the stories
50:00when the robots make mistakes.
50:03And robots are going to make mistakes.
50:05People are going to make mistakes.
50:06How can we manage them so they make fewer mistakes that hurt us?
50:08Great.
50:11On the people front, you know, there's a lot of unknowns around how this is affecting people
50:22that are in your company, the people you're bringing into your company.
50:25And so Michael said a lot of initial deployments in a scaled approach may remove skill and experience
50:32building tasks from the education process of employees, especially people new to the industry
50:36or the process.
50:38How do you deploy solutions while maintaining the foundational skills needed in your future
50:43executives?
50:44Should access to AI solutions be based on established core capabilities?
50:51So I grapple with this as somebody teaching communication or this semester teaching communicating
50:58with data.
51:06And my students, MIT undergraduates, have access to an assistant that, three years ago,
51:10none of them had access to.
51:10And that has really changed both the expectation for what they need to be able to do in the
51:14future workplace, but also what they can do in my classroom.
51:18And I will say that, for me, I have not solved the question of how I teach
51:23them to do those higher-level skills when they don't have to do as many repetitions on the
51:30lower-level skills.
51:31I'm trying out different things, but I have not reached that to my satisfaction.
51:35And I have not heard in my research of organizations that seem to have really solved this either.
51:41Some of the things that I think about are figuring out how to teach people to do evaluation.
51:48And so I mentioned that my students have assistants now.
51:51How do I teach them to manage assistants?
51:53And like I said, haven't cracked it yet, but experimenting with that.
52:00And that is absolutely one of the challenges of our age now: how do we prepare
52:09to be the leaders, the managers of tomorrow when our process of getting there is changing?
52:15George?
52:16What Melissa said, right?
52:21It's very difficult.
52:22I do want to highlight, though, that I am of a certain age; to tell you how long ago I started
52:27work, pocket calculators were introduced when I was a young student.
52:32And there were some very smart engineers saying, if people don't remember how to use slide rules,
52:37they'll never be good engineers.
52:40I've never used a slide rule.
52:42I thought I was a pretty good engineer when it was happening.
52:44And the same thing, on the other hand, you know, we all rely on GPS and many of us no
52:49longer can find our way along on a paper map or remember where we're going.
52:53That may be a skill that we want to retain.
52:55And so thinking about which skills are okay to let go and which skills you want to keep, and
53:00really taking, this is a great opportunity for HR leaders or people working with HR leaders,
53:05to take a really systematic look at talent strategy and how are you going to maintain the
53:10skills that you need over time?
53:12The issue you raised is a real issue and we need some solid thinking to make sure you manage it.
53:18Well, here's maybe a little easier question from Madhav.
53:24Do you have a recommendation for the best way to adopt Gen AI tools?
53:28In other words, do you focus on one tool or multiple tools in the initial stages and then
53:33mature into one or two over a period of time so that you're not boxed in?
53:37You know, is that, is there experimentation, not just in the use cases, but in the tools themselves?
53:42And how do you see that evolving?
53:44Yeah.
53:45Yes.
53:45Right.
53:46Eventually you want to get to some kind of standards because that way you get better integration,
53:50you get better use of your money, but maybe you want to experiment.
53:54The other thing to think about is some tools are better for certain tasks than others.
53:57And so you want to do that experimentation, but ideally you want to settle on some standards,
54:03at least at the task level, to gain all the values you get from standardization and integration.
54:09Yeah.
54:09I like Ethan Mollick's Substack post.
54:12If you're not familiar with Ethan Mollick, his Substack is one useful thing.
54:16He's one of the thought leaders in this space, and he has a post from a little earlier this year
54:23about leadership, the lab, and the crowd, and these are three different
54:29elements you need to think about when you are working to implement, because there's a
54:34lot of testing and experimentation that goes into answering your question.
54:38And it's going to vary depending on the organization.
54:41So having the structure of leadership, lab, and crowd is part of how you handle that process.
54:48That's great.
54:50We have time for one or two more questions.
54:55This maybe is too big a question, but Julio says, how do ethical and social responsibilities
55:01related to AI, how should those be addressed and implemented at those different levels?
55:07Are there different approaches to the ethical and social responsibilities at the different
55:11levels?
55:13I think I have to go now.
55:18That is a whole webinar on its own.
55:21Yes.
55:21Or a day or two days.
55:23Yeah.
55:24Okay.
55:26I'll just say one thing though.
55:27Well, certainly, as we've been doing research on culture in my research
55:33teams, on what cultures you need to adopt this AI,
55:36one culture change you need is for people to gain some humility, that the tools
55:41may actually be smarter than them in some places, and that's okay, right?
55:44You can work with them.
55:45But the other one: this is really the time to bump up what you do with your culture,
55:50your culture awareness and your culture improvement, the ethical side of things, the empathy side
55:56of those things.
55:57Because it's very easy to do things that may or may not be good for people.
56:00And if you can make sure people are considering those, you'll make better decisions.
56:05And, you know, you talked about that robots make mistakes, humans make mistakes.
56:13So when you think about the sort of quality control and governance and managing that mix,
56:20are you seeing models emerge where people have struck that balance well?
56:26And what does that look like?
56:30Well, certainly things like retrieval augmented generation are helping a lot because that
56:36way the model is checking itself against sources before the answer goes out.
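For anyone unfamiliar with retrieval-augmented generation, here is a minimal sketch of the idea: retrieve relevant passages first, then ask the model to answer only from those sources so the output can be checked against them. The word-overlap retriever and the prompt string are illustrative assumptions; real systems use vector search and an actual model call.

```python
# Minimal retrieval-augmented generation sketch (illustrative only).

DOCUMENTS = [
    "Policy 12: claims under $1,000 may be auto-approved after verification.",
    "Policy 48: all auto-approved claims are sampled weekly for human audit.",
    "Travel guideline: economy fares only for flights under six hours.",
]

def retrieve(question: str, k: int = 2) -> list:
    """Rank documents by naive word overlap with the question."""
    words = set(question.lower().split())
    ranked = sorted(DOCUMENTS, key=lambda d: -len(words & set(d.lower().split())))
    return ranked[:k]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))
    # A real system would send this prompt to an LLM; grounding the answer in
    # retrieved text is what lets humans (or another model) check it against sources.
    return f"Answer using ONLY these sources:\n{context}\n\nQuestion: {question}"

print(build_prompt("When can claims be auto-approved?"))
```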
56:40But the other things you want to do is your standard oversight processes, right?
56:43Where are wrong decisions being made?
56:45Where are we seeing bias in what's happening?
56:48And how do we then improve things beyond there?
56:52So like I said, the standard things you should be doing with your people anyway, you want to
56:56make sure you're doing this with the models too.
56:58You can train a model really well and find out it actually has really good performance,
57:04but on a very biased data sample, and then you've got problems.
57:07Yeah, I would just add there that we've been talking about bringing AI into our human processes
57:14and how do we change what have previously been just human processes.
57:19And what George just spoke about was actually we need to change our human processes because
57:25we're bringing in AI.
57:27So with new tools, with Gen AI, we have to think about our evaluation process in a way that
57:33we didn't before.
57:34Or now that you can produce hundreds of slide decks in very little time, what are we doing
57:42with a hundred slide decks?
57:43Do we need all of these slide decks?
57:45What are we here for?
57:48And yeah, so part of what I am drawn to with this technology is all of these human aspects
57:54that are called into question and that we have to think about and rethink how we work, how
58:00we interact.
58:01And yeah, it can be a confusing or scary or daunting place, but it's also an exciting place.
58:11And it's a technology unlike any that we've interacted with before.
58:17Yes, very, very interesting times that we live in right now.
58:21So lots to explore and learn and to help our organizations move forward.
58:27So George and Melissa, thank you so much for this really fabulous presentation, engagement
58:33with the audience.
58:35Really appreciate your spending time with our audience to do this.
58:39And thank you to our sponsor, Celonis, for sponsoring this event.
58:44Have a great rest of your day, everyone.
58:46Bye, everybody.
