Thriving on the AI Frontier: Product Strategies for Growth and Innovation
Category: 🤖 Technology

Transcript
00:02It's Startup Thursday,
00:04and this is stage 4, and our session
00:06asks: how do we align AI
00:09integration with our product
00:11strategies?
00:11What are the best practices
00:13for developing intelligent,
00:15innovative, and
00:16user-centric products
00:18with AI?
00:19To answer these questions,
00:21we welcome our panel
00:22and our moderator
00:24from EY, Hanna-Jessica Bax.
00:26Thank you.
00:53Hello, good afternoon,
00:55and welcome.
00:55We're hoping to have
00:57a very interesting discussion
00:59with my amazing panel.
01:00I'm not going to read
01:01all the titles,
01:02because you can see them
01:04on both sides
01:05next to me.
01:07I think that,
01:08rather than introducing
01:09each of them myself,
01:10I'm going to ask them
01:10to introduce themselves
01:11and to answer
01:14one simple question:
01:16what is their favorite
01:18current activation
01:19of AI?
01:20So,
01:21I'm going to let them
01:21tell you
01:22a bit more
01:25about AI.
01:27I've worked
01:27at EY,
01:28where I'm responsible
01:28for what our clients
01:29need around AI.
01:29[inaudible]
01:37So,
01:43the gentlemen
01:43next to me
01:44know far more
01:45about AI
01:45than I do.
01:46J'ai learned
01:46a lot
01:47along the way.
01:48So,
01:49why don't I start
01:50with you, Robert?
01:52The question was:
01:53my favorite activation
01:54of AI.
01:54Yeah,
01:55maybe through that
01:55say who you are
01:56and what you do.
01:57Well,
01:57I'm Robert Chatwani,
02:00president at DocuSign.
02:01I've been with the company
02:02for about a year
02:03and just passionate
02:04about technology
02:05and look,
02:06we're going to talk
02:06a lot about
02:07probably some
02:08pretty serious
02:08and exciting
02:09AI developments,
02:10but one of my favorite
02:13things is when technology
02:14becomes very human
02:15and very personal.
02:16I recently discovered
02:18a service called
02:18Mindy.ai,
02:20which is a personal
02:21chief of staff
02:22for anyone
02:23and I introduced it
02:25to my 14-year-old son
02:26and I told him...
02:28You needed a chief
02:28of staff.
02:28I said,
02:29your father
02:30has gotten access
02:32to a personal
02:33chief of staff
02:33for you,
02:34and now his emails
02:36and his school homework
02:37all gets beautifully summarized
02:39in ways that he couldn't
02:40have imagined.
02:40but, you know,
02:42it's a funny example
02:43but anytime
02:44I think we can make
02:45technology accessible
02:46in ways that are practical,
02:48it's an entry point
02:49into really
02:49a new frontier
02:51of how technology
02:51can unleash potential.
02:53So, there you go,
02:54your personal
02:55chief of staff.
02:55I'm going to steal that
02:56because my daughter
02:57turned 15
02:58and she needs one,
02:59otherwise I have to do it all.
03:00So, love it.
03:01Sachin.
03:02So, I'm the founder
03:04and chief wizard
03:04at Builder.ai.
03:05We are a composable software platform.
03:08Really think about us
03:09as Lego got married to AI
03:11to build software applications.
03:13You know, we think
03:15really the sort of
03:16the raison d'etre
03:17for the company
03:18was more and more people
03:19want to be digitally native,
03:21build software,
03:22be entrepreneurs.
03:23But building software
03:24is really hard
03:25and we're really
03:26taking that away.
03:27So, it's really voice
03:28to application.
03:29You just tell our AI
03:30what you want.
03:31My use case is actually
03:33probably a little similar.
03:35So, I sat one weekend
03:37and I said, look,
03:37I keep getting asked
03:38what were the conversations
03:39you had with XYZ company?
03:41And I'd be sitting there
03:42like many people
03:43searching my email
03:44trying to figure out
03:45who did I meet from EY?
03:46All the different conversations
03:48I had.
03:48And so, I sat there
03:50one whole week
03:50and I said, I'm going
03:51to solve this problem.
03:52And I wrote a Python script
03:53that uses Azure AI
03:55to basically summarize
03:57my conversations.
03:59So, that I can now type
04:00in any company name
04:01or any person
04:01and I get the whole history
04:02in two minutes.
04:03and so, that's my personal tool.
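The kind of weekend tool Sachin describes could be sketched roughly like this; the data shape, function names, and the Azure OpenAI call are illustrative assumptions, not his actual script, and only the local index/search part is exercised here:

```python
from collections import defaultdict

def build_index(conversations):
    """Index conversation summaries by the companies/people they mention.

    `conversations` is a list of dicts like
    {"participants": ["EY", "Jane Doe"], "summary": "..."}; in the real
    tool the summary would come from an Azure OpenAI call (see below).
    """
    index = defaultdict(list)
    for conv in conversations:
        for name in conv["participants"]:
            index[name.lower()].append(conv["summary"])
    return index

def search_history(index, query):
    """Return every summary mentioning the queried company or person."""
    return index.get(query.lower(), [])

def summarize_with_azure(client, deployment, text):
    """Hypothetical summarization step using the Azure OpenAI SDK.

    `client` would be an `openai.AzureOpenAI` instance and `deployment`
    your model deployment name; this function is not executed in the sketch.
    """
    resp = client.chat.completions.create(
        model=deployment,
        messages=[{"role": "user",
                   "content": f"Summarize this conversation:\n{text}"}],
    )
    return resp.choices[0].message.content

# Toy usage with pre-computed summaries standing in for the AI step:
convs = [
    {"participants": ["EY", "Hanna"], "summary": "Panel prep call with EY."},
    {"participants": ["DocuSign"], "summary": "Intro call on agreements."},
]
index = build_index(convs)
print(search_history(index, "ey"))  # -> ['Panel prep call with EY.']
```

With the index built once, "type in any company name and get the whole history" becomes a dictionary lookup, which is why it answers in seconds.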
04:05I loved your title
04:06by the way.
04:07Chief Wizard.
04:08I mean, I said, I need
04:09to have a new title.
04:10I love Chief Wizard.
04:11I'm going to take that one.
04:12All yours.
04:14Jean-Philippe.
04:14It's actually a wonderful tool.
04:16It's called Copilot.
04:17Oh!
04:17I was waiting for that.
04:18Yeah.
04:19We should hire you.
04:21No.
04:21My name is Jean-Philippe Courtois.
04:23I'm EVP and President
04:24of National Transformation Partnerships
04:25at Microsoft.
04:26What is that?
04:26I have the pleasure,
04:27after 40 years
04:28at Microsoft,
04:30of shaping partnerships
04:31with governments,
04:32enterprises, civil society,
04:33NGOs,
04:34impact entrepreneurs,
04:35to actually drive
04:36responsible innovation globally.
04:38One example of that,
04:39maybe to make it more concrete.
04:40A week ago,
04:41we had the pleasure
04:42to host President Macron
04:44in my home country in France,
04:45where we announced
04:46a pretty large cloud and AI
04:48infrastructure investment,
04:49in billions of euros,
04:50in the country.
04:51We also announced
04:51we're going to train
04:521 million people in France
04:54on AI to make it accessible
04:56and relevant
04:57for many people
04:58remote from jobs.
04:59And three,
05:00we're going to really engage
05:02with 2,500 startups
05:05across the country.
05:05My last scenario
05:06which I loved engaging on
05:08was in Cannes,
05:10just back from Cannes.
05:11And for the first time ever,
05:12Microsoft was partner
05:13of Festival de Cannes.
05:14Yeah.
05:14Happened to love movies
05:15for years.
05:16I love movies.
05:17And for the first time,
05:19I had discussion
05:19on a beach
05:20in a cafe
05:21with copywriters,
05:24producers,
05:25directors,
05:26using actual AI
05:27and copilot
05:29to go all the way
05:30from ideation
05:31to the script
05:33to basically
05:35the storyboard
05:38to actually
05:39the budget implication
05:40if you were to get
05:41tax incentive
05:41from this country,
05:42this region immediately reflected
05:44into post-production
05:46producing digital assets
05:47in 30 seconds
05:48that you can post
05:49on TikTok,
05:50whatever it is.
05:51and I could see
05:52the incredible engagement
05:53of those cultural professionals
05:55all over.
05:57So we may discuss fears
05:59and worries about AI,
06:00and there's a risk
06:01I'm sure we're going
06:02to touch on,
06:02but it was wonderful
06:03to see those people
06:04at the core of creativity
06:07engaging deeply.
06:08And actually,
06:08as we said
06:09on our booth,
06:10AI is not creative,
06:11you are.
06:12And I love the way
06:14people actually reacted
06:15engaging with AI.
06:17Great example.
06:18Andre.
06:19Hello.
06:20Yeah, my name is Andre.
06:21I'm the CEO and founder
06:22of Miro,
06:23the leading visual
06:24collaboration platform
06:25that serves more
06:26than 70 million users
06:27and more than 90%
06:29of the
06:30Fortune 100 companies.
06:31So I have the simple title
06:33of CEO.
06:34Maybe I need to be
06:35more creative.
06:36And I think
06:37my Gen AI activation
06:38is actually to be more creative
06:39myself.
06:41Before Miro,
06:41I founded a creative agency
06:43and I love to give names
06:46to some projects,
06:47products, ideas.
06:49So my kind of regular use case
06:51is when I need to create
06:53the name,
06:53I go to Copilot
06:55and brainstorm with it
06:56about what the name
06:57could be.
06:58So for whatever I'm going
07:00to name.
07:01That's kind of my one use case.
07:02The second use case
07:03is clustering insights.
07:05Of course,
07:07I use Miro for that.
07:07But I mean,
07:08like where you have
07:09a lot of customer feedback,
07:11like hundreds of lines
07:12or employee feedback,
07:14you want to cluster them
07:15by sentiment,
07:16by kind of other criteria.
07:19And this is super powerful
07:21because otherwise,
07:23you go through
07:24all this random stuff,
07:26but with the clustering
07:28capabilities,
07:28you can really synthesize
07:30and see what are the pockets
07:32of the most impactful areas
07:35to double click in.
07:36So that's kind of my activation
07:38with Gen AI so far.
07:40There are more use cases,
07:41but these are favorites
07:42so far.
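The clustering-by-sentiment workflow Andre describes could be sketched as a toy: the word lists and theme heuristic below are illustrative assumptions, and a real product would use an ML model or LLM for sentiment and embeddings for clustering rather than this crude approach:

```python
from collections import Counter, defaultdict

# Tiny illustrative sentiment lexicons; a real tool would use a model.
POSITIVE = {"love", "great", "fast", "easy", "helpful"}
NEGATIVE = {"slow", "bug", "confusing", "crash", "hate"}

def sentiment(line):
    """Classify one feedback line by counting lexicon hits."""
    words = {w.strip(".,!?").lower() for w in line.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def cluster_feedback(lines):
    """Bucket feedback lines by sentiment, then surface the most common
    non-trivial word in each bucket as a rough theme label."""
    buckets = defaultdict(list)
    for line in lines:
        buckets[sentiment(line)].append(line)
    themes = {}
    for label, group in buckets.items():
        words = Counter(
            w.strip(".,!?").lower()
            for line in group for w in line.split()
            if len(w) > 3 and w.lower() not in {"this", "that", "with"}
        )
        themes[label] = words.most_common(1)[0][0]
    return buckets, themes

feedback = [
    "Love the new board, so fast!",
    "Export keeps hitting a bug.",
    "The bug in export is painful.",
    "Great templates, very easy.",
]
buckets, themes = cluster_feedback(feedback)
```

Even this toy version shows the payoff he mentions: instead of reading "all this random stuff" line by line, the negative bucket's theme immediately points at export as the pocket worth double-clicking into.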
07:44I picked one,
07:45which was maybe
07:46a little bit like yours,
07:47but then applied it
07:48to my private life.
07:49So the idea is that
07:51all of us have
07:51our own version of AI,
07:54right?
07:54My digital twin.
07:55And so,
07:56it can take care
07:57of some of the stuff for me.
07:58So it's not just us
07:59talking to AI.
08:00I think it's our AI
08:01talking to your AI
08:02or talking to
08:03where I have my subscription.
08:05So I think that's
08:05the other way
08:06of looking at it, right?
08:07It's not just us interfacing.
08:08That was the one I picked.
08:09So we all definitely
08:11are looking for
08:12simplification.
08:13We talked a little bit
08:14before we came on the stage
08:16on the maturity of AI.
08:17I mean,
08:18you cannot go anywhere
08:19right now
08:20without talking about AI.
08:21But still,
08:23it's not brand
08:24new,
08:24because it's been in development,
08:26but it's still quite early
08:27in the maturity curve, right?
08:28So as every technology,
08:29it goes up and down
08:31on the hypes.
08:32So where are we
08:33and where is it going to go?
08:35So I'm going to start with you
08:37and then maybe some of the others
08:38will disagree with you.
08:40Yeah.
08:41Look, I think this is
08:42a good topic of debate.
08:43My take is that, you know,
08:45there's been a lot of analogs
08:48made between this stage of AI
08:50and the magnitude of AI's potential,
08:53comparing it to electricity
08:54or fire
08:55or some of these early human inventions.
08:58My view is that we are still
08:59at the very, very earliest stages
09:01of this curve.
09:02and oftentimes what you see
09:04when technology is at this stage
09:05is open questions and uncertainty
09:08around the role it plays.
09:09And I think a lot of the debate
09:10that's happened has been,
09:12is this going to displace
09:14human intelligence, right?
09:16Yeah.
09:16Is AI a substitute?
09:18And I think what you're hearing here
09:19from organizations and companies
09:21that are in the creation
09:22of these technologies
09:23is that it's really a mechanism
09:25and a tool by which it could unlock
09:29unprecedented amounts
09:30of ingenuity and creativity.
09:31So my sense is that this is a discovery phase
09:36and this phase is probably
09:36going to last quite some time.
09:39But the beauty is that capabilities
09:42around human potential will be unleashed
09:45that you can't imagine.
09:47I'll give you a very simple example.
09:48At DocuSign,
09:49we're historically known for e-signature.
09:52We have petabytes and petabytes of data
09:55from our customers who trust us
09:56with that information.
09:58And agreements and contracts
09:59can be a few pages or sometimes hundreds of pages.
10:03It takes an intense amount of human capital
10:06and knowledge to comb through all of that
10:08to understand what's in those agreements.
10:11We can now make that much simpler.
10:14And so, in essence, we're freeing up capacity
10:16so that those who are interacting with agreements
10:18can spend that time on more creative endeavors
10:21or perhaps faster pace of doing business.
10:24And so, this is just a simple example,
10:25but I think we're just at the onset of what is possible.
10:30Andre, you'd agree with that?
10:31Or do you see...
10:32Yeah.
10:32What's your prediction for the future?
10:35Where are we on the hype curve?
10:36Yeah.
10:37I think I fully agree.
10:38And I think it will take time for people to adopt.
10:41Like, I mean, OpenAI is a great example
10:43because this company runs the whole business
10:46as an AI-first company.
10:47And when I kind of thought about and empathized with that concept,
10:50I was like, okay, how much it will take time for me
10:53to transform my company into an AI-first company?
10:56And it takes time even with an effort, correct?
10:58And I'm sure everyone in the room also wants to do that
11:02because it unlocks so much kind of time to market
11:05and creativity if you do it right.
11:09So, I think it will take some time
11:11because the biggest thing that Gen AI will compete with is inertia.
11:15And our human inertia, our human habits.
11:19And I think it will take time for the generation
11:22to kind of embrace the opportunity.
11:25And my belief is it will take five, ten years to really reshape
11:31how companies operate in the first place, and to rethink that.
11:35So, but in general, like, yeah, someone yesterday said
11:38we are in 1998 today.
11:41It's like with the internet and everything.
11:44So, that's how we can think about this.
11:46So, there is so much opportunity moving forward.
11:49So, I'm excited to see how it all develops.
11:53Yeah, just by jumping on it, I would say,
11:55you know, I've been, as I said, I've been 40 years.
11:58So, I've seen actually all the inflection points.
11:59I was there when most of you were not born probably.
12:02The first PCs came to life.
12:04Apple II, IBM PC.
12:06And I was working in the software industry
12:08and then joined Microsoft.
12:10And I would say between the PC boom, the rise of the internet,
12:14mobile computing and cloud computing, all the way to AI.
12:18In my 40 years, this is the first time I've seen such an incredible pace of innovation.
12:24And yet, as Bill Gates would always say,
12:27we always overestimate what's going to happen in the short term
12:30and underestimate what's going to happen in five, ten years.
12:33Having said that, just to give you a glimpse of what's going on.
12:36As you know, we are working as a company.
12:38We are a platform company, Microsoft.
12:39We are working on all layers of the so-called stack.
12:43From the AI infrastructures, we just announced a chipset called Cobalt,
12:47which is going to optimize, in our data centers, the processing of AI.
12:53And then you keep going on all layers.
12:56And what you observe is the following.
13:00You remember the Moore's law? Does it ring a bell to you?
13:02Moore's law has been something, you know, invented by an Intel engineer, I think, in '65.
13:07And the law said, you know, every two years,
13:10you double the number of transistors on your chip and you halve the cost.
13:16Guess what happened with the infrastructure we are enabling for OpenAI?
13:21As you know, OpenAI is training and rolling out these AI models on Azure Cloud.
13:27Actually, between GPT-4 and GPT-4o,
13:32on GPT-4o, you've got actually, in terms of token requests,
13:37you've got a product which is actually twelve times cheaper
13:42and six times faster in the last kind of 18 months.
13:46So I think this gives you just a sense of what's going on.
13:49On one side, you've got this incredible computing capacity, which is needed.
13:55But all of those companies are working to, I would say, optimize the stack at different levels of the stack.
14:02And this is incredible to see when it comes to SLM, as an example, small language models.
14:09What's happening now?
14:10We and others are participating in the party where you could be incredibly efficient on some small models
14:17dedicated to business process or an application in a much cheaper way than what we thought it would take six
14:25months ago.
14:25So anyway, I'll stop there, but I think that's something which is pretty amazing and a bit different than Moore's
14:30Law
14:30because it goes even faster, more exponentially.
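Jean-Philippe's comparison can be made concrete with a bit of arithmetic: if a 12x cost drop really happened over roughly 18 months, the implied cost-halving period is about five months, against Moore's original two years. A small sketch of that calculation:

```python
import math

def implied_halving_period(cost_ratio, years):
    """Solve 2 ** (years / T) == cost_ratio for T, the cost-halving period."""
    return years * math.log(2) / math.log(cost_ratio)

# 12x cheaper over ~18 months, the figure quoted for GPT-4 -> GPT-4o tokens:
T = implied_halving_period(12, 1.5)
print(f"cost halves every {T * 12:.1f} months")  # roughly 5 months vs Moore's ~24
```

That factor-of-five-faster halving is what "more exponential than Moore's Law" means in numbers.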
14:32I think the speed is really what is probably surprising many of us, but it also makes it a bit
14:37more exciting.
14:38Sachin?
14:39Yeah, look, I think, you know, I completely agree that we're very early.
14:42And just to sort of give some analogies and maybe a different lens in this.
14:47Computational conversation has been around since the 1930s.
14:50You know, what we're seeing today is a modern-day equivalent of that.
14:53So let's look at some parallels.
14:55You remember RealPlayer?
14:56You know, so the promise was you could watch videos on the internet.
15:00And so you would launch a RealPlayer video, you'd hit play, you'd wait 20 minutes of buffering, and you'd see
15:06a 30-second clip,
15:07and then you thought, oh, my God, I can watch videos online.
15:10Compare that to today where you're watching 4K on Netflix or on Apple TV.
15:14Where's RealPlayer?
15:17And so the thing that I think is really interesting in what happened in November 22,
15:22it's not that Transformer models were invented, because they've been around for a while.
15:26It's not that GPUs were invented. They've been around for a while.
15:29But I think what was really unique is you suddenly had very complex technology
15:34that previously was very hard to use sitting in a construct of a chat window.
15:40Now, chat or WhatsApp, you know, is an interface that was reserved for humans.
15:46And so suddenly you had a machine sitting in a conversational channel using a canvas that was reserved for humans.
15:53So actually, I would argue the big innovation was we shifted from these really complex user interfaces
15:58that people found really hard to use to something where you could just chat.
16:02The second, I think, was as an industry, we reshaped the language.
16:07You know, you remember if you ever spoke to a tech person, it was all full of acronyms, right, and
16:12codes.
16:13And we made it complicated. We went from that to hallucination.
16:17And anything, any mistake, it's a hallucination.
16:20Actually, it's not. But let's leave that to one side.
16:22But we rebranded it. We made it a feeling that people could kind of resonate with.
16:27And then I think the third one, and I've said this a few times, and I'm not sure if it's
16:30by design, but I kind of think it is.
16:31The response was slow. And I remember growing up, my teacher used to say to me when someone asked you
16:37a question,
16:38count to three, you sound more authentic before you answer.
16:41And this thing is actually giving you a word-by-word answer.
16:44And so it's slowing the response down. And so why is there mass adoption?
16:48Why are CEOs in boardrooms, entrepreneurs, like the average person, why are they so excited suddenly?
16:55Well, it's because the user interface suddenly made it so real. But then it's like the real player days.
17:01It's only as real as playing a 30-second clip yet, and you still have to wait 20 minutes to
17:05buffer.
17:06So let me stay with you and pivot maybe a bit deeper into the real topic of today, which is
17:11AI and product development.
17:12Now, you introduced yourself as the Chief Wizard with Lego blocks.
17:15So, I mean, I have to start with you, right? So talk to us: how do you use AI in actual
17:21products?
17:23So I think that's a great question. And obviously, I see the timer, so I'm going to be very, very
17:27mindful of how much time I use in it.
17:29But let me start by first saying, I don't think everything is a generative AI problem.
17:34And the reason I say that is, if we think about how human beings think, if you remember that picture
17:41of you swimming as a kid,
17:43well, that's a knowledge graph. You're not generating that video. That's a memory you've had.
17:48If I said to the audience, you know, what do you think I'm going to, could you tell me the
17:52next word?
17:53And the word you're thinking of is, say, maybe "do", right? But there's a million words for us to choose from.
17:57That's a graph neural network. And then I say, you know, let's come up with a poem about us and
18:02we're known entities.
18:03That's generative AI. Now, how does that translate to our world?
18:06You know, the thing is, most products, especially digital products, actually even non-digital products, they're made up of features.
18:14So I'll give you one example. Luggage. The wheel was invented in 2000 BC. The bag was invented 300 years
18:22ago.
18:22The luggage is just a bag with wheels. But it's a new invention about 100 years ago.
18:28Similarly, when you think about digital products and you break it down into features, the data that we have seen
18:32from thousands of applications we've built is 80% of the features that make up most applications are the same.
18:38And so the first part of actually building product, especially when the customer or the user or the person that's
18:45trying to build is not technical, is to ask the right questions.
18:49But you know what? If the features are common, the questions will be common. And so we don't need to
18:55generate them.
18:56You know, you can have enough volume institutionally and industrially to be able to use a knowledge graph to be
19:01really precise.
19:02Well, I asked you this question. Now I need to ask you this question. And as you give me more
19:05and more answers, ultimately I'm acting like a human product manager, but I don't need to.
19:10The second part of it is that, you know, once you've figured out here are the features and journeys someone
19:15needs to build for the digital product or just the features, I guess, and the use case for the physical
19:19product, the next question is what's it going to look like?
19:22Well, here's the awakening. There's six ways to do a login screen. We don't need to reinvent it. I don't
19:29understand why in the world today 10,000 people are building a login page.
19:33Like that is a solved problem. So is checkout. So is profile. So is map view. But you will want
19:38to make it unique. And this is where generation becomes really powerful because you can take the construct of what
19:43is reusable and generate within it; and by the way, putting the reusable pieces in the right order is a
19:49neural network problem.
19:50So it's still an AI problem. And then you can generate within it. Now the benefit you have is it's
19:56not going to go AWOL or hallucinate. It's going to generate in a small guideline.
20:00And then you go to the next step. And so in this product development process, you know, the thing is,
20:04you don't go to an LLM and say build me an insurance platform.
20:08Right? What you do is you start having conversations like you do with a human, you get to some answers,
20:12you get to features, you get to some design.
20:14And you have this multi-agent approach where each agent passes something on to the next agent. Some agents will
20:21be graph based, some will be neural network based, some will be LLM based, and you cascade that through.
20:26And then what you end up building is something actually really sophisticated that can produce really high end outcome of
20:33complex systems.
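The multi-agent cascade Sachin outlines could be sketched as a toy pipeline: the feature catalogue, the fixed ordering standing in for the neural-network step, and the stubbed generation agent are all illustrative assumptions, not Builder.ai's actual system:

```python
# Each "agent" is a function that enriches a shared spec and hands it on.

# Knowledge-graph stand-in: features commonly implied by an app category.
COMMON_FEATURES = {
    "ecommerce": ["login", "profile", "checkout", "map_view"],
    "social": ["login", "profile", "feed", "chat"],
}

def requirements_agent(spec):
    """Look up reusable features instead of generating them."""
    spec["features"] = list(COMMON_FEATURES.get(spec["category"], ["login"]))
    return spec

def composition_agent(spec):
    """Order the reusable blocks (the 'neural network' step,
    stubbed here as a fixed priority ordering)."""
    priority = {"login": 0, "profile": 1, "feed": 2, "chat": 3,
                "checkout": 4, "map_view": 5}
    spec["features"].sort(key=lambda f: priority.get(f, 99))
    return spec

def generation_agent(spec):
    """Generate only the unique bits, inside the reusable scaffold
    (a string template stands in for the generative model)."""
    spec["screens"] = [f"{spec['brand']} {f} screen" for f in spec["features"]]
    return spec

def build_pipeline(spec, agents=(requirements_agent,
                                 composition_agent,
                                 generation_agent)):
    for agent in agents:  # cascade: each agent's output feeds the next
        spec = agent(spec)
    return spec

app = build_pipeline({"category": "ecommerce", "brand": "Acme"})
```

The point of the structure is the one made above: because generation happens last, inside a scaffold of reusable, ordered blocks, it can't "go AWOL"; it only fills in the small unique part.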
20:34I love that explanation. Robert, it looks like you're going to comment. You're going to contribute.
20:38No, look, I think your question is around AI and the product development cycle. And I think there's just two
20:43ways I'll answer that. One is for every single company in the world, AI will play a much more powerful
20:49role in helping to connect behavior of your customers, especially digital products, behavior of your customers into insights that can
20:57help you create a better product.
20:58So for us at DocuSign, very similar to when I was at an e-commerce company, which is you're looking
21:03at product behavior and insights and using that to much more rapidly give you very specific understanding of what to
21:10improve in your products.
21:11And I think this is true for digital products, but increasingly as physical products become digitally enabled, you'll start to
21:18see that as well.
21:19And then as we think about AI at DocuSign, you know, we simplify the world of agreements into three steps.
21:26You create agreements, you commit to them, and then you manage them.
21:30And AI for us is playing a really powerful role across that entire journey in our platform to help make
21:36negotiating smarter: automatic redlines, summarization, the signature and negotiation process,
21:42and then the management process to unlock the tacit knowledge that's locked or trapped inside of agreements.
21:49And so I look at it two ways, which is making your products better and then embedding the AI capabilities
21:54within the products themselves.
21:56And I think this is true for, or will be true for nearly every single digital company in the world
22:01and increasingly even hardware-based companies.
22:04So you mentioned the famous customer experience, right? How do we focus on that customer of
22:12whatever service?
22:13Maybe I come back to Andre and Jean-Philippe, and I think it's something very high on
22:18your agenda, Andre.
22:21Yeah, yeah. We think about this because we are sitting at the heart of the product development lifecycle, correct?
22:26So it all starts with this divergence and workshopping of ideas, but then it moves to further stages.
22:32So our belief is that the product development lifecycle will be significantly shortened with the Gen AI.
22:39And also it will be way more simplified because now you need a lot of different roles to kind of
22:45move things through the product development lifecycle.
22:48With Gen AI, we expect that humans can be more T-shaped and Gen AI can assist them and extend
22:56their capabilities.
22:57So I can give you a few examples. One of those is you have a workshop, you workshop with the
23:04customer, you workshop with the team who is working on something.
23:09And the result of that workshop then sits somewhere. So it sits on the canvas or on the physical board.
23:17Then someone needs to go and digitalize and summarize and come back to the team and show the product brief
23:23that happens out of that.
23:24And this is like, this takes days or weeks in some companies.
23:28And with Gen AI, you can easily converge all of that into a summary, correct? Like just in one click.
23:34So it solves this cold-start problem.
23:38Same for prototypes. Like a lot of people can't do design, can't prototype.
23:45But with Gen AI, it opens the capability for people who are not used to design tools to prototype and
23:54to communicate their ideas in a visual way, not in a way with the words.
23:58Because now the whole process is slowed down because only designers can visualize what others think of.
24:04With the Gen AI, this problem can be solved. And there are a bunch of other things.
24:08And so you asked about insights. The same thing is like now there is a role of product manager who
24:14is sitting on the backlog and asking all these questions to the customers.
24:17But with Gen AI, there are capabilities to summarize all the insights and surface them to engineer or designer so
24:24they can move it into production faster.
24:27So all of those pieces that I'm just mentioning are like small bricks of the kind of bigger
24:33process of product development lifecycle.
24:37And if it's done right, it can be all streamlined pretty significantly.
24:41I think that's a huge opportunity. And I think every company wants to move as fast as OpenAI is
24:47moving these days, correct?
24:48And every CEO is worried that their company would not be viable in the next few years because of the
24:54race or the speed of innovation that is happening.
24:56And I believe this is kind of the biggest problem that everyone has to deal with is like how to
25:01accelerate time to market of what they're building.
25:05Yeah, just for discussion. First of all, I think we all agree on the fact that everyone, every company, every organization
25:09can become a product manager and can become an AI company.
25:15I do believe that's the case. I believe that an NGO, a startup, a large enterprise, I don't know, a
25:21municipality, they are all shaping and are going to build AI products representing the needs of their customer, constituencies, partners,
25:31ecosystems, and so on and so forth.
25:32So just to use some colors example though, because I love always exposing customer stories.
25:37You know, we were just showing on stage; we have a big event this week called Microsoft Build, where we
25:41announce a bunch of stuff for techie people.
25:44You can check it out on YouTube, you'll find all the videos. And we had a partnership announcement with the
25:49Khan Academy.
25:49I'm sure many of you have heard about Khan Academy, Sal Khan. And this guy has been embracing technology in
25:56a wonderful way.
25:57And we've been showing, and you may have seen some videos from Sal Khan himself, on the way he's using
26:03AI to become the tutor of his son.
26:06The video shows his son on, you know, trigonometry, trying to solve a problem, where the AI was not
26:14simply giving away the answer to the problem.
26:16Because Sal was basically prompting, coaching with his voice; that's the GPT-4o model in Azure.
26:24He was telling it, please don't give the answer. Just give some nudges, some...
26:29And you had this incredible personalized learning experience for the kid, who was pushed, encouraged, stimulated to find the solution
26:38to the problem.
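The "nudge, don't answer" tutoring setup Jean-Philippe describes comes down to the system prompt; a minimal sketch in OpenAI-style chat messages, where the prompt wording and helper function are illustrative assumptions, not Khan Academy's actual implementation:

```python
# Illustrative system prompt for a tutor that coaches instead of answering.
TUTOR_SYSTEM_PROMPT = (
    "You are a patient math tutor. Never state the final answer. "
    "Ask one guiding question at a time and encourage the student "
    "to find the solution themselves."
)

def build_tutor_messages(problem, student_turns):
    """Assemble the chat history a tutor model would receive."""
    messages = [{"role": "system", "content": TUTOR_SYSTEM_PROMPT},
                {"role": "user", "content": f"Problem: {problem}"}]
    for turn in student_turns:
        messages.append({"role": "user", "content": turn})
    return messages

msgs = build_tutor_messages(
    "Find the hypotenuse of a right triangle with legs 3 and 4.",
    ["I think I should add the sides?"],
)
```

The whole personalization behavior lives in that first system message; swapping it out turns the same model from an answer machine into a coach.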
26:40And so, when you see that at the core of the education system, you say, wow, the way it is
26:46right now is going to transform the learning journey.
26:49And the teaching journey, on the other hand, of the teachers, professors, is incredible.
26:54Another example, I mean, medicine is just incredible.
26:58I mean, we as a company acquired a great company called Nuance a few years ago,
27:03which has developed some pretty deep AI models for medicine.
27:08In particular, to enable physicians, doctors, to spend most of their time with the patients,
27:15as opposed to administrative tasks, a bunch of stuff.
27:18So basically, AI models take the voice,
27:22transcribe the conversation in a very, very detailed, precise way,
27:28and translate that into a kind of clinical diagnostic.
27:33Of course, that is going to be, you know, signed off by the physician, not AI.
27:39But it will be doing a wonderful job of gaining a lot of time in terms of checking a lot of,
27:46you know, potential negative impacts depending on the disease and prescription and so on and so forth,
27:51and getting the digital memory of the patient.
27:53So the next time around, if the patient is in another place, you've got all that context available for the healthcare system.
28:00So those two give me a lot of hope of the wonderful products that other companies,
28:06all kind of companies are going to build at the core of their mission.
28:10I hear a lot of examples in medicine, like the one that you gave,
28:14because clearly you can only diagnose so many things; I mean, what does a doctor base a diagnosis on?
28:21Clearly what they learned in school, but also on the number of cases that they see more often,
28:26which is always the problem.
28:27When somebody has something that's rare, it usually takes a very long time to find out.
28:31So I hear that example a lot.
28:32But the other one that I hear as well is, as soon as the patient walks out, I mean,
28:39especially family doctors have a very, very, very busy schedule.
28:42The next patient comes in, they now kind of download everything.
28:45And so, rather than having to do the whole write-up of what you have discussed, there's
28:49a huge benefit.
28:50Well, I was going to say one thing.
28:51You wanted to come in, yeah.
28:51There's just one thing which I think is really interesting in how you use AI today,
28:55which is you can actually reduce the amount of volume of information.
29:00So we think, for example, every patient and doctor conversation is unique.
29:05At a high enough volume, it's not unique.
29:08No.
29:08So I give you an example.
29:09You know, I think we were asked something close to 2 million questions by our customers last
29:13year.
29:14Actually, what was unique was about 1,300.
29:17And so if you take any pool of data, any set of conversations with a customer, any set of tasks
29:22being done,
29:22any amount of information being created, what you'll find is that at an industrial scale, a lot of it's
29:27not unique.
29:28It's just, it's the same thing said several different ways.
29:31And I think that's the real power from a product development side,
29:34where you can really reduce the problem down to, well, what's unique in this?
29:38Yeah, you get to the essence of what you're trying to do.
29:40Yeah.
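The deduplication idea described above can be sketched in a few lines. This is a toy version that collapses differently-worded duplicates with simple text normalization; a production system would use embeddings and semantic clustering, and the sample questions here are purely illustrative:

```python
import re
from collections import Counter

def canonical(question: str) -> str:
    """Crude canonical form: lowercase, keep only word characters,
    collapse whitespace. Real systems would compare semantic meaning."""
    return " ".join(re.findall(r"[a-z0-9']+", question.lower()))

questions = [
    "How do I reset my password?",
    "how do i reset my password",
    "How do I reset my password??",
    "Where is my invoice?",
]

# Count distinct "intents" after normalization.
intents = Counter(canonical(q) for q in questions)
print(len(questions), "questions,", len(intents), "unique")  # 4 questions, 2 unique
```

Even this crude normalization collapses the three password questions into one intent, which is the "what's unique in this?" reduction the panelist describes.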
29:40So let's move, also looking at the clock.
29:42So let's move at the risk topic.
29:44You already mentioned it earlier, Jean-Philippe.
29:46So we see all the upside.
29:49Everything's faster.
29:50We focus on the real issues.
29:51We get more creative.
29:52So all sounds really good.
29:55We live in a world, especially here, that we have almost every country going to elections.
30:00So there's a lot of concern.
30:02What's the risk?
30:02We have the deepfakes.
30:04We have everything.
30:05So what is that risk?
30:07Maybe I start with you, Jean-Philippe.
30:10What is that risk?
30:11And is that risk stopping a lot of people and companies from moving forward?
30:16Or are they so worried about falling behind that they say, well, the risk of not doing anything is
30:20maybe worse?
30:21Yeah.
30:21So start a little bit about your views on this one.
30:25Yeah.
30:25So it's very clear, first of all, that we should be neither dystopian nor utopian.
30:31That's my view.
30:31And at the same time, we have our eyes wide open as a tech company on all the risks happening
30:39like every second of our lives.
30:40And the risks span across a number of fields.
30:44It starts, number one, with security.
30:47Yeah.
30:47Security.
30:48If you don't have a secure foundation for your data, for your models and the rest, well, okay, that's a
30:55painful starting point.
30:56It continues, of course, with the personal privacy rights of the people.
31:02And the way you manage them, and I'm not just talking about legislation, GDPR, and so on, which makes sense,
31:06by the way.
31:07I'm talking about the way you take care of that in a very ethical way all along the journey of
31:12your customers and the people you are dealing with.
31:14So privacy, security are two of the building blocks.
31:17And then, as a company, to really mitigate and work on all of our developments with our
31:22customers, we've been adding what we call another four principles of responsible AI.
31:27The first one is basically fairness.
31:30Yeah.
31:31Well, how do you define fairness?
31:33Depending on the problem you are trying to solve.
31:35That's an interesting question.
31:37So having development tools, methodologies, and a very diverse set of people to define the thresholds of that fairness.
31:46You've got an obvious one, such a big one, which is diversity and inclusion.
31:53And guess what?
31:54Humanity, well, humans are actually very biased, I would say.
32:01We are all biased.
32:01I'm biased.
32:02All of us are biased.
32:03So when people talk about AI, AI biases, guess what?
32:09It's the result of a bunch of humans who have been behind it.
32:12But there's ways, the good news, there's ways to do a great job.
32:16Quick example on that, given the risks to society.
32:21Just an impact startup I've been working with called Mivitae from the UK, a small company.
32:26They've been using the power of AI to actually make sense of the key biggest cognitive biases when
32:34hiring people, which are very well known, by the way.
32:38People look at a CV usually, take one minute, and 70% of the time, do you know what they
32:43look at?
32:44They look at the name, they look at the gender, they look at the address.
32:48And so if you anonymize that, and a number of other things, you have a very different flow
32:55of candidates into your company.
32:56And they just won a huge contract with the US Army, by the way, to apply AI
33:00to their hiring process.
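The CV anonymization described here can be illustrated with a minimal sketch. The field names and the blanket redaction approach are hypothetical, for illustration only, and are not a description of how Mivitae's product actually works:

```python
# Fields that carry the bias triggers the panelist mentions
# (name, gender, address), plus date of birth as a common extra.
SENSITIVE_FIELDS = {"name", "gender", "address", "date_of_birth"}

def anonymize(cv: dict) -> dict:
    """Return a copy of the CV record with bias-prone fields redacted,
    leaving the job-relevant fields untouched."""
    return {k: ("[REDACTED]" if k in SENSITIVE_FIELDS else v)
            for k, v in cv.items()}

cv = {
    "name": "Jane Doe",
    "gender": "F",
    "address": "12 Example Street, London",
    "skills": "Python, SQL, project management",
    "experience_years": 7,
}
print(anonymize(cv))
```

The reviewer then sees only skills and experience, which is the "very different flow of candidates" effect the speaker describes.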
33:02And then we continue to transparency, which is another of the principles.
33:06And we've just issued a transparency report. I won't be long, but any AI company, any company that
33:12wants to build AI needs to open the hood, in a way.
33:15Talking about the way you train your data, what you do.
33:18Talking about the risk metrics you have, the way you measure that.
33:22And talking about the choices you're making or not, on a bunch of regulatory issues as well.
33:26And the last one, not the least, is accountability.
33:30At the end of the day, not just businesses and companies but people have to be accountable for the AI
33:37models.
33:37So that accountability is going to be critical in the way society and regulatory bodies respond, which is happening in the US, in
33:45Europe, everywhere in the world.
33:47And this is the balancing act that you need to put at the center when you design your AI products,
33:53so that they help you limit the pain.
33:57Not just of hallucination, as we said before, but of the real risk happening every day as you release new
34:02AI products.
34:04So before you comment, just one sec.
34:06So what I'd like to do, if we can bring up the Slido QR code, some of you might still
34:10have your Slido open, really would be good to get a sense from the audience.
34:15I mean, we talked about the risk, the legislation's coming.
34:18And so just on a scale of one to five, how comfortable are you that you can be compliant with
34:24the legislation when it arrives?
34:26Just kind of giving a sense.
34:28So now I come to you, then we can see what the audience says.
34:31Well, I think there's a topic here that's really interesting to call out, which is if you look back at
34:36the earliest innovations of the web,
34:39it was millions of little fires of innovation being started, and it was very bottom-up.
34:44I think there's actually some intrinsic and massive advantages of the large language models being developed by extraordinarily well-capitalized
34:53companies who also have a tremendous reputation to protect.
34:57Microsoft, right? Amazon, Google, Adobe, many others.
35:02Having intrinsic trust, security, privacy, and equity principles designed into the models from a very early stage, combined
35:15with responsible regulation.
35:17And we talk about, you know, responsible technology innovation, but there's also this principle of responsible regulation.
35:23At such an early stage of the technology's maturity, I think that serves humanity in terms of really mitigating the tremendous amount
35:33of risk associated with this technology.
35:34But that is very different from, I think, a lot of the innovation that has come previously, in that the companies
35:40that are really innovating at scale are the ones who actually have the most at stake in terms of ensuring
35:46that these technologies are deployed in very responsible ways.
35:49No, I think there's one thing to add to that, which I really agree with you.
35:52And I say there's two realms, right? There's culture and skills.
35:57So imagine you have a good salesperson.
35:59Who owns that skill?
36:04Is it the salesperson, or the company where they learned to be a good salesperson?
36:04Because I know what we're doing, which is we're taking what every good salesperson says across our company and saying,
36:10can we make every other salesperson as good as that person and use AI to achieve that?
36:14And so that opens up a really interesting question of, forget likeness, but what is a skill?
36:20And is the real skill learning new skills?
36:23Because every skill that you master, it's a matter of time before we can use machines and data to basically
36:28remove the peer disadvantage.
36:32So how agile, how adaptive?
36:33Right, and then the second one, I think, is culture.
36:37And the reason I talk about culture, or sovereignty, or history: you know, every country is a nation.
36:42Every nation is based on culture. Culture is a function of history.
36:46History, by default, always has multiple versions of the truth.
36:50Now, the current state of being, because we're so early in this technology, is it's all based on volume.
36:55So if I can create volume of something, it is assumed to be the truth.
37:01However, we know that if I asked a seven-year-old in New York or in London about American independence,
37:07we would teach history differently.
37:08But if an LLM is now, you know, in Khan Academy and is teaching kids, well, which version of history
37:12are they going to teach?
37:14And so, you know, whilst that is not an existential risk today, the fact that we're having a huge question
37:20around what is a skill,
37:21and actually history cannot be homogeneous, is a huge problem.
37:27Any surprises? I actually think the outcome is that most people are in the middle, at three, which feels like they're
37:35okay-ish.
37:36But there's still, there was actually more fours and fives than I had expected.
37:40I don't know, Andre or Jean-Philippe, any surprises here?
37:44No, I think, honestly, the jury is still out, because it's still so early.
37:48I think, I mean, we have to acknowledge, and as a company we've been certainly, you know, around the table
37:53with EU policymakers,
37:56that they've been actually pretty thoughtful in defining the risk taxonomy and defining what could go wrong.
38:03The key question that has been asked is always the same with regulation, is how do you balance the cursor
38:08between regulation, risk, prevention, education, and innovation?
38:14And I think that's where, we don't know yet, what are the next generation startups happening in biotech.
38:21It's not just tech and AI, because all disciplines right now are using capital investment and more, using AI to
38:30create new business models, new products, new inventions.
38:33And I think this is where nations are playing a big role, and the U.S. have been more cautious,
38:39and they've been defining frameworks, executive orders.
38:42Then you go to China, and it's a Wild East, I should say, not a Wild West, but at least, in
38:46terms of what, anyway, the party and the country does.
38:50So I think it would be super important for all the citizens, the business community and civil society, to be
38:57part of the dialogue with governments,
38:59to make sure they understand where to move that cursor as we learn more about AI implications in the real
39:06economy and the real society.
39:08So Andre, the trust issue, I mean, whether we have regulations, whether we listen to our ethics, so how do
39:16you approach this in your company, in Merrill?
39:19I mean, maybe good for us, we are not touching that too much.
39:24Like, we are accelerating product life cycle, so it's not the issue internally.
39:28I mean, of course, like, hallucination and everything can happen, but I don't think we are touching that sensitive
39:34area,
39:35the kind that in some industries can create issues.
39:41So it's not a big topic internally, because, I mean, again, like, we're working with the data that the company
39:48provides.
39:49Like, there are backlogs, kind of feedback, design concepts, so it's less of an issue there.
39:55Like, and then there's a human in the loop, and human in the loop should be there to kind of
39:59review whatever is happening with the GenAI
40:02and then kind of ship it to production.
40:05When I spoke to the member of your team last week, I stole her quote.
40:09She said, we call it co-pilot, we don't call it autopilot.
40:13And I thought that was a very simple but good way to say it.
40:15I mean, Microsoft calls it Copilot, but yeah.
40:20It's good to think that way, because it goes back to what all of you are saying, right?
40:26Yeah.
40:26It is something to help you, but you should still be the one deciding where you take it, right?
40:33How far does it go?
40:35And I always call it augmented rather than artificial, right?
40:39It's augmented intelligence.
40:40It helps, you know, it gives me all this brain power.
40:43Back to your point, Sergeant.
40:44I only have to focus on the 1,300 real issues rather than the 50,000, so.
40:49It's also the cape that makes humans superhuman.
40:51And I think that's the key.
40:53It's not, the human doesn't make the AI super AI.
40:55The AI makes the human superhuman.
40:58Well, and I think this, you made a very important point about the difference between regulation
41:03and ethics in principles.
41:06And what you see, I think, is a lot of companies who are being very transparent about their responsible
41:10technology innovation principles.
41:12But internally within your organizations, I think something really powerful is happening,
41:16which is the silos between functions are breaking down.
41:20At DocuSign, as an example, our legal team and general counsel spends an extraordinary amount of time
41:26with our technology team and our data science team.
41:28And those connections, I believe, are really important.
41:31Where companies are taking ownership of ensuring that, not just for their reputation,
41:36but to deliver innovation in a responsible way.
41:39And I think what you're going to see is increasingly organizational design start to evolve
41:45much more rapidly in order to meet the pace of innovation that customers expect,
41:50but to do that in a really responsible way.
41:52You know, and I think that responsible is a really important word because you think about power consumption.
41:57So the number of H100 NVIDIA chipsets in circulation in '24 equals the power consumption of Sri Lanka.
42:03Just a whole country.
42:05And so when you start thinking about, you know, like this early stage to like what the future looks like,
42:10to me the thing that's missing in a lot of the conversation is what is the efficacy per kilowatt hour?
42:17Because the power consumption actually knocks this whole thing for six,
42:21if we think about five and ten years from now.
42:23And we think about this a lot because for us it's like, does everything need to be generated?
42:27Do we need the tenth picture of a cat standing on a cow as an image that's generated?
42:32Or could we have used something that was generated before and modified?
42:34And I think that efficacy per kilowatt hour is a really important part of the conversation.
42:38So just going back to the, let me try this out.
42:42But maybe on the ethics point, because you mentioned it and Jean-Philippe said it.
42:47So maybe actually that's, it is a good byproduct from AI.
42:52Because as you said, we all have biases.
42:54We've had lots of these issues.
42:56But maybe because of the risk sides that AI brings,
43:01it has actually raised the topic of having the more ethical discussion ahead of having legislation.
43:08So, does that make sense?
43:10I mean, maybe there's, there's, because people see it as a risk.
43:13You know, you can always look the glass half full, half empty.
43:15I actually think it has raised the level of discussion around this.
43:19I think it has raised awareness.
43:21It has also raised the appetite.
43:24But then you need to really rally your teams.
43:27Whatever you do, you need to establish, you know, in our company,
43:30we started more than 12 years ago what we call a responsible AI team.
43:34It's cross-disciplinary.
43:36It's across R&D teams.
43:38It's across products, legal, sociologists, and others.
43:41Yeah.
43:42And we go through what we call all our use cases.
43:44Not just Microsoft building products, but our customers' more sensitive cases.
43:49We even actually have a body signing off, or not, on some use cases that our customers would like us
43:57to partner with them to develop.
43:59And we had to actually pull back from some projects and say, we're not going to do it.
44:05So I think it really means that you need to build that governance process.
44:10It's actually kind of a four-fold approach, I think.
44:12One is govern, which is governance.
44:14And you create the stakeholders representing your world, your constituencies, your customers, your disciplines, and so on, to ask yourselves the
44:21right questions.
44:22Then it's about mapping, mapping the risk, doing some red teaming on your models and the rest, and
44:28really trying hard to make it fail before you go live.
44:33Then it's about measuring: how do you measure each one of those values?
44:39We talk about ethical diversity, fairness, so on.
44:41You have to measure that as you build software and AI.
44:45And at the end of the day, it's about managing it once it's live, because then it's in real life.
44:51So anyway, that's a framework we use with our clients as well.
44:54Just looking at the clock, I was going to get each of you one word a year from now.
44:59One word.
45:00Where are we going to go?
45:03You can start.
45:05I think async AI. Async is the word.
45:08Async, okay.
45:11What do you think?
45:13Superhumans.
45:14Superhumans.
45:15With the cape.
45:18Can I have more than one word?
45:20Two.
45:21Well, I think the AI is going to play a powerful role in unleashing human health and wellness.
45:27That's a lot of words.
45:28And that's a lot.
45:29But I think that's the next frontier, is wellness and health.
45:34Jean-Philippe.
45:35Sorry, I'm going to be commercial, I'm saying Copilot.
45:37Ah.
45:39Easy.
45:39Well, thanks to my panel.
45:41I hope they've given you some new thoughts to think about.
45:43Thanks.
45:44Thank you.
45:45Thank you.
45:47Hanna, Hanna, two words for you.
45:49Thank you.