Trust by Design: Engineering the AI Companions of Tomorrow
Category
🤖
Technology
Transcription
00:00Bonjour, Luc et Renaud.
00:03So happy to have you here.
00:05So let's start, jump straight in, let's start with a clear picture.
00:10AI today is everywhere, but how exactly is it transforming your business, your industries?
00:18Renaud, I would like to start with you.
00:20So how is AI reshaping your internal operations and client relations at BNP Paribas?
00:28Perhaps you can start with some concrete examples of real services.
00:33Thank you, thank you, Annette.
00:35I'm speaking not only about the banking industry, but about the financial industry as a whole, meaning insurance, banking, asset management,
00:43wealth management.
00:44And this industry has been data-rich for a long time, and has based its decision processes on models
00:56for a long time.
00:57And that's why we benefit from a flying start for AI.
01:03And AI is already everywhere in our work, in our processes.
01:08Not at a big scale, I have to be frank.
01:11It's the very beginning.
01:13But AI is everywhere.
01:14It's had an impact on how we work.
01:17It has an impact on how we sell products and how we code and how we interact with our customers
01:23and all these stakeholders.
01:25I will take one example, if you don't mind.
01:27One example is for insurance, the insurance of your mobile.
01:34Mobile is very important for everyone.
01:36So when you have a claim, when your mobile is broken, you expect, as a customer, to have a very,
01:42very fast answer.
01:43And that's one area where AI is faster than human beings.
01:48So it takes less than five seconds to receive an answer.
01:52Yes, we will replace your mobile.
01:55Yeah, that's the world of finance.
01:58Fast, data-driven and deeply personalized by AI.
02:02But what about the world of mobility?
02:04Luc, what does AI mean concretely
02:07in Renault's world, from the factory floor to the cockpit?
02:15So first of all, I mean, I'm not going to talk about Renault because, you know, I'm going to talk
02:20about AI in general.
02:24You cannot trust AI, just to be very clear.
02:30You know, AI today tends to be very, very generic, which is what we
02:39see with Gen AI and the GPTs or Geminis of the world.
02:45Those models are too large to be trustworthy.
02:52So the more you specialize them, using what is called RAG
03:02or fine-tuning to get smaller, very specific models, basically,
03:10the more you are going to be able to trust them.
03:15So, in general, don't trust them.
03:19In particular, maybe you can.
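The RAG idea Luc mentions — ground a model's answer in documents you already trust instead of relying on its generic memory — can be sketched in a few lines. This is a toy illustration, not anything from the panel: the bag-of-words cosine similarity stands in for real embeddings, and the corpus and prompt strings are invented.

```python
# Toy sketch of retrieval-augmented generation (RAG): retrieve the most
# relevant trusted document for a query, then ground the answer in it.
# Bag-of-words cosine similarity stands in for a real embedding model.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two token-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Return the k corpus documents most similar to the query."""
    q = Counter(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

# Hypothetical trusted corpus, loosely echoing the panel's examples.
corpus = [
    "Mobile insurance claims are approved automatically within five seconds.",
    "Wealth management advice requires a human advisor for long-term plans.",
]
context = retrieve("how fast is a mobile insurance claim handled", corpus)[0]
# The retrieved context would then be prepended to the model's prompt:
prompt = f"Answer using only this trusted context:\n{context}\nQuestion: ..."
```

The point of the sketch is the trust argument, not the retrieval math: because the answer is constrained to documents you curated, the system's behavior stays inspectable in a way a generic model's is not.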
03:23Okay, so we have seen two very different industries, both already really shaped by AI.
03:30Now let's get really to the heart.
03:32You already mentioned that we can't trust AI, but how do clients experience trust today?
03:41For example, for BNP Paribas, it's all about protecting money and data.
03:46For Renault, it's about safety on the road.
03:48We want to be safe on the road.
03:50So, Luc, many people still hesitate
03:53to trust AI in their cars.
03:57So, why is this emotional resistance still alive?
04:02Why?
04:02What's going on?
04:04What are you afraid of?
04:06Because it doesn't work.
04:08I mean, a lot of it, I mean...
04:10Is it only emotional barriers?
04:13So, there are some systems that are working very well.
04:16Okay, so those guys, little by little, we learn how to trust them.
04:21And we learn that, you know, it's going to be better than us.
04:24Basically, at the end of the day, it has to be better than us, in order for us to trust
04:28them, right?
04:29So, we see a lot of those systems, little by little, they are better.
04:33But if you're asking me, for instance, you know, to trust a car that is going to drive by itself,
04:39I'm going to hesitate.
04:41It depends, you know, how it's going to drive by itself.
04:45If we are talking about the robotaxis, you know, that are driving in San Francisco,
04:50yeah, that's fine, because I know that it's going to be in a very limited, you know, area,
04:54and I know that it knows this area very well, and I know it's fine.
04:58If you tell me that it's going to go anywhere, any situation and so on,
05:03I'm not going to be able to trust that.
05:04So, the thing is that we need to learn how to, you know, trust those systems,
05:12but for that, the systems have to show us that, you know, they can be trusted,
05:16that they are trustworthy.
05:18Renaud, now the same question for you in finance.
05:23Clients may accept AI for simple tasks, but when it comes to investment advice,
05:29I think it's a bit more complicated.
05:31What barriers do you see?
05:33It's true that when a client comes to a bank or an insurance company,
05:39he doesn't put his life in your hands as he does in a car.
05:43But at the same time, when, for example, he decides to invest
05:47and he asks us to help him invest,
05:51he puts his future, his financial future, in your hands.
05:55So, that's why it's very important.
05:57It's a question of his project.
05:59It's a question of his family, of very intimate issues.
06:03It's his entire estate.
06:04Exactly.
06:05Exactly.
06:05What will be my future?
06:07It depends on the way we will manage his wealth and his protection,
06:13his insurance, and so on and so forth.
06:14So, it's true that for very simple tasks, we have no issue.
06:18The customers have trust.
06:21But for investment, for long-term investment, to prepare the retirement, for example,
06:27it's true that they have some reluctance.
06:30How to explain that?
06:32On my side, I will explain that with some different issues.
06:37The first one is cyber security.
06:39When you give all the information about yourself, about your family, about your situation,
06:44you expect the counterpart, meaning the bank, to protect the data absolutely against any kind of attacks.
06:52The second thing is about transparency.
06:55The more precise we want to be with AI, with a model,
07:01the more difficult it is for us to explain the AI's decision.
07:07And that's absolutely key, because customers want an explanation.
07:11So, there is a trade-off between the precision of the answer, meaning the complexity of the model,
07:17and the trust, the explanation we are able to give to the customer.
07:26Yeah.
07:27Do you agree that trust begins with transparency?
07:31Yeah.
07:32It starts with explainability, right?
07:34I mean, the thing is that if you are not able to explain why the decision was taken,
07:39you are not going to trust that, right?
07:41If it's a black box, it's a black box.
07:43You wonder what is inside.
07:46So, at the end of the day, I mean, the reality is that education is key, right?
07:52So, we need to educate ourselves on how it works.
07:55It doesn't mean that we need all of us, you know, to become data scientists or computer scientists.
08:00This is not what that means.
08:01But it means just that the systems, they need to, you know, teach us how they take the decisions,
08:08how they make the decisions.
08:09And then, you know, maybe we'll be able to trust that.
08:12With everything you just mentioned, can you be very concrete, as in my first question?
08:17At Renault, how do you handle this?
08:20From the factory to the cars, how do you see that?
08:27Transparency, trust, how do you reassure, how do you tackle this?
08:31I mean, the systems that you put in the car, they are going to be, you know, doing something, right?
08:38So, this is something you need, somehow, you know, to explain step by step, having, you know, some...
08:46This is what we did, for instance, you know, in the last Renault 5 that we released in December.
08:52We have a little avatar that is right there, you know, to explain what is happening sometimes, right?
08:58So, if you want to know more about some functions of the car, you can ask it, you know, tell
09:05me more about that.
09:06And maybe it will explain and maybe you will start, you know, the trust process with this little guy.
09:13So, somehow, you know, we don't want to anthropomorphize the car.
09:17I mean, this is not what we want to do.
09:19But we want to give some keys that are going to educate the people, that are going to help them
09:25to be more comfortable when they are going to use those systems.
09:30Because those systems, you know, again, they are new.
09:34People are wondering what they are for.
09:37So, we need to explain.
09:38Okay, and for example, would you say that your industries have a head start on trust,
09:46thanks to their long-standing culture of responsibility?
09:52Would you like to go first?
09:56Trust for a bank, for a financial counterpart, is absolutely key.
10:02If you don't have the trust of the customer, you die.
10:05So, that's existential, and the strength of the brand, and this is the case for BNP Paribas, is of
10:12course the trust we can generate with our customers.
10:15And the reason we, very traditional players in this industry, have a head start, as you say,
10:22is the fact that we have worked on risk, on compliance, on data protection for years now, and we have
10:32independent teams, we have methodologies; that's in our DNA now.
10:37So, that's a way to generate trust, and that's absolutely key.
10:40And even now, we have to communicate on that, because this is an advantage compared to some players.
10:45Some might say that the consequence is a small lack of agility.
10:51Maybe it's true, but at least we are fully resilient.
10:55Yeah, what do you think?
10:57Are you in a better position, or what people expect today from AI is perhaps different?
11:06I mean, the thing, the reality also is that those systems, you know, they are new, right?
11:11So, I mean, they are really new, even though AI has existed for 70 years.
11:15I mean, the systems that are going to be acting next to the people, and to do something that is
11:23going to, you know, manage their money,
11:26or, you know, take their life into their hands, you know, in the cars.
11:32I mean, this is pretty new.
11:33So, we still, ourselves, need to learn how people are going to behave, you know, in front of those systems.
11:40So, there is still some learning process that has to be done by, you know, the collective us, you know,
11:49in order to decide how to manage and how to do the best possible way to handle those situations.
11:59We still have a lot to learn.
12:01And today, how do you handle the issue of sustainability, of the carbon impact of AI?
12:08As we all know, a ChatGPT query requires roughly ten times more energy than a Google search.
12:17How do you see that?
12:19People want to know, clients ask you for that?
12:22It's new, but it's, yes, more and more, we have requests from our clients to measure, first, the impact, the
12:30carbon impact of their portfolio,
12:32but at the same time, the impact of the way we are working.
12:37I'm not sure they question the model right now, but I'm sure it will come in the next few years.
12:44This question of green AI will come on the table very soon.
12:49How do you have this in the automotive industry, mobility, green AI?
12:54Is this a topic for you?
12:56You mean AI in general?
12:58Yeah.
12:59ChatGPT in particular?
13:00No, or other, not only ChatGPT, but the green AI.
13:05Is this a topic?
13:07So…
13:07Sustainability.
13:09Sorry, I didn't get what you wanted me to say, sir.
13:14Okay.
13:14So, let's now step into the future.
13:18Let's look three to five years ahead, okay?
13:23We think AI will be really widespread, everywhere.
13:28So, how do you see the question of trust evolving?
13:30We talked about now.
13:31How do you see it evolving?
13:33How do you see the future of, yes, of that?
13:35So, for the systems that are going to be more specialized,
13:39I'm sure that it's going to be, you know, more trustable and it's going to be better
13:42and people are going to understand them.
13:44For the systems that are going to be more generic, like, you know, the ChatGPTs, Geminis, whatever,
13:51I think it's going to be worse.
13:53And actually, there is a paper that was just released, you know, a few days ago by OpenAI
13:58that is showing that the accuracy of current AIs, when they are very generic,
14:07is going down, which is bad because, I mean, when the accuracy is going down,
14:11it means that you cannot trust it.
14:13I mean, unfortunately, this is what happens, right?
14:16So, I mean, first of all, it's very, very difficult to calculate accuracy, right?
14:21So, I mean, this is very difficult because those AIs, what they do when you ask a question,
14:28they just answer the way you want, you know, to be answered.
14:33So, but there are some studies that are showing that the accuracy can be as low as 64%.
14:4064% is low.
14:42You know, 36% of the time, it says, you know, wrong things, which is bad.
14:48So, I cannot trust something that is going to be 36% of the time wrong.
14:54And the issue is that because we are creating more and more synthetic data with the AIs themselves,
15:00the AI, you know, is now generating data that is going to be used for the next generation of AI,
15:06and the synthetic data is potentially wrong.
15:08And so, the accuracy is actually going down.
15:12So, we need to be very, very careful.
15:13That's why the solution, the only solution, is to build, you know, some very specialized AIs
15:20that are going to be more trustworthy because they are going to be built, first of all,
15:24on data that we trust ourselves.
15:26You know, when you create systems that are going to be financial systems,
15:30you are going to use the data that you trust to build your own system.
15:35You are not going to take data that is coming from you don't
15:40know where.
15:40So, first of all, you build the system with the data that you trust.
15:44Once you have that, you can explain how it works, and you can have people, you know, trusting your system.
15:51So, there are different models of AI.
15:54So, how to choose the best one?
15:57This is a complex task, right?
15:59So, I mean, today there are, as you said, many, many, many, there are thousands of models, right?
16:05So, when you build a system that you are going to want to
16:10be trustworthy,
16:11you are going to choose what you believe is going to be the best model for you,
16:15and you are going to put your data in it, so to fine-tune it with your own data, for
16:20instance, you know.
16:21And then this model is going to be yours, and it's going to be really representing whatever you want to
16:27represent.
16:27And then you can yourself trust it and explain how it works.
16:32But you're right, the issue is that there are so many models, there are so many things that you can
16:37choose from,
16:38that you can also make mistakes in the way you are going to choose the models.
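The selection step Luc describes — choose the candidate model you can trust by checking it against your own data — can be sketched as a simple evaluation loop. Everything here is a stand-in for illustration: the candidate names, the tiny evaluation set, and the callables playing the role of models (in practice these would be fine-tuned checkpoints scored on a held-out set built from data you trust).

```python
# Toy sketch of model selection: score each candidate on a held-out
# evaluation set built from trusted data, and keep the best scorer.
# The "models" below are stand-in callables, not real LLMs.

def accuracy(model, eval_set):
    """Fraction of held-out questions the model answers exactly right."""
    return sum(model(q) == a for q, a in eval_set) / len(eval_set)

def pick_model(candidates, eval_set):
    """Return the name of the candidate with the highest held-out accuracy."""
    return max(candidates, key=lambda name: accuracy(candidates[name], eval_set))

# Hypothetical trusted evaluation data and two stand-in "models".
eval_set = [("2+2", "4"), ("capital of France", "Paris")]
candidates = {
    "generic": lambda q: "4" if "2" in q else "London",             # half right
    "specialized": {"2+2": "4", "capital of France": "Paris"}.get,  # all right
}
best = pick_model(candidates, eval_set)
```

The design point matches the panel's argument: the evaluation set, not the vendor's benchmark, is what carries your trust, so mistakes in choosing a model show up as measurable accuracy on data you control.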
16:42Yeah, super interesting.
16:43Renaud, how do you expect clients to delegate more decision-making to AI?
16:50Definitely, AI will spread everywhere in our industries, that's a fact.
16:58We will have fully automated possibilities in the future.
17:04That's the first evolution.
17:06The second one is something which will happen, which is not here today,
17:10but more and more we will have bots speaking to bots,
17:15because today we have bots, but speaking to human beings.
17:19And the next evolution, I don't know if it's in five years or in three years or ten years,
17:24but we will have to adapt our bots to be able to speak to other bots.
17:29And of course, the last evolution I would like to mention is the demography.
17:35The next tech-savvy generation is coming, so we have to prepare ourselves.
17:40But I have two main convictions in terms of evolution on the trust aspect of AI.
17:47The first one is that the question is not only to improve trust in AI,
17:53by being, for example, more specific.
17:55The question is to find the right combination of AI and human beings for empathy,
18:03because in our industry, our clients are definitely waiting for empathy and listening,
18:10which is not possible
18:12with pure AI alone.
18:13And why do I think that empathy will not be possible with AI?
18:20First, it's a question of consciousness.
18:24Consciousness: of course, when we listen to a human being,
18:29we listen to our customers with our own culture, with our own experience.
18:35So it's a human reaction to a human situation.
18:38And that's something AI absolutely lacks: empathy, unpredictability.
18:42Because without consciousness, AI is only a question of statistics.
18:45It's only a question of models.
18:47And that's why we won't be able to trust AI 100%.
19:01And that's why every time we are building a pure self-care process,
19:06meaning a fully automatic process, we have to put in exit doors.
19:09Because when a customer wants to speak to a human being, because he wants to be listened to,
19:15we have to put a human being somewhere, an exit door out of the automatic process.
19:19We need empathy.
19:20Absolutely.
19:22Empathy, consciousness, which AI does not have.
19:26So we need that.
19:27I mean, the thing is that the human has to be in the loop somewhere, at some point.
19:32So for sure, there is another domain that is not the banking domain,
19:36but that is the medical domain, where trust is going to be very, very important, you know.
19:42And we hear a lot that AI is going to replace the radiologists,
19:47or AI is going to replace the doctors or whatever.
19:51BS, right?
19:52I mean, it's not going to happen.
19:54And what is going to happen is that those guys,
19:57they are going to be augmented by those tools that they are going to use.
20:02And they are going to have a lot of time now for empathy.
20:05They are going to have a lot of time now to talk to the patients.
20:09Which today, you know, is not happening anymore
20:11because, you know, they are seeing, you know, a patient every minute, you know.
20:16Now they are going to have time to talk to the patient
20:20and to explain what was the decision by the tool or by themselves, you know.
20:27And it's going to be much more trustworthy.
20:29And the full interaction is going to be much better
20:33for both the patient and the medical staff.
20:39So we are almost out of time.
20:42Let's end with the spark with one last question for each of you.
20:46So just in one sentence or shortly,
20:48what is one principle to build AI we can trust,
20:55truly trust?
20:57You want me to start?
21:00No, once again, it's a question of, as I said, consciousness.
21:05So if you need only one sentence, according to me,
21:09it's that real trust will come from a very well-balanced combination
21:17between AI and human to get the power of AI,
21:23the speed of AI, the precision of AI,
21:27the calculation possibilities of AI,
21:30and the empathy, the culture, the consciousness of a human being.
21:34So we just have to find the right harmony between AI and human beings.
21:41Yeah, thank you.
21:42Luc?
21:43Yeah, so it depends on the domain.
21:45It depends, you know, where we are.
21:46Sometimes, you know, in the car, for instance,
21:48there won't be a human to explain everything all the time.
21:51So, of course, you know, we'll have to explain in another way,
21:56maybe with an avatar, you know, that is going to be here,
21:59you know, to explain what is happening.
22:01So, but for sure, I mean, the key is explainability.
22:06The key is, you know, explaining, educating
22:09on how the decisions are being taken.
22:12Yeah, and when we prepared this roundtable,
22:16you talked about us re-entering
22:18a Siècle des Lumières, an Age of Enlightenment.
22:23I mean, it's so interesting to have a more philosophical view.
22:27You know, because of this doubt that we need to have,
22:30we need to doubt, because, as I said in the introduction,
22:34you know, we cannot trust AI.
22:36I mean, bottom line is that we cannot trust it
22:38as a generic thing.
22:40And because we cannot trust it,
22:42we need to start to doubt,
22:44and doubt, and doubt, and doubt more.
22:45This is exactly what happened, you know,
22:47with the Siècle des Lumières, you know,
22:50back, you know, 300 years ago.
22:51And I think that it's going to happen again.
22:55By doubting, we're actually going to learn, you know,
22:59and this is exactly what happened, you know, 300 years ago,
23:02and this is going to happen again.
23:03Super final word.
23:05Thank you so much for this great talk, really.
23:09Yeah, thank you so much.
23:11Thank you for your attention.
23:13Thank you.