AI Ubiquity The Good, the Bad, and the Ugly
Transcription
00:33And as an introduction, let me just say this, AI is obviously already present in countless aspects of our lives,
00:41from social media and what the algorithm brings up to you as content, the software running, our power grids.
00:48But as its capabilities grow in accuracy and in sophistication, so do, of course, its applications.
00:55So where will this increasing use of AI take us?
00:59What is AI bad at?
01:02And what are the risks, obviously, and opportunities?
01:05These are some of the questions that we're going to be putting to our next panellists.
01:09And this is, in fact, one of these interactive sessions.
01:13So please send in your questions.
01:16We want to be able to hear what you think about these issues.
01:19Here's what you do if you've not been through this before.
01:22Connect to the VivaTech platform via the app.
01:25Then you select the tab in the drop-down menu titled Interactive Sessions with Slido.
01:30You go to Stage 3 and then you're ready to interact with us.
01:34Please do.
01:35Don't be shy.
01:35We're loving all the questions we've been having throughout the day so far.
01:39Now, speaking of our next session, and I think it's time for me to introduce you to our moderator.
01:45We'll make sure to take your questions.
01:47So this will be Jennifer Schenker, Editor-in-Chief at The Innovator, and she's going to be hosting four eminent
01:54panellists.
02:21Good afternoon.
02:23I'm Jennifer Schenker, Editor-in-Chief of The Innovator, and it's my pleasure to moderate this panel on AI ubiquity,
02:33the good, the bad, and the ugly.
02:36Now, this morning, we heard from McKinsey that as much as $4.4 trillion in value could be unlocked by generative
02:47AI,
02:47and that between 30% and 50% of the tasks that knowledge workers perform could be done by AI.
03:00The change is coming.
03:04It's coming swiftly, and it will have a huge impact on business and society.
03:09This panel is going to focus on how do we prepare for this very different future.
03:15So it's my pleasure to introduce, to my immediate right, Hanel Baveja, who is a principal at Creandum,
03:26which is an early-stage venture capital firm with a focus on Europe and beyond.
03:33We have, next to her, Carolina Bessega, who is Innovation Lead at Extreme Networks, a networking company.
03:45And we have Françoise Soulié, who is Scientific Advisor at Hub France IA, and, you know, one of Europe's most
03:55distinguished AI experts, if I may say.
04:01Extremely distinguished.
04:05And finally, last but not least, we have Bertrand Pailhès, who is Head of Technology and Innovation at the CNIL,
04:13which is the French data protection authority.
04:18So with that, let's just dive right into the conversation.
04:22And, you know, we're supposed to look at the good and the bad and the ugly.
04:27Let's start with the good, and let me ask what you see as the upside, the potential of AI.
04:36And I'll start with you, Hanel.
04:39Great.
04:40I think the microphone's on, right?
04:42Everyone can hear me?
04:43Okay, perfect.
04:44Just checking.
04:44Move it up.
04:45Okay, I'll put it up here.
04:48Okay, I'll pick one example to start with, and that is one statistic, which is that the first weekend that
04:56ChatGPT was released,
04:57to the public, you had over a million people use it in 48 hours.
05:03And I think that's amazing.
05:05From sort of a technological adoption perspective, it's one of the recent shifts sort of in the last decade where
05:12you have, you know, tons of people fascinated by a piece of technology
05:16and wanting to use it and try it for themselves.
05:19So I think that alone is pretty fascinating and amazing.
05:23What's your take, Caroline?
05:26Hello, everybody.
05:28So when it comes to the benefits of AI, I like to divide them into multiple categories.
05:36The first thing that comes to mind is solving problems that are complex enough that humans have trouble doing
05:47them.
05:47For example, decision-making when you have a very complex manufacturing problem.
05:53You also have large-scale personalization.
05:57It's impossible for me to know, for example, what each one of you in this audience likes.
06:01But if we had AI here, I might point at one of you and understand what you want us to
06:10be saying.
06:11And also it unlocks a lot of other things.
06:14And I want to leave a place to my colleagues, but I want to give an example more from the
06:20more humanitarian perspective.
06:22For example, enabling people who have disabilities to have a better life.
06:27AI can help people who have impaired vision by translating the world, the context of the world around them, for
06:37them to be able to move around much more easily.
06:41So AI definitely can benefit us from all perspectives.
06:46Françoise, you've been in this sector for a long time.
06:49Yes.
06:50Like, how giant of a leap is this?
06:52And what do you see as the potential?
06:55Maybe I will talk about a project that we've been running now for three, four years for the Paris region.
07:02The idea was we have many SMEs.
07:06SMEs represent about 70 to 80% of the jobs and of, you know, basically what we make in France.
07:15So the idea was can we help SMEs to do their first AI project?
07:22And we said they can do it in three months with a given budget of 37K euros, funded half by
07:33the region.
07:34And we've done now 70 projects.
07:38All of them were successes except two.
07:41In those two projects, basically, they were fighting, right?
07:44So we basically couple an SME with a startup usually who's providing the technology.
07:58And the projects are extremely varied. Like, let me give you an example.
07:58A company whose business is to sell actors who are going to dub movies.
08:05So I come to this company, I say, I want a movie actor capable of dubbing George Clooney.
08:14And they describe the voice of George Clooney, right?
08:18They describe the job.
08:20And then the company searches through all their files to find the appropriate guy.
08:26You put AI in that and you do that in 10 seconds.
08:29So the thing which is amazing is that among the 70 examples of projects we have, there is a huge
08:39variety.
08:40Every single SME I have on the phone for one hour usually, I will find projects where you can benefit
08:49from AI.
08:50Very often it's simplifying the business, reducing the cost.
08:58But sometimes it's developing a new product, a new offer.
09:02You create value like that you couldn't do before without AI.
09:06I think that's a super important point because a lot of the discussion about AI is always about increased efficiency
09:13and productivity, which is important.
09:16But it's not the only thing.
09:18And I personally wrote about an example of an SME in Germany in the mining sector.
09:24They mine minerals that are used in paints and other things.
09:32And they realized they had kind of gone as far as they could in that market.
09:37And they also needed to be more sustainable.
09:40And they ended up cleaning up their data, hooking up with a startup, and applying AI.
09:46And long story short, they figured out, hey, if we structure our data in such a way, we have
09:53the knowledge that instead of selling minerals and shipping them across the world, we could just sell the formulas to
10:01companies elsewhere.
10:02And then, you know, we become greener, and we have a new source of revenue, and
10:10we might not even have to mine anymore in the future.
10:13So it completely changed their business model.
10:16And this is the power of the transformation, I think.
10:19Yeah.
10:19This is what we see with these SMEs, that AI basically is a game changer for them.
10:25At the beginning, they don't even know how to spell the word, right?
10:29But at the end, they say, I want AI everywhere, right?
10:33So.
10:33Three months.
10:35Bertrand.
10:37Hi, and thank you for having me here.
10:40What I see as good about AI is what technological progress has been for the last perhaps two centuries:
10:48when you have a system that can replace painful tasks performed by humans, as it has been
10:56the case in mines.
10:57I think nobody regrets the times when you had to have humans in mines to do the mining.
11:02You're happy to have robots.
11:04And probably with ChatGPT, for instance, or generative AI, a lot of people will be happy to not be
11:11obliged anymore to draft some reports that no one reads,
11:17and to have a robot do that in some way.
11:20So this is a good part.
11:21It has short-term effects that need to be addressed.
11:25But in the long run, all this progress will help humans focus on more interesting ideas, more interesting tasks.
11:34And I think it's really the good news about the development of AI.
11:37Okay, thank you.
11:38And so, because the topic here is not just the good, but the bad and the ugly, let's try to
11:44have a serious conversation about the downside.
11:49Because, you know, in the news, we see some people in AI or Silicon Valley saying, literally, AI will save
11:58the world.
11:59And others literally saying, AI is going to destroy the world.
12:02It's going to destroy democracy.
12:04And some go as far as saying, it's going to make humankind extinct.
12:10So what's the truth?
12:15It's a great question.
12:17And I think if any of us had the answers, you know, we probably wouldn't be on this panel.
12:22We would sort of be out there trying to accelerate it or stop it, depending on what we thought.
12:28I mean, really, I think I have more questions than answers.
12:31I think, so a couple of things.
12:33I think we're still in very much the early innings of adoption, not just for consumers, but businesses.
12:40And, you know, even when you think about OpenAI and sort of Google and some of these large players, I
12:46mean, you have new players getting funded.
12:49You know, there was an announcement today about a French company called Mistral that's building an open source model aiming
12:55to compete with OpenAI.
12:56So I think we're just in the earliest days around observing the good, the bad, the ugly.
13:03I think one topic that has been on my mind and has been in the news a lot is around
13:09the data that goes into training AI models.
13:13And there's sort of an interesting debate that I'm kind of watching curiously between the sort of like black box
13:21models and open sourcing everything.
13:23And I think there are a lot of compelling arguments as to why transparency and open source is sort of
13:29the best path for visibility as to what these models entail.
13:33But, you know, unfortunately, I don't have an answer to your question on whether it saves us or kills us.
13:38But I'm eager to sort of watch and find out.
13:42Okay.
13:43Thank you, Caroline.
13:44Okay.
13:45So great question.
13:48So AI, this is my opinion.
13:52I think that we're still far from arriving at this general AI that could maybe take over the world.
14:00So I will focus on the downsides that we have in hand, on the current things that we actually need
14:08to be looking at.
14:09And I think that the most important thing is to understand the limitations of those models.
14:19And let me go with an example here.
14:21Probably everybody in this room uses or has used ChatGPT.
14:26ChatGPT came out to the world and took everybody by surprise.
14:31Now, if I go to ChatGPT and I ask, hey, can you give me a reference of a case where
14:40AI can be useful in medicine,
14:42ChatGPT is going to come with a very, very, very clear example.
14:47Even if I ask, can you give me the reference, it's going to tell me, oh, in this medicine journal,
14:53blah, blah, blah.
14:53And all that was invented.
14:55And it was invented not because ChatGPT has an error.
14:59It's because it is a generative model.
15:02So when OpenAI launched that very simple interface for everybody to use, the idea was to launch it
15:11more in a research phase, to see how everybody interacts with it.
15:16So it doesn't have the guardrails that are necessary.
15:19And that will come with my advice at this point is that what we need, the thing that we need
15:27the most is education.
15:29We need to understand people that is not in traditional AI roles, people that are product managers, UX designers.
15:37These people need to understand, if I'm going to deploy this model, what data did I use for training?
15:43What is this data bias?
15:46The model needs to be transparent enough for me to understand what it's doing.
15:50And I need to understand what is the limitation.
15:52Is this generative?
15:53What does it mean that this is generative?
15:55Does it have guardrails?
15:57Or do I decide to put the right guardrails in my product?
16:00So yes, it has downsides.
16:02But I think that we can solve them.
16:06Françoise.
16:07Yeah, I think I would like to go back a little bit about risk.
16:12If you remember a little bit about what happened with AI, it basically was deployed in the last 10 years
16:20in companies.
16:21And I'm not talking about ChatGPT.
16:22Let's say we stopped in December 2022.
16:26Yet at the time we had already identified many risks that AI has, like, for example, discrimination.
16:34We train AI on data.
16:38Data comes from the real world.
16:40And the real world is, let's say, crap.
16:44It's biased.
16:45It's full of wrong things.
16:48So if you train on data which comes from the real world, you're going to get a system which is
16:53biased and which can discriminate against women,
16:57discriminate against people who are not white or whatever.
17:01So we know that there is risk in AI.
17:05And this is why, in Europe, at least, we started, the commission started in April 2021 to produce what is
17:15now called the AI Act.
17:17The AI Act was really produced to say, we want the benefits, but we do not want the risk.
17:24So if we do not want the risk, we have to protect, to regulate, so that if we put an
17:33application on the market,
17:34it's going to get there, you know, with a stamp, which is CE, you know, approved, so that you can
17:44trust it.
17:44So trustworthy means we know there are risks, but we know we can trust this technology because it has been
17:52through a process to check everything,
17:56and we can make sure that we will benefit, not risk, right?
17:59Absolutely. So I think you touched on a very important point that, you know, never mind thinking about, and Caroline
18:09alluded to this,
18:10never mind thinking about, you know, some kind of theoretical harms in the future.
18:15We know today that there are harms in terms of bias, but also data privacy, which is a great segue
18:23to you, Bertrand.
18:25So, you know, if you talk to the Silicon Valley companies, they will tell you, oh, regulation kills innovation, and
18:37we can self-regulate.
18:39What do you think about that? And, you know, why do we need to take steps in Europe to guard
18:46data privacy?
18:49I really, I'm really in line with what Françoise said. I think we can make AI for the better if
18:56we are able to organize a dialogue,
18:57not only among Silicon Valley industry players, but with the civil society in the U.S., but obviously in Europe,
19:06and the regulators in every continent, and the end users, the people that will be affected by the system.
19:13And I think this is really the key point that we need to build. So first is to build this
19:18dialogue.
19:18I know companies are engaging themselves in that kind of dialogue with their own users, their employees, their regulators.
19:28But obviously the dialogue will not be the same in every country, in every society.
19:34And in Europe in particular, we are particularly sensitive to data protection and privacy and a specific way to protect
19:45privacy.
19:46Actually, other continents are as concerned with privacy as we are, but we have built this framework, this legal framework on
19:55data protection
19:56and a few principles that do not exist in other continents and that should apply to those AI systems.
20:03So in the end, we need to find the right balance.
20:07But sometimes I feel like, on this aspect, I'm a little bit like someone in the chemicals industry one
20:17century ago,
20:18where I say, oh, it's great, you can do many things with all the chemistry and all the science that
20:24we have invented,
20:25that we have discovered.
20:27But obviously, in the end, you will need some rules because it's not the same to put chemicals in the
20:32body of people as to just use them to make toys.
20:35It was not harmful to build plastic bags.
20:38Everybody will say, okay, it's okay, plastic bags, but now it appears to be not such a good idea, probably.
20:43So I'm convinced that, 10 or 20 years from now, we will have a regulatory
20:53framework
20:54and hopefully it will be balanced and the way the AI Act is approaching that with the high risk thing,
21:01okay, law enforcement, health, HR, those are high risk, but the rest is not.
21:06I don't know, video games is not, even ChatGPT actually is not at this stage.
21:11From a data protection point of view, we don't see many risks.
21:15We see less risk with that than we see with classification AI that, for instance, targets anti-fraud systems,
21:24where you could have more discrimination and more effects on people.
21:29But it's also, I see you react, it's also because with generative AI, we're not sure about the context
21:37in which it will be used.
21:39Obviously, if it's used by a doctor to draft a report, it's different than when it's used by a student
21:45or for commercial content.
21:49Well, not completely.
21:51From what Bertrand says, it's not only protecting, you know, personal data.
21:57If you talk to a company, their data is their asset, number one asset.
22:04So, if ever an application, an AI application, you know, ChatGPT or other, is using their internal data,
22:12they don't want that.
22:14They want that to be completely protected.
22:16And maybe you can tell us a little bit about the Cloud Act, for example, right?
22:21Yeah, I agree.
22:23I agree that, actually, personally, we are a data protection authority.
22:27We are here to remind all companies that they need to care about not only their strategic data, but
22:32the data of their users.
22:34So, this is our role, and some integrate that, others don't.
22:39I agree with you, they're cautious about generative AI, they're cautious about putting sensitive data in ChatGPT, but at the
22:47same time, most of them are using AWS or, I don't know, cloud services where they put all their data.
22:53So, I'm not sure they reveal that much more data when they use generative AI than they actually do today.
23:01Caroline, you wanted to say something.
23:03Yes, I want to make a comment.
23:04So, I work in the private sector, in a company, and I actually think that regulations actually help companies
23:14at this point, more than they stop innovation.
23:18And the reason why I'm saying this is because, imagine my company wants, and in fact, we are thinking about
23:26launching AI products using generative AI, let's say, in Europe.
23:32For me, it would be much simpler if I understand what are the regulations rather than increase the risk by
23:39launching a product that I don't know is going to be regulated after the fact.
23:43So, for the people who want to do things right, and that is most of the companies, I think that having
23:50the right regulations and understanding what can and cannot be done protects the company and helps the company to
23:58lower the risk when it comes to AI products.
24:01Okay, great point.
24:02So, let's now move towards impact.
24:07So, we've talked about, you know, how generative AI is going to replace a lot of the tasks that knowledge
24:17workers do.
24:18What is the impact of this going to be on companies?
24:23How do they have to reorganize themselves?
24:27How do they, what can they do to prepare for this huge change?
24:35I take it?
24:36Okay, yeah.
24:38One thing is clear is that when you start developing an AI application in a company, you're not done when
24:48the AI application is finished.
24:50You have to put it in production within your environment, within your business processes.
24:57And usually what happens is that the fact that now people have to work with AI makes a big difference
25:04to their life.
25:05So, it makes a big difference also to the way they are organized.
25:10So, the first thing which companies have to do is actually to remodel, rethink, reorganize all their business processes.
25:21It is what big companies, you know, consulting companies call change management.
25:27I would say that AI is a beautiful thing for companies selling change management because it's absolutely needed.
25:35If you just put AI on the field, it will fail.
25:40The people in the company will not want to use it.
25:44Suppose, and this is a real example,
25:47I'm an insurance company that is paying out, you know, claims that customers file because they had an accident.
25:56So, obviously, you have doctors who examine the claim, the images, the various medical services that the customer got and
26:07say,
26:08yes, okay, you can pay, or no, you should not pay, right?
26:10So, they try to detect whether the claims are right and should be reimbursed.
26:18Obviously, in many cases, you can automatize the work of a doctor.
26:23Like, you know, you have to look at an X-ray.
26:26An analysis by AI is usually going to be more efficient than a doctor's.
26:30Give this application to a doctor just like that, and he would say, no, thanks.
26:35So, you have to really train them, make them learn how to use things like that.
26:42And what the people who did that told me is that, we have a word in French, how do
26:49you translate that?
26:50Basically, they played with the AI trying to fool it, right?
26:54Trying to see that, you know, the AI was making a mistake.
26:59So, they had a long period of playing with the AI until they could trust it
27:07and until they could figure out how their work was going to be organized for them to really profit.
27:14And now they use it, but they don't use it for everything.
27:18Sometimes the AI says something and the doctor says, no, you're wrong, right?
27:23So, you have to learn that.
27:25Well, this morning, McKinsey was saying that, you know, AI will act as a co-pilot with workers.
27:31And so, it's not going to make the employees redundant, but the employees do need to understand it.
27:38They need to, and they have to learn how to use it.
27:42Yes, so, I want to touch exactly on that point.
27:46So, I think it's unavoidable that every single position at a company is going to be impacted by AI.
27:54And by impacted, that doesn't mean that you are going to lose your job.
27:58It's that the way you approach your job is going to be different.
28:02For example, one of the things I do on a daily basis is I write code.
28:07I'm a programmer.
28:08I write code.
28:09And often, I use AI tools.
28:13Like, for example, there is a tool called Copilot that you can use in order to generate some pieces
28:19of that code.
28:21Sometimes I do that for very simple functions, or, for example, to generate documentation.
28:27Writing that by hand is super annoying.
28:29So, I tell the system, hey, generate a documentation for me.
28:33I read it.
28:34A task that was taking me half an hour now takes me five minutes.
28:39Does that mean that I'm going to lose my job?
28:41No.
28:42What that has meant for me in the last three months is that I have been much more creative.
28:47I have been able to innovate more, to create more things.
28:51I have much more time for the things that are really important.
28:55So, now, what companies need to ask themselves is what roles in my organization can be positively impacted by these
29:04tools?
29:05For example, you are a product manager, and you need to write all these stories in Jira.
29:09Super annoying.
29:10So, how are all these tools going to impact you, and how can you create more and more value for
29:16the enterprise with the time that now AI is giving you?
29:21So, this is what companies need to be thinking about, and you as employees, too.
29:25This comes with a warning that, as you said earlier, you need to check, right?
29:33You need to check the AI.
29:34So, we were speaking earlier.
29:36There was a press report a few days ago about a lawyer in North America who asked ChatGPT to write
29:43his whole argument for court.
29:46And he used it, and it turned out that all of the cases that he cited were made up by
29:53ChatGPT.
29:54And now he's being sued because he misled the judge during court.
29:57So, let me piggyback there a little bit.
30:00This goes to what I said in the other answer, that you need to understand the limitations of AI.
30:08You cannot use this blindly, and you need to check, and you need to understand when you can use AI
30:13for something, or a particular tool, because AI is too broad.
30:16I'm talking really very broad here.
30:18But when you can use a tool for something, and what level of checking you need to apply, and this
30:25is also based on what risks it imposes for your role and for the enterprise.
30:30So, that's a good point.
30:32On the legal example, I mean, I think exactly to your point, figuring out sort of what are the actual
30:38use cases where you can trust AI.
30:40And, you know, generating a creative legal argument is probably not the right use case.
30:45But there are a few startups that are quite interesting that are automating something called document review, which is the
30:51process that paralegals go through where they spend hundreds of hours basically combing through these extensive legal summaries and essentially,
30:59you know, highlighting or digging for information.
31:02Like, that's something actually that an AI tool or a software tool can help you do quite well.
31:08And then, I think it's augmented by human work.
31:11So, I really agree with the co-pilot framing that McKinsey used.
31:15I think that's the right way to think about it in business applications.
31:18Sometimes AI can surprise us, too.
31:21There was also a story yesterday, which I found really interesting, because I think most of us feel like we
31:29have certain human qualities that AI won't replicate, and yet doctors are asking it, you know, how to break bad news to patients,
31:36because they need it to supplement their empathy, which is amazing to me.
31:44So, sometimes I think, you know, we will be surprised at how people will use it.
31:51But going back to the business applications, we talked about how workers can use this and become more efficient or
32:00it can, you know, totally replace some of the tasks they're doing, unleashing creativity.
32:06What about senior management?
32:09What about the people running companies?
32:10What about the people on boards who know nothing about artificial intelligence but yet are responsible for the governance of
32:18the companies?
32:19What needs to be done to get these people up to speed?
32:22Is it possible to condemn them for murder?
32:29Well, we might not be able to do that.
32:32We should, because they are basically killing their companies.
32:37If the big manager doesn't understand what's going on in this field, one way or another, their company is going
32:44to go down with all its employees.
32:47This is why I say you should condemn them for murder.
32:51Do you want to answer that, Bertrand?
32:54No.
32:55I think what's important to implement AI within companies is really to train everybody, and not just to, say, okay,
33:05be familiar with AI, but to really be able to identify which processes are relevant.
33:10AI is good for anything that is repeated a lot and that has structured data.
33:18So I used to say that you have like three main fields.
33:23So computer vision, natural language processing and structured data.
33:27And so first, as a company, you need to even just understand that, okay, what are my processes that rely
33:33on that kind of asset?
33:36Well, do I read draft reports all the time, which is the case of an administration like mine?
33:41So obviously now with generative AI, I ask myself, okay, it's probably useful for us as well, because
33:49like 70% of the time is drafting, is writing things.
33:53So obviously an AI that helps write things is probably useful.
33:59But in many other cases, it's really about having good data, and if you don't have
34:05good data, you won't have a good AI.
34:07And so you need to be trained about what is possible, what will be possible, and what are the key
34:17factors of success.
34:18What are the quality of the data, the quality of the people that I have, and perhaps what are the
34:24new functions that I will need to enter.
34:28As we mentioned the co-pilot, probably in the near future, you will have some AI operators in different fields.
34:35Their expertise is really to converse with and prompt generative AI, or to fine-tune a model.
34:43All those are new functions in the companies that leaders should be aware of and be able to say when
34:53it's the right time to integrate those.
34:55So we've talked now about impact on business, we've talked about reskilling of workers, we've talked about what leaders need
35:03to learn.
35:04What about the impact on society? What about deepfakes?
35:09We already know about the unintended consequences of social media.
35:18We know that generative AI is going to put deepfakes on steroids, right?
35:26How do we, and we know that our children are already starting to use this stuff, and so how do
35:36we train society at large to discern the truth?
35:43Will we even be able to discern the truth anymore?
35:48So this is actually a very, very complex question.
35:53Because actually I see this as a race, a race that has happened in the past.
35:59I can imagine, for example, the race between cyber security and the hackers.
36:04So always people want to make their systems more secure, and on the other hand you have the hackers always
36:10being smarter trying to do something, something bad.
36:13So a similar situation is happening, and it's going to continue happening with AI.
36:20AI grows, the technology becomes really useful.
36:24For example, it might be very useful to take videos and change the voice for legitimate reasons, to translate to
36:34another language.
36:35But at the same time, the same technology can be used in a bad way.
36:39So the first thing I think is that technology, and this can be a new career path.
36:44I'm pretty sure that, in the same way that there is cybersecurity, there is going to be
36:48AI security,
36:50where there are people who are going to be developing algorithms and software in order to detect all these
36:55things.
36:57You might say, oh, the ideal situation is to have regulations and teach everybody ethics, and this is great and
37:04I think that we need to do it.
37:06But do we really think that everybody will do it only because of regulation or because of ethical principles?
37:16So I actually think that AI needs to develop technologies to detect its own fakes and become better than
37:25the bad people who want to use them in a bad way.
37:28The problem is that this is going to be running and running and running, always one step behind.
37:36For example, today, if you are a journalist, one of your big tasks is to check your source.
37:44It's becoming almost impossible.
37:48Suppose you have a video and you want to include this video in your story.
37:54It has to be true.
37:56How do you check it's true?
37:58The people in the Hub that we have as members, they tell us that they spend a day on one
38:03video to check it.
38:05So it's becoming almost impossible today to do that checking.
38:10Are we going to invent new solutions to say this is fake, this is not?
38:14Yes, we are.
38:15And then other people will invent new ways to create more fakes.
38:19So this is just the beginning of running forever behind.
38:23So maybe what's the most important is teaching critical thinking.
38:29So let me go back to how we deal with this as a society.
38:35And your job at the CNIL, you know, is to deal with some issues that are at the heart of this, like
38:43data privacy.
38:44How are you dealing with that so far?
38:46How did your publication in May address some of the issues?
38:51So the first aspect: we're regulating data privacy in Europe.
38:56The main difficulty is that you have very high-level principles of data protection, like purpose limitation and proportionality, and you
39:05need to apply them to new technologies that are constantly changing.
39:08So the idea is really to find the right balance to say, okay, please try things and we will look
39:15at these.
39:17One of the key aspects of our regulation, when we set the rules, is to try to anticipate the
39:23side effects, saying, okay, this is allowed.
39:25What are the side effects of allowing that or forbidding that?
39:29And it's always difficult.
39:30So at this stage, I think we should be quite cautious about how we regulate these AI systems.
39:41The main point would be to say, please give rights to the users, help them to understand what is happening,
39:48help them to oppose, to have the right to oppose, to say, I don't want to be part of that.
39:53And I think the key point for the societal debate, for the ethical aspects, even for companies to
40:02know what is acceptable or not, is really transparency.
40:06So we've been talking about open sourcing some of the models.
40:10I think it's really a great idea because it helps a lot of people to check on the system, to
40:16test it in ways that a limited number of persons within the company or within the regulator cannot do.
40:26So I think the key aspect, the first aspect at this stage is really to increase transparency and to have
40:33documentation, to have source code if possible, and to have information about data sets.
40:40And that helps: you've seen all the open-source models, you have all
40:47the people trying to play with them, and they spot the defects, they spot the discrimination, and it helps the
40:53community as well.
40:54And I think it's what we need to focus on for the next couple of months.
40:58Thank you. So let me ask the other panelists, given the potential upsides of AI and the fact that we
41:09need to progress, what in your mind are the most important things that we need to do to establish trust
41:19in AI?
41:24I think there is only one thing: you need to train people to understand. You need to train them to
41:31see what you're talking about.
41:33You need to help them understand what their benefits will be and what their risks will be.
41:39So if you train the people, after that you can put a regulation in place which will be accepted and
41:47which will be meaningful.
41:48The only trouble is that the regulation works for the good guys. It doesn't work for the bad guys out
41:56there.
41:56So we need to train people, of course; otherwise they will lose their
42:03jobs to people who are trained.
42:05So we need to train them, we need to put this regulation in place, but then we still have a
42:12problem, which is the dark web, as it's called, the dark side of the world where attacks come from,
42:19and we need to control those people because they will not obey the regulation, of course.
42:26So in my opinion, I think that right now we are in a transition phase.
42:33So when I say transition, it's a transition to trust. So as this is a transition phase, I see it
42:39as a journey.
42:40So as a journey, at the very beginning, we need to disclose a lot in terms of
42:47transparency: how the model was trained, what data was used.
42:53And over time, the final users are going to develop the critical thinking to understand which model is good
43:04for one purpose and which model is good for another.
43:06This is going to come in our normal vocabulary, and it's going to be something that in five, ten years,
43:12everybody understands.
43:13The limitations of AI, what can be done, what matters about the data, what fairness and bias mean.
43:21Right now, we need to be really digesting that for our users, adding a lot of guardrails, and I
43:27totally agree with François that the most important thing is education.
43:33So every opportunity you have to educate somebody, not on how the algorithm works, but on the important aspects
43:42of putting AI into production, take it.
43:46So, Anel, you work with startups, you invest in them. What do you think, what's your message to them?
43:53What are the things they could do to help build trust in AI?
43:58Well, I think the trust framing and the trust question is super important.
44:03Something I've been thinking about is how, as a startup or company, do you create alignment with your users, right?
44:11And I think the sort of trusted brands that will emerge and be successful are the ones where users feel
44:17that the platforms have their best interests at heart.
44:21And I don't know yet what that means.
44:22One example might be this: assume we're very close to a
44:30world where an AI agent can act on my behalf on the internet.
44:33It can make financial decisions for me. It can book travel for me.
44:37If I want to switch from one provider to another, you know, how can I port my data and preferences
44:43from one service to another?
44:44I mean, again, I have more questions than answers, but I think creating models that are aligned with
44:50user incentives and behaviors will be really key to building a trusted brand.
44:56Thank you. Good advice, I'm sure. We are almost out of time. Any quick last thoughts for the audience?
45:09I only want to add one thing. No matter what your role is within the organization, you don't need to
45:17be a data scientist or an AI researcher to have an interest in learning AI for your role.
45:25And again, I know I'm emphatic on this because every time I see courses on AI, they talk about mathematics
45:32and algorithms and you need to know Python.
45:35No, this is not what you need to know. You need to know what data means, what fairness, bias,
45:43limitations, and generative mean, all of that.
45:46Okay, last thought, François. Learn. Learn. Train. Protect your personal data. And also your company's data.
46:03So I think what we've learned here today is that, you know, every company will need to adapt to AI
46:11and that, you know, AI will not necessarily erase your job, but you could lose your job to someone who
46:22knows how to use AI if you don't.
46:25And with that, we're out of time. Please don't go away. We're going to be back in less than five
46:31minutes with a panel on AI and liability.
46:35So thank you very much. Let's have a nice round of applause for our panellists. Thank you.
46:56Thank you.