Can Democracy Survive AI?

Category

🤖 Technology

Transcript
00:33So we're really happy to be with you today, and we have a limited but incredible panel of great talent, a
00:41number of experts with us today.
00:44As you can see, Aurélie and Antoine. I guess most of you know who they are, but in a minute
00:50we'll be asking them to tell us more about what they work on, when they started, and how they came to AI.
00:59Today we have a theme which is incredibly broad and challenging in a way, which is, can democracy survive AI?
01:07If you want to define a more threatening and horrible question, please feel free.
01:13This one is really, really hard, you know.
01:15So I think the answer is going to be yes, but there will be limitations, there will be threats, and
01:21there will be measures to take in order for the answer to be positive.
01:25So maybe in order to begin now, Aurélie, why don't you tell us more about who you are and when
01:33you started working in AI?
01:35Sure, so actually, my name is Aurélie Jean, I'm a computational scientist, entrepreneur and author, and in fact I started
01:42to work on AI a long time ago during my second year of undergrad when I took computer science as
01:48a minor.
01:48And then I kept working on that through my PhD, which I defended in computational mechanics on soft matter, and
01:56then I continued as a postdoc, and then as a research scientist in the United States, in medical applications.
02:03And today, I have two companies. One of them, which we're actually launching today, is the result
02:10of work that we've done with our collaborators on early breast cancer detection, using specifically trained hybrid algorithms.
02:20And you're also an author.
02:22Yes, yeah.
02:23You write books, very good books.
02:24Well, thank you.
02:26Recently, one of them was published.
02:26Yes, yes, I usually write essays, non-fiction, but recently I wrote my first sci-fi novel, co-written with
02:35Amanda Sthers, who is a famous movie director and writer.
02:39So you should read that book, because you're going to learn a lot and have a very good time.
02:44Antoine, maybe your turn.
02:47Yes, thanks for inviting me today to share the panel with Aurélie, a long-time friend, to talk about AI
02:53and democracy.
02:54So actually, Aurélie and I, we started in AI almost at the same time.
02:58I've been working in AI for 20 years.
03:00I did my PhD in Paris, and then I worked for CNRS here in France in academic research.
03:06And then I joined the FAIR lab at Meta roughly 10 years ago, where, with Yann LeCun, we started the
03:14lab together at the same time.
03:15And at the end of my career at Meta, I was the director of the lab at the global level,
03:21which is one of the, I would say, maybe the best AI lab out there.
03:26And a few months ago, four months ago, I took on a new challenge, and I joined the scale-up
03:31Helsing, which is a European company that is tackling artificial intelligence for the defense sector.
03:39And I think that's how it's connected to the topic of today, because AI will have a decisive impact on
03:45defense, and I'm happy to talk about it.
03:47So you can see we have two incredible experts.
03:50You can understand from the careers they described that AI is not a recent discovery for them.
03:57I mean, you started some 20 years ago to delve into the pleasures and the realities of AI.
04:03And you can also tell that they have had different paths, from a major company like Facebook/Meta to a scale-up, and
04:10Aurélie with her different startups and her writing.
04:12So I think it's interesting to share their experience with you today.
04:18If we can begin with a very broad question, you know, like the theme of this roundtable: we're talking about
04:25the threats to democracy.
04:26How would you today describe the dangers that our democracy is facing, and in what way is AI one of
04:35the threats?
04:35How would you address that broad question, Aurélie?
04:38Well, the threat, I think, is around information in general, how we retrieve information, how we process information, and how
04:45we communicate on this information.
04:47So that being said, obviously, it's related to the technologies and the media we're using on a daily basis.
04:52So, for a long time, and still today, we've been using newspapers, as you know, with Les Échos and Le Parisien, but we
04:57also use technologies to read those newspapers, and also social media and other technologies to share information, among other things.
05:04So that being said, obviously, those technologies are running on algorithmic models, AI, even though I don't like this word,
05:12but we could talk about that later.
05:13And in fact, in those technologies, based on the revenue model of those platforms, the algorithms are made in such a
05:22way that fake news, false communications, any of those kinds of threats, you know, propagate much further and faster,
05:31among many more people.
05:33So this is a threat, but the algorithm, and the AI model in general, is not guilty, in a
05:40way.
05:40It's the people who decide to build those technologies that way who are in charge of, and responsible for, those
05:45threats.
05:46And that, too, is really important, because there are such threats in many technologies, by the way.
05:50Technology can do good and can do bad.
05:52It's just a matter of how you build it, how you design it.
05:55And even technologies that are designed well might not be used the right way.
06:00So we have to be careful about such things too, and uncouple the different subjects.
06:08When we were preparing that session, we mentioned the fact that AI and the technologies can be both a poison
06:15and the antidote.
06:16They can be the two sides, and they all depend, as Aurélie mentioned, on how they were programmed.
06:22I mean, they are not autonomous.
06:25They don't have a conscience, as far as I understand.
06:27And so someone had to write a program and to determine what kind of paradigms they were going to work
06:33on and how they were going to act in the future.
06:35So, Antoine, one of the things that appeared also during the preparation is among the threats that we can identify,
06:43there are the fakes, the deep fakes, which are linked to technology.
06:47Also, I think all of you know what the filter bubbles can be.
06:51Do you know about filter bubbles?
06:52It means that if you look at a piece of content, the algorithms will offer you more of that kind of content.
06:59It's a bit like more of the same, or sometimes worse of the same, in the same direction, in the
07:04same field, because it is linked, obviously, to the monetization of the content.
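A minimal sketch of the feedback loop described above, assuming a toy feed that ranks by topic overlap with past clicks (all names and data here are hypothetical, not any platform's real system):

```python
# Toy filter bubble: rank candidate items by how much they resemble
# what the user already clicked, so each click narrows the next feed.
from collections import Counter

def recommend(click_history, catalog, k=2):
    """Score items by how often their topic appears in the click history."""
    topic_counts = Counter(item["topic"] for item in click_history)
    ranked = sorted(catalog, key=lambda item: topic_counts[item["topic"]], reverse=True)
    return ranked[:k]

history = [{"topic": "politics_A"}, {"topic": "politics_A"}]
catalog = [{"topic": "politics_A"}, {"topic": "politics_B"}, {"topic": "science"}]
print(recommend(history, catalog))  # politics_A first: more of the same
```

Real feeds optimize engagement with learned models rather than raw topic counts, but the reinforcing loop is the same.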
07:10So, how would you describe the way a big company like Meta works on this? I know you
07:15don't want to, you know, I understand you'd rather not talk too much about that, but, you know, in general terms?
07:21Yeah, sure.
07:21I mean, it's an important issue, and I think it's a general question about access to information and the
07:28vast amount of information that is offered, the
07:29data that are generated, right, as Aurélie already said. And a corollary to this is that AI is now also
07:36helpful for generating much more information, to create, to invent new things.
07:41And so then, how do you process it, and how do you show people what they want to see,
07:46what they can see, because the amount of time is limited, is a big question.
07:50And, of course, we are at a stage where algorithms and AI are used to filter the information that you
07:55see.
07:55That's just the world we are in, and it's the same whether it's on a platform like Facebook or
08:01Meta or any other platform.
08:03So then, the question is like, how do you choose what people are seeing, okay, and I think that, I
08:10think Facebook has tried many, many different ways.
08:13TikTok is trying different ways as well. Google is trying different ways. I think there has been a lot of
08:17research on that.
08:18I think it's a bit like 2015, 2016, we entered this field without really knowing what was happening and the
08:25effects, right, it was like building the plane as you're flying it, as we say.
08:30I think now there is much more knowledge about it, and I would say that the companies go into
08:33this much more with their eyes wide open.
08:36So then the decision is there, but I think we cannot say that we don't know how it works. Now
08:41we know.
08:42And the corollary to this is also the question of moderation, because there is the filter
08:46bubble, okay, which we can treat in different ways,
08:49because you can also use AI to open up people's field of view, right? You can turn it in different
08:53ways.
08:55But the question of moderation as well, which is like, what do I remove from the platform? It's a big
09:01question.
09:01I think we have Elon Musk tomorrow, right? So I guess it's a good question for him on Twitter, for
09:06instance.
09:06But AI can now create the tools to do good moderation. So, you know, when I hear we're
09:13going to be flooded with disinformation,
09:15we might be, but I think you can also put the tools into the hands of people, or news organizations, or
09:21governments, to be able to decide what's actually credible or not.
09:25And then the decision would be, what do I do with it? Do I leave it up? Freedom of speech. Do
09:30I remove it? For safety.
09:31And this question of what do I leave and what do I remove is a decision that's more political and product-related.
09:37But the tools that AI is providing will give you more and more leverage to be able to
09:41choose.
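A hedged sketch of the split described here: a model supplies a score, while the leave/label/remove thresholds stay a political and product choice (the scoring heuristic and thresholds below are toy assumptions, not any platform's real system):

```python
def disinfo_score(post: str) -> float:
    """Toy stand-in for a trained classifier: share of red-flag phrases present."""
    red_flags = ("miracle cure", "they don't want you to know", "guaranteed proof")
    return sum(flag in post.lower() for flag in red_flags) / len(red_flags)

def moderation_decision(post: str, remove_above=0.66, label_above=0.33) -> str:
    """The thresholds, not the model, encode the policy trade-off:
    favor safety (remove) or freedom of speech (leave up)."""
    score = disinfo_score(post)
    if score >= remove_above:
        return "remove"            # safety wins
    if score >= label_above:
        return "label_and_demote"  # middle ground: keep it, but flag it
    return "leave_up"              # freedom of speech wins

print(moderation_decision("A miracle cure your doctor won't mention!"))  # label_and_demote
```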
09:42Maybe just one more question on what you mentioned. When you work in a lab like that, I mean, you were
09:48actually running Yann LeCun's lab.
09:50Do you have in mind the fact that what you do has a direct consequence to democracies, to people's opinions?
10:00I mean, is it on a day-to-day basis that you realize that or is it a bit abstract?
10:04I think there are two things. There's a lot of self-awareness among researchers about the impact of the research
10:09they do.
10:10But I would say that, like much of the tech industry and the AI industry, people were
10:14caught by surprise.
10:15I think 2014, 2016, 2017 was a time when AI and all its impact on information was not as big
10:21as now. And everything came pretty fast.
10:24The same way that now we talk about large-scale language models, ChatGPT. Really, ChatGPT came out six months ago.
10:31So the pace of change is very fast. And the researchers, they adapt very well. And yes, they're conscious about
10:37that.
10:37But it's really like, before, you were working in a lab, you wrote your scientific papers, you put your code
10:42out there, and nobody cared.
10:44And one year later, it might change the whole picture. So we built in a lot of responsibility,
10:49and the community took on the challenge as well.
10:53I think to talk about FAIR and Meta specifically, the research values have always been around transparency and open sourcing,
11:00which I think has been a real moral compass for us, in that the research we do will be
11:04published and open sourced.
11:05So there's full transparency about what we do, what the status of research is and where it is going. So
11:10that the community around us is not surprised that, oh, we didn't expect that to happen.
11:14We'll go back to transparency and open sourcing in a minute. But Aurélie, on your side, did you feel that
11:21you were going to work on something that was going to be so important?
11:26Well, not really, because when I started my undergrad in math and physics with a minor in computer science, everybody told
11:31me, oh, you're not going to make any money, you know. So, well, let's see how it goes, you know.
11:36And things have changed very quickly.
11:38More importantly, after my PhD, I worked in medical applications with medical doctors, using AI algorithmic models to
11:46make predictions and to understand phenomena.
11:48I actually had to take an ethics class for six months. And it was a very rigorous, very long,
11:55intense lecture class. And actually, I learned a lot.
11:58And I learned something that I know Antoine does, because he's a very good scientist, and that every
12:03single scientist and engineer I know does, which is this: we talk a lot about tech for good,
12:08science for good, which is great, you know, we all want to do good, but in reality, when you work,
12:13you should not think of the good you can do, you should think about the bad you can do.
12:20Because in reality, even if you have a technology that works 99% perfectly, what happens with the 1%
12:29that's left, you know?
12:30Especially in the medical field.
12:31Yes, so you have to be very careful. And so every time I build something, I'm actually thinking about
12:37how bad it can be. Because even though I can reach 99%, 90%, there is, you know, the few
12:44percent left, and what happens then?
12:46What happens to the people who are not treated correctly? And we know, with all these matters of
12:52technological discrimination and algorithmic bias, that there might be a small percentage of people who can suffer
13:00from those things. So we have to be very careful when we build them.
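To make the order of magnitude concrete, a quick back-of-the-envelope calculation (the user count is hypothetical):

```python
# Even a 99%-accurate system leaves a large absolute number of people
# mis-served once deployed at scale; 50 million users is an assumption.
users = 50_000_000
accuracy = 0.99
affected = users * (1 - accuracy)
print(f"{affected:,.0f} people fall in the remaining 1%")  # 500,000 people
```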
13:01So if people in that 1% decide to
13:07really drive AI in a, let's say, bad direction, what are the main tools and the main defenses that we
13:16should build in order to get some protection against this?
13:19Well, for such a long time, I thought that we wouldn't need any regulation because every single actor should have
13:26the right governance, algorithmic governance, AI governance, meaning that we build technology by testing the data set on which we
13:34calibrate or train the algorithm.
13:35We test, we have best practices in terms of code writing, code reviewing, and deploying, as well as code testing,
13:42and even algorithm validation and algorithm back-testing once it's in use, sometimes with millions of people.
13:49But in reality, I realized that some actors don't do that, because the revenue model and all the economic
13:55ins and outs, you know, are so huge that they won't really do anything unless there is a
14:04regulation.
14:04So I'm a strong advocate of good regulation, but regulation that actually encourages innovation while protecting the
14:15fundamental rights of people.
14:17So, yes, regulation is one of them, but the underlying process is what I call algorithmic governance, which
14:23is how we build technologies, making sure that we test the data sets carefully and rigorously, as well as
14:30the algorithm and the technology, all the way to the user: once the user is facing the technology,
14:34what does the user know? Antoine was talking about transparency. This transparency is not transparency about the algorithm itself. It's
14:41about whether there is an algorithm running, whether there is an AI running, what it does, and which data is
14:47used.
14:47And so the user is more of a power user, you know, of such technology.
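One brick of that governance could look like the following pre-training check, which refuses to calibrate on a badly skewed data set (the field name, groups, and 10% threshold are illustrative assumptions, not a standard):

```python
from collections import Counter

def check_representation(records, group_field="group", min_share=0.10):
    """Refuse to train if any group falls below a minimum share of the data.
    The threshold itself is a governance decision, not a technical one."""
    counts = Counter(r[group_field] for r in records)
    total = sum(counts.values())
    skewed = {g: n / total for g, n in counts.items() if n / total < min_share}
    if skewed:
        raise ValueError(f"Data set under-represents groups: {skewed}")
    return counts

# Hypothetical calibration set; 'group' could be an age band, sex, skin tone, etc.
data = [{"group": "A"}] * 90 + [{"group": "B"}] * 10
print(check_representation(data))  # B sits at exactly 10%, so this passes
```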
14:52So you're describing a kind of organic AI, something, you know where the products come from, you know, it's local.
14:58Yeah, and we know that it's actually tested. And I know that you always did that, so.
15:03Yeah. How would you describe this transparency?
15:07Yeah, I think, I mean, I think Aurélie said something that I really subscribe to. I feel that AI,
15:13and we said it at the beginning as well, AI has been here for a while, right?
15:16We've worked on it for a while. There's an acceleration right now because the technology is becoming more mature and
15:21can do more things. That's the reality of it, especially in the generative AI space.
15:26That's true. That said, it's still AI, and it's still the same tool. So AI is a tool that you
15:30can use for good and bad usage. Okay?
15:32So for me, the thing is that the usage should be regulated. You could say it's the same way that
15:38if you have a hammer, you know, you can use it to do a lot of construction in your house.
15:42But if you use it to actually threaten someone, that's actually punished by law. So, you know, with AI, of
15:47course, the scale is different, but the problem is more or less the same.
15:51It's the same with AI. If you want to work in certain areas, and I work in defense, you will be
15:55heavily regulated. You will need a lot of approvals and a lot of clearances, because it's important.
15:59That's for the usage. Where I feel we actually need to be very careful about regulation is
16:05with the fundamental bricks.
16:07You know, the building blocks: large-scale language models, generative AI, et cetera. Those, I think, we should leave
16:12free, to set free a lot of innovation, especially in Europe.
16:15We have a lot of examples now of companies rising up to build these fundamental pieces, but maybe encouraging,
16:21I don't know if we should force it, but encouraging transparency and open sourcing of these building blocks.
16:25Because first of all, it would be very helpful for understanding how they work, but it's also very helpful for
16:31building the whole ecosystem of use cases on top of them.
16:33Yeah. If this is useful.
16:35Yeah.
16:35Yann LeCun, whom I worked with very closely, talks about these large-scale language models as the new information highways.
16:41You know, in the same way, when the World Wide Web was first created, there were some proprietary standards. People
16:48were saying, we are going to patent the web.
16:50And at CERN, in Europe, we said, no, this is going to be open source, because everybody should be
16:55able to connect to the web.
16:56Well, you could see a little bit of a similar thing with LLMs. They should be open source, and people will
17:01build things on top of them, and we will regulate what's built on them.
17:03Yeah. One of the things about transparency, I think it's very clear, if I may say, is that it needs
17:09to be really transparent.
17:10You know, when you talk about insurance policies, the terms are written in very small print at the bottom of the page.
17:16But okay, they cover your risk, and they don't usually fully change your life.
17:20When it comes to LLMs, when it comes to AI and its propagation around the world, you really need to
17:25have a transparency which is understandable.
17:28Yeah. So you need to bring it out to the public, you need to bring it out to the politicians
17:32also.
17:32Because you need a framework, and you need transparency, a window, you know, into that framework.
17:39Those two things are in discussion now in Europe. Yesterday, President Macron was with us and he said, well, we
17:45hope that Europe is not going to enforce something that would also crush creativity.
17:53And there is always this tension, Aurélie, I think, you know, between creativity and the framework.
17:58So do you think we need to push? Do you think we need the framework now? It's very difficult.
18:03I think we have to do what we did with the GDPR a few years ago.
18:09I read very quickly, you know, the articles of the new regulation that came out about two years ago, the
18:13first version of which was in April 2021.
18:14And I saw something at that time that I still see today: we don't really make any distinction between
18:20research and industry, you know, commercialized products.
18:24And I think we have to be careful because, as Antoine said, we have to encourage innovation.
18:28And if we don't give research scientists, private or public, by the way, the latitude to work, we won't have any opportunity to innovate
18:36as much as we could.
18:37And I think making that distinction matters. For instance, if you look at medical data, which is the
18:42most sensitive data under the GDPR, if you work on a research project, you know, you are much freer
18:49than anyone in a company, you know, making commercialized products.
18:54Free to find.
18:54Yeah. So I think we have to be careful about how we distinguish those two, because if not, we're going
19:01to close doors.
19:02But in fact, we still want to protect a minimum, which goes with transparency: which information we provide to the
19:09user, among other things, based also on the level of risk of the given algorithm.
19:16Yeah.
19:16Great. OK, so we have four minutes left. And I have a very simple question, actually.
19:21At Les Échos-Le Parisien, we decided to publish a manifesto on the way we use AI, generative AI.
19:28It was super simple. We said, we're not going to use it to generate text.
19:32We're going to use it to prepare the text and the information that the journalists are going to write.
19:37There will always be a journalist acting, and we're not going to use pictures except to illustrate a text about
19:43generative AI.
19:44OK, so it's super simple in order to be frank and transparent.
19:48Now, how would the two of you, each of you in two minutes, which is going to be a bit
19:51short, how would you describe in general terms for our audience the difference between AI and human intelligence?
19:59What can human intelligence do that AI cannot do? For instance, I have in mind intuition, emotional intelligence.
20:06How would you... because it's super interesting to know how we're going to rank alongside AI.
20:11Maybe I'm going to start quickly and then you're going to finish.
20:13So, to answer that question, you have to describe what is intelligence.
20:17Yes.
20:17And so you have to go back to the theory of intelligence.
20:19Yeah, so very quickly, at the end of the 19th century, we thought that intelligence was only analytical intelligence.
20:25So that's why we talked about the IQ only.
20:27Then in the 80s, Robert Sternberg, you see, an American psychologist, actually worked on the description of
20:34intelligence, saying that there is not only analytical intelligence, but also emotional, creative, and practical intelligence.
20:39And that being said, you understand that the machine, the algorithm, you know, masters analytical intelligence only.
20:45It can simulate the others, like if a chatbot is telling you, I love you.
20:51Well, it doesn't mean that it loves you.
20:53It means that it simulates the emotion.
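A toy illustration of that point, assuming a crude next-word table standing in for a language model: the chatbot's "I love you" is just the most probable continuation, with no emotional state anywhere in the program:

```python
# Hypothetical next-word probabilities; a real LLM learns these from text.
next_word = {
    ("i",): {"love": 0.6, "think": 0.4},
    ("i", "love"): {"you": 0.9, "pizza": 0.1},
}

def continue_text(words, steps=2):
    for _ in range(steps):
        options = next_word.get(tuple(words[-2:]))
        if not options:
            break
        words.append(max(options, key=options.get))  # pick the likeliest token
    return " ".join(words)

print(continue_text(["i"]))  # "i love you": pattern completion, not feeling
```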
20:56So, yeah, and I let Antoine finish.
20:58Yeah, thank you.
20:59I feel that, you know, we've just talked about democracy, and now I work in defense.
21:05And I think the reason I work on AI and defense is that democracy is not a given, and democracy
21:11needs to be defended.
21:12And that's something that we see now.
21:13So I think we talked about the threat of AI to democracy.
21:16I think there are also very real threats, like armies, that threaten democracy.
21:20And that's the starting point.
21:21And when you ask the question about human intelligence and artificial intelligence, if you look at the battlefield, I think
21:26what we're seeing now is that the decision about engaging or addressing the target is a human decision.
21:31Okay, this is not something we play with, right?
21:35This is a human decision.
21:36This is what the armies are actually working with.
21:38This is a doctrine.
21:39So because the human takes the responsibility, the human can apprehend the situation, the human has been trained for that.
21:46So I don't know if you want to call it intelligence or not, but that's the human factor in the
21:50force.
21:50But we are reaching the point where relying on humans alone to integrate all the information on the battlefield becomes limiting.
21:56Either you have many more humans, or you actually help the humans make the decision with AI.
22:01And that's what we're trying to do because the next war will be decided by AI, whether we like it
22:07or not.
22:07It's a reality.
22:08And so now it's like, how do I build a symbiosis between AI systems that can integrate the information and
22:14the human at the center that can use this information to decide what to do with it?
22:17Yeah.
22:18I think that's an incredible conclusion also for us.
22:22AI can also help defend democracy.
22:25Of course.
22:25Of course, yeah.
22:26And this is probably what you're doing now at Helsing, the company you're working with.
22:30Yeah.
22:31Can you tell us a bit more about that?
22:32Yeah.
22:33I mean, the statement is really that I don't think we have a choice but to build AI to defend
22:39democracies and to help the armed forces.
22:42Because the amount of information on the battlefield, and the new armaments that are being built, are too fast
22:47or too complex for humans to be able to apprehend.
22:51And so, the same way you have AI algorithms that help you triage information, as we discussed at
22:56the beginning, in your daily life, we feel it's time for the armies to be able to have the
23:01same level of quality in the AI that they have, which is not the case right now.
23:04And so the premise of a company like Helsing, which is a private European company, is to bring
23:09this technology to the European forces.
23:11Wonderful.
23:12Aurélie, one last word.
23:14Well, just one last word, because we talked about this with Antoine: when I was in Boston, my research was
23:19funded by the military, the US Army and the Navy.
23:21So I know the defense world very well.
23:22And I'm a strong advocate of the creation of a DARPA in France, you know, where defense would be
23:27in charge of deciding which projects to fund, you know, at large scale.
23:31Because the defense sector in France and the United States, and actually in Europe as well, has a very strong
23:37capability to innovate, and we have to acknowledge that, and we have to encourage it.
23:41Yeah.
23:42Thank you so much for that word of hope.
23:44Time is up, but the discussion can continue around the conference.
23:49Thank you so much to the two of you.
23:51Thank you, everyone.
23:52A round of applause.