AI, Trust and Media

Category

🤖
Technology
Transcript
00:00Thank you so much James and Anne for that invigorating conversation.
00:05I absolutely adored the rapport with which they engaged with each other.
00:11Now remaining on the social aspect of AI, we'll also explore the intricate relationship
00:18among artificial intelligence, the media and journalism integrity.
00:25In today's digital age, AI plays a dual role in shaping media narratives, offering both
00:32opportunities and challenges.
00:35From AI-generated news articles and photos to the proliferation of fake news, we'll delve
00:41into the nuances of distinguishing fact from fiction.
00:46We will also examine how AI can offer a competitive advantage when it comes to investigative journalism
00:53and even wartime reporting.
00:57Prepare to hear very contrasting and dynamic views here as we navigate this complex terrain and
01:03uncover the intersection of AI, trust and the media.
01:08Please allow me to welcome Arthur Mensch, co-founder and president of Mistral.
01:15Meredith Whitaker, president at Signal Foundation.
01:21Gianluca Mezzofiore, editor of the Open Source Intelligence team at CNN.
01:27The moderator of the session is Pierre Louette, president of the Les Echos-Le Parisien group.
01:34The floor is yours, Pierre.
01:37Thank you.
01:38So hello everyone and thank you for being so many here today with us.
01:42We have brought to you an incredible panel of people who are, I think, willing to share
01:47views, insights, experiences with regard to a pretty broad and general set of themes because
01:55I was reading again the paper, AI, trust and media.
01:59Which word did we miss?
02:00Is there one word, one important word?
02:02Maybe democracy should have been there or freedom also.
02:05So I'd like to give you a few words of introduction with regard to this panel and then we will
02:13introduce each one of our incredible panelists and then start the conversation.
02:17So first of all, just a couple of figures.
02:20You've already heard so many figures about AI and the propagation of AI in our societies.
02:26I was reading those two figures that showed the incredible excitement about AI, 100 million
02:33ChatGPT users worldwide in just two months when it took nine months for TikTok to reach
02:39that 100 million barrier and 30 months for Instagram.
02:44So the propagation speed is always, always quicker and apparently ChatGPT and other LLMs
02:50have met the public interest in a major way.
02:53At the same time, and this is very relevant for the people who, like me, are in media,
02:5960% of the global public does not trust information.
03:03And that's an issue that we share, Gianluca, but it's a major problem for many of us.
03:09This figure even goes to 68% in the US, 70% in France, according to the Reuters Institute
03:16reports from last year.
03:18So there is both excitement and fear.
03:21Fear, the Latins would say, is bifrons.
03:24It has two sides.
03:26Excitement comes with fear sometimes, but AI probably can be the best of the world and
03:31also the worst.
03:32And people tend to feel that and they need more explanation.
03:36This is exactly what we're about to try to share with you and them in this very moment.
03:41So first of all, I want to introduce Meredith Whitaker in the center of the stage.
03:47Meredith has an incredible career spanning from Berkeley to 13 or 14 years at Google, and
03:55now she's the chair of the Signal Foundation.
03:58So we will talk more about Signal, but maybe Meredith, just a few words from your side about
04:03yourself and what took you here, actually.
04:06Yeah.
04:06Well, that's a very long story.
04:08But I think I entered tech in 2006 and kept asking questions about how it worked and who
04:14it worked for.
04:16I've ended up working on issues of privacy.
04:19I co-founded the first university-based research institute looking at the social implications
04:25of AI, not just the technical methods.
04:28And all of that, through some winding path, led me to Signal, which is the world's most widely
04:35used, truly private messaging app, which has given me a very privileged inside view on
04:43the interconnectedness of issues of surveillance, communications privacy, and these large-scale
04:50AI systems that depend so much on massive amounts of data, often collected via surveillance.
04:57Good.
04:59We'll go back to you in just a second.
05:01Gianluca Mezzofiore, you must be Italian in some way.
05:04Yes.
05:06Not fake news, right?
05:07I need confirmation about that fact.
05:10And so Gianluca is the award-winning editor of the open-source intelligence OSINT team based
05:16at CNN's London Bureau.
05:18So just a few words from your side on yourself.
05:21Yeah.
05:22I started at CNN as a social discovery producer, so pretty much social news gathering.
05:29And then my career led me to work on OSINT, which is open-source investigations.
05:39At the time, in 2020, there wasn't really an open-source intelligence team at CNN.
05:46So we started working on several OSINT-led investigations, and it kind of became mainstream.
05:54The watershed moment was with the war in Ukraine, and obviously, we're deploying many of these
06:01techniques and tools during the current war in Gaza.
06:06So, yeah, that's pretty much it.
06:09Okay, thank you.
06:11And so, Arthur Mensch, who's on my right-hand side.
06:16Arthur, after very extensive studies, if I understand well, studied for many years.
06:24You have to study many years.
06:26AI is complicated.
06:27It's a complicated matter, so you have to study for many years.
06:30And then he came up with this incredible idea of Mistral, and Mistral has been very, very central
06:35in the worldwide and French press coverage in recent months, because you've raised money.
06:42You've been able to come up to the forefront.
06:44So what would you tell us about your path to AI?
06:49Because all of you had a path and a way that took you to AI matters.
06:53So I got into AI, just like my co-founders, as a scientist who started to
07:00work in the French public sector and then joined DeepMind for a couple of years, and left
07:05to create an alternative for AI that would be more decentralized and more empowering for
07:12developers, I would say.
07:13Okay.
07:14Okay.
07:15So to start, you know, sharing views, maybe we would like to have a few words on the situation.
07:22I tried to mention the fact that we both had, at the same time, a lot of excitement and some
07:27fear.
07:29Maybe if it's, I know it's very difficult, but for all of you, how would you, in a nutshell,
07:35describe the promise of AI to our societies?
07:39I know it's very broad, but in a nutshell.
07:42And also, bear in mind that some of the most recognized, I say, specialists of AI tend to
07:50have this opinion saying AI will not be beyond human intelligence.
07:54I think it's an interesting point of view also that we need to discuss.
07:58How would you take position on those two things?
08:02Meredith?
08:03Oh.
08:03I'm sorry.
08:04We're starting with you again.
08:05I'm happy to start with that very big question.
08:08Yeah.
08:10You know, I think the promise of AI is fairly straightforward, but offers a lot of pathways.
08:17AI, as we are describing it now, because this is a very flexible term that has been used to mean
08:23many things over its more than 70-year history, describes pattern recognition, statistical technologies
08:33that recognize patterns in large amounts of data, generally, and can reflect back to us
08:40what those patterns are.
08:41So, there is a promise there, if we are methodologically sound with our data collection and creation,
08:49if we are using the insights that those patterns give us to tell us things about our world and
08:57to act on them for the social benefit, that those insights can really help with things like
09:03climate, with these big issues where we need to process and understand a lot of information.
09:09I think the peril here comes from the political economy of the AI industry as it exists right
09:19now, which currently relies on huge amounts of computational infrastructure that is incredibly
09:26expensive, huge amounts of data that is often unethically, illegally, or otherwise collected
09:34through surveillance practices that can include biases or other harmful information, and that
09:41I think very important for a French and European context is largely controlled by a handful
09:49of U.S.-based companies who have a monopoly over the platform market, have a significant advantage
09:59in data collection and creation, and are able to cross-subsidize massive computational infrastructures
10:06based on their business model.
10:09So, right now you have the U.S. cloud companies with over 70% of the global market, and a
10:18very big question
10:20about how other jurisdictions create an AI that serves their citizens and their public interest.
10:27I think there are some really interesting efforts there, but looking at concentration of power, particularly
10:32at this very politically volatile moment when the U.S. stands on the brink of a number of pathways, some
10:39good and some bad,
10:41is a really important task, particularly in the European context.
10:46Good. I think we'll go back to two points that you addressed. Regulation, we'll go back to that point, and
10:54also media.
10:57I want to address at one point the relationship that we can all have in the media world with AI
11:02-driven solutions.
11:04Gianluca, for you, entering into AI has been something that has added, I think, to your profession.
11:10You've been on the ground many times, even on difficult grounds, but now you've changed the way you work.
11:19Yeah, I think, as the introduction to this panel said, you know, AI can be both a challenge and an
11:28opportunity.
11:28I think we're living in a time in which disinformation and misinformation are flooding the zone, and it's becoming incredibly
11:38easy to flood the zone with misinformation and disinformation,
11:43and AI, in a way, is supercharging that, because once you needed a troll farm in Macedonia or in Ghana,
11:54which is one of the stories that we covered back in 2022 about this troll farm that Russia founded in
12:06Ghana,
12:07and you had to actually hire people to create misinformation and disinformation, now it can be easily done just with
12:14AI, AI-generated websites.
12:16There's been an example of this very shady website called DC Weekly, for example, that has been spreading misinformation about
12:27Zelensky's wife,
12:29the wife of the Ukrainian president, buying expensive jewelry while in New York, and this website is pretty much made of
12:40AI-generated content.
12:42So you can see how easy it has become to flood the zone, and this is posing an incredible danger
12:50for society, and for journalism as well, because journalists need to rebuild trust with their audience in the face of
12:59such amount of misinformation, disinformation.
13:01And the way we are doing it at CNN is trying to show our workings, be transparent, show the methodology
13:10behind our investigations, so many of our investigations now have a long methodology note.
13:19In terms of the opportunities, we used AI, as Meredith said, for pattern recognition, for an investigation back in December,
13:30to detect craters in a bunch of satellite imagery in northern Gaza,
13:36and that has been incredibly useful for us, because combing through such a vast amount of data would have required
13:44months of work, while it was easily done with AI, partnering with an AI company, Synthetaic, and training the
13:53AI algorithm to detect those craters.
13:55And then, obviously, that had to go through the manual process of reviewing the findings, sifting through the false positives,
14:04and making sure that the findings were accurate.
14:09So I think, in a nutshell, there are huge societal dangers of AI, but there are also a few opportunities
14:18for journalists for these pattern detections.
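The workflow described here, automated pattern detection over a large image set followed by manual review to weed out false positives, can be sketched roughly as follows. This is an illustrative toy, not CNN's or Synthetaic's actual pipeline: the fixed intensity threshold stands in for a trained detector, and the function names, grid, and values are all invented.

```python
def detect_candidates(tile, threshold=0.8):
    """Score every cell and flag those above the threshold.

    Stand-in for a trained detector scoring satellite imagery;
    the real workflow trains a model on labelled crater examples.
    """
    hits = []
    for y, row in enumerate(tile):
        for x, value in enumerate(row):
            if value > threshold:
                hits.append((y, x))
    return hits

def human_review(candidates, confirm):
    """Keep only the detections a human reviewer confirms."""
    return [c for c in candidates if confirm(c)]

# Toy 4x4 "tile": one genuine crater, one bright false positive.
tile = [
    [0.1, 0.9, 0.0, 0.0],
    [0.0, 0.2, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.95],
    [0.0, 0.0, 0.1, 0.0],
]
candidates = detect_candidates(tile)           # [(0, 1), (2, 3)]
confirmed = human_review(candidates, confirm=lambda c: c == (0, 1))
print(confirmed)                               # [(0, 1)]
```

The division of labour matches what is described above: the machine narrows months of sifting down to a short candidate list, and the journalist keeps editorial responsibility for what is published.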
14:21Yeah, it's obviously one of the themes of this panel to draw things close to the media, but obviously, I
14:29think AI was not meant especially for media.
14:31So there are so many fields that you mentioned in which AI will make a huge difference, will be able
14:36to address huge amounts of data and process them, and it will be a lot of good, you know, probably
14:42in health tech and education.
14:44In media, it's true that it gives examples of things that we don't want to see, you know, so it's
14:50one of the angles that had been put forward.
14:52Meredith, you wanted to add something that I saw during the conversation.
14:55Yeah, I think there's a key point here around the misinformation and disinformation capabilities of particularly generative AI, because I
15:05don't think we can understand that danger without also focusing on the fact that we have a very small handful
15:12of large social media platforms that are increasingly dictating our global information ecosystem.
15:20Or to put it another way, we don't walk down the street and just trip over a deep fake.
15:25We don't trip over a website that is spreading disinformation about Zelensky's wife that is amplified via these platforms.
15:34And if we look deeper, we begin to see that one of the most profitable and most applied uses of
15:44these AI technologies is actually in ad targeting.
15:47It is by these social media platforms to tune their algorithms for engagement, to target their ads based on pattern
15:56recognition of who clicks and who doesn't and all of the data they collect about all of us.
16:01And therefore, there is an incentive built in there to elevate shocking, confusing or simply false information.
16:11And I think we need when we talk about the misinformation dangers of AI, we should be focusing at least
16:17as much on the ad targeting and engagement driven algorithms used by these monolithic platforms as we are on the
16:25tools that are then used by the people feeding content into these platforms, particularly because four of the five of
16:32these platforms are under U.S. jurisdiction.
16:34And we are looking at a very volatile election season in which governance and regulatory intervention over those platforms could,
16:44to put it diplomatically, change dramatically in the next few months.
16:48So let's talk about a platform that is not based in the U.S., that is French born.
16:53So what is Mistral's promise in that field of AI, trust, truth, media?
17:01The technology we build, I think, enables two things. It enables interactive access to knowledge, and it enables you to
17:09create software that behaves in a fuzzy way.
17:11So you can automate some processes that you previously could not automate because of the noise that you typically have
17:19in a process.
17:19And that's a noise that an AI, a generative AI, is able to handle as well as a human.
17:25So it brings a new abstraction on top of programming languages, and that's a great tool.
17:30It's basically a more abstract tool than the programming languages we used to have, one that enables you to create new
17:35software.
17:35That's the one thing that we want to enable. And on the other side, basically what we're doing is compressing
17:41human knowledge and exposing it in a conversational way.
17:44And if you look at this, this tool of accessing knowledge, this is a great tool for media.
17:50Because now, as a user, instead of reading things that have been written by journalists, you can navigate in an
17:59interactive way into that journalist-edited content.
18:03So that's a way of saying, OK, I'm interested in that specific thing about Gaza and then asking questions over
18:09it and being able to progressively iterate into a whole corpus of data.
18:13So that's the promise, I think, that the kind of technology we're building holds for media and journalism.
18:23It enables us to provide better information to citizens.
18:26So it's useful for media, it's useful for public services as well.
18:30It allows us to dig into the law in a way that is conversational, that is adapted to the audience.
18:37And I think that's really the benefits of what we're building.
18:42And now, obviously, the issue that comes with it is that since it's generating content, there's a cultural aspect to
18:48it.
18:48You need, as an application maker, as a newspaper, for instance, to make your own choices when
18:55it comes to what the model is going to output and what the model is going to direct the user
18:59toward.
18:59And now, as Meredith mentioned, this is both a great thing because you can modify it, you can edit it
19:08in a way that suits the editorial tone of the newspaper.
19:11But if it belongs, if the technology only belongs to a couple of players that happens to be US-based,
19:16then the control is somewhat reduced.
19:19In a world where no actor is trying to provide a decentralized way of accessing AI, which is
19:27what we're providing with open source technology, we might end up in a setting where US companies are setting the
19:35editorial tone of the entire world and are basically dictating how we should think about things.
19:41And that's really something that, by making our own technology from Europe and addressing the entire global market, we want
19:49to prevent.
19:49We are bringing customization capabilities, decentralized deployment capabilities for journalists to get the technology, make it their own, create great
19:59products for information consumers.
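The interactive, corpus-grounded access described above is typically built as retrieval followed by generation: fetch the most relevant journalist-edited articles for a question, then have the model answer from that context only. A minimal sketch, in which a crude word-overlap retriever stands in for the embedding search a production system would use; the corpus, function names, and prompt wording are invented for illustration:

```python
def retrieve(corpus, question, k=1):
    """Rank articles by word overlap with the question.

    A crude stand-in for the retrieval step; real systems use
    vector embeddings rather than raw word overlap.
    """
    q = set(question.lower().replace("?", "").split())
    scored = sorted(
        corpus,
        key=lambda a: len(q & set(a["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(articles, question):
    """Assemble the context a language model answers from,
    keeping it grounded in the journalist-edited corpus."""
    context = "\n".join(f"- {a['text']}" for a in articles)
    return f"Answer using this reporting only:\n{context}\n\nQ: {question}"

# Toy two-article corpus standing in for a newsroom archive.
corpus = [
    {"title": "Energy", "text": "gas prices rose sharply this winter"},
    {"title": "Gaza", "text": "satellite imagery shows new craters in northern Gaza"},
]

question = "What does satellite imagery show in Gaza?"
top = retrieve(corpus, question)
prompt = build_prompt(top, question)
print(top[0]["title"])  # Gaza
```

Each follow-up question re-runs retrieval over the same corpus, which is what makes the navigation iterative, and the newsroom keeps editorial control by choosing what goes into the corpus and the prompt.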
20:01Well, that's an incredible promise. I hope we can all use it. Because when I hear some companies talk about
20:09the next generations of AI, I feel it's the same story that I heard many years ago as sort of
20:17a veteran creating websites in 1996 or those years.
20:21And the new platforms appeared and said, we're going to make all of you very rich by sending you traffic.
20:28We will share traffic and you will be hugely rich.
20:30And meanwhile, you have to bear in mind that, you know, the French media press world lost 50% of
20:37its revenues over 10 years, 50% disappeared, and it didn't just vanish.
20:42It went to Google, to be transparent. So in this new emergence of a new generation of technologies, we all
20:50want to make sure that the ingredients of technology knowledge, the ingredients of democracy are preserved.
20:57If you don't have any journalists writing anymore, if they don't go on the field anymore because they're not funded
21:02anymore, we're going to have a huge problem.
21:04So I think Mistral's point of view from that point of view is super interesting and makes us very hopeful
21:09that we will be able to protect that field.
21:13One thing that I wanted to address also with Meredith was it's a bit of a paradox when you mentioned
21:18sharing information, distributing information.
21:22And your company has a reputation of being the most encrypted one. So the most private one in the world,
21:29actually.
21:29So we're talking about sharing. We're talking about making things easier to be shared and understood by the people.
21:35And I was reading this article from Les Echos Start, which is one of our publications, and saying, you know,
21:40Edward Snowden says he uses it every day.
21:42The European Commission recommends it to its personnel for any discussion with an external party.
21:49Signal puts all of the people who are worried about protection at ease: the protection will be
21:56complete.
21:57So how do you, you know, explain this need to be super protected in a world that needs to share
22:03information in order to be richer?
22:06Well...
22:06Apparent paradox only.
22:08Yeah, I think there is, you know, the world contains multitudes. We all need to sleep and we need to
22:14be awake, right?
22:15You know, there's room for a lot of different modalities. But maybe we can back up for a moment.
22:21You know, we are in an age of unprecedented mass surveillance, generally conducted by, you know, conducted by large corporations
22:30via a platform business model,
22:33whether that's, you know, via social media or marketplaces or our cell phone provider collecting our location data as they
22:40ping a cell phone tower.
22:41There's more data about us than has ever been in centralized hands in human history at a time when we
22:50see authoritarianism rising.
22:54So I think we need to be sober about that because we do have an opportunity to change it and
22:58then understand Signal as a nonprofit committed to building truly private technology for communications,
23:07for journalists who are talking to their sources, for human rights workers who are getting vulnerable people out of authoritarian
23:15contexts,
23:16for whistleblowers at corporations who may be breaking the law or endangering the public.
23:21For all of these uses, we know instinctively that we require privacy, the ability to speak intimately and openly to
23:30explore ideas and to develop them outside of, you know, those who have power over us scrutinizing and potentially weaponizing
23:42that information.
23:42We don't have to go far back in European history to look at the terrifying specter of centralized surveillance used
23:49for oppression and social control.
23:52So those are the stakes and that's why Signal exists and why we continue to be committed to not only
24:01developing an app used by millions and millions of people around the world for truly private communication,
24:08but to actually rewriting parts of the tech stack in the open, making those openly available to others who might
24:16want to raise the privacy bar for the industry overall.
24:20Because we don't think mass surveillance should be an economic driver of a centralized industry that is controlling so much
24:26of our sensitive core infrastructures across governments,
24:29across industry, across core social institutions, particularly given that there are only two jurisdictions that have serious industries in terms
24:39of platforms and tech and those are the US and China.
24:42So we're talking about a global issue that needs to focus on these dynamics and I think needs to recognize
24:48that there can be another way.
24:50There can be open possibilities. There can be private possibilities. Technology doesn't have to look the way it looks now.
24:56And we actually can, I think, preserve privacy and get the benefits of computational tech.
25:04So we only have three more minutes to go, but I'm super happy that it's the first time that in
25:10one of those conversations,
25:11it's not a European that raises the question of regulation.
25:14Thank you for doing this. It's an epiphany for me. It's really the first time that I hear it.
25:19But we need to applaud Meredith. Just a word on regulation and global governance.
25:28One thing that I haven't really heard today when it comes to AI and the massification of AI-based solutions,
25:36I think it's the first time in the history of humanity that we externalize so much of our knowledge.
25:41It's the massification of externalization. You know, computers were a place to put some knowledge in.
25:48But now it's become huge because it's like we know that the AI industry today lacks data.
25:53They would need more data to improve and to train their models.
25:57And everything will be included. Everything will be computed.
26:00And a lot of the answers will be outside our brains. You know, we will get used to prompting in
26:04the right way.
26:04I mean, at least it's a danger. So we need to organize that. And you were mentioning very rightly the
26:11idea of governance.
26:11So in two minutes, maybe, how do we work on a global governance which is not exclusively based on two
26:20continents
26:21ruling the knowledge of the world?
26:23I think one of the only validated ways of doing global governance
26:28for software
26:28is through open source governance. So sharing the technology, enabling as many actors as possible to take a hold on
26:36the technology
26:36and deploy, for instance, AI systems in a way that escapes the control of very large companies and very large
26:43public cloud companies.
26:44And so in that respect, it's very much like Signal. Signal allows you to have private conversations.
26:49If you want to have private conversations with a chatbot and access the knowledge without having other people knowing about
26:55it,
26:55you do need to deploy it yourself. And in that respect, that's one of the things that we offer. And
27:01the open source is part of it.
27:02So that's, I would say, that's probably the most sensible way. I know that we are working together at finding
27:10ways of regulating AI on a global level.
27:12It would be useful for us as a company because we would pay a lower cost of compliance. But again,
27:19we have validated ways of regulating software
27:22and I think we should, as much as possible, stick to them.
27:27I think it's important for media organizations and newsrooms to self-regulate when it comes to AI, to have some
27:36internal guidelines,
27:37making it very clear when AI is being used, making it very clear about the methodology.
27:41And I think, to go back to what Meredith is saying, you know, we use Signal every day to communicate
27:50with our sources and whistleblowers.
27:52And it's absolutely key to preserve the rights to privacy, particularly at a time in which we see so much
28:00predatory behavior from tech companies and corporations
28:04in terms of stealing information and archives that sometimes, you know, we're talking about decades of work, of professional work,
28:17of news companies.
28:20And I think it's important. So I think both sides, the rights to privacy, but at the same time an
28:27openness and transparency
28:27when it comes to media organizations showing their workings, showing how we know this, what we know, what we don't
28:34know,
28:34and making sure that our audience is informed and engaged with our findings.
28:40Okay.
28:41And we'd love to help journalists to work.
28:43Thank you. Any help is welcome.
28:45Thank you very much for this conversation and a warm applause for you guys.
28:52Thanks so much.
28:58Thank you.