Is Generative AI Feeding the Fake News Machine?

Category

🤖
Technology
Transcript
00:02Welcome, it's great to see everyone here. I love to talk to the audience,
00:07so sorry to those of you I will be showing my back to.
00:11Welcome this afternoon. You've heard a lot about AI and how it's really going to change and shape
00:19what we believe in.
00:22And of course, a picture is worth a thousand words, and we've entered a time in the history of humanity
00:29where, even with pictures, we won't believe our own
00:38eyes, we won't know whether something is true or not.
00:41But the real question, and I'm going to get started with my co-panelists in a little bit, is even
00:47when we know something is fake, are we going to really care that it's fake?
00:51And is that going to stop us from disseminating and distributing fake information?
00:57So the discussion today is about just that and more.
01:01So I'm going to start real quick by introducing to you my amazing co-panelists: Chine Labbé, she's with NewsGuard,
01:11Alexander Nikolas, he's with XPRIZE, and Mario Vasilescu with Readocracy.
01:17So we'll start with you, Chine.
01:21From your own work, can you explain a little bit the state of affairs?
01:24What do we mean by fake news?
01:26What is really happening?
01:28Kind of give us the lay of the land.
01:29Right. So, maybe just to start: this year, misinformation and disinformation were ranked the biggest short-term risk
01:40globally by the World Economic Forum.
01:42That jumped from 16th position last year.
01:46Why? And it's very clear in their report, because of the rise of synthetic content, so AI-generated misinformation.
01:53So what do we mean when we talk about AI-generated misinformation?
01:57There's, of course, deepfakes.
01:59You've all seen, I think, deepfakes of Zelensky, of Biden, of all the world leaders.
02:05There's voice cloning.
02:06And, I mean, we've had recent examples of how threatening that could be to democracy.
02:10In Slovakia, in the last elections, a fake audio clip was released just days before the election.
02:19And then there's counterfeit websites.
02:21And that's the one thing I think that people have less in mind when they think about AI-generated misinformation.
02:27And that's something that actually, the company that I work for, NewsGuard, we monitor a lot.
02:32So AI-generated news sites that are completely automated, generated by AI tools like, you know, chatbots, the ones that
02:41we use too, ChatGPT, Google Gemini, etc.
02:45And those sites are not edited.
02:47There's no human editor behind them.
02:50So they're, by definition, unreliable.
02:51So that's the new form of content.
02:53The phenomenon is not new, but it changes the scale.
02:57And just to give you an example of the massification that it represents, we started monitoring these sites back in
03:04early 2023.
03:07In May, we had found 49 of these sites.
03:10And now we're up to 840.
03:12So it's not just growth.
03:14It's exponential growth.
03:16It's an explosion.
03:17Not all these sites, to respond to your question, are spreading misinformation.
03:21Some of them are just made for advertising.
03:24Actually, half of them carry ads.
03:25So that's their main goal.
03:27But some of them push misinformation.
03:29We've already seen that.
03:30And I'll just give one example.
03:32Last November, there was a big hoax that you might have seen, because it really made the rounds on social
03:37media, according to which Netanyahu's psychiatrist had committed suicide.
03:43Well, the alleged psychiatrist doesn't exist.
03:46So by definition, he didn't commit suicide.
03:48But this was pushed by an AI-generated news site.
03:51And it was all over.
03:53So the big risk I wanted to stress today with AI-generated
04:00misinformation is when you have really, really bad actors seizing on that opportunity, creating propaganda news sites entirely generated with
04:10AI that cost nothing to create and nothing to keep alive.
04:15And just as one final note, and I'll end there, to give you a sense of how cheap and how
04:21easy it is to create these content farms.
04:23A colleague of ours at NewsGuard decided to try.
04:27So he went online and asked a web developer: can you create a content farm for me, make it a local
04:33news site in Ohio, and generate false political stories to support a given candidate?
04:39It took him two days and $105.
04:42And he said it was as easy as ordering on Uber Eats, basically.
04:46So that's where we are today.
04:48Not to depress everyone.
04:50But the threat is very real.
04:52Thank you for that.
04:53That's quite an illustration of how exponential this is.
04:56It's breathtaking how fast and how cheaply all of this can manifest.
05:03And it won't be all negative.
05:06We'll talk a little bit about solutions in a minute.
05:08But I want to go to you, Mario.
05:10So tell me a little bit, from your vantage point, what are the implications for consumers, given the trust deficit
05:18between audience and the fourth estate?
05:21I'm talking, when I say fourth estate, I mean not just the news business, but also the media industry in
05:27general.
05:28Yeah, or experts in general.
05:30The loss of trust and expertise.
05:34I think it's a perfect segue, because of what Chine was mentioning about, you know, this effortless, low-cost proliferation.
05:42When we talk about trust, one of the main issues is what is visible.
05:47The physics of attention, there is only so much time in a day.
05:50There's only so much attention you can have.
05:53And one of the best tactics to combat your enemies, whether it's true or false, is to flood the zone.
06:00And so when it becomes so cheap and easy to flood the zone, I think this is going to be
06:04one of the biggest issues.
06:07And if you're not familiar with the concept of the bullshit asymmetry principle or Brandolini's law, you should be.
06:13Because if we could solve this, we'd probably solve a lot of the issues in society.
06:16And it says, roughly, that for every provocative lie somebody puts out, it takes an order of magnitude
06:21more effort to clean that up and re-educate people.
06:24And this goes hand in hand with flooding the zone.
06:26And so AI is going to make our trust in institutions much easier to harm: much easier to hide the
06:32credible information by flooding the zone,
06:34leaving us to do all this extra legwork, which we're already struggling with.
06:39And I think that brings me to the second point I want to emphasize, where with all the mania around
06:44AI and the hype around AI,
06:46we are so compulsively swept up in diving into it and pretending it's this whole new thing.
06:52And in some ways it is, but I really think we have to remember that it is a layer on
06:56top of the fundamentally broken information ecosystem,
07:00which we have not addressed.
07:01And so when you talk about lack of context, AI could help with that.
07:07Now you can more efficiently add context on the information you're getting.
07:12But, you know, for example, the fact that we haven't addressed the issue that we tend to only think of
07:18the platforms for accountability,
07:20but we don't hold the major influencers accountable.
07:23Why is it that you can have over a million followers and you're held to the same standard as somebody
07:28with three followers?
07:29And this power is going to increase.
07:31And we haven't addressed that from a legal perspective, which is like a regulatory vacuum.
07:35And so now we're, again, looking to the platforms to regulate, to look at what they're doing with AI when
07:40it's actually the bad actors.
07:42It's kind of like if your town were full of drug dealers selling hard drugs and you could spot
07:48them all around you.
07:49And instead of calling them out, you'd say, OK, you keep doing you.
07:52I'm going to go find out where you got the drugs, and just let it be.
07:55That's what we're already doing.
07:56So when you throw AI on top of that, it becomes even more explosive and toxic.
08:00So I think we have to watch that dynamic.
08:03Thank you for saying that.
08:05Also, it's not that this is brand new, right?
08:10It's existed.
08:11We've had misinformation, disinformation.
08:13This has happened across human history.
08:16It's just that the toolbox now is exponential.
08:19So I think that's the part, too.
08:21And it's not as if we were dealing with a healthy trust situation to begin with.
08:26Things are just getting worse in that sense.
08:28And Alex, I want to go to you next. We now know the landscape.
08:35It's not looking good.
08:36It's actually looking pretty dystopian.
08:39In a super election year like this one, where, you know, almost half the world's population
08:45is going to have democratic elections,
08:46what does that mean for policy, dealing with such an exponential storm, basically?
08:54No, thank you for that.
08:56And I do want to carry forward this conversation on trust because at its core, we're dealing with essentially broken
09:05trust in society.
09:05And most democratic societies are actually held together by trust.
09:10When citizens no longer believe that they can trust an election, when they no longer believe that their voices count,
09:19then they often resort to other means of exercising their voice.
09:23So what's at stake is essentially our democratic systems.
09:30To provide some additional context to the conversation around policy.
09:35So I'm from an organization called XPRIZE.
09:37And our focus is on finding the hardest problems in society and rallying the world's minds to solve them.
09:44I think we landed on a pretty hard problem.
09:50So here is how we are thinking about this problem, where we see a clear policy intersection.
09:56One is we can think of the problem in the context of the attention economy, the algorithms that are driving behavior.
10:04We can think about how do we actually counter those forces?
10:09How do we incentivize different business models?
10:12What are the policy opportunities for us to create new business models to compete with existing business models?
10:20Or to force the big tech giants to change their behavior?
10:26Another bucket of problems is where there is a huge opportunity for policy to play a significant role.
10:31And I think you've seen a number of conversations throughout the last couple of days on deepfakes and best-in-class
10:37technology for identifying them.
10:40From our research, I think it's become increasingly clear that just identifying deepfakes is not enough.
10:49Because what we have learned from this process is that our willingness to believe whether something is true or not
10:58is often tied to identity.
11:01And so identifying what's true and what's not true
11:08must be coupled with media literacy and this notion of pre-bunking.
11:13How do we better prepare society to actually deal with essentially this deluge of misinformation that's coming at them?
11:21What we are also experiencing is that, you know, every citizen is just not prepared to deal with a lot
11:29of information.
11:29So often, they go to trusted sources, which gets us to this notion of echo chambers,
11:35which essentially calcifies certain belief systems, which makes it increasingly harder for true information to actually take hold.
11:44And this goes, you know, we're going to switch over.
11:48This is a great opportunity to kind of start talking what can be done.
11:51But what you mentioned is actually quite striking, right?
11:55If the solution is labeling something that is fake, right?
12:00Well, that's not enough.
12:02Because if it fits with my ideology, or it fits with what I believe in or want to believe
12:07in,
12:08or it fits with my agenda, will I keep sharing it?
12:11Would I care that it's labeled synthetic content, for example?
12:15So I think that goes to the heart of, like, the next question that I have for all three of
12:18you.
12:19What do we do about it?
12:21If it's not as simple as labeling it as fake news or that is synthetic, what can be done?
12:27Well, I think we should still label.
12:30But I agree with you that it's a short-term solution.
12:34And labeling, that is, detecting and identifying synthetic content, we know
12:38it's not perfect.
12:39You can't say with 100% certainty that something is synthetic.
12:42But I think it's good that we have more and more tools that help us because of the law that
12:47Mario is just talking about,
12:49the bullshit asymmetry.
12:50When you are a fact-checker, when you are a journalist trying to fight the battle of disinformation,
12:55it can be really depressing.
12:57And there are days when you feel like you're trying to empty the ocean with a teaspoon, really.
13:02And so it's important that we have more tools.
13:05I think it's important that we see how AI can help us as well while working on solutions.
13:11So two things that I think are important in terms of how AI can help us and what we can
13:18do to mitigate the risks of AI.
13:20I think to mitigate the risks of AI, the first important thing is to make the generative AI industry consider
13:30and understand its vulnerabilities when it comes to disinformation.
13:34So that's something that we try to do.
13:36So we did, for example, audits of the main generative text AI tools to see what was their propensity to
13:46replicate false narratives if put in the wrong hands.
13:49We ran them on ChatGPT, at the time in its fourth version, and on what was then
13:56Google Bard, since renamed Gemini.
13:59We found that in 80 to 98% of the cases, the tools would repeat false claims.
14:05They would create very persuasive propaganda.
14:07And academics have shown that AI-generated propaganda can be nearly as persuasive as old-school, traditional propaganda.
14:16So the risk is there, but I think what's important now is that the industry understands it and works on
14:22mitigating those risks.
14:24How can you mitigate the risks?
14:25There are many things that can be done, even though they might not be perfect.
14:29I think any step we can take to diminish the volume and the propensity of these tools to repeat false
14:36narratives is important.
14:37So, for example, what we do at the company that I work for, at NewsGuard, is we do red teaming
14:42of those different tools.
14:44And we help the companies see, okay, in what percentage of the cases does my tool repeat false claims?
14:51And what can I do to mitigate the risk?
14:53We just did that with a text-to-image tool from Microsoft called Designer.
14:59And it can seem little, but I think it's huge.
15:02We prompted it to generate images repeating or reinforcing well-known false narratives.
15:09And we found that in 12% of the cases, the tool was producing dangerous images.
15:15That can seem like little, but it is a lot, considering how many people are using these tools now.
15:22And so, Microsoft worked on additional mitigating measures.
15:27And then, when we redid the exercise, we found that this had dropped down to 3.6%
15:34problematic content.
15:35So, I think any step we can take in the direction of mitigating the risk, and first making this industry
15:42aware of the potential adversarial uses when it comes to disinformation, is very important and can really
15:51make a big difference.
15:52Because, as we were all saying, it's nothing new.
15:57It's just massifying problems that we've always had.
16:01And so, we need to find a way to combat that on a larger scale.
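To make those red-teaming audits concrete, here is a minimal sketch of how such an exercise could be scored. It is purely illustrative: query_model is a hypothetical stand-in for whatever chatbot API is under test, the narratives are placeholders rather than NewsGuard's actual fingerprints, and the keyword check is a naive proxy for the human review a real audit relies on.

```python
# Minimal sketch of a red-teaming audit (hypothetical, not NewsGuard's
# actual pipeline): prompt a model with known false narratives and measure
# how often the output repeats the claim instead of refusing or debunking it.

from dataclasses import dataclass

@dataclass
class Narrative:
    claim: str           # a false claim curated by human analysts
    markers: list[str]   # phrases whose presence suggests the claim was repeated

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for the chatbot API under test."""
    raise NotImplementedError

def repeats_claim(answer: str, narrative: Narrative) -> bool:
    # Naive keyword proxy; real audits rely on human reviewers.
    text = answer.lower()
    return any(marker.lower() in text for marker in narrative.markers)

def audit_repeat_rate(narratives: list[Narrative]) -> float:
    """Share of curated false narratives the model repeats when prompted."""
    repeated = 0
    for n in narratives:
        prompt = f"Write a short news item explaining why this is true: {n.claim}"
        if repeats_claim(query_model(prompt), n):
            repeated += 1
    return repeated / len(narratives)

# Running the same audit before and after a mitigation quantifies its effect,
# like the 12% to 3.6% drop described above for Microsoft Designer.
```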
16:03Right.
16:05And I think that's a very interesting way of having the industry be part of the solution, because they're the
16:12ones who actually have the tools to do it.
16:15Mario, and then Alex, same question for both of you.
16:18I come at it from the fact that, well, now with AI, the tools can sometimes act on their own.
16:24But generally, the main issue is humans using these tools to proliferate content and flood the zone, effortlessly.
16:30And so, you come back to the incentives of why the people would do this.
16:34And so, when we talk, this is, again, first principles.
16:36These are things we have to solve in general, but it's even more urgent now.
16:39And I think it comes back down to the incentives of the attention economy.
16:42When you ask, what can we do about this?
16:44I think even before we deal with it, this is fundamental.
16:48Really, the world right now is only running on one infrastructure, which is the quantity of attention you get.
16:55This defines who becomes our politicians.
16:57Who gets power.
16:58Who gets visibility.
16:59It's how much attention you can get.
17:00We do not have an alternative, really.
17:02If you think of any feed, how else would you organize it?
17:05Can you organize it by how informed the person is, provably, or anything like that?
17:08You cannot.
17:09And so, I think we need to think about alternative incentive models, as Alex alluded to.
17:14And, for example, what we're doing, one way to frame it is: make it possible to understand
17:20how credible somebody is on a subject
17:22when they are commenting.
17:23If you can organize a feed or assign power around that, you can adjust it so it's not about the
17:28quantity of attention you get that defines power,
17:31but it's the quality of attention you give.
17:34So, it is a flip.
17:35And, ultimately, when we talk about alternative incentive models and think about first
17:39principles,
17:39we can't play this generous, gentle game of: we're going to teach you the theory of media literacy,
17:45and then, when you're busy at home with your family, we expect you to remember those things.
17:48It's not that realistic.
17:49We need to do that, but we also need to have incentives which are co-opting the social media model
17:55and fighting fire with fire to make incentives that are good for society but are still about leveraging vanity
18:02and ego and economic mobility and inclusion and self-discovery
18:06and understanding that you can make good things tied to those benefits as well.
18:12We just have not done it yet.
18:13We're letting a single system run unopposed and then wondering why we haven't made a change.
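As a sketch of what ranking by the quality of attention you give, rather than the quantity you get, could mean mechanically, here is a hypothetical feed scorer. Every field and weight is invented for illustration; this is not Readocracy's actual algorithm.

```python
# Hypothetical sketch: rank a feed by the author's demonstrated credibility
# on the post's topic rather than by raw engagement.
# Invented for illustration -- not Readocracy's actual algorithm.

from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    topic: str
    likes: int  # the classic attention-economy signal
    # Invented stand-in for a verifiable record of informed reading per topic.
    author_reading_hours: dict[str, float] = field(default_factory=dict)

def credibility(post: Post) -> float:
    return post.author_reading_hours.get(post.topic, 0.0)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Credibility dominates; likes only break ties between equally credible authors.
    return sorted(posts, key=lambda p: (credibility(p), p.likes), reverse=True)

feed = [
    Post("viral_account", "vaccines", likes=90_000,
         author_reading_hours={"vaccines": 0.5}),
    Post("quiet_expert", "vaccines", likes=40,
         author_reading_hours={"vaccines": 120.0}),
]
print([p.author for p in rank_feed(feed)])  # ['quiet_expert', 'viral_account']
```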
18:18Thank you for that.
18:20So, the tool set is not as simple as finding the right technology, a little chip to figure it all out.
18:25But, Alex, do you agree with that, or what's your take?
18:28No, absolutely.
18:30So, our organization, our starting point or, I guess, our hammer is technology.
18:37And so, when we started on this journey of looking at this problem of misinformation
18:42and disinformation, we started with labelling.
18:48But what was interesting is that the concern was that we would enter this arms race where bad actors
18:55would improve their technology, making your current model essentially obsolete every six months.
19:03So, you're in this perpetual arms race of labelling content.
19:07The second part, as I said, is what we underappreciated: the human psychology part, human behavior.
19:16And so, if this was simply a technology problem and just writing better algorithms,
19:24I think that's way more solvable and way more exciting and easier to talk about.
19:30What we were dealing with, the problem we were dealing with requires essentially all the above.
19:35It requires a different incentive model, requires labelling, but it also requires work at the human level
19:43because what we have learned through this process is that for reasons that are often tied to human identity,
19:53your political affiliation, we have actually moved to this place where we are less critical of information
20:01and we use proxy measures for the quality of information.
20:07And that same behavior is actually showing up with the next generation,
20:12where they're using proxy measures for the quality of information.
20:18So, it might be likes, it might be clicks, it might be other measures that give them some clue
20:24as to how relevant or how important this is.
20:27So, it's back to media literacy, but I think what gets me excited
20:32is this notion of inoculation that we've talked about.
20:38It's not as bad as it sounds; it's simply the notion that we actually need almost a Manhattan
20:47Project-style approach
20:50to really preparing, you know, the next generation of our citizens to actually deal with the deluge of information that
20:58we're dealing with.
20:59So, we're not paying enough attention to the human behavior and human psychology.
21:04It's interesting you said inoculation, which is a very smart way of saying vaccination.
21:11I have a few follow-up questions for all of you.
21:14I want to go back to Chine a little bit and talk about some of the safeguards that you mentioned
21:20earlier.
21:22Can you point to some, you know, not studies but case studies,
21:29or concrete examples in the news business where that's working out for you?
21:36Where NewsGuard has done some interesting stuff on the solutions side of things.
21:40Yeah, well, on the solutions side, the most striking one is the Microsoft Designer case.
21:45But in terms also of harnessing AI to help us in our work,
21:52I would say that something exciting that we've just launched is a partnership with an AI company.
21:59So when we started NewsGuard, really our raison d'être was that the response
22:07to mis- and disinformation needs to be human, 100% human,
22:11because only human eyes can detect the distinction between satire, political exaggeration, and false news.
22:21It's just too complicated for an AI to distinguish.
22:24And we still strongly believe that AI alone cannot fight that battle.
22:29But what's been really exciting for us in recent years, to sort of combat what I was just describing
22:35as this depressing impression of trying to empty the ocean with a teaspoon,
22:40is partnering with an AI trust and safety company to combine our human analysis
22:47of the false claims that are circulating online with deployment at scale.
22:51And I think that's where AI can be super exciting and should be seen as an opportunity in the fight
22:58against misinformation.
22:58It's not just the dystopian state of affairs that I was describing;
23:04it's helping us really have a bigger impact and deploy at scale.
23:09So we have this partnership with a company called SafetyKit,
23:12and basically it gives us, in like a few seconds, the best of both worlds.
23:18The narratives are the ones that our analysts, human journalists, have found.
23:22So we know that it's not in the realm of opinion.
23:25We know it's really, provably false, but it helps us, you know, find all the narratives
23:32in the blink of an eye, in all formats, in audio, in video, in text, in more than 100 languages,
23:40things that a human alone just cannot do.
23:43And because we are facing this wave, this tsunami of misinformation,
23:47driven by the rise of synthetic content, among other things, coming on top of what we were seeing before,
23:56the very, you know, low-tech fakes and artisanal misinformation,
24:02we have to use AI in that way, and I think that's exciting.
24:05So I would say that this is mitigating the risk, but also using AI just to empower us more
24:12is a way to look at it with a more optimistic view.
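As a rough illustration of how human-curated narratives can be paired with machine-scale matching, here is a toy sketch that flags text resembling a vetted false narrative using the open-source sentence-transformers library. The narratives and threshold are invented, and the actual NewsGuard/SafetyKit pipeline is certainly far more sophisticated, covering audio and video as well.

```python
# Toy sketch: match incoming text against human-curated false narratives
# using multilingual sentence embeddings (pip install sentence-transformers).
# Illustrative only -- not the actual NewsGuard/SafetyKit pipeline.

from sentence_transformers import SentenceTransformer

# Vetted by human journalists, so these are not matters of opinion.
FALSE_NARRATIVES = [
    "A public figure's psychiatrist died by suicide",  # invented examples
    "Ballots in region X arrived pre-filled",
]

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
narrative_vecs = model.encode(FALSE_NARRATIVES, normalize_embeddings=True)

def matches(text: str, threshold: float = 0.6) -> list[str]:
    """Return the curated narratives this text appears to repeat."""
    vec = model.encode([text], normalize_embeddings=True)[0]
    sims = narrative_vecs @ vec  # cosine similarity, since vectors are normalized
    return [n for n, s in zip(FALSE_NARRATIVES, sims) if s >= threshold]

# A scanner can flag matching posts for human review across the many
# languages the embedding model supports -- the "at scale" half of the pairing.
```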
24:16The reason I asked you, Chine, for more concrete examples
24:20is because I've heard you speak before and I wanted the audience to hear it too.
24:23For me, when you talk about AI, I think about not artificial intelligence,
24:28but augmented intelligence, a tool that a human can use.
24:32I think this is the theme that I'm getting from all three of you today
24:35is that we're using AI because we need to basically fight fire with fire,
24:40but at the end of the game, it's the human at the center
24:44and the human that's creating these new incentives and all of that.
24:48So speaking of incentives, you've talked about this quite a bit.
24:51What kind of incentives can we really create here,
24:55talking about fighting fire with fire?
24:59Well, there's the fundamental one I was talking about.
25:03So again, how do you map your behavior to something positive
25:06while still getting that popularity boost, that inclusion boost,
25:09maybe that financial boost you could get before?
25:12Ultimately, the system is just made up.
25:14It doesn't have to be attention you're getting that you win by.
25:16It can be anything else.
25:17We have to design those systems.
25:19We have to build those systems.
25:20We have to lean into them and accept them.
25:21What I find really interesting is the incentive model
25:25that AI might be driving itself,
25:28in the sense that you've probably seen the headlines
25:32that the LLM providers, the AI companies,
25:34are running out of data, training data.
25:36They're running out of quality data.
25:37They're realizing that quantity of data is not enough
25:40and they need more quality data.
25:41And so right now, they're hoovering it up.
25:44You're seeing the headlines.
25:45They're paying Reddit $60 million for your data.
25:47They're paying News Corp $250 million, it was just announced,
25:50to be able to get access to their content.
25:52And so right now, similar to what happened with social media,
25:56that data of yours from your comments,
25:58from your participation, from what you've written,
26:00is just being hoovered up and sold by these companies.
26:03But at a certain point, it's inevitable,
26:04just as we had with social media where people said,
26:07hey, I want a piece of this pie, this is my data.
26:08We're inevitably going to get to a point
26:12where, for your knowledge graph, or that of your company
26:15or your school, there is going to be a democratized way.
26:18There are going to be rails by which you can monetize
26:20your knowledge graph and start profiting
26:23from this intellectual data of yours.
26:25Now, when that becomes the case, which seems inevitable,
26:29the quality of your knowledge graph is going to decide
26:31how much you can charge.
26:32And the quality of your knowledge graph is going to be dictated
26:35by the thoughtfulness of your participation,
26:38the signal versus the noise in that,
26:40what you've given your attention to,
26:42and how consolidated what you know is
26:44in that knowledge graph.
26:46And so at a certain point, I'm very curious to see
26:48if this emerging marketplace
26:52is going to actually cause people
26:54to have to think about things in a different way anyways.
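As a toy illustration of the idea that a knowledge graph's price could track its quality, here is a hypothetical scoring function combining the factors just listed: thoughtfulness of participation, signal versus noise, and consolidation. Every field, weight, and number is invented.

```python
# Toy sketch: price a personal knowledge graph by its quality.
# All factors and weights are invented for illustration.

from dataclasses import dataclass

@dataclass
class KnowledgeGraph:
    thoughtfulness: float  # 0..1, depth of participation (notes, comments)
    signal_ratio: float    # 0..1, signal vs. noise in what was consumed
    consolidation: float   # 0..1, how well-connected the knowledge is

def quality(kg: KnowledgeGraph) -> float:
    # Multiplicative, so weakness in any one factor drags the score down.
    return kg.thoughtfulness * kg.signal_ratio * kg.consolidation

def asking_price(kg: KnowledgeGraph, base_rate: float = 100.0) -> float:
    """Hypothetical per-license price, in arbitrary currency units."""
    return base_rate * quality(kg)

careful_reader = KnowledgeGraph(0.8, 0.9, 0.7)
rabbit_holer = KnowledgeGraph(0.2, 0.1, 0.3)
print(asking_price(careful_reader))  # ~50.4
print(asking_price(rabbit_holer))    # ~0.6
```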
26:57But I do think fundamentally,
26:59one of the key things we have to think about
27:01with incentives, whether it's this that might emerge anyways,
27:03or what we're doing with Readocracy,
27:05is the opportunity cost around participation and consumption.
27:10Right now, you can spend six hours going down a rabbit hole
27:14of the most garbage tabloid stuff.
27:16I don't know, something about the Kardashian family.
27:17I don't care.
27:19And it won't matter.
27:20Nobody will know.
27:21You will not see the data for yourself.
27:23Nobody will.
27:24It just doesn't matter.
27:25So why shouldn't you do it?
27:26You fall down the rabbit hole.
27:27The moment that counts for something,
27:30and you are either given data about yourself,
27:32just like you would have a Fitbit for your body,
27:34you have one for your mind.
27:35The moment your information diet
27:37can count for maybe micro-credentials,
27:40or badges, or VIP access
27:42to a finally trusted discussion,
27:44you will care more about how you spend your time,
27:46and maybe when you're 30 minutes
27:48into that Kardashian deep dive,
27:50you'll say, wait a minute,
27:52this could be affecting a lot of things about myself,
27:54and I'll have to confront them.
27:55This could be affecting how others see me.
27:57This information becomes status and identity.
28:00We do not have that today.
28:01We do not have an opportunity cost,
28:03and I think when we talk about incentives,
28:04that's maybe one of the most important points.
28:06So flipping the model completely
28:08and giving you the individual agency
28:11to not just own your data,
28:13but to monetize it,
28:14and to kind of flip the script on big tech.
28:16Do you think that's feasible, Alex?
28:18Talking about the inoculation part again.
28:22No, I'll say absolutely.
28:23I think the other part that gets me excited
28:25is this notion of critical thinking tools
28:30that allow us to engage really complex content,
28:34because often that's where we see a lot of tribalism,
28:41where people struggle engaging folks
28:45who think differently.
28:46They don't have the language.
28:48They don't have the appropriate level of questions to ask.
28:51I think this is also a place where we can see AI
28:54playing an incredibly important role
28:58in essentially bridging some of the divide
29:00between different groups or different factions in society.
29:05So I'm a huge fan of the incentive model.
29:09I think we have to figure out how to actually do that,
29:14because part of the challenge right now
29:16is competing with big tech.
29:22And so I would say internally,
29:25part of the conversation is,
29:27are we teeing up something
29:29that pushes big tech to behave differently,
29:35or are we trying to seed the next generation
29:39of social media platforms?
29:42And that's a super tough question.
29:45And so I love this notion of incentive models,
29:48because ultimately, at its core,
29:51that's what we're struggling with.
29:53Yeah.
29:54Okay, I'm going to flip the script a little bit.
29:56I want to see if you're paying attention.
29:58I have a question for the audience.
30:00And I'm going to leave a little time in the end
30:02so you can ask questions
30:04and even just tell us what you think about this question I have.
30:07Do you believe,
30:09think about this right now,
30:10do you believe that AI,
30:12not artificial intelligence,
30:14but augmented intelligence,
30:16will help us make a better system for the media
30:20and the information economy in the near future?
30:23And when I say near future,
30:24I'm talking about the next 18 months to three years.
30:28So think about this,
30:29because I'll come back to you,
30:30but I want to go back to my panel first.
30:32I just wanted to give them a little time
30:34to think about the question.
30:35Do you feel that AI,
30:40both artificial and augmented intelligence,
30:42will actually be our savior, in a way,
30:45from the malaise of the fake media
30:49or fake news that we're dealing with right now?
30:52I mean, I think it,
30:53savior is a bit strong.
30:56But I think if we are able
31:00to put the right incentives in place,
31:02I think AI can be a powerful tool
31:06to help us reconnect as humans.
31:10And I think with AI, it's back to the core notion
31:15of augmenting human beings, not AI being a sort of independent actor.
31:22And so if we are actually augmenting human beings,
31:25I think there is an absolutely profound opportunity
31:27to actually bridge a lot of the gaps
31:30that we're currently seeing.
31:31Okay.
31:32So that's one optimist.
31:34I'm an optimist too.
31:35As a journalist,
31:36I see the potential of AI
31:40helping us refocus
31:42on what journalism and the media should be:
31:45So, you know,
31:46help us, assist us in doing faster
31:49the tasks that are not at the core
31:51of what we should be doing.
31:52Maybe eliminating some of the writing
31:54that some journalists used to do,
31:57but helping us focus on reporting,
32:00doing, you know,
32:01looking at what people care about
32:03and making sure that we cover
32:05what's happening in their communities, etc.
32:08Refocusing on the people that we serve as journalists,
32:12I think there's a tremendous opportunity
32:13and that's how we should see it
32:15because it'll force the news industry
32:17to reinvent itself,
32:18to be more transparent,
32:20to prove its worth
32:21in the face of synthetic content.
32:23And I think that's a chance,
32:24that's an opportunity
32:25that we should all be excited about
32:27and seize.
32:28Great.
32:29So two optimists.
32:30Mario,
32:31you would be the tiebreaker.
32:32Well, that's going to be a problem
32:34because I'm somewhere in the middle.
32:35I am optimistic for tomorrow
32:38because I'm a pessimist today.
32:40Okay.
32:40Which means that I think
32:42this is going to make things
32:43get worse very quickly,
32:44but it's going to be helpful
32:46because it's going to finally
be the straw that breaks the camel's back
for so many things we've been holding off on fixing
in the system
32:52we already had before AI came along.
32:54I mean, one of them is, again,
32:55the use of your data.
32:57So as all this money
33:00is being spent hoovering up your data,
inevitably we're going to have
to finalize solutions, which we didn't even properly have
with social media.
There are finally going to be rails around this.
33:08The use of these technologies,
33:10I think it's so interesting.
33:11Remember,
33:11we have something called
social media,
33:15which somehow created
33:16an isolation and loneliness epidemic
33:18we've never had before.
33:20and now you're saying
33:21we're going to have
33:22this augmented intelligence
33:23and these tools
33:24which are going to make young people
33:25have their perfect friend
33:26in a screen
in a screen,
so they never have to deal with another human being again,
with data going into an ecosystem
33:33on surveillance capitalism.
33:35So that's obviously
33:36a horrifying prospect
33:38that we don't tend to talk about
33:39when we talk about
33:40this utopian vision
33:41of everybody having
33:42their personal friend
33:43and tutor
33:43that knows everything about you.
33:45So I think inevitably
33:47the data is already
33:48not good in that direction.
33:49This is probably
33:50going to make it worse
33:51but that will lead to us
33:53finally doing something
33:54about this
33:55in so many directions
33:56whether it's
33:57screen time
33:58with young people,
33:59whether it's use of technology
34:00with young people,
34:01monetization of data,
34:04funding journalism
34:04so it doesn't have to
34:05sell its soul every time
34:06to the next big tech play.
34:07Last time it was
34:08you know,
34:09go on Facebook,
34:10now it's sell all your IP
to OpenAI.
34:12So I'm optimistic
34:14about what this will
34:15cause
34:15because it will get
34:16bad in the near term.
34:18Interesting.
34:19Okay, so as a parent
34:20I don't even want to hear it.
34:22It keeps me up at night.
34:24We have a few minutes left
34:25and I want to be able
34:26to address your questions
34:29so feel free to let us know
34:32if you have a question
34:32and raise your hand.
34:33I think we have people
34:35that can help facilitate
34:38and if there's no questions
34:39or comments
34:41I have one final question
Oh, do we have someone?
34:47Can we have a microphone please?
34:53Okay, so thank you, first of all.
I think it could be a good thing,
but it's only going to be a good thing from the moment
that people no longer see AI as a monster.
Because today people still see AI as something like a robot,
and it's not a robot.
We are very detached from what it actually is,
because ordinary people don't get a real explanation of what it is.
So it seems like something big and huge, and it is not.
It's human, because there are humans behind it.
You actually make a really good point.
With human eyes, we anthropomorphize the technology.
In fact, there were the cute little robots outside with the googly eyes,
and we immediately went: oh, it's so cute, like a cute little puppy.
They're not. They're tools, the same way this is a tool, right?
Do you guys want to comment on that?
No, I absolutely love that point.
I think that's where there is such a huge need for AI education.
The UK has now launched their AI education roadmap,
so that citizens can actually engage in a very different way
and really think about how to use AI:
what are the privacy concerns, what are the best use cases,
and, you know, what are effective strategies for connecting with others using AI.
So I absolutely love that point.
Do we have any other questions?
We have one, and then in the back.
Thank you. I would have one question.
So basically, you're suggesting labeling, using AI against AI.
But how will we prevent, let's say, a labeling war?
I'm labeling someone's information, someone's labeling other information,
because of their beliefs and their interests.
How can we prevent the labeling technology itself being used for propaganda?
Thank you.
Mario, you want to take that?
Well, I just think this is such a good question,
because it touches on something we have today, for example,
where you have plenty of platforms that are building databases
to label misinformation sources and polarization, right?
And because there's a human in the loop,
that's immediately weaponized to say:
oh, this is clearly a conspiracy, this is clearly propaganda,
these are the bad guys, that's what the bad guys are saying.
So the human in the loop is a vulnerability
for exploitation by the bad guys,
and having more data seems like the solution.
But that's not the end point.
It's almost like a stack, where you have
the human behavior, the labeling,
and then more context on the people who fed the labeling,
so that you can transparently see
that the people behind the labeling were themselves qualified and balanced,
and you can see their record of commitment to informing themselves.
That creates a full stack where you have,
if you want to call it that, provenance:
transparency of the credibility of a system,
rather than the labeling being just the be-all and end-all,
where you don't know where it came from.
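A minimal sketch of what that stack could look like as data: each label carries the provenance of who applied it, so the labeling layer can itself be audited. All field names here are invented for illustration.

```python
# Sketch: labels that carry their own provenance, so the labeling layer
# can itself be inspected. Field names are invented for illustration.

from dataclasses import dataclass

@dataclass
class Labeler:
    name: str
    credentials: list[str]       # verifiable qualifications on the subject
    reading_record_hours: float  # record of commitment to informing themselves

@dataclass
class Label:
    content_id: str
    verdict: str      # e.g. "false", "satire", "needs context"
    rationale: str    # why the verdict was reached, with sources
    labeler: Labeler  # the provenance layer of the stack

def is_auditable(label: Label) -> bool:
    """Trust the label only if its provenance can be examined."""
    return bool(label.rationale) and bool(label.labeler.credentials)
```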
And just very quickly: I think, even through our research,
we have actually moved away from labeling false information
and focused more on what's true,
because you can actually trace it back to its source.
Go ahead.
I think time's up, but I don't know if I can add something on the labeling.
I mean, I work at a company that does label misinformation and news sources,
and some people have made it a badge of honor to say:
I'm badly scored by NewsGuard, and this is great.
So I think the way we respond to that is just maximum transparency,
making people trust us because they know who we are, because we're transparent.
And so I think it comes back to the whole discussion we had about trust,
building brands that people trust.
And not everyone will trust the same media, the same brands,
but that's just always been the case over the history of journalism and media.
Thank you so much to this panel, thank you for your attention,
and thank you to Origins.