Media Literacy in the Age of AI

Category

🤖
Technology
Transcript
01:18Great. Thank you, everyone, for being here.
01:20I know we're competing with Elon,
01:22so I really appreciate those of you who have come to listen to us.
01:25For what promises to be a really fascinating discussion.
01:28Let me start by introducing our panelists.
01:31We have to my right, Claire Leibowitz,
01:33Head of AI and Media Integrity at the Partnership on AI.
01:38And then to my left, Sonia Solomon,
01:40who is Deputy Director of the Center for Media, Technology, and Democracy
01:44at McGill University.
01:46And then at the end, Charlie Beckett,
01:48Professor of Media and Communications
01:53at the London School of Economics,
01:54and Founding Director of Polis,
01:57which is the LSE's journalism think tank.
02:00I think we'll start out just very quickly
02:02sort of defining what is generative AI
02:06and how it's sort of different from what came before.
02:10Claire, can you give us a very quick definition of,
02:12I don't want to spend too long on this question,
02:13but maybe the 30-second definition for those who don't know.
02:19Does this work?
02:20Yes.
02:20So I'll start by saying AI in general.
02:22I like to think of as software systems
02:24that take in data and interpret it in some way.
02:27And that is a bigger picture than generative AI.
02:30Generative AI is a subsection of AI that creates.
02:34So it can create code, text, media, or even 3D artifacts.
02:39Great.
02:40And then when we're talking about media literacy,
02:42this is obviously a bigger term.
02:44But Sonia, what do we generally mean
02:46when we're talking about media literacy?
02:48Yeah.
02:49So I think media literacy encompasses
02:51a very broad remit of things.
02:53We like to think of it as anything
02:55that empowers the public
02:57to be able to ask informed questions
02:59about AI systems,
03:01about their use,
03:02about their rights.
03:04And I think Claire would certainly have a lot to add
03:06on media literacy.
03:08Yeah.
03:09Do you have a different definition?
03:10or is that more or less how you would define it as well?
03:13Yeah. So it's funny you put AI in the definition.
03:14I actually don't.
03:15I think of media literacy pretty broadly
03:18as just our capacity to analyze
03:20and interpret all types of information.
03:23And I'll add two, I guess, variables
03:26that might make being media literate
03:28in the AI moment possible.
03:31One is the need to understand context and culture.
03:34I often give the example of
03:36how do you know years ago
03:38that a photo of a historical event
03:39was staged without understanding cultural context,
03:43for example.
03:43I give an example also of like the swastika
03:46in certain contexts is Nazi imagery
03:48and in others may be a South Asian symbol
03:50depending on where it's affixed.
03:52So the need to really understand context.
03:54And today also understand institutions
03:56to grasp media literacy.
03:58The social media institutions
04:00making decisions about what gets shown
04:01and what doesn't, what is allowed
04:04and not just the original newspapers
04:07you kind of needed to understand
04:08in order to be media literate.
04:10So power, I guess, in institutions
04:11and cultural complexity
04:13as part of media literacy in the 21st century.
04:17Great.
04:18Well, we already live in an information
04:20sort of environment
04:21that is rife with misinformation
04:23and disinformation.
04:24You could argue that, you know,
04:25propaganda is as old as time
04:27and photo manipulation
04:29has certainly been around
04:30almost since the birth of photography itself,
04:32and deepfakes have been possible
04:34for a number of years.
04:35So I'm curious what the panelists think about
04:37what's really different here
04:38in this era of generative AI, if anything.
04:42You know, does this really represent
04:43a different kind of threat,
04:45a different kind of phenomenon
04:47than what we've seen before?
04:48Charlie, why don't we start with you on that one
04:50because I know you have an answer on this.
04:52What do you think?
04:53Well, I think the obvious thing, first of all,
04:55is that in the same way that generative AI
04:57is much quicker and better
04:59at creating wonderful imagery, audio, and text.
05:04Well, obviously, it can therefore
05:07do the same thing with bad information.
05:11However, in a funny way,
05:12I'm less worried about it,
05:14partly because, firstly,
05:16you can use those same tools
05:18to try and filter and counter it,
05:20but also, we are already in a bad situation.
05:23There's already a lot of propaganda,
05:26there's a lot of partisan content,
05:28and there's a lot of false information out there.
05:31And do you know what causes it?
05:34It's actually people.
05:36It's politicians, journalists even,
05:39corporations, conspiracy theorists,
05:42who are the actors.
05:44It's humans.
05:45So it's a question of policing or regulating that,
05:48but also understanding how you can promote
05:52better information for those people who do want it.
05:56Yeah, Sonia, I'm curious to get your view on this.
06:00As Charlie says, all this is created by people,
06:03but now they have this tool
06:05that can do it at such a scale.
06:07And does the scale and speed make a difference?
06:10Yeah, I think that's a really great question.
06:12I think that gets at particularly what is new,
06:15because I think there are certain things
06:17that are not new.
06:18Certainly disinformation as a word
06:20and as a campaign strategy,
06:22as a propaganda strategy,
06:23has been around far longer
06:24than the advancement of AI or social media before it.
06:29But I think when we talk about disinformation,
06:32when we talk about AI,
06:33we're really talking about power.
06:34And I think that's what's potentially new here.
06:37We're talking about the political and public stakes.
06:39We're talking about how these systems
06:41are going to be used.
06:42and where they should belong
06:44if we want to draw some boundaries
06:46around those use cases.
06:47What are responsible contexts,
06:49as Claire has outlined?
06:51What are some high-risk contexts of use?
06:54But I do think the sheer scale
06:56of something like generative AI,
06:57what was it, under two months
06:59that it took for ChatGPT
07:01to reach 100 million users?
07:03So I think the reach
07:04certainly has some novel implications.
07:08Right.
07:08Claire, what's your view?
07:09I mean, does this represent something fundamentally different?
07:13So I'll start by,
07:14if I can,
07:16just giving a little context
07:17for the aperture
07:17with which I come to my judgment.
07:19So I work at an organization
07:21called the Partnership on Artificial Intelligence.
07:24It's a global multi-stakeholder nonprofit
07:26devoted to the very broad mandate
07:28of responsible AI.
07:29and we were founded
07:31by the heads of AI research
07:32at some of the largest technology companies,
07:34so Facebook, Apple, Amazon,
07:36DeepMind, IBM, and Microsoft
07:38in 2016,
07:39based on a recognition
07:41that the challenges
07:41we're talking about today
07:43transcend any one tech company
07:46figuring them out
07:46and also any one sector.
07:48So we work with over 100 institutions
07:50that range from OpenAI
07:51to the New York Times,
07:53journalists play a major role
07:54in AI's impact on society,
07:56and even Witness,
07:57a video and human rights nonprofit.
07:59And so for the past
08:00four and a half years,
08:01we've been working
08:02on AI-generated media
08:03asking this question of
08:05is it a paradigm shift
08:06in what it means
08:07to manipulate the truth,
08:09which as we've identified
08:10always has relied
08:12on the manipulation of truth
08:13and not just
08:14the articulation of falsehood.
08:15So it's more than just lying.
08:17It's kind of this tweaking
08:18of the truth
08:18that we have always had
08:20to deal with
08:21and what makes
08:22this moment different.
08:23And four years ago,
08:24we had a workshop in London
08:26at the BBC
08:26that was right after
08:28a video of Nancy Pelosi
08:29that was merely slowed down
08:31went viral.
08:32A testament to, you know,
08:34our community saying,
08:35are we really focusing
08:36on the right thing
08:37by focusing on AI-manipulated content
08:39if, as Charlie said,
08:40people are susceptible
08:41to low-tech manipulations as well?
08:44And then I'll borrow
08:45from my colleagues
08:46at Witness,
08:47this global human rights organization.
08:49They used to say,
08:50prepare and don't panic.
08:51That was their very alliterative,
08:54catchy phrase
08:55around generative imagery.
08:57And now we say,
08:59act and do something.
09:01So that is because
09:02of a variety of things.
09:04To some degree,
09:05it's concentration of power
09:06in the spaces
09:07with which this content
09:08is spreading.
09:09But it also has to do
09:10with volume,
09:11accessibility,
09:13and reach,
09:13which can be an accelerant
09:14for all of the very human complexities
09:16that people bring to bear
09:18on encountering content.
09:19The last thing I'll say
09:20is I am more worried
09:22about what's known
09:23as the liar's dividend,
09:25which is this notion that,
09:26and this is actually found
09:28in the ChatGPT model card
09:30in their report
09:31from OpenAI.
09:32This is a term
09:33that was coined
09:34by legal scholars
09:35several years ago.
09:36The idea that in a world
09:37where anything
09:38can be AI-generated,
09:40people will increasingly
09:41distrust real media.
09:43And while that is
09:44a very human phenomenon,
09:46especially for the topic
09:47of this panel,
09:48which is media literacy,
09:49how do we encourage people
09:50to be skeptical
09:51and media literate
09:53in an increasingly generative age
09:55and at the same time
09:56not hit this threshold
09:58where people just distrust
09:59all media that they see?
10:02And people in power
10:03have always relied
10:04on plausible deniability
10:06as a way to confuse,
10:07but that is going
10:08to get easier
10:09as you all are able
10:11to kind of insert
10:12a prompt
10:12and spit out an image,
10:13let's say,
10:14in the current moment
10:15that we're in.
10:16Interesting.
10:17And I want to encourage
10:17everyone here,
10:18if you do have questions
10:19for the panelists,
10:20please submit them on Slido.
10:21I've got the iPad here.
10:23I can see your questions
10:23and I'll try to ask them
10:24to the panelists.
10:27Does disinformation
10:28even matter
10:29is one of the questions
10:30I have.
10:32I mean, in many societies,
10:34you know,
10:34we're so polarized
10:35politically already,
10:36there is already
10:36quite a lot
10:37of misinformation.
10:38We already live
10:39in a world
10:39of filter bubbles
10:40due to social media,
10:41so, you know,
10:43if generative AI
10:44does allow
10:45for the production
10:46of this
10:47at greater volume
10:48and perhaps greater speed,
10:50you know,
10:50does that really
10:50make any kind
10:51of difference?
10:52Do we have any good
10:53sort of data
10:53on this, Sonia?
10:55Yeah, so this is
10:56a really great question.
10:57I'm going to repeat
10:57a story mostly
10:59because it was told
11:00to me by a very wise woman.
11:02In fact,
11:03she was so wise
11:04that she recently
11:05was awarded
11:05the Nobel Peace Prize,
11:06but she recounts
11:08an example
11:08of a former
11:09KGB officer
11:10who answered
11:12this exact question
11:13around exposure
11:13to disinformation
11:14and his analogy
11:16is disinformation
11:17is like cocaine.
11:19If you take it
11:21once or twice,
11:22you're largely unscathed.
11:24If you take it
11:25hundreds of times,
11:26your brain
11:27is fundamentally altered
11:28and I think
11:30that this example
11:30really gets at
11:31some of the particularities
11:33around this challenge,
11:34especially around
11:35repeated exposure,
11:37especially around
11:37targeting of particular
11:39communities on the ground
11:41and then, you know,
11:43for democracy more broadly.
11:45So we've done
11:46a lot of research
11:47at McGill
11:48around COVID-19 misinformation
11:50at the outset
11:51of the pandemic
11:52and we certainly found
11:54that exposure matters
11:55in this context.
11:56So for instance,
11:56those that were
11:57more misinformed
11:59by consuming
12:00typical misinformation
12:02narratives on social media
12:03were less likely
12:05to follow
12:05public health
12:06recommendations.
12:09And then, of course,
12:10from that,
12:11we also found
12:11it led to greater
12:12polarization
12:13about the pandemic
12:15and other political
12:17issues more broadly.
12:18So I would say
12:19there are contexts
12:21in which it matters
12:22more,
12:24but I would in short
12:26say that it
12:27certainly matters.
12:28Right.
12:28And Claire talked
12:29a little bit about
12:30the liar's dividend.
12:31Charlie, is that something
12:32you worry about,
12:33that there are trusted
12:33news organizations
12:34out there that people
12:35no longer trust
12:36because they just
12:37don't trust anything?
12:38Yeah.
12:38And in some ways,
12:40you know,
12:41we beat ourselves up,
12:43for example,
12:44by trying to measure
12:45trust in the media
12:47constantly,
12:48when I think
12:49it's a completely
12:50fake metric.
12:51People trust the media
12:52they agree with.
12:53They instinctively
12:55will tell you,
12:56when asked,
12:57"I'm not going to trust
12:59journalists in general,"
13:01because we've been
13:02encouraged to be
13:02sceptical and to be
13:04critical thinkers.
13:06You referred to the
13:08pandemic.
13:08And what was interesting
13:09during that period
13:10was people went back
13:13to legacy mainstream
13:14media as a safe space
13:17where at least they had
13:18some kind of basic
13:19expectation that they
13:21were going to get a
13:22more accurate,
13:23objective,
13:24fact-based narrative.
13:26So I don't think
13:28this idea that
13:29suddenly everyone's
13:29turned into mad
13:30conspiracy theorists
13:32simply because they've
13:33been on YouTube
13:34really stands up.
13:36What worries me is
13:37actually the interplay
13:38between those
13:40conspiracy theorists,
13:42for example,
13:42and populist
13:43politicians.
13:45And what worries me
13:47is not just that
13:48they're doing it,
13:49but the way that
13:50can degrade
13:51politics,
13:53it can degrade
13:53the relationship
13:54between the citizen
13:56and those in power.
13:58So, you know,
13:59sounds a bit
13:59philosophical,
14:00but that's what
14:00worries me rather
14:02than necessarily
14:03the volume of fake
14:04news that's out there.
14:06Right.
14:06And that is sort of
14:07this liar's dividend
14:08you were talking
14:09about, Claire.
14:09I mean, how concerned
14:10are you that we're
14:11at the threshold
14:11of that now?
14:13That's hard to measure
14:14and actually I've been
14:14talking about that
14:15with some colleagues.
14:16I will say there are
14:18two things can be
14:19true at once.
14:20One is that this is
14:21a very human problem
14:22and two is that
14:23not only content
14:24production enabled
14:25by generative AI,
14:27but also algorithmic
14:28recommendation of content
14:30can exacerbate
14:31those power
14:31and human dynamics.
14:33So I just want to,
14:33I guess,
14:34punctuate this point
14:35about this very
14:36socio-technical challenge.
14:37I don't think we have
14:39hit the liar's dividend
14:40yet.
14:41Some would argue
14:42we never will
14:43or we already have
14:44and everyone's
14:44already skeptical.
14:46In terms of how
14:46we would measure that,
14:47I actually am quite unsure
14:49and I'm really curious
14:50what that means.
14:51but I think
14:52what comes to mind
14:54for me
14:54and maybe we'll
14:55transition to this
14:56is so what do we do
14:58about this kind of synergy
14:59between the disinformation
15:00environment
15:01and technological development?
15:03Are there technical
15:04interventions,
15:05institutional interventions?
15:07Oftentimes,
15:08you know,
15:08I work in AI governance
15:09but I say
15:09we might need
15:10media literacy interventions,
15:12like bolstering local newsrooms
15:13that are more trusted,
15:14not just a tweak
15:15to a design, a button,
15:16or a label
15:17on a platform.
15:18So I'm interested
15:19in this confluence
15:20of like social
15:21and technology
15:22that will allow us
15:23to kind of break down
15:24the problem
15:25of media literacy
15:26and also the question
15:27of who gets to decide
15:28what it means
15:29to be media literate.
15:31We have trusted sources
15:32but oftentimes
15:33the way in which
15:34people encounter
15:35those sources today
15:36are on platforms
15:37that are profoundly distrusted
15:39which is why
15:40people are skeptical.
15:41We did research
15:42years ago
15:43on people's attitudes
15:45towards misinformation
15:46interventions
15:46which in essence
15:47are media literacy methods.
15:49So basically
15:49when you see a label
15:50on a platform
15:51and around 50%
15:54of the 1,500 people we surveyed
15:55in the U.S.
15:56noting it's just the U.S.,
15:57and I know Charlie
15:58probably has a more
15:58global aperture for this
16:00felt that,
16:0150% of the time,
16:03the labels were wrong
16:04and either algorithmically
16:05dictated
16:06or dictated
16:07by the platforms
16:08who they already
16:08don't trust
16:09in the first place.
16:10So this question
16:11of who does
16:12media literacy
16:13and who it affects,
16:14that really trickles
16:15back to the power comment
16:16that Charlie
16:17I think meaningfully
16:18brought up.
16:19Yeah.
16:20Interesting.
16:20Just to follow up on that,
16:20because as you said
16:23Jeremy
16:24there's another guy
16:25speaking elsewhere
16:26at this time
16:27and you know what
16:29the one great thing
16:30that Elon Musk
16:31has demonstrated
16:33since he has taken
16:35over Twitter
16:35is that in response
16:37to what Claire
16:37has just said
16:38you can do
16:40something about it
16:41if you want to change
16:42a platform
16:45according to your ideology
16:47if you want to
16:47shape the way
16:48that people interact
16:49on it
16:50if you want to
16:51allow certain kinds
16:53of information
16:54to appear on your platform
16:56you can do it
16:57you can change platforms
16:59Elon Musk
17:00has demonstrated that
17:01in the last 6 to 12 months
17:03with Twitter
17:03I think in horrible
17:05and pernicious ways
17:06personally
17:07but the great thing
17:08is he shows
17:09if he can do it that way
17:10then surely we can do it
17:12the other way
17:12because we shouldn't
17:14put all the responsibility
17:15for media literacy
17:16upon the citizen
17:17it's the responsibility
17:18of the news media
17:20the responsibility
17:21of the platforms
17:23you know
17:24it's not just about
17:25us having to learn
17:27how to read
17:28what we're seeing
17:29That's interesting
17:30although
17:30do you worry with that
17:32that there's a potential
17:33for people to say
17:33oh but
17:34I'm only being shown
17:35this information
17:36because the platform
17:37is dictating
17:38that I see this
17:39and not other things
17:39you know
17:40I think they don't like
17:41the idea
17:42that there are these
17:42organizations or institutions
17:44that they feel
17:45they don't have control
17:46or influence over,
17:46dictating what they see
17:47I agree with that
17:48I'm pretty much
17:49laissez-faire
17:50but my point is
17:51that Elon Musk's
17:53Twitter
17:54is not neutral
17:55it's not some sort of
17:57space of absolute equity
18:00he's literally
18:01selling blue ticks
18:03that give people
18:05privileged positioning
18:07in his algorithm
18:08now there you go
18:10that's his decision
18:11to shape the information
18:12like that
18:13right
18:14Sonia, I want to
18:15get you in here
18:15on what can we do
18:17to try to promote
18:18you know
18:19better media literacy
18:19what does that even
18:20sort of mean
18:20in this context
18:21yeah
18:22so
18:23maybe to
18:24back into this question
18:25I'd like to echo
18:26some of Charlie
18:28and Claire's
18:28earlier comments
18:29I think
18:30the synergy
18:31between the disinformation
18:32ecosystem
18:33and the amplification
18:35and circulation
18:36of that type of content
18:38and those possibilities
18:39calls for two engagements
18:42I think around
18:42having very publicly informed
18:46deliberative democracy
18:48as well as policy solutions
18:51because,
18:52as we've been saying
18:54here today,
18:55you know
18:56disinformation
18:56or propaganda
18:57is not necessarily
18:59about truth
19:01versus fact
19:02there are many reasons
19:03why people share
19:04mis and disinformation
19:06even when they know
19:07it's false
19:08right
19:08they find it compelling
19:09or provocative
19:10or funny
19:10I think what's harder
19:12to get around
19:13Charlie
19:13in your example
19:14is some of the systems
19:16that automate
19:18the circulation
19:19and the amplification
19:20of that content
19:21and then
19:22the monetization
19:24policies around
19:25content
19:26that might be
19:27inflammatory
19:27but it's certainly
19:28very engaging
19:29to users
19:30so I would say
19:31we need things
19:32like
19:34deliberative
19:35democratic convenings
19:36we do a lot of
19:37citizens assemblies
19:38over at McGill
19:39it's kind of a very
19:40Canadian way
19:42to take up
19:43some of these issues
19:44and we talk a lot
19:45about collaborative
19:46policy making
19:48and then I would say
19:49we need some
19:49responsible policies
19:51around
19:51you know
19:52the design
19:54of certain systems
19:55that we do have
19:56evidence
19:57of harm for
19:58Do we need to worry,
20:00people have talked a lot about this,
20:01I'm sure you've all
20:02used ChatGPT
20:03or Bing
20:04or Bard,
20:04you have these systems
20:05sort of giving
20:06a single answer
20:06to a question
20:07very often
20:08and there's already
20:09been concerns raised
20:09about well
20:10what is that answer
20:11and you know
20:11Elon I think
20:12has used the term
20:13woke AI
20:14we don't want
20:14woke AI
20:15do we need to be
20:16worried about
20:17you know
20:17these kinds
20:18of filter bubbles?
20:19You know,
20:19is there such a thing
20:20as woke AI
20:21Claire
20:22I don't think
20:23there's such a thing
20:24as woke AI
20:24but the notion
20:25that there are
20:26decisions being made
20:27about what information
20:28goes into
20:29these data sets
20:30that build these models
20:31but also how that
20:33gets communicated
20:34and the model
20:34interprets them
20:35is very real
20:36OpenAI,
20:37when they released
20:38DALL-E,
20:38they have a really
20:40interesting
20:41mitigation piece
20:42that they put out
20:43on the internet
20:44that explains
20:44how they filtered
20:45some of the data
20:46that went into it
20:47so for example
20:48they didn't want
20:49if a lot of Reddit
20:50and a lot of the web
20:51include sexually
20:53lewd imagery
20:54often focused
20:55on women
20:55they filtered
20:56that out
20:57but then they realized
20:58that when they did
20:59that filtering out
21:00they then lost
21:02a lot of the
21:02female visual imagery
21:04or faces
21:04and then it skewed
21:06the data set
21:06in a different direction
21:07OpenAI,
21:08whether or not
21:09you think they're
21:09a benevolent actor
21:10or that what I just described
21:11is the right thing
21:12to do,
21:12was making
21:13very concerted
21:14decisions
21:15based on their
21:16own interpretation
21:18as to how the model
21:19should be filtered
21:20and what I think
21:21is so meaningful
21:22about what you brought up
21:23Sonia
21:23and OpenAI
21:24actually just put out
21:24a call
21:25for these deliberative
21:26democracy mechanisms
21:27is who gets to decide
21:29that logic
21:30by which they filter
21:31and who is it
21:33that will
21:33you know
21:34interpret
21:34how the data
21:35will be massaged
21:36and navigated
21:38in order to
21:39have an output
21:40that's either,
21:41to some, woke
21:41or not.
21:42maybe you could say
21:43a different actor
21:44would have said
21:45we actually want
21:46a data set
21:46full of sexually
21:47lewd imagery
21:48there are examples
21:49of this
21:50and that's why
21:50the whole debate
21:51over open source
21:52versus not
21:53is really meaningful
21:54and also there's
21:55a question here
21:56of will we
21:58kind of just
21:58increasingly personalize
22:00our AI systems
22:02to get around
22:03the question
22:04of who should
22:05arbitrate the truth
22:06which platforms
22:06often do
22:07they want to avoid
22:08arbitrating the truth
22:09and yet that "choose your own adventure" mentality
22:12let's say
22:12for everyone
22:13let's say there's
22:14a Jeremy chatbot
22:15and a Sonia chatbot
22:16and if we all live
22:17in increasingly
22:18personalized realms
22:19that could exacerbate
22:20the problem
22:21so ultimately
22:22I'm nervous
22:24about any single
22:25institution
22:27deciding what
22:28the parameters are
22:29by which they filter
22:31and build out
22:32their models
22:32and then not
22:33explaining it
22:34in a meaningful way
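A minimal Python sketch of the filtering trade-off described above, with made-up data and a hypothetical flagged-content classifier; the point is only that dropping flagged records can silently skew who remains represented in the training set.

    import random

    random.seed(0)

    def make_record():
        gender = random.choice(["female", "male", "none"])
        # Assumption drawn from the account above: flagged (lewd) web
        # imagery disproportionately depicts women.
        p_flagged = 0.5 if gender == "female" else 0.1
        return {"flagged": random.random() < p_flagged, "gender": gender}

    corpus = [make_record() for _ in range(10_000)]
    filtered = [r for r in corpus if not r["flagged"]]

    def female_share(records):
        return sum(r["gender"] == "female" for r in records) / len(records)

    # The filter removes the unwanted content but also shifts the balance:
    print(round(female_share(corpus), 3))    # roughly 0.33 before filtering
    print(round(female_share(filtered), 3))  # noticeably lower afterwards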
22:35right
22:35and are there
22:36issues here also
22:37in terms of
22:38how you might
22:39set up rules
22:40around this
22:42particularly in
22:42societies that
22:43believe in free speech
22:44Sonia
22:45is there an issue
22:46there in terms
22:48of prohibiting
22:49certain content
22:50from being generated
22:52yeah so that's
22:53a really tricky
22:53question especially
22:55in the North
22:55American context
22:56in Canada
22:58we are neither
23:00blessed nor cursed
23:01by the First
23:02Amendment
23:03but if you
23:04took part
23:05in some of our
23:06recent public debates
23:07you might think
23:08that that is the
23:09case
23:09because a lot
23:13of these issues
23:13kind of boil
23:14down to this
23:15question around
23:16free speech
23:16but I think
23:17for us at least
23:19it's really about
23:20the systems
23:20so we're looking
23:22at the system
23:23design
23:23how can we
23:24build in things
23:25like risk
23:26assessments
23:28what do we do
23:29if a company
23:31identifies a
23:31potential risk
23:32can they show
23:33preparedness
23:34and resources
23:35to mitigate
23:36those risks
23:37who defines
23:38what high risk
23:39is
23:40I certainly
23:41to echo
23:42Claire
23:42think it should
23:43be up to
23:45the public
23:45to define
23:46the parameters
23:46of some of
23:47that risky
23:48context
23:49or use
23:49so yes
23:52it's certainly
23:52challenging
23:53but I think
23:54a better way
23:55to get at
23:55some of these
23:56questions
23:56is to look
23:57at the systems
23:58and the design
23:59right
24:00Charlie
24:00I wanted to ask
24:01you
24:01who should be
24:02responsible
24:02for trying
24:03to educate
24:03the public
24:04and to raise
24:06the level
24:06of media literacy
24:07in society
24:08I mean
24:09should this be
24:10something that's
24:10taught in schools
24:11should it be
24:12something that
24:12news organizations
24:13themselves have
24:14to take
24:15some responsibility
24:15for
24:16yeah
24:17I think
24:18there is one
24:19problem I should
24:20say first
24:20about the idea
24:21of media literacy
24:22oftentimes
24:23people say
24:24they get to the
24:26point where
24:26they've looked
24:27at every other
24:28attempt to solve
24:29the problem
24:30they've looked
24:31at regulation
24:32they looked
24:32at market
24:33reform
24:34they looked
24:34at censorship
24:35whatever
24:36and then they
24:37say, we can't fix it,
24:38it's too
24:38complicated,
24:39let's do media
24:41literacy
24:41because if
24:42everyone is
24:43educated
24:43and everyone's
24:44clever
24:44then everything's
24:45going to be
24:46fine
24:46now if that
24:47was possible
24:48then the world
24:49would have
24:49no problems
24:50okay
24:51now I'm not
24:52against education
24:53I work at a
24:53university
24:54you know
24:55I literally
24:55work in a
24:56media literacy
24:56department
24:57where we try
24:58and teach
24:59understanding
25:00and understanding,
25:01as you I think
25:02mentioned earlier,
25:02comes with agency.
25:03there's no point
25:04understanding
25:05that the world
25:06is full of
25:07crap information
25:08if there's nothing
25:09you can do
25:10about it
25:10and at the
25:11moment we've
25:12got a real
25:12problem
25:12certainly in the
25:13UK where
25:14education around
25:15media literacy
25:16has actually
25:16declined
25:17there's less
25:18resource going
25:19into it
25:20even though
25:20all our lives
25:21you know
25:22you pay your
25:23taxes
25:23you do your
25:24shopping
25:25you probably
25:26book your
25:26doctor's appointment
25:27online
25:28increasingly
25:29we're mediatised
25:30and yet there's
25:31less education
25:32about it
25:33but education
25:34is only a start
25:35I know I'm
25:36getting old
25:36I'm old
25:36but I can't
25:38remember anything
25:38I learnt at
25:39school
25:39okay
25:40you learn
25:41you learn
25:41especially with
25:42media
25:43through your
25:44user experience
25:45and so yes
25:46it's absolutely
25:47about the
25:48tech companies
25:49but also
25:49the media
25:50companies
25:50like news
25:51organisations
25:52being more
25:53open and
25:53transparent about
25:54how they do
25:55their work
25:56and also
25:57just trying to
25:58encourage
25:58an accountability
25:59and transparency
26:00and that's the
26:01biggest problem
26:02we've got at
26:02the moment
26:03with the
26:04technology
26:05companies
26:06because
26:06often times
26:07they don't even
26:08know what's
26:09happening
26:09within their
26:10own systems
26:11let alone
26:12that they want
26:13to share
26:14the process
26:14with the public
26:15right
26:15and Claire
26:16where do you
26:17guys at PAI
26:18see this going
26:18and are there
26:19places where you
26:20feel like
26:21this sort of
26:22education component
26:23should be in place
26:25so when I think
26:26about education
26:27there's kind of
26:27the feeling
26:28that you're
26:28sitting in school
26:29learning about
26:30what propaganda
26:31is and what AI
26:32is and then I
26:33also alluded
26:33to this earlier
26:34but there's also
26:35kind of education
26:36in the form
26:36of like a
26:37design intervention
26:38on a platform
26:39or a label
26:40and I also
26:41want to take
26:41a step back
26:42I know I was
26:42the one tasked
26:43with defining
26:43media literacy
26:44at the top
26:45of this session
26:45but we've been
26:46kind of skirting
26:47around two
26:48categories
26:48one is just
26:49a general
26:49understanding
26:50of how AI
26:51works
26:51and methods
26:52for manipulation
26:53work
26:54and at the
26:54same time
26:55there's also
26:55just what it
26:56means for
26:56something to
26:57be a fact
26:57what is
26:58health information
26:59what is
27:00high stakes
27:00information
27:00about elections
27:01and those
27:02intersect
27:02but they're
27:03actually a
27:03very different
27:04but related
27:05skill set
27:06and when I
27:07think about
27:07media literacy
27:08in the AI
27:10domain
27:10I see that
27:11sometimes
27:12falling short
27:12and I'll give
27:13an example
27:13Twitter many
27:15years ago
27:15during the
27:16Trump election
27:18in 2020
27:18period
27:20though this
27:20will probably
27:20happen again
27:22experimented
27:23with labels
27:24that conveyed
27:25that content
27:26was manipulated
27:26it didn't
27:27say anything
27:28about whether
27:28or not it
27:29depicted something
27:30true or false
27:30it just said
27:31whether it was
27:32manipulated
27:32and we
27:34studied some
27:34users around
27:35the country
27:36and I've
27:36shared this
27:37anecdote
27:37with my
27:38co-panelists
27:38but one
27:39of them
27:39when we
27:40asked him
27:40how he
27:40interpreted
27:41this
27:41manipulated
27:42media
27:42label
27:43which was
27:43supposed
27:43to convey
27:44and help
27:45him understand
27:45the media
27:46as being
27:47manipulated
27:47said he
27:48was a
27:48Republican
27:49he said
27:49that he
27:50thought
27:50Twitter
27:50was telling
27:51him the
27:51media
27:52as in
27:52CNN
27:53the New York
27:53Times
27:53was manipulating
27:54him
27:55the institution
27:57I hope
27:58there are
27:58some
27:58chuckles
27:58as you
27:59grasp
27:59that this
27:59man
28:00completely
28:00misinterpreted
28:01the attempt
28:02at media
28:02literacy
28:03on the
28:03part of
28:03the platform
28:04because people
28:05bring their
28:05confirmation
28:06bias to
28:07a label
28:08and I say
28:08this to bring
28:09up the fact
28:09that no
28:10mere design
28:11intervention
28:12is going to
28:12solve the
28:13fact that
28:13people bring
28:14their own
28:14confirmation
28:15biases to
28:15bear
28:16however I
28:17am very
28:18interested in
28:18the increasingly
28:19synthetic
28:19world
28:20something that
28:21will help
28:21with the
28:22liar's dividend
28:22and I think
28:23is very
28:23tactical
28:24which maybe
28:24we'll talk
28:25about watermarking
28:25is this
28:26idea of
28:26disclosure
28:27and this
28:27year we
28:28put out
28:28I'll do
28:29a shameless
28:30inelegant
28:30plug
28:31for
28:32syntheticmedia.partnershiponai.org
28:34a set of
28:35practices for
28:35how different
28:36types of
28:36institutions
28:37can play a
28:38role in
28:39media literacy
28:39and this
28:40is a
28:40meaningful
28:41point
28:41which is
28:41if you're
28:42OpenAI,
28:42you might
28:43have a
28:43different
28:43responsibility
28:44at the
28:44model level
28:45to tell
28:46your users
28:47and educate
28:47your users
28:48in a
28:48different
28:49way
28:49than a
28:49TikTok
28:49who's
28:50merely
28:50distributing
28:51content
28:51and those
28:52are very
28:52different
28:52roles
28:53in the
28:53ecosystem
28:54than
28:54kind of
28:55you know
28:55just being
28:56blanketed
28:56together
28:56but I like
28:58to differentiate
28:58I guess
28:59in the
28:59literacy
28:59component
29:00between
29:00direct
29:01disclosure
29:02which is
29:02end user
29:03facing
29:03and other
29:04technical
29:04signals
29:05that may
29:05actually
29:06dictate
29:06the way
29:06platforms
29:07interpret
29:08content
29:08and choose
29:09to moderate
29:10or adjudicate
29:11what gets
29:11shown
29:11so what
29:12is user
29:13facing
29:13is one
29:14way for
29:14the field
29:15to move
29:15forward
29:15with media
29:16literacy
29:16and we
29:17tout this
29:17in our
29:18responsible
29:19practices
29:20for synthetic
29:20media
29:21which have
29:21been adopted
29:22just this
29:22week
28:22by Meta
28:23and Microsoft
29:24and at
29:25the same
29:25time
29:25there are
29:26technical
29:26interventions
29:27at the
29:27scale
29:28you don't
29:28need to
29:28know every
29:29detail
29:29of traceable
29:30elements
29:31in an
29:31artifact
29:31that might
29:32help at
29:33the ecosystem
29:33level
29:34help platforms
29:35deal with
29:35this challenge
29:36which is a
29:37very very
29:37palpable
29:38one as
29:38there's more
29:39AI generated
29:39content
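To make the two channels concrete, here is a minimal hypothetical sketch in Python of direct versus indirect disclosure; the field names are illustrative, not the PAI framework's or C2PA's actual schema.

    import hashlib
    import json

    def indirect_disclosure(content: bytes, generator: str) -> dict:
        """Machine-readable provenance record bound to the content bytes."""
        return {
            "sha256": hashlib.sha256(content).hexdigest(),
            "generator": generator,
            "synthetic": True,
        }

    def direct_disclosure(record: dict) -> str:
        """The user-facing label a platform might render."""
        return "Labeled: AI-generated content" if record.get("synthetic") else ""

    image = b"...raw image bytes..."
    record = indirect_disclosure(image, generator="example-model")
    print(json.dumps(record, indent=2))  # what a platform would inspect
    print(direct_disclosure(record))     # what the end user would see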
29:40yeah well
29:41Claire brought
29:41up watermarking
29:42and I want to
29:43ask the other
29:43panelists about
29:44this
29:44I mean
29:44there's
29:45different
29:45thoughts
29:46about this
29:46some people
29:47feel like
29:47all
29:48synthetically
29:49generated
29:49content
29:49should carry
29:50some kind
29:50of a
29:50digital
29:51watermark
29:51that will
29:52enable you,
29:52or at least an expert,
29:53to easily
29:54identify
29:55it as
29:55manipulated
29:56then another idea
29:58is to maybe
29:58have any
29:59true content
30:00carry metadata
30:01that makes
30:01it very
30:02clear
30:03where was
30:04this image
30:04taken
30:05what was
30:05used to
30:05take it
30:06and if
30:07there are
30:07any
30:07editing
30:08or manipulations
30:09applied
30:09that that
30:10becomes
30:11transparent
30:11as well
30:12Charlie
30:12what's
30:12your view
30:13on this
30:14Yeah, I think anything that can
30:18contribute to better hygiene
30:21for information is good.
30:23But I think there's a danger
30:25with something so sort of universal
30:28as that, partly because it raises
30:31expectations that somehow
30:33you can eradicate disinformation.
30:36And of course, disinformation
30:37is ultimately often just subjective.
30:41Your wonderful, idealistic ideology
30:45may be my conspiracy theory, you know.
30:49And I worry about that expectation,
30:51because do you know the country
30:54that has the highest level of trust
30:57in media, and the highest level
31:00of trust in their politicians?
31:01Well, you know which one it is, don't you?
31:03Like North Korea or something?
31:05Yeah, it's China, of course.
31:07And I don't, personally, see China
31:09as a great model
31:11for an information ecosystem.
31:12So I think we're partly having to accept
31:15that we are living in a kind of complex
31:18information ecosystem that reflects
31:21much more truly, I think, who we are.
31:23The reason, as you keep mentioning,
31:25is that people don't make rational choices.
31:27They never have done.
31:29So why do we suddenly expect them
31:31to be super rational when they're online?
31:33It's crazy.
31:35So, you know, I'm a journalist,
31:37I'm a professor, so of course I welcome
31:40anything that makes people think
31:41more critically and rationally,
31:43resorting to evidence
31:45and citation and so on.
31:46But my fear is that if you have
31:49too high an expectation,
31:51you're actually going to end up
31:53screwing things up.
31:54that's interesting
31:55Sonia what is
31:56your view on that
31:56Yeah, I just
31:57have a very
31:58short one to add.
31:59It's really just
32:00to echo
32:00Charlie's comments.
32:11I would be wary
32:12of making
32:13a very
32:14technological solution
32:15to a problem that
32:18moves beyond technology.
32:18It is a social
32:19problem,
32:19it is a political
32:20problem,
32:21as we've,
32:22you know,
32:23been discussing
32:24here today.
32:24right
32:25Claire do you
32:26want to jump
32:27in on that
32:27again
32:27I'll offer a
32:27somewhat
32:28contrarian
32:29take
32:29to spice
32:30this up
32:31and actually
32:31it's going
32:32to be contrarian
32:32but not
32:33something that
32:34would go viral
32:34on Elon Musk's
32:35platform
32:36because it is
32:37a yes and
32:37so I agree
32:38that there is
32:39a concern
32:40with over
32:41reliance on
32:41a technical
32:42signal
32:43that might
32:43obfuscate
32:44our attention
32:45to how social
32:46of a problem
32:47this is
32:47and at the same
32:48time I don't
32:49think that
32:50duality should
32:51paralyze us
32:52into being
32:52passive
32:53and when we
32:54say watermarking
32:55we actually
32:55I wish
32:56we defined
32:56that
32:57because even
32:57within our
32:58community
32:58where we have
32:59Microsoft
33:00research
33:00cybersecurity
33:01engineers
33:02and user
33:03experience
33:03designers
33:04at Meta
33:05and folks
33:06at the New York
33:06Times
33:06writing about
33:07AI policy
33:08no one
33:09can agree
33:09on what
33:10they mean
33:10by watermarking
33:11when people
33:12say watermarking
33:13some people
33:13mean this idea
33:14of direct
33:15disclosure
33:15which is
33:16what do you
33:16actually label
33:17an artifact
33:18with
33:19others
33:19more nerdy
33:21in the AI
33:21field
33:22forgive me
33:23for that
33:23definition
33:24will say
33:25well
33:25it's actually
33:26invisible
33:27watermarks
33:27that are
33:28steganographic
33:28methods
33:29that are
33:29cryptographically
33:31sealed
33:31in the content,
33:32tracing
33:33the content's
33:34elements
33:34so we're
33:34actually
33:35working on
33:35what does
33:36it mean
33:36to disclose
33:37to watermark
33:38visibly
33:39invisibly
33:39and what
33:40is the
33:40suite
33:41of options
33:41we may
33:42have
33:42and there's
33:43a really
33:43meaningful
33:43effort
33:44kind of
33:45catalyzed
33:46by Adobe
33:47called the
33:47Coalition for
33:48Content
33:49Provenance
33:49and Authenticity
33:50C2PA
33:51for the
33:52buzzword
33:52version
33:53and in
33:53that they're
33:54trying to
33:54do this
33:55kind of
33:55baking
33:55of a
33:56signal
33:56into
33:57content
33:57of course
33:58some
33:58human rights
33:59folks are
33:59worried
34:00that that
34:00will be
34:00used
34:01to
34:01surveil
34:01people
34:02over time
34:03so there's
34:03a fear
34:04with certain
34:04watermarks
34:05that it
34:05will actually
34:05be used
34:06in authoritarian
34:07context
34:08to stifle
34:09speech
34:09but there
34:10are ways
34:10to look
34:11at this
34:11list
34:12of
34:12fingerprinting
34:13and digital
34:14signatures
34:14and figure
34:15out how
34:15that works
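The provenance idea behind C2PA can be sketched as a chain of signed records, each edit pointing back to its parent. The snippet below is a toy Python illustration only: an HMAC with a demo key stands in for the certificate-based signatures the real standard uses, and the entries are hypothetical.

    import hashlib
    import hmac
    import json

    SIGNING_KEY = b"demo-key"  # real systems use certificate-based signatures

    def sign(entry):
        """Attach a signature over the entry's canonical JSON form."""
        payload = json.dumps(entry, sort_keys=True).encode()
        sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return {**entry, "sig": sig}

    def provenance_chain(original_source, edits):
        """Each edit links back to the signature of the record before it."""
        chain = [sign({"action": "captured", "source": original_source})]
        for action in edits:
            chain.append(sign({"action": action, "parent": chain[-1]["sig"]}))
        return chain

    chain = provenance_chain("hypothetical wire-service article",
                             ["cropped", "caption text added"])
    for record in chain:
        print(record["action"], "<-", record.get("parent", "origin"))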
34:16And I'll give one example
34:17that I think shows the promise of this
34:19without overselling it, again, as a solution.
34:22And what is interesting, or, yes,
34:24where I disagree a little bit with Charlie,
34:26is that these watermarking methods
34:28are a way to not actually say
34:30what my conspiracy or your conspiracy is.
34:32It's a way to actually say
34:34where content came from,
34:36and not judge the actual content of it.
34:38So I've used this example, I've written about it:
34:41there's a meme that went viral a few years ago
34:44which was purporting to be George Soros
34:47in a Nazi uniform.
34:50And in it, it's not George Soros, it's a Nazi.
34:53And based on the C2PA standard,
34:55the idea is that you would be able,
34:57let's say a YouTuber found that picture
35:01and copied and pasted it from a real article
35:03about the Nazi and then made the meme,
35:05you would be able to click into that meme
35:08that then goes viral on your Facebook feed
35:10and see that it actually comes from
35:12an original Associated Press article
35:15and is in fact not George Soros.
35:18Which isn't saying anything about the content;
35:21it's giving you kind of the track and origin
35:23of the content to better empower you
35:25to make that decision.
35:27So the TLDR of my long spiel is
35:30I think that there are meaningful ways
35:32for watermarking to be a way around
35:34the value judgment and arbitration of truth
35:37that platforms struggle with.
35:38It's imperfect, and also there's a real need
35:41to understand how, and this is a huge open area
35:49in the AI field right now,
35:50watermarks can be stripped from the content.
35:53And if so, any bad actor
35:55will just manipulate the image.
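The stripping worry is easy to demonstrate. The toy Python sketch below hides a watermark in the least significant bit of each pixel value, a deliberately naive stand-in for the steganographic schemes mentioned above: the mark survives copying, but even a one-step brightness edit destroys it.

    def embed(pixels, bits):
        """Write watermark bits into the low bit of the first len(bits) pixels."""
        marked = [(p & ~1) | b for p, b in zip(pixels, bits)]
        return marked + pixels[len(bits):]

    def extract(pixels, n):
        return [p & 1 for p in pixels[:n]]

    mark = [1, 0, 1, 1, 0, 0, 1, 0]
    image = list(range(50, 80))        # stand-in for pixel values
    marked = embed(image, mark)
    print(extract(marked, 8) == mark)  # True: watermark readable

    # "Any bad actor will just manipulate the image": slight brightening
    # flips every low bit and destroys this naive mark.
    edited = [min(p + 1, 255) for p in marked]
    print(extract(edited, 8) == mark)  # False: watermark stripped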
35:57Right.
35:58We have one question from the audience
36:00which I want to get to,
36:02and it's a very broad question, which is:
36:04will AI influence political elections
36:07in the future?
36:08I'm just going to go very quickly
36:10down the line here.
36:11Charlie, what do you say?
36:14Depends what you mean, really.
36:17I mean, I think, probably,
36:19if it does influence it, it will be,
36:21well, it's already happened in the UK:
36:23we had personalised Facebook advertising
36:27at the last election.
36:29But British politics is so crazy at the moment
36:33that, frankly, you know, any AI programme
36:36could not cut through the madness
36:39that is happening right now.
36:41So I don't think it will have a big effect.
36:43I think people will obviously try and use it,
36:47but it's probably about data collection, actually.
36:49I think it would be more
36:51about gathering data and canvassing
36:56and stuff like that, rather than some sort of
37:00shadowy image generation
37:02to try and swing your vote.
37:04Yeah, although I guess in Turkey
37:06we already have an example of a deepfake
37:08that may have played a role
37:09in that election recently.
37:10But we've seen it already. Look at India as well:
37:13you didn't really need AI.
37:14The amount of lies and propaganda and fakes
37:18that were on WhatsApp, you know,
37:21in Indian politics.
37:22And again, it's generated by people,
37:25it's not generated by AI.
37:26Right. Sonia, what's your view?
37:28so
37:30I
37:30you
37:31know
37:31Claire
37:32kind
37:32of
37:33said
37:33it
37:33as
37:34soon
37:34as
37:34you
37:34asked
37:34the
37:34question
37:34but
37:35I
37:35think
37:35it
37:35already
37:36has
37:36we
37:36do
37:36have
37:36some
37:37examples
37:38where
37:38attempts
37:41to try
37:41and
37:41use
37:42AI
37:42and
37:42advanced
37:43systems
37:43to
37:43influence
37:45if not
37:45elections
37:46then the
37:46integrity
37:47of
37:48elections
37:48and our
37:49democratic
37:49processes
37:50whether that's
37:50you know
37:51to dilute
37:51trust
37:52to
37:53sow
37:53division
37:54in the
37:55populace
37:55so
37:55for me
37:57I
37:58think this
38:00is a
38:00perfect
38:00use case
38:02of where
38:02we want
38:03to draw
38:03some
38:03boundaries
38:04we want
38:04to think
38:05carefully
38:05and collectively
38:06about
38:07where we
38:08would like
38:09to build
38:09some safeguards
38:10and some
38:10guardrails
38:11around
38:12transparency
38:13accountability
38:15keeping a
38:15human rights
38:16based framework
38:17in mind
38:17for any
38:18regulation
38:18Claire answered that,
38:20she said
38:20yes right away
38:21but
38:22maybe I'll say
38:23what I think
38:23is different
38:24now because
38:24I agree
38:24with my
38:25co-panelists
38:26one is
38:27what I'm hearing
38:27from some
38:28of the
38:28platforms
38:28I work
38:29with
38:29is
38:29the content
38:31production
38:31is not
38:32usually
38:32the issue
38:33for
38:33propagandists
38:34and bad
38:35actors
38:35meaning
38:35they don't
38:36have a hard
38:36time
38:36creating
38:37crap
38:38to put it
38:38lightly
38:39what might
38:40be
38:40interesting
38:40though
38:40is
38:41they have
38:41a hard
38:41time
38:42imitating
38:42the behavior
38:43of real
38:44accounts
38:44it might
38:44take a long
38:45time
38:45to create
38:46a real
38:46persona
38:47that confuses
38:48let's say
38:48the meta
38:49security team
38:50and what
38:51generative AI
38:51might enable
38:52is not just
38:53a bellwether
38:54deepfake
38:55that changes
38:56the market
38:56or changes
38:57the outcome
38:57of an election
38:58but might make
38:59it easier
38:59to simulate
39:00these real
39:01behavioral accounts
39:02if you can make
39:03six accounts
39:04at once
39:04because you can
39:05create an image
39:06and create text
39:07more quickly
39:07than if you
39:08already were doing
39:09it
39:09that's something
39:10from a behavioral
39:11standpoint
39:11I'd be worried
39:12about
39:12I'll also say
39:13with the election
39:14context
39:14at least in the
39:15United States
39:16it is an interesting
39:17policy lever
39:18for generative AI
39:20more broadly
39:21because there are
39:22restrictions on
39:23speech related to
39:24elections
39:25there are restrictions
39:26in the U.S.
39:26on if you lie
39:27about polling places
39:28and the voting
39:29behavior
39:30and that has been
39:31used by certain
39:33legislators in the U.S.
39:34as a way to kind of
39:35finesse generative AI
39:37legislation
39:37that might allow us
39:39to then have
39:39trickle down effects
39:40into the broader
39:41AI policy space
39:42so elections
39:43are interesting
39:44in terms of their
39:44kind of precedent
39:46setting for the
39:48broader AI policy space
39:49and I'd argue
39:50that it's not
39:51content production
39:52but kind of
39:52behavioral patterns
39:54that might be
39:55amplified or
39:56accelerated
39:56even though AI
39:57has already
39:58definitely been used
39:59in election contexts
40:01interesting
40:02I want to get
40:03to Charlie
40:03you know
40:04a lot of news
40:05organizations
40:05are thinking about
40:06using this technology
40:07to help produce
40:07content themselves
40:08in potentially
40:10writing entire stories
40:11using it for
40:12illustrations perhaps
40:14what do you sort of
40:15think about this trend
40:16and is there a danger
40:17here in that
40:18again people might not
40:19know
40:19well here I have
40:20a trusted source
40:21but it's using
40:22this technology
40:23where there's no
40:23human involved
40:24in creating this content
40:25do I believe the content
40:27again does it help
40:28or does it sort of
40:29hurt in terms of
40:30well bear in mind
40:31people have been using
40:31traditional AI
40:32in news production
40:33for many years
40:34you know automating
40:35things like weather
40:36financial results
40:37subscriptions etc etc
40:40with the news stuff
40:41and I'm going around
40:42the world talking
40:42to newsrooms
40:43and there's a kind
40:46of controlled panic
40:46going on
40:47you know
40:48the good newsrooms
40:49are very excited
40:51about how this might
40:52be able to make
40:53them more efficient
40:53and effective
40:54so they're looking
40:56hard at it
40:57but they're very
40:58conscious about
40:59the risks
41:00so the good news
41:01organizations
41:02are not rushing
41:03to use this
41:04to publish
41:05straight away
41:06they're keeping it
41:07away from the front
41:08line
41:08but obviously
41:09there are some
41:09dumb news organizations
41:10out there
41:11who have done
41:12things like the
41:12Michael Schumacher
41:14interview
41:15and you can see
41:16in a way
41:17I'm grateful to them
41:18because they've shown
41:19what a reputational
41:21rebound they're going
41:22to get
41:22if they get it wrong
41:23right
41:24we got another
41:26question from the audience
41:26which I will ask
41:27before we run out of
41:28time here
41:28which is
41:28do regulators
41:29understand
41:30the possible influence
41:31of AI
41:32Claire you probably
41:33talked to a lot
42:34of regulators
41:34around the world
41:36what's your view
41:37on this
41:37are people
41:38understanding
41:38this technology
41:39or not really
41:40it depends
41:41is the honest answer
41:42I think there's
41:44skepticism
41:44that any congressperson
41:45let's say in the US
41:46or those in EU
41:48government
41:48understand
41:49but there are
41:49people fighting
41:50the good fight
41:51who are technologists
41:53there needs to be
41:54more education
41:54is the short answer
41:55and it's not just
41:57understanding the AI
41:58technology
41:58but in some ways
41:59it's becoming
42:00literate
42:01in all of the
42:02use cases
42:03for AI
42:03if you are thinking
42:04about AI
42:05in public health
42:06you have to be
42:06somewhat aware
42:07of the precedent
42:08in public health
42:08regulation
42:09if you are thinking
42:10about AI
42:11and voting
42:12and elections
42:13there's election
42:14literacy
42:14but it's very hard
42:16to offer a rebuttal
42:17to the technology
42:18companies
42:19if you don't have
42:19a degree of
42:20sophistication
42:21and literacy
42:21in how these
42:22models work
42:23which sometimes
42:24we don't even know
42:24so there is a
42:25dearth of knowledge
42:26that doesn't mean
42:27it's completely absent
42:28and there needs
42:29to be better
42:30cross-pollination
42:31and that education
42:31shouldn't just be
42:32coming from the
42:33large technology
42:34companies
42:34it can't just be
42:35Anthropic
42:36and OpenAI
42:37educating government
42:38all the time
42:39there needs to be
42:39other venues
42:40for collaboration
42:41and neutrality
42:42frankly
42:43in how policy makers
42:45understand the landscape
42:46interesting
42:47and Sonia
42:47you talked about
42:48the sort of
42:48Canadian model
42:49of this kind of
42:49community building
42:52and discussion
42:53around topics
42:54to then feed
42:55into policy
42:55do you think
42:56that can work
42:57sort of more broadly
42:58and maybe you can
42:58talk a little bit
42:59more about the
42:59mechanics of that
43:00yeah
43:02so I think
43:03a lot of the people
43:06building these systems
43:07would also say
43:08that at some point
43:09you have no way
43:10to understand
43:12what the potential
43:13outputs or results
43:14would be
43:14right
43:15especially when we
43:16think about
43:16machine learning
43:17and things like
43:18generative AI
43:19I think that's why
43:20we can go
43:22to the input data
43:24and we certainly
43:26do have
43:27ways to
43:29make that
43:31understandable
43:32to policy makers
43:33in an accessible way
43:34I think it also
43:36gets at
43:36you know
43:37policy makers
43:38may not
43:38understand the exact
43:40technical parameters
43:42but you know
43:43they are very
43:44well versed
43:45in things like
43:45recourse
43:47accountability
43:48you know
43:48we do
43:49have fantastic
43:50lawyers around the world
43:52that are building
43:52really great solutions
43:54for what happens
43:55when I'm potentially
43:56harmed by an AI
43:58there's a case
43:58in the US
43:59in Detroit
43:59where someone
44:00was wrongfully convicted
44:01based on a
44:02recommendation algorithm
44:03used in the
44:04criminal justice system
44:05now again
44:06that just goes back
44:07to the use case
44:08of something like this
44:10in a particular
44:11political context
44:12the US is a very
44:13particular kind of place
44:15so yeah
44:16I think we do
44:17have
44:18possibilities
44:19certainly
44:20well that's a great
44:21optimistic note
44:22to end on
44:23I'm afraid we're out of
44:24time but thank you all
44:25for coming and listening
44:26and thank you to the
44:27panelists
44:27it's been a fantastic
44:29discussion
44:29thank you
44:30thank you
44:34thank you
44:46thank you
44:48Thank you.