The New Information Gods: From Fact-Checking to Reality-Checking

Category

🤖 Technology

Transcript
00:00Good afternoon. Thank you so much for being with us today.
00:04As you all know, the conversation is about misinformation, the new information gods.
00:12So what I wanted to do first is kind of do a bit of a scene setting.
00:17I'm sure you've all heard about all the horrible things that AI is bringing to us
00:22and how difficult it's going to be to discern reality,
00:26to tell what's synthetic, what isn't.
00:30But what we're realizing is that that's just part of the question
00:35or part of the issue we're dealing with.
00:37I don't think anyone in this room, or even the general public,
00:41will have an issue with figuring out what reality is.
00:45More and more we'll have a larger toolbox that will tell us how to discern reality.
00:50So the real question then becomes: when we do know that something is real,
00:55when we know there is a blue check mark that tells us something is to be trusted,
01:00will we continue to trust it?
01:02So today, the reason I wanted to talk about all of this is because truth feels contested.
01:11And the traditional gatekeepers are weakening.
01:14The algorithms are shaping more and more what we see and how we deal with reality.
01:22And, of course, misinformation spreads much faster.
01:25It's much easier to spread than anything else.
01:27So today's conversation isn't about how powerless we are, but how we are going to build better.
01:35And so from platforms to policy, the speakers that we have here today are going to talk about individual skills
01:43and systemic incentives.
01:44We'll explore how we move from simple fact-checking, because now we know that's not enough to know that something
01:52is true,
01:53to something a little bit deeper, reality-checking, a way for us to re-anchor truth in a
02:00world that's shifting.
02:01In the past, we've looked to institutions to build that trust.
02:07But what we're seeing now, with data and research and everything, is that we are not trusting institutions.
02:14And it's not just institutions we're not trusting, but also people of authority.
02:19So it's not like we've shifted from institutions to individual human beings.
02:23We've shifted completely away from what used to be trustworthy for us.
02:28So then the question that I'm going to lead with is, if that's the case, how do we build a
02:34system where trust is restored,
02:37where we are collaborating and contributing to that trust-building exercise?
02:43But should that fall on us just as individuals?
02:46That's a lot to bear, right?
02:48So my opening question to all of you is, I wanted you to maybe begin, and we can start from
02:54Kate and then come back here, just to keep you on your toes.
02:59We can start with what the work you're doing is, and how you answer that question of
03:06building trust.
03:07We can start with that, and then I have a lot more questions for you. Over to you, Kate.
03:12Thank you for having me here.
03:14I'm Katie Sanders.
03:15I'm the editor-in-chief of PolitiFact.
03:18This is a nonprofit, nonpartisan fact-checking website that fact-checks politicians, influential pundits, and also social media claims.
03:28And I've been involved with PolitiFact for more than 13 years at this point.
03:33And so I've really focused a lot of energy and thought around how we can promote truthful information online.
03:43One thing that we do at PolitiFact that is different from a lot of traditional news media is that we
03:48show all of our sources.
03:51We do the reporting, but we show audiences exactly how we came to that conclusion.
03:56And so I think one of the nice effects of fact-checking journalism has been maybe prodding traditional institutions, traditional
04:05journalism organizations, into doing more of that showing of the work,
04:11including more context and more background information around some of the most inflammatory policies or claims.
04:19That's where I'll begin.
04:20And it's an incredibly crucial skill to build for the general public, just showing us the work.
04:27You know, in finance we talk about showing where the money comes from; this does that with the information
04:32ecosystem.
04:33Mario.
04:35Really good dovetail to show your work.
04:37So I'm the CEO and co-founder of Readocracy.
04:39We like to say it's like a Fitbit for knowledge, a Fitbit for your mind.
04:43And I think around trust, the most important component of what we do is, first of all, if you want
04:49to have trust and know that you are even aligned with your own thoughts,
04:52having visibility on what you've consumed, and the data that represents how that might be manipulating your emotions and your
04:58worldview, is quite important for having trust even in your own beliefs and consistency.
05:03And the other side is trust outwardly.
05:05So if we can measure the effort people put into the subjects they are claiming to be experts on, when
05:11somebody comments on X or on LinkedIn, do they actually know what they're talking about?
05:15Did they spend the time?
05:16And so having provenance of that is how we can gain trust as well to show your work and understand,
05:22you know, are you just a thought leader because you post a lot?
05:25Or what's the provenance on the effort you put in to get here?
05:28So.
05:28Thank you, Mario.
05:29The academic in me is so excited about that.
05:32Sunny.
05:32Yeah, really nice to be here.
05:35My name is Sunny Liu.
05:36I'm director of research at Stanford Social Media Lab and a research scholar at Stanford Cyber Policy Center.
05:42So when I think about trust, I think there are three approaches to rebuilding it.
05:49First, fundamentally, we want to have new, better information ecosystems, working with civic organizations and
05:57platforms to build infrastructures that can better support
06:02individuals' information choices.
06:05And the second one: I think it's really important to empower users.
06:09Individuals need the abilities, competence, and capacity to really navigate these systems so they can make decisions based on
06:18their own values and choices.
06:20And the third one, most importantly, is how we can work with policymakers, so we can bring our evidence from
06:27research and they can better build and sustain evidence-based policymaking processes.
06:36Thank you.
06:37Thank you so much.
06:38It's so crucial to have all these stakeholders involved in this process.
06:42Steve.
06:43Hi, everyone.
06:44My name is Steve Rathje.
06:46I am a postdoctoral researcher in psychology at New York University.
06:50And I'm also an incoming assistant professor of human computer interaction at Carnegie Mellon University.
06:56Yay.
06:57Thank you.
06:59And broadly, what I study is the psychology of technology.
07:03And I do psychology experiments and big data analysis to understand questions like what goes viral on social media?
07:11Why is it that what goes viral on social media is often what we don't think should go viral on
07:16social media, such as misinformation and toxic content?
07:19And what are the consequences of our misinformation and information diets on factors like polarization and trust?
07:25So I try to explore these questions from a psychological perspective.
07:29And I also want to add that I am an active science communicator.
07:33I have a science communication TikTok channel called Steve Psychology with over a million followers that I've been, you know,
07:39working on for the past five years where I try to share evidence based psychology studies and scientific studies.
07:45And I think this is interesting to bring up in the domain of trust because what people
07:50trust nowadays is less and less the institutions and the media.
07:55Instead, people go to influencers to build trust.
07:57And we're in a time of decreasing trust in science.
08:00So it's interesting, you know, from a personal perspective to try to rebuild trust in science from showing a little
08:07bit what the perspective of a scientist working in the field is like.
08:12And it's such important work, Steve, you know, making sure that our influencers are the right people with
08:23the right incentives to help educate the public, because they hold so much of that public trust in their hands.
08:31So there is a lot of responsibility that comes with that power.
08:34Totally.
08:35So I wanted to dive a little bit deeper with each of you.
08:39So I'm going to start with Sunny, our researcher, AI and literacy advocate.
08:44I wanted to ask you, we've talked a lot about this, you know, even before, before we joined here.
08:50We need to let people know we need to educate.
08:54Education comes up in every single panel discussion I've had about AI.
08:59But is literacy enough?
09:00And what kind of literacies are we really talking about?
09:03Right.
09:05I think of traditional digital literacy skills, like the fact-checking skills Katie's team uses a lot:
09:11lateral reading, reverse image search.
09:13That is, when we see a photo, you can use Google Lens and then see what the original source of
09:19that photo is.
09:20Those are actually still relevant, especially in today's information ecosystems.
09:26There's a lot of content even curated or generated by AI.
09:30For example, Google has this AI mode.
09:33At that point, should we just go with Google's AI mode?
09:38Or maybe we can go a little bit deeper and use lateral reading; that is, open a new browser tab
09:44and then check other sources to double-check.
09:46So I think those are still important skills to think about at the information or fact level.
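A minimal sketch of the matching idea behind reverse image search, not what Google Lens actually runs: perceptual hashes barely change when an image is resized or re-encoded, so a small Hamming distance between hashes flags a likely match. Pillow and the third-party imagehash package are assumed installed; the file paths are invented.

```python
# Hedged sketch: perceptual hashing, the core idea behind matching a
# suspicious photo against known originals. Paths are hypothetical.
from PIL import Image
import imagehash

def build_index(paths):
    # Hash each known image once so near-duplicates can be found later.
    return {path: imagehash.phash(Image.open(path)) for path in paths}

def find_matches(query_path, index, max_distance=8):
    # A small Hamming distance between pHashes usually means the same
    # picture, possibly resized, re-encoded, or lightly edited.
    query_hash = imagehash.phash(Image.open(query_path))
    return [p for p, h in index.items() if query_hash - h <= max_distance]

index = build_index(["archive/photo_2019.jpg", "archive/photo_2021.jpg"])
print(find_matches("suspicious_post.jpg", index))
```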
09:52So another thing, I think, is to understand what we call algorithm literacy.
09:56This is the idea that we have to understand fundamentally what an algorithm is.
10:01How did we get this piece of information?
10:04Why was I recommended that shoe advertisement instead of a book?
10:09So how does that happen? Why recommend that?
10:12Fundamental understanding of these algorithms, how they work.
10:16And figure out ways to train those algorithms to fit our own needs.
10:22That's the second literacy skill.
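A toy illustration of the ranking step Sunny alludes to, with invented profiles and numbers: recommenders score items against a profile learned from past behavior, so the shoe ad can outrank the book simply because past clicks strengthened the "shoes" feature.

```python
# Toy illustration of algorithmic ranking; feature names and values are invented.
import numpy as np

# Hypothetical interest dimensions: [shoes, sports, books, news]
user_profile = np.array([0.9, 0.7, 0.2, 0.1])  # built from past clicks/watch time

items = {
    "shoe_advertisement": np.array([1.0, 0.6, 0.0, 0.0]),
    "book_recommendation": np.array([0.0, 0.0, 1.0, 0.2]),
}

# Score each item by how well it matches the profile; higher = shown first.
scores = {name: float(user_profile @ vec) for name, vec in items.items()}
print(sorted(scores.items(), key=lambda kv: -kv[1]))
# [('shoe_advertisement', 1.32), ('book_recommendation', 0.22)]
```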
10:24The third one, which lots of us are talking about today and everybody is trying to figure out, is AI literacy.
10:31What do we mean by being literate with AI?
10:34So in our lab, we're trying to focus on three important aspects.
10:39The first one, still, is fundamentally understanding.
10:42We need those basic layers of understanding this technology so we can feel comfortable.
10:48The second part we call critical evaluation.
10:52I think what we missed in the social media era is a really fundamental understanding about when is the time to
11:00use social media.
11:01When is it too much? How do we know that, and how do we draw the boundary?
11:06I think with AI we now have this opportunity right from the beginning.
11:11Teach people, share with them what AI is.
11:14When is a good time to talk to AI?
11:17When is it maybe a time we should not talk to AI,
11:20and instead go to colleagues, friends, families?
11:24So I think those are important parts of critical evaluation.
11:27I think the third part is actually the most challenging for industry and for educators:
11:33how to teach people to become more authentic users of AI tools.
11:39Think about these AI tools not just as, I want a new recipe,
11:43I can search for that.
11:45But think about people when they go to the job market.
11:47They want to build their resumes.
11:50If you've never done a resume before, it's actually a really daunting job.
11:54I remember how many tries my first resume took.
11:58So maybe AI can help young people like that.
12:00Make it easier for them to build those basic skills,
12:03from building resumes to figuring out how to do the interviews.
12:08That's how we can use those tools to fit our needs and become authentic users of AI.
12:14So I think those are the three skills:
12:17traditional literacy skills, algorithm literacy, and AI literacy.
12:21That's so important.
12:23So in the education realm, we consider numeracy and literacy foundational.
12:28Would you say that AI literacies, the three that you mentioned, are also foundational in this coming age?
12:36Yeah.
12:37I do think it's exactly right to treat AI literacy as foundational.
12:43But I think the core of AI literacy is very different from other literacies.
12:48With other literacies, if I take a class, that skill set, say fact-checking, I can use for
12:55my next five years.
12:57Algorithm skills I can use maybe for one or two years.
13:00But for AI literacy, the class I took today may hold for only the next month; half of the content has
13:06to keep updating.
13:07So the core part is really what we call meta-learning.
13:13It's about meta-skills, how to keep asking questions.
13:17We want to nurture all that curiosity and lifelong learning to do that.
13:23Thank you so much, Sunny.
13:25I'm going to go to you, Steve.
13:27I find how you've connected the fuzzy and the techie very interesting; they coexist in a very interesting
13:34way.
13:34Thank you.
13:34So what I want to ask you, as both a technologist and systems thinker, you talked about toxic incentives.
13:42We know that disinformation, misinformation sells.
13:46We've built an economy around it.
13:48So as a systems thinker, as a psychologist, how can we undo this, how
13:56can we put that genie back in the bottle now that we know it's so profitable?
14:00It also feels good.
14:02The dopamine hits, you know, that we get from engagement.
14:06Yeah, I mean, I don't have the clear like solution or answer for you, but I can give you a,
14:11you know, hint.
14:13So a lot of my early PhD research was on what goes viral on social media.
14:19And I spent a lot of time in my early PhD analyzing large data sets of social media data from
14:25Facebook, Instagram, from politicians, from influencers, etc.
14:29And I wanted to answer the question of what goes viral on social media.
14:33And I found that one of the biggest predictors of virality, of all the predictors we measured, was
14:40what we called outgroup animosity.
14:41When politicians or media organizations were especially negative about the outgroup or the opposing party, that went viral.
14:50And this coincides with a lot of other research suggesting that moral outrage or negativity goes viral on social media.
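A rough sketch of that kind of analysis, assuming a hand-made out-group word list and invented posts (the study's actual pipeline and data are not shown here): count out-group references per post and regress share counts on them.

```python
# Hedged sketch: does out-group language predict shares? Data invented.
import numpy as np
import statsmodels.api as sm

OUTGROUP_WORDS = {"democrat", "republican", "liberal", "conservative", "leftist"}

def outgroup_count(text):
    # Crude dictionary count of references to the (assumed) out-group.
    return sum(word in OUTGROUP_WORDS for word in text.lower().split())

posts = [  # (text, share_count); all hypothetical
    ("the liberal elite is lying to you again", 950),
    ("republican hypocrisy on full display", 720),
    ("lovely sunset over the bay tonight", 12),
    ("new study on sleep and memory", 40),
]
X = sm.add_constant(np.array([[outgroup_count(t)] for t, _ in posts], dtype=float))
y = np.array([shares for _, shares in posts], dtype=float)

# Real share counts are overdispersed; Poisson is a simple stand-in here.
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(fit.params)  # positive slope: more out-group language, more shares
```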
14:58Now, it's interesting because when you survey people and ask them, what do you think should go viral on social
15:04media?
15:04Or what do you think should be amplified by social media algorithms?
15:07People don't think that negativity should go viral at all.
15:10People would prefer that accurate or positive or scientific content go viral.
15:15But that's not what goes viral.
15:17And we call this the paradox of virality or the idea that widely shared content is not necessarily widely liked
15:24on social media.
15:25So why does this paradox exist?
15:27It's probably for the same reason that you eat a bunch of junk food but, you know, want to go
15:32to the gym or be eating a salad or why you stop and look at a car crash on the
15:36side of the road.
15:37Basically, social media platforms and increasingly AI companies have incentives to create addicting products that keep you engaged as long
15:46as possible.
15:48So what is the answer to fixing these toxic incentive structures?
15:52I conducted a series of psychology experiments that perhaps gives a preview into this question.
15:58So in these psychology experiments, we showed people a number of true and false news headlines and we paid half
16:05of participants to accurately discern between the true and false headlines to provide the right answer about what was fake
16:12and what was real.
16:13And paying people to be more accurate made people more accurate.
16:18It also made them slow down and spend more time reading the content and discerning truth from falsehood.
16:24So if people have a financial incentive to be accurate, they will be more accurate.
16:28We also had another experimental condition where we paid people to identify articles that would be liked by members of
16:36their political in-group.
16:37And we found that this incentive distracted people from accuracy and actually made them worse at discerning between true and
16:45false news articles.
16:46So essentially, people respond to incentives. Influencers will respond to incentives to create the most viral content and sometimes they
16:54will create false content because false content plays into our tendency to be attracted to outrageous or negative content.
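The outcome measure in experiments like these can be sketched as "truth discernment": average belief in true headlines minus average belief in false ones, compared across incentive conditions. The numbers below are invented to show the pattern Steve describes.

```python
# Hedged sketch of the discernment measure; ratings are invented.
import numpy as np

def discernment(ratings_true, ratings_false):
    # Higher = better at telling true headlines from false ones.
    return np.mean(ratings_true) - np.mean(ratings_false)

# Hypothetical "is this headline true?" ratings on a 0-1 scale
control = discernment([0.70, 0.65, 0.80], [0.45, 0.50, 0.40])
paid_for_accuracy = discernment([0.85, 0.80, 0.90], [0.25, 0.30, 0.20])
paid_for_ingroup = discernment([0.75, 0.70, 0.80], [0.60, 0.65, 0.55])

print(f"control: {control:.2f}, accuracy incentive: {paid_for_accuracy:.2f}, "
      f"in-group incentive: {paid_for_ingroup:.2f}")
# Pattern in the talk: accuracy pay raises discernment, in-group pay lowers it.
```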
17:04So I think the answer comes down to changing the incentives of social media.
17:08We've even seen some tech companies play around with this; WhatsApp, for instance, put a limit on how many
17:14times you could forward a message, because it seems like a lot of the issue with social media is just this
17:19tendency for things to go viral incredibly quickly without people stopping and thinking.
17:24So if there is some way to fix the incentive structure so people are motivated more by accuracy and less
17:30motivated by profit, that would be good.
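The WhatsApp-style friction reduces to a simple rule; a hedged sketch with a hypothetical cap and data model:

```python
# Toy model of a forwarding cap that slows runaway virality.
# The cap value and message IDs are hypothetical.
from collections import defaultdict

FORWARD_CAP = 5
forward_counts = defaultdict(int)  # message_id -> times forwarded so far

def try_forward(message_id):
    if forward_counts[message_id] >= FORWARD_CAP:
        return False  # refuse: message is already "highly forwarded"
    forward_counts[message_id] += 1
    return True

for _ in range(7):
    print(try_forward("msg-123"))  # True five times, then False
```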
17:33And, you know, as an influencer or content creator or whatever you want to call it, I'm sort of in
17:38this bind as well where I want to go viral on TikTok with my videos.
17:41But my primary identity is a scientist. And as a scientist, I have a strong identity and reputational incentive to
17:50be accurate at all costs.
17:51So I think that prevents me from being sucked into the toxic incentive structure of social media and always sharing
17:56outrage.
17:57So if there's a way to incentivize people to care about accuracy, I think that is the
18:03answer.
18:03Now, how to do that is a bigger question.
18:06Oh, I know someone who can answer that question.
18:09Mario, you want to take that one?
18:10He talks about reputational incentives.
18:13What do you think?
18:15I love it. I mean, what a setup. Thank you.
18:19So I think we all have to be aware that the Internet doesn't have to be the way it is.
18:26Social media doesn't have to be the way it is. It is a choice of what we make the incentive.
18:32And there is a reputational component.
18:34I mean, I'm going to sound like a broken record to my friends here, but just to give you this
18:40fact again, which you probably don't know.
18:42Everybody in this audience, you'll probably spend about as much time consuming and discussing content in your life as you
18:49would studying for about 35 college degrees.
18:52I don't know about you, but a college degree is expensive.
18:54We're in a knowledge economy where people largely make decisions based on what they think you know.
18:58And what we're doing today is just sharing a link here and there or having a line item about what
19:03we claim to know.
19:04There is an opportunity there.
19:06Today, with monetization, you get rewarded by the quantity of attention you get.
19:13That defines power, no matter how you do it.
19:16If you're distracting, if you're lying, if you're violent, you will be rewarded.
19:21So we shouldn't be surprised that we have the world we have.
19:24But why not make it about the quality, not the quantity, the quality of attention you give?
19:33There's a missing half of the internet.
19:34Every time you look at a tweet on X or you look at a LinkedIn post or anything else, you
19:40just see the reactions.
19:41How do people react?
19:42This tells you only half of the picture.
19:44Where is the other half, on top?
19:46What did somebody do to give you this opinion?
19:49What went into this or are they just there to provoke you?
19:53And so there's an opportunity here.
19:53And we have to remember again that actually the digital advertising ecosystem, depending on how you cut it,
20:01is under a trillion dollars, with a lot of fraud and uncertainty.
20:07If you map things to the knowledge economy and we make our time count for provable reputation,
20:15then we're talking about touching on over five trillion dollars of value.
20:19And so there is an incentive opportunity there that I think we need to tap into and be more mindful
20:25of.
20:26Because ultimately, when we talk about incentives, there is an infrastructure around us.
20:33And in the past, the infrastructure of what we trust belonged to trusted media and trusted educational institutions.
20:40We abdicated that to social media.
20:43And now trust is defined by somebody else who has a specific incentive mechanism.
20:47We can take that back.
20:49But we have to invest and think about the alternatives of the infrastructure of thought and behavior.
20:54And we can do that, but it's a choice.
20:56We can't just let ourselves keep being brainwashed like this.
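A toy, entirely hypothetical version of the "quality of attention" idea: weight engaged reading time by source credibility instead of counting raw attention. The ratings and log are invented; Readocracy's actual method is not described in this talk.

```python
# Hypothetical sketch: score attention by quality, not quantity.
SOURCE_CREDIBILITY = {"peer_reviewed_journal": 1.0, "quality_news": 0.8,
                      "personal_blog": 0.4, "anonymous_meme_page": 0.1}

reading_log = [  # (source_type, minutes of engaged reading); invented
    ("peer_reviewed_journal", 30),
    ("quality_news", 45),
    ("anonymous_meme_page", 120),
]

def attention_quality(log):
    # Credibility-weighted share of total reading time.
    total = sum(minutes for _, minutes in log)
    weighted = sum(SOURCE_CREDIBILITY[src] * minutes for src, minutes in log)
    return weighted / total if total else 0.0

print(f"quality of attention: {attention_quality(reading_log):.2f}")  # 0.40
```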
21:00That was heavy.
21:03Thank you, Mario.
21:05So much to unpack there.
21:07And I'm not going to divert our conversation to something else.
21:10But I wanted to bring you, Katie, into this discussion from the point of reputational incentives.
21:16There's a huge reputational incentive for the media world to kind of get back to some of the stature, the
21:25fourth estate, the power they held, that compact with the audience and that trust compact that has diminished now.
21:36How do you do that in a way that doesn't, how do you point to the misinformation?
21:42How do you point to what is valuable for the information consumer without alienating?
21:48Because a lot of the public audiences out there, what they're saying is, we do not want to be lectured
21:54by an organization or an industry that has made so many mistakes.
22:00I get it.
22:01But first of all, I want to sign up for this new system where we get coins for being right
22:06and being accurate.
22:07Like that sounds like a new infrastructure.
22:09And I'm very interested.
22:10So let's connect about that.
22:11Make an effort, right?
22:12That should be a factor.
22:15You know, I get it.
22:16The fact checkers are often demonized in online spaces.
22:21And I think it's because we have a tendency to come across like know-it-alls at the table.
22:26We're at the unpopular side of the internet cafeteria because we're kind of boring and nuanced.
22:32And so I understand.
22:34No one likes a know-it-all.
22:36So at PolitiFact, we try to put on a friendly fact checker voice.
22:41We're like the friendly neighborhood Spider-Man of media, you know.
22:46We're trying to introduce ideas and evidence in an approachable manner that kind of walks you,
22:52holds your hand and walks you through it because we know that any air of condescension or I know more
22:58than you
22:59and you should just trust me because I work for a news organization is really turning audiences off.
23:06PolitiFact was founded back in 2007 ahead of the 2008 election to fact check presidential candidates.
23:13And we do that on both sides in the U.S. context.
23:18But that work is inherently controversial because if you're throwing a flag on a play,
23:25the people who like that team are not going to like it.
23:28Even if what you said is totally accurate, they're not going to like it.
23:31And so we have an 18-year track record of throwing flags.
23:36And we were a part of the Meta fact-checking program that worked independently with fact-checking organizations like mine
23:45around the world.
23:46It no longer operates in the U.S.
23:48But we were paid to fact-check content on Meta platforms.
23:52And it did turn off some people who were racking up a lot of penalties for sharing content that was obviously
24:04polarized and factually inaccurate.
24:08And so we became easy targets by being the people who were pointing out, with full reports and evidence and
24:17context, why we were saying this was wrong.
24:19We got a lot of hate for that.
24:22And it culminated in Meta adopting more of a community-notes-style function and no longer working with U.S.
24:29fact checkers.
24:31So we think a lot about how to make our work more approachable.
24:36But I think more and more what we have to talk about is how to arm the new users of
24:42platforms who are fulfilling that function with basic media literacy skills like Sunny was talking about.
24:48So my PSA is if you have children maybe who are starting to navigate online spaces, make sure they know
24:56how to do a reverse image search.
24:58We use it every day in our fact checking work.
25:00It's like the first thing we do when we see suspicious content.
25:03The other thing is just Google searching for keywords.
25:07It's so important, the work that you're doing.
25:10And it makes me think of, you know, a year ago we talked about disinformation on this very stage.
25:16And some of the answers were, well, there's all sorts of ways we can use AI or AI products and
25:22agents to help us figure out if something is synthetic or disinformation and all of that.
25:29And it comes back to: it's not enough. Grammarly, which I'm sure a lot of you have heard about,
25:37suggests, when you're writing something, that you should check it;
25:42that doesn't sound like the spelling or the grammar is correct, right?
25:46But it doesn't change it for you. It doesn't tell you if it's right or wrong.
25:50It's just suggesting that, you know, you need to think about it before you finalize your written product.
25:57But at the same time, we're putting the onus on the individual.
26:03We're telling the individual: a lot of this falls on you, to not be misinformed, to deal with all
26:10of the things that are happening in this ecosystem, all the bad things.
26:15Now it falls on the individual. Some of it is the individual's fault, because we're not letting the institutions help
26:24us anymore.
26:25And the other part is because we created the system. So how do we change the system so it doesn't
26:30all fall on the individual, but the system itself can help change in a way that the individual feels more
26:37trust?
26:38It's a broad question that I have for all of you. So I don't know if someone wants to jump
26:42in. Mario.
26:45So I will volunteer.
26:50Voluntold. I'm going to meet you halfway on that, because there's something I wanted to add on the incentives piece, which
26:55comes from the institutions, from education policy.
27:01It's a number of things. Media literacy is essential. AI literacy is essential.
27:06But I also think that if we're going to be the good guys, we have to design solutions that are
27:13also realistic.
27:14And the problem is that what made social media so effective is its leaning into the most fundamental
27:23aspects of what humans desire, right?
27:25Ego and vanity, inclusion, power, wealth.
27:30If you come to somebody and just logic your way through it and say, guys, I just think about your
27:36grandchildren.
27:37Think about the future. Like you get them in the moment, maybe.
27:39And then they go home to their busy lives and that's it.
27:43And so I think if we're going to create solutions at an institutional level, we also have to think about,
27:48well, we have to fight fire with fire, right?
27:50This notion that somebody is going to take a second or three seconds to be like, what was that thing
27:55I learned about?
27:55What do I have to be responsible for and think about? These platforms are literally designed so that you don't
28:00have that moment.
28:01It's about impulsive reaction, grabbing the back of your brain.
28:04And so how do you get someone to be intrinsically, inherently motivated in the ways that mimic vanity, ego, inclusion,
28:12power, wealth?
28:13Because then we might have a chance. So I just wanted to add that piece.
28:17That's a very important piece. So does anyone have an idea? Is it more validation?
28:21Is it that we have to play more like the bad guys, but with good incentives instead?
28:28I really want to challenge you on this.
28:31So your question was about systemic solutions as opposed to the individual.
28:35I guess I have a few thoughts on that. The first is that the individual and the system are intertwined.
28:41To create systemic solutions, we obviously have to create solutions as individuals, either by advocating for different systems,
28:48or voting for different systems. So I think it's intertwined.
28:51And I think you can't like fully just be like, oh, it's just the system. Like we're so screwed over.
28:55There's no hope.
28:55So I think that's the danger of always pointing at the system: not talking about how it's intertwined with
29:02the individual.
29:04But it's challenging in terms of solutions, because I think early on there was this hope.
29:09There was a hope among psychology researchers like we could do research about how to improve social media platforms.
29:14And then we could give this information to the social media platforms and then they'd be motivated to change.
29:18But I think that's really changed with Elon Musk buying Twitter and with Meta disabling fact checking and everything
29:26that's gone on politically.
29:27I don't think there's as much hope for these big companies to do everything.
29:32We've seen some hope with, you know, emerging social media platforms like Bluesky, which are able to implement,
29:40you know, different structures.
29:42But those have trouble getting a broad enough user base. So that's challenging.
29:47So I'm going to bring up a word that, you know, people don't like:
29:51regulation, or changing policy.
29:54I think it's worth thinking through, you know, what we've done to try to regulate tech and social media
30:02platforms.
30:02And I think a lot of the early regulation of social media in the United States failed, in that,
30:08you know, there essentially was no regulation.
30:10There were a ton of congressional hearings where they brought in like, you know, Mark Zuckerberg to testify.
30:16But nothing really happened in the United States, which was a little disappointing.
30:20I think in the European Union, you've seen actually some sensible regulations emerge.
30:25Europe is always a little bit ahead in terms of regulating tech.
30:28For instance, they passed the Digital Services Act, which holds tech companies accountable for a number of things.
30:36One useful thing they've actually been doing, which has impacted me as a researcher,
30:41is they actually mandate social media companies to provide data to researchers like myself so we can understand the spread
30:49of misinformation on these platforms.
30:51The difficulty has been enforcement of these regulations, like Twitter does not give up its data,
30:57even though it's legally obligated to and has been fined.
30:59But I've gotten access to LinkedIn data, for instance, as a result of these regulations.
31:04So I think thinking through regulation and policy is one solution that we often don't like to think about, but
31:11maybe should.
31:11Very unpopular because it stifles innovation, right?
31:14That's the answer to that.
31:16But there has to be some sort of policy change.
31:20The way I think about it, when I think about sort of Darwinian laws, is that humans have
31:28no problem changing whatsoever.
31:30We're agile.
31:32It's in our nature.
31:32It's why we've been around for thousands of years, right?
31:36But the institutions we create are very resilient.
31:40The good ones and the bad ones.
31:42Those are the ones that endure.
31:43So if we can figure out how the humans inside these institutions can recalibrate, then maybe we'll have a
31:50solution.
31:51So the question I have for all of you, again, whoever wants to jump in, you talk about sort of
31:58a healthier information environment.
32:02We've talked about incentives and education and literacy.
32:05What does that really look like, you know?
32:08And Mario, I'm going to look to you and to you, Katie.
32:11Maybe Katie, you want to go first and then Mario.
32:14What is a healthy, like what does it look like?
32:16Is it sort of the same as a physical healthy environment?
32:20I'll talk about how we can improve the one we have.
32:24So we have this community notes model at most of the platforms now.
32:28It started with X.
32:29It's moved to Meta platforms.
32:31TikTok is introducing footnotes and there's a program with YouTube as well.
32:35And I think if you're paying attention from a high level, you think that that excludes the work of fact
32:41checking journalists and fact check organizations.
32:43But it really doesn't.
32:45A lot of the effective notes that end up getting posted,
32:49and I know this from a study that came out this year,
32:52a third of them actually include links to fact-checking organizations.
32:56So we're still helping people through the morass of social media information online.
33:03We're just not getting support for it anymore.
33:05So I think it's really important to continue to support the work of this kind of journalism.
33:10It exists in countries all around the world.
33:13Because it is still really underpinning these community notes models that we have along with Wikipedia articles and then links
33:22to other X posts.
33:23So I'm referring to a study of community notes posts on X that came out this year.
33:29So I think we need to recognize that this work to fact check the internet is still happening.
33:35And working with those professionals still needs to be incorporated into platform solutions.
33:41And I think what you're saying is, it's not just happening.
33:45It's happening, and it's also successful to some degree.
33:47So we just need more of it.
33:49So is it a matter of quantity?
33:51We just need more?
33:54Sure.
33:55That sounds good to me.
33:57But I think we need to be more honest about where the work of correcting misinformation is happening.
34:04And so I think there's this impression that, oh, it's the work of everyday people who are volunteering.
34:11There's piles of community notes that never get posted because they're just silly.
34:16They're attacking things that are opinions, predictions, and also just things that nobody really cared about.
34:22And there's another stack of notes, again by well-meaning people on the internet perhaps, who don't know
34:29how to really do it.
34:30They don't know how to communicate it effectively.
34:32They're not using good sources, so they don't get posted.
34:34So I think we need to collaborate with people who have a lot of experience recognizing what a checkable problematic
34:40claim is and figuring out how to deliver an authoritative source for it quickly.
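The measurement behind the study Katie cites can be sketched simply, with an invented domain list and notes (not the study's data or method): compute the share of posted notes that link to a fact-checking organization.

```python
# Hedged sketch: what fraction of notes cite a fact-checking site?
import re

FACT_CHECK_DOMAINS = {"politifact.com", "snopes.com", "factcheck.org"}

def cites_fact_checker(note_text):
    # Extract linked domains and check them against the (assumed) list.
    domains = re.findall(r"https?://(?:www\.)?([^/\s]+)", note_text)
    return any(d in FACT_CHECK_DOMAINS for d in domains)

notes = [  # invented example notes
    "Misleading. See https://www.politifact.com/factchecks/example",
    "Context: the clip is from 2019. https://en.wikipedia.org/wiki/Example",
    "False per https://snopes.com/fact-check/example",
]
share = sum(cites_fact_checker(n) for n in notes) / len(notes)
print(f"{share:.0%} of these notes cite a fact-checking organization")  # 67%
```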
34:44Yeah.
34:45Sunny, before I go to you, Mario, I know I promised, but Sunny, I know you had something to say
34:50about that.
34:50Does it go back to your formula for education?
34:54Yeah.
34:54I really support Katie's point.
34:57I think that when we think about a healthy system, it's always a collaborative effort.
35:02We do have to have professional journalists who can really anchor fact-checking.
35:07But I also think users who post content, like influencers, should take responsibility
35:14as well.
35:15So maybe when they post content, they can think about how to train like a journalist, using accuracy or other
35:23types of high-quality content as a benchmark for their own.
35:27On the individual side, I don't think that when we talk about digital literacy we are
35:35putting all the responsibility on individuals.
35:38I think it's the opposite.
35:40It puts responsibility on education departments, on government, and on all those organizations
35:47to provide resources, space, training, and time for people to do digital literacy training, even the platforms themselves.
35:56So that's how I see digital literacy training.
36:00And policy too.
36:02When we want a healthy ecosystem, we have professionals.
36:06We have good content creators.
36:09We have users who can smartly discern information.
36:13And the platform itself.
36:15Currently, I think platforms use engagement as the only incentive, which totally makes sense for them.
36:23But besides engagement, how do we think about social value?
36:28The things we value most: for some communities, maybe it's their family.
36:34For some communities, maybe it's democracy.
36:36For some, maybe it's immigration, it's jobs, it's water.
36:41So how can we embed all those different values, aligned, again, together with engagement,
36:48so we can have an information ecosystem that can really support diverse needs and values?
36:57Thank you, Sunny.
36:58And again, all roads lead back to education.
37:03And the onus falls obviously on the institutions, but especially on us as individuals.
37:08So when we talk, you know, going back to the title of this panel, the information gods.
37:12I initially thought, oh, it's the platforms.
37:15Those are the new information gods.
37:17And especially those who have bought or run these platforms, right?
37:21But that may not be the case.
37:23I think the individual agency and power is not to be dismissed.
37:28So what every person here does in their little smart device actually matters quite a lot.
37:35So Mario, I want to go back to you and to talk about this.
37:38What does a healthy system look like when it comes to information?
37:43So there's two sides to it, because I want to piggyback on both of these points, which is,
37:47I think we have to remember that there is a funnel of responsibility that big tech companies have.
37:53And what they've tried to do is move the responsibility further and further away from the core.
37:58In a healthy ecosystem, the best, healthiest ecosystem, it would be them, before you even open it,
38:04when you first open the platform, them filtering and giving you more context,
38:08giving you more agency, more friction.
38:10But, you know, God forbid, because that would limit clicks and views and whatnot.
38:13And so they move it further down.
38:15They move away from fact checking.
38:16Oh, we're going to now add community notes.
38:18Like just move it further and further away.
38:20And I just, first of all, I want everybody to remember this analogy.
38:24These platforms are like a boat that has a hole in it, with water filling it up.
38:29And instead of just plugging the holes, they hire 10,000 people to scoop out water as best as they
38:33can,
38:34because they make money off your feet being wet.
38:36And so that should not be the way it is.
38:38When we talk about health, though, the second kind of analogy I just want to leave you with,
38:43because we should accept that if we obsess about how we feed our bodies,
38:48why are we not obsessing about how we feed our minds?
38:51We have nutrition science.
38:52We have no equivalent for our minds.
38:55What is infotrition science?
38:56What does it mean to be info-obese?
38:58How do you break that down?
39:00How do you become aware of that?
39:01And I think a healthy ecosystem is where there's a science that exists for it,
39:05because imagine we lived in a world with no nutrition science, no ingredient labels, no nutrition labels.
39:10None of that.
39:11We'd all look like the people in Wall-E.
39:13But we're getting there now with our minds.
39:15And so we need that so that people can be equipped.
39:17We have the context.
39:18And by the way, once we have the metrics of health around media and interactions,
39:23we can start building regulation with teeth so that we don't just drag people in front of Congress,
39:28tell sad stories, and then keep doing the same thing over and over again.
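A toy "information nutrition label" in the spirit of Mario's analogy, with invented categories and entries: summarize a day's media diet the way a food label summarizes a meal.

```python
# Hypothetical sketch: an "infotrition" label for one day's media diet.
from collections import Counter

media_diet = [  # (category, minutes); all invented
    ("long_form_journalism", 20),
    ("short_video", 95),
    ("peer_reviewed_research", 10),
    ("outrage_commentary", 40),
]

def nutrition_label(diet):
    # Aggregate minutes per category and express each as a share of the day.
    total = sum(minutes for _, minutes in diet)
    label = Counter()
    for category, minutes in diet:
        label[category] += minutes
    return {cat: f"{minutes} min ({minutes / total:.0%})"
            for cat, minutes in label.most_common()}

for category, line in nutrition_label(media_diet).items():
    print(f"{category:>24}: {line}")
```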
39:32So the answer is not no devices if you're in school, because they're going to figure it out, right?
39:38My teenagers are very resourceful.
39:41They still text me from class.
39:42So we're sort of in our final round here.
39:48I'd like to maybe ask each of you to reflect on this.
39:53Is the future, and when I say future, I know future is not a distant thing.
39:58It's the next 18 months.
40:00Does that future look glass half full or glass half empty?
40:04Do we have the incentives, social, you know, incentives, psychology, all of the things that you guys talked about?
40:13Do we have the will?
40:14Do we have the right, you know, validation to just go for changing the systems?
40:20Or are we going to be here a year from now talking about disinformation spreading at an alarming rate?
40:27Whoever wants to jump in first, but, you know, that was the fun question I had for you, not skincare.
40:36I'm at half empty.
40:38I don't see a lot of proactive measures being taken by people who have power.
40:44And so I actually wanted to go last to answer that question to see if I could add any more
40:48liquid to my glass.
40:50So I'm still listening for reasons to be more optimistic.
40:53From my vantage point, I'm a very optimistic person.
40:57So I really think we have to be very serious about this.
41:00And I don't see that. I don't see that culture changing at this point.
41:05You didn't disappoint me as a recovering journalist myself.
41:09You did not disappoint me, Katie.
41:11Steve.
41:13I guess I'll provide the more optimistic perspective.
41:16I'm glass half full. And part of that is because I am.
41:20That's the optimistic side, right? OK.
41:22I think that's because I'm a psychologist who studies the negativity bias and who studies, you know, how people pay more
41:28attention to negativity than positivity.
41:30Negativity goes viral on social media for a reason.
41:33It's because of our negativity bias.
41:34We are always going to be talking about the problems in society because we're wired to pay attention to negative
41:40things.
41:41So, yeah, for the immediate future, especially with, you know, the decline of fact checking, I'm concerned about the misinformation issue.
41:48As for our long-term future as a species,
41:54if you just look at human progress from a historical perspective, we're doing, you know, a lot better than
42:00we were in the past.
42:01And many things have been improving.
42:04So I think eventually we might figure things out, even though it might be a rough near future.
42:11But our minds are wired to pay attention to negativity.
42:14So it's, you know, it's sometimes good to be aware of that, that we will always be focused on the
42:19problems.
42:20Thank you so much. Sunny.
42:22Yeah, my answer is that I think it's half full.
42:27I'm actually really excited for the next 18 months, for a couple of reasons.
42:33First, when we have these conversations happening right now, and we have all these audiences here paying
42:38attention and really thinking about these issues,
42:41I think that really indicates that collectively we can find better solutions.
42:48And second, I'm actually super excited about the tools and the potential of AI.
42:52Think about content moderation, and think about all those different kinds of harms on social media.
42:58I think AI can potentially provide really effective solutions at the platform level to do content moderation and
43:07reduce harms.
43:08Also, I think AI is a really credible education tool; we can leverage those tools to make it easier for individuals
43:16to discern information.
43:18And they'd have AI tools and agents working for them, not for Google, not for Meta, but as their own individual
43:25empowerment tools.
43:26So that's why I'm really excited.
43:28That's exciting. Mario, you can land the plane.
43:31I'm going to meet you in the middle. I am glass half empty for the next 18 months and glass half
43:37full after that,
43:38because I think we are going to face imminent reality collapse.
43:41You're already seeing it today where you don't know which resume is real.
43:44You don't know who actually did the essay. You don't know which comment is real or credible.
43:48Everything is dissolving with the power of these tools where you don't have provenance.
43:51This will force us for once to finally start saying, hey, who are you?
43:56How did you get here? Did you do any work? Do you know anything?
44:00And after, you know, 18 months from now, we're going to have to move forward to a path where you
44:04need proof of human knowledge.
44:05So I think that's where we're going. That makes me optimistic because we're going to be forced.
44:09Katie, did you change your mind?
44:11I'm getting there. I just wanted to say, that's so true, that things simply cannot stand as they are in some
44:17ways.
44:18Thank you all so much. And thank you for your attention. Have a good one.
44:21Thank you. Thank you.
44:23Thank you.