Generation Playlist

Transcription
00:04Welcome. Maurice Levy called me in January to ask me to come here again and do a talk on AI.
00:14He really just needed a warm up act for Elon Musk.
00:17And you're not Elon Musk.
00:19And I'm not a warm up act.
00:21But, you know, when I looked at the program, there were hundreds of good panels around technology and particularly AI.
00:29And all focused on the tech.
00:31And my response was, glad to come, but I think it's important that we have a group that talks about
00:36the Viva.
00:41Because this is all about technology in service of humanity.
00:41And that got me to really think about all the folks I've met and worked with around the world who
00:48are really focused.
00:49You know, like we are at Nielsen, on people and on humanity.
00:52And that's why I'm really excited to have two really good friends here.
00:55And I want them to give a little bit of their personal story.
00:58So, Gillian, as she mentioned, you are the editor-at-large of the Financial Times.
01:03And an author and an anthropologist.
01:05And on top of that, you took another job.
01:07You kept all those jobs and you took another job to be the provost or president of King's College.
01:14Why?
01:15And what's your passion in doing all of that?
01:20Well, thank you very much indeed.
01:21And it's fantastic to be here.
01:23It really is with so many people.
01:25In essence, I was trained as a cultural anthropologist.
01:28I'm fascinated by what makes human groups tick.
01:31I then became a journalist and wrote about the world of finance and business.
01:35And what I'm doing now for half my time is leading King's College in Cambridge and really trying to find
01:42ways to champion and celebrate the amazing youth that are going to be the leaders of tomorrow.
01:47And finding ways to become the best versions of themselves, to build the best version of society we can in
01:54this crazy, crazy world, which is being ripped apart, not just by geopolitical problems and political and social problems, but
02:03also by really rapid tech innovation that's changing how we operate as social groups in ways that we often don't
02:11think about.
02:13Well, thank you for being here.
02:14I want to come deeper.
02:15Jerome, of course, I worked with a long time ago.
02:18You started Vivisimo, which was early in semantic search, helped build Watson in the early days, did a number
02:26of things.
02:27And I was called as one of your references when you were hired to run the AI group at Meta,
02:33which I thought was the coolest job ever to run AI on one of the great platforms.
02:37And it was an awesome job and you did great work there, but you left.
02:42You left Meta and you decided to focus on Sizzle and education.
02:48Why did you do that?
02:51Well, I'm passionate about AI, you know, I'm a technologist, but I'm also passionate about the human.
02:56And at Meta, it was an amazing experience, you know, tremendous talent there, tremendous impact.
03:03But what I wanted to figure out is how you take all that talent, that impact, that power that AI
03:07is gaining, and align it with what humans want to do.
03:11And I thought actually learning is something everybody wants to do.
03:14And how do we make that better, faster, more engaging?
03:18And so take what I learned there and apply it to something that I feel people will feel really strongly
03:23about.
03:24Well, why do we need to learn? Because we can now go to Claude or Pi or ChatGPT, which give
03:28us the answer to anything.
03:30And what is unique about your approach to learning?
03:34Well, our idea is to learn how people learn so that we can make that experience more engaging.
03:40You know, think of, you know, my team at Meta was in charge of creating algorithms that you use when
03:45you go on Reels or Instagram or, you know, or Facebook and make that very engaging.
03:50We know how to do that. We know, from every interaction, how to figure out what's going to make you come back
03:55next.
03:56What we want to do with Sizzle is apply that to learning.
03:59The challenge is that learning today is kind of boring, you know, and people drop out very quickly.
04:03How can we understand what makes you tick, what makes you come back, and what makes you learn?
04:08Well, let's go back to what people do need to learn.
04:10I think it's important if they're learning that they're learning something that we can all trust so that we have
04:16a common view.
04:17At Nielsen, we care a lot about that. So we measure everything that people read and listen to.
04:24People spend about nine and a half hours a day on media. That hasn't changed much.
04:29But, as we talked about before, if I say AI is after iPhone and BI is before
04:36iPhone,
04:36then before iPhone, about two-thirds of media was consumed in groups.
04:40People had a common view of news. They had a common set of facts.
04:44After iPhone, every year, people have become more individual.
04:48Today, people only spend about 9% of their media time doing anything in a group of more than themselves.
04:54And that's almost entirely sports.
04:57So it means we've moved into a world where people are really living in their own bubble.
05:03Sure.
05:04And I think that changes the ability for society to be society, to have a common view.
05:10I'm looking forward to the conversation.
05:10Gillian, you've studied this in so many fields. How do you frame what's going on with trust today?
05:15Well, I think there are two really critical things going on today,
05:19because we're living in the world of AI after iPhone, which people don't think about enough.
05:26The first is a pattern of trust.
05:30Historically, anthropologists like me said there were two types of trust that glued social groups together.
05:37Horizontal trust, which is when you trust your peer group, your friends, your squad, your posse.
05:42And then vertical trust in authority figures and institutions.
05:48And that second vertical trust happens when groups become big.
05:52What has happened with this for the first time in history is we've created the ability to have horizontal trust,
06:00peer-to-peer trust on a massive scale.
06:03You never had that before.
06:04It's distributed trust.
06:06It's what drives so much of our modern life and our apps.
06:11And at the same time, we've also had the rise of what I call Gen P, generation pick-a-mix
06:17or generation playlist.
06:19And by that I mean, a hundred years ago, if you wanted to listen to music,
06:24the only way to do it was to turn on the radio on someone else's schedule
06:29or to buy a vinyl record that somebody else had assembled.
06:34Today, there is nobody in this audience, I think, who would settle for that.
06:39Because everyone wants to customize their music and listen to it when they want, how they want,
06:45exactly where they want, with whatever selection they want, in their own version of the matrix, if you like.
06:51And we're extending that sense of consumer choice, extreme customization, to every aspect of our lives,
06:58be it work culture, media, politics, education, you name it.
07:05So we have a shift in trust patterns, we have the rise of Gen P,
07:09and that is changing how we interact in some really profound ways that are both really good and really bad.
07:17But either way, you cannot turn back the clock and undo it.
07:21And that's where Jerome is trying to work with that, to look at education.
07:25That's right. I am a big believer in Gen P, okay?
07:29And I do think people choosing and embracing their own education or learning has huge potential.
07:34Now you're going to push back on me, which is like, are they learning the right thing?
07:38But I think it's more important that they believe, you know, and they are interested and motivated by it,
07:42and they can pick it, and it's talking to them, than someone feeding it to them from the top, you
07:48know.
07:48Absolutely. I think if you ask this audience today, you know, hands up,
07:51who would like to go back to a world of just vinyl records,
07:54or just somebody in a school telling you what to learn on their schedule?
08:00Most people would say, absolutely not.
08:02We're addicted to customization, and it feels empowering and liberating.
08:06And the fact we're increasingly asking our peer group for advice, not authority figures,
08:12also feels exciting and liberating.
08:15The problem, though, is you put those two trends together,
08:18and it's a recipe for echo chambers, polarization, false information, cyber flash mobs,
08:27ideas going viral and then dying.
08:28And the question really is, if you're trying to teach people to be wise,
08:33if you're trying to do what I'm doing at King's, trying to teach the next generation to make good choices,
08:39how do you do that when they're looking to each other and just choosing on their own agenda what to
08:45study,
08:45as opposed to getting something given to them from above?
08:49Well, and I think it's going to go well beyond the phone, because AI then, you know,
08:54we're going to be able to communicate with everything, with our appliances, with our refrigerators,
09:00with our cars, you know, it's going to be completely immersive around us.
09:05So, as that happens, how are people going to be able to tell what is actually horizontal trust from
09:13people they trust,
09:13versus something that's delivered to them by an algorithm controlled by a force that's not even human?
09:20Yeah, I think we need to talk a bit about the algorithm, right?
09:23And what you talk about in terms of, you know, people being more polarized, having a more narrow
09:30view, doesn't actually have to be the case.
09:32The key is to understand what is the objective behind the algorithm that feeds them that customized content.
09:37Because actually, we talk about customization, but it's actually fed customization.
09:42You know, people just watch things and then we give them the next thing, right?
09:45What's the algorithm behind it?
09:47Today, a lot of these algorithms are based on engagement.
09:50That means they try to get you to come back. That's their sole objective.
09:54That objective has a bad feedback cycle. That's what I'm trying to change when I do Sizzle.
09:59You know, can you design an objective so that the person will be more informed, the person will be more
10:04engaged?
10:04Not just like, oh, you know, I'm just going to watch video, but I can actually answer questions about it.
10:10I'm more informed about it.
10:12So, the algorithm is very important, the objective is very important, and the transparency around it is very important.
10:17But the image I use for what you're trying to do is you've got a toddler in a kitchen faced
10:24with a plate of spinach and a plate of cookies.
10:28And if you leave the toddler to their own devices, they'll always choose the cookies and eat more and more
10:32cookies.
10:33You're trying to train them to choose spinach.
10:36Yes, I'm trying to make the spinach actually taste so good that they'll pick it.
10:43Okay, so that's spinach with sugar coating on top.
10:43No, it's actually cooked very well. My wife is in the audience; she makes amazing spinach, okay.
10:47And my kids actually eat it. Better than cookies.
10:50You see, the key issue is that if you go about a hundred years ago, people would sit in the
10:54kitchen together, eat exactly the same food, no matter what, and that bound them together.
10:59Now, it's almost impossible to get a family meal where everyone eats the same thing.
11:04They all want to choose their own thing.
11:05And the question really is, can we actually reorientate those choices to be better?
11:10And tech companies have spent the last decade or two essentially training us just to have more and more and
11:15more cookies, because that's what we want.
11:17And you're trying to break that by teaching them about the joys of spinach.
11:20And that's fantastic. Good luck.
11:22I just wish more were doing that same thing.
11:26Is that really why you left Meta? Because you wanted us all to eat spinach instead of cookies?
11:29You actually think we like spinach?
11:30No, no, I do feel it's about making spinach taste better, you know, and learning to do that.
11:36You know, what's your spinach? What's yours, you know?
11:38I mean, the question that really concerns me is if you look at the world today, what you see is
11:42this incredible polarization in almost every aspect of, you know, where we are.
11:49And people in schools today teach kids about cyber hygiene, cyber bullying, you know, cyber security.
11:57We're not telling people enough about cyber tribalism and how we're being sucked into these echo chambers.
12:03And for me, the question is, when you look at what's happening in the education system, how do you actually
12:08train people to have curious minds and to want to really collide with the unexpected?
12:14And above all else, to choose to listen to alternative points of view.
12:19So how do the kids with your platform actually, how do you persuade them to choose to study things they
12:25may not want to study?
12:26First, I mean, I think the research is a bit more mixed.
12:31Because actually, if you think of like the real world, the real world is very biased, right?
12:35If you live in the middle of Alabama, you're not going to have a circle very, very diverse around you,
12:41right?
12:41That's the same if you are like in the middle of New York.
12:43And so, in reality, life tends to be very biased and have echo chambers without digitalization.
12:49Actually, I think, you know, digital platforms are an opportunity to create exposure to much wider views.
12:55The problem sometimes is if they're optimized just for engagement, people tend to engage with things that they either completely
13:02disapprove of or really like, you know.
13:04And so that's what we need to break, right?
13:06Our objective is to make you informed and form a multiple point of view.
13:10So you can tweak the algorithm to privilege content that opens up your mind, for example.
13:16So I think there's more hope than you suggest.
13:18And I think the current, you know, status is a bit of a function of what has been done so
13:23far.
13:24Well, but then I think we have to say, where is the authority and responsibility in that?
13:29Because if we're really eroding any sense of vertical trust, we did trust professors, teachers, in my world, journalists.
13:40What you're now saying is it's the platforms because the platforms control the algorithms.
13:46So have we really decided that we're going to cede our vertical trust to a handful of global platforms?
13:55I think we need, I think it's a great question.
13:57And I think we need openness and we need transparency.
14:00That means this algorithm has a lot of impact.
14:02You know, I calculated once at Meta that our algorithm, you know, decided the fate of a thousand
14:07lives per day.
14:08You know, the number of times people will actually watch things.
14:11Amazing impact.
14:12With this kind of impact, you need accountability, you need transparency.
14:15So there are experts who understand that.
14:17And you need to know what you're optimizing for.
14:18That's right.
14:19The objective is really, really important.
14:22But I guess the other question I really have, which is so relevant to this audience, is in the first
14:28decade or two of the internet and social media, what we discovered is that if you are tapping into the
14:35worst aspects of human nature, you will have an amazing business model.
14:40Because it is so easy to make the worst aspects of digital life addictive.
14:46Because guess what?
14:47We all love those cookies.
14:50What you're trying to do is try and tap into the best aspects of human nature and assume that actually
14:56out there, there are a whole bunch of people who want to self-educate themselves to be wise and curious
15:02and engage in Socratic debate.
15:04So I guess the question really is, do you believe that there are enough people today who actually want to
15:11aspire to the better aspects of their nature to have a really effective business model tapping into that?
15:18Because if you believe that, then that matters to the entire audience right now as they look in their own
15:24businesses that they're trying to develop.
15:25I believe that we can do a much better job. Today, a lot of like practical learning, if you go,
15:32you know, it tends to be kind of boring, right?
15:33Can you make learning as good as the best teacher you've ever had in your life, but every single second
15:39of it like this, right?
15:40I think we can do that with technology. And so what I want to do is make learning and this
15:44kind of like more inspiring aspect easier for people to get.
15:49Let's raise it even beyond learning, back to trust, because you want people to learn things we trust.
15:57Where communities have created trust is that they create standards.
16:00So we are able to have a global financial market because we have generally accepted accounting principles that we all
16:07trust.
16:07We trust that's revenue, that's cost, that's cash flow.
16:11Ratings businesses are this way. You have a trust of an audience.
16:16In so many aspects, trust is under attack if it's really going to completely move from any sense of vertical
16:24trust.
16:24If it's horizontal, how do groups come together to create commonly accepted trust principles in everything that you're teaching?
16:33How do we create commonly accepted trust principles in everything we're seeing in news so that we can know that
16:39it's true?
16:40First, it's extremely complicated.
16:43And how do you get the algorithms on that?
16:44I'll tell you, when I was at Meta, I had a team trying to figure out if we could judge
16:48news, you know,
16:49and decide whether the news was trustworthy or not with an algorithm.
16:53So you're kind of trying to judge me, in a way.
16:55Yeah.
16:56Yes, exactly.
16:57But you could at least know whether she had an editorial process, followed standards, took bribes, fact-checked.
17:03We actually never put the system in production.
17:06This is too hard.
17:07So what do we do?
17:09In that case, we actually use fact checkers, but even that was complicated.
17:14In my case, I think it's easier.
17:16We create models, and the idea is to have these models, have benchmarks, so you can see how they are
17:22responding.
17:22Right?
17:23And the reality today is that these models have actually done a pretty good job at being kind of middle
17:27of the road.
17:28You know, they tend to be non-controversial, sometimes so much so that people criticize them.
17:33So I think the solution there is to be very open, to test them, to put the test in the
17:37open, to make people accountable for this.
17:40Okay, but...
17:40Which is harder for humans, by the way.
17:42Well, and it's hard depending on what you're teaching.
17:44So I'm going to go to an example.
17:45So I get that in mathematics, there are rules, there are axioms.
17:49You're starting to teach humanities.
17:51You're teaching the history of Israel and Palestine.
17:55There's more than one view on that history.
17:56How do you know which view to teach?
17:59And then I think, you know, Gillian had an interesting experience yesterday.
18:02I want her to share.
18:03I think you should ask the former president of Harvard how she feels about that one, you know.
18:08Don't give a legal answer, that's for sure.
18:10I think it's hard for humans even to have this kind of like very neutral view, right?
18:16So...
18:16But how do we use the algorithms to actually create diverse views versus very polarizing views?
18:25Today, actually, we train them to do that.
18:26And it's funny because, I don't know if you've seen the news in the past few months,
18:30sometimes we train them so much that they rewrite history, right?
18:33They think the founding fathers were multi-ethnic or multi-gender, right?
18:37So that's the model right now.
18:39I'm not saying it's perfect.
18:40There's a lot of work to do.
18:41But I think the future will be these models that kind of give you multiple views,
18:45don't take a lot of risk.
18:46A little bit like, I would say, a provost has to be embracing the views of all the faculty, you
18:51know.
18:51That's the way they're going to work out.
18:53Yeah, I mean, can I say this?
18:54Because...
18:55So we had Al Gore come and speak at King's College in Cambridge ten days ago.
19:00He gave the sermon.
19:01And he pointed out that we're living today through a moral moment for the younger generation,
19:08a bit like the 1960s, where the horrors of geopolitics have awakened a real sense of conscience
19:16and activism and engagement amongst young people.
19:19And that is fantastic.
19:21We should absolutely celebrate that.
19:23Because, you know, when I was at college, you know, it was a Thatcherite, Reaganite years
19:28when actually there was a real sense of political apathy.
19:30So it is fantastic that people care so much today that they want to get out and protest,
19:35they want to express their views, whether it's on climate change, geopolitics or anything else.
19:39The challenge is how do you then create a thoughtful and curious conversation that is open to listening to and learning
19:47from each other
19:48and recognizing that no one side ever has a complete monopoly on moral outrage or the truth,
19:56and we have to have empathy for other points of view and learn from each other.
20:01That is really hard in a world of echo chambers and polarization and all of these patterns we've been talking
20:08about.
20:09I actually believe that it makes face-to-face education and meetings and groups of the sort we're having here
20:16right now
20:17or the sort you have in universities incredibly important
20:20because being forced to collide with the unexpected and other people's points of view is critical.
20:25And so to go back to my question about, and that's what we're trying to do, you know, we've had
20:29big protests in Cambridge,
20:32there have been big protests right across universities, and that is, of course, challenging to navigate,
20:38but it does show the value of this amazing moral moment that we should all applaud.
20:44And to go back to my question, if you can find a way to take the better angels of our
20:50nature,
20:51make a business model out of it, and actually champion this moral moment for the next generation,
20:57that is fantastic. Good luck.
20:59I don't think Facebook did that, I hate to say.
21:02You can show them how you can get it right.
21:05I think we can do better, but I don't think we're going to solve all the problems.
21:09Actually, one of the goals of Sizzle is not to replace education, by the way.
21:11I think it's to free, you know, professors and teachers and students to spend their time at school doing more
21:17interesting stuff
21:18than just sitting in lectures, you know, like passively.
21:21Maybe you guys should be actually learning rather than just sitting here.
21:24So, you know, that's the idea. I think the time in real life, you know, with a teacher, with a
21:30classroom,
21:31will still be very important.
21:32But can I say one other thing, which is, I think everyone in the audience should ask themselves,
21:35when you go online, how do you judge whether information is trustworthy or not?
21:42Because, you know, my generation, you know, back in the dark ages,
21:45basically trusted what was in a newspaper sometimes, because it was an authority figure,
21:51or we were trained to go out and verify or fact check in a kind of logical way.
21:56And there's some amazing work done by anthropologists, digital anthropologists,
22:01who are out there studying how real life human beings behave in the wild, in cyberspace.
22:07And this research from Jigsaw has shown that actually Gen Z tends to judge whether or not information is true
22:15or worth tracking or watching, not through authority figures or by going out and verifying,
22:22but by essentially sensing like a weather vane all the posts and comments,
22:27particularly the comments from their peer group, and then smelling the atmosphere and the environment.
22:33That's a very different model.
22:35It means that tech companies or even media companies who are trying to tell people,
22:39this is how you tell whether information is true, may not even be in this place where Gen Z actually
22:44wants to listen.
22:45They might be missing the point.
22:47So again, I'm really curious about how you're going to persuade people to use your app.
22:52I don't know. My memory of the way our generation decided what was true or not when we
22:59were young
23:00is different from yours.
23:00I think it was a lot more siloed, we had access to a lot less information,
23:04and we just trusted people to do it, right?
23:06I don't think it was much better than today.
23:09So I think this is a skill that people are developing.
23:11And I think actually the current generation, in my opinion, is smarter at sifting through the noise than we were.
23:18And I think, I mean, I trust their judgment, my kids, for example, more than mine when I was their
23:24age.
23:24Do you trust that, David?
23:26Do you think your kids are better at judging things than you?
23:30No, no. I think the challenge is to have enough transparency on what led to a conclusion.
23:38What I like about your model is you're showing how to get to the answer, not the answer.
23:42But I don't think we have enough transparency.
23:45And so much of what we're now relying on increasingly, our LLMs, to help us make decisions,
23:51they're not so transparent.
23:53And you're right, we can trust that they weigh, you know, other points of view.
23:59We can trust that they adhere to a set of trust standards.
24:03But we don't have standards to hold them to.
24:06And I don't think we have openness to know that they're actually doing it.
24:09So my question is, the more our kids and the more the next generation depends on LLMs to make decisions
24:17in their lives,
24:17how are we, again, going to make sure the rest of us hold them accountable?
24:23And how are we going to get enough openness and transparency to know where the weights are?
24:28No, I think it's a good question.
24:30And I think the industry is still learning, right?
24:32There is just, things are changing so fast.
24:35My answer to this, and we discussed it, I believe in open models.
24:38I believe, like, you know, releasing the weights, letting people test it, running benchmarks,
24:43understanding what data it was trained on, that's the way to go.
24:46They will be so important, right, that we can't just leave that to a handful of people.
24:50So I think openness, open source is the way to go.
24:55Well, what does that mean for all the businesses that are here today?
24:57Because I think the other thing is, we're almost worshipping at this church of the LLM,
25:04that it's, you know, there's going to be a handful of master algorithms.
25:09Where's the economic value in the base model?
25:12And where's the value in the applications on top of the model?
25:15And is that going to be different in AI than in prior technology shifts?
25:20Well, it depends who you ask.
25:22I'm asking you.
25:23My bet is, you know, I think the vertical application, you know, understanding the user,
25:28the human is the most important part of this.
25:30So for us at Sizzle, we want to learn how people learn, and as they interact with the model,
25:35we understand their patterns, we understand what makes them tick, what makes them come back.
25:38That value, I think, will be owned by the application, not the model itself.
25:44And hopefully, there will be a lot of really interesting applications understanding humans
25:47and making things work for them.
25:48I mean, to, you know, middle-aged people like myself, the idea of being taught by a bot sounds scary.
25:53But then again, the Alphabet research shows, you know, if you do an anthropology of Gen Z,
25:59they actually prefer personalized bots for medical information often over humans.
26:05And if you look at Gen Z, they're getting used to being interviewed by bots doing, you know,
26:10hiring interviews, AI interviews, in a way that's starting to seem normal.
26:15You know, people say the same thing about books.
26:17When the printing press came out, people said, hey, you know, we don't want people to learn from books.
26:21As we expected, we ran out of time.
26:23Rather, I think we're getting pushed off the stage by Elon Musk.
26:26Let's face it, we're the warm-up act for Elon Musk.
26:28I appreciate all you're doing. Thank you very much.
26:30Thank you.