The Societal Impact of AI
Transcription
00:00As we explore the use of tech for good, it's crucial to examine how AI influences our society, including the
00:10effects on the future of work, education, and governance.
00:13From revolutionizing industries to reshaping daily lives, AI holds immense potential to address societal changes and enhance human well-being.
00:24Join us as we delve into the complexities of AI, social impact, and its implications for the future.
00:33Join me in welcoming a conversation between James Manyika, SVP Research, Technology, and Society at Google and Alphabet, and Anne
00:44Bouverot, Chairperson at École Normale Supérieure.
01:15Well, thank you for coming to this session, and it's wonderful to see all of you here.
01:20My friend Anne and I are going to have a conversation about lots of things that we've both been thinking
01:27about.
01:29Yes, well, delighted to be here. Good afternoon, everyone.
01:34I know it's a packed session, a packed schedule.
01:38We're going to try and make it lively, James and I.
01:41We have this habit of conversing about things, so we'll do it in front of you this time.
01:47So, James, we haven't seen each other since last year, but you were just at Google I/O
01:55and from your perspective and the perspective of everything you're doing at Google and Google DeepMind,
02:02well, can you speak about the most significant opportunities for society from AI?
02:10Well, thank you.
02:12I mean, the way we think about the opportunities from AI is kind of in a few areas.
02:17One is what it can do to enable what people want to do, the most creative, imaginative endeavors.
02:24We also think a lot about the potential to impact the economy, something I know you think a lot about.
02:31We've also been very excited about the possibilities for science, and that's something I'll probably come back to,
02:37but also the ability to impact society.
02:39But I can maybe highlight some of the things I came away most excited about at I/O a week
02:46ago.
02:47Yes.
02:48So, let's see.
02:49There's so many things we talked about at I/O.
02:51First, I was very excited with our current and latest generation of Gemini models,
02:57and I'm particularly excited about them at least for three reasons.
03:01One is the fact that these models are natively multimodal from the ground up.
03:06Second, we've now kind of innovated this idea of a very long context.
03:12So, you may have seen that we'd announced a one million token context window.
03:16We've actually now upped that to two million tokens in the context.
03:23And then we're starting to explore some of these agentic capabilities.
03:28Hopefully, some of you might have seen our Project Astra, which we're very excited about.
03:34And that's a very exciting area.
03:36I should also mention that we announced the latest generation of our Gemma open models.
03:42We're very proud of that.
03:44By the way, our French team...
03:45And it's your team in France, exactly, working on this.
03:47In fact, some of our team in the AI hub in France actually worked on our Gemma open model.
03:53So, we're very, very excited about it.
03:54So, there's a lot that we shared at I/O.
03:56Yes.
03:57But I'm also quite curious.
03:59I'm sure we'll come back to the science topic.
04:01We should.
04:03You've been thinking a lot.
04:05And you've been running this commission looking at the opportunities for AI in France.
04:11And you published your report recently.
04:14Talk a little bit about the highlights from that report.
04:16And more importantly, what do you think will happen next?
04:19Yes.
04:20Well, thank you for that.
04:21So, I typically carry a version of the report with me.
04:25But yesterday, I had one and I gave it to someone after my speech.
04:29So, I don't have it.
04:30But I have the cover page.
04:33And we're very proud, of course, of the content.
04:36But we're also very proud of the cover page of the report.
04:40AI, our ambition for France.
04:41We delivered that report to President Macron beginning of March.
04:47I think March 13th.
04:50And we've had nice press coverage here in France,
04:55and a bit more globally than that.
04:57We're making a lot of recommendations.
04:59We're making, but the very first one, and I'm sure I'll come back to it.
05:03The very first one is we need everyone to be able to understand what AI is.
05:10We need an opportunity for people to experiment with it.
05:14We need capacity building.
05:16We need training.
05:18We need to ensure we don't leave people on the side of the road.
05:22So, that's true for France.
05:23So, it means people who are not digitally native.
05:26It means people who are maybe older, or who are scared of computers and things like that.
05:35But everybody really should have an opportunity to experiment and to try it.
05:40And so, we've delivered that report.
05:42And actually, yesterday at the meeting at the Élysée with President Macron, the government unveiled some of the things they'll be
05:50doing.
05:51That's exciting because one of the things that's always struck me looking around the world is just the extraordinary amount
05:57of talent in France, actually.
06:00I mean, there's one of the reasons why we've invested in an AI hub in Paris.
06:04There's just extraordinary talent.
06:06And I think the opportunity for France, at least in my view, is to enable that talent to do
06:11extraordinary things.
06:12So, I hope that's part of what will happen.
06:16But how do you think, Anne, about balancing some of these potential benefits of AI, some of the ones I
06:23mentioned,
06:24but also thinking about the risks and the complexities that come with AI?
06:28Yeah, so it's important to think about both the benefits and the risks.
06:36There's a quote that I really like.
06:40It's not related to AI.
06:41It's by Marie Curie.
06:46So, it was about a different type of science.
06:46But what she said is that now is – no, sorry, that's the second part of the quote.
06:53The first part is that there is no – this is not something to be scared about.
06:59This is something to understand.
07:01And then she said, now is the time to understand more.
07:04So, in front of something new, I think it's of course understandable that people can be afraid of AI.
07:11The first thing is to try and understand better.
07:13So, that's why the capacity building, training, experimentation, social dialogue, all of these things, I think, are the first ones.
07:21And then, of course, we also need to study the risks and to address them.
07:26And in order to do that, we should not only think about the maybe science-fiction-type risks
07:33and the things that come to mind when you think about, oh, the wilder possibilities,
07:40but also the real, more concrete risks about AI and work or AI and biases or disinformation and deepfake.
07:50I think all of these things need to be studied and we need to have concrete solutions as much as
07:55we can to address them.
07:57Yeah, and those are some of the things we've been thinking a lot about.
08:00I think one of the other things we announced, for example, at I/O a week ago is that, you
08:07know,
08:07about a year ago, we invented this watermarking technology we call SynthID.
08:13Initially, we'd done this for images and audio.
08:17And so, this year, we actually have extended that to text and video.
08:22And more than that, we've also open-sourced it.
08:25Partly because we think it's quite important to have many, many innovators build on it, improve on it.
08:32And I think it's going to take all of us.
08:35So, I think these are some of the things we need to think about,
08:37especially in a year like this year when, what, two and a half billion people around the world are voting.
08:43So, the concerns about misinformation are very important.
08:47But I think these are some of the things we need to focus on.
08:51And I would like to highlight that.
08:52I think the more recent events have really showcased this:
08:58using people's voices to train AI.
09:00And then, if it's with their agreement, that's fine.
09:03If it's not with their agreement, that's an issue.
09:05The whole issue around authenticity, can you really say it's who you think, who's talking to you?
09:14And then, there's the scams, actually.
09:15People are being targeted to wire huge amounts of money because they think their boss is asking them to secretly
09:21do something.
09:23So, there's lots to be done around voice and images and video.
09:28So, I really like the things that you've been doing there.
09:33They'll be very useful.
09:34Yeah, but I think it's important to balance the combination of both these innovations, but also the risks and complexity.
09:42I think one of the things we've been thinking a lot about is this idea of being bold and responsible.
09:48But let me talk a little bit about some of what bold means for us.
09:52It basically means trying to do, can we do the most ambitious things that can help people, the economy, and
10:00society?
10:01And, you know, this brings me back to my comment about science, because some of the most exciting, fun things
10:07that we've done, maybe I can highlight at least three things from the last three weeks.
10:13So, three weeks ago, we released a model, one of our Gemini models, tailored for medical applications: Med-Gemini.
10:21It's now the state-of-the-art model.
10:23It uses long context, can understand videos, and a whole bunch of things in the medical domain.
10:28That was three weeks ago.
10:30Excellent.
10:30Two weeks ago, we also announced and released AlphaFold 3.
10:35Yes.
10:36If you recall, the first AlphaFold that we'd done about a year ago was focused on understanding the structure of
10:42proteins.
10:42Proteins.
10:43And actually did all, whatever, 200 million of them.
10:46What we've now done with AlphaFold 3 is actually extend that to all of life's biomolecules.
10:52So, not just proteins, but also RNA, DNA, ligands, and so much more.
10:58And we've actually made that available to scientific researchers from around the world.
11:04I think already from our original AlphaFold, we have something like 1.8 million biologists around the world who are
11:12using this to actually work on a whole range of neglected diseases.
11:17And actually about 44,000 of them are actually in France.
11:20I don't know if you knew this.
11:21So, French researchers are actually using AlphaFold to work on things.
11:25And then a week ago, just to again highlight the pace of these things, we did some work on connectomics
11:33to use AI to actually understand portions of the brain at the synaptic level, which hadn't been done before.
11:41This is actually work we did in collaboration with Harvard.
11:43So, I mentioned these three examples just to give you, again, just the pace and scale in just the last
11:51three weeks.
11:51But I think we need to balance that with the idea of being responsible.
11:55That's why the idea of bold and responsible really matters, which includes things like, are we being thoughtful about, will
12:03this harm people in any way, will this benefit people in any way, and how do we keep researching things.
12:10So, that's how we're thinking about some of that.
12:12Yeah, no, excellent.
12:15One of the things about AI is that you have new announcements every week, as you showed, or even every
12:22day.
12:23And it's fascinating, and that's also why we watch this space closely, and we can start to see better every
12:33week the benefits that it can bring to humanity.
12:36But that's also why I think a lot of people are scared, and that's also why we really need to
12:42think about what we can do.
12:45And so, I like this approach of bold and responsible.
12:48One of the things that we've tried to do in France, and this is not to do with the Commission,
12:54but more to do with what I do at École Normale Supérieure on the societal impacts of AI.
13:02We created, we launched at the end of last year, so in December, an AI and society institute.
13:10And the idea is to have a place where you can have researchers, very solid ones, very rational people
13:18who will go deep into things and study the societal impacts of AI.
13:24And then, not only do the research and keep it to themselves, but also engage in public debate, engage in
13:32helping people see through, first of all, the accelerated timeline of new developments, but also the difficult questions of, is
13:41it good, is it bad, where do we need to navigate, how can we steer this?
13:46And I like this idea of helping steer the development of AI, because it's really moving fast, it's really changing
13:56things, and if we don't do anything, it will just maybe just go the way it's being pushed by the
14:03people who develop it, and then maybe it will not exactly go where we want it to go as a
14:09society.
14:09So we have, and I know you shared that, I'll let you react to that, but it's about how we
14:14steer AI, how we shape it to be beneficial for humanity, rather than maybe more on the productivity side, for
14:23example, for work, and more on the augmentation and improving the lives of workers.
14:29No, I think that's exactly right, and I think it's also the reason, and I strongly believe that particularly in
14:35rooms like this, when we've got builders, developers, and innovators, the more we can think about what are the incredibly
14:43ambitious, beneficial innovations and products that we can build to serve people, economies, and humanity, I think that's fundamentally important.
14:52And I think I worry sometimes when, in fact, all our narratives are just focused on the risks, those are
14:58very, very important, but we should also be thinking about why are we building this technology?
15:03I think all of the developers in the room and the innovators are thinking about how do we improve society,
15:08how do we build businesses, how do we do imaginative, innovative things that solve some of the world's problems.
15:14I couldn't agree more.
15:16I mean, if we, for example, focus for a minute on the topic of AI and work, of course, AI
15:21will have a huge impact on work, and of course, there are some jobs that are really focused on specific
15:28tasks that will be automated or replaced by AI in the field of, for example, dubbing for films or some
15:37of the translation tasks,
15:39but also there's a whole set of new jobs, and I know you can talk about that, and if we
15:45start the conversation by saying, you know what, half of the jobs on earth are going to disappear in the
15:51next two years,
15:51well, first of all, I fundamentally think it's wrong, and that's not what the AI Commission shows or all the
15:57economists' reports and studies show, but also, it's not where to start the conversation.
16:04The conversation is where, how do you ensure that you get more of the people ready to do the new
16:10jobs, how do you help all the people whose jobs are going to change,
16:14and how do you help people who are doing jobs that maybe will not be needed as much move to
16:20something else?
16:21But I know you have lots to say about that.
16:23Yeah, I think this question of skills and skilling is fundamentally important, fundamentally important.
16:29The one thing that I am excited about and quite optimistic about is I think much of what we're experiencing
16:36with this current wave of generative AI is how assistive it is to workers.
16:42There's now been quite a lot of studies that show that, in fact, quite often it's the less skilled workers,
16:48when they make use of this technology, that actually get the most out of it.
16:52And I think the possibility of actually giving workers more opportunity, and kind of, you know, historically, I think access
17:00to work and opportunity has always been constrained by either expertise or experience.
17:05But the fact that I can actually interact with all these systems with very little expertise and still get something
17:12useful out of it, whether it's coding tasks or writing tasks and so forth, I think it's very empowering and
17:19very, very imaginative.
17:20So I think the more we can deploy solutions that are assistive to workers in that way, I think we'll
17:26do a lot.
17:26That doesn't mean there isn't an incredible, important need to work on skilling and skills for workers.
17:33I think that'll be very, very important as well.
17:36No, I agree.
17:37Maybe we could move to global governance and international topics.
17:43AI, of course, is global.
17:45This is not something that one country can contain; maybe one country is trying to do that, but very few countries
17:51would be able to keep it within their borders.
17:53It is international by design, global data sets, global compute, etc.
17:58You co-chair the UN advisory board on AI that was launched by the United Nations Secretary General.
18:06You published an interim report.
18:08You're continuing to work on this.
18:10Can you tell us a bit more about this?
18:12Yeah, no, no, thank you.
18:14I mean, I've had the great privilege of working with a group of colleagues.
18:18There are actually 39 members on our advisory board.
18:21I don't know how many you had for your commission.
18:23Fifteen.
18:24Okay, we're 39 from 33 different countries.
18:27And I think that's actually very important because the members come from academia, from the private sector, from civil society
18:36and from government.
18:36And in our report that we published, it highlighted a few specific things.
18:42One is the fact that there's lots of opportunity from this technology to improve society, improve the world.
18:49Yes.
18:49At the same time, there are risks, many of which we've talked about.
18:52But it also highlighted some important gaps.
18:56And the gaps were actually of two kinds.
18:58Governance gaps, but also capacity gaps.
19:02So when you look around the world, especially at the global south, countries in Latin America, Africa, parts of Asia,
19:09there are incredible capacity gaps to enable them to both participate in the development of this technology and its use.
19:16So there are some clear, important gaps.
19:19So we made some recommendations that highlighted a few important principles on which the world should think about governing AI.
19:28The fact that it should be based on fundamental human rights.
19:31Yes.
19:31The fact that it should be based on international law.
19:33Yes.
19:33That it should be in the public interest.
19:36That, in fact, we should try to use it to solve the world's toughest challenges and opportunities.
19:41But it also highlighted some functions that probably need to get coordinated around the world.
19:48One of those functions, for example, is just interoperability of standards.
19:52I think in a room full of people who are building technology, you don't want to be building for each
19:57country one at a time.
19:59I think you want interoperable standards.
20:00I think there's also a sense that it's important to have some way to harmonize even the various regional and
20:09national regulations that are coming together.
20:12Yes.
20:12That it actually kind of works because the technology is global.
20:15So there's a lot of exciting, important work.
20:17But the one thing that I probably hadn't quite appreciated, Anne, from this work is to see how attitudes around
20:25AI are very different around the world.
20:28Yes.
20:28So it was quite striking that, for example, many in the global south are actually quite excited about AI.
20:35Yes.
20:35In some ways, much more so than in Europe or North America.
20:40But they still have issues around these gaps, the desire to participate, to be part of it, and so forth.
20:45So that was also quite interesting.
20:48No, I think that's super interesting.
20:50Actually, France is one of the countries where people are the most scared about AI.
20:55And, of course, it's right, we've discussed it, to think about fears and risks.
21:00But it's also super important to understand the opportunities and to help people have places where they can discuss and
21:06debate and then get more comfortable.
21:08And as you said, in a number of countries from the global south, people are actually much more eager to
21:14be able to not be deprived of the chance of using AI.
21:18But I want to come back to something you're doing.
21:21So you're going to host, you're helping to think through the AI summit that France is going to host.
21:29Yes.
21:29And there have now been quite a few of these summits.
21:32I mean, there was the UK summit, the Korean summit is happening as we speak.
21:36Yes.
21:36And you're going to host the French one.
21:38How are you thinking about that?
21:40And will it be similar to the other summits that we've seen?
21:43Say more about that.
21:45So, yeah.
21:46Well, first of all, I'm super excited about that.
21:49France will host an international summit on AI in February 2025.
21:55It will be the 10th and the 11th of February.
21:5810th and 11th of February, put that in your calendars.
22:02Please take note.
22:03Yeah, please take note.
22:04This is when France will host everyone who's relevant and wants to participate on AI.
22:10And a lot of the things you said from your work at the UN level is actually on the agenda.
22:18The UK summit focused a lot on the latest frontier models and the whole idea around big risks and safety.
22:30The Korea summit taking place right now in Seoul is around innovation and especially focused on Asia, which is very
22:39important.
22:39And the French summit will be called the AI Action Summit.
22:43Oh, I like that.
22:44Yeah, because we want to try and so, of course, it's not like there's a lack of action.
22:48But actually, action happens and things happen from the development of models.
22:54We need, as societies, as countries, as individuals, to steer it, so to take action, to shape it.
23:01And so the areas that we have in terms of what we want to focus on for that summit,
23:06one is, you actually mentioned it, is public interest AI.
23:11What do we do for AI in the public interest?
23:13How can we have access to compute, access to data that is for the public interest, public research, and not
23:22only private research?
23:23So what openness do we need?
23:26What level of openness?
23:28Satellite data, should it be available to everyone to build their own models on?
23:33So public interest AI is a very big track.
23:36AI in work, we mentioned it, is a very big track as well.
23:41We have innovation and culture, and we're putting them together because of the copyright issue and the tension sometimes between
23:50creativity and technological innovation.
23:54But what if we solve this by putting them in dialogue and making those two work together, which I hope
24:02we can do?
24:03Then, of course, we have the safety and security discussion, which is an important one, and the global governance track,
24:09for which we will be discussing with you at the UN advisory body and seeing what is the most relevant
24:16to do at UN level,
24:18maybe at other levels, maybe then there's the regional and country specifics.
24:22But, yeah, so we'll be working between now and then on hopefully concrete deliverables that would be there by February,
24:31the AI Action Summit in Paris.
24:33Yeah, I'm actually very excited.
24:35I hope I get an invitation to come.
24:37Of course you are invited.
24:38But I do think, I like the fact that it's called the Action Summit.
24:42I think the focus across the different tracks.
24:45But I also think what you said, Anne, about culture and all of that is fundamentally important.
24:51One of the things we hear a lot, even in our work at Google, is the importance of actually understanding
24:57all the incredible diversity of languages, of culture.
25:01And, in fact, I think the metric is if you actually want to get to something like 96% of
25:07humanity,
25:09you'd have to actually be able to work in something like 7,000 languages.
25:12So we still have a long way to go.
25:14So I think there's some way to reflect the diversity of the world, the culture, the, you know, whether it's
25:19in Europe or any other parts, I think it's fundamentally important.
25:23But as we wrap up here, because I think we're close to the end of our time, I'm actually quite
25:27curious if we could do just a quick round here.
25:31Sure.
25:32What are you most excited about in short order?
25:35What are you most excited about?
25:36So, well, I'm not going to steal yours, because I agree with what I think you're going to say, but
25:41I'll leave it to you because it's really your point.
25:44Right.
25:45But I'm excited about the ability to steer the development of AI.
25:48I think this is really, we have few occasions in our lives, in humanity, where we have a technology that
25:56is nascent,
25:56and we have an ability to steer it, hopefully, for the benefit of humanity.
26:02Now, what are you excited about?
26:04Well, I might actually try to sneak in two things.
26:07I think on the one end, I'm very excited about the AI and science and the application of science.
26:12I think the rate at which we're able to do incredible breakthroughs across all the fields of science.
26:18I didn't even mention some of the work we do on material science.
26:21Some of the work we're doing even in areas like quantum, both quantum computing and some, that whole space is
26:27very exciting to me.
26:28But, you know, the other thing that I've come to appreciate a lot lately is what creatives and creators are
26:35doing with this technology.
26:36Yes, I agree.
26:37That's incredible.
26:38I agree.
26:38So, when we've had artists, musicians, filmmakers... I think we showed at I/O last week some work we're doing with
26:45the actor and director Donald Glover,
26:48who's trying to do some incredible creative work.
26:50That's very exciting.
26:52Excellent.
26:53Well, yeah, and for the summit, we're going to try and do cultural events.
26:56So, well, I'll pick your brain on that.
26:58Well, I think that wraps our time together.
27:00Thank you, everyone, for listening to our conversation.
27:04And James and I are going to continue this conversation outside of the stage.
27:08Thank you.
27:08Thank you, James.
27:09Thank you, everyone.
27:11Thank you.