  • 5 weeks ago
The Promise of AI

Category

🤖
Technology
Transcript
00:00Closed captioning: Société Radio-Canada
12:20It's a family of AI; it's going to be many different things. It's like if you said "there is software": it's
12:25not one software, it's going to be many different applications, many different types of products.
12:29Yeah, I think we're seeing a tremendous amount of innovation, and there isn't necessarily one answer:
12:36for some use cases, you need your systems to be really fast and responsive.
12:42And Google has a 25-year history of trying to take technology and deploy it at scale globally.
12:52So for those things, having it on your mobile device is going to be really useful, or having
12:58it in a data center where you can serve it in milliseconds; those are the things we need to aim
13:02for.
13:04If we move on to maybe the other side of AI: one of the first problems we've seen with
13:09large language models is the problem of hallucination.
13:13Is it something that is actually built into these models, something you will never be able to get
13:18rid of?
13:19Or is it something that AI will improve on and one day overcome?
13:26Yeah, so clearly there is the problem of hallucination, where, for example, a large language model will just make up
13:35things that sound plausible but are totally false.
13:40A lot of people, the first time they interact with an AI system, if they
13:44have any kind of public profile, they ask it, you know, who is David Bahou, or whatever.
13:49And then it makes up a CV for you that has lots of fictional things in it.
13:54Certainly has happened to me.
13:55And so, you know, these are problems that we gradually will chip away at.
14:02We've tried to be very responsible about the way we put out our
14:09language technology. We had LaMDA, for example, which was our large language model, two years ago.
14:16We presented it at I/O as a demo, but we didn't actually unveil it until recently in the form
14:22of Bard.
14:22One of the reasons we are taking our time is that the quality of information, the groundedness, is really, really
14:32important for these things to be useful on a daily basis.
14:35So how do we do that?
14:38Well, we do that by leveraging, for example, search so you can have the system kind of link to additional
14:49sources and so people can follow down to original sources and find what's out there.
14:56But this is a very active area of research.
14:58And just to answer your question, I think we will chip away at it until it gets better and better
15:04and better in the same way as, for example, self-driving technology.
15:09People had demos of this.
15:10Actually, I remember there was a demo of a self-driving car.
15:13And you used to work at Uber at one point.
15:15And I used to work at Uber.
15:16I was a chief scientist at Uber.
15:17But back in 1980-something, there was a self-driving truck that drove, you know, on interstate highways in Pennsylvania.
15:29And so you can do a demo, but it's just not safe enough.
15:32And it takes, like, sometimes it takes a decade or maybe several decades to get it to the quality levels
15:37you want.
15:37It's the same with language models and groundedness and factuality.
15:41Among the other things that keep people worried, there's the impact on jobs.
15:45Yeah.
15:45How do you see the negative impact this might have?
15:49Or is there also maybe a good side of AI?
15:51I mean, it's clearly a top of mind for many people and including, you know, the public and policymakers.
15:59I think one of the ways to think about it is that what AI really does is it can help
16:07automate skills and tasks.
16:10But if you look at a job, whether it's, you know, a journalist or, you know, research scientist, a job
16:17has many, many different aspects to it.
16:20So generally in the history of technology, tasks have been automated and jobs have evolved to maybe be more efficient
16:31and new jobs have appeared over time.
16:33So, I mean, I'll admit, I don't know where this is going to end up, but I certainly think that,
16:39you know, there will be changes to our current jobs.
16:42We'll start to use these tools. And at some point, some jobs may just become obsolete.
16:47Mm hmm. It's more a comment than a question.
16:49But somebody once told me that maybe the difference this time is that it's worrying for white-collar people,
16:56because it used to be that the revolution would impact more the blue-collar workers.
17:00And the other thing is that this revolution is going so fast. Usually you have time to adapt; this time
17:05there is a risk that the adaptation period will be short and that we won't have time to
17:10reskill ourselves.
17:11Yeah, I think both of those things are true. But one interesting aspect of AI is that when
17:20you think about reskilling,
17:21it's actually amazing how you can use AI tools to learn new things.
17:27For example, I was teaching my 13-year-old daughter Python, the programming language, about a week
17:35ago.
17:36She did an exercise, and I asked her, do you understand the code, line
17:43by line?
17:43And we typed it into Bard, Google's AI chat system. And it explained every line of code in amazing detail.
17:54And it was just an amazing education companion. And I thought, well, this is pretty powerful.
18:00You know, nice work; I should have sent an email to my team saying, oh, good job,
18:05because, you know,
18:06we can use these technologies to help us adapt as well.
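As an illustration of the anecdote above (this is a hypothetical sketch, not the actual exercise from the interview), a beginner Python exercise of the kind described might look like this, with the sort of line-by-line explanation an AI assistant can produce shown as comments:

```python
# A typical beginner exercise: count the vowels in a word.
def count_vowels(word):
    count = 0                      # start a running total at zero
    for letter in word.lower():    # visit each character, ignoring case
        if letter in "aeiou":      # check membership in the vowel string
            count += 1             # add one each time a vowel is found
    return count                   # hand the total back to the caller

print(count_vowels("Python"))      # prints 1 (only "o" is a vowel)
```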
18:09Another topic I'd like to touch upon is regulation. What kind of regulation should we try to put in place?
18:16Because there is, of course, a risk that AI can have a negative impact, can be used for
18:23bad motives.
18:24So how can we try to mitigate those risks? Is regulation the only solution we have,
18:31even if it slows down or prevents innovation in some fields?
18:34Yeah, I think, you know, as our CEO said, AI is too important not to regulate,
18:43and too important not to regulate well.
18:47So what that means is, first of all, we need to think about the downstream use cases of AI, whether
18:57it's like self-driving cars or medicine.
19:00All these areas are already regulated, right? We need to think about the negative impacts that AI can have through
19:08misinformation, hate speech, phishing and scam attacks, and things like that.
19:15Of course, a lot of those things already have regulations. But we also need to think deeply about potentially new
19:22problems that may arise from AI.
19:24And so really, it's not for any one company or government to do. It really needs to happen as a
19:31cooperation between governments globally and large corporations.
19:38And also, we need the buy-in of smaller companies as well as the general public.
19:44But do you think there will be room for auto-regulation, self-regulation? Because it's a competition also. So there's
19:51always a maverick, there's always a startup that's going to try something and use a breakthrough to try to be
19:56the next Google, in a sense.
19:57Yeah, I think you can't leave it purely to self-regulation. You know, I think
20:04realistically, these are technologies that impact billions of lives on a daily basis.
20:11There is an important role for governments to play as, you know, bodies that represent the public.
20:19But, you know, it is very competitive. There is a lot of innovation. And, you know, the navigation of that
20:28fine line is going to be a challenge for all of us.
20:30And some of the regulations that we have are probably historically just, like, out of date.
20:37So we may need to reconsider the way we think about, you know, certain issues.
20:42AI is going to be a journey. It might be a long journey.
20:45Would you say that we're at the first chapter, the very beginning, the infancy, the adolescence, or the adult age?
20:53Where are we on this journey?
20:55You said you've been working in this field for 30 years, and after 20 it started to move. So is it still
21:01a baby?
21:01Yeah, I mean, it was a baby, you know, kind of in my view, like a decade ago. It really
21:10felt...
21:11I mean, I had a personal moment where the first time I flew to the San Francisco Bay Area,
21:19and I drove by and I saw a billboard that said machine learning, like there was some company hiring for
21:27machine learning skills.
21:28I thought that was really weird. I'd never seen my weird academic field on a billboard by a highway.
21:36And then, you know, but now everybody's talking about AI and machine learning.
21:41And, you know, we've got a long ways to go. There is a lot that can happen over the next
21:47few decades.
21:48And will all the billions that are injected into this industry help you grow faster?
21:54Yeah, I mean...
21:56Or is it science-bound, and all the dollars will not actually... we need breakthroughs?
22:02Yeah, I think that it's not just about how much money you throw at things.
22:07You know, there are really fundamentally important scientific challenges still to solve.
22:13I'll give an example. Large language models distill a lot of knowledge that is openly available on the web.
22:22That's amazing. But they don't really understand the physical world.
22:27So they don't understand the common sense that, you know, if I put a glass of water on this table,
22:33it won't go through the table or fall on the floor or something like that.
22:37So: causality, fundamental advances in embodied AI.
22:46On all of these things, we're still in the very early days.
22:50Very early days, but very interesting days in front of us.
22:53Thank you all for being here.
22:54Thank you, Zoubin, for joining us here at VivaTech.
22:57Thank you very much.
22:59Thank you.