AI Trends From Lab to Market

Category: 🤖 Technology

Transcript
00:30...our viewers online. So welcome to you all. Now after a morning of diving into deep tech, we're taking it
00:37in a new direction now and looking into a technology that's on everyone's minds, obviously, AI. Because there are so
00:44many unknowns, aren't there? Where will the increasing use of AI take us? Should we be afraid of generative AI's
00:52impact on our jobs? Very probably.
00:54How is regulation adapting to this fast-changing landscape? Those are just some of the questions that we're going to
01:01be delving into during the next four hours.
01:04And once again, we want to hear your thoughts, your questions about these issues. And it is sometimes an all-encompassing
01:12and slightly overwhelming theme. So we want to get your input as well.
01:16So for all the interactive sessions that we'll have, and I'll let you know whether it is an interactive session
01:22as they come up, you'll be able to send in your questions by simply connecting to the VivaTech platform on
01:28the app.
01:29What you do there is select the tab in the drop-down menu, entitled Interactive Session with Slido. You select
01:36Stage 3, and you're ready to get interactive.
01:39So please do. Our panel of moderators will always make sure to keep some time at the end for your
01:44questions. We want to make this as interactive as possible.
01:46So, that being said, let's get started with our first session, AI Trends from Lab to Market. Benoit Berthelot, tech
01:55reporter at Bloomberg, is moderating this conversation.
02:23Hi, everyone. Thanks for joining us.
02:28for this first session of the afternoon. So we've all been using ChatGPT. We've all seen the rise of generative
02:36AI in the past few months.
02:38In this session, what we want to look at is, is it a revolution for business as well? Is it
02:44going to last? Is it going to impact many, many companies, your startups, your companies?
02:49Is it already the case? We're lucky to have two great guests today that have been looking at
02:56AI for a very long time and not only six months.
03:00So, let's go ahead and get started.
03:01Joëlle, you lead AI research at Meta. You're a researcher. You've been at Meta for six years,
03:08first as head of the Montreal research centre, and now as the VP of AI Research at Meta.
03:19Quentin, you've been an investor at General Catalyst and investing in AI companies for a few years
03:31now.
03:32My first question, I guess, is from your job, how have you been witnessing the rise of generative AI, this
03:40revolution that only happened in a few months?
03:45Great question to get us started. Pleasure to be here with all of you today.
03:49You know, it's been fascinating to see the progress in AI itself.
03:53And if I can afford a few words of maybe context, I've been a university professor for many years and there's
04:00a piece of that that stays with you.
04:02What's interesting about generative AI may not be just the generation.
04:06In fact, like the generation is our way to know what the model has learned about the world.
04:13And so in many ways, the progression we've seen is the quality of that model itself, what we call foundation
04:20models, pre-trained models.
04:23Once you generate from them, what's been fascinating in the last six months or so is just the quality of
04:28the generation.
04:29But that's really the face of the model itself.
04:32The progress on the model has been more steady, I would say.
04:36We've seen increasingly large models, the ability to train on larger data sets and the ability to use these models
04:43for a wider set of tasks and much more diverse use cases than before.
04:49And at Meta specifically, has it been really a revolution for the company?
04:55I will say that the changes that we've seen in terms of, you know, there's really a step change in
05:01the quality of the generation.
05:03We've seen it with images.
05:05We've seen it with text.
05:06We are seeing it with music, with code, with speech, with lots of other modalities.
05:10Where most of the change has been is really in the excitement to find ways to take these research results
05:21and bring them into product and bring them into new experiences for their users.
05:26And so that is where we've seen the biggest step change, I would say.
05:29On the research, we've been investing continuously and building our research models for a decade now.
05:36Okay, we'll come back to this.
05:38Quentin, same question for you.
05:39Yeah, I think if you go back in time, what you see and recognize is that this has been an
05:46overnight success decades in the making.
05:48Or, you know, you were in academia, so 50, 60 years, you were telling me, back in the hallway.
05:53And in some ways, this is actually like the third major sea change in computing we've ever seen.
05:59The first was transistors.
06:00When we moved away from vacuum tubes and started expressing computing in silicon, that allowed us to start to miniaturize
06:08and scale what computing is.
06:09And in most ways, everything you've seen since then, the advent of PCs, the miniaturization of computing down to fit
06:17in a phone.
06:18All that's just like the natural progression of that first major sort of innovation.
06:24And then the second one has been about connecting them together.
06:27And everyone thinks like the internet was this big sea change, and it was, but also it was decades in
06:32the making.
06:33I mean, there were researchers toiling away trying to figure out how to build resilient networks, and what you do
06:40if you can, in fact, connect computers together, how to make them talk to each other, and get utility out
06:44of that.
06:45And then everything since then has been, you know, again, evolution in a way, right?
06:50So the mobile networks, which took decades and decades to build, that we all now take entirely for granted, all
06:57these things kind of came together.
06:58And so computing writ large, getting to scale with silicon, connecting them together, and now going from deterministic creation with
07:08computing to involving non-deterministic capability, this has also been decades in the making.
07:13But its impact will be as profound as the other two I mentioned.
07:19I mean, we can't imagine a world where we didn't figure out how to make integrated circuits and transistors work.
07:25We just can't imagine that world today.
07:26Like, we can't imagine not having them connected to each other.
07:30If you fast forward 10 years, we won't be able to imagine a world where there's not AI interwoven into
07:37the software we use every day.
07:39And so that sort of, like, opportunity and that promise and that kind of scale of change, that's, you know,
07:46obviously exciting, but it did not happen overnight.
07:48It's been going on for a while.
07:50Yeah, the hype is obviously generative AI, but is it a lot of hype?
07:55We've seen a French company raise 100 million euros.
07:59It only started a few weeks ago and has no product.
08:02Do you think it's a kind of a bubble or is it what part of it is hype and what
08:08part is really profound and changing?
08:12I'm not a market analyst, you know, in terms of, like, whether the specific investments are fully worth it.
08:18I will say there are many very talented people who are putting together some really exciting startup opportunities,
08:24and I would certainly bet on a few of them.
08:29I don't think we are overestimating the profound change that will come out of the models that we have today.
08:38What we are able to do, and again, I come back to the model and the representations themselves,
08:43because we talk about generative AI, but the same models can be used to build predictive powers,
08:50to build, you know, all sorts of classification, segmentation tasks, and so on.
08:54And so that level of distilling of knowledge that can then be used to make predictions,
09:00to control, and to generate, I think it's going to be a profound change.
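[Editor's note: to make the point about reusing the same pre-trained representations for predictive tasks concrete, here is a minimal sketch, not anything shown on stage: a standard pre-trained vision backbone stands in for a foundation model, its weights are frozen, and only a small classification head is trained. The model choice, class count, and data are placeholders.]

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical example: reuse a pre-trained backbone as a frozen feature
# extractor and learn only a small task-specific head on top of it.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)  # downloads weights
for p in backbone.parameters():
    p.requires_grad = False                      # keep the foundation representations fixed

num_classes = 5                                  # placeholder downstream task
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)  # new, trainable head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on dummy data.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
loss = loss_fn(backbone(images), labels)
loss.backward()
optimizer.step()
```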
09:05For you, Quentin, as an investor, obviously, the question is especially important for you.
09:10Yeah.
09:12I think, you know, like you said, we're very much at the beginning of the impact of this and what's
09:18happening.
09:18But a couple things are already very clear.
09:21One, the ability to have these models produce output based on, you know, prompts that are given,
09:29questions that are asked, but do that in context of other information that's been given,
09:34whether it's from a history, what's called memory of, you know, successive questions I'm asking,
09:39whether it's from embedding or fine-tuning or other things that are within prompts.
09:43It's capable of producing uncanny sort of results.
09:47There's a counterbalance to this.
09:49The other thing that's very clear is we've not yet figured out how to understand whether those outputs
09:55are what we're looking for exactly, problems with hallucination and just not being factual about stuff,
10:02because that token prediction isn't necessarily bounded yet by facts or some other system that can help us do that.
10:10And I think it's one of the reasons that you're seeing the world right now embracing this sprint at how
10:16many generative models do we need to create
10:18that are maybe domain specialized or fed a specific diet as they're trained to get closer and closer and closer
10:26to the accuracy we need.
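[Editor's note: as a rough illustration of what "memory of successive questions" and extra context in a prompt can look like, here is a library-free Python sketch that simply assembles conversation history and retrieved snippets into the text handed to a model. The generate() function is a stand-in for whatever LLM API is actually used; all names and data are hypothetical.]

```python
# Sketch of prompting "in context of other information": the model only sees
# what is packed into the prompt, so history and retrieved facts must be
# assembled explicitly on every call.

def build_prompt(history, context_snippets, question):
    lines = ["You are a helpful assistant. Use only the context below."]
    lines += [f"Context: {s}" for s in context_snippets]           # e.g. embedding-lookup results
    lines += [f"User: {q}\nAssistant: {a}" for q, a in history]    # "memory" of earlier turns
    lines.append(f"User: {question}\nAssistant:")
    return "\n".join(lines)

def generate(prompt):
    # Stand-in for a real LLM call (hosted API, LLaMA, etc.).
    return "(model output would go here)"

history = [("What is VivaTech?", "A technology conference held in Paris.")]
context = ["VivaTech 2023 runs June 14-17 at Porte de Versailles."]
print(generate(build_prompt(history, context, "When does it run this year?")))
```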
10:42As an investor, what's an AI startup that you would invest in and what's an AI startup you wouldn't and
10:49would never invest in?
10:50That's a good question.
10:52It's a little tricky, too.
10:53So, look, here's, like, let me, because this is a huge space, right?
10:58There's foundation models that are being built.
10:59There's tooling that's being built.
11:01There's applications that are being built.
11:03So let me just take one example in the app space that people can maybe relate to.
11:07Everyone's kind of, like, it takes very little time to go find discussions on the internet right now about the
11:16impact this will have on support.
11:17Like, okay, if I can take a foundation model and I can point it at a company's knowledge-base (KB)
11:23articles about support,
11:24and I can give it the user's challenge, like a user comes in in a chat window and says, I'm
11:29having the following problem, if I take those things, wrap them up, hand them to an LLM,
11:33it can produce very accurate instructions for that user for that specific problem.
11:39That's great.
11:40It's a really good application of an LLM into a problem space.
11:44So what makes a good company out of that?
11:46Well, one who's doing that and just that is not necessarily a great company,
11:53because the incumbents who already own distribution, they can add that feature
11:58because they don't have to, like, rewrite all their software to do it.
12:02Unlike the transition from, like, mainframes to PCs where all the software had to be written
12:07or from PCs to the internet, all the software had to be rewritten.
12:12Here, we don't have to rewrite all the software.
12:14So what's more interesting to us is a company that starts and says,
12:17we're going to go change what's possible in support,
12:20and we're going to make support just disappear.
12:23Like, we're going to make it so that we're taking the telemetry
12:25from what the user's doing, what the product's doing,
12:28and the KB, etc., and we're going to intercede with the user as they're working
12:34so they don't have to open up a chat window to support.
12:37Okay, that's interesting, right?
12:39So it's really, like, thinking through the workflows and what's going to be enduring.
12:43That's one of the things that's really important to us.
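[Editor's note: a minimal sketch of the basic support workflow Quentin describes above (KB articles plus the user's problem handed to an LLM), as opposed to the telemetry-driven version he prefers. The toy keyword retriever, article titles, and llm() placeholder are hypothetical; a real system would rank articles by embedding similarity and call an actual model.]

```python
# Toy version of "point a foundation model at the KB and the user's problem".

KB = {
    "reset-password": "Go to Settings > Account > Reset password and follow the email link.",
    "export-report":  "Open the report, click Share, then choose Export as PDF.",
}

def retrieve(query, k=1):
    # Naive keyword overlap; real systems would use embedding similarity.
    scored = sorted(
        KB.items(),
        key=lambda kv: -len(set(query.lower().split()) & set(kv[1].lower().split())),
    )
    return [text for _, text in scored[:k]]

def llm(prompt):
    return "(LLM-generated instructions would appear here)"

user_issue = "I can't reset my password"
prompt = ("Support article:\n" + "\n".join(retrieve(user_issue)) +
          f"\n\nCustomer problem: {user_issue}\nWrite step-by-step instructions:")
print(llm(prompt))
```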
12:45Joëlle, for you at Meta, you're developing LLaMA,
12:48which is Facebook's version of ChatGPT, basically,
12:54and we've seen great products in terms of image recognition, video recognition.
12:58All these products, are they going to have an impact on the business at Meta,
13:04user engagement, or what's really important for Meta in these new fields?
13:10Yeah, certainly so.
13:11So, you know, we've seen over the last few months really a quick cadence of research releases.
13:17We shared our LLaMA model, which is a foundation language model.
13:21It's not quite fine-tuned like ChatGPT.
13:23It's not intended yet for an end user in an application.
13:25It's really the foundation model.
13:27We open-sourced that in February.
13:29Very quickly after that came our Segment Anything and our DINOv2 models,
13:33both of which are on display for people who want to try them out at the booth.
13:36We have some music generation results.
13:38I-JEPA came out yesterday.
13:40So you have a really quick cadence of research models.
13:43Now, these are in the hands of everyone.
13:45Most of the work that we do is actually shared open source.
13:48They're also in the hands of the developers inside the company.
13:51And so, you know, I'm not yet at the stage of making product announcements.
13:55I think, you know, stay tuned in coming months.
13:57But what I can share certainly today is, you know,
14:01the potential to use these models to enhance the creative expressivity of everyone
14:06is something we're definitely looking at.
14:08The ability for everyone to come in and share with the people that they are close to,
14:13with their communities in a much more imaginative, expressive way,
14:17whether it's through images, whether it's through sounds and so on.
14:22That's something that we're looking into.
14:24And the ability to enhance communication with language models like LLaMA
14:28and have much more interesting engaged interactions is also something we're looking into.
14:32It's super interesting that you are actually working in open source.
14:36I don't know if the audience here associates Meta with open source.
14:41Do you think that this revolution should happen in open source,
14:46or is it going to be closed models?
14:49I mean, which is the most successful in terms of creating a business?
14:54One could ask a question.
14:55There's really different views on this right now, to be honest with you.
15:00And, you know, meta's approach has been really anchored on open source.
15:04And this isn't an approach we've taken just in the last year,
15:07going back, you know, 10 years when we founded our fundamental AI research team.
15:11It was on the premise that when you do research in a way that's open,
15:15when you publish papers, when you release code, data sets, models,
15:19you really do a few things that are important.
15:22On the one hand, you know, I think the philosophy for us is start every project
15:27with the goal to share it openly.
15:30And when you do that, it really profoundly changes how you build the project.
15:34It, you know, it determines what data you're going to use,
15:37how you're going to build the code, how you're going to set up your evaluation.
15:41That helps us have very high standards in terms of excellence and quality.
15:46Quality in terms of the model performance, but also in terms of responsibility aspects.
15:51You know, we've been very transparent about the data sets that we use to train the LLaMA model
15:55in a way that you don't see with other work that's not open sourced.
15:58And so it's been part of our culture.
16:00It's been part of how we do the research.
16:02I believe it helps us do better research more responsibly.
16:06I also think it really accelerates the innovation cycle significantly.
16:11And this is a lesson we learned a few years ago with PyTorch.
16:15I suspect many people are using PyTorch.
16:17It actually came out of the AI research lab at Meta, out of our own needs.
16:22We built this set of libraries, this framework to accelerate our own ability to innovate and build models.
16:28We thought it was pretty good.
16:30We shared it openly.
16:31And last year we shared it as part of a foundation sponsored by the Linux Foundation.
16:36So this sort of, you know, this culture and this playbook of developing the things we need,
16:41doing it with a high standard of excellence,
16:43and then sharing it with the world to really accelerate the cycle of innovation
16:47is something that we strongly believe in.
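[Editor's note: for readers who haven't used it, here is roughly what the PyTorch framework mentioned above looks like in practice: define a model from composable modules, compute a loss, and let automatic differentiation handle the gradients. The tiny model and random data are made up purely for illustration.]

```python
import torch
import torch.nn as nn

# Minimal PyTorch usage: a two-layer network trained on random data,
# just to show the define / forward / backward / step pattern.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x = torch.randn(64, 16)          # made-up inputs
y = torch.randn(64, 1)           # made-up targets

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()              # autograd computes gradients
    optimizer.step()             # SGD updates the parameters
```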
16:49I will add one more thing is when we do these model releases, we do them one by one.
16:55In every case, we evaluate how do you balance the benefit that you can bring in terms of innovation
17:02versus the responsibility that we bear by releasing these models out.
17:06And so, so far, I think in every case it has been a resounding yes.
17:10This is technology that better serves society as well as Meta by sharing it openly.
17:17At some point, maybe that won't be the same answer depending on what are the capabilities of these models.
17:23But for now, I still deeply believe where we are,
17:25we benefit in terms of building better models for everyone when we do that.
17:29Same question for you, Quentin.
17:31Open source versus proprietary data and software.
17:34I think everything you just said is exactly right.
17:38I mean, we have seen this over and over again.
17:40So I was at Microsoft in the era where enterprise open source really first started kind of coming out.
17:46And I would say it took a while for Microsoft to really understand its role in enterprise software.
17:52That's the nice way to say it, I guess.
17:56But over time, what you see is that you get these two major effects.
17:59One, it does really raise the bar for everybody.
18:03You get this cycle of innovation and contribution that is otherwise hard to get to in a closed source
18:09or a single company building proprietary systems.
18:12But the other thing you get is you get standards.
18:16And one of the things that is sorely lacking today in the AI world is standards.
18:22You can't walk up to a model today and ask it, what is it you do?
18:26And that's unlike most other walks of software engineering today.
18:31We spend a lot of time in the object-oriented world around self-reflection.
18:35We document APIs.
18:37We make APIs in such a way where if you walk through and inspect what they do and interrogate them,
18:44they'll express to you what they're capable of doing.
18:46We don't have this in the model world.
18:48And how we're going to fold models into enterprise software when you have to version them,
18:56you have to update them, you have to maybe use different ones in different regions,
18:59use different ones for different purposes.
19:01We have to start thinking through the standards that are going to come to play around these models.
19:07And it's more likely than not to come from the open source world.
19:10So I think more than ever, open source can continue to be an important part of the industry.
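[Editor's note: to illustrate the self-reflection Quentin contrasts with today's models, here is a small Python example using only the standard library. A documented API can be interrogated for its signature and contract; a bare model checkpoint carries no comparable, standardized description of what it does. The summarize() function is hypothetical.]

```python
import inspect

def summarize(text: str, max_words: int = 50) -> str:
    """Return a short summary of `text` in at most `max_words` words."""
    return " ".join(text.split()[:max_words])

# Conventional software can be interrogated for what it does:
print(inspect.signature(summarize))   # (text: str, max_words: int = 50) -> str
print(inspect.getdoc(summarize))      # the documented contract

# A model checkpoint, by contrast, is typically just tensors on disk;
# there is no standard way to "ask" it which tasks, languages, or data
# it supports -- the gap in standards described above.
```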
19:16What are the business areas, the industries that really could be completely transformed
19:22and pretty quickly by AI, generative AI, specifically in your view?
19:26Is it health?
19:27Is it e-commerce?
19:29Is it you name it?
19:30Look, I think that's like asking in, I don't know, you pick the right year, 2007 when the iPhone came
19:40out,
19:40like which industry is going to be affected by mobility or, you know, the late, I guess the mid-90s
19:46when, you know, the Mosaic browser first showed up in the world, like where is that going to have an impact?
19:52The answer is everywhere, right?
19:53I mean, this is not the kind of thing that's going to stay, you know, into one market or one
19:58sector.
19:59We're going to see it sort of systematically impact everything.
20:02And, yeah, I think there are places where you're going to see it move much more quickly.
20:06Things in the consumer space, certainly.
20:09Things around knowledge worker and the kind of work that knowledge workers do every day.
20:13Data, data management as a whole, I think it's another place you're going to see it take pretty fast effect.
20:18But, ultimately, it's going to find its way, you know, across society very deeply.
20:23Joëlle, same question for you.
20:25I tend to agree.
20:27I mean, I think we're going to see that technology come through across all sectors, quite frankly.
20:32A couple that maybe haven't been mentioned so far.
20:34One of them that is really, you know, tackling how they will position themselves is the education sector.
20:41I think there's a lot of very exciting potential to include these models within education.
20:47But, at the same time, it poses some challenges with respect to evaluation and so on.
20:51So, there's a whole sector that's really grappling with that.
20:54I'm getting a lot of inquiries from that side.
20:57I think a lot of the information and media domain is also going to be profoundly changed by this.
21:05And there's an opportunity to incorporate what are essentially new tools in terms of their practice.
21:11But, it really requires rethinking the approach and rethinking how people interact with technology to achieve what they're trying to
21:20do in their field.
21:21Obviously, there is a question of the risks of this technology as well.
21:27The EU Parliament today voted on the AI Act to regulate AI in terms of copyright, in terms of all the
21:35risks that are out there.
21:36Do you think it's necessary?
21:38Or it could hurt, actually, business and the unfolding of this technology?
21:43I do think the government has a role to play here.
21:47And I do think that regulation is a necessary ingredient to any technology that's going to have this kind of
21:53widespread impact.
21:55I mean, that's true for cars, right?
21:58I mean, the notion that we're going to have the government, as the voice of the citizenry of the
22:05planet, be involved here, obviously, it makes a ton of sense.
22:09Regulations, though, have to be, they have to be operable, and they have to be consistently enforceable.
22:14And so, as these regulations kind of come up and get inspection and come to pass, I would encourage people
22:22to look at things through that lens.
22:24As a technologist, can you operate over that regulation?
22:28Like, this is where we went from, you know, from GDPR, you know, some number of years ago to where
22:35it is today, where we're far enough along in terms of how that regulation is written that it actually can
22:39be operated.
22:40You can actually, as a company, you can actually adhere to it, understand it, and you can follow it.
22:45And then there's an opportunity for regulators to enforce them in a consistent way.
22:51And that's the other, like, litmus test that this has to pass.
22:54And so, I think we're still in the early days of where regulation is, and we've seen, you know, announcements
22:59of, you know, the UK getting more involved in the AI work this week.
23:03Saw what happened today.
23:04There's more coming, I think, in France later today.
23:07Like, it's just, we're in the most beginning days, but we have to have those conversations, and we have to,
23:12you know, engage in them now.
23:14Joëlle, as an AI researcher, do you feel the regulators are grasping what's at stake there, and are sensitive in
23:21what they envision for the technology?
23:24I think it's moving so fast.
23:26It's very hard for many people, not just regulators, but including regulators.
23:30It's really hard to always stay sort of on the edge of where the research is and where the innovation
23:37is.
23:38That being said, I do think there's really an important conversation to be had in terms of regulations, in terms
23:45of how to do it in a way that's, you know, operationalized.
23:48And in particular, I think there's two pieces that I think are worth keeping in mind in this respect.
23:56The first one is, I think, for many people today in industry, actually having regulation is a better situation than
24:05having an avoid, an unknown.
24:07The uncertainty makes it very difficult when you're building product that takes several months to roll out.
24:13If you don't have clarity about what will be the regulatory framework, it makes it very, very difficult to know
24:18where to invest and how to build your product strategy.
24:21So, in some sense, having clarity will be beneficial.
24:25The other thing I will say is, you know, we hear a lot of discourse around hypothetical future long-term
24:32risks.
24:33And I think where we do best in terms of thinking regulations is to really focus on the capabilities that
24:39are known aspects of models today.
24:42By really looking at what we know models can do and what are the potential harms and being thoughtful about
24:49how to give the proper framework to that.
24:52Just one very quick last question, maybe for you, Joëlle, from Cédric in the audience.
24:58He says, we recently got GPT-4, which was significantly more performant than version 3.5.
25:06How would the next level of AI, of generative AI, what would it look like?
25:14Everyone is speculating what will be the next generation.
25:17I will speculate with you, but please take that with a really large grain of salt.
25:22And where I'm coming from is really looking at where are the points that are still modes of failure or
25:27weak points in terms of where the technology is today.
25:30I think that's our best sort of hypothesis when we sort of roll this out forward.
25:35I will pick out two aspects that I think we should expect to see in next generations of models.
25:41One of them is I do expect significant progress with respect to so-called hallucinations.
25:47And so in particular, having much better verification factuality of the models.
25:51I expect we'll see a step change.
25:53We have talked so much about this problem.
25:56I do not see a path that someone will release a model where we haven't seen a significant improvement on
26:01that.
26:01The other place I expect to see quite a bit of change is in terms of the ability to deal
26:07with multimodal information.
26:10So bringing in, and already GPT-4 does some of this, but really understanding information coming in from a much
26:17wider set of modalities.
26:19And the reason we will need to move in that direction is because more modalities means more data.
26:24And to get a better model, you need more data.
26:26And so more modalities in, but probably more modalities out also in terms of the ability to generate.
26:31It's something I'd expect to see in the coming generations.
26:34Okay, well, that's fascinating.
26:36Thank you so much for both of you for attending, and thanks to you for tuning in.
26:41Stay tuned.
26:42The next session starts in five minutes.
26:44Thank you.
26:45Thank you.
26:49Thank you.