Part 2 Preparing Your Company for the AI Revolution

Transcription
00:00All right, so now I'd like to welcome onto the stage four more guests to discuss this topic in a
00:07bit more detail.
00:08So we'll be welcoming Professor Gina Neff, who's the Executive Director of the Minderoo Centre at the University of Cambridge.
00:19And we'll be welcoming Xavier Vasquez, who is the Chief Technology Officer of IBM Technology and R&D France.
00:27Corinne de Bilbao, Chief Executive Officer and President of Microsoft France.
00:34And Beatriz Sanz Sáiz, Global Consulting Data and AI Leader at EY.
00:40Let's give them a hand, please.
00:52Thank you.
00:54So, just so we know where we stand with how these three companies, and perhaps as well how
01:02the Minderoo Centre in its own way, are preparing,
01:04so we know where your next comments will be coming from.
01:07Can you just give us a bit of a lay of the land of how each of your companies are
01:11preparing for the AI wave?
01:15Bracing yourselves, trying to ride the wave.
01:18Maybe let's start with you, Beatrice.
01:20Thank you.
01:21Well, for us, AI is a strategic imperative.
01:25We are a knowledge-intensive sector, and we all know that AI is a knowledge kind of game.
01:31So, I mean, it's three years ago already that we started a huge investment, a $1.4 billion investment.
01:39And September 2023, we launched EY.ai, so a unifying platform that brings together a technology stack, the ecosystem, which,
01:49you know, is hugely important, and the talent.
01:53We launched a strategic program across three pillars: transform clients, transform EY, because we believe we need to disrupt ourselves, and
02:02that's kind of the first experience, and transform the world as well, because we can play a role in advising
02:08the regulators and the multilaterals.
02:11So, I think a year on, I think we can be proud of, you know, some of the things we
02:19have achieved.
02:20We're driving some of the largest transformation programs for large corporates, also to build their dot-AI version.
02:28So, what is the dot-AI version of a company?
02:32We have reskilled 400,000 professionals at EY
02:36through kind of a very intense AI academy, and I think we have a best-in-class
02:45platform.
02:46And last, I want to make sure that we have time, but I think it's important to remark on one point:
02:54things are happening very quickly.
02:58So, transformation is an imperative, but transformation with confidence.
03:03So, that's also a focus that we have as a Big Four firm, as you can imagine, right?
03:08We bring in the experts in risk, in legal, and for us, we have also, again, walked our own
03:16journey, and so we have rebuilt even our own internal policies.
03:22So, again, when we advise our clients, we have somehow walked the talk.
03:27Corinne, I think, more than any other panelist, Microsoft's work with AI is the most well-known.
03:34Perhaps most businesses are already using co-pilots.
03:38So, perhaps you can tell us a bit more internally at Microsoft.
03:42Give us a view on the inside.
03:44What are you using AI to achieve from an enterprise perspective?
03:47Yeah.
03:49Well, of course, I mean, we've been embracing AI very early, and we saw that wave coming since 2010.
03:58Didn't want to miss it.
04:00And I think before I answer your question, I mean, there are three things to consider.
04:03I mean, the investment in infrastructure is super important.
04:08And this is, I mean, the role we have in each country, I mean, to get infrastructure close to the economy
04:13so you can enable companies to adopt generative AI.
04:16The second one is the ecosystem.
04:18So, the way we do it internally but externally as well is you need a dynamic ecosystem, and you need
04:23to make sure, I mean, you have different options, different large language models, small models.
04:29But the third one is adoption, and here we go.
04:32The adoption is super important.
04:35I mean, in the last six months, you know, I'm going to talk about Microsoft, but I just want to
04:40give you a data point.
04:41We ran a Work Trend Index across 31 countries, 31,000 employees.
04:46And what the data says is that in the last six months, the adoption of Gen AI has doubled by
04:51employees.
04:52Employees want Gen AI.
04:54And all leaders are trying to figure out the path forward.
04:57How do we do it?
04:59Employees come with their own AI at work, like they used to come with their own devices.
05:04So, of course, I mean, we're in the best position to drive Gen AI adoption, adoption of Copilot
05:11as well.
05:12But we've kind of done the same.
05:13It's a little bit, you know, top down and bottom up.
05:16So, it has to come from the top and, you know, to train people, get them the skills because this
05:23is a skilling problem.
05:26This is a skilling topic.
05:27And second, I mean, you need to get people, I mean, able to have fun and to try technology.
05:33So it's a bottom-up process as well.
05:36But it's how you use it every day.
05:37So, we use it every day in every function: in sales, in marketing, in our support functions.
05:42As we grow, I mean, you have the need for more and more support.
05:47And generative AI helps, you know: you get your virtual assistant.
05:51It helps you.
05:52It helps with developers, you know.
05:54Even in training: when you hire new developers, instead of having, you know, senior people spend time with them, or
06:00waiting six months before they can develop code,
06:03now they have a virtual assistant to help them grow.
06:06So, it opens a lot of career opportunities for new people.
06:09So, we use it with our developers.
06:11We use it in marketing, in sales, in finance.
06:14I mean, just say.
06:15But you need to make sure, I mean, this is an everyday job.
06:19It's not that you train once and then it's done.
06:21It's an everyday job.
06:22Every day, you need to get your team to use the technology.
06:25I got it, I tell you.
06:27I was one of the first ones to get a Copilot.
06:29I was kind of by myself.
06:31I was like, oh, what am I going to do with it?
06:33So, it's also a community, you know.
06:35How you build, inside your organization, a community of users who share tips and tricks.
06:40And everyday, even our employees are discovering new usage, which is phenomenal, you know.
06:45Great.
06:46So, Xavier, then, tell us, give us a bit of insight about IBM, specifically the R&D that you're pursuing
06:53when it comes to AI.
06:55Right.
06:56So, I can start with a personal opinion.
07:00When you do research on AI, or you are a data scientist, one of the, let's say, biggest fears is
07:07to leave AI in the hands of a couple of companies across the world.
07:12And the reason I'm saying that is that across the world, you have many countries, many languages, different values, different
07:21cultures.
07:22And we think it is really important not to have a strategy where you centralize everything, but a more distributed AI
07:33landscape, let's say.
07:34And the ecosystem is very important to work with the local ecosystem and to take into account the local ecosystem.
07:43So, in IBM, we are really promoting this open science, open source.
07:48Open source is a very good way to, let's say, not leave behind countries or companies, and
07:56also the cultures, the languages, and the values of those countries.
07:59So, we are promoting these two, let's say, important pillars and then create platforms that allow freedom to the companies.
08:09And you mentioned adoption.
08:10I think it's super critical because if you do not adopt, then you will be left behind.
08:17And our strategy is about providing the platform that allows you to have freedom regarding AI.
08:24When I say freedom, I mean: are you able to incorporate into your data, into your AI platform, proprietary models, models
08:32coming from open source,
08:34for example, sourcing them from Hugging Face?
08:38Or are you able to create new models and to incorporate that into your platform and put this at scale?
08:44That's why, also, it's important to develop some strong partnerships.
08:49We did that with Meta and Llama 3, and also Mistral AI in France, so that we are able to incorporate their
08:56open source models, but also their large, commercial models.
09:00And also to give the freedom to deploy these AI models where you want.
09:04It can be in a public cloud.
09:06Azure, for example, can be IBM cloud, can be AWS, or in a private cloud or on-premise.
09:13And this is what we have been doing at IBM: to, let's say, build this kind of platform that
09:21allows you to incorporate models and have this freedom.
09:24And, by the way, we also develop our own models.
09:27Some of them are open sourced.
09:29We just announced a couple of days ago the open-source Granite models for generating code.
09:37But also the way you train your model.
09:40For us, it's important to say, okay, there are the general models, the big ones.
09:47Sometimes they are strong black boxes, let's say.
09:50And what we say is an AI strategy is not only about calling an API.
09:56It's also creating value with AI.
09:58So we also create our own models.
10:01And the way we do it is also by incorporating data that we think is of quality.
10:06Just to give you a quick example, and I will finish on that: we developed a small model because we
10:12think that specialized models are easier to deploy, less expensive, and more energy-efficient.
10:18So we trained a small model called Granite, 14 billion parameters.
10:23And we used 6.5 terabytes of data.
10:26And this data is coming from the public domain.
10:29So many companies are using this kind of data.
10:32And we had to eliminate two-thirds of this data.
10:36The reason is that within this data, you have bias, you have incorrect language, you have URLs that are
10:46not really recommended, let's say, to click on.
10:51You have some recipes that are not really recommended to follow, either.
10:56So two-thirds of the data has been eliminated to train this model.
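The heuristic filtering Xavier describes can be pictured with a toy sketch. This is purely illustrative: the domain list, thresholds, and function names are hypothetical placeholders, not IBM's actual Granite data pipeline.

```python
import re

# Hypothetical blocklist of domains we don't want the model to learn from.
URL_BLOCKLIST = {"malware.example", "phish.example"}

def has_blocked_url(text):
    """True if the document links to a blocklisted domain."""
    domains = re.findall(r"https?://([^/\s]+)", text)
    return any(d.lower() in URL_BLOCKLIST for d in domains)

def looks_like_natural_text(text):
    """Crude quality check: long enough and mostly letters/spaces."""
    if len(text) < 20:
        return False
    clean = sum(c.isalpha() or c.isspace() for c in text)
    return clean / len(text) > 0.8

def filter_corpus(docs):
    """Keep only documents that pass every heuristic filter."""
    return [d for d in docs if looks_like_natural_text(d) and not has_blocked_url(d)]

docs = [
    "A clean paragraph of ordinary prose that passes both checks easily.",
    "Visit https://malware.example/free-stuff now!!!",  # blocked URL
    "x=1;y=2",                                          # too short, too symbolic
]
kept = filter_corpus(docs)
print(len(kept), "of", len(docs), "documents kept")
```

Real pipelines layer many more filters (language identification, deduplication, toxicity and bias classifiers), which is how a majority of a raw public-domain crawl can end up discarded, as in the two-thirds figure above.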
11:01So that's our positioning here.
11:03Really open science, open source, publishing the data sources, publishing the architecture, publishing some models, and having this platform that
11:12can give this freedom to companies.
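Xavier's point that small specialized models are cheaper to deploy can be made concrete with back-of-the-envelope arithmetic. A hedged sketch: the bytes-per-parameter figures are common rules of thumb for model quantization, not IBM-published numbers, and this counts weights only.

```python
def weights_gb(n_params, bytes_per_param):
    """Weights-only memory footprint in GB; ignores activations and KV cache."""
    return n_params * bytes_per_param / 1e9

GRANITE_PARAMS = 14e9  # the 14-billion-parameter model mentioned above

# Rules of thumb: fp16/bf16 = 2 bytes/param, int8 = 1, int4 = 0.5
for precision, bpp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{precision}: ~{weights_gb(GRANITE_PARAMS, bpp):.0f} GB of weights")
```

At fp16 that is roughly 28 GB, versus hundreds of gigabytes for a model an order of magnitude larger, which is the deployment-cost gap driving the "small specialized models" argument.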
11:15Gina, perhaps, for people who don't know, just a brief introduction to the Minderoo Centre and what you're working on
11:21specifically with AI and what you think about these different approaches.
11:25Great.
11:25I'm Gina Neff.
11:26I run the Minderoo Centre for Technology and Democracy.
11:29You know, just two small problems linked together at the University of Cambridge.
11:33And the work that we've been doing over the last year really is to help to think about the capacity
11:40that we need to build in society.
11:43So our mission is to help make sure we're making digital technologies work for people, communities,
11:50and the planet.
11:51And right now, some of the conversations we're having about AI assume that we can get there without any costs
11:58to our climate goals.
12:01Just recently in their earnings report, Microsoft, for example, disclosed how costly it is in terms of their Scope 3
12:11emissions, the work of building and maintaining large models.
12:18So one of the things that we're doing is helping people and companies think about how they're building the social
12:26capacity to really harness the technological changes that are going on.
12:31And let me give you some concrete examples about that.
12:34So, for example, in the U.K., where we're based, we don't have the EU AI Act, but the
12:41U.K. government has taken a really proportionate, what they call a proportionate, response to the regulation of AI.
12:46They say, you know, like Saint Augustine: regulate AI, but not yet.
12:54Like, give me AI regulation, but not yet.
12:57The U.K. government has said, let's use existing regulation to think about how we address the potential harms of
13:06AI.
13:06And my center, along with other stakeholders, convened a nine-month process bringing together business, bringing together the tech industry,
13:15bringing together human resource professionals, unions, civil society, academia, to say, okay, let's figure that out.
13:25If we're going to think about if we're going to use existing regulation to think about AI in the workforce,
13:30what do we need to change?
13:32What should change now?
13:33So we've put forward a campaign to think about what an AI and employment bill would look like.
13:39It's a little more streamlined than the omnibus approach that's happened here in the EU.
13:44The second thing in terms of building capacity is to help companies and others think about what those principles of
13:52responsible AI are.
13:54So I help to direct a 30 million pound investment called Responsible AI UK that brings the cutting edge of
14:01research together with industry to really help, again, figure out what we need to know at the intersection of
14:09those organizational, company, you know, cultural issues that we're having around the technology.
14:17And then how do we feed that back into the conversations that are about technology?
14:21So I know, in these kinds of conversations that we have here at VivaTech, you know, I was
14:27here last year and it was also standing room only.
14:32Everybody worried about what is AI doing for professional work?
14:37What is AI going to do for our jobs?
14:39What is AI going to do for our companies?
14:41And a year later, the sky has not fallen.
14:44But we're busy and we're focused on trying to make sure that we can get these things right.
14:53We've got a limited amount of time, I think, to help people and companies make the choices today that are
15:01going to put them on solid footing for being able to benefit from these incredible and powerful technologies.
15:11So if we don't do that capacity building, if we don't do the kinds of work of helping to upskill
15:19workforces, then we're going to have a hard time seeing the benefits in a way that doesn't make existing today's
15:28inequalities even worse.
15:30Is it, let me ask this to all of you, actually, I mean, it might seem like an obvious question,
15:37but is it just easier for the bigger companies to bolster themselves more and prepare themselves more, particularly, for instance,
15:45for potential job losses,
15:51compared to smaller players who might struggle to upskill or provide necessary training for their employees to use AI?
16:03Maybe I'm jumping.
16:05I don't think so.
16:06I think AI is a huge opportunity for the middle market, actually, even for smaller companies, because, maybe we have
16:18to demystify this, the cost of technology will come down.
16:22Like big implementations, ERP implementations, I think very quickly we will see that being replaced
16:31by a lot of the AI capabilities.
16:34So the cost of technology, which is a huge barrier to entry, is coming down.
16:42Also, we will see a more and more disintermediated economy.
16:49So disintermediated economies are opportunities also for the small and mid market, because they can reach the end
16:55consumer directly.
16:57And we will see that, you know, coming through any kind of AI application.
17:04You will be able to order directly: hey, I want this thing at my house at 11.
17:09And, you know, the technology, the platform, will handle it.
17:13So we are entering, I think, a more disintermediated economy.
17:21But I think it's the Gaussian bell curve.
17:25I think everybody's talking about AI adoption.
17:28And companies that do not adopt AI, or employees who don't, will be displaced, because of the productivity gains, et
17:37cetera.
17:37So I think it's an imperative.
17:40Any other thoughts on that?
17:41I can, I can follow.
17:43I think, at a macro standpoint, somehow we get the productivity we dream of, in terms of, you know, the
17:50knowledge workers and what we call the white collar.
17:54Because productivity has been lagging behind for many years.
17:57So, at a macro standpoint, I mean, people tell you it saves them time, makes them more creative, more
18:03performant.
18:03They like it.
18:05The fact of the matter, as well, is that in some areas you have an aging population, or in some areas you
18:09just cannot recruit.
18:12And so the point is that maybe some jobs will go away.
18:17You will have new jobs, like in any technology disruption, like the printing press.
18:21You know, some jobs went away, but you had new jobs, like the bookbinder, logistics.
18:25And here, micro level, you create a new economy.
18:28You will have new companies working, new startups.
18:31You will have companies working, you know, in the energy sector, because of their energy consumption.
18:37You will have companies in the chips industry, so that creates new jobs.
18:40Now, at the company level, it's really required.
18:43The most important thing a company can do today is to look at their competency map, at their skilling.
18:48I mean, the human resources person is becoming the most important person, because this is about planning.
18:54How are you going to plan, you know, the future of work in your company?
18:57It's a massive disruption.
18:58It's not only about IT, it's every single function.
19:02So that's a massive work.
19:03I am not sure the small company will be left behind, because there's the agility piece, which is important as
19:10well.
19:11Now, what's happening today is that most of the large companies are embracing it, but I think, I mean, the speed-up
19:17is going to come, you know.
19:18Small companies are going to catch up.
19:21And this is why, at Microsoft France, we invested massively in that skilling, to give, as well,
19:29you know, simple kits to small and medium companies to embrace AI.
19:33Because for the first time, this is not a complex technology in terms of usage.
19:38It's in your own language.
19:39So you speak to the machine.
19:41You don't need to be, you know, to have a PhD in IT.
19:45So it is in that respect that I think there is hope that the full economy will embrace it.
19:50We have a responsibility as big companies.
19:53You said it before, in terms of, you know, having the ecosystem be part of it, investing in startups.
20:00Many of those companies, as well, like, again, we've been working with them, supporting them, and many more.
20:07That's the only recipe, you know.
20:09But we need to be aware, as well, that there is this responsibility for us as big companies, but also
20:16for large groups, to make sure, I mean, they get their whole ecosystem embarking on that journey as well.
20:23I can add, also, and I can cite, maybe, two examples that come to my mind.
20:29The first one is the invention of printing.
20:33At that time, printing was seen as a threat, because we would lose our memory, we would lose our brains, let's say,
20:45and it could create a couple of problems.
20:48And what printing brings to the world is sharing knowledge, developing science, economy, new opportunities.
20:58And I think it was a super opportunity, and I think we are at this stage on AI.
21:06And the second example was mentioned yesterday by President Macron, who brought AI experts together in a session,
21:16and I had the chance to be at that round table.
21:19But he said something that was important, I think.
21:23When robotization came, the first reaction of some countries was to be against robotization, because
21:31we would lose our jobs and it would create some problems, et cetera.
21:35And some countries just embraced it, and what happened is that the countries that embraced robotization improved their productivity,
21:45industry stayed in the country, and they became more productive and developed new opportunities.
21:51Whereas for those of us that were against robotization, what happened is that the industrial fabric went out of the country
22:01and went to low-cost countries.
22:03So I think we are in the kind of the same stage regarding AI.
22:08And I like the word adoption, because if we do not adopt it, and if companies do not adopt it,
22:16and this includes smaller companies,
22:18I think we will stay behind, whereas it's for us really a new opportunity to develop the economy, to
22:28develop science, share knowledge, et cetera.
22:31But, and I think the 'but' is super important: how do we govern all of that?
22:38How we make sure that models are well governed, how we make sure that models are doing what they are
22:45expected to do,
22:46how they incorporate all the culture and languages and so on.
22:51So governance is also a very important point in getting the whole economy to embrace this.
22:55Gina, if we look at the UK specifically, that IPPR report in March was predicting up to 8 million jobs
23:02lost.
23:03That's, that's, that would, if that came to pass, have a cataclysmic effect not just on companies,
23:10but an entire disenfranchisement of a section of the population as well.
23:14That's right. That's an eighth of the country.
23:19That's about 30, 40% of the labor force.
23:23This was a report that was released a few months ago by the progressive-leaning think tank IPPR.
23:29And what I think is really interesting about that headline is that they say,
23:33well, there may be 8 million jobs lost in the UK from AI, and there may be zero.
23:39They lay out three pathways.
23:41And, you know, the high is really high, and the low is, well, whoops, we don't actually have job loss.
23:46The best economic estimates that we have globally, coming from the International Labour Organization
23:52and the OECD, suggest that net job loss will be about zero.
24:01That doesn't mean that the transition, though, to AI technologies will be easy.
24:07In fact, it's going to be hard, and it's going to be hard for a lot of people.
24:12So we have, I think those of us in the room, we have a responsibility to think now about what
24:20kinds of transitions to an AI future we want.
24:23Do we want a future that is open for people to participate in,
24:28or do we want a future that has been shut down and is closed
24:32and is not providing the opportunities for, you know, either productivity growth or human flourishing?
24:39And let me just add one small point, and this may be the most controversial thing I say.
24:44An AI future that works for society cannot be based on the small companies buying services from a few large
24:57companies.
24:58We cannot build an AI future where the cost to play is a billion dollars or more.
25:06That's not a future that will work for the kinds of innovation, the kinds of excitement, the kinds of products
25:14and services,
25:15and let's be frank, the kinds of problems that the world is facing.
25:19Just last week, I was sitting with colleagues from Microsoft and others with a group of humanitarian sector organizations,
25:28and these are organizations that are, you know, at work in some of the hardest conditions and
25:35environments around the globe.
25:37And they're struggling with how to make sure that they use AI responsibly,
25:42how they make sure that their generative AI tools aren't causing harm to people who are already facing war and
25:50famine and other disasters.
25:53They're wondering, how can I get companies to focus on my problems because my problem is too small to make
26:01a market in if your, you know, market cap or your valuation is billions of dollars.
26:06When you hear some of the CEOs of generative AI startups, they say, we're excited to see what use cases
26:13come from the world.
26:14Well, if your go-to-market strategy is, well, let's throw it out into the market and see what people
26:20do with it,
26:21you might not be open and available to seeing what small companies can do, what the innovators can do,
26:28where that excitement and entrepreneurial enthusiasm is.
26:31So we have to make sure that when we're looking at a foundation model strategy in various countries,
26:38that those models are handled with a sense of responsibility of who and how can build on them.
26:45And this is why I was really, you know, excited to hear kind of your experience.
26:50Because if those models are producing content that puts communities at risk,
26:58if those models are producing biased outputs that put companies at reputational risk,
27:06people simply won't build on them.
27:08And we will have lost an opportunity to develop the next generation of AI tools that can actually make our
27:14world a better place.
27:15The other potential brake on AI is if the predictions about the technology getting exponentially better don't come to pass,
27:26and we remain around the same performance as we have now,
27:31how does this change your predictions of where it's going to go?
27:35Because if it does start to tail off, do we need to take that into account as a possibility?
27:42Do we need to have different strategies as a company, depending on the performance trajectory that AI is going to
27:49take?
27:49Is this something people are thinking about?
27:52Well, my view, I mean, it's good to put things into perspective.
27:56I think this technology is in its infancy.
28:00Think of any technology revolution.
28:03It always starts big, large, expensive, slow.
28:09Building an iPhone with the technology of the 1960s would have cost something like a trillion dollars
28:14and required a computer the size of the entire city of London.
28:18So I think with AI, the path is going to be quite similar.
28:24We are close to what we call general AI, an AI with the ability to reason.
28:32That would be absolutely a game changer, because then the need for large language models,
28:37with billions of parameters, will come down.
28:41So I'm really waiting to see.
28:43I think the potential is there.
28:45I mean, there's no question about, you know, the increase in productivity.
28:48For all of us, it's early days; as I said, it's in its infancy.
28:52For me, it's less question about the potential of the technology itself,
28:57which, again, for the first time, we have a technology that is not process-based, but objective-based.
29:03It is a game changer, right, in terms of the speed to produce an outcome.
29:08We come from productivity ratios, since the financial crisis, of around 0.8%,
29:14less than 1% year-on-year, and all of a sudden it goes up to 30, 40%.
29:19Now it's a fact.
29:20And I think, again, when general AI comes in, even that will go up.
29:26And so business will start moving from, hey, how can I automate my existing process to,
29:34now, wait a moment, I may not need this process.
29:37I may completely rethink about how do I achieve an outcome.
29:41This is what it's all about.
29:43So I think, again, my recommendation would be to put things into perspective
29:49and consider this technology, especially when general AI comes in.
29:54And I think, even in 2024, we will have some sort of first shapes
30:01of general AI coming into play.
30:04So I'm positive, optimistic.
30:09Any other thoughts on that?
30:10I think we're already over that point.
30:12I don't believe, I mean, that we'll go back.
30:14It's a bit like, you know, hybrid work.
30:18When we had the pandemic crisis, we didn't go back.
30:22I mean, it's a revolution which is here forever now.
30:25People want to have hybrid work.
30:28And now they want to use generative AI, be more productive, be more creative.
30:31So I don't think we'll have to go back.
30:33But like in any technology change, we are at a point where do you go from experimentation
30:39to transformation?
30:40And this is a problem every company, every organization is facing.
30:44And to me, I mean, what we can do, and which is very important, is to give access to technology,
30:50which means huge skilling programs for everybody.
30:54This is what we announced at Choose France last week.
30:58It puts AI in the hands of everyone.
31:01And job seekers, entrepreneurs, companies, small, medium-sized,
31:07and make sure, I mean, everyone can use the technology easily and adopt it.
31:13So the skilling part includes professionals,
31:15because the most difficult part is not when you're a startup.
31:18The most difficult part is when you work one way for years,
31:21and then you have to change.
31:22And so this is why we also launched a professional skilling program, and inclusion as well,
31:27because that needs to be inclusive.
31:29This is in our DNA; we do believe, I mean, we have a role to play in terms of
31:34inclusion,
31:35which is to make sure everybody can access the technology,
31:38and these are some programs we are launching with the Simplon schools.
31:45Beyond the access, there is a responsibility.
31:49You touched on it, it's responsible AI.
31:51We've been working on principles for years, ethics, fairness, security as well, privacy, transparency.
31:59And so we do provide tools, and from the design stage, because it's not just when you deploy;
32:04from conception, you need to make sure, I mean, you deploy a responsible AI.
32:09And I think the last one is fairness, because if you are big
32:14and you deploy that technology but it doesn't help to solve some of the largest problems of the world, that's a problem.
32:19Healthcare, you know, because there is a path through research
32:23where you can use the technology, moving forward, to solve some of the largest issues;
32:28there are a lot of examples in oncology,
32:32where technology will help, fortunately.
32:35Same for the planet, I mean.
32:37And I hear what you say in terms of, well, we are transparent,
32:41and for sure, I mean, the deployment comes with some increase in Scope 3 emissions,
32:45but you also have a responsibility to, well,
32:49first of all, work on it, to maintain your commitments,
32:52but as well to use the technology to solve some of the largest problems of the planet.
32:56And the startups are important.
33:00We launched in France an accelerator for sustainability.
33:03Some of them, I mean, CarbonNet, which does carbon capture,
33:07have been working on measuring where we are,
33:10helping companies to be able to measure.
33:13So if you don't work on access, responsibility, and fairness,
33:18and on some of the problems, I mean, you're right,
33:20in that case, the technology will be seen as a scary tool and won't deploy.
33:23But I do believe it will.
33:28You want to start?
33:30Go ahead.
33:31Go ahead.
33:31If I can just jump on the tail of that,
33:32because I think, Corinne, your points are so great,
33:35and if you haven't seen how Microsoft articulates those principles of responsible AI,
33:41you should definitely see that, those kind of, I think, six, seven principles.
33:45Six.
33:46And I would add a seventh in terms of accountability,
33:49and that's my one point.
33:50You know, there are some questions that tech companies building AI
33:56should not be the ones to decide.
33:59The AI scientist Stuart Russell talks about having very good asphalt engineers,
34:05you know, the ones who make the paving for the roads,
34:08making all the decisions about where roads should go.
34:11We wouldn't stand for a group of engineers standing up and saying,
34:14well, you know, the beach, it would be more efficient
34:16if we just paved it over with our asphalt.
34:19We're really good at making asphalt.
34:21Let us do this.
34:21We can solve your problem.
34:24We would perhaps riot in August in the south of France for that.
34:29But on the other hand,
34:30there is the idea that we need to build the capacity of societies to raise those points,
34:37to help hold companies, their governments,
34:42and themselves accountable for the decisions we are making.
34:45So on some of these trade-offs, for example, in terms of climate,
34:48we're going to have to face some questions and make choices to meet our climate goals.
34:53And we're going to need to hold companies accountable
34:57to make sure we get there.
34:59And that includes companies at the forefront of AI.
35:02That's why, you know, these kind of calls to make sure we measure
35:06and we can benchmark and we can know are great.
35:10But we also need legislatures to step up and say,
35:13this is what we need to see
35:15if we're going to have a sustainable future for our societies.
35:18Thank you.
35:21Yeah, a few things to maybe add regarding this.
35:28I insist on my first point.
35:32AI should not be something that sits in the hands of a couple of companies.
35:38I think it's super important because,
35:42and I'm not against those companies,
35:45but I think we should see much larger than this.
35:51For example, what is the project for the society?
35:55We have to develop awareness among the population:
35:59how they can use it,
36:01how they can see the potential of AI,
36:04but also the limits, how to handle it.
36:06We speak about fake news, for example:
36:08how can we detect fake news,
36:10how can we give people the tools
36:13to manage and interact with AI?
36:17What is the project for the society?
36:20Meaning, do we have a project for the future of society,
36:25for the good of humanity,
36:27one that also manages the threats and handles all of this?
36:32And also for the companies, we spoke about the adoption.
36:35I think it's critically important.
36:37And that's why at IBM, though I know others do this too,
36:42we created what we call the AI Alliance,
36:46where we bring together companies, universities,
36:50but also governments in order to share these ideas, principles,
36:54and make sure that we develop a responsible AI,
36:58but also the tools to measure it.
37:00I think it's super important what you said about measuring this
37:04and creating the tools to measure all these developments in AI.
37:11So I think it's super important that we make, let's say,
37:16a somewhat larger AI, an AI for the world.
37:21That's, let's say, my point on this.
37:24We've talked a lot about ethics, responsibility.
37:29Corinne, you perhaps touched on it a little bit,
37:31but what I'm not hearing is the answer to,
37:35is AI, the pursuit of AI, going to trash companies' climate goals?
37:41Is it already having a major impact on climate goals?
37:46And how can this be rectified?
37:48We tend to look at the negative side of it.
37:52But if we look at the net-net,
37:54also in terms of the carbon footprint
37:57that is saved because of these productivity gains,
38:00I think in the long run the net-net will be very, very positive.
38:03Also because, back to the point I was making before,
38:06when general AI comes in with the ability to reason,
38:10the need for parameters, the need for data,
38:13will be reduced to a fraction.
38:15And so we will see these large language models
38:18become less large, kind of much smaller,
38:21needing orders of magnitude less
38:25data.
38:29So net-net, which I think is how we should see it.
38:33I think AI will have a huge impact in a positive way
38:38in terms of a carbon footprint.
38:40That's my...
38:42I think when you look at this,
38:44I mean, CO2 emissions from IT in general
38:48have doubled in the last few years.
38:50That's for sure.
38:52That's the impact of all the IT industry.
38:56First of all, the whole industry is working on it, you know.
38:59For example, Microsoft,
39:01we are going to be on 100% renewable energy by 2025
39:06for our data centers, 24 hours a day, 7 days a week.
39:11But what you don't measure is the contribution of IT
39:17to meeting your green goals.
39:20Let's take the example of hybrid work.
39:22When we started to work in a hybrid way,
39:26maybe 30% of the population commutes right now,
39:29where before COVID it was 100%.
39:31I mean, if you look at all the transportation costs you avoid,
39:35of course you use technology,
39:37whether you use Microsoft or other technology,
39:40which has an impact,
39:41but it's really much less versus, I mean,
39:43the avoided transportation.
39:44It's Scope 4.
39:45We don't measure Scope 4.
39:46That's okay.
39:47That's fine.
39:47We have to clean our own garden first.
39:50And that's important that every company does that.
39:52But on a larger scale, when you look at this,
39:55if you look at efficiency in transportation,
39:57in mobility,
39:58when you look at logistics,
39:59where you're going to save a ton of time,
40:03when you look at waste as well,
40:05you're saving CO2.
40:07We don't measure it because it's not under the CFCL rule,
40:11and that's fine.
40:12But I think the global equation is important as well.
40:14Now, that said,
40:16all companies have to clean their own garden
40:19and make sure, I mean,
40:20we are still on our commitments.
40:21It becomes harder because, as you develop,
40:25as Brad says,
40:26the moon is a little bit further away,
40:28but still, you know,
40:30it's important that we stick to the commitment.
40:34I really agree on that.
40:35I think we are at a turning point.
40:38There is the AI for green,
40:40but we have to think about the green for AI.
40:43And we are in the middle of this.
40:46So, the question is very simple.
40:49Okay, do we use
40:50one or two percent of the
40:53energy consumption in the world
40:55for AI,
40:57but at the same time,
40:58are we able to save
40:5920 percent of
41:01the energy consumption in the world?
41:03So, this equation needs to be solved.
41:05That's why we also have
41:07a couple of strategies for that.
41:09I mean,
41:10from a technology standpoint,
41:11there are many ways,
41:13let's say,
41:13to improve this,
41:14like research on the hardware itself,
41:19developing new chips,
41:20developing new ways
41:21of doing the computation.
41:24We can speak about quantum computing,
41:26for example.
41:27It's one of the possible futures.
41:29And also,
41:31all the use cases we do
41:32for, let's say,
41:35saving energy consumption,
41:36of course,
41:37in transportation
41:38and many industries.
41:40So, this equation
41:41is very important.
41:43I think we don't yet
41:44have the answer,
41:46even if there are
41:47a lot of use cases,
41:48et cetera,
41:48but sometimes,
41:50maybe the GIEC (the IPCC)
41:51will answer this question.
41:53But we are in this
41:54turning point
41:55where we have to do
41:57this equation.
41:59Basically,
41:59what I believe,
42:00personally,
42:01is that Gen AI
42:02will clearly help
42:04improve our carbon footprint
42:06and also fight
42:09climate change.
42:10I will give one example
42:12and I will finish on that one.
42:13We built a partnership
42:15with NASA
42:16where we analyzed
42:19geospatial satellite images,
42:22sensor data,
42:23scientific publications,
42:25et cetera,
42:25and we are able
42:26to ask
42:27very simple questions.
42:29What is the consequence
42:31of using this aerosol
42:32in this geographical zone?
42:35How can we predict
42:36flash flooding
42:38at 30-meter
42:40precision?
42:41How can we really
42:43ask questions
42:44of the scientific community
42:47while breaking the silos,
42:49and invent
42:50new sustainable materials,
42:52new algorithms
42:53that will optimize
42:54transportation?
42:56New materials
42:57are fundamental
42:58when you want to,
42:59for example,
43:00build a new plane
43:00with batteries
43:03or something like this.
43:04You will need
43:04not heavy metals
43:06but, on the other hand,
43:08light metals.
43:09Take batteries, for example:
43:11heavy metals
43:11are a big problem.
43:13How do we reinvent
43:14new materials?
43:14And that's why
43:15gen AI will bring this
43:17to the world, clearly.
43:19The economist
43:20John Maynard Keynes
43:21very famously said,
43:23in the long run
43:24we're all dead
43:25and I truly hope
43:27that is not the case
43:28with how we talk
43:29about AI
43:30and our climate.
43:31We have work to do now
43:33and there are concrete things
43:34that can be done now.
43:35I think we can hope
43:37for the breakthroughs
43:38and the innovation
43:39but we have agreed
43:41as countries
43:42that we have a set of goals
43:43that we need to meet
43:45to mitigate the worst
43:47of the impacts
43:47of climate change
43:48and what that means
43:49for the tech industry
43:50I think is in part
43:52checking our work.
43:54Can we ensure
43:56that the code we run,
43:57that the models we build
43:59are as efficient
44:00as we can make them
44:02given the purpose we have?
44:03That's something
44:04I think the industry
44:05can step into
44:06and lean into.
44:07My colleague at Cambridge,
44:09Loïc Lannelongue,
44:11has developed
44:12a code checker
44:14called Green Algorithms
44:15that's now being used
44:17by the French scientific council,
44:19and it's a small step,
44:22but it's these small steps
44:24that we need to take
44:25in the tech industry
44:26to make sure
44:27that we're not the cause
44:29of runaway climate emissions.
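[Editor's note] To make the measurement point concrete: the published Green Algorithms methodology estimates a job's footprint roughly as runtime times hardware power draw times data-centre PUE, converted to CO2e via the grid's carbon intensity. The sketch below follows that general shape, but every default coefficient in it is an illustrative assumption, not the tool's actual figures:

```python
# Minimal sketch of a Green Algorithms-style carbon estimate for a compute job.
# All default coefficients below are illustrative assumptions, not the tool's values:
#   energy  = runtime * (core power + memory power) * PUE
#   carbon  = energy * grid carbon intensity

def job_energy_kwh(runtime_h, n_cores, core_watts=12.0, usage=1.0,
                   mem_gb=16.0, mem_watts_per_gb=0.3725, pue=1.67):
    """Estimate the energy draw of a compute job, in kWh."""
    power_w = n_cores * core_watts * usage + mem_gb * mem_watts_per_gb
    return runtime_h * power_w * pue / 1000.0  # W·h -> kWh

def job_carbon_gco2e(energy_kwh, grid_gco2e_per_kwh=475.0):
    """Convert energy to grams of CO2-equivalent for a given grid mix."""
    return energy_kwh * grid_gco2e_per_kwh

if __name__ == "__main__":
    # Hypothetical 10-hour job on 8 cores with 32 GB of memory.
    e = job_energy_kwh(runtime_h=10, n_cores=8, mem_gb=32)
    print(f"energy: {e:.2f} kWh, carbon: {job_carbon_gco2e(e):.0f} gCO2e")
```

A real estimate would plug in the calculator's own per-CPU power figures and country-level carbon intensities; the point of the sketch is only that this accounting is simple enough to check in code.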
44:32I'm glad you started
44:34rather than ended that
44:36on that Keynes quote
44:37saying in the end
44:38we're all going to die anyway
44:39because that wouldn't have been
44:40a very optimistic quote
44:43to finish on
44:43but thank you everyone
44:44very much
44:45for listening to
44:46Gina, Xavier, Corinne
44:47and Beatrice.
44:49Let's give them
44:49a nice round of applause.
44:51Thank you.
44:51Thank you.
44:54I know there's a lot of topics
44:56we didn't quite touch on
44:57but I'm sure
44:58you will be milling around
44:59the conference centre
45:01and available for questions
45:02I assume.
45:02And thank you very much.
45:04Thank you.
45:08Thank you everyone.
45:09I'm going to ask you
45:10to leave the stage as rapidly
45:11as we can
45:12because right now
45:13and in just a few minutes
45:15in fact
45:15thank you so much
45:16to that panel
45:17thank you so much
45:17to Peter
45:19for moderating that.
45:20We have a live demo
45:22of OpenAI
45:24featuring the latest
45:26ChatGPT happening
45:27in just two or three minutes.
45:28The setup is going to start here.
45:30Take your seats
45:31get ready for that.
45:32We have Romain
45:33who is head of developer
45:34experience at OpenAI.
45:35Thank you.