AI Rulebook: What Is to Be Regulated, How, and by Whom
Transcription
00:00Good afternoon, I'm Jennifer Schenker, Editor-in-Chief of the Innovator, a global publication about the tech industry.
00:09Welcome to our session, "AI Rulebook: What Is to Be Regulated, How, and by Whom."
00:18We're lucky to have with us here today Dragoș Tudorache, Chair of the Special Committee on Artificial Intelligence in the
00:30Digital Age
00:31and the lead rapporteur on the AI Act in the European Parliament.
00:38Next to him, I have Anu Bradford, Professor of Law and International Organization at Columbia Law School,
00:46and last but certainly not least, Rafi Krikorian, CTO at the Emerson Collective.
00:53So, Dragos, let's start with you and talk about the EU's risk-based approach,
01:01why the EU felt it was necessary to take this approach,
01:06and whether or not you believe other countries are going to follow Europe's lead as they did with GDPR after
01:15much grumbling.
01:19Well, let's start five years ago.
01:22Five years ago, at the start of this political mandate,
01:26there weren't many people who thought that regulating AI made sense.
01:30And, in fact, there were quite a lot who were thinking that we were relatively rushing,
01:36not to say that we were lunatics,
01:37to consider that it makes sense to capture AI in a rulebook,
01:41because they considered that the technology was too fluid, too nascent,
01:45and that it was impossible to bring in rules regulating the technology itself.
01:50Others thought that the general principles that were already available out there from the OECD,
01:55from UNESCO, from the GPAI, and also from the companies themselves,
01:59many of which were boasting their ethical credentials, would suffice.
02:04Or others were saying that regulation would be stifling innovation,
02:06and therefore trying to come up with some obligations
02:08would actually be creating exactly the opposite effect of what you wanted to have in the economy.
02:15If we fast forward to last year,
02:17we were in a situation where we had many asking for moratoriums on the technology
02:23because we were talking about existential threats,
02:25and here we are this year with the AI Act adopted,
02:29where I think the majority has completely changed,
02:31and I think there are quite many who believe that actually the safeguards were necessary
02:36because the risks were real,
02:39and they are no longer just theoretical or on paper.
02:41Many who believe that actually rules are important for business
02:46because they bring certainty to the market,
02:48because they put standards in place,
02:50and that helps actually companies navigate the development of AI
02:53and also the uptake of AI,
02:55which is what we want for an economy that will be inevitably driven by AI
02:59because of its potential and benefits.
03:02And also those who understand that if you're a democracy
03:06and you care about your people and you care about societal risks
03:10and the damage that technology can bring to society at large
03:13and also individually to citizens and their rights,
03:16then you need a rule book.
03:17You need fixed rules because general principles,
03:19and if we look in the past of how general principles have been applied
03:23with social media and disinformation,
03:25there's not a great track record out there,
03:27and there's not a great track record of companies actually listening to their ethicists
03:32when they have concerns about how the technology plays out in society.
03:35Absolutely, and I think what you said is really important,
03:40that the conversation has moved on a lot in the last year.
03:44And I mean, if we look at the news headlines just in the last week,
03:50you had the story about the top two people at OpenAI quitting
03:56and saying publicly that these were the people that were in charge of AI safety,
04:02and they say that the company is running after shiny things
04:06and not taking care of safety, and that's why they left.
04:10You have Microsoft announcing that its consumption of energy has gone way up
04:19and that it may not meet its climate goals because of the use of AI.
04:24You have international organizations talking about the impact on the labor force
04:33and social unrest.
04:35The list goes on and on.
04:38So we've moved away from this, you know, killer robots
04:42and, you know, uh-oh, what happens when AGI comes, to the very real things.
04:48The CEO of WPP last week was, you know, a victim of an attempted deepfake scam.
04:58All of these things, we're seeing the headlines every week,
05:00and people are realizing that there are very real concrete harms
05:05and risks that need to be dealt with.
05:09So let's move now to the U.S.,
05:11and I'd like both you, Anu and Rafi, to talk about, you know,
05:16where are legislatures' minds in the U.S.?
05:21What's the state of play?
05:24Thank you, Jennifer.
05:25And I think the fair way to characterize the U.S.
05:27is that the U.S. is still more focused on the development of AI
05:31rather than regulating AI.
05:34You're absolutely right that the conversation is shifting.
05:37It is shifting globally, and it is also shifting in the United States.
05:40So if you ask the American citizens whether they want to see more regulation of AI,
05:46they say yes.
05:48They are very attuned to these risks that you just mentioned.
05:51But the Congress is not exactly delivering anything approaching
05:56what Dragoș and his colleagues in the European Parliament achieved.
05:59So there was a moment before the U.K.'s first global AI Safety Summit
06:05when the U.S. seemed to be really shifting.
06:08There was an executive order, ambitious by U.S. standards,
06:11that really indicated that the U.S. is willing to abandon
06:15some of its techno-libertarian, techno-optimist convictions
06:19and recognize the need to regulate.
06:21But if you read that executive order closely,
06:24it mainly consists of voluntary commitments,
06:27of various guidelines, of tasking 25-plus agencies
06:32to build the capabilities to regulate AI in the future.
06:36But it doesn't impose the kind of guardrails
06:39that the European AI Act does.
06:41So still, there is a long way to go
06:44on the U.S. following that approach.
06:46And just recently now, we had this bipartisan committee
06:50in the Congress deliberating the roadmap forward.
06:55And the document that was released after a process
06:59that was closed to the public
07:01was very much lacking in ambition.
07:05It was, again, very focused on innovation.
07:07The U.S. seems to be obsessed with legislating
07:10in the shadow of the U.S.-China tech war
07:12and very committed to maintaining the U.S.'s technological supremacy,
07:17even then at the expense of neglecting its responsibility
07:21towards its citizens when it comes to the potential harms.
07:25Well, that's disappointing to hear.
07:27So, Rafi, you know, first, just so people in the audience understand,
07:32tell us a little bit about the Emerson Collective
07:36and, you know,
07:36also your tech background,
07:38so people understand how you're going at this.
07:41Yeah, so Emerson Collective is a social change organization.
07:44We were founded 12 years ago by Laurene Powell Jobs,
07:48Steve Jobs' widow.
07:49And we think about these broad systemic issues as intertwined problems.
07:54So we try to make change through things like either venture capital investing
07:59or philanthropy activities or work in the media or in politics, etc.
08:05So we actually try to take a big view of it.
08:07So I'm fortunate to be CTO there,
08:10and my background is I used to be a VP of engineering at Twitter 10 years ago.
08:15I ran the self-driving car team at Uber for a while,
08:18and then I was lucky to be CTO for the U.S. Democratic Party for a few years.
08:22So when it comes to this particular issue, I actually don't disagree.
08:26Like, the U.S. is sort of failing in a bunch of different ways,
08:29so I'm going to say yes to that,
08:31and maybe I'll put the slight optimistic spin on it just for a second.
08:35But it seems that the focus of legislators has shifted also.
08:40So instead of tackling AI,
08:42which we can't seem to get any agreement on and actually how to do it,
08:46hence the fact that the Senate released some fairly lame guidelines, frankly,
08:51but they have shifted focus to starting to think about things around data privacy and others.
08:56So we have a lot of work to do to catch up to the EU version,
08:59but I look at that optimistically, just like,
09:01okay, maybe we can be 10 years behind,
09:03but at least start moving on those kind of frameworks
09:05because you can't talk about AI without talking about data in some way.
09:09Thank you, Rafi.
09:11So let's now move around the globe to China,
09:14and I know you are also a specialist on that.
09:17You know, how does the Chinese approach to regulating AI
09:21differ from Europe's and the U.S.'s?
09:24So AI plays a very different role in a digital authoritarian country like China.
09:33So AI in many ways is a double-edged sword for the government.
09:36So the Chinese Communist Party is very keen to develop AI
09:41as a tool for facial recognition that allows for mass surveillance.
09:45So it really can enhance the pursuit of the political goals
09:50around social stability and social control.
09:53But AI can also undermine that social stability.
09:56If you imagine the ChatGPT type of use of AI that can generate content,
10:03that content in China needs to be consistent with the message of the Communist Party.
10:09So you cannot just set these ChatGPT variations free
10:16to generate the content that they want.
10:18So China has moved to regulate recommendation algorithms,
10:23deep synthesis technologies, and generative AI.
10:27So as opposed to having more of a horizontal, overarching AI act,
10:32as the Europeans do,
10:34they have sort of taken piecemeal certain dimensions of AI and regulated them.
10:41And now China is in the process of considering an AI act type of more comprehensive regulation.
10:47But I think there we need to understand the specific characteristics
10:51of the role of AI in the country that does take the control aspect seriously.
10:58Okay, thank you for that overview.
11:01And so, you know, we're meeting here at VivaTech just days after a global conference in Seoul ended,
11:09where this was a follow-on meeting to the famous Bletchley Park AI safety meeting in the UK last year.
11:19You know, governments from around the world just met in Seoul,
11:23and they were looking at, you know, okay, how do we go forward?
11:28Because the big criticism of the Bletchley Park meeting was that it really didn't have any teeth.
11:35It was just kind of recommendations.
11:37And so at the end of this meeting, there were 22 companies,
11:42including some of the big U.S. players,
11:45who all signed voluntary commitments to say,
11:49we're going to try to keep AI safe.
11:53So let me ask the panelists, do you believe them?
11:56Should we have confidence in them?
11:58Is that enough?
12:02Well, let's start with Bletchley.
12:04I think one of the criticisms that was brought to what happened last year in autumn in Bletchley Park
12:10was that, in fact, it was too narrow.
12:12And that it focused exclusively on the issue of safety
12:15and the big existential threats to mankind,
12:17and that left out, in fact, the real risks of AI that are confronting us on a daily basis.
12:25And also it was quite narrow in terms of the broader participation of stakeholders
12:29who actually have quite an important role in how AI is being developed.
12:32So I think that if we look at what happened this week in South Korea,
12:36there's already a step forward.
12:38So I want to start with a positive:
12:41first of all, the theme was much broader than just safety.
12:44So already recognizing that, in fact, if we speak of a global governance
12:48or a need for a global governance and approach on AI,
12:51we need to move beyond just the safety element,
12:54but also in terms of broadening also the base of stakeholders that were invited.
12:58So it was already a good step.
12:59In terms of the 22 companies signing up to a set of voluntary commitments,
13:03again, it is what?
13:05The 10th iteration of voluntary commitments?
13:08It doesn't matter who issues them.
13:10The real question is what do they mean?
13:13How do you actually bring in those companies to respect those principles?
13:20What are the mechanisms that you're going to put in place to then,
13:24I don't want to use the word police,
13:26but to actually make sure that those commitments are being respected.
13:30And this is where I think that the model matters. Of course, I'm subjective,
13:33and I don't necessarily preach the word of the AI Act,
13:36but I think that the approach and the model,
13:38because AI Act is not just a piece of legislation,
13:41it is a model of how a democracy,
13:44how us, EU, as a democracy,
13:46understand the interaction between technology and society.
13:49And I think that it is this model which actually has the teeth necessary
13:53to bring to bear when it comes to companies that choose to put ethics aside,
13:58how you actually do that.
13:59And I think this is what is still missing in terms of the global conversation
14:03and the mechanism that actually makes those commitments true.
14:08So, there are several issues here that you've raised,
14:12and one of them is really discussed at length in a recent report
14:19from Stanford University's Institute for Human-Centered AI, called the AI Index,
14:29and they discuss how each of the big tech companies' approaches are completely different,
14:39and since there are no standards on how to measure safety and effectiveness,
14:49they're all, like, comparing apples and oranges,
14:55you can't measure a single company's effectiveness,
14:59and you certainly can't, like, benchmark them against the others.
15:04So, to what extent, like, Anu and Rafi, do you think that's an issue,
15:08and what could be done to fix that?
15:12So, first of all, I think research like that is absolutely central.
15:17It does increase transparency when we have institutions publish
15:21more about the safety practices and AI practices of different companies.
15:25That does enhance accountability, but you're absolutely right.
15:29We need to have some kind of a benchmark against which we are measuring these companies,
15:33and that benchmark needs to be more standardized so that we all agree what are we aiming towards
15:39and whether these companies are meeting those goals or whether they are falling short.
15:44That's the beginning of holding them then accountable
15:47when they are not doing what we expect them to do,
15:49and that's why, ultimately, we cannot have the companies self-assess themselves
15:54against the benchmarks developed by the companies themselves.
15:58We still need to have a democratic governance foundation for AI practices,
16:04and that's why benchmarks like the EU's AI Act
16:08and various other international efforts that are underway are so critical
16:13because we can detach them from the interests of the developers
16:16and then benchmark these companies against broader public interest.
16:20Rafi, you want to add to that?
16:22No, I mean, I completely agree.
16:23Having been an executive at some of these companies,
16:26I think that our normal tendency would be to just create our own metrics
16:31and try to measure against our own metrics.
16:32Sure, we might have our own internal dashboards.
16:35Sure, we might publish them transparently, quote-unquote, to the outside world,
16:38but there's no accountability if you let us grade our own homework.
16:42So I think it's just we need more checks and balances in the system
16:46to actually cause the tech companies to align themselves
16:50with not just the financial incentive that they're working against,
16:54but also these other metrics.
16:56And look, I'm also, I'm not going to say that these tech companies are bad
17:00because I actually don't think they are.
17:01I think they're actually well-meaning in a lot of cases.
17:04I'm not going to say majority either,
17:05but I think having that external form of accountability
17:08and external standards is really the only path to actually keep them in check.
17:14Yes, please.
17:15May I add something on that?
17:17I think one of the benefits of having clear rules is also this one,
17:20that you're going to have very clear rules on how you evaluate,
17:23how do you measure this system, and against what?
17:26Against standards that, again, are very clear, are set,
17:29and they're also clear and the same for all companies alike.
17:33The second point I want to make here is that we have to be very careful,
17:37even as we, as EU, now have the AI Act
17:40and we're going to have the AI Office,
17:41which is going to establish, over the next 12 months,
17:43those benchmarks and those internal evaluation tools
17:47to then be able in one year's time, as the regulation says,
17:50to knock on the door of Google and Gemini,
17:56to knock on the door of OpenAI and ChatGPT,
17:56and actually start evaluating those models
17:58because this is what is supposed to happen 12 months from now.
18:02But I think what's going to be critically important
18:05is that the EU AI Office will also reach out
18:08to the Safety Institute in the UK,
18:09which is right now doing evaluation of these models,
18:12okay, on the basis of voluntary commitments,
18:14but they are doing those
18:16and they are developing tools for those evaluations.
18:18The Safety Institute in the US,
18:20the Safety Institute that is now emerging in Japan,
18:23Canada will certainly soon have an institute of its own.
18:26So I think both to the benefit of the regulators themselves,
18:30but also certainly to the benefit of the companies
18:32and also to the rest of us looking at what is going on,
18:35I think it would be ideal that
18:37as all of these governance structures
18:40are moving along in developing these tools,
18:43that they will be talking to each other,
18:44that they will network,
18:45to create as much commonality
18:47between the benchmarks and the evaluation tools
18:50that are being used.
18:51So, Rafi, as someone who comes from big tech,
18:57you know, we've all heard the arguments
19:00over and over and over again about,
19:01oh, regulation of AI is going to kill innovation
19:05and it's going to put this undue burden
19:08on the younger companies
19:10that might be able to compete with the big guys.
19:13You know, young companies like in France,
19:17Mistral AI, or in Germany, Aleph Alpha.
19:22So, you know, is that just BS?
19:28Or are there approaches
19:35the regulators should keep in mind
19:39to ensure that whatever rules
19:44and standards are put in place
19:49don't prevent the technology from developing?
19:54Yeah, I think there's a false slider
19:57between regulation and innovation.
19:59I don't think these are actually like
20:01you tune up regulation and you tune down innovation.
20:04I think, you know, they are correlated in some way,
20:06but I think you can have both regulation
20:08and innovation at the same time.
20:10I think our partners in Europe will also agree with that.
20:13But I think a lot of it is sort of like
20:15being mindful of where innovation should be occurring
20:19and how do we make sure we do cause the right investments
20:22to still allow that to happen.
20:23Like, for example, in the U.S. context,
20:26we could have regulation today, sure,
20:28but our innovation problems
20:31are not just in the technology sector,
20:33which is innovating in one particular way.
20:34We need to be making better investments
20:36like around things like the CHIPS Act and others
20:39to make sure that we're setting up foundational places
20:41for even more innovation to be occurring
20:44in some ways that we can be looking even further forward
20:46than just the kinds of things
20:48that we're trying to look at right now.
20:50So I think that,
20:51yes, we need to be mindful
20:52that we could be attenuating
20:57an innovation engine,
20:58but we need to make sure
20:59we're seeding the innovation engines
21:01for the next few years
21:02and the next decades on top of that.
21:04Yeah.
21:04Please, Anu, go ahead.
21:06Can I add here,
21:07because I really worry about this narrative
21:11that it's very common
21:12that there would be an inevitable cost
21:14on innovation if you regulate.
21:16And what I hear often in the United States,
21:19being a European, working in the U.S.,
21:21that people draw this distinction
21:22that the Americans are good at developing technologies,
21:25the Europeans are known for regulating technologies,
21:28and it must be because of the commitment
21:31to digital regulation
21:32that the Europeans are not able to innovate.
21:34And I really don't think that is the reason.
21:37So first of all, I want to be clear,
21:39not all regulation is optimal,
21:41but neither is all innovation.
21:42So we need to think about
21:44what kind of innovation we want to see
21:46and then design the regulation
21:48to encourage that kind of innovation
21:50and discourage the innovation
21:52that is societally harmful.
21:54But if I draw the distinction
21:56between why the Americans are doing better
21:59in general in innovation space
22:01and why also the leading AI companies
22:03generally come from the United States
22:05and why the Europeans are behind,
22:08I don't think it's digital regulation.
22:09I think, first of all,
22:10there's no digital single market in Europe.
22:13So scaling tech companies
22:14is much harder in Europe.
22:16Much bigger obstacle than digital regulation.
22:19Second, funding your innovation.
22:21There is no deep integrated capital markets union in Europe.
22:26It is much easier to raise venture capital in the U.S.
22:30Third, bankruptcy laws
22:32and attitudes towards risk-taking in Europe.
22:34It's often fatal to fail in Europe,
22:37whereas in the U.S.,
22:38it's kind of a rite of passage.
22:39You fail and then you go and raise money again
22:42and they give you more money
22:43because you seem to be working on big things.
22:46And a fourth issue
22:47is that Americans have been so much better than Europeans
22:50in attracting global talent.
22:52So immigration is a huge story
22:55behind American tech success.
22:57So over 50% of the startups valued at over $1 billion
23:01in the U.S. have an immigrant founder.
23:02So again, those are the fundamental pillars
23:05of the innovation ecosystem
23:07that you were also referring to
23:09that the Europeans ought to be mindful of.
23:11It's not that if we now decided
23:13to abandon the AI Act
23:14that suddenly all those AI companies
23:17would emanate from Europe
23:18if we don't take care of all those other issues
23:21that need to be addressed.
23:22Well, I'll fully agree.
23:26I'll fully agree and applaud what Anu said
23:28and also I agree with Rafi.
23:30I'll add just one thing on this
23:32because we've been very conscious
23:34and deliberate
23:36when writing the AI Act
23:37to actually move away
23:38from some of the mistakes
23:39from my point of view
23:40that were done with the GDPR
23:42and the way the GDPR
23:43was put out there in the market
23:44and how it left companies,
23:47particularly the smaller ones,
23:48a bit alone
23:49in trying to figure out
23:50what their obligations were
23:52and how they were supposed
23:53to do their business.
23:54So that's why we've put
23:55a lot of enablers in place
23:57dedicated particularly
23:58to the smaller companies.
23:59I dare you to read the text.
24:01The word SME,
24:02which stands for
24:02Small and Medium-sized Enterprise,
24:05features 38 times in the text.
24:08They have free access
24:09to sandboxes.
24:11They have free access
24:12to all the accelerators
24:14at member state level.
24:15They have dedicated provisions
24:17for how they can actually
24:18do their self-assessment.
24:19They are exempted
24:20from a lot of the burden
24:21that is there
24:22for other types of
24:23companies.
24:24So we've tried again
24:26deliberately
24:26to actually even
24:28the keel,
24:28to level the playing field.
24:30Understanding also
24:31that in Europe
24:3298% of companies are small
24:34and medium-sized enterprises.
24:36Because we wanted
24:38to see an environment
24:39of innovation growing
24:40in and with the AI Act
24:44as a reality in the market.
24:46So I'd like to ask
24:48a follow-up question to you.
24:50So I know that the EU
24:53did a very in-depth,
24:56intensive consultation
25:00with all the actors
25:03that could be affected
25:05to come up with the legislation
25:07as it now stands.
25:09And it was a big fight.
25:10It was a huge hassle.
25:12It's almost kind of a miracle
25:14that you managed
25:15to get it passed.
25:17So kind of looking back,
25:20you know,
25:20what did you learn
25:22on that journey
25:23that, you know,
25:24other regulators
25:25who are just starting
25:27to grapple with this,
25:28like in the U.S.,
25:30could learn from?
25:31And the other question
25:34is about talent.
25:35Like, you're setting up
25:36this AI office
25:38where you need, like,
25:40top-level technical people
25:42to be able to assess
25:44the safety.
25:46And let's face it,
25:48there's a global shortage
25:49of this kind of talent
25:51and they're not naturally,
25:53you know,
25:53attracted to go work
25:55for, you know,
25:56regulatory bodies.
25:59So, how are you handling that?
26:02Well, let's start
26:03with the first.
26:03And I think you said
26:04it all yourself.
26:05For me,
26:06the one thing
26:06that I take away
26:07from this process
26:08is indeed collaboration
26:09and being open
26:11to all those
26:12that actually had something
26:13to say in this process.
26:14I think the arrogance
26:15of being a policymaker
26:17that just designs a rule
26:19in the ivory tower,
26:20whether it's the commission
26:21or the parliament
26:22or the council
26:22and believes that
26:23that is going to be
26:24the set of rules
26:24that will change the world,
26:25I think was wrong.
26:27This is why
26:27as parliament
26:28we asked the commission
26:29which by the way
26:30announced initially
26:30that they would come up
26:31with the legislation
26:32within the first 100 days,
26:33and we said
26:33don't do that.
26:34Go through a consultation process,
26:36understand what it is
26:37that you want to regulate,
26:38talk to everyone.
26:40So, it was an unprecedented move
26:42to have the white paper
26:43and the whatever couple of months
26:45of consultation
26:46where I think hundreds
26:47and hundreds of inputs
26:48were provided.
26:49I think that was already
26:50a very good start.
26:52And also us as parliament
26:53during the two and a half years
26:54of preparations
26:55and negotiations,
26:56we have constantly been open
26:59to listen to everyone.
27:00People ask me
27:01whether I've been lobbied
27:02and I said no,
27:02I have not been lobbied
27:03because I have never taken
27:05whatever contribution
27:06came to me,
27:07whether it came from companies
27:08or from civil society
27:09or from trade unions
27:10or from anyone
27:12that had something to say,
27:13I did not take it as a lobby.
27:15I took it as a contribution,
27:17as an idea,
27:18as a thought
27:18which helped us
27:20as regulators
27:21understand all of the facets,
27:23all of the ins and outs
27:24of what made sense
27:26to put in the legislation
27:27and what did not.
27:28So, I think that this process
27:30is something that we also need
27:31to now see in the implementation.
27:33I think an implementation
27:34that will be just again
27:35locking itself up
27:36in the regulator's tower
27:38and believe that
27:39they are going to implement
27:40and enforce the rule
27:41from there
27:42is going to be
27:42a wrong way to do it.
27:44It's going to have to stay open,
27:46stay open to stakeholders,
27:47bring them in
27:48so that the reality check
27:50is constantly happening
27:51and then in terms of talent
27:53certainly this is going to be
27:54I think one of the biggest challenges
27:55for the new regulator now
27:57not only because
27:58there is a competition now
28:00again as I said
28:01there are safety institutes
28:03let's say public authorities
28:04doing this kind of work
28:06in London,
28:07in US,
28:08in Tokyo
28:08and soon I think
28:09in many other jurisdictions
28:10around the world
28:11and the specialists
28:13are not
28:15flourishing everywhere;
28:16they are in short supply,
28:17but I am encouraged
28:20I spoke recently
28:22to the commission
28:24in charge
28:24of setting up the office
28:25and they told me
28:26that at least
28:27in the first two rounds
28:28of vacancy notices
28:30they have actually received
28:31quite a number
28:32of good applications
28:34so it seems that
28:35the stimulus is there
28:37the attraction is there
28:38and that's a very good thing
28:39I think
28:39these are people
28:40that are attracted
28:41mostly by the mission
28:42certainly not by the salaries
28:43but that's a good thing
28:45but I think
28:46that will remain
28:46a constant challenge
28:47and not only
28:48and this is where
28:49in fact my bigger concern lies
28:52I'm less concerned
28:53with the ability
28:53at the EU level
28:55to bring in the talent
28:56I'm more concerned
28:56at the national level
28:57because let's not forget
28:58that the implementation
29:00is not only in Brussels
29:01the implementation
29:03is going to be also
29:04in the 27 different capitals
29:05of the EU
29:06and that's where
29:08a national regulator,
29:09with national salaries,
29:10I think is going to have
29:11an even harder time
29:12finding that talent
29:14and it's going to be
29:15even more important
29:16because sandboxes
29:18and the interaction
29:19between those smaller companies
29:21start-ups
29:22scale-ups
29:22that are doing the AI
29:23on a day-to-day basis
29:24their first interaction
29:26is going to be
29:26with the national regulator
29:27way before they actually
29:29interact with the EU AI office
29:31so I think it's that ability
29:33of the national member states
29:34to bring in the talent
29:35that is going to be
29:37the biggest challenge
29:38and the biggest test
29:39for the implementation
29:40of the AI Act
29:40thank you
29:42and so
29:42now
29:43as a follow-up
29:44I'd like to ask
29:45both you
29:45Anu and Rafi
29:48it's clear
29:49it's clear
29:51that
29:52the EU
29:53has bent over backwards
29:54to make sure
29:55that all the voices
29:56are heard
29:58is the same thing
30:00happening in the United States
30:01or
30:02is big tech
30:03really dominating
30:04the discussion
30:05and the lobbying
30:06to US Congress
30:09and Senate
30:11I am afraid
30:13that the access
30:14to decision makers
30:16in the United States
30:17is not equal
30:18so
30:19what we've learned
30:20is that
30:20just the logic
30:21of the elections
30:22in the United States
30:23the costliness
30:24of running
30:26for an office
30:26just dictates
30:28that you cannot afford
30:29not to listen to
30:30the corporate interests,
30:32and as a result
30:33the corporate lobbying
30:34is very powerful
30:36there's a lot of money
30:38that is spent
30:39on lobbying
30:40the lawmakers
30:40and they get
30:42their day
30:42before those lawmakers
30:44and there's a lot
30:45of research
30:46showing how
30:47yes there's a lot
30:48of efforts
30:48to lobby
30:49the European regulators
30:50after all
30:51often if you capture
30:52Europe
30:53you can be seen
30:53as capturing
30:54the whole world
30:54given the influence
30:55that the Europeans have
30:56but the research
30:58shows that
30:58the corporate lobbying
31:00in Europe
31:00is offset
31:02often by
31:03in equal access
31:04by civil society
31:05to the lawmakers
31:06so the voices
31:07that get heard
31:08and incorporate
31:09into lawmaking
31:10are much more equal
31:11and that is just
31:13very hard to change
31:14in the US
31:15and right now
31:16I think when it comes
31:17to AI specifically
31:18the corporate lobbying
31:20has a very strong message
31:21by saying
31:22if you regulate us
31:23too much
31:24you are eroding
31:25the very asset
31:26you have
31:26in the tech war
31:27against China
31:28that the US
31:29cannot afford
31:30to set itself back
31:31this is a
31:32economic
31:33technological
31:34ideological
31:36geopolitical
31:36military fight
31:37and that narrative
31:39seems to still resonate
31:40in many corners
31:41of Congress
31:42I mean, I'm not going to disagree. The way I look at it is sort of like a three-legged-stool problem: we have the tech companies, the tech giants, as one of the legs; we have our regulators on another; and we potentially have civil society as the third.
31:54And right now the tech companies are clearly driving most of that conversation, have most of the access, all of those things, and it's sort of up to us to figure out how to prop up the other two legs.
32:06So I'm inspired by certain programs in the US that try to bring those other voices into the building.
32:12There are a bunch of fellowships which embed post-PhD students and postgraduates into different committees across both the House and the Senate, to try to provide more neutral views of how these technologies can work.
32:25That does not counterbalance the lobbying efforts, I'll definitely admit that, but at least there are efforts right now to try to bring in this type of talent that's desperately needed.
32:35You know, I was on a call with our AI institute just a few weeks ago, listening to them and their trouble recruiting the kinds of people they need, because they're competing with million-dollar salaries, at minimum, from some of the big tech companies.
32:50And trying to get those people to come over, even if they're mission-driven, a million-dollar salary is a very hard thing to turn down.
32:56So trying to help them out, and trying to get the right kind of brain trust, is going to be a huge challenge for us.
33:04So, you know, the three of you have given us a really great lay of the land of the current state of regulation.
33:16Even if the EU has managed to pass the AI Act, it's still going to take some time before it is actually applied, and it will take even more time before similar rules could be effectively adopted around the world, ones that have real teeth.
33:36So my question for you, Rafi, is: to what extent can we use existing technology to try to rein in the harmful aspects of AI in the short term?
33:53And, you know, that might include things... I just want to mention that it's not just about governments here and the big tech companies; large enterprise also has a big responsibility here.
34:10And the Stanford AI Index report mentions specifically that when they did a poll of, I can't remember how many, 1,000 or 1,600 big companies about what kind of guardrails they had put in place, most of them admitted that they were using AI and had not put the guardrails in place.
34:33So there's a responsibility there on the part of the big companies, but there are some technology tools, like risk management software and other things, that can be used.
34:45So let's talk about how technology can help fill the gap until we get the right regulations.
34:50Yeah, I mean, normally I don't think we should be fighting technology with technology; however, I think you're exactly right.
34:57If you look at the Fortune 500 companies, they're all adopting some form of AI tool, policy, etc. inside their workflow, and they don't have the talent; they're not developing their own work.
35:09So what they're looking for are things like what you said: insurance management, risk management, privacy-policy scanning tools, understanding what's really going on with their data sets.
35:19So I think there is a market opportunity to actually create those tools, to actually give our big Fortune 500 the right kinds of internal checks and balances, so that they can then put market pressure on the big platform companies.
35:33Just saying "our internal processes said that we can't actually do that" would be a pretty strong signal to, say, an OpenAI or an Anthropic or a Google about how they're developing things.
35:43It's kind of like what our executive order attempted to do, which is sort of use the buying power of the US government as a way to force these companies; the Fortune 500 have the exact same issue going on, they just need the tools in order to do it.
35:57So, like I said, I think there's a market opportunity to develop those tools, around insurance management and others, that could provide that market signal back.
36:04Great. So, with the few minutes that we have left, I would like each of you to talk about, given this current state of affairs, what are the key action items that you'd like to see happen in the near to mid term to make sure that we're safeguarding against the potential societal harms of AI.
36:30I'll start with you, Dragos.
36:33Well, I will go back to the discussion that we just heard about the summit this week.
36:38I think that the global conversation on AI, and the drive to find a global framework for governing AI, needs to continue, and I think there needs to be renewed political investment in making sure that that is delivered.
36:57I understand that this is also the ambition that France has, since we are in Paris today, for the follow-up summit that is going to take place in Paris in February of next year: both in terms of broadening the base of those that would be participating, but also in terms of broadening the themes, and trying to have a very serious discussion of how we're going to look globally at this technology.
37:22Because we're going to be faced, maybe in one or two or three or four years' time, but not too many, with some important questions that we will face as a society, looking at the advances of this technology.
37:33I heard Elon Musk yesterday saying that in two years' time he believes we will have AI that will be smarter than humans; I actually think it's going to happen faster than that.
37:45And we have to be prepared for that conversation, and we have to be prepared with those governance tools in place at a global level, because it's not only going to be about Europe or the US; it's going to be about the rest of the world.
37:57What we have as a duty, at least this is how I feel it myself...
38:10...actually works, and that we can actually do both: protect society, protect citizens, and also leave the room and create the enablers and the stimulants for AI to play the good cop, the good contribution that it can bring to the economy and society.
38:28And I think that that could be a test and a testimony that actually you can do that, that it's a model that works, and that it could be used to stimulate the global governance conversation that everyone needs.
38:40Thank you, Dragos. Anu?
38:43Absolutely agree on the global effort that needs to be undertaken.
38:50But maybe a message to the governments around the world that are still on the edge, that still have not decided to embrace their role in the governance of AI in their societies: I would really urge them to deploy the mandate that they have.
39:05Ultimately the states, the governments, are the fundamental unit around which societies are built, and they need to embrace that role.
39:13They are the ones who need to protect their citizens and their societies, and they need to be entering this space and using their mandate, and using it well.
39:22So that, I think, is the first thing, and then comes the question: if the governments make the decision to legislate, what should they be focusing on?
39:31And I think that is a big challenge in AI, because it is such a multifaceted technology: we can talk about existential, long-term risks, or the risks that are already here and now, and maybe I would start from the already here and now.
39:43This year in particular, I am very worried about elections, very key elections around the world, and the potential role of AI in amplifying the existing problems that we have with disinformation and the deterioration of our relationship to truth.
40:00And if we undermine the very institutions that ought to be legislating, I have less hope going forward.
40:07So that's why I think it also speaks to the urgency of legislating, and of focusing on protecting our democracy; because if I'm worried about the digital authoritarianism of China, it's also not consistent with your commitment to liberal democracy if you outsource this to the tech companies as opposed to democratic governments.
40:25Thank you.
40:26I'll take a very US-centric view for a second, being the American on the stage.
40:30For me, I just want my country to get off the starting block. So right now, we need to take...
40:42...and just, instead of taking it all in one shot: things like a federal data privacy law; if there's bipartisan agreement on that, we should just get something like that done.
40:51Something like a Kids Online Safety Act, again with large bipartisan agreement: we should just get something like that done.
40:57Things around synthetic deepfakes of sexual imagery: we can all agree that that kind of stuff should not exist on the internet, so we should just get that done, as a way to build momentum so we can start tackling some of the bigger problems that our partners across the ocean have taken on.
41:13So for me, I just want to start really chipping away at these things we can agree on, and just start moving.
41:21We have roughly two minutes left, so I think we could probably take one or two quick questions from the audience.
41:32The gentleman over here.
41:38Thank you very much.
41:41As artificial intelligence can affect everyone, what would be your concrete solution to enforce the democratic governance of artificial intelligence and, at the same time, keep pace with the exponential growth of such technologies?
41:59Thank you very much.
42:03Who wants to take that?
42:06Dragos has the democratic mandate, I think.
42:10Well, of course I would say: do what we did.
42:14Because I truly believe, well, again, I'm subjective, but I truly believe that we've really put a lot of thought into finding that solution, into finding that balance, looking again as democracies, and understanding also the role that Adam was speaking about.
42:30As rule-makers, the responsibility you have is towards society, towards the citizens that vote for you, who are confronted with those real risks, the risks of today, and in whom you don't want to grow mistrust; because with mistrust, we already have a society that's very divided and polarized, and you don't want AI to go down the path of vaccines, right?
42:51So again, as policymakers, you have that responsibility to act, you have this responsibility to balance, and again, that's what I think the Act is.
43:00So I think we're going to have to close.
43:05In sum, it looks like Europe will once again be the world's policeman, but you have created legislation after listening to many voices, and you have a means of trying to evaluate and enforce in place.
43:31Now we have to test it to see if it works, and hope that other governments are going to step up and adopt rules and regulations that have real teeth and hold the tech companies accountable.
43:50So with that, I'd like to ask the audience to give a nice round of applause to our panelists. Thank you very much.