The Deliberate Algorithm: Choosing AI's Ethical Path

Category

🤖
Technology
Transcript
00:00 Good afternoon. I'm Jennifer Schenker, founder and editor-in-chief of The Innovator, a global publication about technology, and it's my pleasure to moderate this session on The Deliberate Algorithm: Choosing AI's Ethical Path.

00:16 I'm very pleased to have with us here today, to my immediate left, Samantha Gloede from KPMG International, who is leading their trusted AI transformation efforts. Next to her is Arianna Legovini, Director of Development Impact Evaluation (DIME) at the World Bank. Next to her, Dr. Marwan Alzarouni, CEO of AI at the Dubai Department of Economy and Tourism. And Annu Nieminen, CEO and founder of The Upright Project.
00:55 So, I'm just going to take a minute to give some framing for the discussion. Some 66% of people are using AI regularly, and 83% believe that the use of AI will result in a range of benefits, according to a recent KPMG report. Indeed, when used effectively and responsibly, AI holds the potential to make businesses more efficient, accelerate progress on sustainable development, and improve education globally.

01:33 Yet trust remains a critical challenge. Only 46% of people globally are willing to trust AI systems, which are prone to hallucinations, blur the line between truth and fiction, and threaten their jobs. Challenges in scaling AI-for-social-good initiatives are persistent and tough.

01:58 Equity is also an issue. Nearly 3 billion people are not connected to the Internet, cutting them off from AI for any purpose. Even if they can access AI and want to use large language models, the Global South faces the problem that the underlying data available on the Internet is skewed towards the Global North and male-centric.

02:25 This panel will try to tackle some of these issues and more. So, let's start with you, Samantha, with AI's ability to do societal good. How are organizations using AI to help further social good and sustainability?
02:46 Thanks, Jennifer. I'm really excited to be here today to talk about such an important topic as AI ethics and building trust in AI. I think trust is a really important factor in driving successful AI adoption. And what better thing to talk about than the impact of AI on social good and on sustainability; I think they're two very important use cases to discuss.

03:10 You mentioned our survey. We very recently released the Trust in AI survey in collaboration with the University of Melbourne. It's the largest survey of its kind on trust and attitudes towards AI: we surveyed 48,000 respondents across 47 countries, so we have really deep insights from it. One that I wanted to call out as a grounding for today's discussion is that there was a very obvious difference between the demographics of the emerging economies and the advanced economies. In the emerging economies there was increased trust, increased literacy, and increased acknowledgement of the opportunity for value and growth from AI. I think that's largely due to the significant opportunity for economic development and growth in those areas. But the fact that respondents felt they had stronger literacy really helped them to use AI in an appropriate and effective way, and to trust it to do so. So I thought that was an interesting insight to pull out of the survey.

04:25 In terms of examples of how AI is being used for social good, I'd love to call out a couple in healthcare, education, and climate change. Think about healthcare: I work with quite a few life sciences organisations, and the ability to use AI in diagnostics is absolutely incredible, being able to diagnose and therefore treat disease more effectively, particularly in emerging markets. There is also predictive analytics to predict the outbreak of disease, whether in developing countries or, as we've all just lived through with the pandemic, being able to predict an outbreak and therefore be better prepared to treat, test, and recover in an accelerated way. I think that's an exciting opportunity. And then, of course, with telemedicine, for people who live in rural or remote areas, having access to scarce specialist medical care is a real game changer.

05:30 Then with education, think about children in remote areas of developing nations who don't have the same access to education that children in many other countries do. AI tutors and AI translation capabilities allow amazing curricula to be delivered to children across the world in a language they understand. As the mother of an eight-year-old, I feel very optimistic about that opportunity.

05:59 And then, of course, with climate change, there are AI satellites that can detect illegal deforestation, and predictive analytics to predict whether there's going to be a drought or a flood, allowing communities to better prepare for and respond to environmental crises that are happening more and more often.

06:20 So these are just a couple of examples. And I know we're all going to talk about ethics in today's panel, but it's so important: in less regulated countries, making sure that society's privacy is maintained and that there won't be mass surveillance of communities using AI; making sure that models are free from bias so that decisions made with AI are made in a non-discriminating way. There are just so many ethical considerations to think about, and we're going to get into that today.
06:59 Thank you so much, Samantha. Let me turn to you now, Arianna. We've heard a bit about AI's ability to do good, but to be truly inclusive and equitable, there needs to be access to infrastructure, and the global computing divide is, in fact, getting bigger. The Tony Blair Institute, in a recent report, found that the US built more data center capacity in 2023 than the rest of the world combined, excluding China. And it notes that compute infrastructure, and the difference in its availability from country to country, risks becoming the basis of a new digital divide. So can you talk a little bit about how the World Bank is tackling that challenge?
07:54 Thank you. I actually think compute is more than a technical issue. It is really an institutional, a fundamental issue moving forward, and it requires not only compute capabilities but a whole series of other interventions around it.

08:13 From the World Bank side, we start with energy, and actually with large investment in energy access as a sine qua non for digital expansion. One example of that is the M300 initiative to connect 300 million people who have zero access to energy, and half of those connections are being done with renewable energy. It so happens that once you set up solar mini-grids, you also set up internet connections. All of a sudden, these communities not only have access to light, but also have access to the internet and the possibility of introducing a lot of digital tools and AI tools. One example of that would be AI tutors, where you can teach children in a fraction of the time. In Nigeria, we find that children learn as much in six weeks with an AI tutor as in two years. Now, a new paper in Nature shows that at Harvard, with the best professors and the best students, AI tutors double the speed of learning. Can you imagine: in Nigeria that's multiplied many times over, because of the lack of access to trained teachers and the same type of opportunities. So we think energy is one driver.

09:49 The second driver is digital infrastructure. We are working, especially through the IFC, investing with the private sector in different locations to expand compute power — for example, in Kenya, with investments in Raxio and the Africa data initiatives. These are obviously not just World Bank; when I say "we," what I mean is the World Bank leveraging a lot of partnerships, not only with the private sector, but with governments and others. Why governments, and why the World Bank? We are actually able to elicit and support the enabling environment through which the private sector will then want to come in and invest. Some of the enabling environment is literally broadband investment, large broadband investments. But the second part is developing basic infrastructure such as ID4D, digital IDs for all citizens across the world to access fintech, e-government, and so forth.

11:06 Where we contribute from the research side is also understanding the skills and the impact of different initiatives. So my team is focused specifically on, first, R&D to improve our products; then what we call "lead," to enable adoption across our pipelines; and third, "accelerate," where we work at government scale, piloting the last-mile adoption of a lot of technologies and supporting governments to move from where they are today into the future. We take inspiration from some of the countries that have made a lot of investments in this area, but then there is a lot of work on the ground to actually build capabilities and understanding. So we use real-time data and a lot of field experimentation, and do a lot of A/B testing, to understand whether things actually deliver on their promises.
12:14 Okay, thank you. Let's move to you now, Marwan. You're here representing the UAE government, and it's amazing how the government has embraced AI. It's become, I think it's fair to say, a leader globally. You have several hundred AI officers across the different ministries of the government. And you've also created a ministry of things that are not possible. Is that the way to put it, exactly?

12:56 Ministry of Possibilities.

12:57 The Ministry of Possibilities. So basically the idea is that whenever something new comes up that AI introduces — like, say, combining some kind of flying car with I don't know what — things that the ministries today simply could not handle because they don't fit into their portfolios, this ministry is able to put things together. So tell us a little bit about that and the whole approach of the UAE government.
13:33 Thank you, first of all, for having me. I think one of the things that we did very early on is have a Minister of AI to start with: a young person who is very driven, not only on the technology side, but also on the policy making and policy adoption that is needed, that is softly needed, to empower innovation. One of those things is bringing the right people to the table to tackle the right problems.

14:02 The Ministry of Possibilities was a vehicle, a virtual ministry. There is no sitting minister there; that role actually falls to whichever ministry or whichever stakeholder has the most influence in that sphere but does not completely own the whole domain where that innovation could come in. In that way, we enable everybody who is relevant to that use case to join the table, join the conversation, and come up with a solution to the problem.

14:29 One of those problems that we had very early on was digital identity, which is now one of the world's first digital identities that is not only a digital identity but also a single sign-on solution, a digital signature solution, and a blockchain vault for all the citizens and residents of the UAE. That's called UAE Pass. It also enables other platforms, like a universal wallet and a universal services layer on top of that digital identity — called Dubai Now, for example — through which you can make automated payments. You can do a lot of things autonomously without needing government oversight, including peer-to-peer exchange of value.

15:11 All of this is enabled and does not fall under one ministry or one government department, so you need vehicles like the Ministry of Possibilities to do that. But more importantly, you always need to involve the right decision makers and the stakeholders within the private sector, to come in through public-private partnerships to enable innovation to happen, remove as much red tape as possible, and be extremely agile in moving projects forward — including understanding where the value is, and where we need to remove some of the existing laws and regulations to enable innovation to happen.
15:50 Thank you. And I know the government is doing a lot of interesting things, and we're going to come back to those. But I do want to move to you, Annu. Back in 2016, the day after the presidential election in the United States, when Donald Trump was elected for the first time, I was in the room at a technology conference in California where Mark Zuckerberg stood up and told the audience there was absolutely no way that Facebook had had any influence on the election. Now, whether he truly believed that, or was being cynical or less than truthful, is hard to say. But we now know that Facebook absolutely has an impact. And so, Annu, you and your company focus on the unintended consequences of AI. Talk to us a little bit about how you're working with companies to get them to think about those consequences before they actually release things into the market.
17:09 Thanks a lot. Yeah, it's such a fascinating topic whenever the question of AI and ethics comes up. For me, it really boils down to: what are the concrete applications that humanity — and especially the global tech scene and the global business world that's convening here today — is utilizing AI for at the moment? And are we seizing the biggest opportunities?

17:35 I run a company called Upright. Long story short, we build a quantification model to measure both the negative and the positive impacts a company has on the surrounding world, shortly put. And what I find fascinating when we discuss different kinds of technologies and their impact is: to what extent do we even understand the actual relevance of a technology? When it comes to AI, we talk a lot about it being transformational. We all know about exponential versus linear, and sometimes, as an engineer, I have a problem with how that's treated a bit wrongly, mathematically speaking, but let's not go into that right now.

18:14 We talk about how AI is so transformational, and when it comes to pure computing and pure algorithms, that's exactly right. But let's say that an AI solution is able to make a marketing technique or a marketing process 10x more efficient. That still doesn't mean that the actual outcome of it is 10x more. And this is what I dedicate my life to: really understanding how companies turn resources into outcomes of value. In the discussion of AI, I would love to see a lot more discussion around the concrete problems that we are now throwing the best brains at. There is, understandably, a lot of discussion where you think you are using AI when you are using, let's say, ChatGPT as a business leader — and nothing wrong with that, I do it too — but that's just a drop in the ocean of the actual impact that is happening. And it's not always the person who is aware of using it who is actually the recipient of either the positive or the negative impact.

19:23 Let's take the concrete example of drug discovery, the new leaps of progress that the pharma industry can make. There it's easy to understand that there may be just five individuals who are able to come up with a really hardcore solution to a problem. But the people it reaches — assuming it's a good drug with a positive impact — that's a whole other story.

19:49 So that's why I always try to emphasize the discussion of what the tech world is currently using AI for. What part of it is truly transformational, and what part of it, when you zoom out a little, is actually incremental? A lot of these productivity shifts in business, even though they are really mind-blowing — as a user, I'm mind-blown every day by a new AI technology — might not always be that from an impact perspective. We should be able to tell the difference between the really impactful and the more incremental ways of utilizing AI.
20:27 Thank you. So when we talk about impact, infrastructure also includes data systems, policy environments, and institutional capabilities to collect, manage, and act on data in real time. So, Arianna, can you talk to us about some real-life examples of how AI can do good?
20:55 Thanks. Yes. Actually, this is the main focus of our AI team: to understand how to leverage the power of AI for good in low-resource environments. A lot of the evidence coming out of scientific studies shows that AI is a great leveler, meaning that there is a distribution of skills in all of our workforces, and that distribution of skills affects the overall quality of what we deliver — from the best doctor to the worst doctor, from the best nurse to the worst nurse. What we see with the introduction of AI tools is that the bottom gets lifted up. There is even a study in the UK looking at firms, and the lowest-productivity firms are the first to adopt AI tools, which actually helps them level up with the rest of the market.

21:57 So this is quite interesting. And how can we apply this concept to the delivery of public services in low-resource environments, where resources are really scarce and skills are very low? The idea is to introduce these tools in a way that allows even community workers to act more like physicians, by having in their hands the medical knowledge that is doubling and tripling every year — to really be able to look at their patients with physician-like support and to diagnose and provide treatment plans that are less likely to be erroneous.

22:46 The idea is also to bring in, as I mentioned earlier, digital gamification of learning apps to solve the illiteracy crisis in Africa and India. We're talking about a combined 1.1 billion people who cannot read. And we have solutions where we can provide a phone and resolve the illiteracy issue within eight months. Now, how are we going to do that?

23:21 The other issue is using predictive AI for different purposes. We have invested in predictive AI to predict food crises, on the basis of evidence that we have developed through experiments in Bangladesh and Niger, among others, showing that intervening before a crisis hits actually increases the impact of our interventions — both on food security, but also on mental health and other outcomes that would otherwise lower households' ability to earn in the future. So it is very interesting to link our interventions to predictions six to twelve months in advance, allowing us to do much better and act much faster.

24:18 Second, using predictive AI, for example, in the health system, where more than 50-60% of the costs are accounted for by the 5-10% of the population at high risk. By using predictive models to narrow down who is at high risk for heart failure, for gastrointestinal disease, for any of the high-cost hospitalizations, it has been shown in some settings that we can decrease hospitalization by 46%. You can imagine the impact on the health system, but also on the quality of life of the people who have been contacted and helped to prevent those major crises in their lives.

25:09 The last thing I want to mention — an important example, given the increase in violence all over the world — is what we have done in Nigeria to localize a large language model to understand Nigerian languages. Obviously, we know large language models have been trained on US, UK, Canadian, and Australian English, but the multiplicity of languages that we see in other settings requires customization. We have used this language model to identify hate speech on social media. Now, hate speech online is very highly — I would say causally — related to offline outbursts of violence; we can see the patterns of increasing hate speech and violent outbreaks across multiple years. So we have used this to understand who the generators of hate speech are and who the propagators of hate speech are.

26:16 Platforms try to manage content, but they have very limited ability to do so — we're talking about 10-20% at most. So we had a somewhat different idea, which was to convince people to make better decisions for themselves. We used Nigerian influencers to target propagators with pro-social messaging, and we have seen a large reduction. We have done this through experiments across the networks of propagators, so we have established the causal impact of this messaging on the populations that were targeted. We see, as a first instance, a 20-30% reduction in the propagation of hate speech, and we believe that will turn into a reduction in offline violence as well.

27:18 So it is not so much whether we can do good with AI. It is how much we're involved, how much we're willing to look at problems in their local context and localize the use of these tools — tools that seem very easy for us to use in the context of Europe and the US, but are not so easily and immediately adaptable to local contexts.
27:43 Thank you. So I think that's a great response to Annu, who was asking: are we really applying AI to the big problems? And you have given some great examples of how that is being done.

27:58 But can I just say that we are looking for partners to scale up, because these things are very resource-intensive. We would love to have more partners join in and help us replicate and scale these ideas all over the world.
28:15 Great. Well, I want to move now into the topic of ethics, because there is no global ethical standard; each country, and different populations within countries, have different ethics. So how do we create frameworks locally to come to grips with some of the potential downsides of AI? I want to turn to you, Marwan, to talk about the ethical framework and tools that have been created in Dubai and the United Arab Emirates.
28:58 Thank you. Thank you, Jennifer. I think one of the things that we were very blessed with in the UAE is that we started the Dubai blockchain strategy in 2016. That helped us realize that a lot of the infrastructure needed for blockchain was not there. So what we started doing in parallel is creating a paperless strategy within Digital Dubai, which enabled us to re-engineer all the existing processes from scratch with a new mission: to remove paper completely. We were able to achieve a 99.9% paperless strategy, which drove the blockchain strategy.

29:46 But very quickly, we also realized that to do blockchain, we have to make sure that the data is cleansed and orchestrated in the right way so it can be ingested into blockchain systems. So we created the Dubai Data office to categorize and classify data into different categories: private data, shareable or shared data, and open data that is completely available to anybody who wants to access and digest it and draw inferences from it. Having this huge infrastructure enabled us to move very quickly.

30:23 And we realized, back in 2017, that we needed to create an AI ethics committee to show the world how we were actually looking at AI. This was way before LLMs and all these innovations came out. On that committee we had people from big tech companies and from public policy entities, international and local, as well as local digitization entities, startups, and academia — all of them included.

30:56 The result of that was a universal AI ethics toolset that enables individuals, companies, and governments to self-assess where they stand when it comes to AI ethics and what priorities they should set. It provides them with a tool to show whether their AI is biased in one way or another, whether it is transparent or not, and where they lie on that scale of transparency — or even on the scale of things like accountability of the AI model they use, its sources of knowledge, how clean that data is, and how they can prove that mathematically or with any other kind of model. So that makes you self-aware and proactive, even before you start your AI projects, so you can maintain a good AI posture moving forward.

31:55 Starting that early, and with the younger generation, with actual agile use cases, enabled us not only to move faster but, along with all the infrastructure I mentioned before, to move steadily and build upon solutions. I think that's one of the steps for the next phases coming up.
32:13 Great. Thanks. Now I want to come back to you, Samantha: KPMG has also developed a framework, and I'd like to ask you to tell us a little bit about that. And then I want to jump back to you, Annu, to talk about how startups should be thinking about reining themselves in. So, Samantha, to you.
32:35 I think your point was really valid: there is no consistent regulation across the globe. The EU AI Act has in some cases become a sort of de facto regulation, but because there isn't strong regulation in many countries, the burden of responsibility has really fallen to the organizations that are developing and deploying AI. So at KPMG, we developed our trusted AI framework, which we think is a very holistic approach to managing AI risk — not only from a systemic control perspective, but also behaviorally, making sure that people interact with AI in an ethical and responsible way.

33:18 With legacy technology, I think the biggest risks were always privacy, data integrity, and cybersecurity, and they certainly remain critically important in an AI world. But in an AI world, the risk ecosystem is very dynamic and very complex, and when you add AI agents into the picture, it becomes ever more dynamic, complex, and difficult to manage. So what we are doing for ourselves as client zero, and also helping our clients to do, is to implement this trusted AI framework so that the outputs and the adoption of AI can rest on a basis of trust.

33:57 Part of that is making sure there's a very robust AI governance model in place, with accountability and oversight for the AI systems, and that responsible-use policies and controls are implemented across the organization, so that the system is controlled and people know how to use it responsibly. An AI inventory is very important for true transparency about what AI systems you have in your ecosystem, along with generating system cards or model cards that allow the users of AI to make a measured decision on the value or impact the AI can provide, weighed against the potential risk exposure — because you're never going to be in a zero-risk environment.

34:47 Then, second to last, continuous monitoring and testing of many of these risks, including the explainability of the models, prompting them to make sure they are free from bias, and asking what the sustainability impact of the models' energy consumption is. There are so many different things that need to be considered, so automating the monitoring and the quantification of all that is really important.

35:20 And then finally, a very strong AI literacy program: making sure that users understand how to use the AI to best effect and accuracy to get the best outcomes, but also training them to understand the ethical and responsible considerations of interacting with AI — so that it's well controlled, and you can also trust that people are going to interact with it in an ethical and responsible way.
35:48 Okay, thank you. So, Annu, what does the startup equivalent of this kind of framework look like? What sort of questions should startups in this room be asking themselves when they're developing new AI technologies?
36:05 Great question. Listening to this discussion, and of course to the many discussions ongoing right now, rightfully so, on the ethics of AI, I'm sometimes slightly worried or frustrated by the trend, also in the startup world, of being a little binary: either being basically scared of what AI will mean for all of us, or, on the other hand, saying, yeah, AI is going to solve all of our problems, we're fine.

36:40 I think the way to tackle that, and the way that the startup communities in many places in Europe are tackling it right now, is, first of all, to go one level below AI. It's a big difference whether you're using, let's say, natural language processing technologies to summarize science to understand impact — a concrete example from my company — or whether you are talking about completely agentic doctors, or decision-makers making decisions about the lives of babies in developing countries. We need to really slice and dice this a lot. My advice for startups — or not advice really, but something that comes up in the startup community with some of the best founders I get to talk about this with — is that you need to be more granular and really be able, first of all, to explain how you are utilizing AI.

37:33 And you need to also accept that no one is in the driver's seat, no matter what somebody may have an incentive to claim. The EU doesn't yet know how we're going to regulate this, and neither really do any of the other regions, because we don't really even know what the technology will bring us. The good thing for startups is that startups are excellent in an uncertain environment; they're excellent at dealing with a lot of curveballs. The way in which AI will be regulated and the frameworks that will be put in place are unknown to all of us now, and anyone saying anything else is probably not being very sincere.

38:16 So I think this is something that the startup community will navigate through well, but it will require a new level of transparency — and maybe using less of the letters A and I, and more of the concrete descriptions of what is being done, with what technologies, how, by whom, for whom, and so on.
38:39 Thank you, Annu. So we've talked about business, government, and startups. What about the general public? How do we educate them about the consequences of using AI for their data privacy, and ensure that everyone has the right skill set in place, not only to use it ethically, but so that no one takes advantage of them? Marwan, do you want to jump on that?
39:12 Yes. One of the initiatives that we have as part of the Dubai D33 agenda is something called the Dubai Universal Blueprint. The idea there is to shift the focus from how AI can be used to replace jobs to how human life can be improved with AI. One of the things that we wanted to do, as a measurement, is improve productivity by 50% for every person who works in Dubai — no matter whether blue collar or white collar — by using AI to remove all the mundane, brain-dead tasks from every job in Dubai.

39:48 And not only that, but also giving people their time back: time to spend with their families, to learn a new skill. And enabling AI to democratize a lot of skills, empowering people to do their own research when it comes to legal advice or cooking or the creative sciences and everything else; letting them be more creative and, I mean, more genuine about using their brains on tasks that are fruitful and impactful to them, while removing all of the jobs that are really redundant and that nobody wants to do.

40:28 AI will go places that people do not want to go or cannot go, and work there — nobody likes the jobs in remote areas, for example, and robotics can help with that as well. So: improving human lives and the quality of life, and creating jobs where we can upskill and reskill people to do more of what they love every day, and wake up every day looking forward to tomorrow and with hope, rather than waking up every day dreading their job or workplace.
40:55 I think that's a great point: reframing the metrics around AI away from, you know, just productivity increases and how we make money off of this, to how we actually improve lives. So, we're almost out of time. I'd like each of you, very quickly, to address one thing: what would you like to see happen between now and the next VivaTech to advance ethical AI? Annu, can I start with you?
41:27 Sure. Let's go one level down. Instead of talking about AI, let's be concrete about the applications, about the technologies, about what really follows from them. Because business leaders like us just using ChatGPT — that is not the essence of AI. So, let's get more concrete.
41:49 I think every person here should look at AI very seriously. No matter what business line you're in, AI is overarching, and the difference between the people who thrive and the people who merely survive is going to be whether they are using AI or not. So it's not even a question: you have to do it, and you have to do it now.
42:10 Okay. Arianna.

42:12 For me, maybe not surprisingly, since this is what we specialize in: when we apply tools — any tools, not just AI tools, but digital tools and other solutions — we have to test them rigorously to make sure that we actually improve results. So I'm a big promoter of testing, of A/B testing through randomized controlled trials, getting very clear guidance on both the positive impact and the unintended consequences, so that we can then move on to scaling these tools on an ethical basis — knowing we have been given the right advice to act on, and are not actually regressing into doing harm.
43:02 Thank you. Samantha, you get the last word.

43:04 Really, really quick. Such exciting opportunities that we've talked about today. At KPMG, we like to ask: can you be bold, fast, and responsible with AI? I think we can be bold and fast as a society, and achieve the amazing benefits we've talked about, if it is done on a foundation of trust.
43:22 Great. With that, I'd like to ask everyone to give a nice round of applause to our panelists.
43:27 Thank you very much.
43:28 Thank you very much.
43:29 Thank you very much.
43:31 Thank you.