ESG Talk #16 - June 2025. AI Through the Sustainability Lens

In this episode of Candriam Academy’s ESG Talks, we explore Artificial Intelligence through the lens of sustainable investing. With global investments in AI projected to reach $1 trillion over the next decade, this powerful technology presents both remarkable opportunities and pressing ESG challenges. From energy consumption and resource use to data privacy, bias, and job displacement, the environmental and social implications of AI are vast. This session breaks down the risks, opportunities, and ethical considerations surrounding AI, and demonstrates how ESG analysis can help investors navigate this evolving landscape to identify growth potential, mitigate risks, and assess the broader impact on sustainable development.

Transcript
00:00:00Hello and welcome everyone to the 16th edition of Candriam's ESG Talks. My name is Mahi Nimczyk.
00:00:18I'm head of ESG client portfolio management at Candriam and I will be your host today.
00:00:23Candriam's ESG Talks bring together eminent experts from various fields to discuss insights on key sustainability challenges.
00:00:34And today we're going to look at the topic of artificial intelligence (AI) through the sustainability lens.
00:00:41AI is spreading rapidly as we all know. It's reshaping entire industries, unlocking economic value and of course it is creating interesting opportunities for investors.
00:00:53Some AI applications are also showing us that they hold the potential to drive positive environmental impact and to address some pressing social challenges.
00:01:05But at the same time this new world of AI also ushers in a whole range of new risks and sustainability challenges that investors must of course learn to manage.
00:01:19Today we'll look at how they can do that by using notably some of the tools of ESG analysis.
00:01:27And to do that we're lucky to have with us two experts in the area.
00:01:32Johan van der Beest is head of thematic global equity at Candriam and also the lead manager of Candriam's robotics and innovative technology fund.
00:01:43Johan, you have been investing in technology since 1992, so you have a deep and long-standing expertise and know-how in the field and have seen the sector evolve through various markets.
00:01:57Thank you so much for being with us today and for sharing that expertise with us.
00:02:01Emma Miguel Unzoe is an ESG analyst in Candriam's ESG research and investment team.
00:02:09Amongst her sectoral responsibilities, Emma conducts fundamental ESG research on sectors including telecoms, media, retail and others.
00:02:19And Emma, you've also been instrumental in developing Candriam's research and analytical approach with regards to artificial intelligence.
00:02:27And you have also engaged with companies on their artificial intelligence policies.
00:02:32Thank you so much for being with us today.
00:02:35Before we start and get into the topic, just a few housekeeping points.
00:02:39Please note that this webinar is recorded. It will be shared on our website afterwards.
00:02:45Please also feel free to ask your questions in the Q&A functionality during this discussion.
00:02:52We'll try to get as many questions answered at the end of the discussion.
00:02:59So without further ado, let's move into the topic then.
00:03:03AI, of course, seems to be at the top of many agendas today, but what are we actually talking about?
00:03:11Johan, to start us off, could you maybe help us
00:03:15arrive at a brief definition of what AI is and share a little bit with us how this technology evolved and became what it is today?
00:03:26All right. Thank you, Marie, for this kind introduction. Happy to be here with you.
00:03:35Basically, artificial intelligence boils down to, let's say, a technology that is designed
00:03:43to think and to learn like humans. That's basically the definition of AI.
00:03:50This technology is designed to understand language, to recognize pictures.
00:03:56It can support decisions and it can even make decisions.
00:04:01It can solve very complex problems.
00:04:05AI, of course, has existed for a long time.
00:04:10Already in 1950, Alan Turing came up with the idea of creating a machine that could simulate
00:04:19human behavior and human intelligence.
00:04:23The reason that it took, let's say, so long before AI really started spreading is that you need
00:04:32a huge amount of computing capacity and very easy access to data.
00:04:40And that's something that we only got like the last, let's say, two decades.
00:04:45And that's also the reason why we only see now a lot of, let's say, interest in artificial intelligence.
00:04:56And a lot of applications are really coming to the market.
00:04:56Of course, everyone knows ChatGPT and generative AI.
00:05:01And I think going forward, we will talk about generative AI because that's obviously the most known,
00:05:08let's say, subfield of artificial intelligence.
00:05:12Generative AI meaning that you can train an algorithm and eventually that algorithm
00:05:18will be able to generate content, be it pictures, be it text, be it music.
00:05:24So there are all kinds of generative AI applications.
00:05:29And I would say, just for information, the latest, I would say, subfield of artificial intelligence,
00:05:39in our opinion, would be agentic AI or AI agents, where you take artificial intelligence still a step further
00:05:49by letting your algorithm make some of the decisions that need to be taken.
00:05:57So that's, for me, that's really the latest step in artificial intelligence.
00:06:04All right. So fast forward to today, then we've seen this quite historic development with this clear acceleration.
00:06:12And today, the pace of expansion of AI and of adoption is, of course, very impressive.
00:06:20Can you talk to us a little bit about the drivers of this expansion and fast adoption?
00:06:26Yeah, sure. And as you mentioned, artificial intelligence, and especially generative
00:06:32AI, has been one of those technologies that has been adopted at an extremely fast pace.
00:06:39If you know that it took only two months before 100 million people had already
00:06:47connected to ChatGPT. And of course, in the beginning, we were all just curious to see how smart this tool was.
00:06:54And we were asking questions like, explain to me in five sentences how quantum computing works.
00:07:01All these very complex problems, they could be solved and explained very easily
00:07:08by using ChatGPT. Of course, companies were very quick to understand the huge potential
00:07:16in developing tools that were based on generative AI, because they saw immediately the huge, let's say,
00:07:23efficiency gains that could be extracted from using artificial intelligence.
00:07:32And that's also the reason why we have seen over the last decade, these massive investments by these so-called
00:07:39hyperscalers in the infrastructure that is needed to, in fact, to develop and to deploy
00:07:48many of the applications that we are currently seeing coming on the market.
00:07:53So, and yeah, I think there are already a lot of applications out there, and we are already using
00:08:03a lot of them. When booking a holiday, for example, and I did it myself,
00:08:10I just asked ChatGPT to give me, let's say, an itinerary with all the things that I could do.
00:08:18And it took just one second, not more than that. So the applications are really
00:08:25enormous and we are just at the beginning of the use of artificial intelligence.
00:08:30Emma, I'm interested in hearing your perspective as well on this fast, very impressive
00:08:40adoption. How is artificial intelligence today already integrated in our lives?
00:08:47In your day-to-day life, you might not even realize it, but you are already using AI. Think about
00:08:55personalized music recommendations: every time a user plays a song, skips it, adds it to a playlist,
00:09:03or listens to it multiple times, that action generates data. And AI systems use millions or even billions of
00:09:11these interactions to learn and predict what kind of music each person might enjoy. So concretely,
00:09:20by analyzing this large amount of data, the algorithm can detect patterns in behavior, like
00:09:27which songs are often listened to together, what styles appeal to similar user profiles, or even
00:09:34what kind of music users prefer at certain times of the day. So without this huge data set,
00:09:41the recommendations would be far less accurate. But with it, AI can suggest songs that truly match a listener's
00:09:49taste, sometimes even before they know what they want to hear.
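The pattern detection Emma describes can be sketched as a toy co-occurrence recommender. Everything below is illustrative: the song names, the function, and the scoring rule are simplified stand-ins for the far larger models streaming services actually use.

```python
from collections import Counter

def recommend(user_history, all_histories, top_n=2):
    """Toy co-occurrence recommender: songs that other listeners
    play alongside the user's own songs are scored higher."""
    liked = set(user_history)
    scores = Counter()
    for history in all_histories:
        others = set(history)
        # Only histories that overlap with the user's taste contribute.
        overlap = len(liked & others)
        if overlap == 0:
            continue
        for song in others - liked:
            scores[song] += overlap  # weight by degree of shared taste
    return [song for song, _ in scores.most_common(top_n)]

# Hypothetical listening histories
histories = [
    ["song_a", "song_b", "song_c"],
    ["song_a", "song_c", "song_d"],
    ["song_b", "song_e"],
]
print(recommend(["song_a"], histories))  # → ['song_c', 'song_b']
```

With more histories, "song_c" keeps winning because it co-occurs most often with "song_a" — the same logic, at vastly larger scale, behind the billions of interactions Emma mentions.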
00:09:54Right. So it's probably safe to say that most of us have already come in contact with AI just in our day-to-day
00:10:02activities, knowingly or not. Now, let's turn a little bit to the economics of
00:10:08AI. Johan, how is AI changing the way that companies operate? And related to that, what does this mean for
00:10:18investors looking at companies at the markets today in terms of investment opportunities?
00:10:26All right. So let's maybe just start on the enterprise side and how AI is impacting companies. Currently,
00:10:34every single company, or almost every single company, will say that they are using
00:10:39artificial intelligence to make efficiency gains. A lot of that, of course, is still just marketing talk,
00:10:49because we are just at the beginning. And in fact, many companies already have
00:10:56huge amounts of data, but they first need to define a use case. And then they can try
00:11:04to develop or to adapt some existing AI tools. They can try to work together with some service
00:11:13companies, IT service companies, or they can buy simply some solutions that are already on the market, or
00:11:19they can do like a combination of these of these options. But clearly companies are starting to use it to
00:11:28improve or to gain efficiency. From the investor point of view, and that's the slide that you see.
00:11:38We think at this stage that there are different layers, and not every layer is as important for our
00:11:45interest for an investor. We used to start with the hardware layer. So really the companies that enable the deployment of
00:11:55artificial intelligence. And this hardware, I really think that you need to think very broadly about
00:12:03the definition of hardware. Because, of course, everyone is well aware of the dedicated GPU
00:12:09chips that you need in the data centers. Everyone is well aware of the data centers themselves. But behind this
00:12:17ecosystem, this ecosystem is much broader than just the chips and the data centers. Think, for example,
00:12:26about the interconnect network technology that you need. Think about also simply the utility companies
00:12:34that you need. Because, as will be discussed later on, these data centers, they use huge amounts of electricity. So,
00:12:42you need utilities or electricity generating technologies very close to these data centers. Think
00:12:50also about the cooling technology that you need. Even cybersecurity, because clearly if everything is
00:12:58happening in the cloud, it needs to be secured. So, cybersecurity is also for us part of this enabling
00:13:05technology. The next layer, the data layer. As you know, we need huge amounts of data to train these algorithms. So, data
00:13:17needs to be extracted, needs to be integrated, needs to be retrieved, very easily
00:13:23interpreted. All kinds of data, structured, unstructured. So, there are a lot of companies that are active in that space and we
00:13:31think are still interesting from an investor point of view. Then the third level that we see is really
00:13:39the level of the models. Companies that are creating these models such as ChatGPT that is created by OpenAI or
00:13:48Gemini created by Alphabet and so on and so on. These companies, these are huge companies because it
00:13:55necessitates a lot of investment to really develop these models. The fourth layer, the application
00:14:04layer for us is a little bit more difficult in the sense that we see indeed companies such as Adobe or
00:14:11Workday or ServiceNow. They are coming to the market with some applications based on AI. The difficulty is
00:14:19how will they monetize it? Also, even for Microsoft, I guess that many of us are already working with
00:14:27Copilot. The question will be how they will monetize it. Because behind these new applications, there are
00:14:35massive investments. So, the monetization of these applications is still not clear to me. So, we are a little bit more
00:14:44reluctant as an investor on this fourth level. And then finally, the services level. So, the IT service
00:14:52companies, I mentioned it already. Not every single company is ready to develop everything
00:15:00themselves and that's the reason why companies such as IBM or Accenture or Capgemini, they are very active
00:15:09here. And we already see some, let's say, tangible revenue contribution from the development of AI tools at these
00:15:20specific companies. So, that's the way that we are thinking about investing in artificial intelligence. The last point that I
00:15:29wanted to mention is for us, it's still a little bit too early to talk about the adopters because eventually every single
00:15:39company is supposed to benefit from the deployment of artificial intelligence. Anyhow, at this stage,
00:15:46it's very difficult to quantify where we are already in the, let's say, in the efficiency gains. But it is
00:15:54clearly something that we follow and that we monitor very closely. But for us, it's still a little
00:16:02bit too early to invest in adopters based purely on AI. Thank you so much, Johan, for this very interesting overview of this
00:16:11new ecosystem, so to say, and a realm of new factors and elements to integrate as an investor in terms of
00:16:21what these players are doing, but also how they're doing it. Can you talk to us a little bit about what
00:16:28this means for financial analysis? What are the impacts of the emergence of what you've just
00:16:34described on the way you analyze companies in order to make an investment decision?
00:16:41Yeah, sure. I think there are basically two aspects. The first one is how do we use
00:16:49artificial intelligence in our daily work as financial analysts? Because yes, we do already,
00:16:57and it's not marketing in this case, we do already use artificial intelligence to, let's say, make
00:17:03it easier for us to crunch all these data, because there is a lot of data out there, and we use artificial
00:17:11intelligence already to, let's say, enable a broader and deeper assessment of companies. The other way,
00:17:22of course, and I alluded already a little bit to it in the former question, is how do we see the impact of
00:17:32artificial intelligence in our assessment of a company? And I think it's not that easy, but
00:17:40it's, let's say, easier to see for the companies that are enabling AI. So, in the pyramid that we showed, there we have some visibility, though not for many
00:17:52years, because, yeah, this technology is evolving so rapidly and there are a lot of questions. Will, for example, the semiconductor, the GPU technology, be the only way to deploy artificial intelligence?
00:18:06Will the hyperscalers continue to invest massively in the underlying infrastructure? So, we do have some visibility, but not a lot.
00:18:18On the other hand, for the adopters, it's even more difficult. And the way that we are looking at it is, at this stage, we first need to see an AI strategy. It's really on our checklist. If a company does not have an AI,
00:18:34let's say, strategic view, that for us is already a negative in our assessment, because in that case, we think that management is really not aware of what is happening, and they are not trying to use artificial intelligence as a competitive advantage. So, that's the way that we are looking, both at the enablers and at the adopters part of the universe.
00:18:58Thank you very much. So, we spoke quite a bit about companies, how to look at them, but of course, these companies operate within a broader context.
00:19:10I want to take a little bit of a step back and talk a little bit about the regulatory environment, as well as the self-regulation that creates, or attempts to create, a level playing field for companies when it comes to
00:19:27AI. Emma, can you talk to us a little bit about what you see happening in terms of the regulatory context currently, and what the key challenges are in respect to regulatory initiatives in the area of AI?
00:19:45Yes. So, AI governance today is becoming one of the most critical conversations in tech and policy, and in my view, it's absolutely necessary. So, the capabilities of AI are growing rapidly, and while that offers interesting opportunities that we've discussed, it also brings serious risk, from misinformation, bias and surveillance, to job displacement, and
00:20:15safety concerns. So, where do we stand? We are in a kind of transitional phase, so AI is no longer emerging, it's there, and it's being integrated into everything, from search engine to education, healthcare, but our regulatory frameworks are still playing catch up.
00:20:34But one of the most important initiatives so far is the European Union AI Act. So, it sets the tone globally for how AI could be regulated.
00:20:46So, this act takes a risk-based approach. So, it classifies AI systems into different categories, minimal risk, limited risk, high risk, and unacceptable risk. And depending on where a system falls, different obligations apply.
00:21:04For instance, high-risk AI systems, such as those used in facial recognition in public spaces, in credit scoring, or in hiring algorithms, are subject to strict rules. And developers must ensure transparency and human oversight.
00:21:22So, this framework is both clear and ambitious, and it's pushing companies to build AI more responsibly from the ground up.
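Emma's description of the risk-based approach can be summarized schematically. The tier names below come from the Act itself, but the obligation summaries are simplified paraphrases for illustration, not legal text:

```python
# Schematic sketch of the EU AI Act's risk-based classification.
# Tier names follow the Act; obligation summaries are paraphrased.
AI_ACT_TIERS = {
    "unacceptable": "prohibited outright (e.g. social scoring by public authorities)",
    "high": "strict rules: risk management, transparency, human oversight",
    "limited": "transparency duties (e.g. disclosing that users interact with AI)",
    "minimal": "no specific obligations beyond existing law",
}

def obligations_for(risk_tier: str) -> str:
    """Return the (paraphrased) obligations attached to a risk tier."""
    return AI_ACT_TIERS[risk_tier.lower()]

# A hiring algorithm or credit-scoring model would fall in the high tier:
print(obligations_for("high"))
```

The point of the tiered structure is exactly what Emma notes: the same developer faces very different duties depending on where each system falls.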
00:21:32And now, if we contrast that with the US, where the approach is much more decentralized and driven by self-regulation, there is no single national AI law.
00:21:56So, instead, we have different agencies and states that are experimenting with their own guidelines, and many companies are developing voluntary codes of conduct.
00:21:56So, big players in tech have published AI principles, ethical commitments, and a joint industry alliance focused on responsible AI.
00:22:06And, in some way, this self-regulatory model allows for faster innovation, but it also puts a lot of trust in companies to police themselves,
00:22:18and we know from experience in some sectors like social media with data privacy, that this doesn't always work.
00:22:27And, we also have international bodies like the OECD and ISO that are stepping in to help create global standards, especially on technical aspects like transparency, safety testing, or data governance.
00:22:42But, these efforts are still evolving, and they are not currently legally binding.
00:22:48So, for the last part of your question, what are the key challenges?
00:22:53First, the speed of AI development is outpacing the ability of policymakers to respond.
00:23:00AI systems today evolve in months, even in weeks, while laws take years to pass and implement.
00:23:08And, this creates a gap between what AI can do and what's being monitored or controlled.
00:23:15And, then, even if good rules exist, enforcement is tough.
00:23:19Because, AI is often inside other tools, in apps, websites, chatbots, and so on.
00:23:27So, tracking where and how it's being used can be extremely difficult.
00:23:32And, then, we are also seeing a major difference in how countries approach AI governance.
00:23:38And, that fragmentation is becoming an issue.
00:23:41For example, the EU is more precautionary, focusing on human rights and accountability.
00:23:48The US is more market-driven, aiming to foster innovation while also managing risk.
00:23:55And, then, you have China that takes a state-centric approach with strong government control and tight alignment between AI development and national strategy.
00:24:06And, this divergence creates confusion for global companies.
00:24:10It adds compliance costs and raises concerns about regulatory arbitrage.
00:24:15Because companies might shift operations to regions with looser rules.
00:24:22And, finally, we cannot ignore the role of big tech.
00:24:26So, these companies have enormous influence, not just in building AI,
00:24:32but also in shaping the narrative around the regulation.
00:24:37And, often, they participate in policy discussions and lobby lawmakers.
00:24:43But, it's not always clear whether that's for the public good or to protect their market position.
00:24:50So, when we hear about AI safety, we have to ask, okay, is it really about making AI safe for everyone?
00:24:59Or, is it more about controlling the rules of the game?
00:25:04Emma, you were just talking about regulation and you mentioned European regulation.
00:25:11One of the key principles that has underpinned sustainable finance regulation in Europe for the past five to ten years or so has been this concept of double materiality.
00:25:21It's also a concept that's often debated when it comes to ESG and sustainable investing.
00:25:28This idea that, on the one hand, investment portfolios have impacts on the environment and on society.
00:25:34And, that, on the other hand, environmental and social challenges impact the value of portfolios.
00:25:40Can you talk to us a little bit about how this concept of double materiality applies to the realm of AI?
00:25:50Yes. So, traditionally, companies focus on what's called the outside-in materiality, meaning how ESG issues affect the company's performance or value.
00:26:03For example, a company might look at how climate change or stricter data regulations could impact their operations or revenue.
00:26:14But, there is also what we call the inside-out materiality.
00:26:19And, this looks at how the company's own activities impact society and the environment.
00:26:27And, when we apply this to AI, with AI, you cannot just ask, how is this technology affecting our business model or helping us reduce costs or improve decision-making?
00:26:40You also have to ask, how are our AI systems impacting people, communities, and the planet?
00:26:47So, overall, double materiality in AI means that companies need to assess how ESG risk affects their AI strategy.
00:26:56So, that's the outside-in. And, how their AI use affects society and the environment.
00:27:03So, that's the inside-out.
00:27:05And, to give you a concrete example, from an outside-in perspective, AI can expose companies to ESG risk.
00:27:13For example, biased algorithms can lead to reputational damage or lawsuits.
00:27:20But, from an inside-out perspective, the focus shifts to how the company's AI systems are affecting the world.
00:27:28So, are their algorithms reinforcing discrimination?
00:27:32Are they promoting misinformation?
00:27:34And, are their models energy-intensive and contributing to carbon emissions?
00:27:40So, these are ethical and environmental consequences that go beyond the company's own financial interests.
00:27:48And, they are highly material in terms of long-term sustainability and regulatory compliance.
00:27:57So, maybe let's spend a little bit more time on that, on this inside-out effect.
00:28:04And, let's try and break it down a little bit into the potential of AI to help with some of the social and environmental challenges that our world is facing.
00:28:16From your practice, Johan and Emma, can you share a few examples as to applications you've seen that hold this kind of potential to contribute positively to the challenges the world is facing today?
00:28:31I'll start with you, Johan.
00:28:33Johan?
00:28:34Yeah, sure.
00:28:35I think the healthcare sector was one of the first sectors to adopt artificial intelligence.
00:28:43And, as you can see on the slide, drugs discovery, with the help of AI, the number of drugs that has been, let's say, co-developed with AI is rising exponentially.
00:28:58And, of course, the number is still quite limited, but it is especially the shape of the curve that is really important.
00:29:08So, I think healthcare, it's quite clear.
00:29:11We have already quite a lot of help and some examples of the use of artificial intelligence.
00:29:20Another example that I really like is in precision farming.
00:29:25There is a company out in the US that, in fact, trained a self-driving tractor to destroy weeds in the fields by using a laser beam.
00:29:40So, the advantage is, in fact, huge in the sense that there is no need anymore to use herbicides.
00:29:50We all know the negative impact of herbicides on the biodiversity.
00:29:55So, I think here also the fact that the algorithm has been trained to recognize weeds in the fields, be it a corn field, be it a vegetable field.
00:30:09That has really been very helpful in the, let's say, in the huge reduction of the use of herbicides.
00:30:17And, Emma, from your perspective, can you also share maybe a couple of examples with us on the potential of AI to contribute to some of the ESG challenges that you're seeing?
00:30:34Yes.
00:30:35So, AI can play a positive role in sectors like telecom and finance, improving security and also resilience.
00:30:45If we take the example of cybersecurity, in both sectors, so telco and finance, companies deal with massive volumes of real-time data and traffic, which would be impossible for human teams to monitor in full.
00:31:02AI can be used to detect anomalies or suspicious patterns that might indicate a cyber attack or a data breach.
00:31:11And this kind of real-time threat detection helps safeguard millions of users' personal data, which is a key social concern.
00:31:21And in finance, it also helps prevent identity theft, fraud or unauthorized transactions.
00:31:30And this is important as more people rely on online services for everyday banking and payments.
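The real-time anomaly detection Emma mentions can be illustrated with a minimal statistical filter. Production systems use far richer models; the traffic figures and the threshold below are invented for the example:

```python
import statistics

def flag_anomalies(traffic, threshold=2.0):
    """Flag observations more than `threshold` standard deviations
    from the mean -- a minimal stand-in for AI-based detection."""
    mean = statistics.mean(traffic)
    stdev = statistics.stdev(traffic)
    return [x for x in traffic if abs(x - mean) > threshold * stdev]

# Hypothetical login attempts per minute; the spike stands out.
traffic = [102, 98, 101, 97, 103, 99, 100, 5000]
print(flag_anomalies(traffic))  # → [5000]
```

An AI system does the same thing conceptually — separate "normal" from "suspicious" — but learns what normal looks like from billions of past events rather than a fixed formula.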
00:31:38So, great examples of potential contributions, positive contributions that AI can offer here.
00:31:49But, of course, the elephant in the room is a little bit the question of energy management, energy consumption, right?
00:31:57So, have you seen AI applications, solutions around AI that actually address that very problem?
00:32:07Maybe Johan over to you?
00:32:09Yes, yes, sure.
00:32:10And I think one of the greatest examples is a data center, a really advanced data center in Sweden, especially dedicated to artificial intelligence.
00:32:23And as you can see on the slide, cooling is really like almost 50% of the energy consumption of a data center.
00:32:33So, here also in the case of this Swedish data center, you have all these GPUs, these CPUs that are working constantly, day and night, generating immense heat.
00:32:48Now, engineers tried all kinds of technologies, even using like liquid cooling, but none of them were sufficient, in fact, to cool down the data center.
00:33:02So, they decided to try to use artificial intelligence and especially ATAR, which is an AI system that is trained not only on the data center schematics, but also on weather patterns, on thermal dynamics, on real-time sensor data from every corner of the data center.
00:33:25And this system, in fact, learned a lot of things which might seem like details, which might seem really unimportant.
00:33:33But in the end, if you combine all these details, they noticed that there was an enormous gain of efficiency in terms of cooling possible.
00:33:44So, the system, thanks to these real-time indicators, was able to detect that some racks, in fact, some server racks, run hotter.
00:33:56Not only because of the workload, but just because of the sunlight that was hitting the outer wall at a specific angle during summer.
00:34:05It also discovered that the cooler air from the nearby river could be channeled more efficiently if the intake fans pulsed in sync with the wind gusts.
00:34:20It even predicted the thermal hotspots before they formed.
00:34:24So, they were able to transfer these workloads to cooler zones.
00:34:29So, within six months, and that is really amazing, within six months, cooling energy consumption dropped by 30%, without, in fact, doing anything very drastic.
00:34:42Just taking into account a number of details that were very important.
00:34:47So, 30% of energy reduction for cooling is already like a huge, huge achievement, thanks to AI.
00:34:59So, there are definitely applications that offer positive contributions and that have demonstrated effects, both on the social side and on the environmental side.
00:35:14But if we take a little bit of a step back again and look at the impact of AI overall, of course, there are certain risks that we cannot ignore.
00:35:25So, maybe to start here on the environmental side, and we just saw a solution here with a significant reduction in terms of energy consumption.
00:35:34But if we bring this back to a broader scale, what can we say more generally about this environmental impact of AI systems?
00:35:43To transition from the energy efficiency gains that Johan just mentioned, it's important to remember that these efficiency improvements don't always compensate for the huge scale of data usage.
00:36:02And this phenomenon is known as the Jevons effect.
00:36:06So, it means that as technology becomes more efficient, the overall consumption can actually increase because demand grows.
00:36:14And in AI, even though models may become more efficient, the volume and complexity of data and also computation are exploding.
00:36:25So, overall, total energy use often rises.
00:36:29For example, training big AI models takes a ton of computing power and consumes lots of electricity.
00:36:37And the data centers that run these AI systems need lots of water to keep cool, which can be a problem in places where water is already scarce.
00:36:48So, from an environmental perspective, AI is not limited to energy consumption alone.
00:36:55It extends to water consumption and the carbon footprint of data center infrastructure.
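The Jevons effect Emma describes is easy to check with back-of-the-envelope arithmetic. The figures below are purely illustrative, not measured values:

```python
def total_energy(queries, energy_per_query):
    """Total consumption = usage volume x unit cost."""
    return queries * energy_per_query

# Illustrative only: the model becomes 2x more efficient per query,
# but usage grows 5x over the same period.
before = total_energy(queries=1_000_000, energy_per_query=1.0)  # arbitrary units
after = total_energy(queries=5_000_000, energy_per_query=0.5)

print(after > before)  # efficiency improved, yet total consumption rose
```

Per-query efficiency doubled, but aggregate consumption still grew 2.5x — exactly the mechanism by which efficiency gains fail to offset exploding demand.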
00:37:02But now, on the social side also, AI brings some important risks, especially around data privacy, bias, and jobs.
00:37:13If we take, for example, facial recognition technology, while it can improve security and convenience, FRT has been linked to serious privacy violations and biased outcomes.
00:37:28And also, these systems often involve massive data collection, which raises concerns about how data is stored, protected, and used.
00:37:38And if personal data is mishandled, it can hurt people's privacy and rights.
00:37:45And even beyond bias and privacy, there is also the issue of employment.
00:37:51So, AI-driven automation can displace certain jobs, especially in routine or manual tasks.
00:37:58So, the social impact on workers and communities can be important and also require thoughtful transition plans and, of course, policies.
00:38:11So, we saw at the top of our discussion that, of course, there are lots of interesting opportunities coming up with AI,
00:38:17notably the efficiency gains that companies are able to implement thanks to these new systems.
00:38:26And we've seen that there are also lots of risks and opportunities from a sustainability point of view.
00:38:32Now, this, of course, creates quite a complex environment for the investor.
00:38:37Emma, how can an investor approach this new realm of risk and opportunities?
00:38:46How can ESG analysis be used as a tool to get a better sense of these risks and opportunities?
00:38:55Can you give us a few examples, maybe, of what kinds of things investors should be looking for when looking at AI?
00:39:03What kinds of questions to ask companies to get a better sense of how they're implementing new technologies?
00:39:11Yes.
00:39:12So, first, when approaching artificial intelligence from an ESG analytical point of view,
00:39:21I think the first and most important step is understanding how a company is involved with AI.
00:39:29So, at Candriam, we are developing a framework that distinguishes three types of actors in the AI ecosystem.
00:39:37So, we have the developers, the deployers, and users.
00:39:41And this framework allows us to tailor our ESG analysis to the specific risks and responsibilities of each player.
00:39:52So, if we take the example of developers, so the companies building the AI models themselves,
00:39:59think of big firms like OpenAI or Anthropic.
00:40:04For them, the ESG risks emerge at the core of the AI system.
00:40:09So, the focus is on AI governance and ethical design.
00:40:13So, when we are analyzing a developer, we are asking ourselves the question,
00:40:19okay, does the company have a publicly disclosed AI policy or a set of principles?
00:40:25Is the company integrating human rights considerations into its AI development?
00:40:32Is there an independent AI ethics committee that reports to the board?
00:40:37So, these indicators help us to understand whether developers are embedding AI governance.
00:40:45Next, we have the deployers.
00:40:48So, companies that don't create AI systems themselves, but integrate them into their operations and services.
00:40:57For example, a telecom company using AI for network optimization or a major firm leveraging AI to personalize content.
00:41:06And here, the key ESG focus is on data governance.
00:41:10How is user data collected, processed and stored?
00:41:15Is there transparency in algorithmic decision-making?
00:41:19For example, if a company is deploying AI to monitor traffic patterns or customer behavior,
00:41:28they need to explain why this data is being collected and how long it is stored and for what purpose.
00:41:35And lastly, we have the users.
00:41:38So, companies that use AI tools in their daily operations without developing or integrating them at a technical level.
00:41:47For example, we have banks using AI for fraud detection, retailers deploying AI chatbots.
00:41:55And here, the ESG risks are more diffuse, but still important in terms of data privacy, job displacement risk, and bias and fairness in automated decisions.
00:42:09So, in our analysis, one indicator we also track is employee turnover.
00:42:15So, is there a rise that may be linked to AI-induced restructuring?
00:42:20We are also looking at AI-specific training programs.
00:42:24Okay, are companies preparing their workforce to adapt and evolve in an AI-driven environment?
00:42:31And regarding the environment, the highest environmental risks lie with developers and deployers, due to the massive energy and water demands of training and running AI models.
00:42:45And here, we are also looking at specific indicators.
00:42:49The power usage effectiveness of data centers.
00:42:53So, ideally it should be one, but most of the time that's not the case; the closer to one, the better.
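The PUE indicator mentioned here is a simple ratio; a minimal sketch of how it is computed, with hypothetical energy figures:

```python
def pue(total_facility_energy_kwh: float, it_equipment_energy_kwh: float) -> float:
    """Power Usage Effectiveness = total facility energy / IT equipment energy.

    A PUE of exactly 1.0 would mean all power goes to computing; real data
    centers run above 1.0 because of cooling and other overhead, so the
    closer to 1.0, the more efficient the facility.
    """
    if it_equipment_energy_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_energy_kwh / it_equipment_energy_kwh

# Hypothetical data center: 1.5 GWh total draw, 1.2 GWh reaching IT equipment.
print(round(pue(1_500_000, 1_200_000), 2))  # 1.25
```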
00:43:02The type of energy use.
00:43:04Is it from renewables or from fossil fuels?
00:43:08And we are also looking at water consumption.
00:43:11So, water is often used for cooling high-performance computing infrastructure.
00:43:16So, companies should develop a water strategy with more intelligent water cooling systems.
00:43:23So, in summary, analyzing AI through an ESG lens requires a multi-dimensional approach tailored to the company's role in the AI value chain.
00:43:36And developers must show leadership in ethics and governance.
00:43:41Deployers must prove they handle data responsibly.
00:43:46And finally, users should manage social risk like job disruption and digital inequality.
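The three-actor framework summarized above can be sketched as a simple checklist structure. The structure and field names are illustrative assumptions, not Candriam's actual methodology; the indicator wording paraphrases the questions discussed in this session:

```python
# Illustrative mapping of each role in the AI value chain
# to the ESG indicators discussed for that role.
AI_ESG_CHECKLIST: dict[str, list[str]] = {
    "developer": [  # builds the AI models themselves
        "publicly disclosed AI policy or set of principles",
        "human rights considerations integrated into AI development",
        "independent AI ethics committee reporting to the board",
    ],
    "deployer": [  # integrates AI into its operations and services
        "data governance: how user data is collected, processed, stored",
        "transparency in algorithmic decision-making",
        "stated purpose and retention period for collected data",
    ],
    "user": [  # applies AI tools without technical integration
        "data privacy safeguards",
        "employee turnover linked to AI-induced restructuring",
        "AI-specific workforce training programs",
    ],
}

def questions_for(role: str) -> list[str]:
    """Return the ESG indicators relevant to a company's role in the AI value chain."""
    return AI_ESG_CHECKLIST[role]

print(len(questions_for("developer")))  # 3
```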
00:43:55Thank you so much.
00:43:56I think it's very useful to have those examples of concrete questions to ask companies and things to look out for when analyzing them from a sustainability point of view.
00:44:08You spoke about the importance of governance of ethical AI policies or positions, at least, in the analysis.
00:44:16Now, the investor, of course, has this role of making sure they fully understand where a company is coming from in terms of AI.
00:44:25But investors can also have an additional role, which is through their investments and through engaging with companies.
00:44:33Actually, encouraging companies to move into a direction that makes the space of AI more transparent, more self-regulated, better governed and so forth.
00:44:48Can you talk to us a little bit about the role of engagement and its importance when it comes to AI?
00:44:55Yes. For example, we launched an engagement campaign regarding facial recognition.
00:45:04So to give you a bit of context, we saw a company a few years back rolling out ethnic recognition technologies and no one was really talking about it.
00:45:16And it became clear that many large tech firms weren't transparent at all about what they were doing with AI, especially for facial recognition.
00:45:26So here we are talking about all the GAFAM, all developing their own facial recognition systems and often with little oversight.
00:45:36So these companies were basically in a race.
00:45:40Whoever collects the most data holds the most power and also takes on the most risk.
00:45:46And so at Candriam, we published a paper to highlight those risks.
00:45:51And following that, we launched a collaborative investor initiative, bringing together 50 investors representing over 500 billion in assets,
00:46:02all asking tech companies to be more transparent, more ethical and better regulated.
00:46:10And this then grew into a broader engagement campaign on AI in general.
00:46:15It was not just facial recognition.
00:46:18And it now includes over 60 investors.
00:46:21Together, we've had 30 direct dialogues with major tech companies.
00:46:27And this initiative is now led by Candriam, Boston Common and Fidelity.
00:46:34So when we started this campaign, I think it was around 2021.
00:46:40Only 15% of the 200 big tech companies had any kind of policy on the responsible use of AI.
00:46:49And today it's up to 40%.
00:46:52So progress is happening.
00:46:54But that still means 60% have no policy at all.
00:46:58And there is still a lot of work to do.
00:47:02Absolutely.
00:47:04Johan, you spoke to us earlier a little bit about how you actually use AI or AI elements in financial analysis.
00:47:14Emma, I also want to ask this question to you because we are, of course, very much talking about how we look at AI.
00:47:21But how do you use AI and how is AI starting to be integrated in ESG analysis?
00:47:29Can it be a useful tool for investors, for asset managers as well?
00:47:34Of course, we are no exception to the rule.
00:47:37We are also using AI in our analysis.
00:47:41AI helps us process huge and complex data sets quickly.
00:47:46It spots patterns, connects diverse sources, and even monitors ESG events in real time.
00:47:53So that boosts our ability to detect risks and opportunities we might otherwise miss.
00:48:00But AI is just a tool.
00:48:03It cannot replace human judgments.
00:48:06We still need ESG analysts with real sector expertise.
00:48:10We know how to interpret the data, understand company-specific and sector-specific challenges.
00:48:17We can also validate the data quality, fill the gaps and go deeper, especially for smaller or private companies where data is limited.
00:48:29So I would say that at the end of the day, AI gives us speed and scale, but it's really human insight that brings the depth and context needed for solid ESG analysis.
00:48:44Yes, so we too as investors, as asset managers, must be careful in terms of how we apply AI.
00:48:49Just as we ask our companies that we invest in to have good governance and an intelligent and human approach to AI.
00:48:59Thank you so much for your insights so far, Johan and Emma.
00:49:03I now want to open the floor up to questions from the audience.
00:49:06So everyone, please feel free to send us your questions and comments via the Q&A functionality of Teams.
00:49:16We actually have already received a couple of questions and maybe a first one to you, Johan, from the audience.
00:49:25Do you think that ESG investors interested in Gen AI are too focused on the big tech model developers in their portfolios?
00:49:35And should they be looking to own stocks rather in the enabling layer of AI?
00:49:45The examples cited here are cybersecurity names, data centers, IT services, etc.
00:49:51Can you comment a little bit on that, on different areas of focus and where you see opportunities currently and how to build these different elements and the different sub sectors into a portfolio?
00:50:04Yeah, sure.
00:50:05In fact, I already answered, I think, more or less that question, but just to add some color to it.
00:50:15At this stage, we are still convinced that all companies that are, in fact, kind of enablers, that there is still huge potential.
00:50:26And we see all kinds of signs that the massive investment that we have seen over the last, let's say, two, three years, it will continue.
00:50:38Because we see that many of these hyperscalers, they say, OK, we are not able to cope with the massive AI workloads that we are receiving currently.
00:50:48So from that point of view, I think there is still a lot of value to be made in these, let's say, in these stocks that are especially focused on the infrastructure side, on the enabling side.
00:51:04I am still a little bit more sceptical on the adopters side.
00:51:10I would never buy, let's say, a healthcare company because they are using or because they are far advanced in using artificial intelligence.
00:51:21I think it's an attractive plus, but it will never, until now at least, it will never be the main reason to buy a healthcare company or an industrial company.
00:51:33So I think it's still too early to see the tangible evidence of efficiency gains for these adopters.
00:51:42Thank you very much. And maybe over to Emma, a question that reads, do you think we are getting closer to knowing which companies are AI safety leaders versus laggards?
00:51:56And I think we can interpret AI safety here a little bit more broadly and talk about responsible and sustainable application of AI.
00:52:07In your analysis, you spoke about the things that you look at when looking at companies, the questions that you ask.
00:52:15Do you feel as though you see an emergence of certain leaders versus laggards?
00:52:25Oh, you are on mute.
00:52:28Sorry, I was just saying, I don't know if I can say any names in that here.
00:52:36No, let's maybe just talk about general trends.
00:52:39I mean, do we see certain cohorts, you know, groups of companies, without naming any names here,
00:52:47we can't do that unfortunately in this forum, that are detaching themselves and maybe even wanting to become leaders in this area, wanting to, you know, set an example and take a strong stance, versus companies that have clearly chosen another path, which is to not focus so much on matters of governance and so forth?
00:53:13Or is it really a very mixed bag currently in the market?
00:53:18We see more and more companies that are shifting to responsible AI principles.
00:53:30I would say that it's not the majority of companies, but some of them are building their own AI principles that include human rights considerations.
00:53:42And sometimes, among the best practices, they also have a committee that oversees ethical AI.
00:53:54They ask themselves the question, how is ethical AI operationalized?
00:53:59And sometimes they even conduct human rights impact assessments, I would say.
00:54:08So these are positive developments.
00:54:11Are there any other things, if you would have, you know, your wish list of evolutions you'd like to see in the space?
00:54:18Where would you like to see the market moving, but maybe also regulators and investors?
00:54:26What would be your wish list for future developments over the next few years in the space?
00:54:35I would say I would like to see more global alignment, with clear reporting requirements, accountability, and also enforcement.
00:54:46For me, ethics should be built into AI system from day one, not after the fact.
00:54:54And with this comes transparency for users.
00:54:58So how AI is being used.
00:55:01And more environmentally speaking, as AI grows,
00:55:05so do its carbon footprint and its water footprint.
00:55:10So we would like to see companies disclosing the environmental cost of their AI operations.
00:55:17And this implies clear targets to reduce energy use and source renewables, as well as water reporting, especially in areas facing drought or scarcity.
00:55:32One point that we touched on a little bit when we were discussing the impact and specifically the risks of AI, but that we didn't go into much detail on, is the impact on employment.
00:55:45Johan, maybe over to you.
00:55:48How do you look at this question of AI impacting employment?
00:55:55Do you feel that, you know, the fear that is sometimes voiced about massive unemployment coming from AI and, you know, it changing the ways our economies work.
00:56:09Do you feel that fear is justified?
00:56:11I think it's a very difficult question, to be honest.
00:56:15And the answer is, I don't know the answer.
00:56:18But what I know is that when I compare to, let's say, the former industrial revolutions that we had, every time, every single time, people said, OK, but this industrial revolution will cause huge unemployment.
00:56:33And we've never seen that. We did see some changes in the, let's say, in the profiles of people, of working people, but I guess that will be the case currently also.
00:56:47And until now, we haven't seen any, let's say negative impact.
00:56:52And what is more, I think, with the, let's say, the decrease of the active population because of, let's say, the demographic evolution.
00:57:03I think we will need artificial intelligence, I think, as an extreme form of automation.
00:57:10So, I am really on the positive side of artificial intelligence, but it is true that there will be some people that will need to change jobs.
00:57:21And there, there might be some difficulties, clearly, it's not evident to change everyone from, let's say, a job at McDonald's to a job at, let's say, IBM or at NVIDIA.
00:57:40So, I think there might be some disruption, but in general, given the demographic evolution, I'm quite optimistic on the evolution and the impact of artificial intelligence on the workforce and on employment in general.
00:57:58Right. Thank you so much for your insights in this Q&A session as well.
00:58:03Thank you for, to those of you who sent in your questions via chat.
00:58:07If you have any questions following on this call, don't hesitate to get in touch.
00:58:11And we'd be happy to arrange discussions with our experts as well to follow up.
00:58:18And now, by way of conclusion, I'd just like to ask you, Emma and Johan, to give us, in a nutshell, maybe just two or three key words
00:58:28that you think the audience should take away with them today to continue thinking about AI through the sustainability lens.
00:58:37Emma, I'll start with you.
00:58:40I would say transparency: how companies are using AI in a transparent way, with clear policies.
00:58:50Next, I would say efficiency. So, efficiency gains linked to the use of AI in our analysis.
00:58:59And finally, ethics. So, companies should implement the ethical AI framework.
00:59:07Thank you. And a couple of words from you, Johan.
00:59:12Yeah, I'm probably a little bit more biased as I've been covering technology for so many years.
00:59:17But for me, AI, I would, the first key word that comes to mind is transformative.
00:59:23Really transformative, because in all these years that I've covered technology, I've never seen any technology that has been,
00:59:31or that will have such an impact on ourselves, on the environment, on the industry.
00:59:39So, transformative, that's for me really the key word.
00:59:43The second word that comes to mind is opportunities.
00:59:47That goes for the investor as well as for the environment.
00:59:52I think there will be a lot of solutions that will be helped by artificial intelligence.
00:59:58And I do not want to deny the huge challenges in terms of environment either.
01:00:04So, but I guess there are more opportunities than there are challenges for AI.
01:00:11And then a last key word that comes to mind and that really relates to what Emma already said.
01:00:19I think there is really a need for a framework, let's say, on the legal side, on the governance side, but also on the individual company side.
01:00:31So in order to further develop and deploy artificial intelligence, there is really a need for such a framework.
01:00:40So these three words, for me, are the most important ones.
01:00:45Very clear.
01:00:46Thank you very much, Emma and Johan, for sharing your insights and expertise.
01:00:52We hope you found this webinar useful.
01:00:55If you're interested in reading up more on this topic, please note that Candriam will be publishing a paper on this very topic in a few weeks.
01:01:06So be on the lookout for that on our webpage.
01:01:10Thank you everyone for joining today and have a great rest of your day.
01:01:15Bye bye.
