00:00 Alexandra Ebert is from MOSTLY AI, which is working to create privacy-safe and ethical synthetic data.
00:08 Thanks very much for joining us.
00:10 What's your view on the EU code of practice?
00:13 Has it got it right, as right as we can get it, in an evolving environment?
00:20 I think that's the big question. Where is the environment going to evolve to?
00:24 And I do understand the position of some of the big tech firms arguing that some of the suggestions amount to regulatory overreach.
00:31 And of course, it's understandable that they're worried about the transparency requirements when it comes to the training data that goes into frontier models,
00:40 which is, to put it that way, a sensitive topic.
00:44 On the other hand, when I think about the European Commission's approach, I'm concerned
00:49 about what this will mean for the innovation ecosystem in Europe at large, not only with this code of practice,
00:55 but with the enforcement and everything that is yet to come under the AI Act,
01:00 and how they strike the balance between innovation and the trustworthy, reliable AI they want to achieve.
01:07 So there's clearly a role for companies who want to grow this and take it as far as it can go, as quickly as it can.
01:14 And many governments are saying, right, we need to make sure this is safe for our citizens.
01:19 Who ultimately has the power to make sure that any codes are actually enforced?
01:25 I mean, enforcement, of course, should be the responsibility of the governments and the economic union, as we have it in the European Union.
01:35 The question is, with their focus on regulation and not necessarily as much emphasis on building up a flourishing AI innovation ecosystem,
01:44 will Europe, or the EU, manage to get into a position where it is not just a consumer of foreign tech,
01:51 but is actually also shaping the future of AI?
01:54 Because this will be necessary to ensure that we don't only have models that come with values from other economic regions and other nations,
02:02 but can also achieve the European Commission's vision that trustworthiness and adherence to human rights,
02:09 as the European Union sees them, are baked into these systems.
02:13 As an industry insider, are you worried about the potential harm or misuse of AI?
02:19 Are you seeing any worrying trends emerging?
02:23 For me, the most worrying trend is actually this enormous hype and polarization about AI.
02:28 We see this fear mongering.
02:30 We see the narratives from Silicon Valley about reaching artificial general intelligence.
02:35 And in my opinion, that's predominantly a distraction.
02:38 It's a harmful distraction, because there are issues that we need to tackle today:
02:42 mis- and disinformation in AI systems, discrimination, the topic of data democratization,
02:47 and ensuring that this resource that fuels AI progress can actually benefit other parts of society beyond only the economy and the big tech sector.
02:58 So there's a lot that needs to be addressed.
03:00 But with the discussions focusing on AGI, we are in this collective freeze response,
03:06 and not necessarily addressing these really critical issues to the extent that they require.