  • 6 weeks ago
Will regulation turn the EU into the most trustworthy AI power?

Ensuring that artificial intelligence (AI) respects fundamental laws and values and is safe for citizens was the EU's goal when it passed the AI Act.

MORE INFORMATION: http://es.euronews.com/2025/08/19/convertira-la-normativa-a-la-ue-en-la-potencia-mas-fiable-en-cuanto-a-ia

Subscribe to our channel! Euronews is available in 12 languages

Category

🗞
News
Transcript
00:00Artificial intelligence that respects fundamental laws and values and that is safe for citizens.
00:20This was the European Union's goal when it passed the AI Act, a law that will apply to companies developing this technology.
00:28To help implement this legislation from next year, the EU created a voluntary code of conduct, but some companies say that will slow down innovation.
00:38Regulating one of the most revolutionary tools for humanity is the theme of this EU decoded.
00:44The AI Act came into force in 2024 and defines four risk levels for the use of artificial intelligence, from minimal to unacceptable.
00:53It prohibits practices that violate democratic values and human rights.
00:58One example is categorizing individuals by their biometric data, which could lead to unfairly profiling minority groups.
01:06The EU now invites AI companies, such as those behind the generative chatbots ChatGPT, Mistral, Gemini and Claude, to sign a voluntary code of practice on general purpose AI.
01:17By signing the code and adhering to the rules, they are deemed compliant with the AI Act.
01:22Companies that refuse to sign the code may face more stringent expectations and administrative burdens.
01:29Let's hear what Europeans think about this regulatory move.
01:34I feel like if we leave these programs unregulated, they will also, like, they will make harm as much as good as they do.
01:44It has to be like a global thing. So then it's the same rules in China and the same rules here, but also the same rule in America.
01:53Partially, because we have seen that if we can recreate voices, views, faces, videos, that would be regulated and that would be more restrictive rules.
02:05I know that certain things will never be changed by artificial intelligence.
02:10But there are already and the European community will certainly intervene better.
02:14So, yes, I'm calm.
02:16Let's break it down with Cynthia Kroet, senior tech policy reporter at Euronews.
02:21When it comes to the code, the European Commission is taking a somewhat pedagogical approach to this sector.
02:28But what will happen to AI companies if they violate the AI Act once it's fully implemented?
02:37Yeah, I think it's important to know that the AI Act is being implemented over a period of time.
02:42So from 2024 to 2027.
02:45And this August, the rules for general purpose AI models, such as ChatGPT or Gemini, will enter into force.
02:53So that means that the products that were already on the market, such as those examples, they have two more years to comply with the rules.
03:01And everything that's been put on the market after August, they have to comply immediately.
03:06And in case there are breaches, the Commission can impose a fine of up to 15 million euros.
03:12Major players like OpenAI and Anthropic support the code and decided that they wanted to sign it.
03:19But others are refusing, like Meta, which is also an important company.
03:24What are the arguments being used and what is the implication of such a refusal in terms of the global market?
03:31Meta has been, I think since the drafting process of the code started last September, very critical of this whole process.
03:39They say it stifles innovation and they've rolled out a few tools that they cannot fully use in Europe.
03:45Also because of data protection rules, for example.
03:48But in the end, it doesn't matter much if they sign or not, because the AI Act will prevail anyway.
03:54Some analysts claim that this regulation is basically strategic positioning by the EU,
04:01which wants to be perceived as the most trustworthy AI provider.
04:07The USA is highly critical of this approach.
04:11So what can come out of these conflicting positions?
04:16Well, it's not only Europe, I think, that's regulating. China and the US are doing the same.
04:21I think they just take a more sectoral approach instead of the more horizontal one that the EU is taking.
04:27I think the big difference is really investment in the end.
04:31It's mostly China and the US that have a lot of private investment.
04:35Europe is trying to catch up and is trying to mobilize private and public funding,
04:41for example, for gigafactories where they train these AI models.
04:45This code establishes rules on three main aspects.
04:49Copyright, solutions to respect the intellectual property of creative works such as images and academic essays.
04:56Safety, standards for avoiding systemic risks of advanced AI models.
05:01One example would be development of chemical or biological weapons.
05:04And transparency, requiring companies to complete a form on how they comply with the AI law.
05:10Our guest is Laura Lázaro Cabrera, a counsel at the Center for Democracy and Technology.
05:17Denmark, which currently holds the presidency of the EU, wants to simplify the AI Act and other digital rules.
05:28Could this legislation become an empty promise?
05:32The AI Act is already the result of several years of protracted negotiations and hard-fought compromises.
05:39Every sentence in the AI Act, every word, is there for a reason.
05:44So to say that simply a lot of the AI Act can be done away with or can be removed, some sections amended, would be a mistake.
05:54The United States leads the private investment in AI, which is more than ten times the EU's investment.
06:03Beyond the regulation, do you think that the EU is financially committed to providing safer models of AI?
06:12The EU has made great strides towards strengthening the financial support that it provides to AI development in Europe.
06:19Just this year, over 200 billion euros have been announced for AI investment.
06:24We think that finances are an important part of the equation.
06:27And indeed, it is important for the EU to maintain a leadership role in the development of AI.
06:33But we think that that leadership has to be tied to a strong safety framework that promotes fundamental rights and that promotes people-centered AI systems.
06:44Deepfakes, theft of confidential data and suicides linked to the use of chatbots are some examples of the risks and dangers of generative AI.
06:56So in addition to regulating companies, should governments also raise citizens' awareness and provide training on how to use these tools?
07:10Absolutely.
07:11One key element of the AI Act is this notion of AI literacy.
07:15But the obligations around AI literacy are aimed at companies developing AI and companies deploying AI.
07:23We need to see similar approaches building at EU-wide level, targeted this time at people.
07:30Just as the industrial revolutions driven by steam power and the internet impacted different sectors,
07:37so too will AI affect many areas, from defence to life science to energy and manufacturing.
07:44However, as with those revolutions, there are risks to consumers, the environment and the rule of law.
07:50Can the legislation keep up with this lightning-fast innovation?
07:55[inaudible]