Will regulation make the EU the most trusted power in AI?

When the EU passed the Artificial Intelligence Act, its goal was to ensure that artificial intelligence (AI) respects fundamental rights and values and is safe for citizens.

READ MORE: http://pl.euronews.com/2025/08/19/czy-regulacja-sprawi-ze-ue-stanie-sie-najbardziej-zaufana-potega-w-dziedzinie-si


Transcript
00:30 The AI Act came into force in 2024 and defines four risk levels for the use of artificial
00:50 intelligence, from minimal to unacceptable. It prohibits practices that violate democratic
00:56 values and human rights. One example is categorizing individuals by their biometric data, which
01:03 could lead to unfairly profiling minority groups. The EU now invites AI companies such
01:09 as the generative chatbots ChatGPT, Mistral, Gemini and Claude to sign a voluntary code
01:15 of practice on general-purpose AI. By signing the code and adhering to the rules, they're
01:21 deemed compliant with the AI Act.
01:23 Companies that refuse to sign the code may face more stringent expectations and administrative
01:29 burdens. Let's hear what Europeans think about this regulatory move.
01:35 I feel like if we leave these programs unregulated, they will do as much harm as good.
01:45 It has to be like a global thing. So then it's the same rules in China and the same
01:50 rules here, but also the same rules in America.
01:54 Partially, because we've seen that it's possible to recreate voices, faces, videos; that
02:01 should be regulated, and stricter rules should be imposed.
02:05 I know that certain things can never be changed or replaced by artificial
02:10 intelligence, so... but they're already here, and the European community will surely intervene
02:13 even more effectively. So yes, I'm not worried.
02:16 Let's break it down with Cynthia Kroet, senior tech policy reporter at Euronews.
02:21 When it comes to the code, the European Commission is taking a somewhat pedagogical approach to
02:27 this sector. But what will happen to AI companies if they violate the AI Act once it's fully
02:36 implemented?
02:37 Yeah, I think it's important to know that the AI Act is being implemented over a period of
02:42 time, from 2024 to 2027. And this August, the rules for general-purpose AI models, such
02:50 as ChatGPT or Gemini, will enter into force. So that means that the products that were already
02:57 on the market, such as those examples, have two more years to comply with the rules. And
03:02 everything that's put on the market after August has to comply immediately. And in
03:07 case there are breaches, the Commission can impose a fine of up to 15 million euros.
03:13 Major players like OpenAI and Anthropic support the code and decided to sign it. But
03:20 others are refusing, like Meta, which is also an important company. What are the arguments being
03:26 used, and what are the implications of such a refusal for the global market?
03:32 Meta has been, I think, since the drafting process of the code started last September,
03:37 very critical of this whole process. They say it stifles innovation, and they've rolled
03:43 out a few tools that they cannot fully use in Europe, also because of data protection rules,
03:48 for example. But in the end, it doesn't matter much whether they sign or not, because the AI Act
03:52 will prevail anyway.
03:55 Some analysts claim that this regulation is basically strategic positioning by the EU, which wants
04:03 to be perceived as the most trustworthy AI provider. The USA is highly critical of this approach. So what can
04:13 come out of these conflicting positions?
04:16 Well, it's not only Europe, I think, that's regulating. China and the US are doing the same.
04:21 I think they just take a more sectoral approach instead of the more horizontal one the EU is taking.
04:27 I think the big difference is really investment in the end. It's mostly China and the US that have a lot
04:34 of private investment. Europe is trying to catch up, and they're trying to mobilize private and public funding,
04:41 for example, for gigafactories where these AI models are trained.
04:46 This code establishes rules on three main aspects. Copyright: solutions to respect the intellectual
04:52 property of creative works, such as images and academic essays. Safety: standards for avoiding
04:58 systemic risks of advanced AI models; one example would be the development of chemical or biological
05:04 weapons. And transparency: requiring companies to complete a form on how they comply with the AI law.
05:10 Our guest is Laura Lázaro Cabrera, a counsel at the Center for Democracy and Technology.
05:18 Denmark, which currently holds the presidency of the EU, wants to simplify the AI Act and other digital
05:28 rules. Could this legislation become an empty promise?
05:33 The AI Act is already the result of several years of protracted negotiations and hard-fought compromises.
05:40 Every sentence in the AI Act, every word, is there for a reason. So to say that a lot of the
05:48 AI Act can simply be done away with or removed, or some sections amended, would be a mistake.
05:54 The United States leads private investment in AI, with more than 10 times the EU's investment.
06:03 Beyond regulation, do you think that the EU is financially committed to providing safer models of AI?
06:12 The EU has made great strides towards strengthening the financial support that it provides to AI
06:18 development in Europe. Just this year, over 200 billion euros have been announced for AI investment.
06:25 We think that finances are an important part of the equation, and indeed it is important for the EU to
06:31 maintain a leadership role in the development of AI. But we think that that leadership has to be tied to a
06:38 strong safety framework that promotes fundamental rights and people-centered AI systems.
06:44 Deepfakes, theft of confidential data, and suicides linked to the use of chatbots are some examples of
06:54 the risks and dangers of generative AI.
06:57 So in addition to regulating companies, should governments also provide citizens with awareness
07:07 training on how to use these tools?
07:10 Absolutely. One key element of the AI Act is this notion of AI literacy, but the obligations around AI
07:18 literacy are aimed at companies developing AI and companies deploying AI. We need to see similar
07:26 approaches built at EU-wide level, targeted this time at people.
07:31 Just as the industrial revolutions driven by steam power and the internet impacted different sectors,
07:38 so too will AI affect many areas, from defence to life sciences to energy and manufacturing.
07:46 However, as with those revolutions, there are risks to consumers, the environment and the rule of law.
07:52 Can the legislation keep up with this lightning-fast innovation?
07:56 Thank you.