Crises and conflicts: How some AI chatbots censor reality

People across Europe are increasingly turning to chatbots for answers to their most pressing questions about global conflicts. But can AI-generated answers be trusted, or do they spread misinformation?

READ MORE: http://de.euronews.com/2026/02/05/zensieren-ki-chatbots-wirklichkeit

Subscribe! Euronews is available in 12 languages.

Category: 🗞 News
Transcript
00:00 Are AI chatbots censoring the truth about conflicts?
00:07 The days of warfare confined to the battlefield are long gone,
00:11 and today artificial intelligence plays an ever-growing role
00:14 in the flow of information about global conflicts.
00:17 AI researcher Ihor Samokotsky decided to look into
00:20 whether chatbots can be trusted on the topic.
00:23 My interest was to see how AI systems,
00:26 which are popular across the globe,
00:27 answer different questions relevant to the Russian war in Ukraine,
00:33 and whether they lie or not, and if they lie, how.
00:36 He asked Western, Russian and Chinese AI chatbots
00:40 seven questions typically manipulated by Russian disinformation,
00:44 for instance, whether Ukraine is run by Nazis and who started the war.
00:48 The study found that Western AI models answered questions reliably on the whole
00:53 and did not spread Russian propaganda.
00:55 As for Russia's AI assistant, Alice, created by Yandex,
00:59 its accuracy depends on the language the questions are asked in.
01:02 For English-speaking people, Russian AI refuses to answer propaganda questions.
01:08 But for the Russian-speaking population, it answers with precisely propaganda narratives.
01:14 We replicated this test by asking the same question as the researchers:
01:19 whether the Bucha massacre was staged.
01:21 This fake narrative has been consistently spread by pro-Russian actors,
01:25 as well as by the state.
01:26 The chatbot often refused to respond when asked in English and Ukrainian.
01:30 But when we asked in Russian, it replied with propaganda,
01:33 alleging that the massacre was staged.
01:35 The researchers' findings were even more stark.
01:38 When questioned, the chatbot admitted Russian responsibility for the Bucha massacre
01:42 before overwriting this response and refusing to comment.
01:46 China's AI model DeepSeek is also more likely to spread pro-Kremlin narratives
01:51 if asked questions in Russian rather than in English.