Welcome to the world of AI slop. Short, AI-generated clips are flooding the internet, drowning out authentic and original stories. They may look harmless, but they are one of the biggest misinformation tools.
Transcript
00:00Did you ever notice that many AI-generated videos are exactly 8 seconds long?
00:058 seconds! That's all it takes to trigger your emotions and mislead you with fake AI-generated
00:10content. Some AI-generated videos might be funny, others are a major tool of mis- and disinformation.
00:16Welcome to the world of AI slop. Wait, come again?
00:19AI slop is a term for content that is mass-produced with cheap AI tools,
00:23requiring minimal effort and no expertise. These videos have polluted online spaces
00:28and buried original, authentic content. Either the clips are very, very short,
00:32they're 6, 8, 10 seconds, or they are a combination of 6, 8, 10 second clips put together.
00:40In this video, we'll focus on these three questions. Why are these videos so short?
00:45How can we spot them? And why can they be dangerous? But first, do you remember this viral AI video of
00:51Will Smith eating spaghetti? People laughed at the mistakes, but the quality of AI-generated content
00:57has increased rapidly. There has been a real breakthrough in what is called text-to-video,
01:03where you type a prompt describing the video you want, and not only does it create a video of that,
01:09but also an audio stream that's consistent with it. Models like Google's Veo 3 can generate highly
01:16realistic videos with supporting audio. But notice one thing. They are mostly short, usually 8 seconds,
01:22leaving an emotional impact on users. When we don't have enough time to process a clip, our brain responds
01:29much more to emotional cues than to logical reasoning. So why 8 seconds? Videos are also called
01:38moving images because they are made of many still images shown in sequence. Typically, 1 second of video
01:44contains 30 frames; check your mobile camera settings. That means 240 frames, or images, are
01:50needed to create an 8-second clip.
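To make that arithmetic concrete, here is a minimal sketch in Python, assuming the 30 frames per second quoted above (many AI video models actually render at 24 fps):

```python
# Frames needed for a clip = frame rate (fps) x duration (seconds).
# 30 fps is the figure quoted above; 24 fps is common for AI video models.

def frames_needed(fps: int, seconds: int) -> int:
    """Total still images that must be generated for one clip."""
    return fps * seconds

print(frames_needed(30, 8))   # 240 frames for an 8-second clip
print(frames_needed(30, 60))  # 1800 frames for a 1-minute clip
```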
01:56Additionally, experts say creating such videos needs massive computing power. A study explains why
02:02AI-generated videos are so short: current diffusion models for video generation can usually create clips of
02:09only about 10 seconds, or 240 frames, because longer videos require massive computing power, it says.
02:14Trying to stretch them longer often leads to abrupt scene changes or unnatural motion. Researchers are now
02:20experimenting with smarter techniques to make longer videos smoother and more realistic.
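Why stretching a clip is so costly is easiest to see with a toy calculation. One common reason, sketched below under assumed numbers (the patch count is made up for illustration, not taken from any specific model): video models that attend jointly across all frames pay a self-attention cost that grows roughly with the square of the total token count, so doubling the length roughly quadruples that cost.

```python
# Illustrative only: if a model attends jointly over every frame,
# self-attention cost grows roughly quadratically with sequence length.
# PATCHES_PER_FRAME is an assumed value, not from any specific model.

PATCHES_PER_FRAME = 256  # assumed spatial tokens per frame

def relative_attention_cost(seconds: int, fps: int = 24) -> int:
    tokens = seconds * fps * PATCHES_PER_FRAME
    return tokens ** 2

base = relative_attention_cost(8)
for s in (8, 16, 32):
    ratio = relative_attention_cost(s) / base
    print(f"{s:>2}-second clip -> ~{ratio:.0f}x the attention cost of 8 seconds")
# Output: 1x, 4x, 16x -- longer clips get expensive fast.
```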
02:26Some platforms now animate a single image into a few seconds of motion. These are quickly made and
02:32easy to share. It's now trivially easy to type a prompt into Veo 3 to generate a complex scene of multiple
02:39people interacting, speaking or doing some kind of action. It takes you like 5 minutes. Short clips leave little time for critical thinking or fact checking. They are over before you have
02:45the time to doubt or question their authenticity. That makes them highly shareable and dangerous.
02:51Perfect for spreading misinformation. A funny or animated AI video might be harmless, but clips
02:57mimicking serious news events can be dangerous. Here's an example. This video shows a mass funeral after the
03:03deadly Afghanistan earthquake and asks for donations. If you look closely, you'll see a repeating pattern of
03:10white bundles, a common AI mistake, and people merging unnaturally. But its emotional impact convinces many viewers anyway.
03:17The content that we see online is often curated based on who we are following, the communities we are
03:24part of, and what we are liking and sharing. Now, these are things we already relate to and feel
03:31connected with. What it does is it creates a sense of belonging and we naturally want to voice our
03:37opinions or emotions because it reinforces that kind of connection. Another example. A video posted
03:44during severe floods in India shows a train in a river. When you look closely, the typical AI hallmarks
03:50are visible: hands merging, unnatural finger movements, and gibberish text on trains and microphones. The
03:56Instagram video is 16 seconds long, but the scene changes exactly at 8 seconds. Two short clips have essentially been stitched together here.
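You can look for such hidden cuts yourself by measuring how much consecutive frames differ; a sudden spike usually means a hard cut. Below is a rough sketch using OpenCV. The file name and thresholds are hypothetical placeholders, and this is a crude heuristic, not a professional fact-checking tool:

```python
# Rough hard-cut detector: flag frames that differ sharply from the previous one.
# Thresholds are arbitrary illustrations; this is not a production tool.
import cv2
import numpy as np

def find_cuts(path: str, threshold: float = 40.0) -> list[float]:
    """Return timestamps (seconds) where consecutive frames change abruptly."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    cuts, prev, i = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Downscale and grayscale so the comparison is cheap and stable.
        gray = cv2.cvtColor(cv2.resize(frame, (160, 90)), cv2.COLOR_BGR2GRAY)
        if prev is not None and float(np.mean(cv2.absdiff(gray, prev))) > threshold:
            cuts.append(i / fps)
        prev, i = gray, i + 1
    cap.release()
    return cuts

# A single cut at ~8.0 s in a 16-second clip hints at two stitched generations.
print(find_cuts("suspect_clip.mp4"))  # hypothetical file name
```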
04:03So, let's talk about some clues for how to spot this AI slop. Number one. Many of
04:09them lack scene changes. They stay with the same characters and background. Number two. Often there's no
04:16camera movement. These videos do not pan or zoom. Number three. There's gibberish text that doesn't make any
04:22sense. Number four. There are typical AI mistakes such as merging bodies and out-of-proportion body parts.
04:29And number five. When it comes to crises and disasters, they often use high-angle video, as if filmed by drones.
04:35While such footage is possible, DW Factcheck has repeatedly observed this pattern in AI-generated videos
04:41that claim to show destroyed infrastructure in Syria, Gaza or Iran.
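These five clues can be combined into a simple checklist score, as in the sketch below. The weights and threshold are arbitrary choices made for illustration, not a validated detector; treat the result as a prompt to verify, not a verdict.

```python
# Toy "slop score" combining the five clues above.
# Weights and threshold are arbitrary illustrations, not a validated detector.
from dataclasses import dataclass

@dataclass
class ClipObservations:
    duration_s: float
    has_scene_changes: bool      # clue 1: slop often has none
    has_camera_motion: bool      # clue 2: slop rarely pans or zooms
    has_gibberish_text: bool     # clue 3
    has_body_glitches: bool      # clue 4: merging bodies, odd proportions
    drone_style_crisis: bool     # clue 5: high-angle "drone" disaster shots

def slop_score(c: ClipObservations) -> int:
    score = 0
    if c.duration_s <= 10:
        score += 1  # very short: possibly a single generation
    score += (not c.has_scene_changes) + (not c.has_camera_motion)
    score += 2 * c.has_gibberish_text + 2 * c.has_body_glitches
    score += c.drone_style_crisis
    return score

clip = ClipObservations(8.0, False, False, True, True, True)
print(slop_score(clip))  # 8 -> anything above ~4 is worth verifying
```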
04:48Okay, now we understand that people can fall for this content. But is it really that dangerous? Look at this AI-generated news anchor.
04:54She's claiming JK Rowling drowned on her yacht. It's fake, created with Veo 3, but it has the potential to
05:00mislead a large audience and cast doubt on real newscasters. You don't want to be fooled by fakes.
05:06Get your news from reliable sources. Stop engaging with information on social media. That's not what it was
05:14designed for. It is not reliable. Political leaders have also exploited AI videos for their propaganda
05:20and have created false narratives. US President Donald Trump, for example, posted a fake clip of
05:25the FBI arresting Barack Obama in late July. The rise of hyper-realistic synthetic videos has made it
05:31even harder to separate facts from fiction. The clips can fabricate unimaginable scenes or perfectly
05:37recreate real ones. Many fear that they will start casting doubt on authentic, originally produced
05:42videos. Several of these videos have also been fueling racist or misogynist narratives. One
05:48Instagram channel, for example, features AI-generated reporters in bikinis interviewing men. The
05:54one-question, one-answer format targets women with sexist remarks. Of course, this format uses short videos.
06:01See, misogyny has existed in society for years.
06:06But it was usually the subject of locker rooms, or it was limited to private spaces. With the social
06:15media age and AI, it has now become widespread and is also being normalized. The more this kind of content
06:25is seen everywhere, repeatedly and in different forms, the more it starts to feel normal.
06:30So, what's the solution? The first steps should come from social media platforms. Experts have been
06:37proposing for a long time that AI-generated content be labeled. Now, many AI generators have been working
06:43to incorporate watermarks. Google has introduced SynthID, an invisible watermark for all of its AI-generated content.
06:50When they released Veo 3, they also inserted invisible watermarks into every single piece of generative AI content,
06:59whether it's voice, image, or video. The good thing about that is it's much harder for an adversary to remove that.
07:05The bad thing is, you can't see it.
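How SynthID embeds its mark is not public in detail. As a deliberately simplified stand-in, the sketch below hides a known bit pattern in the least significant bits of an image's pixels: invisible to the eye, but checkable by software. Real watermarks like SynthID are far more robust and survive compression and edits; this only illustrates the general idea of an invisible, machine-readable mark.

```python
# Toy invisible watermark: hide a fixed bit pattern in pixel LSBs.
# NOT how SynthID works -- real watermarks survive compression and edits.
import numpy as np

MARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # assumed 8-bit tag

def embed(img: np.ndarray) -> np.ndarray:
    """Write MARK into the least significant bits of the first pixels."""
    out = img.copy()
    flat = out.reshape(-1)
    flat[: MARK.size] = (flat[: MARK.size] & 0xFE) | MARK
    return out

def detect(img: np.ndarray) -> bool:
    """Check whether the first pixels' LSBs spell out MARK."""
    return bool(np.array_equal(img.reshape(-1)[: MARK.size] & 1, MARK))

img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
print(detect(embed(img)))  # True: the mark is present but invisible
print(detect(img))         # almost certainly False on an unmarked image
```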
07:07One of the things you can do is petition your lawmakers: if we do not regulate this, if we don't have,
07:14you know, some kind of solution, then just more and more people will be deceived. And really, there's nothing
07:20you can do about it other than actually getting governments to do something about this problem.
07:23There are some online AI detector tools, but so far, they have not been very accurate.
07:29Therefore, much of the responsibility lies with you, the viewer. With that in mind, what can you do when
07:35you see possible AI-generated content online? Stop when you see a very short video, especially a
07:41single-scene clip. You can't assume all images are real anymore. Check other sources and the credibility of
07:47the account posting it. A simple Google search could clear most of the trash out of your timeline.
07:52And pay attention to emotional reactions. If a short clip instantly makes you angry or sad,
07:58that's the right time to verify where it comes from. The easiest way is to check reputable media
08:02organizations. We need to first understand that the online world is psychologically and emotionally
08:08triggering by its very design. The more aware we are of our own emotional triggers,
08:14the better we can respond thoughtfully instead of reacting impulsively.
08:19So technology is changing rapidly and we have to stay up to date. Remember the Will Smith video?
08:25It's a reminder that we need to be aware of how AI content is evolving.
08:29Whatever I tell you today, six months from now probably won't work.
08:34Shareability drives social media success, but consider sharing responsibly. One cautious share is
08:40better than spreading misinformation. You can find more on this and other topics on our page
08:45dw.com/factcheck.