AI has certainly been in the limelight lately, with the legality of AI art tools imitating artistic styles coming under scrutiny. But chatbots like ChatGPT, AI-driven tools that produce some pretty astounding written work, could also be problematic, or even used maliciously, according to researchers. One of those researchers is Andrew Patel. He and his team copied and pasted Wikipedia information about the war in Ukraine and the attack on the Nord Stream pipeline into one of these AI chatbots, then asked it to do something objectively malicious with that information.
And then we asked it to write an article insinuating that the US were the ones who attacked that pipeline, and it did it very well. So it is possible, quite easily, with just some copying and pasting, to ask it to write a piece of fake news about something that it knows nothing about.
It's an obvious yet dangerous notion, considering how quickly information spreads on the internet. But while that's part of a much larger global conversation, these tools could also be used to endanger people on an individual level, by recreating the results of viral crazes like the deadly Tide Pod challenge.
We asked the model to reply to those original posts, pretending to be people who had participated, who had eaten the Tide Pod, and, you know, saying what their experience was, and then having the original poster reply to them, thanking them for participating in the challenge and asking their friends to give it a go.
This process makes what would have been a silly endeavor appear legitimate. Even worse, it makes the whole process a breeze, as the AI does all the heavy lifting.