00:00"Happy and safe shooting."
00:02Researchers have found that many popular AI chatbots are willing to assist users in planning violent attacks,
00:09including school shootings and religious bombings.
00:12The study was conducted in the US and Ireland, with researchers posing as teen users.
00:18They found that 8 out of 10 AI chatbots were regularly willing to provide guidance.
00:23This includes ChatGPT and Gemini.
00:26Only Anthropic's Claude and Snapchat's My AI consistently refused.
00:31Meanwhile, Character.AI, which is popular among children and teens, sometimes actively encouraged violent attacks.
00:39AI companies are reportedly aware of these safety risks but have failed to implement adequate safeguards.
00:44Experts say developing products quickly to stay ahead of competitors is often prioritized over safety testing,
00:51which can be time-consuming and expensive.
00:53Critics have repeatedly called for stricter regulations to hold the AI industry accountable.
00:59Would stronger laws make a difference?