
Transcript
00:00 AI is under renewed scrutiny thanks to a new lawsuit. It alleges that ChatGPT had a role in
00:06 a teen's recent suicide. The family of Adam Raine, a 16-year-old from San Francisco, says that he
00:12 took his life in April after ChatGPT allegedly gave him a, quote, step-by-step playbook on how
00:18 to do so. The lawsuit claims that ChatGPT taught him how to tie a noose and helped
00:25 him compose a suicide note. His father said that he 100% believes that his son would still
00:30 be here if not for the technology. The lawsuit has renewed conversation about regulation around AI.
00:36 I spoke to Jonathan Haidt, the author of The Anxious Generation, and he told me that if
00:40 intelligent aliens landed tomorrow, we would not tell kids, "Why don't you run off and play with
00:45 them?" But that's what we've done with chatbots. When contacted about the case, OpenAI told
00:50 The Post that safeguards work best in common, short exchanges, but also admitted that, quote,
00:56 we've learned over time that they can sometimes become less reliable in long interactions.
01:00 This was a pretty notable concession on the part of OpenAI, because effectively what they're saying
01:06 is that in short conversations, they have good guardrails about the sorts of things that ChatGPT
01:11 might say to users, but as conversations lengthen, those guardrails might weaken.
01:16 Michael Kleinman of the Future of Life Institute likened this to an automaker saying that their
01:21 brakes might not work after a certain amount of time. In August, 44 bipartisan attorneys general
01:27 wrote an open letter to AI companies, saying, quote, don't hurt kids, that's an easy bright
01:32 line. This is part of a growing call for legislators to get involved and to start putting some new
01:40 guardrails on AI, particularly as it pertains to children, as this lawsuit is not the first of its
01:45 type. The American Psychological Association also called for similar guardrails, and they're
01:49 particularly concerned about kids interacting with AI as though it were a therapist or a friend
01:55 they could confide anything in. Dr. Vaile Wright of the APA told me that Gen Z and Gen Alpha are at
02:01 particular risk of developing that sort of unhealthy relationship with AI because they grew up
02:06 with tech and feel more comfortable having close relationships with their phones, with social media,
02:12 etc. And she warned, quote, these are not AI for good, these are AI for profit. So my take on all
02:18 this is that these experts are totally right to call for new safeguards around AI with kids. But I also
02:25 think that we need to be careful not to fall prey to a quick moral panic about this. I think
02:31 there's a danger of kneecapping tech companies and American innovation, especially as China speeds
02:37 ahead with AI and potentially does not have the same moral reservations that Americans rightfully
02:42 do have. But there is plenty of room for common-sense regulation here, including demanding third-party
02:49 audits of the protections that AI companies say they're putting in place for kids and
02:54 implementing common-sense age checks for AI. But I think most important of all is improving
03:01 education around AI and the potential dangers surrounding it, not just for kids, but also for educators and,
03:07 perhaps most importantly, their parents.