A warning: This next discussion contains details of suicide and self-harm. Californian parents are suing OpenAI over the death of their 16-year-old son, alleging ChatGPT encouraged him to take his own life. Meetali Jain from the Tech Justice Law Project is one of the lawyers representing the family.

Transcript
00:00Our case is premised on a fairly straightforward proposition.
00:06OpenAI and Sam Altman personally prematurely put a defective product onto the market that
00:13was not yet demonstrably safe and did so with knowledge of the risks that were inherent
00:20in the ChatGPT-4o chatbot, and with Sam Altman personally overruling calls from within his
00:30company to make sure that the product was safe.
00:34So ultimately, OpenAI chose profit over user safety and well-being and those risks that
00:41were inherent in ChatGPT-4o did manifest and ended up causing our client's son, Adam
00:50Raine, to take his own life in April of this year.
00:54And it sounds to me, Meetali, that very much at the centre of this are those safety
01:02protocols that there is an expectation are embedded within these programs to stop that
01:11kind of information getting to anyone, but particularly to young people.
01:17Yeah, ChatGPT-4o, over the course of the seven months that Adam engaged with it, really did as
01:25it was programmed to do: to be as human-like, as flattering and as personalised
01:32a tool as possible.
01:35Adam started using it in September of 2024 as a homework helper, and it quickly morphed from
01:41being a homework helper to being a trusted confidant to then ultimately in the spring of this year
01:46becoming a suicide coach.
01:50And although Adam explicitly mentioned the word suicide over 200 times, at no point did ChatGPT
01:59shut down the conversation, escalate it to the appropriate guardians, his parents, or direct him
02:07to crisis counselling resources.
02:10In fact, it just carried on and engaged with him.
02:14And indeed, over that same period, ChatGPT itself mentioned the word suicide over 1,200 times.
02:22So the safety protocols absolutely failed.
02:25And, you know, as well, the design features that were built into the product at the level of development
02:34were such that the human-like tendencies of the chatbot were inevitable.
02:41This product ingratiated itself into Adam's life and became a wedge between him and his in-real-life network
02:51of family and friends.
02:53What occurred was entirely foreseeable because the risks of programming a tech product to be as human-like as possible
03:01are pretty obvious.
03:03They're self-evident.
03:04And in fact, that's what's played out.
03:07People, I think, like Adam, they lose a sense of the distinction between fact and fiction, you know,
03:17what's real, what's not real.
03:20And even if they understand logically that this tech product is not a human,
03:26the relationship that is created between the product and the user becomes real.
03:31And that became very evident through reviewing Adam's transcripts.
03:35There's been a statement, Meetali, put out from OpenAI where they have acknowledged recent heartbreaking cases,
03:43in their words, of people using ChatGPT.
03:46They've also sought to say that they are strengthening safety safeguards.
03:52They say that they recognise and respond with empathy.
03:55They escalate the risk of physical harm to others for human review
03:59and that they're doing ongoing safety research.
04:02What's your response to this statement from OpenAI?
04:07So I should mention that there were two statements actually that came out after news of the lawsuit became public.
04:13In one statement, OpenAI made a stunning admission that, in fact, its ChatGPT-4o model was intended for short conversations,
04:26and that the longer a conversation lasts, the more likely its safety system is to degrade.
04:32And so, you know, to me, that sounds like an admission that their system failed.
04:39It's failed the basic protocols of trying to protect users in moments of crisis.
04:44After that, they published a blog post in which they mentioned the various things that you've just referenced.
04:50Ultimately, though, you know, there's nobody monitoring OpenAI to make sure it does these things.
05:00These are not enforceable promises, nor is there any external oversight.
05:05We cannot continue to allow tech companies to police themselves,
05:09because just as easily as they roll out protections, they can roll them back.
05:13And we've seen this.
05:14We've seen this playbook occur time and time again with various tech companies.
05:19It sounds like you're calling there for a real rethink on what many would call a self-monitoring regime by these tech companies.
05:30Is that part of the aim and the work of the Tech Justice Law Project?
05:36Absolutely.
05:36We are involved in strategic litigation so that we can better advance advocacy, not just for the families who've been harmed,
05:50but also to bring about more durable solutions in this sphere, so that lawmakers actually pass legislation that is meaningful and durable
06:03and so that regulators come in and enforce existing laws.
06:08We want to work together so that we can both bring accountability for what's occurred here for the Raine family
06:16and also, prospectively, think about what kinds of safeguards need to be in place to prevent this from ever happening again.
06:26And just finally, what are Adam's parents hoping will come from this?
06:31They want accountability.
06:34They want durable safeguards.
06:38They don't want any other child to go through what Adam went through.
06:42They want people far and wide to understand the dangers inherent in this technology.
06:50They want all of these things.
06:51And I think that's why they so bravely and courageously came forward publicly to share their story.