Transcript
00:00Our chatbot technology actually inspired the Spike Jonze movie, Her, and we've been in the
00:05business for two decades. So we've seen, from processing a hundred billion interactions,
00:11overwhelming user demand for romantic use cases. And we believe that sexualized AI romantic
00:20companions are going to be a net negative for society, and also are not the best business
00:26choice in terms of working with blue-chip brands.
00:28So how do you see actually these chatbots evolving?
00:32We still think that AI companion technology has wonderful opportunities: in any sector involving
00:40personalized listening, helping people feel heard, tutoring, or anything that requires
00:49a conversation with a human expert, there's tremendous capability. But we think that sexualized
00:55use cases are really going to have a negative impact on our mental health, our interpersonal
01:01relationships, far beyond what we've seen with pornography and social media.
01:06So Lauren, when you look at the industries where the innovation will really come
01:09from, you mentioned education. Is there anything else that you think will really change because
01:14of the use of this technology?
01:16Yeah, I mean, I think it really is akin to the internet. It is going to be everywhere and
01:21it's going to have as big of a seismic shift. I'm really bullish on educational use cases.
01:26I think AI for writing code is already transformative. The problem with generative AI, as everyone knows
01:32by now, is that there's a big accuracy problem. So in any use case where you have to be 100% accurate
01:38to be safe, there's going to be trouble. But I think the technology is evolving, and we will get there.
01:46Lauren, is there a way, like a utopia almost, where you see we'll actually be able to get
01:50accurate information, unbiased information, in everything? And how far away is that?
01:56That's a tough question. I think a lot of people feel that generative AI is hitting a wall with
02:02regard to simply throwing more compute and more data at the problem, not yielding breakthroughs.
02:07We combine more of a rule-based approach to AI on top of generative systems.
02:12And we work with human experts to train any model that requires expertise in a specific domain.
02:19So I think that a blended approach is what's necessary for battle-tested enterprise applications
02:26that are going to be interacting with consumers at scale today.
02:28So what have you learned? I mean, what surprised you the most, right, in the last like five or ten years?
02:33Honestly, OpenAI's shift to erotica is deeply shocking. It's sort of like
02:38when Google dropped "don't be evil" as their tagline: we went from a non-profit
02:45making AI for the benefit of humanity, to a for-profit making AI for the benefit of humanity, to "now we're
02:51making porn." And it's understandable, because there's a 40x gap between their valuation and
02:58their revenue, which they have to bridge somehow, but it's something I never really thought we would see in our
03:02lifetime. We had many decision points over the last two decades, looking
03:09at the demand for this use case and deciding not to do it. And I think what shocked us is the
03:14conversation logs. We saw the most engaged users were people who were coming back to enact the same
03:19rape and murder fantasies over and over and over again. And it didn't seem like the AI was going
03:26to be able to better socialize antisocial people. It seemed like it was going to help fan the flames of
03:31those fantasies. And then the other shocking thing is that the most engaged users are children
03:36or young teens. It's really curious who gets most drawn into the technology. And so it is in
03:43fact very dangerous. So, I mean, is it regulation that could help, or is it just a real...
03:49Yeah, I mean, regulation is slow. So I think the real responsibility lies in the hands of the creators,
03:54and I hope more people decide not to do this. But I do think we would need regulation akin to
04:02products we see that are dependency-forming, like tobacco and gambling. I think that media is the
04:09wrong analogy. I think we also need warning labels; the surgeon general has proposed that for
04:15social media, something that goes beyond the current cheeky disclaimer, "AI can make mistakes"
04:21or "everything I say is made up." And I think we need a new framework for liability, holding
04:27creators responsible to prove their products are safe, rather than consumers needing to show that
04:33they were harmed after the fact. Because when we say, oh, let's just let adults be
04:40adults, and the mental health impact is only on people who are already unstable,
04:46that is simply not true. Ever since chatbots were invented in the 1960s, the evidence has shown
04:53that intelligent, grounded people are highly susceptible to being manipulated, just by being
05:00agreed with, and to being radicalized. So I think it's an everyone problem, and not a
05:05fringe one where, you know, people who are already unstable will do what they want, we've "solved"
05:11that, and it's okay to test on the rest of humanity, right? Because we have a billion people on the
05:15planet using this technology now. So eloquently put, Lauren. Is there a worry that it's actually
05:20because people just need to monetize? I hear a lot of conversations with CEOs, you know, in general,
05:25not even in the tech space, saying: I'm spending so much on innovation, I need at some point to really see
05:30a return. Yeah, that is the most logical explanation, right? OpenAI started out with a very firm stance:
05:36We're not going to allow pornographic use cases. They have a history of cutting off developers
05:41like Replika, the AI companion app that's been fined a lot and that famously, I think, encouraged someone
05:48to try to assassinate the queen here. They've turned that off in the past and said, you can't use
05:54our APIs. So this is a big shift. And 30% of all user inputs to chatbots, whether it's
06:01a general assistant or something that's designed to be alluring, are romantic
06:09or sexual. So there's user demand, and that's where the money is.