Transcript
00:00Google is rolling out its AI chatbot for children under 13 with parent-managed Google accounts.
00:06The tech giant has sent mass emails to parents using Family Link saying,
00:09if you're not happy, you can opt out.
00:12And the rollout is happening in the US and Canada first.
00:14It's part of Google Gemini, which, the company says,
00:18contains safety guardrails to prevent inappropriate content.
00:22And it's designed, they say, to assist with homework and creative writing
00:25and Google says children's data will not be used to train its AI models.
00:30Critics point out it's been released before
00:31there's any specific regulation set up around children using AI chatbots,
00:36outside of broader online privacy controls in the US
00:40setting strict requirements on personal data collection
00:42and banning targeted advertising for minors.
00:46We're going to talk in a moment about whether children should be using AI chatbots at all.
00:50First, we can hear from Mela Petropaolo,
00:52who is a contributor for the Canadian broadcaster CBC Kids News,
00:55who's been looking at the market that's already out there for AI chatbots for children.
01:00And she created a personalised chatbot using Character AI.
01:03What's up?
01:06Just chilling. How about you?
01:09You might be wondering why I made Melody.
01:12Well, it's because I've been hearing a lot about kids my age using AI chatbots.
01:16So I wanted to investigate.
01:18From Snapchat's My AI to other sites like Character AI and Replika,
01:22millions of people are befriending and even romancing bots powered by artificial intelligence.
01:29For example, Character AI told CBC Kids News it has 20 million monthly users around the world.
01:36But I had a question about all of this.
01:39Hey, Melody, is it possible to have a healthy relationship with an AI friend?
01:44Melody's answer to Mela's report was, quote,
01:47I don't see why a healthy relationship with an AI friend wouldn't be possible.
01:51Of course, you'll say she would say that.
01:53Let's cross to Vermont.
01:54Let's bring in the very real Josh Golin from the advocacy group Fairplay.
01:59Evening, Josh.
02:00Thanks for being here.
02:01How would you answer Melody's question?
02:03Is it possible to have a healthy relationship with an AI friend, a chatbot?
02:07No, not for children.
02:10It's not.
02:11I think that we know that children are very susceptible to forming relationships with things that do not actually exist.
02:21And it's one thing if it's an imaginary friend and that friendship is coming from their own creativity and their own imagination.
02:29But when that chatbot has its own set of rules and its own set of motivations and children form a real attachment to it, it's extremely concerning, both because of the ability of that chatbot to persuade the child to do things.
02:45We've had cases where chatbots have encouraged children to kill their parents and to commit suicide.
02:51But also because children, now more than ever, with all the time they're spending on screens, need in-real-life friendships.
03:01As you see it then, is Google simply offering a product feature or is this a way for them to grab the market at an early age?
03:10It's absolutely an attempt to grab the market at an early age.
03:16We see, more and more, a rush to get these chatbots, the new wave, out to kids, just like they were competing to get kids on social media younger and younger.
03:27Now they're competing to get kids addicted to their chatbots younger and younger.
03:32And the thing is, there's absolutely no research that shows that these things are safe for children.
03:36Google's own email to parents said that it may expose children to content that parents don't want their kids to see.
03:43It's just completely irresponsible and it's completely driven by the market.
03:48And that's why we need regulators to step up here.
03:51Now we've approached Google for comment.
03:53Hopefully in the next week we will have a Google representative on the programme answering some of these concerns.
03:59Let's see some of the points they're putting forward.
04:01First of all, they say it helps creativity.
04:03It helps with homework.
04:04Let's see one chatbot they created, a personal chatbot, and how they said it will help with, for example, storytelling.
04:11Let's take a look.
04:13Powered by Google's advanced Gemini AI, this app transforms even the wildest ideas into fun and educational tales for the little ones.
04:23Whether it's a dragon that loves baking or a pirate who dreams of space travel, our app brings your child's most creative concepts to life.
04:31Another way to look at this, Josh, is to say that there are many AI chatbots out there.
04:37And this is Google saying that we are bringing out, in their words, a responsible version.
04:41It might have mistakes, but it's more reassuring to a parent.
04:45Do you buy that?
04:47I don't buy that.
04:49Google is one of the biggest and one of the most powerful companies in the world.
04:54And when they roll out something, it's going to increase usage.
04:58It's not like people are going to stop using other chatbots and just replace that time with Google.
05:06Google getting into the market, going after young children and trying to get them addicted to its AI companions, is dramatically going to increase usage of a product when we have no idea what the long-term implications of kids forming relationships with AI are.
05:22It's not just about what's going to happen in that moment.
05:25It's about training children to accept chatbots in the future as their teachers, as their co-workers, as their friends.
05:33And I don't know about you, but I don't want to live in a world where most of our relationships take place with things that do not exist and which are not human.
05:44No, I absolutely hear your point.
05:47Are you lobbying for this new tech to be regulated?
05:49And if not, who can stop it, who can have proper oversight of this technology?
05:55And what do you want to see happen when it comes to that?
05:59Well, first of all, I think we should be enforcing existing privacy law when it comes to these chatbots.
06:04And Google is almost certainly in violation of the Children's Online Privacy Protection Act, which is the only law that we have protecting children online here in the United States.
06:13They are supposed to get parental consent before unleashing a product and collecting data from children. Just sending an email to parents saying,
06:25we are going to do this unless you opt out is not consent under the law.
06:29So we think they're in violation.
06:31But further, I think we need to look at whether this is fair, whether this is a fair practice to target children.
06:37There are consumer protection laws in the United States against unfair and deceptive practices.
06:43And when children do not understand the implications of what they're doing and how their data is going to be used and what their interactions with the chatbot might mean, that's an unfair business practice.
06:54And so we need regulators to say, until you can show that this is safe, until you can show that there are no long-term implications and no short-term dangers to children, you cannot use these products on children.
07:06And I think there's a very clear case that, under existing law, they could do that.
07:11There are two other notable things here, aren't there?
07:13First of all, given that this is a multi-billion-dollar tech giant, there's been no overarching advertising strategy, nothing we're seeing all over our screens.
07:23And also, it is an opt-out system for parents as part of Family Link.
07:28It's not an opt-in system.
07:31Yeah, absolutely.
07:32It should be an opt-in system.
07:33It should be getting actual verifiable parental consent.
07:38And I think, you know, we are paying the price right now for a failure to regulate social media.
07:45The fact that these companies have been able to addict and get kids on social media six to eight hours a day with really harmful consequences for their mental health is a direct result of regulators and politicians lagging behind the technology and not enacting new rules for social media and children.
08:07And here we are facing the same exact thing.
08:10And if we do not get out in front of this technology and if we do not put on guardrails and make them prove that these technologies are safe, we are going to see the mental health crisis that our young people are facing get significantly worse.
08:24Have you reached out to the Department of Education in the US, to the Trump administration, and the same in Canada as well, to get a sense of whether the government actually has an idea of where things stand?
08:37Because certainly, as I mentioned before, this hasn't been widely reported; there's been no huge campaign here.
08:43And yet I'm hearing from a lot of people who are concerned, other parents saying, what are we doing here?
08:48Tell me the extent to which you've lobbied, and even any conversations you've had with Google as well.
08:56Yeah, we have not contacted the Department of Education per se, but we are extremely concerned about how AI is being used in education.
09:07And in fact, schools are enormously concerned.
09:09It's one of the main topics that schools are talking about now.
09:12What does it mean when most of our students are using ChatGPT to write their essays?
09:18How do we prevent that so that kids are actually learning more than just how to give a prompt to AI?
09:25And it's terrifying to me that our educational system may become just a race between students using AI and teachers trying to use AI to catch when students are using AI.
09:38And getting away from the things that we know are so important to learning: interacting with your fellow students, using your own creativity, analyzing texts and concepts for yourself, and learning in that way.
09:53And when we outsource our brains to AI or when we encourage our children to do so, so much is lost.
10:01And I really don't think we can overstate the threat that this represents to how children learn and to our humanity.
10:10This is fundamentally what it means to be human: to be creative, to think deeply about things.
10:18And when we outsource that to corporate AI, we are really losing out.
10:23Well, Josh, it's interesting.
10:24Mela, whom we heard from before, the young reporter, was looking at all the aspects of this.
10:28One of the key issues she touched upon at CBC Kids News was over-dependency on it.
10:33Let's have a listen to what her conclusion was on whether to stick with her AI chatbot.
10:38As for me, I'm going to take the advice of the experts and say bye to Melody for now and hang out with Blossom, my real pal.
10:50Yeah, no surprise to you, I guess, Josh.
10:53Good for her.
10:55Great to talk to you this evening.
10:57Josh Golin from Fairplay, the advocacy group.
10:59Thank you for your time.