More young people are turning to AI when they feel overwhelmed. It's instant, it's private, and it doesn't judge. But is that enough? We explore what this shift really says about the emotional world youths live in today.
Transcript
00:00Have you ever turned to AI, any form of AI, when you are feeling low, or even when you
00:13want to make an important life decision? But have you ever thought whether it is safe or
00:18actually healthy? That's exactly what we are going to explore in tonight's episode of It's
00:23About Youth. I'm Amril Aiman and joining me tonight to discuss or unpack this topic are
00:28Ashwita Munyandi, Research Assistant at PKAS, UKM, and also Joan Chen, Publicity Director of
00:36Befrienders KL. Thank you so much for joining me. This is such a popular topic among
00:40youths today and let's start by understanding the landscape. Ash, I'd like to begin with
00:46you first. When you look at how today's youth are turning to AI for mental health support,
00:53what surprises you the most, or what trends are we looking at now?
00:56Ashwita Munyandi: So for that question: to access AI, for example AI tools like
01:03ChatGPT or Gemini and so on, rather than focusing on the accessibility and availability of them, what
01:09surprises me the most is how, once they have used ChatGPT to get advice,
01:18or even feedback on what their situation is all about, they get a response that is much more
01:25comforting, much more gentle. It's that validation without consequences that gives them the warmth.
01:33It makes the teens and youths of today keep going back to ChatGPT rather than going to
01:40human connection, or rather than finding professional help. The way they have turned to ChatGPT and other AI tools
01:50has reduced their help-seeking habit, because they are able to explain their situation and articulate their emotions freely. They don't have to deal with lectures, and they don't have to socially filter it to be accepted.
02:14They can say it in a very raw manner and ChatGPT will still validate their perspective, which in return has made these youths not seek help.
02:26They do not seek help from the people in their circle because they feel they have gotten enough advice, enough tools, from these AI tools.
02:36I've read research saying that more youth are turning to AI nowadays, and AI is here to stay.
02:45Ash, just a quick follow-up on that. Do you think this trend says more about how fast the technology has grown, or does it reveal something about the gaps in the mental health support systems that we have today?
02:56Okay, first you said about how fast and then the gap, right?
03:02The gap, yeah.
03:03Yeah, so I feel like before we turn to professional help, we have our circle. In psychology we have these three systems: the microsystem, the mesosystem and the macrosystem.
03:16Meso and macro are more about the policies and the communities, but the microsystem is our friends, our family. Why are we not turning to them first, instead of turning to AI tools for advice?
03:28I believe it's the emotional gap that has brought us to where youth these days are turning to AI tools first.
03:37It's not about how fast. Our friends and family are there as well. We can always reach out to them.
03:42But it's the emotional gap in how they address it, whether they're addressing it with control or with curiosity.
03:50Control as in, once they've heard the situation, they react in a way that might trigger the person who has shared; they might ask follow-up questions that make them feel uncomfortable,
04:03or they will jump straight into some problem-solving approach rather than hearing them out and asking what it is they're really feeling.
04:12It's not with curiosity all the time. It's control, and sometimes sharing with parents will trigger the parents' panic.
04:19So I believe it's the emotional gap that has brought our youth to turning to AI tools first, to test the ground before articulating those expressions and emotions to someone else.
04:33That's an interesting insight, Ash. Now let's bring in the human support perspective.
04:37Joan, from your experience at Befrienders, talk to us a little bit more about how young people are using AI, or perhaps mental health apps, when they're struggling emotionally.
04:47And if you're comfortable sharing, have you ever used AI for mental health support or anything?
04:52Thank you for your question. I'll answer the second one first.
04:57No, I have not ventured into using AI for mental health reasons.
05:02I think it's just because of what I do as well professionally, so I've not really gone there.
05:07But I definitely see the appeal and why young people are turning to AI instead of the people around them.
05:15I think, like Ash mentioned, it's also the emotional gap, in that they just don't know how, or they're not able, to share with the people around them.
05:25And so AI feels like a very safe space, right? It's non-judgmental; whatever you say, it will reply to you. And it's accessible, it's fast.
05:35You can talk to it any time of the day. So when they're looking for company in their very dark and lonely times, AI is just there for you, instead of your friends or your family.
05:48And it's interesting that you say you've never used it, and I understand that you're a clinical psychologist as well.
05:54Can you tell us why you choose not to?
05:58Because I use it as well sometimes.
06:00I use it too, I'm not going to lie.
06:03I think it just never occurred to me.
06:06Okay.
06:07Yeah, that's one. And I guess I have my resources, right? The people around me that I can talk to.
06:16So I'm using AI mostly for assignments and tasks.
06:20Like the rest of us.
06:22Okay, that's interesting. Now let's go back to the tech side, to how this AI actually works, right?
06:28And Ash, when people say AI understands emotions, what's really happening behind the scenes?
06:33Like how do these tools learn to pick up emotional cues, and can they truly understand how people feel?
06:40Yeah.
06:41Okay, for that question, I am actually working at PKAS, at an engineering faculty.
06:48So this is an insight that I have learned from my supervisor from the engineering faculty.
06:54So AI tools are structured and programmed to advise us through the data sets they have.
07:02And they call this an LLM, a large language model.
07:07So they have massive data sets that come in, and they use them to fine-tune what they have learned.
07:14So what we are actually reading from ChatGPT and other AI tools is just this LLM that has been trained,
07:21that has been given the data sets focusing on mental health conversations and whatever resources they can find related to it.
07:30And for these AI tools to function in such a manner, they are just picking up on our words and our tone through the written thing, the written issue;
07:46that's where it finds the tone, what kind of punctuation we use.
07:50It's learning from what we are giving it in written form, not the way we usually see each other eye to eye, person to person.
08:02So to ask whether these AI tools can actually understand an individual: they are just picking up from what is given.
08:12It's imitating, it's mirroring what it has seen.
08:16And it's just giving you back validation that feels like a match, without judgement, staying consistent in a way.
08:23So to be honest, it is just picking up on your written cues. It does have a function where it can detect whether someone is intellectualising what they are feeling instead of actually feeling it,
08:38the way they put it in the third person, rather than saying, I feel this way, I feel that way.
08:45So those are all triggers and symptoms that it is picking up through its model, in a way that it's learning, but it's not exactly understanding in a way that accommodates the person, in my opinion.
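As a purely illustrative aside: the "written cues" Ashwita describes can be pictured as feature extraction from text. The sketch below is a hypothetical toy, not how ChatGPT or Gemini actually work internally; real systems learn these associations statistically inside a fine-tuned LLM rather than from hand-written rules, and the cue lexicon and third-person heuristic here are invented for illustration.

```python
# Toy sketch of surface-level "written cue" extraction; all lexicons
# and heuristics here are invented, not a real model's internals.
import re

# Hypothetical distress-cue lexicon; an LLM would instead learn such
# associations statistically from massive training data sets.
DISTRESS_CUES = {"overwhelmed", "hopeless", "exhausted", "alone", "numb"}

def extract_written_cues(message: str) -> dict:
    words = re.findall(r"[a-z']+", message.lower())
    return {
        # Word-level cues: which distress-associated words appear.
        "distress_words": sorted(DISTRESS_CUES.intersection(words)),
        # Punctuation cues: exclamations and trailing ellipses.
        "exclamations": message.count("!"),
        "ellipses": message.count("..."),
        # Crude stand-in for the "intellectualising" signal: is the
        # writer speaking in the first person ("I feel") or keeping
        # a third-person distance ("one feels", "people get")?
        "first_person": bool(re.search(r"\b(i|my|me)\b", message.lower())),
    }

print(extract_written_cues("One just gets exhausted... people feel alone."))
# -> {'distress_words': ['alone', 'exhausted'], 'exclamations': 0,
#     'ellipses': 1, 'first_person': False}
```

The point is what a sketch like this cannot see: hesitation, pauses, tone of voice, body language. Everything downstream, however fluent, is computed from written signals like these.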
08:58It's very interesting. Basically, they just perform based on the data sets that they have and what they've learned.
09:03I know that AI also has that sort of feature where you can share your experiences, share your data with AI,
09:09so it compiles it as a data set and becomes your personal therapist.
09:15But Ash mentioned before why this is happening, the trend: because of the emotional gaps.
09:22So Joan, why do you think some young people, because the research is showing that they are turning to AI, why do you think young people still feel safer confiding in AI than in friends or family, or even the counsellors that we have today,
09:39some of whom are free? What's driving this shift, even though they may know that AI is just performing based on the data sets it has learned from?
09:49Yeah, why do you think that is?
09:51I, unfortunately, have very real experience as a befriender. So we get, whether it's calls or emails from the younger population, and a lot of them would share that they have reached out to their family, to their parents, to their teachers, counsellors at school, and they are dismissed.
10:11Yeah, their concerns, their struggles, the pain that they are going through. You know, the adults will say, what do you know about stress? You're just a kid.
10:22You know, so they're dismissed, they're invalidated. And then on the other end, maybe some of them are also feeling like, I don't want to burden the people around me.
10:30I don't want to worry, you know, my parents, because they're already working so hard. I don't want to add this onto their plate. So all these things kind of add up.
10:37And I guess the next best option is AI, because it's there. And sadly also, especially if they're younger, below 18, if they do want to seek out professional mental health services, like clinical psychologists, psychiatrists, and even counsellors, they still need an adult to come with them.
10:59So if they don't want to do that, or can't do that, or they don't have anyone to do that, then their resources are like very limited already. So AI is the next best thing.
11:11Is it because of fear of judgement, or perhaps privacy as well? Because we've been hearing cases where, for example, school students have these kinds of trust issues with their counsellors. Could that be driving it as well?
11:25Yes, and I'm very sorry to say this, but I have seen, while I was volunteering, cases where they share with their teachers, or their counsellors, or their peers, and unfortunately, like you said, there's no privacy, there's no confidentiality, so it spreads.
11:43And then you get this stereotype that, oh, whatever I say is not going to be kept confidential, it's not safe for me to do this, I'm just going to be embarrassed or ridiculed, and with social media as well nowadays.
11:56So, yeah, they haven't had the best experience approaching adults.
12:02It's interesting that the drive begins with us, actually. As a society, we are pushing youths towards AI, and it seems it's a growing trend; AI is here to stay and is going to be a lot smarter.
12:14And Ash, when people say AI is learning to understand feelings, I've been following Sam Altman, the OpenAI chief, who has said that even the words 'thank you' require a lot of processing, but are very valuable to AI.
12:30But where do you think AI still struggles most when it comes to trying to read human emotions?
12:39Yeah.
12:40Okay.
12:41So, when we talk about reading human emotions through text, through what we write to these AI tools, a lot depends on the way the individual is writing in the first place.
12:56For example, when we are trying to get our assignments done, the prompts that we put in make a huge difference to the quality of the essay it's going to give back.
13:08And not every individual is capable of articulating things in a very extensive or elaborate manner.
13:18And for us, looking at the Malaysian context, we are in a different world, we have a very diverse context, and we cannot simply fit into the Western context, the Western algorithm.
13:35So, in terms of reading the emotions we are trying to present to these AI tools, they are unable to understand our cultural norms; what may be severe to us could be considered normal, or it could be misread.
13:56So the AI tools that we have right now, relating them to the Malaysian context, do not have the sources ready to read what Malaysian youth are facing.
14:10They don't have enough data, I would say, to represent what the Malaysian community, Malaysian youth from different backgrounds, is facing. For example, a teen who feels burdened to share what they're feeling might be told by AI to have an open discussion, which might trigger much more panic, because they're unable to achieve what has been suggested.
14:37And that will create more stress than relief, to be honest.
14:41So, AI tools have a gap where they're only following a Western algorithm, and they're not yet ready to be utilised by populations from other nations, other countries.
15:01They're very under-representative, and we are going to AI tools with a Malaysian heart, but expecting a Western algorithm to provide the solutions.
15:11I feel like that's the gap that we still have to fill in, in order to even turn back to AI tools for emotional assistance.
15:19Yeah, and just to follow up on that: nowadays, I think some youths are also trying to self-diagnose.
15:27Could AI be a good place for us to diagnose ourselves, in terms of whether we have these kinds of problems or not?
15:36Because we've seen statistics showing that ChatGPT or Gemini are smart, you know; they can pass medical school exams and all that.
15:45Could AI actually spot early signs of mental health struggles before they become really serious?
15:50Okay, as mentioned, AI tools are able to predict, but they don't have the empathy, I would say, to really read between the lines and give a proper diagnosis.
16:08It could give a diagnosis, but it may overlap with something else.
16:11It isn't trained live. It has to have its own massive data set to make sure it stays relevant to its users, and it has to be tailored to the Malaysian context.
16:25You know, it has to pick up on linguistic and language differences to study how people express their distress.
16:33It has to differentiate what is acceptable emotional language in different communities, because the Western community does have the resources to study this.
16:44There are, you know, authorities providing that kind of data set, but in the Malaysian context, we do not have it.
16:51So in order for it to spot early signs of mental health struggles within our youth, it's still not at a great capacity yet, and it still needs refining.
17:03And in terms of it detecting early signs, it's actually detecting patterns.
17:09It's actually pattern recognition through our text.
17:11From what we are writing down, the AI is just taking up words that correlate with some sort of symptom.
17:19It's not even exactly reading through every emotion.
17:23It's just taking up which words correlate with that early symptom, and then that is used to come up with a diagnosis.
17:31And, as I mentioned, I have used it before; it's been quite a long time since I did, and it kind of glitched in my case.
17:40I felt one way, but the words I used were misinterpreted, because of how they correlated with the symptoms already in its program.
17:53So for AI to help spot early signs of mental health struggles, it has to have proper knowledge or resources about what kind of emotional language is acceptable,
18:03and about which words correlate with early signs. And it shouldn't be there to give advice or a diagnosis;
18:13it should just give guidance, maybe suggest you try some reflection, and rather than suggesting a diagnosis, straight away lead you to professional help.
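What Ashwita describes is essentially a screening pipeline, not a diagnostic one. As a thought experiment, here is a hypothetical sketch of that safer design: keyword-style pattern recognition feeding a hard rule that the output is never a diagnosis, only a nudge towards human help, much like the ChatGPT-to-Befrienders referrals Joan describes later. The symptom lexicon, threshold, and wording are all invented for illustration; a real deployment would use a trained classifier and clinically validated criteria.

```python
# Hypothetical "screen and refer" sketch: keyword pattern recognition
# that never outputs a diagnosis, only a nudge towards human help.
SYMPTOM_PATTERNS = {
    "low_mood": {"hopeless", "empty", "worthless"},
    "withdrawal": {"alone", "isolated", "avoiding"},
}
REFERRAL_THRESHOLD = 2  # invented: matched categories before referring on

def screen_message(message: str) -> str:
    words = set(message.lower().split())
    matched = [name for name, cues in SYMPTOM_PATTERNS.items() if cues & words]
    if len(matched) >= REFERRAL_THRESHOLD:
        # Deliberately no label or diagnosis: redirect to humans,
        # e.g. a helpline such as Befrienders KL.
        return ("It sounds like you're carrying a lot right now. "
                "Talking to a trained human listener could really help.")
    # Below threshold: a gentle reflection prompt, still no labels.
    return "Would you like to say more about how that has been feeling?"

print(screen_message("i feel hopeless and so alone lately"))
```

The asymmetry is the design point: crude pattern matching is tolerable only because the system's strongest action is escalation to people, never a verdict.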
18:26So what I'm getting from you is that AI does play a bit of a supporting part, in leading people towards clinical psychology.
18:37Now that brings me to you, Joan. Where do you think AI support should stop and human support should start instead?
18:46And talk to us a bit more: what worries you the most about young people relying so heavily on AI in today's world?
18:55I guess I'll start with the first: where does it start and where does it stop?
18:59So I agree that, you know, sometimes we don't know who to go to to ask the questions that we have.
19:07So we go to AI, and AI is a great place because, you know, with its data sets you just get all the information, and then it gives you some answer.
19:15Okay, maybe this is that, this is that, which is great. So then you'll know where to go from there.
19:20But I would say it has to stop at the point where humans need to intervene already, like you need the human touch, you need the human connection.
19:34And that is when, okay, you're really struggling, you need to see someone. Continuing on AI is not going to help because they might not pick up already on what you are struggling with.
19:45And I also wanted to add something interesting. I think recently OpenAI did refine their mental health safeguards, you know.
19:55So we've been getting a lot of callers referred from ChatGPT.
20:00It is like, oh, interesting. So they are putting in the work, right, to show that they recognize their limitations.
20:06And the moment they detect that this person is struggling and they need assistance, they redirect them to whatever NGOs or resources, wherever they are.
20:15So we've been getting a lot, through our emails and our callers: yeah, we were referred from ChatGPT.
20:20It's like, okay, that's good to know, at least. So I think that's great.
20:28But we can see their first point of contact is still AI; they still want to go to AI first, and talk through whatever it is.
20:34And then they're like, okay, referred on to humans. So that's great, if that's what's happening.
20:40But when AI becomes the only voice they seek, the only company they seek, then they're over-relying on it.
20:49They believe that everything AI says is the truth, so they're not able to discern the information that they're receiving.
20:56Because, like you said, if you look at the Malaysian context, it is missing the cultural, the religious, all these nuances that it's not going to be able to pick up.
21:08So that's where if it's the only voice you believe, the only voice you listen to, that's going to be dangerous.
21:15Yeah. And then again, being able to discern the information that you're receiving, and whether the advice it's giving you is appropriate. All these things.
21:25I would like to go back to when you said you've been getting a lot of referrals from ChatGPT.
21:30Is that a good thing or a bad thing? Because it could be, you know, a wrong diagnosis. And then I also have this question:
21:38do clinical psychologists, or the people at Befrienders, see AI as a threat, or as a help, in terms of these mental health cases and all that?
21:46Yeah. In the case where they, you know, pick up on that, okay, this person who is chatting, the user who's chatting with the AI,
21:54is showing signs of distress and then they are like, okay, I'm going to redirect you to Befrienders or any other help lines, right?
22:00So that's good. So we don't necessarily see it as like a bad thing. But then it also means that we've been getting an increase in callers,
22:09because they're all redirected to us. So that's, that's the difficult part. But yeah, I think overall, AI can be used for good.
22:18Okay, that's interesting. At least it's refreshing, you know, that we don't see it as a threat all the time.
22:23So it can be used to enhance, or to enable, more people to speak up. And now let's go deeper into the core of this whole mental health issue,
22:33which is empathy. Ash, talk to us a little bit more about how important human empathy is in mental health support.
22:40And do you think AI can ever realistically get close to replicating that human empathy in the future?
22:48Yeah.
22:49Alright, human empathy, I wouldn't say it's just a personality trait alone; it's a biological process
22:55that lets you feel and also understand a person's emotions as they tell their story or express how they feel.
23:04When we talk about human empathy, it works like this: when someone is telling you a story,
23:10you're able to see their body language, whether they are hesitating, or you see them not making eye contact;
23:18those kinds of gentle or subtle gestures are what human empathy is able to pick up.
23:27It helps you gently confront that individual, to unfold their story layer by layer,
23:39until you get to the core of the issue. So human empathy is very important for connecting with a person and
23:47having a proper understanding of what they are going through, sometimes without them even mentioning it.
23:54Verbally expressing what they feel sometimes will not reveal what they are exactly going through;
24:02it's the way their body moves, their gestures.
24:08Those subtle emotional cues are what it takes to further understand how to help this person.
24:14And when talking about whether AI can replicate human empathy: it is able to simulate it.
24:22As mentioned, AI tools are only getting to know that person through their words.
24:27They don't get to see whether someone hesitated to type a word, or how long they paused before typing the next line.
24:35They don't pick up on that. They only pick up on what they have received from that individual.
24:42So it can simulate empathy through the written cues, the tone, the pacing of the words, the way the person has presented their issue.
24:57But it is not able to get close to what we have in terms of human empathy.
25:03So in this case, AI can be just used as a screening tool in order to recognize what that person has.
25:12Maybe the person can just try to get basic ideas of what they are going through, but it cannot get close to human empathy.
25:20Predictability is not connection, and predictability is what the AI tool is providing, giving advice and comfort.
25:30They are not programmed to confront them because it could escalate risk.
25:35Therefore, realistically speaking, AI tools will not get close to what human professional help can provide.
25:43But it's just a screening tool that can be used to redirect them to who they should seek help from.
25:49Great. And we're running out of time, but I think this is a very important question as well.
25:54I'm going to direct this to Joan. What should young people be mindful of when they are turning to AI for assurance?
26:02What do they need to keep in mind when they do this?
26:05I would say firstly, to understand that AI is just a tool. It's just a thing you use to achieve something.
26:14And that it can only do so much. And so how can we use it, you know, smartly?
26:21How can we use it in a way that benefits us?
26:26Yeah. And then I would say the second thing, of course, is like knowing how to discern.
26:32I think, as I've mentioned, how to discern the information that we receive: whether it works for us, or whether it's just the data set coming up with something based on whatever words it picked up from what we told it.
26:44I also wanted to add, because at Befrienders, we also do emails.
26:49So instead of people calling the helpline, we also receive emails.
26:52And so that has a very similar vibe, you know, to chat, in that they write to us.
26:55We are also human. We are also just interpreting what they're going through, through the emails that they send to us.
27:01And that's been very fascinating to me, because sometimes people are also worried: is the person replying to them an AI?
27:10So I'm like, okay, how do I give that human touch in my response, you know?
27:15But like the human empathy we're talking about, AI can never replace, when I reply to those emails, the experience I have as a human relating to another human.
27:27It can never replace the understanding I have of the cultural, the religious, the environmental context they are in, because these are things I as a human can pick up and that I don't think AI could.
27:42So, yeah, it's been fascinating seeing how people might worry that AI is replying to them on the emails.
27:49So I'm like, how can I sound more human? Should I have more typos or something?
27:54So, yeah, it's been, yeah, it's been very interesting.
27:58Okay, that's a great way to end the show.
28:00And thank you so much, Ash and Joan, for sharing your insights, a very important topic.
28:04And for the youth out there, or anyone: if you're struggling, please be mindful when you use AI or any chatbot, and know that there are a lot of psychologists, and Befrienders as well, out there to help you.
28:16I guess that's all the time we have for tonight's episode of It's About Youth.
28:20I'm Amril Aiman. Thank you for watching. Good night.
28:24Good night.
28:25Good night.