Join Mediha Mahmood, CEO of the Content Forum, as she discusses the evolving landscape of content regulation in Malaysia. Mediha explores the principles behind the Content Code, its impact on media and digital platforms, and how Malaysians can actively participate in shaping responsible content practices.
Transcript
00:00Thanks for joining us. This is The Economy. I am Ibrahim Sani.
00:11Today, we will be talking about the content standard of Malaysia.
00:14In the studio with me right now is Puan Mediha Mahmood, the CEO of the Content Forum.
00:19Content Forum is, of course, launching a nationwide public consultation review on Malaysia's content code.
00:24The last time we had this conversation was roughly five years ago, wasn't it?
00:272022.
00:272022, three years ago. So every time the Content Code is reviewed, it's an opportunity for us to have a chat with the Content Forum, with Mediha, of course.
00:38The public consultation ends on the 7th of November, roughly about a week or so from now.
00:44And we want to understand a little bit better in terms of what has transpired so far in terms of the public consultation,
00:50and particularly the process of addressing accountability in how we manage content.
00:56Okay. Maybe walk us through what the process has been and what learnings you have gotten so far, before we get to that 7th of November date.
01:04Okay. So in the beginning of the year, we already put out feelers.
01:07We had a public review where we opened up comments from the nation, from anyone who would like to say something about the content code, just to see a temperature check.
01:17And during that period, we got quite a bit of information. I think most of them were very concerned about AI, because in 2022, we did not see this coming.
01:26We did not see the whole AI generated content thing coming in.
01:28But what was also very clear was everyone was worried about child online protection.
01:33So that was also something that was raised.
01:35So from that exercise, we also sat down with the industry players within the Content Forum.
01:40As I'm sure you know, most of us are industry players, but there are also CSOs in there and academicians.
01:45And we sat down, went through the whole exercise again, and we wanted to make sure also one thing,
01:50we update ourselves with the amendments to the Communications and Multimedia Act.
01:54Okay, so this is the part where I want to say something, that I don't have to re-watch our old interviews to remember the conversations that we've had,
02:05because you're a rather memorable guest to have on my show, under the previous incarnation of Notepad.
02:11You were highlighting the notion of self-regulation as being the key factor then.
02:16Is it still the case today?
02:18It is, but I do have a qualifier there.
02:20Self-regulation has worked very well with the broadcasters and advertisers.
02:25We've done that for the past 20 years, and they're pretty much self-regulating quite well.
02:29I think the challenge comes when the content ecosystem has expanded to those on the internet.
02:34Because we're no longer looking at push content; we're looking at a world where anyone can be a content creator.
02:41We're looking at streaming platforms, we're looking at social media, and these are all on the internet.
02:45And the CMA clearly says in Section 3(3), there shall be no censorship of the internet.
02:49So whatever self-regulation or standards that we put has to consider that balance as well.
02:54Speaking about that censorship on the internet, yet we've seen how this government has made requests,
03:02particularly to Meta and to ByteDance, i.e. TikTok, on taking down some content.
03:08How does that jibe with the Act?
03:10I believe the Act, Section 3(3), does not allow for pre-publication censorship.
03:15So that's still there.
03:16But there is always room for there to be flagged harmful content that needs to be taken down.
03:22And platforms will, I think YB Fahmi has said it himself, the decision to take down the content eventually sits with the platform.
03:30Because they'll look at their community guidelines and see whether or not it is in breach of that.
03:35But the primary criteria for takedowns should always be whether the content crosses into what is harmful to yourself, to someone else, or to a community.
03:46So who decides what is considered harmful? The platform?
03:50Yes, based on their community guidelines.
03:52And then with this review process right now, is it going to be reflected as such, or is it going to stay as is?
03:59For us, the interesting thing about this round of revamping or revising the Content Code is that we've now got a new member on board.
04:07We have TikTok on board.
04:09So instead of just the industry deciding what platform accountability should be, we actually have a platform there to discuss with us what those boundaries can be.
04:19What is, you know, what they can and cannot do.
04:22At the same time, it was a very interesting process because we, you know, whenever we have these sessions with the working group, everyone has their own perceptions of what needs to be done.
04:31Everybody has their own interests to protect.
04:34So it was a pretty lengthy and challenging process.
04:37But I feel that everybody in the end came to a conclusion that you cannot have zero accountability by the platforms.
04:43You can't say parents need to play a role, users need to play a role, and platforms don't have responsibility.
04:48They do.
04:49And we've put that down: again, there is no pre-publication censorship, but platforms are required to ensure filtering, flagging and takedown mechanisms which are accessible.
04:59And also that if something reported is harmful or illegal, there is a duty to take it down.
05:05You know, election is coming up maybe a year, maybe two years from now.
05:09And we're going to see a lot more political content being shared online.
05:13By the way, I'll get to the child protection and AI later.
05:17But on the notion of misusing or mishandling these kinds of platforms, would there be some form of mechanism, at least in terms of how the Content Forum looks at this particular idea,
05:29in terms of people trying to take down content that they don't like? It's more of a self-serving notion or a self-serving party, especially when it comes to political content.
05:39Does that mechanism have a place in our CMA?
05:42In the Content Code, at least, I can say that what we say is prohibited is hate propaganda, or anything that incites harm against certain categories of people.
05:53But we also make it very clear that things like criticism, satire or public interest commentary are allowed, and in a democracy should actually be encouraged,
06:01as long as they don't intentionally incite hostility or target people based on those protected characteristics.
06:08So that is already there, at least in the content code.
06:11Let's now move to the two areas that you spoke of, which are child protection as well as the portrayal of minorities.
06:18We start off with the latter.
06:20The discriminatory coverage of minorities, or portrayals of certain segments of society, does translate to how society views those communities.
06:29Yes.
06:29And again, this is more of a metamorphosis of how content is being shared and done online.
06:35And of course, it's translated to other forms of content through media, through movies.
06:40That's right.
06:41How has this deteriorated?
06:43How far has this deteriorated?
06:46And more importantly, what are the changes that can be done from the ongoing public consultation exercise to actually stem this deterioration?
06:53Okay.
06:54So I'll bring you back to in 2022 when we did this whole revision exercise.
06:59At that time, the focus was a lot of women and persons with disabilities because they were the vulnerable communities that we were looking at.
07:04So that's done.
07:05There are improvements there as well.
07:06But for this particular exercise, there have been two new things that have been added.
07:12For example, hate propaganda now prohibits not just the advocating and promoting of hate and violence, etc.
07:16It also includes the facilitation or enabling of such things, and we've added migrants and indigenous groups to the protected communities.
07:27Because what we've seen is a very disturbing trend where content targets migrants and immigrants and refugees and they just lump it into one.
07:37And it spills over into real life violence against these communities.
07:41So we are putting in the Content Code that there shall be no discrimination against these communities.
07:46There shall be no propagating or incitement of harm against them.
07:50So that would be under hate propaganda and hate speech as well.
07:53There's no champion that I can think of.
07:55Because if you think about persons with disabilities, Senator Ras Adiba, for instance, comes to mind.
08:06There are also women's support groups that fight for that particular group.
08:13I can't imagine minorities or immigrants having a champion to actually get this across.
08:17So this must be a self-appointed affair on the part of the Content Forum, to actually fight on their behalf.
08:22So I would have to also let you know that when we had the pre-public review in the beginning of the year, we did have these groups represented.
08:32The reason why I believe they're not spotlighted is because they're a very vulnerable community.
08:36Because there are among them who would be undocumented.
08:39There are among them who are already facing discrimination.
08:41So they're not exactly in a position to be such loud and brave advocates like women's rights and children's rights and persons with disabilities rights.
08:49But there are groups out there who are pushing for their rights.
08:52And those are the ones who came to us and said, why don't you take a look at this?
08:55So we did.
08:56And when we had chats with our working group, which involves media as well, we highlighted that the reason they're recommending this is because they've seen,
09:05for example, some news coverage being very blatant in its discrimination against migrants, immigrants, refugees, etc., which then spills over into affecting those communities.
09:15Another area of concern is, of course, child protection.
09:19Under the Child Act 2001, it's spelled out clearly in terms of how we need to protect the rights of children, their identities, how we prosecute them, and so on.
09:28And it's rather familiar.
09:29Everybody gets used to it.
09:30If not before, it is already now.
09:33How are things different today in terms of ramping up the protection of children online, especially on how content is being served?
09:41And you've nailed it right there.
09:42We used to have parents and guardians being worried about children being exposed to content on TV or gaming, etc.
09:50Now, all their worry, most of it is concentrated on online.
09:54So one of the biggest changes, and there's quite a bit on child online protection, is that we have said it's not just about content that is targeted at children.
10:04You need to look at content which is appealing to children and where children are likely to go.
10:08So that includes gaming platforms, content that is classified 'U', for example, and internet spaces where children are likely to go.
10:17So this also includes the fact that we want child sensitivity to be built into the design.
10:23So that's something that we put in as well, built-in protections.
10:26And also encouraging default privacy settings designed with children in mind.
10:30So instead of saying, you know, this is what you get, if you want it to be safer, then toggle that.
10:35It should be safe already.
10:36If you want beyond it, then toggle it.
10:38Maybe have age assurance to get there.
10:39But is it practical to implement and execute, especially on some platforms like, say, Snap or IG or TikTok, where there are more children than ever?
10:52Exactly.
10:53So that's the whole point of our content standards.
10:55It's not something that we can enforce, but we're hoping to normalize it, so that it's built into the mechanisms and the muscle memory of all these platforms.
11:04And it's not something unique to what we're doing; all over the world, the same kinds of things are being lobbied for.
11:11So hopefully as an industry in Malaysia, at least we're saying, yes, these are the standards that we expect.
11:16There are two things that might propagate this.
11:19Number one is the commercialization element of this.
11:23Another one is consumer support that perpetuates either child-sensitive content or a disregard towards child safety.
11:37Are these two areas of concern being addressed, or could they be addressed, from the Content Forum's point of view?
11:45Can you elaborate a little bit on that?
11:46Like, for instance, the sale of pornographic material, for instance.
11:50Another one is that we see child-sensitive content being served to adults, and there's little to no mechanism to allow adults to report on it.
12:00Those are the two elaborations.
12:02Yes.
12:02So one of the things, or there's quite a few actually, one of the things that we are also pushing for is the flagging and also accessible reporting,
12:10not just for adults, but also children themselves to have child-friendly accessibility to reporting.
12:16That's very rare that we've seen.
12:18At the same time, I also have to make it very clear, and we put that down as well, that we can't put all our focus on legislation, regulation and self-regulation by the industry.
12:28That's something that is a given.
12:29We need to do that.
12:30But from what we've seen on ground, it's also the parents' involvement in using these features.
12:36It's not enough that we have it.
12:38It's the parents who actually have to use it, number one.
12:41Number two, we also see that parents these days might not be aware of the actual dangers that the children are facing online.
12:47They understand about social media, they're worried about trends, but we are now looking at situations where children are also having relationships with AI companions.
12:56There is also a situation of digital grooming with AI apps.
13:00So we're hoping to really reach into that space because it's relatively new, and if we can lobby for greater protection for children in AI, at least, then that's something that we want to work for.
13:12See, this is the reason why you're a great guest to have, because I remember vividly as if I had this conversation with you yesterday.
13:19Both of us have not so much young children, but children that are very savvy online.
13:24And instead of taking devices away from their hands, both of us, I guess, discussed that it's better to coach them and manage those kind of platforms accordingly.
13:36Now, in that sense, both of us are in sync on how we intend to raise our children accordingly.
13:42I'm not too sure if the rest of Malaysia has the same viewpoint as us.
13:47The worst-case scenario is sticking your head in the sand like an ostrich and taking the devices away.
13:53On the other hand, it's unfettered access, and parents themselves are busy with their own lives, and they have no supervisory oversight altogether onto their children's online behavior.
14:05There has to be some middle ground here, right?
14:08And where do you see parents or adults and guardians play a role in enforcing some form of check towards a community of children being online?
14:17Yeah. It's the same conundrum that we find ourselves in whenever we do advocacy work with parents and guardians.
14:25It's the two extremes. Either they just say no devices whatsoever, which to me is also unfair, because these are literal digital natives.
14:33You can't stop them from accessing the internet, because that's also where they socialize, they learn, they play.
14:38My children flirt with their classmates on Microsoft Teams, because that's the device.
14:42Exactly.
14:43It's bizarre. But it is what it is.
14:45It is what it is.
14:46It's just like, you know, you can stop your child from accessing inappropriate content, but they can use Zoom for that.
14:51They just share screens. They use Discord for that.
14:54There's so many other platforms for them to go to.
14:56So I understand the concern about social media, people wanting to say, OK, we need to curb Meta, Instagram, TikTok, etc.
15:05Facebook, kids are not on Facebook, by the way.
15:06But you also have to look at the fact that they will be exposed to these things on other platforms that we might not even know exist.
15:12Yeah.
15:12And they're there.
15:13So the key thing here is to make sure that our kids are equipped with not just the digital literacy, but also the savviness and critical thinking.
15:20So whichever platform they go to, they know how to protect themselves.
15:23And that comes with conversations.
15:24Yeah, but parents don't know ChatGPT.
15:28I, you know, my day job with Yayasan Peneraju, we went into an area where we had a lot of scholars there, their children.
15:40And one of our facilitators asked the kids, how many of you use ChatGPT for homework?
15:46Nearly all of them raised their hands.
15:47And the teacher was asking that facilitator, what is ChatGPT?
15:51Oh, dear.
15:51Now, this is, yeah.
15:53Sorry, if you guys weren't aware, the cameraman is laughing behind the screen.
15:56This is how bad it got, right?
15:57So, yeah, talk about digital literacy.
16:01I'm talking about the adults who don't even know what platforms exist, let alone AI and all that.
16:05This is how fast the situation is moving.
16:07And that's worrying because ChatGPT is sort of regulated or at least controlled.
16:11They're self-regulated.
16:13There are certain things that you cannot prompt and get.
16:15But there are several, they promote themselves as uncensored, unregulated, unfiltered AI, which the kids are using.
16:24The kids are not using ChatGPT.
16:27They're using other AIs, which allow them to do basically anything.
16:31And these are unregulated, unfiltered, and promoted as such.
16:34And is there any form of mechanism?
16:37You think maybe Content Forum might not necessarily be involved in this, but, or you can't, right?
16:42I don't know.
16:42I mean, we've been getting so much feedback because we're doing this public consultation, right?
16:45So, people are giving us so many ideas.
16:46They say maybe we should...
16:47It's a double-edged sword, this whole public consultation.
16:49You ask a thousand cooks, hey, how do you make a dish?
16:51Exactly, and they've got everything in mind.
16:53And they're saying maybe we should block it from the store, the App Store and Samsung Store, Google Play Store.
16:59Or maybe we should block it at the device itself.
17:01Maybe we should start offering devices which are already blocking everything unsafe
17:05and then just give kids access to filtered internet and apps.
17:09That's an idea.
17:10But it's something that the entire world is grasping at straws trying to do right now.
17:17And I think it will continue to be a challenge.
17:20I mean, Malaysia's got the Online Safety Act.
17:22We're about to see what the instruments look like and see what the approach is there.
17:25And not every other country is coming up with the same.
17:27But the best approach, in my opinion, is still on education and learning how to use those devices and content accordingly.
17:34I absolutely agree.
17:35I absolutely agree.
17:36No amount of censorship can make the world a safer place.
17:40Because people who go against censorship will find ways against it.
17:44And kids are very good at that.
17:45Yes, I know.
17:47I use, what do you call this?
17:49Family link?
17:50No.
17:50What do you call this?
17:51Oh, you wouldn't use family link.
17:52Yeah, I'm an apple.
17:54The lockdown thing.
17:56Ah, yes.
17:56I forgot the name.
17:57Sleep time.
17:58Sleep time.
17:59Sleep mode.
17:59Sleep mode.
18:00Yes.
18:00Yeah, my very smart daughter just fires up Chrome.
18:05Shh, don't tell them.
18:05Nah.
18:07And here I thought I'm a responsible dad.
18:09Yes.
18:09You know, nothing is happening.
18:10Nothing is chrome.
18:11No problem.
18:11Because chrome doesn't have that.
18:13Exactly.
18:13They circumvent that.
18:15And that's what I tell the parents.
18:16Sleep mode.
18:17Okay, good.
18:18At least you're having some effort there.
18:19But you know your kids might circumvent that.
18:21The easiest way is to take their devices with you.
18:24And, well, yeah.
18:26And you're supposed to sleep, right?
18:29Okay, another area of concern is, of course, AI.
18:32And I want to deep dive in this.
18:33I was rather upset that I was not part of those AI deepfake videos of me having sexual relations with a man.
18:45You were disappointed?
18:46I am disappointed because that means I'm not up to the mark yet.
18:50Oh, my God.
18:51Because the who's who have seen themselves in that video.
18:55You're in FOMO.
18:56Yeah, yeah, yeah, FOMO.
18:57Who would have thought.
18:58There's also FOBI, the fear of being included, which is a real thing now, apparently.
19:04But jokes aside, it is so pervasive that it's no longer a joke; if there's a very disconcerting video, the first reaction is always, is it even real?
19:16This is how people react nowadays.
19:20And the question now is, should we have more controls?
19:24Because I don't think you can stamp this out; what used to be an overflow of fakes is now a dam that has broken.
19:36Yes.
19:37And I don't think any effort, commendable or otherwise, meaningful or not, is going to stop this.
19:44So how do we live in a post-AI world?
19:47We're all struggling to see that.
19:49And I think most, the first thing people are doing and what we're also doing is requiring labeling.
19:53That's like the basic rule.
19:56Label AI generated content.
19:58But even then, it can't be all AI generated content.
20:01So what we've done is that platforms need to allow for transparency at source, meaning that if you use AI, you need to label it as such.
20:08And, you know, things like simple use of AI to make yourself look better, or grammar checks, you don't need to disclose.
20:14But things which can mislead the public, mislead people into purchasing content, things that could lead to instability, stuff like that,
20:21there needs to be a prohibition against all that.
20:23But again, it goes back to digital literacy and now AI literacy.
20:27Because right now, we have a really big problem in trust.
20:31People don't trust things anymore.
20:33We have people who are questioning videos that come, for example, from Gaza and asking whether or not that's AI generated or claiming it's AI generated.
20:41We have people putting in deep fakes of politicians and believing it and people who are seeing the genuine ones and not believing it.
20:48It's a real mess.
20:50And I think the biggest problem that we are looking at is that there's going to be a huge trust issue with the media, whatever media that comes to you.
20:59And I don't think we can solve it.
21:00It's very defeatist to think so.
21:02But I'm hoping that we can find a way to do that.
21:05And at the same time, we can't not do anything about it.
21:08So I played a nice little game with my three children, real or AI.
21:13So it's a rather simple, straightforward thing.
21:15There's a video, two videos showing similar content.
21:18One is AI, one is real.
21:20The kids got it 10 over 10.
21:21I got it maybe 8 over 10.
21:22Not bad.
21:23But I feel that the younger generation have it inbuilt in themselves to distinguish.
21:33And it's us, the 40-somethings, the 50-somethings that have to grapple with this.
21:39So, yeah, I sound defeatist because I am.
21:44And I feel that nothing, you know, with the limited resources and the effort and the time that you have, perhaps it's better to do other stuff like protecting minorities or children and stuff like that.
21:55AI content is going to be there.
21:57In fact, you alluded to something quite interesting also that I want to bring in, which is, of course, AI companion.
22:01This will be the way forward.
22:05It is.
22:05It already is.
22:06Yeah.
22:07I mean, if it's so difficult and complicated to have relations with a human being, and an AI removes that friction, and you're getting the same level of satisfaction, the dopamine and the endorphins released in the same way as with another human being, why not an AI companion?
22:23Oh, there's a list of why nots.
22:24All right, let's go.
22:25Let's go through the list.
22:26Yeah, we got it.
22:27You're basically removing humans from relationships altogether.
22:31And that is bad because?
22:34Because, I don't know, well.
22:36Ah, no, no, no.
22:37The mental health issues that come with having a companion, which you can't actually live with.
22:42No, you can't program it.
22:43I got mental health issues.
22:44There's always improvements, right?
22:46Yeah, but there's no human touch unless you wait for robots or humanoids or whatever, right?
22:51Give it a few more years.
22:53Yeah, it's probably going to happen.
22:54But we already see what's happening with the number of cases where teenagers end up in a relationship with AI and find themselves in a situation of suicidal ideation and being encouraged to, well, there are people who have died by suicide.
23:08So, that's one huge example of how it can actually be a life and death matter.
23:12But at the same time, we're already looking at situations where, when the gaming thing came about, there were a lot of jokes about men sitting in their dark rooms with a bag of Doritos, just gaming all day in total isolation.
23:24So, when you put AI companions into the mix, the problem of an aging population with no new generations coming in is going to be a problem.
23:33And then, of course, this ties in with the whole idea of living with AI being a key factor on how we build content.
23:41Are you optimistic in seeing some form of reasonable value coming out of the public consultation, particularly with regards to AI?
23:51Between the three ideas that we talked about, I am more for no restrictions whatsoever.
23:57This is me.
23:58But do you feel that that kind of AI approach is being considered quite well by the public in their submissions to you or the Content Forum?
24:07Yes.
24:08One of the key things that has been out there is the fact that whatever it is, even if you use AI, if you generate with AI, the humans behind AI will bear responsibility for whatever happens next.
24:16So, using AI is not a defense against anything.
24:19So, that's one.
24:19Number two, there is an expectation for at least some level of control, either in labeling or repercussions for those who are exploiting AI for nefarious means, like misleading or inciting harm, etc.
24:32But I think the biggest thing that we need to do after this public consultation, after hopefully the content code gets registered and is implemented, is the AI literacy thing that we need to be doing with members of the public.
24:43Because like you said, from the very beginning, online safety, etc.
24:46Like even the campaigns that MCMC is doing and so many other people are doing, we need to equip people with the tools to make sure that they can use AI without being lied to or being misled.
24:57Let's talk about the commerce side.
25:01Because of influencer marketing and online advertising, we see how many of these platforms have bundled their shopping with the content.
25:11And we saw how countries like Indonesia, for instance, are managing that aspect through their own Content Forum equivalent.
25:20How is the Content Forum approaching that element of commerce and shopping on these content platforms?
25:26So the biggest concern with content with regards to commerce is, number one, is the advertising.
25:32And number two, is the fact that sometimes they can sell things which are actually illegal to be sold.
25:37So that, yeah.
25:38And that can be bought by almost anyone.
25:41Like if you give your child or teenager access to your Touch 'n Go card, or you keep, you know, reloading their various wallets, right?
25:49They are able to order things online without you knowing.
25:52If, you know, it just arrives home, they can take it and then you don't know.
25:55And there are so many things that they are accessing online.
25:57Like what? Drugs? Outright drugs.
25:59Yes, outright drugs.
25:59And the government is doing their very best to do this.
26:03And they've raided and taken action against quite a lot.
26:06But the fact of the matter is the platforms also need to step up as well to identify this.
26:13I don't know if you can use AI to identify these things.
26:15But again, I have to add, the sellers are very creative.
26:19You will look at certain things and you think it's absolutely kosher.
26:21But there are separate platforms where they discuss how you can buy this on that e-commerce platform.
26:27And then you go look for the exact thing.
26:28You can search for the code, etc.
26:30And it goes from anything from drugs to other things which you don't want your children to have access to.
26:35Or even we're not allowed to have access to.
26:38Yeah.
26:38But then again, OnlyFans is also there.
26:42And they use links upon links and an account to support that links and then so on.
26:46Yes, it's a whole ecosystem.
26:48It's very hard.
26:49It's very hard.
26:50And it's worth saying that it's not new.
26:54Back in the olden days, we had antivirus companies like McAfee and Kaspersky who would undo each trick that had been done.
27:04So bad actors would have a trick, the companies would undo it, and it's a perpetuating thing.
27:08I can see this as being a perpetuating notion as well.
27:11The only thing that can stop people from doing this is either better education and awareness, or harsher and stiffer penalties.
27:20That's probably it.
27:21That, I think, is probably the problem.
27:22Especially when it comes to illegal content.
27:24Okay, that's illegal content, which is easy to untangle.
27:27But the not-illegal or outright legal items, especially when it comes to influencer marketing, where false claims can be subjective.
27:39You would like that lipstick more than I would.
27:42Therefore, you would hyper sell it even though that lipstick in general, people wouldn't find it to be good, for instance.
27:48But because of the popularity of the influencer and so on and so on.
27:51So how do you see this as being, I guess, regulated?
27:54Right.
27:55So that one, internationally, the rule is you need to disclose.
27:58The moment you are being paid to do content, you need to disclose that you're being paid to do content.
28:03But there's no penalty for not disclosing.
28:06Well, right now for us, we can penalise the brands who do this.
28:11But the influencers are not exactly regulated.
28:14Of course, we can go the other route of going to Section 233 of the CMA.
28:18And, you know, I'm not a big fan of...
28:19Yeah, exactly.
28:20Weaponising it for these kind of things.
28:21So again, for us, it's also reaching out to the influencers to let them know that this is the way that they're supposed to do things.
28:27And we've seen influencers using that because platforms already have the toggle.
28:31You put sponsored or ads, et cetera, or collabs.
28:33So just that, just a simple act of disclosing should be promoted or encouraged further.
28:42Mediha, thank you very much for sharing with us some of your thoughts.
28:45It's always interesting to see how a lawyer like yourself works with intangibles like content.
28:54I don't envy it, but you're doing a great job.
28:56You're doing an awesome job.
28:58And, you know, if you would like to make a call out for the public to address, how do they go about it?
29:03I would really love for it if anyone and everyone out there would just go to our website, contentforum.my.
29:08The public consultation is there.
29:10You can see everything from AI to hate speech to everything there.
29:13And anything else that you think we've missed, please let us know.
29:16We'll be very happy to discuss it further.
29:18So the deadline is on November 7th, 2025.
29:22So if you want to have a say in it, now's the time.
29:25Yes.
29:25That's it from me.
29:27You've just heard earlier from Mediha Mahmood, the CEO of the Content Forum.
29:30Until then, thanks very much for watching.
29:31Catch you in the next one.