More than 67,735 online scam cases were reported in Malaysia in 2025 alone, with losses reaching roughly RM3 billion.
In 2026, AI-powered fraud is adding to the complexity of digital financial crimes.
From cloned executive voices and deepfake video calls to synthetic identities and highly targeted payment scams, cybercriminals are using generative AI to exploit the weakest link in any system — human trust.
Tehmina Kaoosji speaks to Krishna Rajagopal, CEO of AKATI Sekurity, on why AI fraud is becoming a major boardroom, governance and financial-risk issue for Malaysian businesses — and what companies need to do before legacy security systems become obsolete.
Transcript
00:08Hello and welcome to Niaga Spotlight with me, Tehmina Kaoosji. Niaga Spotlight takes us through
00:12the week in economic analysis and future affairs. Today on analysis, our spotlight falls on AI
00:18fraud with a cyber security focus. Now, Malaysia's digital economy is growing rapidly, but so are the
00:25risks that come with it. Police recorded close to 68,000 online scam cases in 2025, with
00:33losses totaling roughly 3 billion ringgit. Now, a new layer of threat is emerging,
00:39AI-powered fraud. From cloned videos, voices and deepfake video calls to synthetic identities
00:46and even highly targeted impersonation scams, cybercriminals are using artificial intelligence
00:52AI to bypass traditional safeguards and exploiting something even harder to secure, human trust.
01:00Across the region, companies have already lost millions to deepfake-enabled scams,
01:05posing as senior executives. So, how prepared are Malaysian businesses for this next wave of
01:12financial crime? Joining us for this very pertinent discussion is Krishna Rajagopal,
01:17CEO of AKATI Sekurity. Krishna, a very good morning. Thank you, Tehmina. Thank you so much for
01:24making time for this discussion. Now, this is an area that you specialize in, and I'm sure it's kept
01:29you very busy indeed. So, let's get down to business. Perhaps you can tell us a little bit more about
01:36perspectives on how companies around Malaysia need to be really taking AI-related cyber security
01:46and financial fraud very, very seriously this year.
01:50Tehmina, I think it is something that is real. It is here.
01:55Sure.
01:56And it is affecting us on our daily lives, right? If you look at statistics, there are about 573 million
02:03ringgit in losses in just 90 days.
02:0690 days. Oof.
02:07And this is the latest that we have, right?
02:09Q1 of 2025.
02:11That's right.
02:11And that number just keeps increasing. And if we analyze that number, that's 6.3 million ringgit
02:18leaving our economy every single day. And that's not corporate money. That's the pakcik who, you know,
02:25lost his pension. The makcik who lost her life savings, right? So, it really affects us on our
02:30day-to-day lives. And it has a massive repercussion as well. And I think that's a major concern. And
02:38it's
02:39something that we really have to think about seriously. Of course, on that angle, Bank Negara
02:44is doing a lot, right? They have come up with something called an SCFT framework.
02:48I think BNM came up with it in 2024, October. And what it does is it sort of allows victims
02:57of fraud
02:58to be reimbursed, right? And it has increased the number of victims being, you know, compensated back
03:04by about 26% in 2025 compared to 2024. So, that's a good sign. It's definitely worth
03:11mentioning, but it's not enough. So, clearly, plenty is being done by our regulators,
03:18who have been very forward-thinking to have the regulations in place from 2024, even before this
03:23became something we were accustomed to seeing. That's good news. But now we need to go into
03:30the scale of it regionally. So, regionally, we've also seen some headline cases. There was
03:36in Hong Kong, a 200 million Hong Kong dollar Arup deepfake video call fraud. So, such examples
03:46are actually raising threat perceptions amongst the financial sector. But what is it telling
03:53us about how rapidly AI is changing the face of financial fraud?
03:58Oh, that's a good angle. And I think three things are affecting corporate fraud in that sense,
04:06with the advent of AI. But before I get into that, I think, you know, we used to be dealing
04:12with malware that can hide.
04:14Sure.
04:15Yeah.
04:15But now we're dealing with malware that can think. And that's scary, right? So, when we
04:22talk about how AI has changed corporate fraud, I think the first thing is it has actually
04:27industrialized social engineering attacks. Social engineering is not new, right? I think
04:32the con artist is probably the oldest, you know, one of the oldest professions in the
04:37world, right? However, it has brought it to a different scale. It has industrialized it.
04:42It allows a kid with, you know, a couple of hundred dollars to carry out something that
04:48an organized crime group used to be able to do just two years ago.
04:52Just one person behind a laptop is actually able to replicate that kind of a full-scale
04:57operation.
04:58Yeah. And I think the second thing is that the barrier to entry, the cost of entry, has literally
05:03collapsed, right? Under $500, you literally can carry out a large-scale attack, sophisticated
05:09attack. So, it no longer needs to cost a lot of money. And you can do it within 48 hours.
05:15So, I think that's the second thing that's changing corporate fraud.
05:17Right.
05:18Cost factors dropped a lot for the attackers. And I think the third thing that we really need
05:23to internalize is that the breach is no longer in your IT systems or your network. The breach
05:30is actually in your employee's brain, right? And professors and research scientists actually
05:35call this an amygdala hijack.
05:38Right. Okay. Amygdala hijack.
05:39Yes. They're actually hijacking the fear part of your brain so that your rational part of
05:45the brain, the prefrontal cortex, does not do any rational thinking.
05:49There's no time for your prefrontal cortex.
05:51So, you just hijack the amygdala, right? Pushing in fear, authority bias, and you're
05:58worried that you're going to, you know, disobey your bosses. And that's where you fall prey.
06:04I think that actually speaks volumes to how there was a recent case in Singapore where a financial
06:10director was almost duped into wiring close to US$500,000.
06:16So, that definitely speaks of actual psychological processes being bypassed due to just how sophisticated
06:24these AI scams are.
06:27Yes. And in essence, they're actually targeting our biological gaps.
06:31Right. Exploiting the biological gaps.
06:34Exploiting the biological gaps.
06:35Fair enough. Fair enough.
06:36So, now, deepfake fraud, Krishna, is also quite often framed as a technology problem.
06:41Yes.
06:42I'd love for you, given the background that you're coming from, integrating not just psychosocial perspectives
06:48but merging them together with insights on financial infrastructures,
06:53to tell us about how it also speaks to process failures.
06:57And there are also internal business workflows, which are now being very much exposed when
07:02it comes to institutions and their risk appetite.
07:07You framed it absolutely correctly, Tehmina.
07:10It is definitely not a technology failure.
07:12It's a process failure.
07:13Right.
07:13And from what we've been investigating and we see on the market, we see four main areas
07:18that attackers tend to target business workflows.
07:21Number one is any kind of urgent payment approvals that require, you know, someone of a senior level, right?
07:28A CEO, CFO.
07:29So, that should be a red flag.
07:30That's a red flag.
07:31Okay.
07:31And that's literally business email compromise on steroids, literally, right?
07:35Number two, vendor and procurement onboarding.
07:40This is a major problem because we are seeing threat actors creating synthetic identities
07:47and faking a vendor registration.
07:49Right.
07:50Clearing small invoices to build trust over time and then going for a big fraud.
07:54Actually committing to the fraud, yeah?
07:56Early and on and on.
07:57That's true.
07:57So, they're investing time into that.
08:00And the third thing I would say is any kind of executive communication, right?
08:05Deep fake clones, voice clones.
08:06That's a very common thing that they're targeting.
08:09And last is actually in HR and hiring.
08:12Right.
08:13Yeah.
08:13So, state-sponsored attackers are doing this very commonly.
08:17They're creating synthetic identities of themselves pretending to be from another country and actually getting a job, getting paid, right?
08:25And they're bypassing the entire…
08:26Going the whole nine yards.
08:28The whole nine yards.
08:28And they're bypassing all traditional HR controls.
08:31And so, these are the main four areas we see attackers currently targeting in terms of business workflows.
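[Editor's note] The vendor-onboarding pattern Krishna describes, clearing small invoices to build trust before attempting a large fraud, lends itself to a simple automated check. The sketch below is purely illustrative and not a tool mentioned in the interview; the function name, the five-times multiplier, and the minimum-history cutoff are all hypothetical choices.

```python
from statistics import median

def flag_suspicious_invoice(history, new_amount, multiplier=5.0, min_history=3):
    """Hold an invoice for manual review when it is far larger than the
    vendor's payment history suggests.

    history: amounts of previously cleared invoices for this vendor.
    Returns True when the invoice should be held for review.
    """
    # A vendor with little history always warrants review: synthetic
    # vendors start with no track record at all.
    if len(history) < min_history:
        return True
    # Compare against the historical median; a run of small
    # "trust-building" invoices keeps that median low, so the eventual
    # large invoice stands out sharply.
    return new_amount > multiplier * median(history)
```

A vendor who cleared three invoices around RM250 and then submits one for RM5,000 would be flagged, while routine amounts pass through unimpeded.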
08:36Exactly.
08:37And the moment you mentioned that, you know, there are also scammers investing the time to build trust
08:45along systems which are increasingly also automated, because, of course, that speeds up workflows.
08:52So, that also relates to the fact that our traditional security controls, so financial systems, are built around, you know,
09:00stolen passwords, what has become…
09:04Viruses.
09:04Yes, viruses, etc.
09:06So, basically, those controls are really no longer enough when you can actually have something which exactly mimics a CEO's
09:14voice, mannerisms, to the point of being undetectable.
09:18True.
09:19And if you look at 2025, 79% of the breaches that we saw…
09:2579%.
09:2579% do not include any kind of malware.
09:28Right.
09:29So, they're not breaking in.
09:30They're logging in.
09:31They're not, you know, breaking your password.
09:34So, all our controls are protecting us against something that's obsolete at this point in time with AI.
09:40That obsolescence is what is most concerning.
09:43I would just, out of curiosity, Krishna, I'd like to ask, so from those close to 80% cases,
09:48is there any data available around how many of those impacted organizations then took a step back and decided to
09:58pivot to a different strategy when it came to hopefully preventing future incidents?
10:04There are statistics showing about one-third of them have taken a different stance, right?
10:09Almost like a phoenix type of approach, right?
10:12Okay, sure.
10:13Coming out of the fire.
10:13Rise from the ashes.
10:14Rise from the ashes, yeah.
10:15All right.
10:16But two-thirds are still treating it as traditional ransomware or virus breaches.
10:23And the ones, the two-thirds, which is also the higher proportion then, what would be your insight as a
10:31cybersecurity expert?
10:33What's going to happen within the next, let's say, quarter or six months?
10:36It's going to be a catastrophe because it's only going to get worse.
10:41Are they going to be likely retargeted?
10:43Most likely.
10:44Within the next six months of the first breach, we see them getting repeated incidents.
10:49And they're not going to come out of it until they re-look at the entire infrastructure and rethink.
10:56Because we cannot solve a problem with the same level of thinking that created the problem, right?
11:00We have to think outside of the box.
11:02Yes, a different box, perhaps.
11:03Different box, perhaps, yes.
11:04And for the one-third of organizations which have documented doing something differently, what did they start doing differently?
11:13And any insights into why that threat perception was measured differently from the majority of companies?
11:21Well, the first thing they did was they looked at AI.
11:23And they started off with doing an AI risk assessment because they realized that some of those breaches came in
11:29from some of the AI tools they never knew their employees were using.
11:33I see.
11:33Sure, sure.
11:34What they call shadow AI, right?
11:36What your employees are using that no one knows.
11:38Your note takers, et cetera.
11:39Note takers, you know, the tons of productivity tools, right?
11:43And so they started with an AI risk assessment, right?
11:46Started with an inventory, right?
11:50And they created a one-pager, right?
11:51Saying the do's and don'ts, right?
11:53And bringing all of those under control.
11:55Right.
11:56The second thing they did was they started looking at it from a holistic perspective.
12:00And one very interesting thing was that they brought in cybersecurity expertise at the board to change things.
12:07Because they realized that by asking relevant and hard questions with confidence at the board, it will steer the company
12:15in the right direction.
12:16Exactly.
12:17Those were the three main trends from the one-third that really changed things around.
12:23Fascinating.
12:23Thank you so much, Krishna, for the conversation in so far.
12:26Sure.
12:26We take a short break.
12:27Don't go anywhere.
12:28We'll be back with the rest of the interview in just a tick.
12:49Welcome back to Niaga Spotlight.
12:51Still with me, Tehmina Kaoosji.
12:52And today we have a focus on AI fraud, looking at cybersecurity in particular with Krishna Rajagopal, CEO of AKATI
13:00Sekurity, joining me live in the studios.
13:02So, Krishna, going straight back into the conversation.
13:06Now, on the ground, it looks like Malaysian firms are really digitizing super swiftly.
13:13But at the same time, not all of them have the limitless budgets of financial institutions or bigger public listed
13:20companies to actually invest a lot towards cybersecurity.
13:22For our mid-sized companies, our growth-stage firms, our fledgling, budding entrepreneurs,
13:33What would you say are the three most important anti-fraud controls that they should be prioritizing for the rest
13:41of the year?
13:42Sure, Tehmina.
13:43I think the first thing is mandate an out-of-band verification for any kind of payment that, you know,
13:50is above a certain threshold.
13:52Alongside that out-of-band verification, have a secret passphrase that perhaps only the maker and the approver, you know,
14:00knows.
14:00And agree on that passphrase offline.
14:03Right.
14:04Don't store it in an Excel sheet, for that matter.
14:08Password manager is the way to go.
14:09Yeah.
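[Editor's note] The control described above, an out-of-band call-back plus a pre-agreed passphrase for payments over a threshold, can be sketched as a simple gate. This is an illustrative sketch only: the RM50,000 threshold, the function names, and the PBKDF2 parameters are assumptions, not figures from the interview.

```python
import hashlib
import hmac

HIGH_RISK_THRESHOLD = 50_000  # hypothetical cutoff, e.g. RM50,000

def passphrase_matches(supplied: str, stored_hash: bytes, salt: bytes) -> bool:
    """Check a passphrase against its stored salted hash in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", supplied.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_hash)

def payment_allowed(amount, oob_confirmed, supplied_phrase, stored_hash, salt):
    """Release a payment only after both controls pass for high-value amounts."""
    if amount < HIGH_RISK_THRESHOLD:
        return True  # low-value payments follow the normal approval flow
    # High-value: require a confirmed call-back on a separate channel
    # AND the passphrase agreed offline between maker and approver.
    return bool(oob_confirmed) and passphrase_matches(supplied_phrase, stored_hash, salt)
```

Note the passphrase is kept only as a salted hash, never in plain text in a spreadsheet, so a compromised file does not leak it.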
14:10And the second thing I would say is, if possible, move to phishing-resistant multi-factor authentication, MFA.
14:19And the technical term is a FIDO2-compliant hardware key, right?
14:23Or any kind of physical hardware tokens, be it in critical systems.
14:28And if it's banking, if possible, ask your bank if they do allow that.
14:32Because that does not suffer from the weakness we call MFA fatigue.
14:39Attackers are now, because a lot of the soft tokens are prompting on your phone.
14:43Okay.
14:43Theoretically, it's secure.
14:45But what happens is, attackers are now, again, hijacking the neurological side of things.
14:51They're prompting it continuously throughout the night.
14:54Giving you a sense of urgency.
14:56Urgency and also lethargy.
14:58Because you're like, I just want this to stop.
15:00So you would just say, okay.
15:02And then, boom, it goes, right?
15:04So hardware keys don't suffer that vulnerability.
15:08So if possible, ask your bank for that.
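[Editor's note] The "push bombing" pattern described here, repeated MFA prompts through the night until the user relents, can be caught with a simple burst detector over prompt timestamps. An illustrative sketch only; the 30-minute window and three-prompt limit are arbitrary assumptions, and identity platforms increasingly ship richer protections such as number matching.

```python
from datetime import datetime, timedelta

def fatigue_attack_suspected(prompt_times, window_minutes=30, max_prompts=3):
    """Return True when MFA push prompts arrive in a burst typical of
    push bombing: more than max_prompts inside any sliding window.

    prompt_times: datetimes of push prompts sent to a single user.
    """
    times = sorted(prompt_times)
    window = timedelta(minutes=window_minutes)
    for i, start in enumerate(times):
        # Count prompts that fall inside the window opening at this prompt.
        in_window = sum(1 for t in times[i:] if t - start <= window)
        if in_window > max_prompts:
            return True
    return False
```

Four prompts within a few minutes at 2 a.m. would trip the detector; four prompts spread across a working day would not.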
15:10And I think the third thing is, look at, whenever you're onboarding vendors, try to incorporate proof of life checks.
15:19Know who you're onboarding.
15:20So that you're not onboarding a synthetic vendor.
15:24There you go.
15:25So I think that would be one great way.
15:26But it would also entail rapidly digitized systems having some human components to them.
15:32Yes?
15:33That's true.
15:33And I think at this point, human on the loop is still a safe point where a human is still
15:38in control of the entire digitized AI process.
15:42At least for the near future.
15:44So I think those are just some common sense ways in which the midsize, the small mom-and-pop businesses,
15:51etc., can just make sure that nothing bad happens cybersecurity-wise.
15:55So let's talk bigger pieces of legislation for this arena, especially.
16:01Malaysia has moved very quickly on several fronts.
16:04We've got our Cybersecurity Act 2024, now completely in force, and also the Online Safety Act 2025, which can also
16:12be utilized against deepfake content in particular.
16:16So Malaysia's AI standards are also being positioned as part of our trust infrastructure.
16:23Things are looking good on the legislative side.
16:26But from a business standpoint, what are some of the pain points that, Krishna, you're still observing, are preventing companies
16:34from moving either fast enough or at a pace which is suitable for all this policy shift too?
16:42True. Well, on the bright side, the government is moving fast.
16:48Unfortunately, most organizations are not keeping up.
16:51That's the honest answer.
16:53On paper, Malaysia looks like we've got a really developed cyber, data privacy, and digital trust position in Southeast Asia.
17:04If not number one, number two.
17:06Okay.
17:06But our gap is operational.
17:08That is where we fall short.
17:11If you look at some of the statistics, I think only 1% of Malaysian companies can actually recover within
17:1824 hours of a breach.
17:20And more than one third of them actually take more than three weeks to recover after a breach.
17:25And what are the reporting guidelines for the period after a breach within which there is an actual legal obligation
17:35to report?
17:36Depends on the industry.
17:37So if they're in regulated industries, they've got a stricter timeline to disclose.
17:43But if they're in non-regulated industries... and of course you've got critical infrastructure as a category as well.
17:48And a general best practice is to report within 48 hours, 24 to 48 hours to the regulators if you
17:55have to and the government.
17:57But the regulation and disclosure is one thing, but coming back alive, being resilient post an incident is very important.
18:06So I think we should stop asking how we can prevent it.
18:09Companies should stop asking how we can prevent it and start looking at how can we stay resilient.
18:15Even if we get, let's have an assumed breach mindset.
18:19We are going to be breached, how do we stay resilient?
18:22I think that's the right way of thinking.
18:24And that should bring us up to the next couple of years in the right direction.
18:29So speaking of the next couple of years, but looking at them through the prism of now, Krishna.
18:32So malware is still something which most companies actually build their cyber drills around.
18:40But perhaps not for executive impersonation or any kind of synthetic media crisis simulations.
18:47So what would be your 101, speaking directly to business owners, to encourage them to not ignore malware,
18:57but also to shift and pivot distinctly towards the current emerging threats?
19:02I think the first thing is if you do have a board, you know, start asking, do you have AI
19:11and cybersecurity expertise on the board, right?
19:14There was a very interesting piece of research done by MIT Sloan and Bentley University.
19:18And what they did was they interviewed 239 board members in the US, right?
19:23And they found out that only 16 out of 239 had cybersecurity experience.
19:29Now, if the number is that staggering in the US, you can imagine the other parts of
19:34the world, right?
19:35Sure.
19:36So those researchers actually termed this cyber washing.
19:39And a lot of companies are guilty of this.
19:41Where we set up committees, we put in board members who don't necessarily have the skill,
19:47but we want to be seen as if we are taking cybersecurity risks seriously.
19:51But we don't have the might or the strength to go and ask the right questions.
19:58I think that has to change.
19:59The second thing I would say is look at your environment.
20:04Start mandating, do you have a documented verification process for any high-risk transactions in your organization?
20:12Right.
20:12Are you penalizing your employees for pausing and thinking?
20:17Because that's not the right way to go, right?
20:19You should actually foster a culture of allowing them to pause, verify, and think, right?
20:26That should be encouraged.
20:27Speed is sometimes what gets us into the pits.
20:30Into trouble, right?
20:31Yeah.
20:32And then, of course, like what you said, have a live deep-fake scenario, right?
20:38Do a live drill, right?
20:40And that allows people to get their hands around things, right?
20:44And feel of what it does, right?
20:47Yes.
20:47Rather than be stuck in that scenario where...
20:50Yeah, not prepared.
20:51You're basically, you're ready to make a decision that will either lead to X, Y, or...
20:57Correct.
20:57...absolute total loss.
20:58Yes.
20:59And the interesting thing about AI is that, you know, in traditional attacks, you can always point a finger at
21:04a hacker.
21:05That's right.
21:05But in AI, it is your own employee that was both... that actually pressed the button, right?
21:10So that employee is both the victim and the threat actor, right?
21:14And it's not encouraged to go after them.
21:17Because once you go after them, what happens is you're sending a signal to anybody else, don't raise...
21:22They're going to be scared to raise their hand.
21:24Don't raise flags, yeah.
21:24They're going to just sweep it under the carpet.
21:27So that's not the right way to do it.
21:29And I think the last thing we've got to look at is have that culture...
21:35I mean, embed that culture of slowing down, right?
21:38Before any kind of high-risk transactions.
21:40Exactly.
21:41One interesting dimension of the human-fronted AI-enabled fraud, Krishna, is also that accountability then, of course, rests with
21:51whichever employee is in charge.
21:53That also, in a sense, weaponizes particularly Asian workplace culture, which always defers to authority.
22:02We're respectful of the senior executives, very unlikely to question them.
22:08If everything appears to be real.
22:11So any thoughts on that?
22:14You know, it's interesting because we've got an authority bias, right?
22:19And then I'd like to bring up this wonderful book that I read, by Daniel Kahneman, right?
22:24Thinking, Fast and Slow.
22:26So in a lot of these deep fake frauds, what they're really looking at is that thinking fast.
22:31They're making you think fast.
22:34And when you are, you know, at that point where you're thinking, I'm going to lose my job if I
22:38don't obey, right?
22:40I may get penalized.
22:42You're not thinking.
22:43You're not thinking.
22:44You're reacting.
22:45Exactly.
22:46Right?
22:46And then eventually that employee gets blamed.
22:50Because they were too flustered in that moment of crisis.
22:54Of crisis.
22:54And I think that has to change.
22:56We have to start encouraging the employees and teaching them that, look, these things can happen.
23:04Let's do live drills of this.
23:05And in the event of this happening, this is what we're going to do, right?
23:09Have that secret phrase, right?
23:11Have that out-of-band verification.
23:13These are non-tech, very simple steps.
23:15Don't cost much.
23:17But it's very effective.
23:19Yeah.
23:19Interrupt the chain more than anything, right?
23:21Have that pause, those 10 minutes for your brain, your prefrontal cortex, to come back in charge and say, wait
23:27a minute, something doesn't look right.
23:29Let me chat with the CEO.
23:31Yes.
23:31That's all it takes.
23:32Yeah.
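[Editor's note] The pause Krishna recommends can be enforced by the system itself rather than left to willpower: a gate that refuses to release a high-risk transaction until a fixed delay has passed since it was requested. A minimal sketch; the class and method names are hypothetical, and the 10-minute default simply mirrors the figure mentioned above.

```python
import time

class CoolingOffGate:
    """Enforce a pause between requesting and releasing a high-risk action,
    giving the approver time to verify out of band and letting the
    rational brain catch up with the hijacked amygdala."""

    def __init__(self, delay_seconds=600):  # 10 minutes by default
        self.delay = delay_seconds
        self.requests = {}

    def request(self, txn_id, now=None):
        """Record when the high-risk transaction was first requested."""
        self.requests[txn_id] = time.time() if now is None else now

    def may_release(self, txn_id, now=None):
        """Unknown transactions never release; known ones only after the pause."""
        now = time.time() if now is None else now
        requested = self.requests.get(txn_id)
        return requested is not None and now - requested >= self.delay
```

Because the delay is mechanical, an urgent-sounding voice on a call cannot talk anyone out of it.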
23:33And perhaps that also breaks down the barriers by making upper management more involved in the direct day-to-day
23:39because this is definitely affecting bottom lines too.
23:42Definitely.
23:42And in this AI-enabled fraud, the authority chain has to be top-down, not bottom-up, right?
23:49The examples have to be set from the top.
23:52Yes.
23:52Yes.
23:53Exactly.
23:53So examples need to be set from the top; also interrupt the chain, have those secret passphrases, and definitely run
24:00simulation drills.
24:01That's true.
24:02Yes.
24:02Which is very different, of course, from malware drills.
24:05Correct.
24:05All right.
24:06Krishna, it's been an incredibly important conversation.
24:08Thank you so much for the work that you're doing as well.
24:10Thank you, Tamina.
24:11It was a pleasure.
24:13So in conclusion, the challenge with AI fraud is that it no longer looks suspicious in the traditional sense.
24:18It looks credible and increasingly human.
24:21As our businesses digitize further, the test will be whether, technological resilience aside, companies are able to rebuild trust
24:28and verification into every layer of decision-making.
24:32I'm Tehmina Kaoosji, signing off for now.
24:35Here's to a productive week ahead.