Welcome to Verify News Global. In this deep dive, we expose one of the most sophisticated financial heists in history. We break down the terrifying $25 million corporate scam in which an employee was tricked into a fake video call featuring a deepfake of their own CFO.
This video is a wake-up call. We explore how Artificial Intelligence is being weaponized to steal identities, clone voices, and manipulate reality.
In this video, we cover:
The $25M Ghost Call: How a UK-based firm lost millions to a deepfake meeting.
The Rise of Deepfakes: Why digital deception is becoming harder to detect.
The Liar’s Dividend: How scammers use AI to make you doubt the truth.
Stay Safe: Practical steps to verify information and protect your assets.
Important Disclaimer:
This video is for educational purposes only. The goal of Verify News Global is to promote digital literacy and cyber-security awareness. Always verify sensitive information through multiple reliable online and offline sources before taking any action.
Connect with Us:
Channel: Verify News Global
Topics: AI Scams, Deepfake Technology, Cyber Security, Digital Truth.
If you found this video helpful, please SUBSCRIBE and hit the bell icon to stay protected in the digital age!
#VerifyNewsGlobal #AIScams #Deepfakes #CyberSecurity #DigitalLiteracy #ArtificialIntelligence #DeepfakeWarning #TechNews2026
Transcript
00:07Imagine you're just, you know, sitting at your desk.
00:09You're logged into a totally routine video call.
00:12Right.
00:13Just a normal Tuesday.
00:14Exactly.
00:15And on the screen is your company's chief financial officer.
00:19He looks a little stressed.
00:20He's got that familiar kind of raspy cadence to his voice.
00:23And he's asking you to authorize an urgent $25 million transfer to secure a sudden acquisition.
00:31Which obviously is a massive request.
00:33Huge.
00:34But everything about the interaction feels completely normal.
00:37So you click approve.
00:39But here's the thing.
00:40Your CFO was never actually on the call.
00:42You just wired $25 million to a ghost.
00:45Yeah.
00:45It's a completely staggering scenario.
00:47And I think the most chilling aspect of it is that it's not hypothetical at all.
00:51No, it's really not.
00:52Welcome to this deep dive.
00:54Today, our mission is navigating a brand new reality for you, the listener.
00:58One where you literally cannot believe your own eyes and ears anymore.
01:01Right.
01:02Because we are looking at a landscape where the fundamental building blocks of objective
01:07truth, like audio, video, and photographic evidence, are being compromised and weaponized at an industrial scale.
01:13It's wild.
01:14So to help us map out this minefield, we're drawing on two incredibly eye-opening sources today.
01:19First, we have an industry report from Editor & Publisher by Rob Tornoe.
01:24Which takes a really hard look at how this tech is just eroding media trust across the board.
01:29Yeah, exactly.
01:29And second, we have a stark corporate risk analysis published on LinkedIn by Prashant Doom.
01:35That one really details the severe financial and operational threats companies are facing.
01:41So, okay, let's unpack this because we really need to understand the sheer scale of what we are dealing with
01:47here.
01:47Well, the scale is the entire point, really.
01:49We have to move past this completely outdated notion that deepfakes are just, you know, niche internet jokes or
01:54people making silly mashup videos in their basements.
01:57Right.
01:57Like the early YouTube days.
01:59Exactly.
01:59That era is over.
02:01Generative media is now an active, highly sophisticated battleground.
02:05And the targets are your wallet, your media consumption, and basically your baseline trust in reality itself.
02:11I think to really grasp the threat, we have to look at how absurdly fast the mechanics of this technology
02:17have evolved.
02:18I mean, in the Editor & Publisher piece, Tornoe points out that just a few years ago, the absolute pinnacle
02:24of AI video was that viral, honestly laughable clip of Will Smith eating spaghetti.
02:30Oh, my gosh.
02:31Yes.
02:31The spaghetti video.
02:33It looked like a fever dream.
02:34Like the physics were entirely wrong.
02:35The pasta was clipping right through his face.
02:37His jaw was morphing into the bowl.
02:39It was obviously a joke.
02:40Total meme.
02:41Right.
02:41But you fast forward to today, and we are looking at the hyper-realistic real-time deception from Doom's corporate
02:48risk analysis.
02:49Let's look at that $25 million heist at a UK architecture firm.
02:53Yeah.
02:54The terrifying part of that specific scam wasn't merely that they cloned the CFO's voice.
02:58I mean, voice cloning is bad enough.
03:00But it's that the attackers likely used real-time face swapping software.
03:04Wait, real-time.
03:05So someone was actually there on the camera.
03:07Yes.
03:07An actual human being was sitting on the other end of that video call, moving their head and moving their
03:13lips.
03:13But the AI was tracking their facial landmarks.
03:16So the eyes, the nose, the jawline.
03:19And it was literally mapping the CFO's face onto theirs in real time.
03:24That is insane.
03:26It operates very similarly to a highly advanced, weaponized social media filter.
03:31And while that's happening visually, a secondary audio processing tool is simultaneously altering the pitch and the timbre of the
03:37scammer's voice to match the CFO's exact acoustic fingerprint.
03:41Wow.
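The geometric core of the real-time face swap described above is aligning the target's face to the performer's tracked landmarks. A minimal sketch of that alignment step in Python with NumPy, using a simple least-squares affine fit rather than the full neural rendering a real attack tool would use (all coordinates and names here are illustrative):

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2-D affine transform mapping src landmarks onto dst.

    src, dst: (N, 2) arrays of (x, y) facial landmark coordinates.
    Returns a 2x3 matrix A applied as: warped = [x, y, 1] @ A.T
    """
    n = src.shape[0]
    src_h = np.hstack([src, np.ones((n, 1))])        # homogeneous coords (N, 3)
    X, *_ = np.linalg.lstsq(src_h, dst, rcond=None)  # solve src_h @ X ≈ dst
    return X.T                                        # (2, 3)

def warp_points(points, A):
    """Apply the 2x3 affine transform to (N, 2) points."""
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    return pts_h @ A.T

# Toy example: the performer's landmarks are a scaled, shifted copy of the
# target's, so the recovered transform should map them back exactly.
target = np.array([[30., 40.], [70., 40.], [50., 60.], [40., 80.], [60., 80.]])
performer = target * 1.2 + np.array([5., -3.])       # what the camera sees

A = estimate_affine(performer, target)
aligned = warp_points(performer, A)
print(np.max(np.abs(aligned - target)))              # near zero: landmarks line up
```

A production face swap repeats this kind of fit every frame, then renders the target's texture over the aligned region; the sketch only shows the tracking-and-mapping idea.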
03:42It makes me think about the evolution of CGI in movies, you know?
03:45Yeah.
03:45Like 20 years ago, you went to the theater and you knew exactly when the monster was coming.
03:49Oh, absolutely.
03:50The lighting was off.
03:51The rendering was clunky.
03:52Exactly.
03:52The shadows were wrong.
03:53Your brain just inherently knew it was a special effect.
03:56But today, the technology is so seamless, filmmakers use CGI for things you don't even notice.
04:02Like changing the weather in a scene.
04:03Changing the weather, extending a city skyline, or entirely reconstructing a historical room.
04:09You have literally no idea you're looking at special effects.
04:12Yeah.
04:12And that level of invisible manipulation is exactly what is happening to our daily communication now.
04:17It's a great analogy.
04:19But if the tech is moving this fast, I mean, aren't we just completely outmatched?
04:23Why is this suddenly absolutely everywhere?
04:26Well, the short answer is that the protective guardrails are quietly being dismantled by the industry itself.
04:32Really?
04:32Yeah.
04:32In the early days of generative AI, the major players, companies like Google and OpenAI, placed strict, hard-coded
04:41limitations on their tools.
04:43To keep people from making fake news.
04:45Exactly.
04:45They explicitly prohibited users from prompting the AI to create videos of public figures or copyrighted materials precisely to avoid
04:53this kind of chaos.
04:54But I'm guessing the market pressure to dominate the space kind of changed that calculus.
04:58Significantly.
05:00Tornoe points out that when OpenAI launched their new social video app, Sora, the initial architecture basically allowed users to
05:08generate content using celebrities and protected intellectual property with minimal friction.
05:14Oh, wow.
05:14So they just took the brakes off.
05:16Pretty much.
05:16The compute power was essentially handed over without the traditional filters.
05:20It resulted in such a wild free-for-all that Sam Altman actually had to scramble to rewrite those constraints.
05:26Probably to stave off a massive tsunami of copyright lawsuits, I'd imagine.
05:31Primarily, yes.
05:32But the underlying capability is now out in the wild.
05:35And because those guardrails vanished, the barrier to entry just dropped to zero.
05:40You no longer need a Hollywood budget or a massive server farm.
05:43You literally just need a smartphone.
05:46Just a phone and an internet connection.
05:47Right.
05:48Which brings us to the inevitable result of handing over studio-level special effects to the entire internet.
05:53We get what the sources call AI slop.
05:56AI slop.
05:57It's such a perfect term for it.
05:58It really is.
05:59Tornoe quotes Vox's Brian Walsh, who had this incredibly vivid, honestly visceral description of it.
06:05He compared taking a TikTok-style social media platform and mixing it with AI-generated content to taking heroin and
06:13mixing it with heroin.
06:14Wow.
06:14It is a really intense comparison, but neurologically speaking, it actually makes perfect sense.
06:19How so?
06:19Well, social media recommendation algorithms are already designed to be highly addictive, right?
06:24They surface high-engagement, emotionally resonant content to keep you scrolling.
06:29Right.
06:29The infinite scroll.
06:30Exactly.
06:31So when you flood that system with generative AI, you are injecting an infinite brain-rotting stream of garbage content
06:40that can be manufactured at zero cost.
06:43It perfectly feeds the algorithm's demand for constant novelty.
06:46And a massive portion of that novelty content is deeply political.
06:50Now, before we get into this, I need to be really clear with you, the listener.
06:53Our sources highlight examples from across the political spectrum.
06:56We are simply reporting these case studies to show how the tech is being used without endorsing any political viewpoints
07:01whatsoever.
07:02Yes, that is a crucial distinction.
07:04We're looking at the mechanics here.
07:05Exactly.
07:05Because what's really striking in Tornoe's report is that this isn't isolated to one political ideology.
07:12Looking at the case studies, whether it's Andrew Cuomo using AI to depict himself as a subway conductor in an
07:18ad with a tiny, barely visible disclaimer, by the way.
07:22Oh, very tiny.
07:23Or Eric Adams utilizing AI imagery before suspending his campaign.
07:28Or Donald Trump sharing an AI-altered video putting a sombrero on Representative Hakeem Jeffries.
07:33Or Elon Musk sharing a manipulated campaign ad featuring Kamala Harris.
07:38We're seeing this technology deployed across the entire spectrum.
07:41They're all basically beta testing the exact same tactic.
07:44Right.
07:45The focus for us here isn't the politics.
07:47It is purely the mechanics of the deception.
07:48What's fascinating here is the sheer variety of the testing.
07:51Some political actors are using it for campy online humor to drive engagement metrics.
07:56You know, just trying to go viral.
07:57Yeah, the meme wars.
07:58Exactly.
07:59But others are trying to artificially manufacture a relatable blue-collar aesthetic without actually, you know, staging a real photo
08:07shoot.
08:07But every single instance contributes to the exact same underlying problem, which is the normalization of fabricated reality in our
08:16civic discourse.
08:17And it is not just playing out at the national presidential level where you have teams of forensic digital analysts
08:25scrutinizing every single pixel.
08:26The accessibility of this tech means it's hitting local communities, which is honestly arguably much scarier.
08:32Oh, without a doubt.
08:33The local level is highly vulnerable.
08:35Yeah.
08:36Tornoe cites an example from a suburban Baltimore school in 2024.
08:39The school principal was suddenly placed on administrative leave because an audio recording went incredibly viral.
08:45In the clip, the principal was allegedly using just horrific racist and anti-Semitic slurs.
08:50I remember reading about this.
08:51The damage to his reputation and the resulting community outrage was instantaneous.
08:56Overnight.
08:56But an investigation eventually revealed the audio was a complete fabrication.
09:01It was AI-generated.
09:02And the architect behind it wasn't some sophisticated foreign hacker collective.
09:07It was the school's own athletic director.
09:10Which is just crazy.
09:11He created the fake as revenge because he was under investigation by that very principal for allegedly embezzling school funds.
09:19It perfectly illustrates how you don't need a massive supercomputer to destroy someone's life or career anymore.
09:26Just a grudge, an internet connection, and like five minutes of audio training data.
09:31That's really all it takes now.
09:32But here is the twist that I found genuinely mind-bending in the sources.
09:36Because fake audio and fake video are becoming so ubiquitous, there is a secondary psychological effect taking hold.
09:43And it might actually be more dangerous than the fakes themselves.
09:46Ah, you were referring to the liar's dividend.
09:48Yes.
09:49The liar's dividend.
09:54Tornoe shares a personal anecdote that perfectly captures this feeling.
09:54A friend sends him a video of former NFL head coach Jim Mora.
09:57It's an old clip.
09:58It's grainy, low resolution.
10:00And Mora is just passionately cursing out reporters.
10:03Standard old school sports footage.
10:05Yeah, it's wild footage.
10:07But Tornoe, a man who literally analyzes digital media for a living, looks at it and genuinely cannot tell if
10:14it is real.
10:15He has this headache-inducing moment of profound doubt.
10:19The mechanism causing that doubt is so fascinating.
10:22The low resolution of older video actually acts as a form of camouflage for AI.
10:27Really?
10:28How does that work?
10:28Well, the natural visual artifacts of old VHS tapes or early digital cameras like the blur, the pixelation, the dropped
10:35frames, they perfectly mimic the rendering flaws and glitches of early generative AI.
10:40Oh, that makes sense.
10:41Right.
10:42So the human eye struggles to differentiate between analog decay from an old tape and digital fabrication from an AI
10:48model.
10:48Wow.
10:49Now, Tornoe eventually verified that the Mora video was completely real by digging into old newspaper archives.
10:55But the fact that he doubted it at all is the core issue here.
10:58And this brings us to that crucial study published in the Journal of Politics by Professors Barari, Lucas, and Munger.
11:03Yes, this study is the absolute key to understanding the true societal impact of this technology.
11:09What they discovered is that deepfakes, on their own, might not actually deceive the broader public on a massive,
11:16civilization-ending scale.
11:17Because eventually someone figures it out.
11:19Exactly.
11:20Most people eventually figure out the fake, or journalists rigorously debunk it.
11:25But the sheer existence of so much fake content achieves something far more insidious.
11:31It systematically discredits authentic media.
11:34It's like tossing a single, counterfeit $100 bill into a busy cash register.
11:39The real danger isn't just that someone successfully spends the fake bill.
11:42The danger is that now the cashier doesn't trust any of the real bills in the drawer.
11:47That is a brilliant way to put it.
11:49They are questioning the authentic currency.
11:51Right.
11:51And the studies show this exact psychological mechanism across the board.
11:55They show Republicans a very real, completely authentic video of Donald Trump messing up Apple CEO Tim Cook's name.
12:02And the Republicans were highly likely to dismiss it as a deepfake.
12:05And the researchers observed the exact same cognitive dissonance playing out on the other side.
12:10They showed Democrats a real, authentic video of Barack Obama appearing to make a post-election deal with Russian President
12:18Vladimir Putin.
12:19And the Democrats overwhelmingly dismissed the real footage as an AI fake.
12:25So wait, let me make sure I'm locking on to the underlying logic here.
12:28The primary danger isn't necessarily that we're all going to fall for the fake video.
12:33The real danger is that we use the existence of fake videos as an incredibly convenient psychological escape hatch to
12:41just ignore the truth whenever it challenges our worldview or makes our side look bad.
12:46That is the liar's dividend in its purest form.
12:48When anything can be faked, nothing has to be believed.
12:51Wow.
12:52People are just offloading their cognitive dissonance onto the technology.
12:55If a corporate executive is caught on tape admitting to a crime, their immediate legal and public relations defense will
13:01simply be, well, that is an AI voice clone.
13:03Right. You can't prove it's me.
13:05Exactly.
13:05If a politician is filmed doing something corrupt, they just claim it's a deepfake.
13:09And because the public knows deepfakes exist, a large portion of the population will readily give them the benefit of
13:15the doubt.
13:16It actively erodes the very concept of shared objective evidence.
13:21Which is terrifying, because if we can't agree on basic recorded facts, the entire foundation of how we function as
13:28a society starts to crumble, which leads to the obvious next question for anyone listening.
13:34How do we fight back?
13:35I mean, why can't we just build an AI to catch the AI?
13:39We have antivirus software for our computers.
13:42Where is the algorithmic antivirus for our eyeballs?
13:45It is a very logical assumption, but the reality of the engineering is much more complicated than that.
13:50Researchers at Cornell University are calling this a new battleground, and it really comes down to how these AI models
13:55are built in the first place.
13:56Okay, lay it on me.
13:57They often use what are called generative adversarial networks, or GANs.
14:00Right, GANs.
14:01Essentially, one part of the system generates the fake, and another part of the system tries to detect it.
14:06And they learn from each other in real time.
14:08So by the time you train a detection algorithm to catch a specific type of AI artifact,
14:14the generative AI has already used that detection data to smooth the artifact out in the next version.
14:19Man, so it's not just an endless game of whack-a-mole.
14:22The mole is actively learning how to dodge the mallet.
14:25Precisely.
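The adversarial loop just described can be sketched with a deliberately tiny toy: a "generator" that only learns the mean of a 1-D Gaussian, and a logistic-regression "discriminator" trying to tell real samples from fakes. This illustrates the GAN training dynamic, not a real deepfake model; every parameter here is made up for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

REAL_MEAN = 4.0      # "authentic" data distribution: N(4, 1)
mu = 0.0             # generator parameter: fakes drawn from N(mu, 1)
w, b = 0.0, 0.0      # discriminator: D(x) = sigmoid(w*x + b)
lr, batch = 0.05, 64

for step in range(2000):
    real = rng.normal(REAL_MEAN, 1.0, batch)
    fake = mu + rng.normal(0.0, 1.0, batch)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    # (standard logistic-loss gradients).
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    grad_w = np.mean((d_real - 1) * real) + np.mean(d_fake * fake)
    grad_b = np.mean(d_real - 1) + np.mean(d_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator step: nudge mu so the discriminator scores fakes as real.
    d_fake = sigmoid(w * fake + b)
    grad_mu = -np.mean((1 - d_fake) * w)
    mu -= lr * grad_mu

print(round(mu, 2))  # mu has drifted from 0 toward the real mean of 4
```

Each side's improvement is exactly what trains the other, which is the whack-a-mole dynamic from the conversation: any cue the detector learns becomes a training signal for the forger.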
14:26And relying heavily on software can actually introduce new vulnerabilities.
14:30Experts at the Tow Center for Digital Journalism warn about something called automation bias.
14:35Automation bias.
14:36What's that?
14:36It's where detection tools create a false sense of security.
14:39If a glitchy detection tool flags a fake video as real, journalists and the public might just blindly trust the
14:46software's output and amplify the misinformation.
14:48It effectively weakens our own critical thinking muscles because we're outsourcing our skepticism to a machine.
14:54Though, to give credit where it's due, Tornoe did find one tool that actually worked quite well in his testing.
14:59It's called the Hiya Deepfake Voice Detector.
15:02It's a Chrome plug-in, and it analyzes audio in real time.
15:05But I'm curious, how does it actually know the difference if human ears can't tell?
15:09It's analyzing the acoustic artifacts that fall completely outside of normal human perception.
15:15Like stuff we literally can't hear.
15:17When an AI generates speech, it often struggles with the subtle physics of human anatomy.
15:22The micropauses for breath, the natural resonance of a vocal cord, the specific frequency distribution of a consonant.
15:31The plug-in measures those anomalies and assigns an authenticity score from 1 to 100 based on those physical impossibilities.
15:38And it seems highly effective, at least for now.
15:40Tornoe tested it on 10 real videos and 10 deepfakes, and it scored a perfect 20 out of 20.
15:45That's impressive.
15:47Yeah.
15:47It even caught that AI-altered Kamala Harris ad that Elon Musk shared, giving it a 1 out of 100
15:53and flagging it as a likely deepfake.
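As a toy illustration of artifact-based scoring (emphatically not how Hiya's actual detector works), one measurable "physical" cue is timing jitter: real voices drift and waver, while a naively synthesized tone is perfectly periodic. A sketch in Python, with the scoring formula and signals entirely made up for the demo:

```python
import numpy as np

def authenticity_score(signal):
    """Toy authenticity score (1-100) from timing regularity.

    Real voices carry natural jitter; a waveform that is too perfectly
    periodic gets a low score. A stand-in for the far richer acoustic
    features a production detector would actually measure.
    """
    # Indices where the waveform crosses zero going upward.
    crossings = np.flatnonzero((signal[:-1] < 0) & (signal[1:] >= 0))
    intervals = np.diff(crossings)
    if len(intervals) < 2:
        return 1
    # Coefficient of variation of cycle lengths: more jitter -> more "human".
    cv = np.std(intervals) / np.mean(intervals)
    return int(np.clip(cv * 1000, 1, 100))

rng = np.random.default_rng(1)
t = np.arange(16000) / 16000                              # one second at 16 kHz
synthetic = np.sin(2 * np.pi * 120 * t)                   # perfectly periodic tone
jitter = np.cumsum(rng.normal(0, 0.3, t.size)) / 16000    # drifting phase
natural = np.sin(2 * np.pi * 120 * (t + jitter))          # same tone with human-like wobble

print(authenticity_score(synthetic), authenticity_score(natural))
```

The too-regular synthetic tone scores near the bottom of the scale while the jittery signal scores high, mirroring the idea of flagging anomalies that fall outside normal human perception.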
15:55But as you said, the tech will adapt, so if software can't permanently save us, we have to patch human
16:00behavior.
16:01Doom's piece had a brilliant suggestion for a human defense playbook, starting with pre-agreed code words.
16:07It sounds almost archaic, like a digital speakeasy.
16:11But if your CFO calls asking for $25 million, you just ask for the password.
16:15It is archaic, and that's exactly why it works so beautifully.
16:19It breaks the reliance on the screen.
16:21If you have elderly parents or if you work in finance and regularly authorize transfers, establish a specific word or
16:28phrase offline.
16:29Right. Don't text it to them.
16:30No. Do it in person.
16:32It is a remarkably simple, zero-tech defense that completely shatters the AI impersonation.
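A team adopting the code-word idea might still want a small tool for checking the word without ever storing it in plaintext, so a stolen laptop doesn't leak the secret. A minimal sketch using Python's standard library (the salt, code word, and function names are all hypothetical; the real defense is the offline, in-person agreement itself):

```python
import hashlib
import hmac

# The code word is agreed in person; only its salted hash is ever stored.
SALT = b"finance-team-demo-salt"    # illustrative constant, not a real secret

def enroll(code_word: str) -> bytes:
    """One-time setup: derive and store a slow, salted hash of the word."""
    return hashlib.pbkdf2_hmac("sha256", code_word.encode(), SALT, 100_000)

def verify(stored: bytes, spoken: str) -> bool:
    """Check a word given over a call against the stored hash."""
    candidate = hashlib.pbkdf2_hmac("sha256", spoken.encode(), SALT, 100_000)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(stored, candidate)

stored = enroll("blue heron")         # done once, in person

print(verify(stored, "blue heron"))   # the caller knows the code word
print(verify(stored, "acquisition"))  # a deepfake caller cannot answer
```

A deepfake can clone a face and a voice, but it cannot produce a secret that was never transmitted digitally, which is exactly why this zero-tech check works.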
16:38You should also really watch out for timing manipulations.
16:41What do you mean by timing manipulations?
16:42Bad actors know that chaos is their best friend, so you need to be hyper-alert if a scandalous video
16:48or a dire financial request drops right before a major event, like the night before an election, or an hour
16:54before a company's quarterly earnings report.
16:56The goal is to force you to react before the truth can put its boots on.
17:00And a massive part of forcing that reaction is weaponizing our feelings.
17:05Bad actors are exploiting our amygdala.
17:07They know that if they can trigger a high-arousal emotion, outrage, terror, deep empathy, or sudden urgency, they effectively bypass
17:16the brain's critical thinking centers.
17:18That is the most crucial vulnerability we have as humans.
17:22If a video makes you instantly furious or instantly terrified, you must recognize that emotional spike as a giant red
17:28flag.
17:28Take a breath.
17:29Yes.
17:29That physiological reaction is exactly what the creator is relying on to bypass your rational processing.
17:35To combat that, Tornoe brings up digital literacy expert Mike Caulfield's SIFT method.
17:40It's an acronym that stands for Stop,
17:42Investigate the source, Find better coverage, and Trace the claim to its original context.
17:47It's not just about digital hygiene, it's a cognitive circuit breaker.
17:51I like that, a cognitive circuit breaker.
17:53Yeah.
17:53If you see a crazy video on social media, don't just hit share.
17:56Stop.
17:56Look to see if established, rigorous news outlets are reporting on it.
18:00Try to find the uncut, original version.
18:02Slowing down our processing speed is our absolute best defense against the generative speed of AI.
18:08If we connect this to the bigger picture, that intentional friction, forcing yourself to manually slow down your consumption,
18:14is the absolute antidote to the viral nature of deepfakes.
18:18Okay.
18:19We've covered the multi-million dollar heists, the algorithmic slop, the erosion of objective truth.
18:25It's incredibly heavy stuff.
18:26It really is.
18:27But here's where it gets really interesting.
18:30Because both of our sources acknowledge that this technology is just a tool.
18:35It isn't inherently evil.
18:37There is a silver lining here.
18:39There are constructive applications that are actually quite beautiful.
18:42Yes.
18:43We have to look at the full spectrum of the technology.
18:46Doom's analysis highlights how, with proper ethical frameworks and transparency,
18:50deepfakes are going to revolutionize several fields.
18:53Take accessibility, for instance.
18:55This was the part of the deep dive that actually gave me some hope.
18:57It is remarkable.
18:58The exact same mechanism that can clone a CEO's voice to steal money can be used to give a voice
19:03back to someone who is actively losing theirs.
19:06Individuals suffering from degenerative diseases, like ALS, can utilize something called voice banking.
19:12They record a specific phonetic script while they still have full vocal control.
19:17The AI maps the unique timbre, pitch, and cadence.
19:21Then, as the disease progresses and they lose the physical ability to speak,
19:25the AI can generate real-time speech from whatever they type.
19:29That is amazing.
19:30And the output isn't a robotic, generic, text-to-speech voice like Siri.
19:34It is their authentic, acoustic identity.
19:37It restores a profound piece of their humanity.
19:40That is incredible.
19:41And the sources also mention real-time translation into sign language,
19:44which obviously opens up massive communication avenues.
19:47What about the educational and entertainment applications?
19:50Well, imagine bringing history to life.
19:52Instead of reading a dry textbook,
19:53students could interact with an AI-generated, historically accurate avatar of Abraham Lincoln or Marie Curie,
19:59answering questions in the classroom in real time.
20:02Kids would love that.
20:03And in the entertainment and business sectors,
20:05we are already seeing actors being seamlessly de-aged for flashback scenes,
20:09which reduces massive physical reshoot costs.
20:13Furthermore, instead of poorly dubbed movies where the audio clearly doesn't match the mouth,
20:18AI can sync an actor's lip movements perfectly to a foreign language track.
20:22Oh, creating seamless global localization for films and marketing.
20:27Exactly.
20:27So Tom Cruise can literally look like he is speaking perfectly fluent Mandarin in the next Mission Impossible,
20:34expanding that global connection.
20:36The potential is vast.
20:38Doom quotes the entrepreneur and futurist Peter Diamandis, who states,
20:42"The future of this technology remains unwritten."
20:45But the absolute non-negotiable requirement for unlocking these benefits
20:49is ethical consent and complete transparency.
20:52Right.
20:52We have to know without a shadow of a doubt when we are talking to a machine.
20:55Exactly.
20:56The technology itself is completely agnostic.
20:58It's the human intent behind it that determines whether it operates as a miracle or a menace.
21:03So what does this all mean?
21:05We have journeyed from the goofy low stakes days of AI spaghetti videos right into the heart
21:11of a technological revolution that's actively targeting our wallets,
21:15our political systems, and our very perception of reality.
21:19It's been a rapid shift.
21:20The core takeaway for you today is this.
21:23We have crossed a fundamental threshold.
21:25You can no longer take what you see or hear on a screen at face value.
21:29The era of implicit trust in digital media is over.
21:33Sadly, yes.
21:34But you don't have to exist in a state of constant paranoia.
21:37You just have to be prepared.
21:39By adopting a mindset of pause, question, verify, using the SIFT method as a cognitive circuit breaker,
21:46recognizing when your emotions are being hijacked,
21:48and setting up simple analog defenses like code words with your family and your coworkers,
21:53you do not have to be a victim of the next $25 million scam
21:56or fall for the latest piece of viral political slop.
21:59You can navigate this new reality.
22:01It's really about adapting our human instincts to survive in a synthetic world.
22:05But, you know, this conversation raises an important question,
22:08one that builds on what we just discussed about preserving voices for ALS patients.
22:12Okay, what's that?
22:13If artificial intelligence can perfectly mimic a person's voice, their face,
22:17their exact mannerisms, and their conversational style,
22:20what happens to human grieving in the future?
22:23Oh, wow.
22:23When we pass away, will it become a standard cultural practice
22:27to leave behind interactive, deepfake versions of ourselves
22:30for our grandchildren to talk to, to seek advice from?
22:33And if we do integrate that into our life,
22:36are we beautifully preserving a legacy,
22:38or are we just preventing our loved ones from ever truly letting go?
22:43Oh, man, that is a deeply profound place to leave it.
22:46Next time you get a strange voice note from a friend
22:49or an odd video call from your boss and you do that double take,
22:52remember the mechanics of everything we unpacked today.
22:55Thank you so much for joining us on this deep dive.
22:57Stay curious, stay skeptical, and we'll catch you next time.