Transcript
00:03Hi, I'm Hamish McDonald, and I'm a journalist, which means facts are my business.
00:09Where do you get your information from?
00:11Social media.
00:12Facebook.
00:13Facebook, Instagram and YouTube.
00:15Social media is where so many of us now get our information.
00:19But can we rely on it?
00:22When tech makes it so easy to create a fictional story, which can look and sound real.
00:29I went to the moon last week, don't you know?
00:31Even though it's entirely fake.
00:35Are you like so many people now, struggling to know what's true and what's false?
00:41Big tech is shifting our relationship with facts.
00:45We're going through a period of radical change in the way that people consume information.
00:49We've created this monster.
00:52An algorithm is outsourcing the human choice into a machine's decision.
00:57The end goal is keep you scrolling.
00:59Hack our biology, change the way we feel, to change the way we see the world.
01:04Is anyone else paying attention to how all of this is affecting society?
01:10Over the course of this series, I've explored how social media is transforming the way we
01:15communicate.
01:16It's part of the propaganda, it's a biased content production.
01:19But we're about to take another technological leap.
01:23It took 68 seconds to replace his face.
01:27As AI makes everything believable and nothing certain.
01:32Anything could be fake at any time.
01:35So the experiment is detect AI faces.
01:39I was really easily tricked by it.
01:41We don't want to sleepwalk into a future that nobody wants.
01:44Are you scared?
01:45I'm very scared.
01:47Join me on a journey right around Australia and the world to meet people determined to
01:52keep facts alive.
01:54In this real-time information revolution.
02:10Social media has changed how we communicate.
02:14Algorithms have sent us into information echo chambers.
02:20Do you ever have those moments where you feel just a bit overwhelmed from all the information
02:24coming at you?
02:25You know, the light, the noise, even the speed of it all?
02:28Well, I don't want to alarm you, but it's probably going to get worse.
02:33You see, AI is here and it's revolutionising everything.
02:44Including our perception of who and what is real.
02:56Already, almost half our population is playing with generative AI.
03:03Even though no one knows where it might take us.
03:09One of the confusing things about AI is there's so many different kinds of risks that are all
03:14connected in a way that's even more profound than social media.
03:18Social media is like a baby AI.
03:22AI, if it really automates human labour, will rapidly automate billions of people's jobs.
03:28The actual mission statement is to automate every form of human cognitive labour in the economy.
03:36To be able to do every job better than a human can do.
03:40AI is a computer system that can do tasks otherwise requiring human intelligence.
03:46It's working for us every day, in phones, cars, appliances.
03:55It's also data crunching, picking strawberries, and diagnosing diseases.
04:04But AI is particularly good at work that involves distorting reality.
04:13As AI technology improves, it's getting harder for all of us, including experts,
04:19to detect what's human versus machine built.
04:26And am I looking in the same place each time?
04:28Same place every time.
04:29And don't you dare smile.
04:30Okay.
04:31Stuart White has spent decades perfecting his craft as a special effects artist.
04:37I'm going to walk an arc around the front here, take about 50 photos all one after the other.
04:41The whole process always starts with scanning somebody and turning them into a 3D model.
04:46Just do what the face is doing.
04:49Creating a three-dimensional version of my head is a painstaking process,
04:53requiring thousands of images and months of work.
04:57Wide eyes, surprise mouth.
05:00That's it.
05:02Funnel lips.
05:04That's a lot of muscles in the face.
05:06And you're done.
05:06I'm done.
05:07Yeah.
05:08I never need to work again.
05:11The robots can do the rest.
05:14Now Stu's got the full picture of my face, he needs a body he can connect it to.
05:29Highland dancer Tristan has been perfecting his art since he was six.
05:42He's now a national champion.
05:47First, 669.
05:52And with Stu's help, I can be too.
05:58Rather than take his usual time-consuming 3D modelling approach, Stu is about to experiment with an AI-powered shortcut.
06:09So this is the software called Face Fusion.
06:12It costs about $1.50 an hour to rent a computer powerful enough to do this, so not much.
06:19And basically what it's asking for here is a still frame of the person whose face you want to use.
06:26So I'm going to find a fairly frontal picture of Hamish and drop it in there.
06:32We've got one of the shots from the dancing competition here.
06:36And hopefully pretty much straight away it'll start trying to get to work.
06:40It happens in basically about a minute, more or less, which is astounding to me because historically this took six
06:50months with traditional techniques.
06:53And yeah, here we are, we have a result already.
06:57Stu is watching a machine supersede his job.
07:00It took 68 seconds to replace his face.
07:03It's done a pretty terrific job.
07:06And people who had never met Hamish could probably watch the end resulting clips and believe that that person was
07:14doing that dance.
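The workflow Stu describes — one frontal still of the source face, then an automated swap applied frame by frame — can be sketched in outline. This is a conceptual illustration only, assuming the detect-align-blend pipeline typical of such tools; `detect_face` and `blend_face` are hypothetical stand-ins, not FaceFusion's actual API.

```python
# Conceptual sketch of a one-still face swap, per the workflow described
# above. These helpers are hypothetical stand-ins for a real detector
# and generator model, not FaceFusion internals.
def detect_face(frame):
    # a real detector would return facial landmarks used for alignment
    return {"landmarks": frame}

def blend_face(source_still, frame, landmarks):
    # a real generator warps the source identity onto the target pose
    return ("swapped", frame)

def swap_video(source_still, frames):
    # one swap per frame: no months-long 3D modelling step required
    return [blend_face(source_still, f, detect_face(f)["landmarks"])
            for f in frames]

result = swap_video("hamish_frontal.jpg", ["frame1", "frame2", "frame3"])
print(len(result))  # → 3
```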
07:15So what about someone who has met me?
07:18In fact, who knows my face better than anyone?
07:21Alright Mum, do you want to take a seat?
07:23I will.
07:23I've invited my mum, Carol, into work for the afternoon.
07:26My screening room.
07:27I've told her I've got something special to show her.
07:30So you know I've been very busy.
07:32Yes, always busy.
07:34I've been working on this as well as other things, but I've also been learning some new skills.
07:39Oh right.
07:40You know I like to do things well.
07:42Yes, yes.
07:43I've actually done quite well.
07:45Oh, thank you.
07:46So what?
07:48Do you want to have a look?
07:50I'll put my glasses on.
07:52Okay.
07:53You ready?
07:53Yeah, ready.
08:12Oh, that's gorgeous.
08:25First 669.
08:27You're joking.
08:31Very clever.
08:33You do things perfectly, always.
08:35I didn't realise you had that skill.
08:37Did you?
08:38Not really.
08:39I was a little bit surprised.
08:40Oh my goodness.
08:42That was fantastic.
08:44Dad would have loved it too.
08:46Yeah, Dad would have been very proud.
08:48Yeah.
08:49How did you learn it?
08:51Was there anything about it that looks weird to you?
08:57No.
08:58Okay.
08:59Kind of scary.
09:00AI can trick even my own mum.
09:03What if I told you it's not really me?
09:07Well, some of it's you, but some of it's not.
09:09None of it's me.
09:10None of it.
09:14Well, I thought it was you when I got going, but then I thought it doesn't look quite like
09:20your face.
09:21Yes.
09:23And also the perfect arabesque.
09:29Do you look out for AI?
09:31Do you think you spot it?
09:31Yeah, well, I do look at things that are, and I do spot them.
09:35Yeah.
09:35Usually.
09:36Oh my gosh, it has rattled me.
09:40It's very clever.
09:43Love you.
09:43Love you too.
09:51AI is duping us, but Australians aren't blind to its dangers.
09:56Only 30% of us think the benefits of this tech outweigh the risks.
10:01We've recruited 10 participants and two cognitive psychologists
10:05to understand how we're adjusting to AI-generated content.
10:12So the experiment that I'm about to share with you is about our capacity to detect AI faces
10:21as they're looking now online.
10:24Go with your gut instinct and just keep on moving through.
10:46So you reckon he's real?
10:50That background is soft as all hell.
10:56The people who make the most errors on AI faces are actually the most confident.
11:04See that, that looks fake.
11:07That looks fake, doesn't it?
11:11Um, real.
11:15Um, real.
11:18I'm not a fan of AI.
11:20I never have been.
11:21I don't like looking at AI-generated content in my head.
11:25AI is still that video of Will Smith eating spaghetti
11:28and then he kind of turns into spaghetti and then he's eating it again
11:30and then the fork disappears into the bowl.
11:34That's what AI still is to me.
11:36Oh no, I just, I just don't know anymore.
11:40My head's hurting.
11:46Okay, we're done over here.
11:50What'd you get?
11:52Oh, this would be interesting.
11:53What did you get?
11:54Thirteen.
11:55Oh.
11:55How many?
11:56That's not good.
11:57Thirteen.
11:59I was really easily tricked by it.
12:00And that was when I was at my most vigilant.
12:04Okay, I know in my head all of the cues of an AI, the wonky collars, the uneven hair.
12:10You guys did better than me.
12:12And I think with that over-vigilance, once I was aware of it, I over-corrected and had
12:20the same result as if I didn't know anything.
12:23When they discovered how easily their own brains fell prey to some of this, there was a real humility in
12:29how they turned around and said,
12:31actually, it's not everybody else.
12:33I'm not immune to this because I'm young and I've grown up with this.
12:36It's me too.
12:37And this is going to be my future and I need to do better.
12:41My brain is so cooked after all of this.
12:46I have analyzed your data as a group.
12:48You were a perfect participant group in that you showed exactly what much larger samples from the community show.
12:56You were at chance.
12:59So overall, 51% accuracy where 50% is chance, you might as well have flipped a coin.
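The "might as well have flipped a coin" verdict can be made precise with an exact binomial test. A sketch: the 100-trial count below is an assumption for illustration to match the 51% figure, not the study's actual design.

```python
from math import comb

def binom_p_two_sided(k, n, p=0.5):
    # exact two-sided binomial test (minimum-likelihood method):
    # sum the probabilities of all outcomes no more likely than k
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    return sum(q for q in pmf if q <= pmf[k] * (1 + 1e-9))

# 51 correct out of an assumed 100 trials, tested against 50% chance
print(round(binom_p_two_sided(51, 100), 2))  # → 0.92
```

A p-value of 0.92 means 51% accuracy is statistically indistinguishable from guessing; by contrast, 90/100 would give a vanishingly small p-value.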
13:08What these algorithms are doing is looking for patterns and then regenerating those patterns.
13:14So they're tending more towards the average.
13:18And average faces we see as being more familiar, a little bit more trustworthy, a little bit more attractive.
13:26And these are things that are coming through in the AI faces now.
13:30So these are entirely convincing now.
13:33We can't spot them online.
13:39Over 24 months, AI-generated Will Smith has been getting much better at eating spaghetti.
13:47Mmm. I can't get enough of this.
13:50In fact, it's thought AI can double its ability to perform complex tasks every few months.
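That kind of growth compounds quickly. A back-of-the-envelope calculation, assuming (purely for illustration) a seven-month doubling period over the 24 months of spaghetti progress mentioned above:

```python
# If capability doubles every `doubling_months`, growth over a horizon
# is 2 ** (horizon / doubling). The 7-month figure is an assumption
# for illustration, not a measured value.
doubling_months = 7
horizon_months = 24
growth = 2 ** (horizon_months / doubling_months)
print(round(growth, 1))  # → 10.8, roughly an order of magnitude in two years
```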
13:56One of the sort of magical qualities of how generative AI works is it's not really thinking, it's mimicking.
14:04AI could rapidly flood our information environment with content generated by an AI that will look indistinguishable or even perform
14:14better than human-generated content.
14:17So we're being out-competed both in that AI-generated content is cheaper to produce, but second, that then people
14:24doubt and question the value of human-generated content and assume maybe it's made by a machine.
14:36My name is Joss and I am a video journalist, YouTuber.
14:41I make explainer videos about science and tech.
14:45Howtown is a new YouTube channel that I started with my colleague Adam Cole.
14:49Every episode answers a how do they know that question.
14:54Joss Fong knows how well AI can mimic humans.
14:58She's been forced to prove that she's not AI-generated.
15:02I'm working on a story about giant pterosaurs, which are flying reptiles that lived more than 60 million years ago.
15:09The only thing that it left behind is this pile of bones from its left wing.
15:15Okay, so we don't even have any part of the head here.
15:18No part of the head.
15:19So how do they know what it looked like or how it moves?
15:22I called up a bunch of paleontologists to find out how they reconstruct these animals.
15:26We've been publishing for about six months and we started to see the comments on YouTube that were accusing us
15:33of being AI-generated.
15:37You get a lot of comments on YouTube and it's hard to know how seriously to take any of them.
15:42But there was a trend and they'd say things like, are these people real?
15:47Or just like AI question mark.
15:50Or a lot of them said, this sounds just like an AI podcast.
15:55Then it all became clear.
15:59Notebook LM.
15:59Notebook LM, our nemesis.
16:04It's a product where you can upload documents and ask questions about them, which is cool.
16:13But they also had this feature where it would generate a podcast.
16:18To give you a more local example, I've asked Notebook LM to whip up a podcast about our longest running
16:27kids TV show.
16:33And within moments...
16:35We are diving into the essence of play school.
16:38Entertaining and engaging children.
16:40The fact that...
16:41A very human sounding conversation.
16:43Right.
16:44It proves the show has this fundamental structure that's just, well, it seems to be completely...
16:48Two generated voices, one male, one female sounding, that would have a conversation about your given topic.
16:55How does a kid's show manage this grand idea of exploring the world without, you know, huge budgets or fancy
17:02CGI or flying presenters everywhere?
17:04And it's very realistic.
17:06They breathe.
17:08Really important.
17:09And the source material is very specific.
17:11It lists, you know, they interrupt each other.
17:13They would kind of use the filler words that humans use.
17:17Or is that deliberate simplicity actually the secret weapon for survival?
17:22I mean, I'd argue the simplicity is absolutely core to its longevity.
17:25And so I think as more and more people started to use that app, especially students using it for studying,
17:33they started to associate that kind of conversation that is explanatory in nature.
17:38They associated that with AI.
17:40And that's a bummer for us, because when we started Howtown, we thought, okay, we know AI can do voiceover,
17:47and that's no problem.
17:49But at the time, AI didn't do conversation.
17:53Quetzalcoatlus, part of a group called the azhdarchids.
17:56Where does that come from?
17:58It feels sort of Tolkien-esque or something.
18:01I believe it is Persian.
18:03Adam and I thought we would add conversations into our videos as a way of showing that we're people.
18:09You're like, that's an azhdarchid humerus. I know it like the back of my hand.
18:13So if you scroll down on page 19.
18:16Yeah. One thing I would do is just not say vertical resting pose.
18:20Okay.
18:21I think you could just say its head was up like a giraffe instead of...
18:23Joss and Adam have had to convince their audience that they are real people.
18:28And having to do that has taken a toll.
18:31It tapped into some insecurities that I already had about how I present myself online.
18:35And so when I get these comments, it kind of...
18:39Part of me thinks, okay, well there's something unique to you that's causing them to accuse you of this.
18:44And not the other creators who are better at this.
18:49As AI becomes a better mimic of every form of human expression,
18:54artists, photographers, musicians are getting these accusations.
18:59And that sort of human connection that the internet made possible
19:04is now being interrupted by this growing sense that anything could be fake at any time.
19:10When really there are people who are very rigorous, who follow a process,
19:15who have editorial supervision and who get in trouble if they lie.
19:20And those are the people we need to stick with and we need to protect.
19:24And they're real. And I'm one of them.
19:32Just like social media, AI isn't necessarily engineered to cause harm.
19:37It's engineered to create profit.
19:42US tech companies are the most powerful businesses the world has ever seen.
19:50The most brilliant people are gathered around this table.
19:54This is definitely a high IQ group.
19:56Mark, you're building some buildings that are as large as Manhattan.
20:02That was pretty good, right?
20:04You know, all of the companies here are building,
20:06just making huge investments in the country in order to build out data centers
20:11and infrastructure to power the next wave of innovation.
20:16Collectively, these tech executives spent more on developing AI in 2025
20:21than the US government spent on education, jobs and social services combined.
20:27How much are you spending, would you say, over the next few years?
20:31Oh, gosh.
20:32I mean, I think it's probably going to be something like,
20:35I don't know, at least $600 billion through '28 in the US.
20:42Yeah.
20:43It's a lot.
20:44No, it's significant.
20:46It's a lot.
20:47Thank you, Mark.
20:48It's great to have you.
20:50These magnates control the platforms that shape our information ecosystems.
20:55The AI moment is one of the most transformative moments
20:58any of us have ever seen or will see in our lifetimes.
21:03They do so with minimal oversight or accountability.
21:07What AI is doing even today for education in so many other industries is really great.
21:11So thank you so much for enabling this.
21:13We will invest a ton in the United States
21:17and we will do our best to make sure that we continue to lead here.
21:20That's right.
21:21Hundreds of billions of dollars and it's going to be well worth it.
21:25And you have an unlimited market, right?
21:28It seems like it.
21:31With AI, the race is, well, if I don't do it, if I don't build AI and go as fast
21:36as possible
21:37and take as many shortcuts as possible, even if that causes risk in society,
21:41I'm just going to lose to the company or country that is willing to go as fast as possible
21:45and take as many risks as possible.
21:47And that's what's so dangerous about AI is it's like the ring from Lord of the Rings.
21:50If I get the ring first, then I have all the power and I'll be the good king, I promise.
21:56But that is not a good way to race the most powerful, inscrutable and uncontrollable technology you've ever invented.
22:05The country competing in the AI race with the US is China.
22:09We're leading China by a lot, by a really, by a great amount.
22:14The Chinese government has invested billions in the industry.
22:18And it's using what it develops to create state-sanctioned disinformation campaigns targeting its neighbours.
22:29Perched on China's southeastern coast is a small island called Kinmen.
22:38It's littered with the remnants of war.
22:44Although only three kilometres from China, it is Taiwanese territory.
22:50For decades, Taiwan spent periods defending this island from China.
23:01Taiwan's soldiers blasted cannonballs across the strait.
23:05And they broadcast propaganda from an enormous speaker directed at mainland China.
23:18The speaker extols the virtues of life beyond communism.
23:31The idea behind this was a psychological operation to try and spread democracy to the Chinese mainland.
23:38They suggested that members of the PLA, the People's Liberation Army, should come over here and try it.
23:48These days though, the speaker is a little outdated.
23:52The Chinese superpower is wielding one of the world's most pervasive propaganda tools, AI-generated memes.
23:59[AI-generated memes in Mandarin play, casting Taiwan as a child in the motherland's arms that needs to unify with China.]
24:42You probably heard talk of a possible global flashpoint over Taiwan,
24:47China seizing territory that, historically,
24:50Beijing says belongs to it.
24:53But what if, in this far more complex world,
24:56taking Taiwan doesn't involve landing on the beaches,
24:59storming territory.
25:01What if taking Taiwan happens in an entirely different space?
25:13This is Taipei, the capital of Taiwan.
25:16Most security experts agree
25:18this is where China is truly on the attack.
25:23Stay vigilant against online propaganda.
25:27That's the message from officials here in Taiwan.
25:30There are growing fears that Chinese social media apps
25:32are hosting content that could threaten Taiwan's national security.
25:37Civil organisations are launching into action
25:39to fortify the nation's digital literacy.
25:43Hi, TitiCat.
25:44Hi.
25:45I'm Hamish.
25:46Really nice to meet you.
25:47Yes.
25:47And you've got the chest ready to go?
25:50Yes.
25:51Wu Min-hsuan, also known as TitiCat,
25:54has co-founded Doublethink Lab.
25:56It's a group working to track and expose
25:58Chinese digital disinformation.
26:01By the way, why TitiCat?
26:03I had a cat.
26:04Her name was Titi.
26:05OK.
26:06So I've been using this alias for a very long time.
26:09OK.
26:10It's my online handles, everything.
26:12So what is cognitive warfare for people living in Taiwan?
26:17Create more chaos, polarization,
26:19and eventually lead to we choose a more friendly attitude
26:24toward China government leaders.
26:28Because you're thinking about war, right?
26:30If you are the aggressor, how you actually win a war?
26:35There's no way you can kill everybody in the island.
26:39Taiwan has 23 million people.
26:42There's no way you can eliminate everybody here.
26:46So the only way you win a war is we surrender.
26:49The way to make us surrender is when we believe
26:53there's no chance we can win,
26:55or there's no reason we should fight or resist at all.
27:01China's AI-generated disinformation campaign
27:04particularly targets Taiwan's democratic leaders
27:08and drives the narrative that, inevitably,
27:11Taiwan will be absorbed by China.
27:14People of all ages need to be wary
27:15that when they're consuming content online,
27:17if there is a video that makes you feel
27:20particularly strong emotions,
27:22that you should maybe be online to the fact
27:25that that video could be trying to manipulate you
27:28in one way or another.
27:32The dominant narrative is that the U.S. will not help you.
27:37Nobody cares about you.
27:38You're a small state.
27:40They put this like a small state in your mind
27:43that you have no chance to survive by yourself.
27:47You can only rely on a big power.
27:50And the U.S. is not trustworthy.
27:52They are hypocrites.
27:53And they do a lot of evil things, right?
27:56So you can only rely on China.
27:58And China will be very friendly to you
28:00because we are family.
28:02So this is the main narrative.
28:05So does that mean that China
28:07is preconditioning you for surrender?
28:09I think so, yes.
28:11A lot of young people actually,
28:13if we see the polling,
28:15there's maybe 20% or 30% of young people
28:18doesn't really see through these lies.
28:20Are you scared?
28:20I'm very scared.
28:23Yeah, especially like generative AI, right?
28:26It will also going to change how people interact
28:29with social media or interact with news.
28:35The Taiwanese are resilient.
28:39Their island has been labelled
28:40one of the most dangerous places on Earth.
28:44It's buffeted by earthquakes and typhoons.
28:49China frequently runs combat drills
28:52just kilometres off its coastline.
28:54And the information war is relentless.
28:59We always look at these foreign adversarial attacks
29:03as well as our natural disasters
29:05as ways for conflict to not turn into explosion,
29:11but rather see them like fire on the ground,
29:13not to be put out,
29:14but rather as energy sources to harness into co-creation.
29:19Audrey Tang is a tech genius
29:21who went from hacking into government sites
29:24to make them more user-friendly
29:25to becoming the world's first digital minister.
29:30In 1981, when Audrey Tang was born here in Taiwan,
29:34the internet was still very much in its infancy
29:36and like most kids born around that time,
29:38the pair kind of grew up together.
29:41At first, the internet was shaping Audrey,
29:43but today, Audrey is trying to leave her mark on the internet.
29:52I think we live in a time
29:55when people's interaction with internet
29:58is being shaken very quickly now
30:01with the rise of AI.
30:04Almost three-quarters of the time,
30:06people think that AI is more human than human.
30:09And so it becomes very easy then
30:12to orchestrate entire villages of fake people
30:17that all look very real,
30:19shaping the political opinion in a very insidious way.
30:24But every time something attacks the society,
30:27the society needs to very quickly make sense of it
30:30and determine what to do with it.
30:32I think this is a classic case
30:35of the people's collective wisdom
30:37is actually much better than the politicians' instinct.
30:43Audrey Tang has pioneered all of these new ways
30:46of integrating AI and technology
30:48into strengthening democracy.
30:51Using AI to take citizens deliberating on a topic
30:54that's contentious,
30:56having an AI listen to that conversation,
30:58automatically transcribe it,
31:00maybe even facilitate the conversation,
31:02then find the areas of unlikely consensus,
31:04and rapidly do what would have taken
31:06lots of human labor to figure out.
31:09We send text message,
31:11SMS to random numbers around Taiwan,
31:14asking what should we do
31:16about the deep fake advertisement online.
31:19And we chose 450 people.
31:22And those 450 people in rooms of 10,
31:26facilitated by AI system.
31:28It can summarize in real time
31:30what each room of people have agreed on.
31:34One room may say,
31:35Facebook earns advertisement money from those scammers,
31:39so Facebook should be liable.
31:41And maybe another room say,
31:43TikTok, buy dance.
31:44We should slow down connection to their servers
31:46until all their business goes to Google.
31:49All these ideas become a draft law in May,
31:52and it passed last July.
31:54You simply do not see those sponsored fake advertisement anymore
31:59because the people through citizen assembly
32:01set the rules that govern the AI's deep fake harm.
32:06And so I think we can use AI
32:09to address the harms of AI to democracy
32:12by using AI to foster democracy.
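The consensus-finding step Audrey describes — rooms deliberate in parallel, and the proposals every room independently supports bubble up into the draft law — can be sketched minimally. The proposals and support numbers below are invented for illustration, not data from the Taiwanese assembly.

```python
# Each room of participants rates proposals (support fraction per room).
# We surface proposals whose *minimum* support across rooms clears the
# bar, i.e. agreement that holds in every room, not just on average.
# All proposals and numbers here are invented for illustration.
room_support = {
    "platforms liable for scam ads": [0.9, 0.8, 0.85],
    "slow traffic to non-compliant apps": [0.7, 0.4, 0.6],
    "mandatory labels on AI ads": [0.95, 0.9, 0.88],
}

def unlikely_consensus(room_support, threshold=0.75):
    return sorted(p for p, votes in room_support.items()
                  if min(votes) >= threshold)

print(unlikely_consensus(room_support))
# → ['mandatory labels on AI ads', 'platforms liable for scam ads']
```

Using the minimum rather than the mean is the design point: a proposal that one room strongly rejects never counts as consensus, however popular it is elsewhere.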
32:26I would love to see places like Australia engage and experiment
32:30with these kinds of ideas.
32:32Using technology to facilitate consultation, engagement,
32:36a greater sense of democracy,
32:37that I think is the antidote to the centralisation of power
32:41within the technology industry.
32:44Democracy thrives on participation,
32:46and digital technology offers one of the best ways to boost that.
32:51Audrey's ideas about digital democracy
32:54have gained traction around the world.
32:59At home in Taiwan, she offers hope and a sense of agency.
33:04It doesn't take all that long in Taiwan
33:06to be deeply impressed by the place.
33:08I'm not talking just about the trains.
33:10They're fantastic.
33:11But this place is ambitious.
33:14They are politically engaged and tech savvy.
33:17And it's all the more remarkable
33:18when you think about the fact
33:20that this place emerged from martial law.
33:22Maybe even it's because of that.
33:24They know democracy is fragile and hard-won.
33:37Ms Chen is a retired teacher
33:39who lives on the top floor of an apartment block
33:42in central Taipei.
33:45That's very beautiful, the scenery.
33:47Nice garden.
33:48Now, in her early 60s,
33:49she's old enough to have lived under martial law in Taiwan
33:52when new political parties, human rights,
33:55even free speech were banned here.
33:58Wow, this is spectacular.
34:00Yeah.
34:02Around the Taipei city is mountains.
34:04Yeah.
34:04So many mountains.
34:05What do you love about this city?
34:07We are so free and so happy.
34:10How to keep Taiwan's freedom?
34:13So, um, how do you feel about China?
34:19Invade Taiwan. Anytime.
34:21You're worried that will happen?
34:24Yes.
34:24Of course.
34:25And so you're going to classes to prepare?
34:29Yes, yes, yes.
34:30Why?
34:31Because just protect my family,
34:35protect my country.
34:46Ms Chen has signed up for a full day's workshop
34:49at the Kuma Academy.
34:52It's a not-for-profit,
34:53teaching citizens defence strategies
34:55in case there's an invasion.
35:05They begin with first aid lessons.
35:09The students learn how to tie a tourniquet
35:11if a limb gets blown off.
35:24Then they learn what to do in case of broad-scale aerial attacks.
35:31But this course puts just as much emphasis
35:33on the impact of cognitive warfare.
35:39They teach students how to spot the difference
35:41between a real person
35:42and an AI-generated influencer
35:45whose very realistic content
35:47may be flooding their feeds.
35:59This is it.
36:01The Fact Check Centre.
36:02Yes.
36:03The biggest fact-checking initiative in Taiwan.
36:06OK.
36:07TitiCat and his collaborators
36:09are using AI to mine social media
36:12for mis- and disinformation.
36:14So basically we are collecting the posts
36:17on social media platforms.
36:19The volume of data is very, very high
36:22because we set up a rather large amount
36:25of keywords to collect.
36:27Their AI tool scans huge amounts of data
36:30to identify patterns in language and content,
36:34like influencers, both Chinese and Taiwanese,
36:37who follow similar scripts
36:39and use the same visuals,
36:41in what appears to be
36:42an orchestrated disinformation campaign.
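One simple way to surface "same script, many accounts" patterns like those described above is to fingerprint each post with overlapping word windows (shingles) and flag account pairs whose overlap is far higher than chance. A minimal sketch; the posts and threshold are invented for illustration, not the centre's actual method or data.

```python
def shingles(text, k=3):
    # k-word shingles: overlapping word windows that fingerprint a script
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    # fraction of shared fingerprints between two posts
    return len(a & b) / len(a | b)

# Invented example posts standing in for the collected social-media data
posts = {
    "acct_a": "the US will never help a small island like Taiwan",
    "acct_b": "the US will never help a small island nation like Taiwan",
    "acct_c": "great weather at the night market tonight",
}

def flag_coordinated(posts, threshold=0.5):
    # account pairs whose scripts overlap beyond the threshold
    names = sorted(posts)
    sh = {n: shingles(posts[n]) for n in names}
    return [(x, y) for i, x in enumerate(names) for y in names[i + 1:]
            if jaccard(sh[x], sh[y]) >= threshold]

print(flag_coordinated(posts))  # → [('acct_a', 'acct_b')]
```

The two near-identical posts are flagged as a pair while the unrelated one is not; at scale, clusters of such pairs are what make a campaign look orchestrated rather than organic.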
36:45How have you designed this technology?
36:47It's picking up when there's a trend,
36:49when something is just being
36:51artificially blown up as an issue.
36:54Yes.
36:55They identify a stream of false narratives
36:57on social media,
36:59designed to reduce confidence
37:00in Taiwan's leadership
37:02and their ally, the US.
37:09So this is an example of using AI
37:12to actually hang on to facts?
37:15Yes.
37:16If you fact-check something,
37:18what then travels faster,
37:20the lie or the fact?
37:21Of course the lie.
37:23Those rumours go faster than facts.
37:27If we have done the fact-checking,
37:28we need to promote our fact-checking results to people.
37:32We try to disseminate those fact-check reports
37:35through all the social media
37:37where we collect those disinformation and rumours.
37:41So do you think you can win that race?
37:43Don't know, but yeah,
37:45if nobody do it,
37:47there's darkness,
37:50there is a candle,
37:51a small light,
37:53therefore people know what's light,
37:56what's darkness, right?
37:57But if there's no light,
37:59so darkness will be the normal,
38:01nothing else.
38:03So that's what we believe
38:05and yeah, what we're doing.
38:11For now,
38:12Taiwanese support for unification with China sits below 20%.
38:16The majority of people want to maintain the independence they have
38:20or build on that further.
38:22Ms Chen is determined to preserve that majority.
38:26She will share what she's learned at Kuma Academy
38:29so that friends and family are more alert
38:31to targeted disinformation.
38:40Unlike the European Union,
38:42the Australian government has chosen not to introduce AI-specific legislation.
38:47Yeah, give it a diva moment, diva moment.
38:49This means more economic opportunities for new tech,
38:52but not mandatory guardrails.
38:55I think we're all just kind of used to being taken advantage of
38:58by big tech companies that it's like,
39:00well, yeah, of course this would happen.
39:02This is perfectly feasible.
39:03Before?
39:05After!
39:06Whoa!
39:06That doesn't look like me at all.
39:08That's so weird.
39:09The magic of AI.
39:11Hey!
39:11You look so respectable.
39:14I am giving AI a wide berth
39:17and I'm realising I can't afford to do that.
39:23I'm sorry.
39:24It looks very AI, doesn't it?
39:27Wait a minute, this is not what I think it is.
39:30You know, this is not as cool or funny or engaging as it actually should be
39:36because it's actually not real.
39:37We're living in a world that is, in technological terms,
39:41advancing at an exponential rate
39:43and is surprising even the developers in terms of that rate.
39:49What I think is interesting about Australia specifically
39:51is that we rate as nearly the lowest on most questions globally
39:57about optimism in relation to AI.
40:04So Australians are not optimistic;
40:07they don't feel inspired by it.
40:10I think we might have AI smarter than any single human at anything
40:13as soon as next year.
40:16And then probably within, like say 2030,
40:19probably AI is smarter than the sum of all humans.
40:22Certainly the hype that comes out of the AI industry
40:25is focused on inevitability.
40:26They've invested hundreds of billions of dollars into this industry
40:29and they want to seek a return.
40:32One of the ways they do that is they create a narrative
40:35that this technology is inherently valuable
40:37and will generate productivity gains
40:40even when the data suggests that might not be true.
40:46Australia is in the global spotlight,
40:49having introduced a world-first ban
40:51on the creation of social media accounts for under-16s.
40:55But while Australia pursues a hard line on this,
40:59lawyer Lizzie O'Shea wants to know
41:01why the approach to AI is more passive.
41:0483% of Australians would be more prepared
41:06to use AI products if they felt there were guarantees
41:10around things like safety and standards.
41:14I think there's currently more regulation
41:16on producing a sandwich
41:17than there is on producing world-ending
41:20artificial general intelligence.
41:23Lizzie has founded Digital Rights Watch
41:25to defend Australians from data mining,
41:28surveillance and digital disinformation.
41:31She's also busy urging Australians to speak up
41:35about how AI will change our world.
41:38I really want to start a conversation with anyone
41:41who is interested in the topic about how we can make AI fairer,
41:45how we can make sure that we have a balance
41:48for the concentration of power in the technology industry
41:51that is currently dictating how AI will be developed and deployed.
41:55How many people are using AI regularly in their work or study now?
42:01I would put that at two-thirds.
42:03Yeah.
42:04Rather than just accepting AI products
42:06as part of our world,
42:08Lizzie suggests we interrogate their purpose.
42:10Yeah, well, I'm going around asking anybody
42:12if they can give me a good use case for Sora
42:14because I don't know what the purpose of that is.
42:17It's like a fully synthetic social media platform
42:19to generate deep fake videos.
42:22I think there are a lot of nefarious purposes I can think of,
42:25a lot of maybe marginally...
42:27We have certain sets of regulations
42:28around what is expected of consumer products,
42:31including that they don't harm people
42:33and that they're developed with a certain kind of trustworthiness
42:37in how they're put on the market.
42:40Here in Australia, we've come to be very much a nation on wheels.
42:45When cars started being sold en masse to the public,
42:48there was an assumption that drivers were responsible
42:50for all mishaps and accidents that occurred in vehicles.
42:54We just make the product, there's nothing we can do.
42:57It's their responsibility if something goes wrong.
43:01People were harmed en masse by vehicles that were not designed well,
43:06even though the companies manufacturing them knew that to be true.
43:14We don't take that approach to cars anymore.
43:16We now have safety features.
43:18We have seat belts.
43:19We have airbags.
43:20Those were not introduced because the car industry thought it was a good idea.
43:24They had to be cajoled into doing it.
43:29We need to take an approach to regulation
43:31that enshrines what we think is fair and reasonable as a society
43:36and then require companies to comply.
43:39So I think there is a clear agenda by large tech companies
43:43to experiment on users,
43:45to try and find a way to make these products profitable.
43:47And we need to think very seriously about the implications of systems
43:50that can't be trusted to answer basic questions.
43:54When government talks about the need to embrace the opportunities of AI,
43:58the way they ought to do that is by giving Australians confidence
44:01that they have done the work to protect them.
44:03My mission is to create a movement of people
44:05that can start to take power back
44:07and to ensure that elected representatives
44:10who want to do the right thing are supported.
44:12There is nothing inevitable about the future.
44:15The future is ours to shape
44:16and the best way we can do that
44:18is by being active, engaged in our democracy.
44:21It is not a spectator sport.
44:23Please join us and help us to build this movement.
44:33Ask yourself the question,
44:35is AI actually inevitable?
44:38Like, just, I know that's not true right now,
44:41but just imagine for a moment
44:42that literally no one on planet Earth wanted this to happen.
44:46Would the laws of physics force AI into existence?
44:50And the answer is no.
44:57As we all head off into this great technological unknown,
45:00it may be that we come to look back on this moment
45:03and see that we do have an opportunity to act,
45:06to shape what happens next.
45:08For me, already, this has been a mind-bending adventure,
45:12trying to figure out what the hell happened to all of the facts.
45:16Along the way, I have realised something,
45:18that facts alone are not going to solve these huge challenges
45:22we all face as humans.
45:24But without facts,
45:26we don't have a hope of solving those problems.
45:29So perhaps, after all, facts are worth fighting for.
45:33Facts do matter.