The Matter Of Facts S01E03

Transcript
00:03Hi, I'm Hamish MacDonald and I'm a journalist, which means facts are my business.
00:09Where do you get your information from?
00:11Social media.
00:12Facebook.
00:13Facebook, Instagram and YouTube.
00:15Social media is where so many of us now get our information.
00:20But can we rely on it when tech makes it so easy to create a fictional story, which can
00:28look and sound real?
00:29I went to the moon last week, don't you know?
00:31Even though it's entirely fake.
00:35Are you like so many people now, struggling to know what's true and what's false?
00:41Big tech is shifting our relationship with facts.
00:45We're going through a period of radical change in the way that people consume information.
00:49We've created this monster.
00:52An algorithm is outsourcing the human choice into a machine's decision.
00:58The end goal is keep you scrolling.
00:59Hack our biology, change the way we feel, to change the way we see the world.
01:04Is anyone else paying attention to how all of this is affecting society?
01:10Over the course of this series, I've explored how social media is transforming the way we communicate.
01:16It's part of the propaganda.
01:18It's a biased content production.
01:20But we're about to take another technological leap.
01:23It took 68 seconds to replace his face.
01:27As AI makes everything believable and nothing certain.
01:32Anything could be fake at any time.
01:35So the experiment is detect AI faces.
01:39I was really easily tricked by it.
01:41You don't want to sleepwalk into a future that nobody wants.
01:44Are you scared?
01:45I'm very scared.
01:47Join me on a journey right around Australia and the world to meet people determined to keep facts alive.
01:54In this real-time information revolution.
02:10Social media has changed how we communicate.
02:14Algorithms have sent us into information echo chambers.
02:20Do you ever have those moments where you feel just a bit overwhelmed from all the information
02:24coming at you?
02:25You know, the light, the noise, even the speed of it all?
02:29Well, I don't want to alarm you, but it's probably going to get worse.
02:32You see, AI is here.
02:35And it's revolutionising everything.
02:44Including our perception of who and what is real.
02:56Already, almost half our population is playing with generative AI.
03:03Even though no one knows where it might take us.
03:09One of the confusing things about AI is there's so many different kinds of risks that are all
03:14connected in a way that's even more profound than social media.
03:18Social media is like a baby AI.
03:22AI, if it really automates human labour, will rapidly automate billions of people's jobs.
03:29The actual mission statement is to automate every form of human cognitive labour in the economy.
03:36To be able to do every job better than a human can do.
03:40AI is a computer system that can do tasks otherwise requiring human intelligence.
03:46It's working for us every day in phones, cars, appliances.
03:55It's also data crunching.
03:58Picking strawberries.
04:02And diagnosing diseases.
04:04But AI is particularly good at work that involves distorting reality.
04:13As AI technology improves, it's getting harder for all of us, including experts,
04:19to detect what's human versus machine built.
04:26And am I looking in the same place each time?
04:28Same place every time.
04:29And don't you dare smile.
04:30Okay.
04:31Stuart White has spent decades perfecting his craft as a special effects artist.
04:36I'm going to walk an arc around the front here, take about 50 photos all one after the other.
04:41The whole process always starts with scanning somebody and turning them into a 3D model.
04:46Just do what the face is doing.
04:49Creating a three-dimensional version of my head is a painstaking process, requiring thousands of images
04:55and months of work. Wide eyes, surprise mouth, funnel lips, right up.
05:04There's a lot of muscles in the face.
05:06And you're done.
05:07I'm done.
05:07Yeah.
05:08I never need to work again.
05:11The robots can do the rest.
05:14Now Stu's got the full picture of my face,
05:17he needs a body he can connect it to.
05:29Highland dancer Tristan has been perfecting his art since he was six.
05:42He's now a national champion.
05:47First, 669.
05:52And with Stu's help, I can be too.
05:58Rather than take his usual time-consuming 3D modelling approach,
06:03Stu is about to experiment with an AI-powered shortcut.
06:09So this is the software called Face Fusion.
06:12It costs about $1.50 an hour to rent a computer powerful enough to do this, so not much.
06:19And basically what it's asking for here is a still frame of the person whose face you want to use.
06:26So I'm going to find a fairly frontal picture of Hamish, drop it in there.
06:32We've got one of the shots from the dancing competition here.
06:36And hopefully, pretty much straight away, it'll start trying to get to work.
06:40It happens in basically about a minute or less, which is astounding to me because historically,
06:48this took six months with traditional techniques.
06:53And yeah, here we are, we have a result already.
06:57Stu is watching a machine supersede his job.
07:00It took 68 seconds to replace his face.
07:04It's done a pretty terrific job.
07:06And people who had never met Hamish could probably watch the end resulting clips and believe that that
07:13person was doing that dance.
07:15So what about someone who has met me?
07:18In fact, who knows my face better than anyone?
07:21All right, Mum, do you want to take a seat?
07:23I will.
07:23I've invited my mum, Carol, into work for the afternoon.
07:26To my screening room.
07:27I've told her I've got something special to show her.
07:30So, you know, I've been very busy.
07:32Yes, always busy.
07:34I've been working on this as well as other things.
07:37But I've also been learning some new skills.
07:39All right.
07:40You know, I like to do things well.
07:42Yes, yes.
07:43I've actually done quite well.
07:45Oh, for me.
07:46What?
07:48Do you want to have a look?
07:50I'll put my glasses on.
07:52OK.
07:53You ready?
07:53Yeah, ready.
08:12Oh, that's gorgeous.
08:25What?
08:25First, 669.
08:27You're joking.
08:32Very clever.
08:33You do things perfectly, always.
08:35I didn't realise you had that skill, did you?
08:38Not really.
08:39I was a little bit surprised.
08:41Oh, my goodness.
08:42That was fantastic.
08:44Well, Dad would have loved it too.
08:46Yeah, Dad would have been very proud.
08:47Yeah.
08:49How did you learn it?
08:51Was there anything about it that looks weird to you?
08:57No.
08:58OK, kind of scary.
09:00AI can trick even my own mum.
09:03What if I told you it's not really me?
09:07Well, some of it's you, but some of it's not.
09:09None of it's me.
09:10None of it.
09:14Well, I thought it was you when I got going, but then I thought it doesn't look quite like
09:20you, your face.
09:23And also the perfect arabesque.
09:29Do you look out for AI?
09:31Do you think you spot it?
09:31Yeah, well, I do look at things that are AI, and I do spot them, usually.
09:36Oh, my gosh, it has rattled me.
09:40It's very clever.
09:43Love you.
09:43Love you too.
09:51AI is duping us, but Australians aren't blind to its dangers.
09:56Only 30% of us think the benefits of this tech outweigh the risks.
10:01We've recruited 10 participants and two cognitive psychologists
10:06to understand how we're adjusting to AI-generated content.
10:12So the experiment that I'm about to share with you is about our capacity to detect
10:19AI faces as they're looking now online.
10:24Go with your gut instinct and just keep on moving through.
10:46So you reckon he's real?
10:49That background is all hell.
10:56The people who make the most errors on AI faces are actually the most confident.
11:18I'm not a fan of AI. I never have been. I don't like looking at AI-generated content.
11:24In my head, AI is still that video of Will Smith eating spaghetti, and then he kind of turns into spaghetti,
11:29and then he's eating it again, then the fork disappears into the bowl.
11:33That's what AI still is to me.
11:36I don't know. I just don't know anymore. My head's hurting.
11:47OK, we're done over here.
11:50What did you get?
11:52Oh, this will be interesting.
11:53What did you get?
11:5413.
11:5513.
11:55Oh.
11:56How many?
11:56That's not good.
11:5713.
11:59I was really easily tricked by it, and that was when I was at my most vigilant, most,
12:04OK, I know in my head all of the cues of an AI, the wonky collars, the uneven hair.
12:10You guys did better than me.
12:12I think that over-vigilance of once I'm aware of it, I over-corrected and had the same result
12:21as if I didn't know anything.
12:23When they discovered how easily their own brains fell prey to some of this,
12:28there was a real humility in how they turned around and said,
12:31actually, it's not everybody else. I'm not immune to this because I'm young and I've grown up with this.
12:36It's me too, and this is going to be my future and I need to do better.
12:40Yeah. My brain is so cooked after all of this.
12:45I have analysed your data as a group.
12:49You were a perfect participant group in that you showed exactly what much larger samples from
12:55the community show. You were at chance. So overall, 51% accuracy, where 50% is chance,
13:03you might as well have flipped a coin.
13:08What these algorithms are doing is looking for patterns and then regenerating those patterns.
13:14So they're tending more towards the average. And average faces we see as being more familiar,
13:22a little bit more trustworthy, a little bit more attractive. And these are things that are coming
13:28through in the AI faces now. So these are entirely convincing now. We can't spot them online.
13:39Over 24 months, AI-generated Will Smith has been getting much better at eating spaghetti.
13:47Mmm, I can't get enough of this.
13:50In fact, it's thought AI can double its ability to perform complex tasks every few months.
13:57One of the sort of magical qualities of how generative AI works is it's not really thinking,
14:02it's mimicking. AI could rapidly flood our information environment with content generated by an AI that
14:12will look indistinguishable or even perform better than human-generated content.
14:17So we're being out-competed both in that AI-generated content is cheaper to produce,
14:23but second, that then people doubt and question the value of human-generated content and assume
14:28maybe it's made by a machine.
14:36My name is Joss and I am a video journalist, YouTuber. I make explainer videos about science and tech.
14:44Howtown is a new YouTube channel that I started with my colleague Adam Cole.
14:49Every episode answers a how do they know that question.
14:54Joss Fong knows how well AI can mimic humans. She's been forced to prove that she's not AI-generated.
15:01I'm working on a story about giant pterosaurs, which are flying reptiles that lived more than 60 million years ago.
15:09The only thing that it left behind is this pile of bones from its left wing.
15:15Okay, so we don't even have any part of the head here.
15:18No part of the head. So how do they know what it looked like or how it moved?
15:22I called up a bunch of paleontologists to find out how they reconstruct these animals.
15:26We've been publishing for about six months and we started to see the comments on YouTube
15:31that were accusing us of being AI-generated.
15:37You get a lot of comments on YouTube and it's hard to know how seriously to take any of them,
15:42but there was a trend and they'd say things like,
15:46are these people real? Or just like AI question mark.
15:50Or a lot of them said, um, this sounds just like an AI podcast.
15:55Then it all became clear. Notebook LM.
15:59Notebook LM, our nemesis.
16:04It's a product where you can upload documents and ask questions about them, which is cool.
16:13But they also had this feature where it would generate a podcast.
16:18To give you a more local example,
16:23I've asked Notebook LM to whip up a podcast about our longest running kids TV show.
16:33And within moments,
16:35we are diving into the essence of play school, entertaining and engaging children.
16:40A very human sounding conversation.
16:43Right.
16:44It proves the show has this fundamental structure that's just,
16:46well, it seems to be completely...
16:48Two generated voices, one male, one female sounding,
16:52that would have a conversation about your given topic.
16:55How does a kid's show manage this grand idea of exploring the world without,
16:59you know, huge budgets or fancy CGI or flying presenters everywhere?
17:04And it's very realistic.
17:06They breathe.
17:08Really important.
17:09And the source material is very specific.
17:11It lists, you know, they interrupt each other.
17:13They would kind of use the filler words that humans use.
17:17Or is that deliberate simplicity actually the secret weapon for survival?
17:22I mean, I'd argue the simplicity is absolutely core to its longevity.
17:25And so I think as more and more people started to use that app,
17:30especially students using it for studying,
17:33they started to associate that kind of conversation that is explanatory in nature.
17:38They associated that with AI.
17:41And that's a bummer for us, because when we started Howtown, we thought, okay, we know AI can do voiceover,
17:47and that's no problem.
17:49But at the time, AI didn't do conversation.
17:53Quetzalcoatlus, part of a group called the azhdarchids.
17:57Where does that come from?
17:58It feels sort of Tolkien-esque or something.
18:01I believe it is Persian.
18:03Adam and I thought we would add conversations into our videos as a way of showing that we're people.
18:09You're like, that's an azhdarchid humerus.
18:10I know it.
18:11Like the back of my hand.
18:13So if you scroll down on page 19.
18:16Yeah.
18:17One thing I would do is just not say vertical resting pose.
18:20Okay.
18:21I think you could just say, its head was up like a giraffe.
18:23Joss and Adam have had to convince their audience that they are real people.
18:28And having to do that has taken a toll.
18:31It tapped into some insecurities that I already had about how I present myself online.
18:36And so when I get these comments, it kind of, part of me thinks, okay, well,
18:40there's something unique to you that's causing them to accuse you of this,
18:44and not the other creators who are better at this.
18:49As AI becomes a better mimic of every form of human expression,
18:54artists, photographers, musicians are getting these accusations.
18:59And that sort of human connection that the internet made possible is now being interrupted
19:05by this growing sense that anything could be fake at any time.
19:10When really, there are people who are very rigorous, who follow a process,
19:15who have editorial supervision, and who get in trouble if they lie.
19:19And those are the people we need to stick with.
19:22And we need to protect.
19:24And they're real.
19:25And I'm one of them.
19:32Just like social media, AI isn't necessarily engineered to cause harm.
19:37It's engineered to create profit.
19:43US tech companies are the most powerful businesses the world has ever seen.
19:50The most brilliant people are gathered around this table.
19:54This is definitely a high IQ group.
19:56Mark, you're building some buildings that are as large as Manhattan.
20:02That works pretty good, right?
20:04You know, all of the companies here are building,
20:07just making huge investments in the country in order to build out data centers and infrastructure
20:12to power the next wave of innovation.
20:16Collectively, these tech executives spent more on developing AI in 2025 than the US government
20:22spent on education, jobs and social services combined.
20:27How much are you spending, would you say, over the next few years?
20:31Oh, gosh.
20:32Um, I mean, I think it's probably going to be something like,
20:35I don't know, at least $600 billion through '28 in the US.
20:42Yeah.
20:43It's a lot.
20:44No, it's significant.
20:46It's a lot.
20:47Thank you, Mark.
20:48It's great to have you.
20:50These magnates control the platforms that shape our information ecosystems.
20:55The AI moment is one of the most transformative moments any of us have ever seen or will see in
21:01our lifetimes.
21:03They do so with minimal oversight or accountability.
21:07What AI is doing even today for education and so many other industries is really great.
21:12So thank you so much for enabling this.
21:13We will invest a ton in the United States and we will do our best to make sure that we
21:19continue to lead here.
21:20That's right.
21:21Hundreds of billions of dollars and it's going to be well worth it.
21:25And you have an unlimited market, right?
21:28It seems like it.
21:31With AI, the race is, well, if I don't do it, if I don't build AI and go as fast
21:36as possible and take
21:37as many shortcuts as possible, even if that causes risk in society, I'm just going to lose to the
21:42company or country that is willing to go as fast as possible and take as many risks as possible.
21:47And that's what's so dangerous about AI is it's like the ring from Lord of the Rings.
21:51If I get the ring first, then I have all the power and I'll be the good king, I promise.
21:56But that is not a good way to raise the most powerful, inscrutable and uncontrollable technology
22:03you've ever invented.
22:04The country competing in the AI race with the US is China.
22:09We're leading China by a lot, by a really, by a great amount.
22:14The Chinese government has invested billions in the industry and it's using what it develops
22:20to create state-sanctioned disinformation campaigns targeting its neighbors.
22:30Perched on China's south-eastern coast is a small island called Kinmen.
22:38It's littered with the remnants of war.
22:43Although only three kilometres from China, it is Taiwanese territory.
22:50For decades, Taiwan spent periods defending this island from China.
23:01Taiwanese soldiers blasted cannonballs across the strait.
23:05They broadcast propaganda from an enormous speaker directed at mainland China.
23:18The speaker extols the virtues of life beyond communism.
23:31The idea behind this was a psychological operation to try and spread democracy to the Chinese mainland.
23:38They suggested that members of the PLA, the People's Liberation Army, should come over here and try it.
23:48These days though, the speaker is a little outdated.
23:52The Chinese superpower is wielding one of the world's most pervasive propaganda tools, AI-generated memes.
23:59Kids singing in the mix.
24:22Modern-day China has never ruled Taiwan,
24:26but considers it a breakaway province that needs to unify with the Chinese motherland.
24:34Taiwan is the divine power of China.
24:37No one can't stop the cars of history.
24:43You've probably heard talk of a possible global flashpoint over Taiwan.
24:47China seizing territory that historically, Beijing says belongs to it.
24:52But what if in this far more complex world, taking Taiwan doesn't involve landing on the beaches storming territory?
25:01What if taking Taiwan happens in an entirely different space?
25:13This is Taipei, the capital of Taiwan.
25:16Most security experts agree this is where China is truly on the attack.
25:23Stay vigilant against online propaganda.
25:26That's the message from officials here in Taiwan.
25:29There are growing fears that Chinese social media apps are hosting content that could threaten Taiwan's national security.
25:37Civil organisations are launching into action to fortify the nation's digital literacy.
25:43Hi, Titi Cat.
25:44Hi.
25:45I'm Hamish.
25:46Really nice to meet you.
25:47Nice to meet you.
25:47Yes.
25:47And you've got the chess ready to go.
25:50Yes.
25:51Min Hsuan Wu, also known as Titi Cat, has co-founded Doublethink Lab.
25:56It's a group working to track and expose Chinese digital disinformation.
26:01By the way, why Titi Cat?
26:03I had a cat.
26:04Okay.
26:04Her name was Titi.
26:05Okay.
26:06So I've been using this alias for a very long time.
26:09Okay.
26:10It's my online handles, everything.
26:12So what is cognitive warfare for people living in Taiwan?
26:17Create more chaos, polarization, and eventually lead to we choose a more friendly attitude toward China government leaders.
26:27Because you're thinking about war, right?
26:30If you are the aggressor, how you actually win a war?
26:35There's no way you can kill everybody in the island.
26:40Taiwan has 23 million people.
26:42There's no way you can eliminate everybody here.
26:46So the only way you win a war is we surrender.
26:50The way to make us surrender is when we believe there's no chance we can win.
26:55Or there's no reason we should fight or resist at all.
27:01China's AI-generated disinformation campaign particularly targets Taiwan's democratic leaders.
27:08And drives the narrative that, inevitably, Taiwan will be absorbed by China.
27:13People of all ages need to be wary that when they're consuming content online,
27:17if there is a video that makes you feel particularly strong emotions,
27:22you should maybe be alive to the fact that that video could be trying to manipulate you in one
27:28way or another.
27:32The dominated narrative is that the U.S. will not help you.
27:37Nobody cares about you.
27:38You're a small state.
27:40They put this like a small state in your mind that you have no chance to survive by yourself.
27:47You can only react on a big power.
27:50And the U.S. is not trustworthy.
27:52They are hypocrites.
27:53And they do a lot of evil things, right?
27:56So you can only react on China.
27:58And China will be very friendly to you because we are family.
28:02So this is the main narrative.
28:05So does that mean that China is preconditioning you for surrender?
28:09I think so, yes.
28:11A lot of young people actually, if we see the polling,
28:15there's maybe 20% or 30% of young people doesn't really see through these lies.
28:20Are you scared?
28:21I'm very scared.
28:23Yeah, especially like generative AI, right?
28:27It will also going to change how people interact with social media or interact with news.
28:35The Taiwanese are resilient.
28:39Their island has been labelled one of the most dangerous places on Earth.
28:44Buffeted by earthquakes and typhoons.
28:49China frequently runs combat drills just kilometres off its coastline.
28:54And the information war is relentless.
28:59We always look at these foreign adversarial attacks,
29:03as well as our natural disasters,
29:06as ways for conflict to not turn into explosion,
29:11but rather see them like fire on the ground, not to be put out,
29:14but rather as energy sources to harness into co-creation.
29:19Audrey Tang is a tech genius
29:21who went from hacking into government sites to make them more user-friendly
29:25to becoming the world's first digital minister.
29:30In 1981, when Audrey Tang was born here in Taiwan,
29:34the internet was still very much in its infancy,
29:37and like most kids born around that time,
29:38the pair kind of grew up together.
29:41At first, the internet was shaping Audrey,
29:43but today Audrey is trying to leave her mark on the internet.
29:52I think we live in a time when people's interaction with the internet
29:58is being shaken very quickly now with the rise of AI.
30:04Almost three-quarters of the time, people think that AI is more human than human.
30:09And so it becomes very easy then to orchestrate entire villages of fake people
30:17that all look very real, shaping the political opinion in a very insidious way.
30:24But every time something attacks the society,
30:27the society needs to very quickly make sense of it and determine what to do with it.
30:32I think this is a classic case of the people's collective wisdom
30:37is actually much better than the politician's instinct.
30:43Audrey Tang has pioneered all of these new ways of integrating AI and technology
30:48into strengthening democracy.
30:51Using AI to take citizens deliberating on a topic that's contentious,
30:56having an AI listen to that conversation, automatically transcribe it,
31:00maybe even facilitate the conversation, then find the areas of unlikely consensus
31:04and rapidly do what would have taken lots of human labor to figure out.
31:09We send text message, SMS, to random numbers around Taiwan,
31:14asking what should we do about the deep fake advertisement online.
31:19And we chose 450 people.
31:22And those 450 people in rooms of 10, facilitated by AI system,
31:28it can summarize in real time what each room of people have agreed on.
31:34One room may say, Facebook earns advertisement money from those scammers,
31:39so Facebook should be liable.
31:41And maybe another room say, TikTok, ByteDance,
31:44we should slow down connection to their servers until all their business goes to Google.
31:49All these ideas become a draft law in May, and it passed last July.
31:54You simply do not see those sponsored fake advertisement anymore,
31:59because the people through citizen assembly set the rules that govern the AI's deep fake harm.
32:06And so I think we can use AI to address the harms of AI to democracy by using AI to
32:14foster democracy.
32:25I would love to see places like Australia engage and experiment with these kinds of ideas.
32:32Using technology to facilitate consultation, engagement, a greater sense of democracy,
32:37that I think is the antidote to the centralization of power within the technology industry.
32:43Democracy thrives on participation.
32:46And digital technology offers one of the best ways to boost that.
32:52Audrey's ideas about digital democracy have gained traction around the world.
32:59At home in Taiwan, she offers hope and a sense of agency.
33:04It doesn't take all that long in Taiwan to be deeply impressed by the place.
33:09I'm not talking just about the trains, they're fantastic, but this place is ambitious.
33:14They are politically engaged and tech savvy.
33:17And it's all the more remarkable when you think about the fact that this place emerged from martial law.
33:23Maybe even it's because of that.
33:25They know democracy is fragile and hard-won.
33:28What is a logic like?
33:37Ms Chen is a retired teacher who lives on the top floor of an apartment block in central Taipei.
33:45That's very beautiful. The scenery.
33:47Nice garden.
33:48Now in her early sixties, she's old enough to have lived under martial law in Taiwan,
33:52when new political parties, human rights, even free speech were banned here.
33:58Wow, this is spectacular.
34:00Yeah. Around the Taipei city is mountains, so many mountains.
34:05What do you love about this city?
34:07We are so free and so happy.
34:10How to keep Taiwan's freedom?
34:13So, um, how do you feel about China?
34:19Invade Taiwan. Anytime.
34:21You're worried that will happen?
34:24Yes, of course.
34:25And so you're going to classes to prepare?
34:29Yes, yes. Why?
34:31Because just protect my family, protect my country.
34:46Ms Chen has signed up for a full day's workshop at the Kuma Academy.
34:52It's a not-for-profit, teaching citizens defence strategies in case there's an invasion.
34:59OK, time is about up. Everyone, hello. Hello.
35:05They begin with first aid lessons.
35:09The students learn how to tie a tourniquet if a limb gets blown off.
35:25Then they learn what to do in case of broad-scale aerial attacks.
35:31But this course puts just as much emphasis on the impact of cognitive warfare.
35:39They teach students how to spot the difference between a real person and an AI-generated influencer,
35:45whose very realistic content may be flooding their feeds.
35:59This is it. The fact-check centre.
36:02Yes. The biggest fact-checking initiative in Taiwan.
36:06OK.
36:07Titi Cat and his collaborators are using AI to mine social media for mis- and disinformation.
36:14So basically we are collecting the posts on social media platforms.
36:19The value of data is very, very high because we set up a rather large amount of keywords to collect.
36:27Their AI tool scans huge amounts of data to identify patterns in language and content.
36:34Like influencers, both Chinese and Taiwanese, who follow similar scripts and use the same visuals
36:41in what appears to be an orchestrated disinformation campaign.
36:45How have you designed this technology?
36:47It's picking up when there's a trend, when something is just being artificially blown up as an issue.
36:54Yes.
36:55They identify a stream of false narratives on social media, designed to reduce confidence
37:00in Taiwan's leadership and their ally, the US.
37:09So this is an example of using AI to actually hang on to facts?
37:15Yes.
37:15If you fact-check something, what then travels faster, the lie or the fact?
37:21Of course the lie. Those rumours go faster than facts.
37:26If we have done the fact-checking, we need to promote our fact-checking results to people.
37:32We try to disseminate those fact-check reports through all the social media
37:37where we collect those disinformation and rumours.
37:41So do you think you can win that race?
37:43Don't know.
37:44Don't know.
37:44But yeah, if nobody do it, there's darkness, there is a candle, a small light.
37:53Therefore people know what's light, what's darkness, right?
37:57But if there's no light, so darkness will be the normal, nothing else.
38:03So that's what we believe and, yeah, what we're doing.
38:11For now, Taiwanese support for unification with China sits below 20%.
38:16The majority of people want to maintain the independence they have
38:20or build on that further.
38:22Ms Chen is determined to preserve that majority.
38:26She will share what she's learned at Kuma Academy
38:29so that friends and family are more alert to targeted disinformation.
38:40Unlike the European Union, the Australian government
38:43has chosen against legislation specific to AI.
38:47Yeah, give it a diva moment.
38:49This means more economic opportunities for new tech,
38:52but not mandatory guardrails.
38:55I think we're all just kind of used to being taken advantage of
38:58by big tech companies, that it's like,
39:00well, yeah, of course this would happen.
39:02This is perfectly feasible.
39:04Before...
39:05After!
39:06Whoa!
39:06That doesn't look like me at all.
39:08That's so weird.
39:09The magic of AI.
39:11Hey!
39:11You look so respectable.
39:14I am giving AI a wide berth
39:17and I'm realising I can't afford to do that.
39:27Wait a minute, this is not what I think it is.
39:30You know, this is not as cool or funny or engaging
39:34as it actually should be because it's actually not real.
39:37We're living in a world that is, in technological terms,
39:41advancing at an exponential rate
39:43and is surprising even the developers in terms of that rate.
39:49What I think is interesting about Australia specifically
39:51is that we rate as nearly the lowest on most questions globally
39:57about optimism in relation to AI.
40:04So Australians, they are not optimistic,
40:07they don't feel inspired by it.
40:10I think we might have AI smarter than any single human at anything
40:13as soon as next year.
40:16And then probably within, like say 2030,
40:19probably AI is smarter than the sum of all humans.
40:22Certainly the hype that comes out of the AI industry
40:25is focused on inevitability.
40:26They've invested hundreds of billions of dollars into this industry
40:29and they want to seek a return.
40:32One of the ways they do that is they create a narrative
40:35that this technology is inherently valuable
40:37and will generate productivity gains
40:40even when the data suggests that might not be true.
40:46Australia is in the global spotlight,
40:49having introduced a world-first ban
40:51on the creation of social media accounts for under-16s.
40:55But while Australia pursues a hard line on this,
40:59lawyer Lizzie O'Shea wants to know why the approach to AI
41:02is more passive.
41:0483% of Australians would be more prepared to use AI products
41:08if they felt there were guarantees
41:10around things like safety and standards.
41:14I think there's currently more regulation on producing a sandwich
41:18than there is on producing world-ending
41:20artificial general intelligence.
41:23Lizzie has founded Digital Rights Watch
41:26to defend Australians from data mining,
41:28surveillance and digital disinformation.
41:31She's also busy urging Australians to speak up
41:35about how AI will change our world.
41:38I really want to start a conversation with anyone
41:41who is interested in the topic
41:42about how we can make AI fairer,
41:45how we can make sure that we have a balance
41:48for the concentration of power in the technology industry
41:51that is currently dictating how AI will be developed and deployed.
41:55How many people are using AI regularly in their work or study now?
42:01I would put that at two-thirds.
42:03Yeah.
42:04Rather than just accept that AI products
42:06are part of our world,
42:08Lizzie suggests we interrogate their purpose.
42:10Yeah, well I'm going around asking anybody
42:12if they can give me a good use case for Sora
42:14because I don't know what the purpose of that is.
42:17Like a fully synthetic social media platform
42:20to generate deep fake videos.
42:22I think there are a lot of nefarious purposes I can think of.
42:26A lot of maybe marginally...
42:27We have certain sets of regulations
42:28around what is expected of consumer products,
42:32including that they don't harm people
42:34and that they're developed with a certain kind of trustworthiness
42:37in how they're put on the market.
42:40Here in Australia we've come to be very much a nation on wheels.
42:45Cars, when they started being sold en masse to the public,
42:48there was an assumption that drivers are responsible
42:50for all mishaps, accidents that occur in vehicles.
42:54We just make the product, there's nothing we can do.
42:57It's their responsibility if something goes wrong.
43:01People were harmed en masse by vehicles
43:04that were not designed well,
43:06even though the companies manufacturing them
43:08knew that to be true.
43:13We don't take that approach to cars anymore.
43:16We now have safety features,
43:18we have seat belts, we have airbags.
43:20That was not introduced because the car industry
43:23thought it was a good idea.
43:24They had to be cajoled into doing it.
43:29We need to take an approach to regulation
43:31that enshrines what we think is fair and reasonable
43:35as a society and then require companies to comply.
43:39So I think there is a clear agenda
43:42by large tech companies to experiment on users,
43:45to try and find a way to make these products profitable.
43:47And we need to think very seriously
43:49about the implications of systems that can't be trusted
43:51to answer basic questions.
43:54When government talks about the need
43:56to embrace the opportunities of AI,
43:58the way they ought to do that
43:59is by giving Australians confidence
44:01that they've done the work to protect them.
44:03My mission is to create a movement of people
44:05that can start to take power back
44:07and to ensure that elected representatives
44:10who want to do the right thing are supported.
44:12There's nothing inevitable about the future.
44:15The future is ours to shape.
44:16And the best way we can do that
44:18is by being active, engaged in our democracy.
44:21It's not a spectator sport.
44:23Please join us and help us to build this movement.
44:33Ask yourself the question,
44:35is AI actually inevitable?
44:38Like, just, I know that's not true right now,
44:41but just imagine for a moment
44:42that literally no one on planet Earth wanted this to happen.
44:46Would the laws of physics force AI into existence?
44:50And the answer is no.
44:57As we all head off into this great technological unknown,
45:00it may be that we come to look back on this moment
45:03and see that we do have an opportunity to act,
45:06to shape what happens next.
45:08For me, already, this has been a mind-bending adventure,
45:12trying to figure out what the hell happened
45:13to all of the facts.
45:16Along the way, I have realised something,
45:17that facts alone are not going to solve these huge challenges
45:22we all face as humans.
45:23But, without facts,
45:26we don't have a hope of solving those problems.
45:29So perhaps, after all, facts are worth fighting for.
45:33Facts do matter.