Transcript
00:03Hi, I'm Hamish McDonald and I'm a journalist, which means facts are my business.
00:09Where do you get your information from?
00:11Social media.
00:12Facebook.
00:13Facebook, Instagram and YouTube.
00:15Social media is where so many of us now get our information.
00:19But can we rely on it?
00:22When tech makes it so easy to create a fictional story, which can look and sound real.
00:29I went to the moon last week, don't you know?
00:31Even though it's entirely fake.
00:35Are you like so many people now, struggling to know what's true and what's false?
00:41Big tech is shifting our relationship with facts.
00:45We're going through a period of radical change in the way that people consume information.
00:49We've created this monster.
00:52An algorithm is outsourcing the human choice into a machine's decision.
00:57The end goal is keep you scrolling, hack our biology, change the way we feel, to change
01:02the way we see the world.
01:04Is anyone else paying attention to how all of this is affecting society?
01:10Over the course of this series, I've explored how social media is transforming the way we
01:15communicate.
01:16It's part of the propaganda, it's a biased content production.
01:19But we're about to take another technological leap.
01:23It took 68 seconds to replace his face.
01:27As AI makes everything believable and nothing certain.
01:32Anything could be fake at any time.
01:35So the experiment is detect AI faces.
01:39I was really easily tricked by it.
01:41If you don't want to sleepwalk into a future that nobody wants.
01:44Are you scared?
01:45I'm very scared.
01:47Join me on a journey right around Australia and the world to meet people determined to
01:52keep facts alive.
01:54In this real time information revolution.
02:10Social media has changed how we communicate.
02:14Algorithms have sent us into information echo chambers.
02:20Do you ever have those moments where you feel just a bit overwhelmed from all the information
02:24coming at you?
02:25You know, the light, the noise, even the speed of it all?
02:28Well, I don't want to alarm you, but it's probably going to get worse.
02:33You see, AI is here and it's revolutionising everything.
02:44Including our perception of who and what is real.
02:56Already, almost half our population is playing with generative AI.
03:03Even though no one knows where it might take us.
03:09One of the confusing things about AI is there's so many different kinds of risks that are all
03:14connected in a way that's even more profound than social media.
03:18Social media is like a baby AI.
03:22AI, if it really automates human labour, will rapidly automate billions of people's jobs.
03:28The actual mission statement is to automate every form of human cognitive labour in the economy.
03:36To be able to do every job better than a human can do.
03:40AI is a computer system that can do tasks otherwise requiring human intelligence.
03:46It's working for us every day in phones, cars, appliances.
03:55It's also data crunching, picking strawberries and diagnosing diseases.
04:04But AI is particularly good at work that involves distorting reality.
04:13As AI technology improves, it's getting harder for all of us, including experts, to detect what's human versus machine built.
04:26And am I looking in the same place each time?
04:28Same place every time, and don't you dare smile.
04:30Okay.
04:31Stuart White has spent decades perfecting his craft as a special effects artist.
04:37I'm going to walk an arc around the front here, take about 50 photos all one after the other.
04:41The whole process always starts with scanning somebody and turning them into a 3D model.
04:46Just do what the face is doing.
04:49Creating a three-dimensional version of my head is a painstaking process, requiring thousands of images and months of work.
04:57Wide eyes, surprise mouth.
05:00That's it?
05:02Funnel lips.
05:04That's a lot of muscles in the face.
05:06And you're done.
05:07I'm done.
05:07Yeah.
05:08I never need to work again.
05:11The robots can do the rest.
05:14Now Stu's got the full picture of my face, he needs a body he can connect it to.
05:29Highland dancer Tristan has been perfecting his art since he was six.
05:42He's now a national champion.
05:47First, 669.
05:52And with Stu's help, I can be too.
05:58Rather than take his usual, time-consuming 3D modelling approach, Stu is about to experiment
06:04with an AI-powered shortcut.
06:09So this is the software called FaceFusion.
06:12It costs about $1.50 an hour to rent a computer powerful enough to do this, so not much.
06:18And basically what it's asking for here is a still frame of the person whose face you want to use.
06:26So I'm going to find a fairly frontal picture of Hamish and drop it in there.
06:32We've got one of the shots from the dancing competition here.
06:36And hopefully, pretty much straight away, it'll start trying to get to work.
06:40It happens in basically about a minute, more or less.
06:46Which is astounding to me because historically this took six months with traditional techniques.
06:53And yeah, here we are, we have a result already.
06:57Stu is watching a machine supersede his job.
07:00It took 68 seconds to replace his face.
07:04It's done a pretty terrific job.
07:06And people who had never met Hamish could probably watch the end resulting clips
07:12and believe that that person was doing that dance.
07:15So what about someone who has met me?
07:18In fact, who knows my face better than anyone?
07:21Alright Mum, do you want to take a seat?
07:23I will.
07:23I've invited my mum, Carol, into work for the afternoon.
07:26I've told her I've got something special to show her.
07:30So you know I've been very busy.
07:32Yes, always busy.
07:34I've been working on this as well as other things.
07:37Yeah.
07:37But I've also been learning some new skills.
07:39Oh, right.
07:40You know I like to do things well.
07:42Yes, yes.
07:43I've actually done quite well.
07:45Oh, for me.
07:46So what?
07:48Do you want to have a look?
07:50I'll put my glasses on.
07:52Okay.
07:53You ready?
07:53Yeah, ready.
08:24Oh, that's gorgeous.
08:32You do things perfectly, always.
08:35I didn't realise you had that skill.
08:37Did you?
08:38Not really.
08:39I was a little bit surprised.
08:41Oh, my goodness.
08:42That was fantastic.
08:44Well, Dad would have loved it too.
08:46Yeah, Dad would have been very proud.
08:49How did you learn it?
08:51Was there anything about it that looks weird to you?
08:57No.
08:58Okay.
08:59Kind of scary.
09:00AI can trick even my own mum.
09:03What if I told you it's not really me?
09:06Well, some of it's you, but some of it's not.
09:09None of it's me.
09:10None of it.
09:14Well, I thought it was you when I got going,
09:17but then I thought it doesn't look quite like your face.
09:22And also the perfect arabesque.
09:29Do you look out for AI?
09:31Yeah, well, I do look at things and I do spot them usually.
09:36Oh, my God, it has rattled me.
09:40It's very clever.
09:43Love you.
09:43Love you too.
09:51AI is duping us, but Australians aren't blind to its dangers.
09:56Only 30% of us think the benefits of this tech outweigh the risks.
10:01We've recruited 10 participants and two cognitive psychologists
10:05to understand how we're adjusting to AI-generated content.
10:12So the experiment that I'm about to share with you is about our capacity to detect AI faces as they're
10:21looking now online.
10:24Go with your gut instinct and just keep on moving through.
10:46So you reckon he's real?
10:50That background is sus as hell.
10:56The people who make the most errors on AI faces are actually the most confident.
11:04See, that, that looks fake. That looks fake, doesn't it?
11:11Um, real.
11:15Um, real.
11:18I'm not a fan of AI. I never have been.
11:21I don't like looking at AI-generated content in my head.
11:25AI is still that video of Will Smith eating spaghetti and then he kind of turns into spaghetti and then
11:30he's eating it again and then the fork disappears into the bowl.
11:34That's what AI still is to me.
11:36I don't know. I just, I just don't know anymore. My head's hurting.
11:46Okay, we're done over here.
11:50What'd you get?
11:52Oh, this would be interesting.
11:53What did you get?
11:54Thirteen.
11:55Oh.
11:55How many?
11:56That's not good.
11:57I was really easily tricked by it.
12:00And that was when I was at my most vigilant, most, okay, I know in my head all of the
12:06cues of an AI, the wonky collars, the uneven hair.
12:10You guys did better than me.
12:12I think with that over-vigilance, once I was aware of it, I overcorrected and had the
12:20same result as if I didn't know anything.
12:23When they discovered how easily their own brains fell prey to some of this, there was a real humility in
12:29how they turned around and said, actually, it's not everybody else.
12:33I'm not immune to this because I'm young and I've grown up with this.
12:36It's me too.
12:37And this is going to be my future and I need to do better.
12:41My brain is so cooked after all of this.
12:45I have analyzed your data as a group.
12:49You were a perfect participant group in that you showed exactly what much larger samples from the community show.
12:56You were at chance.
12:59So, overall, 51% accuracy where 50% is chance, you might as well flip the coin.
13:08What these algorithms are doing is looking for patterns and then regenerating those patterns.
13:14So, they're tending more towards the average.
13:17And average faces we see as being more familiar, a little bit more trustworthy, a little bit more attractive.
13:26And these are things that are coming through in the AI faces now.
13:30So, these are entirely convincing now.
13:32We can't spot them online.
13:39Over 24 months, AI-generated Will Smith has been getting much better at eating spaghetti.
13:47Mmm, I can't get enough of this.
13:50In fact, it's thought AI can double its ability to perform complex tasks every few months.
13:57One of the sort of magical qualities of how generative AI works is it's not really thinking.
14:01It's mimicking.
14:04AI could rapidly flood our information environment with content generated by an AI that will look indistinguishable or even perform
14:14better than human-generated content.
14:17So, we're being out-competed both in that AI-generated content is cheaper to produce, but second, that then people
14:24doubt and question the value of human-generated content and assume maybe it's made by a machine.
14:36My name is Joss, and I am a video journalist, YouTuber. I make explainer videos about science and tech.
14:44Howtown is a new YouTube channel that I started with my colleague, Adam Cole.
14:50Every episode answers a how do they know that question.
14:54Joss Fong knows how well AI can mimic humans.
14:58She's been forced to prove that she's not AI-generated.
15:02I'm working on a story about giant pterosaurs, which are flying reptiles that lived more than 60 million years ago.
15:09The only thing that it left behind is this pile of bones from its left wing.
15:15Okay, so we don't even have any part of the head here.
15:18No part of the head.
15:19So how do they know what it looked like or how it moves?
15:22I called up a bunch of paleontologists to find out how they reconstruct these animals.
15:26We've been publishing for about six months, and we started to see the comments on YouTube that were accusing us
15:33of being AI-generated.
15:37You get a lot of comments on YouTube, and it's hard to know how seriously to take any of them.
15:42But there was a trend, and they'd say things like, are these people real?
15:47Or just like, AI question mark?
15:50Or a lot of them said, um, this sounds just like an AI podcast.
15:55Then it all became clear.
15:58NotebookLM.
15:59NotebookLM, our nemesis.
16:04It's a product where you can upload documents and ask questions about them, which is cool.
16:13But they also had this feature where it would generate a podcast.
16:18To give you a more local example...
16:23I've asked NotebookLM to whip up a podcast about our longest-running kids' TV show.
16:33And within moments...
16:35We are diving into the essence of play school.
16:38Entertaining and engaging children.
16:40The fact that...
16:41A very human-sounding conversation.
16:43Right.
16:43It proves the show has this fundamental structure that's just, well, it seems to be completely...
16:48Two generated voices.
16:50One male, one female-sounding.
16:52Uh, that would have a conversation about your given topic.
16:55How does a kid's show manage this grand idea of exploring the world without, you know, huge budgets or fancy
17:02CGI or flying presenters everywhere?
17:04And it's very realistic.
17:05Realistic.
17:06They breathe.
17:08Really important.
17:09And the source material is very specific.
17:11It lists...
17:12You know, they interrupt each other.
17:13They would kind of use the filler words that humans use.
17:17Or is that deliberate simplicity actually the secret weapon for survival?
17:21I mean, I'd argue the simplicity is absolutely core to its longevity.
17:25And so, I think as more and more people started to use that app, especially students using it for studying,
17:33they started to associate that kind of conversation that is explanatory in nature, they associated that with AI.
17:40And that's a bummer for us, because when we started Howtown, we thought, okay, we know AI can do voiceover,
17:47and that's no problem.
17:49But at the time, AI didn't do conversation.
17:53Quetzalcoatlus, part of a group called the azhdarchids.
17:57Where does that come from? It feels sort of Tolkien-esque or something.
18:01I believe it is Persian.
18:03Adam and I thought we would add conversations into our videos as a way of showing that we're people.
18:08You're like, that's an azhdarchid humerus. I know it. Like the back of my hand.
18:13So, if you scroll down on page 19...
18:16Yeah. One thing I would do is just not say vertical resting pose.
18:20Okay.
18:21I think you could just say, its head was up like a giraffe instead...
18:23Joss and Adam have had to convince their audience that they are real people.
18:28And having to do that has taken a toll.
18:31It tapped into some insecurities that I already had about how I present myself online.
18:35And so when I get these comments, it kind of...
18:39Part of me thinks, okay, well there's something unique to you that's causing them to accuse you of this.
18:44And not the other creators who are better at this.
18:49As AI becomes a better mimic of every form of human expression, artists, photographers, musicians are getting these accusations.
18:58And that sort of human connection that the internet made possible is now being interrupted by this growing sense that
19:08anything could be fake at any time.
19:10When really, there are people who are very rigorous, who follow a process, who have editorial supervision, and who get
19:18in trouble if they lie.
19:19And those are the people we need to stick with, and we need to protect.
19:24And they're real! And I'm one of them!
19:32Just like social media, AI isn't necessarily engineered to cause harm.
19:37It's engineered to create profit.
19:42US tech companies are the most powerful businesses the world has ever seen.
19:50The most brilliant people are gathered around this table.
19:54This is definitely a high IQ group.
19:56Mark, you're building some buildings that are as large as Manhattan.
20:01That was pretty good, right?
20:03You know, all of the companies here are building, just making huge investments in the country in order to build
20:11out data centers and infrastructure to power the next wave of innovation.
20:16Collectively, these tech executives spent more on developing AI in 2025 than the US government spent on education, jobs and
20:25social services combined.
20:27How much are you spending, would you say, over the next few years?
20:30Oh, gosh. I mean, I think it's probably going to be something like, I don't know, at least $600 billion
20:38through '28 in the US.
20:42Yeah.
20:43It's a lot.
20:44No, it's significant.
20:46It's a lot. Thank you, Mark. Great to have you.
20:50These magnates control the platforms that shape our information ecosystems.
20:55The AI moment is one of the most transformative moments any of us have ever seen or will see in
21:01our lifetimes.
21:03They do so with minimal oversight or accountability.
21:07What AI is doing even today for education in so many other industries is really great.
21:11So, thank you so much for enabling this. We will invest a ton in the United States and we will
21:17do our best to make sure that we continue to lead here.
21:20That's right. Thank you. Hundreds of billions of dollars and it's going to be well worth it.
21:25And you have an unlimited market, right?
21:27It seems like it.
21:31With AI, the race is, well, if I don't do it, if I don't build AI and go as fast
21:36as possible and take as many shortcuts as possible, even if that causes risk in society, I'm just going to
21:41lose to the company or country that is willing to go as fast as possible and take as many risks
21:46as possible.
21:47And that's what's so dangerous about AI is it's like the ring from Lord of the Rings.
21:50If I get the ring first, then I have all the power and I'll be the good king, I promise.
21:56But that is not a good way to race the most powerful, inscrutable and uncontrollable technology you've ever invented.
22:05The country competing in the AI race with the US is China.
22:09We're leading China by a lot, by a really, by a great amount.
22:14The Chinese government has invested billions in the industry.
22:18And it's using what it develops to create state-sanctioned disinformation campaigns targeting its neighbours.
22:30Perched on China's southeastern coast is a small island called Kinman.
22:38It's littered with the remnants of war.
22:44Although only three kilometres from China, it is Taiwanese territory.
22:49For decades, Taiwan spent periods defending this island from China.
23:00Taiwanese soldiers blasted cannonballs across the strait.
23:05And they broadcast propaganda from an enormous speaker directed at mainland China.
23:18The speaker extols the virtues of life beyond communism.
23:30The idea behind this was a psychological operation to try and spread democracy to the Chinese mainland.
23:37They suggested that members of the PLA, the People's Liberation Army, should come over here and try it.
23:48These days, though, the speaker is a little outdated.
23:53The Chinese superpower is wielding one of the world's most pervasive propaganda tools, AI-generated memes.
23:59The memes proclaim that China is a great nation.
24:21They tell the Taiwanese that they, too, belong to that nation.
24:29And that Taiwan needs to unify with the Chinese motherland.
24:34That reunification is China's divine destiny.
24:37And the wheel of history is one no one can stop.
24:43You've probably heard talk of a possible global flashpoint over Taiwan,
24:47China seizing territory that historically Beijing says belongs to it.
24:53But what if, in this far more complex world,
24:56taking Taiwan doesn't involve landing on the beaches, storming territory.
25:01What if taking Taiwan happens in an entirely different space?
25:13This is Taipei, the capital of Taiwan.
25:16Most security experts agree this is where China is truly on the attack.
25:23Stay vigilant against online propaganda.
25:27That's the message from officials here in Taiwan.
25:29There are brewing fears that Chinese social media apps are hosting content
25:33that could threaten Taiwan's national security.
25:37Civil organizations are launching into action
25:39to fortify the nation's digital literacy.
25:43Hi, ttcat.
25:44Hi.
25:45I'm Hamish.
25:46Really nice to meet you.
25:47Yes.
25:47And you've got the chess ready to go?
25:49Yes.
25:51Wu Min-hsuan, also known as ttcat,
25:54has co-founded Doublethink Lab.
25:56It's a group working to track and expose
25:58Chinese digital disinformation.
26:01By the way, why ttcat?
26:03I had a cat.
26:04Her name was Titi.
26:05OK.
26:06So I've been using this alias for a very long time.
26:09OK.
26:10It's my online handles, everything.
26:13So what is cognitive warfare for people living in Taiwan?
26:16Create more chaos, polarization, and eventually that lead to we choose a more friendly attitude
26:24toward China government leaders, because you're thinking about war, right?
26:30If you are the aggressor, how you actually win a war?
26:35There's no way you can kill everybody in the island.
26:40Taiwan has 23 million people.
26:42There's no way you can eliminate everybody here.
26:46So the only way you win a war is we surrender.
26:49The way to make us surrender is when we believe there's no chance we can win,
26:55or there's no reason we should fight or resist at all.
27:01China's AI-generated disinformation campaign particularly targets Taiwan's democratic leaders
27:07and drives the narrative that, inevitably, Taiwan will be absorbed by China.
27:14People of all ages need to be wary that when they're consuming content online,
27:17if there is a video that makes you feel particularly strong emotions,
27:22that you should maybe be alive to the fact that that video could be trying to manipulate you
27:28in one way or another.
27:32The dominated narrative is that the U.S. will not help you.
27:37Nobody cares about you.
27:38You're a small state.
27:40They put this like a small state in your mind that you have no chance to survive by yourself.
27:47You can only rely on a big power.
27:49And the U.S. is not trustworthy.
27:52They are hypocrites.
27:53And they do a lot of evil things, right?
27:56So you can only rely on China.
27:58And China will be very friendly to you because we are family.
28:02So this is the main narrative.
28:05So does that mean that China is preconditioning you for surrender?
28:09I think so, yes.
28:11A lot of young people actually, if we see the polling,
28:15there's maybe 20% or 30% of young people doesn't really see through these lies.
28:20Are you scared?
28:21I'm very scared.
28:23Yeah, especially like generative AI, right?
28:26It will also be going to change how people interact with social media or interact with news.
28:35The Taiwanese are resilient.
28:39Their island has been labelled one of the most dangerous places on Earth.
28:44It's buffeted by earthquakes and typhoons.
28:49China frequently runs combat drills just kilometres off its coastline.
28:54And the information war is relentless.
28:59We always look at these foreign adversarial attacks as well as our natural disasters
29:05as ways for conflict to not turn into explosion,
29:11but rather see them like fire on the ground, not to be put out,
29:14but rather as energy sources to harness into co-creation.
29:19Audrey Tang is a tech genius
29:21who went from hacking into government sites to make them more user-friendly
29:25to becoming the world's first digital minister.
29:30In 1981, when Audrey Tang was born here in Taiwan,
29:34the internet was still very much in its infancy,
29:36and like most kids born around that time,
29:38the pair kind of grew up together.
29:41At first, the internet was shaping Audrey,
29:43but today, Audrey is trying to leave her mark on the internet.
29:52I think we live in a time when people's interaction with internet
29:58is being shaken very quickly now with the rise of AI.
30:04Almost three-quarters of the time,
30:06people think that AI is more human than human.
30:09And so it becomes very easy then
30:12to orchestrate entire villages of fake people
30:16that all look very real,
30:19shaping the political opinion in a very insidious way.
30:24But every time something attacks the society,
30:27the society needs to very quickly make sense of it
30:30and determine what to do with it.
30:32I think this is a classic case
30:35of the people's collective wisdom
30:37is actually much better than the politician's instinct.
30:43Audrey Tang has pioneered all of these new ways
30:46of integrating AI and technology
30:48into strengthening democracy,
30:51using AI to take citizens deliberating on a topic
30:54that's contentious,
30:56having an AI listen to that conversation,
30:58automatically transcribe it,
31:00maybe even facilitate the conversation,
31:02then find the areas of unlikely consensus
31:04and rapidly do what would have taken
31:06lots of human labor to figure out.
31:08We send text message, SMS to random numbers
31:13around Taiwan,
31:14asking what should we do
31:16about the deep fake advertisement online.
31:19And we chose 450 people.
31:22And those 450 people in rooms of 10
31:25facilitated by AI system,
31:28it can summarize in real time
31:30what each room of people have agreed on.
31:33One room may say,
31:35Facebook earns advertisement money
31:37from those scammers,
31:38so Facebook should be liable.
31:41And maybe another room say,
31:43TikTok, ByteDance.
31:44We should slow down connection to their servers
31:46until all their business goes to Google.
31:49All these ideas become a draft law in May,
31:52and it passed last July.
31:54You simply do not see those sponsored fake advertisement anymore,
31:59because the people through citizen assembly set the rules
32:03that govern the AI's deep fake harm.
32:06And so I think we can use AI to address the harms of AI
32:11to democracy by using AI to foster democracy.
32:26I would love to see places like Australia
32:29engage and experiment with these kinds of ideas.
32:32Using technology to facilitate consultation, engagement,
32:36a greater sense of democracy,
32:38that I think is the antidote
32:39to the centralisation of power within the technology industry.
32:43Democracy thrives on participation,
32:46and digital technology offers one of the best ways to boost that.
32:52Audrey's ideas about digital democracy
32:54have gained traction around the world.
32:59At home in Taiwan,
33:00she offers hope and a sense of agency.
33:04It doesn't take all that long in Taiwan
33:06to be deeply impressed by the place.
33:08I'm not talking just about trains.
33:10They're fantastic.
33:11But this place is ambitious.
33:14They are politically engaged and tech-savvy.
33:17And it's all the more remarkable
33:18when you think about the fact
33:20that this place emerged from martial law.
33:23Maybe even it's because of that
33:24they know democracy is fragile and hard-won.
33:37Miss Chen is a retired teacher
33:39who lives on the top floor of an apartment block
33:42in central Taipei.
33:44That's very beautiful.
33:46The scenery.
33:47Nice garden.
33:48Now, in her early 60s,
33:49she's old enough to have lived under martial law in Taiwan
33:52when new political parties,
33:55human rights,
33:55even free speech were banned here.
33:58Wow, this is spectacular.
34:00Yeah.
34:02Around the Taipei city is mountains.
34:04Yeah.
34:04So many mountains.
34:05What do you love about this city?
34:07We are so free and so happy.
34:10How to keep Taiwan's freedom?
34:13So, um, how do you feel about China?
34:19Invade Taiwan.
34:21Anytime.
34:22You're worried that will happen?
34:23Yes, of course.
34:25And so you're going to classes to prepare?
34:29Yes, yes, yes.
34:30Why?
34:31Because just protect my family,
34:35protect my country.
34:46Miss Chen has signed up for a full day's workshop
34:49at the Kuma Academy.
34:52It's a not-for-profit,
34:53teaching citizens defence strategies
34:55in case there's an invasion.
34:59Hello.
35:05They begin with first aid lessons.
35:09The students learn how to tie a tourniquet
35:11if a limb gets blown off.
35:25Then they learn what to do
35:26in case of broad-scale aerial attacks.
35:31But this course puts just as much emphasis
35:33on the impact of cognitive warfare.
35:39They teach students how to spot the difference
35:41between a real person
35:42and an AI-generated influencer
35:45whose very realistic content
35:47may be flooding their feeds.
35:59This is it.
36:01The fact-check centre.
36:02Yes.
36:03The biggest fact-checking initiative in Taiwan.
36:06OK.
36:07Ttcat and his collaborators
36:09are using AI to mine social media
36:12for mis- and disinformation.
36:13So, basically, we are collecting the posts
36:17on social media platforms.
36:20The volume of data is very, very high
36:22because we set up a rather large amount
36:25of keywords to collect.
36:27Their AI tool scans huge amounts of data
36:30to identify patterns in language and content,
36:34like influencers, both Chinese and Taiwanese,
36:37who follow similar scripts
36:39and use the same visuals
36:41in what appears to be
36:42an orchestrated disinformation campaign.
36:45How have you designed this technology?
36:47It's picking up when there's a trend,
36:49when something is just being artificially blown up as an issue.
36:54Yes.
36:55They identify a stream of false narratives
36:57on social media
36:58designed to reduce confidence
37:00in Taiwan's leadership
37:02and their ally, the US.
37:09So, this is an example of using AI
37:12to actually hang on to facts?
37:15Yes.
37:16If you fact-check something,
37:18what then travels faster,
37:20the lie or the fact?
37:21Of course, the lie.
37:23Those rumours go faster than facts.
37:26If we have done the fact-checking,
37:28we need to promote our fact-checking result to people.
37:32We try to disseminate those fact-checking reports
37:35through all the social media
37:37where we collect those disinformation and rumours.
37:41So, do you think you can win that race?
37:43Don't know.
37:44But, yeah, if nobody do it,
37:47there's darkness,
37:49there is a candle,
37:51a small light.
37:53Therefore, people know what's light,
37:56what's darkness, right?
37:57But if there's no light,
37:59so darkness will be the normal,
38:01nothing else.
38:03So, that's what we believe
38:05and, yeah, what we're doing.
38:11For now, Taiwanese support for unification with China
38:14sits below 20%.
38:16The majority of people want to maintain
38:18the independence they have
38:20or build on that further.
38:22Miss Chen is determined to preserve that majority.
38:26She will share what she's learned at Kuma Academy
38:29so that friends and family are more alert
38:31to targeted disinformation.
38:40Unlike the European Union,
38:42the Australian government has chosen against legislation
38:45specific to AI.
38:47Yeah, give it a diva moment.
38:49Diva moment.
38:49This means more economic opportunities for new tech,
38:52but not mandatory guardrails.
38:54I think we're all just kind of used
38:56to being taken advantage of
38:58by big tech companies
38:59that it's like,
39:00well, yeah, of course this would happen.
39:02This is perfectly feasible.
39:04Before, after.
39:06Oh, that doesn't look like me at all.
39:08That's so weird.
39:09The magic of AI.
39:11Hey, you look so respectable.
39:14I am giving AI a wide berth
39:17and I'm realising I can't afford to do that.
39:23I'm sorry.
39:24It looks very AI, doesn't it?
39:27Wait a minute.
39:29This is not what I think it is.
39:30You know, this is not as cool or funny
39:33or engaging as it actually should be
39:36because it's actually not real.
39:37We're living in a world that is,
39:40in technological terms,
39:41advancing at an exponential rate
39:43and is surprising even the developers
39:47in terms of that rate.
39:49What I think is interesting
39:50about Australia specifically
39:51is that we rate as nearly the lowest
39:54on most questions globally
39:57about optimism in relation to AI.
40:04So Australians, they are not optimistic.
40:07They don't feel inspired by it.
40:09I think we might have AI smarter
40:12than any single human at anything
40:13as soon as next year.
40:16And then probably within,
40:17like say 2030,
40:19probably AI is smarter
40:20than the sum of all humans.
40:22Certainly the hype that comes out
40:24of the AI industry
40:25is focused on inevitability.
40:26They've invested hundreds of billions
40:28of dollars into this industry
40:29and they want to seek a return.
40:32One of the ways they do that
40:33is they create a narrative
40:35that this technology
40:36is inherently valuable
40:37and will generate productivity gains
40:40even when the data suggests
40:42that might not be true.
40:46Australia is in the global spotlight,
40:49having introduced a world-first ban
40:51on the creation of social media accounts
40:53for under-16s.
40:55But while Australia pursues a hard line on this,
40:59lawyer Lizzie O'Shea wants to know
41:01why the approach to AI is more passive.
41:0483% of Australians would be more prepared
41:06to use AI products
41:08if they felt there were guarantees
41:10around things like safety and standards.
41:14I think there's currently more regulation
41:16on producing a sandwich
41:17than there is on producing
41:19world-ending artificial general intelligence.
41:23Lizzie has founded Digital Rights Watch
41:25to defend Australians from data mining,
41:28surveillance and digital disinformation.
41:31She's also busy urging Australians
41:34to speak up about how AI will change our world.
41:38I really want to start a conversation
41:40with anyone who is interested in the topic
41:42about how we can make AI fairer,
41:45how we can make sure that we have a balance
41:48for the concentration of power
41:50in the technology industry
41:51that is currently dictating
41:53how AI will be developed and deployed.
41:55How many people are using AI regularly
41:57in their work or study now?
42:01I would put that at two-thirds.
42:03Yeah.
42:04Rather than just accept AI products
42:06as part of our world,
42:08Lizzie suggests we interrogate their purpose.
42:10Yeah, well, I'm going around asking anybody
42:12if they can give me a good use case for Sora
42:14because I don't know what the purpose of that is.
42:17Like a fully synthetic social media platform
42:19to generate deep fake videos,
42:21I think there are a lot of nefarious purposes I can think of.
42:26A lot of maybe marginally...
42:27We have certain sets of regulations
42:28around what is expected of consumer products,
42:31including that they don't harm people
42:34and that they're developed
42:35with a certain kind of trustworthiness
42:37in how they're put on the market.
42:40Here in Australia,
42:41we've come to be very much a nation on wheels.
42:45Cars, when they started being sold en masse to the public,
42:48there was an assumption that drivers are responsible
42:50for all mishaps, accidents that occur in vehicles.
42:54We just make the product.
42:55There's nothing we can do.
42:57It's their responsibility if something goes wrong.
43:01People were harmed en masse by vehicles
43:04that were not designed well,
43:06even though the companies manufacturing them
43:08knew that to be true.
43:13We don't take that approach to cars anymore.
43:16We now have safety features.
43:18We have seatbelts.
43:19We have airbags.
43:20That was not introduced
43:21because the car industry thought it was a good idea.
43:24They had to be cajoled into doing it.
43:29We need to take an approach to regulation
43:31that enshrines what we think is fair and reasonable
43:35as a society
43:36and then require companies to comply.
43:39So I think there is a clear agenda
43:42by large tech companies to experiment on users
43:45to try and find a way to make these products profitable
43:47and we need to think very seriously
43:49about the implications of systems
43:50that can't be trusted to answer basic questions.
43:54When government talks about the need
43:56to embrace the opportunities of AI,
43:58the way they ought to do that
43:59is by giving Australians confidence
44:01that they've done the work to protect them.
44:03My mission is to create a movement of people
44:05that can start to take power back
44:07and to ensure that elected representatives
44:10who want to do the right thing are supported.
44:12There's nothing inevitable about the future.
44:15The future is ours to shape
44:16and the best way we can do that
44:18is by being active, engaged in our democracy.
44:21It's not a spectator sport.
44:23Please join us and help us to build this movement.
44:33Ask yourself the question,
44:35is AI actually inevitable?
44:38Like, just, I know that's not true right now,
44:41but just imagine for a moment
44:42that literally no one on planet Earth
44:44wanted this to happen.
44:46Would the laws of physics force AI into existence?
44:50And the answer is no.
44:57As we all head off into this great technological unknown,
45:00it may be that we come to look back on this moment
45:03and see that we do have an opportunity to act,
45:06to shape what happens next.
45:08For me, already, this has been a mind-bending adventure,
45:12trying to figure out what the hell happened
45:13to all of the facts.
45:16Along the way, I have realised something:
45:17facts alone are not going to solve
45:21these huge challenges we all face as humans.
45:23But, without facts,
45:26we don't have a hope of solving those problems.
45:29So perhaps, after all, facts are worth fighting for.
45:33Facts do matter.
45:37Facts do matter.