Transcript
00:03Hi, I'm Hamish MacDonald and I'm a journalist, which means facts are my business.
00:09Where do you get your information from?
00:11Social media.
00:12Facebook.
00:12Facebook, Instagram and YouTube.
00:15Social media is where so many of us now get our information.
00:19But can we rely on it?
00:22When tech makes it so easy to create a fictional story, which can look and sound real.
00:29I went to the moon last week, don't you know?
00:31Even though it's entirely fake.
00:35Are you like so many people now, struggling to know what's true and what's false?
00:41Big tech is shifting our relationship with facts.
00:45We're going through a period of radical change in the way that people consume information.
00:49We've created this monster.
00:52An algorithm is outsourcing the human choice into a machine's decision.
00:57The end goal is keep you scrolling, hack our biology, change the way we feel to change
01:02the way we see the world.
01:04Is anyone else paying attention to how all of this is affecting society?
01:10Over the course of this series, I've explored how social media is transforming the way we
01:15communicate.
01:16It's part of the propaganda.
01:18It's a biased content production.
01:20But we're about to take another technological leap.
01:23It took 68 seconds to replace his face.
01:26As AI makes everything believable and nothing certain.
01:32Anything could be fake at any time.
01:35So the experiment is detect AI faces.
01:39I was really easily tricked by it.
01:41We don't want to sleepwalk into a future that nobody wants.
01:44Are you scared?
01:45I'm very scared.
01:47Join me on a journey right around Australia and the world to meet people determined to
01:52keep facts alive in this real-time information revolution.
02:10Social media has changed how we communicate.
02:14Algorithms have sent us into information echo chambers.
02:20Do you ever have those moments where you feel just a bit overwhelmed from all the information
02:24coming at you?
02:25You know, the light, the noise, even the speed of it all?
02:28Well, I don't want to alarm you, but it's probably going to get worse.
02:33You see, AI is here.
02:35And it's revolutionising everything.
02:44Including our perception of who, and what, is real.
02:56Already, almost half our population is playing with generative AI.
03:03Even though no one knows where it might take us.
03:09One of the confusing things about AI is there's so many different kinds of risks that are all
03:14connected in a way that's even more profound than social media.
03:18Social media is like a baby AI.
03:22AI, if it really automates human labour, will rapidly automate billions of people's jobs.
03:28The actual mission statement is to automate every form of human cognitive labour in the economy.
03:36To be able to do every job better than a human can do.
03:40AI is a computer system that can do tasks otherwise requiring human intelligence.
03:46It's working for us every day.
03:48In phones, cars, appliances.
03:54It's also data crunching, picking strawberries, and diagnosing diseases.
04:04But AI is particularly good at work that involves distorting reality.
04:13As AI technology improves, it's getting harder for all of us, including experts, to detect what's human versus machine built.
04:26And am I looking in the same place each time?
04:28Same place every time, and don't you dare smile.
04:30Okay.
04:31Stuart White has spent decades perfecting his craft as a special effects artist.
04:37I'm going to walk an arc around the front here, take about 50 photos all one after the other.
04:41The whole process always starts with scanning somebody and turning them into a 3D model.
04:46Just do what the face is doing.
04:48Creating a three-dimensional version of my head is a painstaking process, requiring thousands of images and months of work.
04:57Wide eyes, surprise mouth.
05:00What's it?
05:01Funnel lips.
05:04It's a lot of muscles in the face.
05:06And you're done. I'm done.
05:08I never need to work again.
05:11The robots can do the rest.
05:14Now Stu's got the full picture of my face.
05:17He needs a body he can connect it to.
05:29Highland dancer Tristan has been perfecting his art since he was six.
05:42He's now a national champion.
05:46First, six, six, nine.
05:52And with Stu's help, I can be too.
05:58Rather than take his usual time-consuming 3D modelling approach, Stu is about to experiment with an AI-powered shortcut.
06:09So this is the software called Face Fusion.
06:12It costs about $1.50 an hour to rent a computer powerful enough to do this, so not much.
06:18And basically what it's asking for here is a still frame of the person whose face you want to use.
06:26So I'm going to find a fairly frontal picture of Hamish and drop it in there.
06:32We've got one of the shots from the dancing competition here.
06:35And hopefully pretty much straight away it'll start trying to get to work.
06:40It happens in basically about a minute or less.
06:46Which is astounding to me because historically this took six months with traditional techniques.
06:53And yeah, here we are, we have a result already.
06:57Stu is watching a machine supersede his job.
07:00It took 68 seconds to replace his face.
07:04It's done a pretty terrific job.
07:06And people who had never met Hamish could probably watch the resulting clips
07:12and believe that that person was doing that dance.
07:15So what about someone who has met me?
07:17In fact, who knows my face better than anyone?
07:21Alright Mum, do you want to take a seat?
07:23I will.
07:23I've invited my mum, Carol, into work for the afternoon.
07:27I told her I've got something special to show her.
07:30So you know I've been very busy.
07:32Yes, always busy.
07:34I've been working on this as well as other things.
07:36But I've also been learning some new skills.
07:40You know I like to do things well.
07:42Yes, yes.
07:43I've actually done quite well.
07:45Oh, Hamish.
07:47Do you want to have a look?
07:50I'll put my glasses on.
07:52OK.
07:53You ready?
07:53Yeah, ready.
08:12Oh, that's gorgeous.
08:25First, 669.
08:27You're joking.
08:31You're very clever.
08:33You do things perfectly always.
08:35I didn't realise you had that skill.
08:37Did you?
08:38Not really.
08:39I was a little bit surprised.
08:41Oh my goodness.
08:42That was fantastic.
08:44Well, Dad would have loved it too.
08:46Yeah, Dad would have been very proud.
08:49How did you learn it?
08:51Was there anything about it that looks weird to you?
08:57No.
08:58OK.
08:59Kind of scary.
09:00AI can trick even my own mum.
09:03What if I told you it's not really me?
09:06Well, some of it's you, but some of it's not.
09:09None of it's me.
09:14Well, I thought it was you when I got going, but then I thought it doesn't look quite like your
09:21face.
09:22And also the perfect arabesque.
09:29Do you look out for AI?
09:31Yeah, well, I do look at things that are AI, and I do spot them, usually.
09:36Oh my gosh, it has rattled me.
09:40It's very clever.
09:43Love you.
09:43Love you too.
09:51AI is duping us, but Australians aren't blind to its dangers.
09:56Only 30% of us think the benefits of this tech outweigh the risks.
10:01We've recruited 10 participants and two cognitive psychologists
10:06to understand how we're adjusting to AI-generated content.
10:12So the experiment that I'm about to share with you
10:15is about our capacity to detect AI faces as they're looking now online.
10:24Go with your gut instinct and just keep on moving through.
10:46So you reckon he's real?
10:48Yeah.
10:50That background is sus as hell.
10:56The people who make the most errors on AI faces are actually the most confident.
11:04See, that looks fake.
11:07That looks fake, doesn't it?
11:18I'm not a fan of AI. I never have been. I don't like looking at AI-generated content.
11:23And in my head, AI is still that video of Will Smith eating spaghetti
11:28and then he kind of turns into spaghetti and then he's eating it again
11:30and then the fork disappears into the bowl.
11:34That's what AI still is to me.
11:36I don't know. I just, I just don't know anymore. My head's hurting.
11:46Okay, we're done over here.
11:50What did you get?
11:52This will be interesting.
11:53What did you get?
11:5413.
11:55Oh.
11:55How many?
11:56That's not good.
11:5713.
11:59I was really easily tricked by it.
12:01And that was when I was at my most vigilant, most,
12:04okay, I know in my head all of the cues of an AI,
12:07the wonky collars, the uneven hair.
12:10You guys did better than me.
12:12And I think with that over-vigilance, once I was aware of it,
12:17I overcorrected and had the same result as if I didn't know anything.
12:23When they discovered how easily their own brains fell prey to some of this,
12:28there was a real humility in how they turned around and said,
12:31actually, it's not everybody else.
12:33I'm not immune to this because I'm young and I've grown up with this.
12:36It's me too.
12:37And this is going to be my future and I need to do better.
12:41My brain is so cooked after all of this.
12:46I have analyzed your data as a group.
12:48You were a perfect participant group in that you showed exactly what
12:54much larger samples from the community show.
12:56You were at chance.
12:59So overall 51% accuracy where 50% is chance.
13:03You might as well flip the coin.
13:07What these algorithms are doing is looking for patterns
13:11and then regenerating those patterns.
13:14So they're tending more towards the average.
13:17And average faces we see as being more familiar,
13:21a little bit more trustworthy,
13:23a little bit more attractive.
13:26And these are things that are coming through in the AI faces now.
13:30So these are entirely convincing now.
13:33We can't spot them online.
13:39Over 24 months, AI-generated Will Smith has been getting much better
13:44at eating spaghetti.
13:47Mmm, I can't get enough of this.
13:50In fact, it's thought AI can double its ability to perform complex tasks
13:55every few months.
13:57One of the sort of magical qualities of how generative AI works
14:00is it's not really thinking, it's mimicking.
14:04AI could rapidly flood our information environment
14:08with content generated by an AI that will look indistinguishable from
14:13or even perform better than human-generated content.
14:17So we're being out-competed both in that AI-generated content
14:22is cheaper to produce,
14:23but second, that then people doubt and question the value
14:26of human-generated content and assume maybe it's made by a machine.
14:36My name is Joss and I am a video journalist, YouTuber.
14:41I make explainer videos about science and tech.
14:45Howtown is a new YouTube channel that I started with my colleague Adam Cole.
14:49Every episode answers a how do they know that question.
14:54Joss Fong knows how well AI can mimic humans.
14:58She's been forced to prove that she's not AI-generated.
15:02I'm working on a story about giant pterosaurs,
15:04which are flying reptiles that lived more than 60 million years ago.
15:09The only thing that it left behind
15:12is this pile of bones from its left wing.
15:15Okay, so we don't even have any part of the head here.
15:18No part of the head.
15:19So how do they know what it looked like or how it moves?
15:22I called up a bunch of paleontologists to find out how they reconstruct these animals.
15:26We've been publishing for about six months and we started to see the comments on YouTube
15:31that were accusing us of being AI-generated.
15:36You get a lot of comments on YouTube and it's hard to know how seriously to take any of them.
15:42But there was a trend and they'd say things like,
15:45are these people real?
15:47Or just like AI question mark?
15:50Or a lot of them said,
15:52this sounds just like an AI podcast.
15:55Then it all became clear.
15:59Notebook LM.
15:59Notebook LM, our nemesis.
16:04It's a product where you can upload documents and ask questions about them,
16:09which is cool.
16:13But they also had this feature where it would generate a podcast.
16:19To give you a more local example,
16:23I've asked Notebook LM to whip up a podcast about our longest running kids TV show.
16:33And within moments...
16:35We are diving into the essence of play school.
16:38Entertaining and engaging children.
16:40The fact that...
16:41A very human sounding conversation.
16:43Right.
16:44It proves the show has this fundamental structure that's just...
16:46Well, it seems to be completely...
16:48Two generated voices.
16:49One male, one female sounding.
16:52That would have a conversation about your given topic.
16:55How does a kid's show manage this grand idea of exploring the world without,
16:59you know, huge budgets or fancy CGI or flying presenters everywhere?
17:04And it's very realistic.
17:06They breathe.
17:08It's really important.
17:09And the source material is very specific.
17:11It lists...
17:12You know, they interrupt each other.
17:13They would kind of use the filler words that humans use.
17:17Or is that deliberate simplicity actually the secret weapon for survival?
17:22I mean, I'd argue the simplicity is absolutely core to its longevity.
17:25And so I think as more and more people started to use that app,
17:30especially students using it for studying,
17:33they started to associate that kind of conversation that is explanatory in nature
17:38and they associated that with AI.
17:41And that's a bummer for us because when we started Howtown, we thought,
17:45okay, we know AI can do voiceover and that's no problem.
17:49But at the time, AI didn't do conversation.
17:53Quetzalcoatlus, part of a group called the azhdarchids.
17:57Where does that come from?
17:58It feels sort of Tolkien-esque or something.
18:01I believe it is Persian.
18:03Adam and I thought we would add conversations into our videos as a way of showing that we're people.
18:09You're like, that's an azhdarchid humerus. I know it like the back of my hand.
18:13So if you scroll down on page 19.
18:16Yeah. One thing I would do is just not say vertical resting pose.
18:20Okay.
18:21I think you should just say its head was up like a giraffe instead.
18:23Joss and Adam have had to convince their audience that they are real people.
18:28And having to do that has taken a toll.
18:30It tapped into some insecurities that I already had about how I present myself online.
18:35And so when I get these comments, it kind of...
18:38Part of me thinks, okay, well, there's something unique to you that's causing them to accuse you of this.
18:44And not the other creators who are better at this.
18:49As AI becomes a better mimic of every form of human expression,
18:54artists, photographers, musicians are getting these accusations.
18:59And that sort of human connection that the internet made possible
19:04is now being interrupted by this growing sense that anything could be fake at any time.
19:10When really there are people who are very rigorous, who follow a process,
19:15who have editorial supervision and who get in trouble if they lie.
19:20And those are the people we need to stick with and we need to protect.
19:24And they're real. And I'm one of them.
19:32Just like social media, AI isn't necessarily engineered to cause harm.
19:37It's engineered to create profit.
19:42US tech companies are the most powerful businesses the world has ever seen.
19:50The most brilliant people are gathered around this table.
19:54This is definitely a high IQ group.
19:56Mark, you're building some buildings that are as large as Manhattan.
20:01That was pretty good, right?
20:03You know, all of the companies here are building, just making huge investments in the country in order to build
20:11out data centers and infrastructure to power the next wave of innovation.
20:16Collectively, these tech executives spent more on developing AI in 2025 than the US government spent on education, jobs and
20:25social services combined.
20:27How much are you spending, would you say, over the next few years?
20:31Oh, gosh. I mean, I think it's probably going to be something like, I don't know, at least $600 billion
20:38through '28 in the US.
20:42Yeah.
20:43It's a lot.
20:44No, it's significant.
20:46It's a lot. Thank you, Mark. It's great to have you.
20:50These magnates control the platforms that shape our information ecosystems.
20:55The AI moment is one of the most transformative moments any of us have ever seen or will see in
21:01our lifetimes.
21:03They do so with minimal oversight or accountability.
21:07What AI is doing even today for education in so many other industries is really great.
21:11So thank you so much for enabling this.
21:13We will invest a ton in the United States and we will do our best to make sure that we
21:19continue to lead here.
21:20That's right. Thank you. Hundreds of billions of dollars and it's going to be well worth it.
21:25And you have an unlimited market, right?
21:28It seems like it.
21:31With AI, the race is, well, if I don't do it, if I don't build AI and go as fast
21:36as possible and take as many shortcuts as possible, even if that causes risk in society,
21:41I'm just going to lose to the company or country that is willing to go as fast as possible and
21:45take as many risks as possible.
21:47And that's what's so dangerous about AI is it's like the ring from Lord of the Rings.
21:51If I get the ring first, then I have all the power and I'll be the good king, I promise.
21:56But that is not a good way to race the most powerful, inscrutable and uncontrollable technology you've ever invented.
22:05The country competing in the AI race with the US is China.
22:09We're leading China by a lot, by a really, by a great amount.
22:14The Chinese government has invested billions in the industry, and it's using what it develops to create state-sanctioned disinformation
22:22campaigns targeting its neighbours.
22:29Perched on China's south-eastern coast is a small island called Kinmen.
22:38It's littered with the remnants of war.
22:44Although only three kilometres from China, it is Taiwanese territory.
22:50For decades, Taiwan spent periods defending this island from China.
23:01Taiwanese soldiers blasted cannonballs across the strait.
23:06And they broadcast propaganda from an enormous speaker directed at mainland China.
23:18The speaker extols the virtues of life beyond communism.
23:31The idea behind this was a psychological operation
23:34to try and spread democracy to the Chinese mainland.
23:38They suggested that members of the PLA,
23:40the People's Liberation Army, should come over here and try it.
23:48These days, though, the speaker is a little outdated.
23:52The Chinese superpower is wielding one of the world's
23:55most pervasive propaganda tools, AI-generated memes.
23:59What are the claims that we are paying for?
24:29that needs to unify with the Chinese motherland.
24:42You've probably heard talk of a possible global flashpoint over Taiwan,
24:47China seizing territory that, historically, Beijing says belongs to it.
24:53But what if, in this far more complex world,
24:56taking Taiwan doesn't involve landing on the beaches,
24:59storming territory?
25:01What if taking Taiwan happens in an entirely different space?
25:13This is Taipei, the capital of Taiwan.
25:16Most security experts agree
25:18this is where China is truly on the attack.
25:23Stay vigilant against online propaganda.
25:26That's the message from officials here in Taiwan.
25:29There are growing fears that Chinese social media apps
25:32are hosting content that could threaten Taiwan's national security.
25:37Civil organizations are launching into action
25:39to fortify the nation's digital literacy.
25:43Hi, Titi Cat.
25:44Hi.
25:45I'm Hamish.
25:46Really nice to meet you.
25:47Nice to meet you.
25:47Yes.
25:47And you've got the chest ready to go?
25:50Yes.
25:51Wu Min-hsuan, also known as Titi Cat,
25:54has co-founded Doublethink Lab.
25:56It's a group working to track and expose
25:58Chinese digital disinformation.
26:01By the way, why Titi Cat?
26:03I had a cat.
26:04Her name was Titi.
26:05OK.
26:06So I've been using this alias for a very long time.
26:09OK.
26:10It's my online handles, everything.
26:13So what is cognitive warfare for people living in Taiwan?
26:17Create more chaos, polarization,
26:19and eventually that leads to us choosing a more friendly attitude
26:24toward the Chinese government's leaders.
26:28Because you're thinking about war, right?
26:30If you are the aggressor, how do you actually win a war?
26:35There's no way you can kill everybody in the island.
26:40Taiwan has 23 million people.
26:42There's no way you can eliminate everybody here.
26:46So the only way you win a war is we surrender.
26:49The way to make us surrender is when we believe
26:53there's no chance we can win,
26:55or there's no reason we should fight or resist.
26:59At all.
27:01China's AI-generated disinformation campaign
27:04particularly targets Taiwan's democratic leaders
27:08and drives the narrative that, inevitably,
27:11Taiwan will be absorbed by China.
27:13People of all ages need to be wary
27:15that when they're consuming content online,
27:17if there is a video that makes you feel
27:20particularly strong emotions,
27:22that you should maybe be alive to the fact
27:25that that video could be trying to manipulate you
27:28in one way or another.
27:32The dominant narrative is that
27:35the US will not help you.
27:37Nobody cares about you.
27:38You're a small state.
27:40They put this 'small state' idea in your mind,
27:43that you have no chance to survive by yourself.
27:47You can only rely on a big power.
27:49And the US is not trustworthy.
27:52They are hypocrites.
27:53And they do a lot of evil things.
27:56Right? So you can only rely on China.
27:58And China will be very friendly to you
28:00because we are family.
28:02So this is the main narrative.
28:05So does that mean that China is preconditioning you
28:08for surrender?
28:10I think so, yes.
28:11A lot of young people actually,
28:13if we see the polling,
28:15there's maybe 20% or 30% of young people
28:18who don't really see through these lies.
28:20Are you scared?
28:21I'm very scared.
28:23Yeah, especially like generative AI, right?
28:26It's also going to change
28:28how people interact with social media
28:31or interact with news.
28:35The Taiwanese are resilient.
28:38Their island has been labelled
28:40one of the most dangerous places on Earth.
28:44Buffeted by earthquakes and typhoons.
28:49China frequently runs combat drills
28:52just kilometres off its coastline.
28:54And the information war is relentless.
28:59We always look at these foreign adversarial attacks
29:03as well as our natural disasters as ways for conflict
29:08to not turn into explosion,
29:11but rather see them like fire on the ground,
29:13not to be put out, but rather as energy sources
29:16to harness into co-creation.
29:19Audrey Tang is a tech genius
29:21who went from hacking into government sites
29:24to make them more user-friendly
29:25to becoming the world's first digital minister.
29:30In 1981, when Audrey Tang was born here in Taiwan,
29:34the internet was still very much in its infancy
29:36and like most kids born around that time,
29:38the pair kind of grew up together.
29:41At first, the internet was shaping Audrey,
29:43but today, Audrey is trying to leave her mark on the internet.
29:52I think we live in a time when people's interaction with internet
29:58is being shaken very quickly now with the rise of AI.
30:04Almost three-quarters of the time,
30:06people think that AI is more human than human.
30:09And so it becomes very easy then to orchestrate entire villages of fake people
30:16that all look very real,
30:19shaping the political opinion in a very insidious way.
30:24But every time something attacks the society,
30:27the society needs to very quickly make sense of it
30:30and determine what to do with it.
30:32I think this is a classic case of the people's collective wisdom
30:37is actually much better than the politician's instinct.
30:43Audrey Tang has pioneered all of these new ways
30:46of integrating AI and technology into strengthening democracy,
30:51using AI to take citizens deliberating on a topic that's contentious,
30:56having an AI listen to that conversation,
30:58automatically transcribe it, maybe even facilitate the conversation,
31:02then find the areas of unlikely consensus
31:04and rapidly do what would have taken lots of human labour to figure out.
31:09We send text message, SMS to random numbers around Taiwan,
31:14asking what should we do about the deep fake advertisement online.
31:19And we chose 450 people.
31:22And those 450 people in rooms of 10,
31:26facilitated by an AI system
31:28that can summarize in real time what each room has agreed on.
31:33One room may say,
31:35Facebook earns advertisement money from those scammers,
31:39so Facebook should be liable.
31:41And maybe another room say,
31:43TikTok, ByteDance, we should slow down connection to their servers
31:46until all their business goes to Google.
31:49All these ideas become a draft law in May,
31:52and it passed last July.
31:54You simply do not see those sponsored fake advertisement anymore
31:59because the people through citizen assembly set the rules
32:03that govern the AI's deep fake harm.
32:06And so I think we can use AI to address the harms of AI to democracy
32:12by using AI to foster democracy.
32:25I would love to see places like Australia engage and experiment with these kinds of ideas.
32:32Using technology to facilitate consultation, engagement,
32:36a greater sense of democracy,
32:37that I think is the antidote to the centralization of power within the technology industry.
32:43Democracy thrives on participation,
32:47and digital technology offers one of the best ways to boost that.
32:52Audrey's ideas about digital democracy have gained traction around the world.
32:59At home in Taiwan, she offers hope and a sense of agency.
33:04It doesn't take all that long in Taiwan to be deeply impressed by the place.
33:08I'm not talking just about the trains, they're fantastic, but this place is ambitious.
33:14They are politically engaged and tech savvy.
33:17And it's all the more remarkable when you think about the fact that this place emerged from martial law.
33:23Maybe it's even because of that.
33:25They know democracy is fragile and hard-won.
33:37Ms Chen is a retired teacher
33:39who lives on the top floor of an apartment block in central Taipei.
33:45That's very beautiful, the scenery.
33:47Nice garden.
33:48Now in her early 60s, she's old enough to have lived under martial law in Taiwan,
33:52when new political parties, human rights, even free speech were banned here.
33:58Wow, this is spectacular.
34:00Yeah. Around Taipei city there are mountains, green mountains.
34:05What do you love about this city?
34:07We are so free and so happy.
34:10How to keep Taiwan's freedom?
34:13So, how do you feel about China?
34:19Invade Taiwan. Anytime.
34:21You're worried that will happen?
34:23Yes, of course.
34:25And so you're going to classes to prepare?
34:29Yes, yes, yes.
34:30Why?
34:31Because I just want to protect my family, protect my country.
34:46Ms Chen has signed up for a full day's workshop at the Kuma Academy.
34:52It's a not-for-profit, teaching citizens defense strategies, in case there's an invasion.
35:00OK, the time is about.
35:01All the students, everyone, good morning.
35:02Good morning.
35:03Good morning.
35:21Good morning.
35:25Then they learn what to do in case of broad-scale aerial attacks.
35:31But this course puts just as much emphasis on the impact of cognitive warfare.
35:39They teach students how to spot the difference between a real person and an AI-generated influencer,
35:45whose very realistic content may be flooding their feeds.
35:59This is it.
36:01The Fact Check Center.
36:02Yes.
36:02The biggest fact-checking initiative in Taiwan.
36:06OK.
36:07Titi Cat and his collaborators are using AI to mine social media for mis- and disinformation.
36:13So, basically, we are collecting the posts on social media platforms.
36:20The volume of data is very, very high because we set up a rather large amount of keywords to
36:26collect.
36:27Their AI tool scans huge amounts of data to identify patterns in language and content, like
36:34influencers, both Chinese and Taiwanese, who follow similar scripts and use the same visuals,
36:41in what appears to be an orchestrated disinformation campaign.
36:45How have you designed this technology?
36:47It's picking up when there's a trend, when something is just being artificially blown up
36:53as an issue.
36:54Yes.
36:55They identify a stream of false narratives on social media, designed to reduce confidence
37:00in Taiwan's leadership and their ally, the US.
37:09So, this is an example of using AI to actually hang on to facts?
37:15Yes.
37:16If you fact-check something, what then travels faster, the lie or the fact?
37:21Of course, the lie.
37:23Those rumours go faster than facts.
37:27If we have done the fact-checking, we need to promote our fact-checking results to people.
37:32We try to disseminate those fact-check reports through all the social media where we collect
37:38those disinformation and rumours.
37:41So, do you think you can win that race?
37:43Don't know.
37:44But, yeah, if nobody does it, there's darkness. If there is a candle, a small light, then
37:53people know what's light and what's darkness, right?
37:57But if there's no light, then darkness will be the normal, nothing else.
38:02So, that's what we believe and, yeah, what we're doing.
38:11For now, Taiwanese support for unification with China sits below 20%.
38:16The majority of people want to maintain the independence they have or build on that further.
38:23Ms Chen is determined to preserve that majority.
38:26She will share what she's learned at Kuma Academy so that friends and family are more alert
38:31to targeted disinformation.
38:40Unlike the European Union, the Australian government has chosen not to legislate specifically
38:46for AI.
38:47Yeah, give it a diva moment, diva moment.
38:49This means more economic opportunities for new tech, but not mandatory guardrails.
38:55I think we're all just kind of used to being taken advantage of by big tech companies that
38:59it's like, well, yeah, of course this would happen.
39:02This is perfectly feasible.
39:04Before.
39:05After.
39:06Whoa, that doesn't look like me at all.
39:08That's so weird.
39:09The magic of AI.
39:11Hey, you look so respectable.
39:13I am giving AI a wide berth and I'm realising I can't afford to do that.
39:23I'm sorry.
39:24It looks very AI, doesn't it?
39:27Wait a minute.
39:28This is not what I think it is.
39:30You know, this is not as cool or funny or engaging as it actually should be because it's
39:36actually not real.
39:37We're living in a world that is in technological terms advancing at an exponential rate and
39:44is surprising even the developers in terms of that rate.
39:49What I think is interesting about Australia specifically is that we rate as nearly the
39:54lowest on most questions globally about optimism in relation to AI.
40:05So Australians, they are not optimistic.
40:07They don't feel inspired by it.
40:10I think we might have AI smarter than any single human at anything as soon as next year.
40:16And then probably within like say 2030, probably AI is smarter than the sum of all humans.
40:22Certainly the hype that comes out of the AI industry is focused on inevitability.
40:26They've invested hundreds of billions of dollars into this industry and they want to seek a
40:30return.
40:32One of the ways they do that is they create a narrative that this technology is inherently
40:36valuable and will generate productivity gains, even when the data suggests that might not
40:43be true.
40:47Australia is in the global spotlight, having introduced a world first ban.
40:51It's a big deal.
40:51On the creation of social media accounts for under-16s.
40:56But while Australia pursues a hard line on this, lawyer Lizzie O'Shea wants to know why
41:01the approach to AI is more passive.
41:0483% of Australians would be more prepared to use AI products if they felt there were guarantees
41:10around things like safety and standards.
41:13I think there's currently more regulation on producing a sandwich than there is on producing
41:19world-ending artificial general intelligence.
41:23Lizzie has founded Digital Rights Watch to defend Australians from data mining, surveillance,
41:29and digital disinformation.
41:31She's also busy urging Australians to speak up about how AI will change our world.
41:38I really want to start a conversation with anyone who is interested in the topic about
41:43how we can make AI fairer, how we can make sure that we have a balance for the concentration
41:49of power in the technology industry that is currently dictating how AI will be developed
41:54and deployed.
41:55How many people are using AI regularly in their work or study now?
42:01I would put that at two-thirds.
42:03Yeah.
42:04Rather than just accept AI products as part of our world, Lizzie suggests we interrogate
42:09their purpose.
42:10Yeah, well I'm going around asking anybody if they can give me a good use case for Sora
42:14because I don't know what the purpose of that is.
42:17Like a fully synthetic social media platform to generate deep fake videos.
42:22I think there are a lot of nefarious purposes I can think of.
42:25A lot of maybe marginally...
42:27We have certain sets of regulations around what is expected of consumer products, including
42:32that they don't harm people and that they're developed with a certain kind of trustworthiness
42:37in how they're put on the market.
42:40Here in Australia, we've come to be very much a nation on wheels.
42:45Cars, when they started being sold en masse to the public, there was an assumption that
42:49drivers were responsible for all mishaps and accidents that occurred in vehicles.
42:54We just make the product, there's nothing we can do.
42:57It's their responsibility if something goes wrong.
43:01People were harmed en masse by vehicles that were not designed well.
43:06Even though the companies manufacturing them knew that to be true.
43:13We don't take that approach to cars anymore.
43:16We now have safety features.
43:18We have seat belts.
43:19We have airbags.
43:20Those were not introduced because the car industry thought it was a good idea.
43:24They had to be cajoled into doing it.
43:29We need to take an approach to regulation that enshrines what we think is fair and reasonable
43:35as a society and then require companies to comply.
43:39So I think there is a clear agenda by large tech companies to experiment on users to try and find
43:45a way to make these products profitable.
43:47And we need to think very seriously about the implications of systems that can't be trusted to answer basic questions.
43:54When government talks about the need to embrace the opportunities of AI, the way they ought to do that is
43:59by giving Australians confidence that they've done the work to protect them.
44:03My mission is to create a movement of people that can start to take power back and to ensure that
44:09elected representatives who want to do the right thing are supported.
44:12There's nothing inevitable about the future.
44:15The future is ours to shape.
44:16And the best way we can do that is by being actively engaged in our democracy.
44:21It's not a spectator sport.
44:23Please join us and help us to build this movement.
44:33Ask yourself the question, is AI actually inevitable?
44:38Like, just, I know that's not true right now, but just imagine for a moment that literally no one on
44:44planet Earth wanted this to happen.
44:46Would the laws of physics force AI into existence?
44:50And the answer is no.
44:57As we all head off into this great technological unknown, it may be that we come to look back on
45:02this moment
45:03and see that we do have an opportunity to act, to shape what happens next.
45:08For me, already, this has been a mind-bending adventure.
45:12Trying to figure out what the hell happened to all of the facts.
45:15Along the way, I have realised something.
45:18That facts alone are not going to solve these huge challenges we all face as humans.
45:24But without facts, we don't have a hope of solving those problems.
45:29So perhaps, after all, facts are worth fighting for.
45:33Facts do matter.