Deepfaked news segments that appear to be delivered by well-known journalists and TV networks are going viral on social media. Often, the view counts of deepfaked segments far exceed those of real news stories. We take a look at how this application of AI has dramatically improved in quality, and why that is a grave concern as more than 4 billion people around the world cast ballots in regional and national elections this year.

0:00 Why 2024 is a consequential year for democracy
1:26 Deepfake news quality is improving dramatically
4:26 How manipulated media is detected
6:39 Spread of deepfake news vs. real news
8:50 Why deepfakes are protected by the First Amendment
11:00 How deepfakes can be used in targeted ways to impact elections
12:16 What is needed for defamation lawsuits against deepfake creators
13:11 Deepfakes can be a double-edged sword for political campaigns
13:48 How can citizens protect themselves from deepfaked information?
14:45 Why the U.S. government should stay out of debates over truth

Read the full story on Forbes: https://www.forbes.com/sites/alexandralevine/2023/10/12/in-a-new-era-of-deepfakes-ai-makes-real-news-anchors-report-fake-stories/?sh=19112da657af

Subscribe to FORBES: https://www.youtube.com/user/Forbes?sub_confirmation=1

Fuel your success with Forbes. Gain unlimited access to premium journalism, including breaking news, groundbreaking in-depth reported stories, daily digests and more. Plus, members get a front-row seat at members-only events with leading thinkers and doers, access to premium video that can help you get ahead, an ad-light experience, early access to select products including NFT drops and more:

https://account.forbes.com/membership/?utm_source=youtube&utm_medium=display&utm_campaign=growth_non-sub_paid_subscribe_ytdescript

Stay Connected
Forbes newsletters: https://newsletters.editorial.forbes.com
Forbes on Facebook: http://fb.com/forbes
Forbes Video on Twitter: http://www.twitter.com/forbes
Forbes Video on Instagram: http://instagram.com/forbes
More From Forbes: http://forbes.com

Forbes covers the intersection of entrepreneurship, wealth, technology, business and lifestyle with a focus on people and success.
Transcript
00:00 [crowd cheering]
00:08 -2024 will be a record year for elections around the world.
00:12 Over 4 billion people, more than half of Earth's population,
00:16 are expected to cast a ballot.
00:18 Seven of the 10 most populous nations
00:20 are going to the polls, and many elections
00:23 will be in countries consequential to the news cycle.
00:27 Taiwan held its presidential elections on January 13th,
00:31 which saw William Lai of the Democratic Progressive Party
00:34 win with over 40% of the vote.
00:38 Lai's election is expected to make relations
00:40 between Taiwan and mainland China more antagonistic.
00:43 Both Ukraine and Russia,
00:47 who remain locked in war with one another,
00:49 have scheduled elections in March.
00:53 And US elections in November are bound to draw
00:55 intense international attention
00:57 in what is shaping up to be a rematch of the 2020 elections.
01:01 -Democracy is still at risk.
01:03 This is not hyperbole.
01:05 -Many academics, political analysts,
01:07 and think tanks expect 2024 to be a major stress test
01:11 on the concept of democracy itself.
01:13 And one particular variable
01:14 that will further complicate this test
01:16 is the rise of AI tools and the ability
01:19 to create convincing, deepfake news content.
01:26 -We're entering an era in which our enemies
01:28 can make it look like anyone is saying anything
01:31 at any point in time,
01:32 even if they would never say those things.
01:35 Moving forward, we need to be more vigilant
01:37 with what we trust from the Internet.
01:39 That's a time when we need to rely on trusted news sources.
01:44 -Deepfakes and cheapfakes are not new.
01:47 But with the explosion of AI that was ushered in
01:50 by the introduction of ChatGPT just over a year ago,
01:53 we saw deepfakes proliferate.
01:56 And the types of deepfakes that we've been looking at,
01:58 which are these fake news segments
02:01 using the real likeness and real logos of real news outlets
02:05 and the faces of real broadcasters, are seemingly new.
02:09 And they are particularly problematic right now
02:11 as we are heading into a really high-stakes election
02:15 and also as we are in the midst of a war.
02:18 We have seen deepfake news segments
02:21 from prominent anchors at all sorts of outlets,
02:25 ranging from CNN to CBS and beyond.
02:30 -Truth or fake, you're beginning with a story
02:33 of a video on social media
02:35 where President Zelensky appears to surrender to Russian forces.
02:38 What's that about?
02:40 -A false video of President Zelensky
02:41 was circulated yesterday,
02:43 in which he's apparently making an announcement
02:46 surrendering to Russian forces.
02:48 This video was spread via a hacked Ukrainian news website
02:52 called Ukraine 24.
02:54 I've been seeing it sort of come in and out
02:56 for several years now,
02:57 but I've really been seeing it consistently, in high quality,
03:02 only in the last 12 months, I would say.
03:04 -It's only gotten worse.
03:07 Much worse.
03:10 There's no tomorrow.
03:13 -There are two main reasons for it.
03:15 One is that the technology to create deepfakes of news anchors
03:19 has just gotten better.
03:20 But two, and I think this is also important,
03:22 is that most of the major social-media companies
03:25 have eviscerated their trust and safety teams.
03:28 And that's not just Twitter, by the way.
03:29 That one's easy.
03:30 But it's even the Facebooks of the world,
03:32 the YouTubes and the TikToks.
03:34 And so, as a result of that, when people create fake content,
03:37 it's much, much easier to distribute.
03:40 So, remember that when we're talking about deepfakes,
03:42 there's really three parts to it.
03:44 There's the underlying technology,
03:46 the bad actors who are misusing these technologies,
03:49 but then there's the spread of that.
03:51 And the spread of that technology
03:53 is not an AI question.
03:54 That's a social-media question.
03:55 All three things have now lined up.
03:57 The technology's getting better.
03:59 Bad actors are figuring out that you can monetize
04:01 or abuse this content.
04:03 And the social-media companies have fallen asleep
04:05 at the wheel again.
04:08 -Using video from the CBS News archives,
04:11 Chris Ume was able to train his computer
04:14 to learn every aspect of my face and wipe away the decades.
04:20 -This is how I looked 30 years ago.
04:23 He can even remove my mustache.
04:25 -There are two approaches to detecting manipulated media,
04:28 what we call proactive and reactive.
04:30 So, the reactive is sort of my bread and butter here
04:33 as an academic at UC Berkeley.
04:34 What we do is we take an image, an audio, or a video,
04:37 we run it through a battery of tests,
04:38 and we try to figure out if it's been manipulated
04:41 or AI-generated all after the fact.
04:44 So, stuff gets online, some fact-checker contacts us,
04:47 we analyze the content,
04:49 and we eventually tell the fact-checker,
04:50 and they eventually set the record straight.
04:52 And meanwhile, the whole world has moved on
04:53 and gotten defrauded to the tune of millions of dollars.
04:55 So, the reactive stuff is good, if you will, as a post-mortem.
04:59 But at the speed at which social media moves,
05:02 where the half-life of a post can be measured in minutes,
05:05 you're not there fast enough to deal with the damage.
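[The transcript doesn't name the specific tests the Berkeley group runs, but to make the "reactive" idea concrete, here is a minimal sketch of one classic forensic check, error level analysis, which looks for regions of a JPEG that recompress differently from the rest of the frame. This is an illustrative, hypothetical example, not the actual battery of tests described above; it assumes Python with the third-party Pillow library and a placeholder file named suspect_frame.jpg.]

# Illustrative only: a single, simple forensic test (error level analysis),
# not the detection pipeline described in the video.
from io import BytesIO

from PIL import Image, ImageChops


def error_level_analysis(path, quality=90):
    """Re-save a JPEG at a known quality and measure how much it changes.
    Spliced or generated regions often recompress differently from the
    rest of the image, which shows up as a larger difference."""
    original = Image.open(path).convert("RGB")

    # Recompress at a fixed quality and reload the result.
    buffer = BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # Per-pixel difference between the original and the recompressed copy.
    diff = ImageChops.difference(original, resaved)

    # Crude single-number score: the largest channel difference, normalized.
    extrema = diff.getextrema()  # one (min, max) pair per color channel
    return max(channel_max for _, channel_max in extrema) / 255.0


if __name__ == "__main__":
    # "suspect_frame.jpg" is a placeholder filename for this sketch.
    score = error_level_analysis("suspect_frame.jpg")
    print(f"ELA score: {score:.2f} (higher = more recompression variance)")

[In practice, a reactive pipeline combines many signals like this, such as compression artifacts, lighting, face geometry, and audio cues, rather than relying on any single score.]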
05:08 The proactive techniques, the way they work,
05:11 is that if you pick up your phone and record something
05:15 or you are in the business of generating AI content,
05:18 you can inject into that content,
05:20 whether it's real or AI-generated,
05:23 a digital watermark that is cryptographically signed,
05:26 and then downstream, your browser or a piece of software
05:30 can read that watermark and say,
05:31 "Nope, I know that this is AI-generated,"
05:34 or in fact, that it is real,
05:35 and you can do that instantaneously.
05:37 This only works when you have good players.
05:39 So, when Adobe decides it's gonna put watermarks
05:42 into its content, well, great, I trust Adobe,
05:44 but there are a lot of bad players out there,
05:46 and a lot of this code for creating deepfakes is open source.
05:50 So, if you have open source and you've got some code
05:52 in there for inserting a watermark,
05:53 well, the bad guy's gonna go in there and remove that code
05:56 and we're off to the races.
05:57 So, watermarks absolutely are going to play a role here
06:01 but they will not, in and of themselves, solve the problem
06:04 because there's always ways around this technology
06:06 and there's open source and there's bad actors.
06:08 But I'm super supportive of that for the big players
06:12 like Adobe, OpenAI, and Midjourney,
06:15 and maybe we lop off half the problem.
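[To make the "cryptographically signed watermark" idea concrete, here is a minimal sketch of the signing and verification half, assuming Python with the third-party cryptography package. It is a hypothetical illustration, not the C2PA / Content Credentials system that players like Adobe actually ship, and it skips the hard part of embedding the mark robustly inside the media itself.]

# Minimal sketch: bind a provenance claim to a media file's exact bytes and
# sign it, so downstream software can verify the claim instantaneously.
# Hypothetical illustration; assumes the `cryptography` package is installed.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def sign_claim(media_bytes, claim, private_key):
    """Hash the media, bundle the hash with the claim, and sign the bundle."""
    payload = json.dumps(
        {"sha256": hashlib.sha256(media_bytes).hexdigest(), **claim},
        sort_keys=True,
    ).encode()
    return private_key.sign(payload)


def verify_claim(media_bytes, claim, signature, public_key):
    """Downstream check: recompute the payload and verify the signature.
    Any edit to the media or the claim makes verification fail."""
    payload = json.dumps(
        {"sha256": hashlib.sha256(media_bytes).hexdigest(), **claim},
        sort_keys=True,
    ).encode()
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    key = ed25519.Ed25519PrivateKey.generate()
    video = b"...raw video bytes..."  # placeholder content for this sketch
    claim = {"source": "newsroom-camera", "ai_generated": False}

    sig = sign_claim(video, claim, key)
    print(verify_claim(video, claim, sig, key.public_key()))              # True
    print(verify_claim(video + b"tamper", claim, sig, key.public_key()))  # False

[The speaker's caveat still applies: verification only means something when the signer is trustworthy, and an attacker running open-source generation code can simply strip out the signing step.]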
06:17 - I'm being replicated by a stand-in actor
06:22 to read this monologue, and then my appearance
06:25 and voice are changed using artificial intelligence.
06:29 So, now that I've got your attention,
06:32 let's send it back to the real Brian Sullivan
06:34 as we dive deeper into this revolutionary technology.
06:38 - Some of the more prominent clips that we had found
06:40 were actually from a TikTok and YouTube creator.
06:43 His name is Krishna Sahai.
06:45 Some of his most watched and most viral videos
06:48 were news segments with CBS anchors
06:52 who interviewed him as the only survivor
06:56 of a school shooting.
06:57 - We are now interviewing the only survivor
06:59 of the recent Texas school shooting,
07:01 high school student Krishna Sahai.
07:03 - In one example, he was the shooter.
07:05 In another, he was a survivor.
07:08 And across all the examples,
07:09 he was mocking the school shooting itself.
07:12 - We are now interviewing the only survivor
07:14 in the recent school shooting, TikToker Krishna Sahai.
07:17 - Dude, that must have been absolutely traumatizing.
07:19 What was going through everyone's head, man?
07:21 - Bullets, duh.
07:23 - One of the most interesting
07:25 and troubling pieces of this is that in many cases,
07:28 the deepfake news segments that we found
07:30 were getting more views and more virality
07:33 than actual news segments from those same outlets
07:37 that were posted to their blue check verified
07:40 social media accounts around the same time.
07:42 One example that we found was from Face the Nation.
07:45 This YouTube and TikTok creator had a segment
07:47 that was actually one of his more innocuous segments
07:50 that was about a group of kids jumping in an elevator
07:53 and the elevator crashes down and then they owe
07:57 this building more than half a million dollars in damages.
08:00 - Over $560,000 in damages liable
08:03 after TikToker Krishna Sahai destroys elevator.
08:07 - So it's not the most threatening example,
08:09 but what was so fascinating about it
08:11 was that it was viewed more than 300,000 times
08:14 and it again used the Face the Nation logo,
08:16 while on Face the Nation's own social media account on TikTok,
08:21 the post from the same day only garnered 7,000 views.
08:25 So when fake news segments from a creator
08:29 that uses the outlet's logo or anchors from that station
08:32 are in fact getting more eyeballs than actual news clips
08:37 from the actual outlet's blue check social media accounts,
08:40 you can see how that could become extremely problematic
08:43 and deter people from actually following
08:45 what is considered real news.
08:47 - So even deepfake technology is protected
08:54 by the First Amendment,
08:55 because lying is protected by the First Amendment.
08:57 I could make false statements
08:59 and I am not going to be punished
09:01 unless that false statement carries
09:03 some additional harm with it, direct harm.
09:07 Usually harm that is perpetrated against an individual.
09:10 And frankly, when you're in a situation
09:13 where you're making political statements,
09:15 that's where the First Amendment gives its strongest protection.
09:19 So general statements of a political nature,
09:22 which I think a lot of the deepfakes
09:24 we'll see in this coming year will be,
09:26 are actually protected even when they're lies,
09:29 unless there is some direct harm
09:32 that is inflicted upon an individual
09:34 or even to some degree, a small segment of society.
09:38 And what we're talking about here
09:39 are things like defamation.
09:41 If I say something or if I use a deepfake
09:43 in a way that makes a false statement about you,
09:47 and harms your reputation,
09:48 that would be something that is now
09:49 outside of the First Amendment,
09:50 not protected by the First Amendment.
09:52 While we do have a collective media literacy problem
09:55 in this country where people don't know the difference
09:57 necessarily between news and opinion,
10:00 or even within news,
10:01 the difference between a good source
10:04 and a not so good source,
10:05 or an outright lying source that has an agenda of its own,
10:08 we're getting better with that.
10:09 I think, and this is anecdotal,
10:11 people collectively are getting better
10:14 at separating truth from falsity.
10:17 It's harder when you bring in video
10:20 because they don't know the same tells
10:22 that we're already being trained to look for
10:24 in printed or online information.
10:27 So there's a level of validity,
10:28 of veracity to something that they see in video
10:31 and they go, "Oh, it's video.
10:32 It's really hard to fake that."
10:34 And it's happening so much.
10:35 And frankly, it's mostly being perpetrated
10:37 by people who want to take advantage of you.
10:40 We know that in the 2020 and 2016 elections,
10:43 a lot of the misinformation during the election period
10:46 was coming from overseas,
10:47 from places we aren't going to be able to reach or punish.
10:52 And I think that's probably what's gonna happen again,
10:53 which is what makes it so difficult to combat.
10:58 -You said this is the most comprehensive report
11:00 that we've gotten about the 2020 election
11:02 and foreign interference by the intelligence community.
11:05 And it does make clear
11:06 that this massive Russian influence campaign
11:09 was designed, orchestrated by Putin,
11:11 to denigrate Joe Biden
11:13 and to support the re-election of President Donald Trump.
11:16 I wonder -- -I've heard people say,
11:18 "Look, disinformation, deepfakes,
11:19 they can't change an election."
11:21 And I don't think that's true
11:22 because if you look at the last two election cycles,
11:25 the difference between one candidate or another
11:27 in terms of the electoral vote
11:29 came down to some 80,000 votes in a handful of states.
11:33 You don't have to move tens of millions of votes.
11:36 You have to move tens of thousands of votes.
11:38 And not only that, I know where those votes are.
11:41 If I'm the bad guy trying to interfere with your election,
11:44 I know exactly what states,
11:45 I know exactly what towns, what localities,
11:47 and I know how to find these people on social media
11:49 and manipulate them.
11:51 And that, I think, should worry us.
11:53 And you need a series of defenses.
11:55 And so you need a series of proactive defenses
11:57 and a series of reactive defenses.
11:59 And you need better corporate responsibility.
12:01 And you need some liability.
12:02 And you need some regulation.
12:04 And you need good consumer protection.
12:06 And so, you know, when you put all those pieces together,
12:09 I think we can start to trust things
12:10 that we see online a little bit more.
12:16 -It's possible, but it's difficult, both in a legal sense
12:19 and a practical sense, to bring a defamation lawsuit
12:22 against someone based on their creation of a deepfake.
12:26 So let's just say you created a deepfake about me.
12:30 I have to show, very specifically,
12:32 that not only did you lie, but you harmed me in some way,
12:34 and specifically, you harmed my reputation.
12:36 Ah, but beyond that, I have to show a number of other things.
12:39 I have to show, specifically,
12:42 that you made a materially and substantially false
12:46 assertion of fact about me that was published
12:50 and harmed my reputation,
12:52 and that you did it with some level of fault.
12:55 -This ability to create so many images so rapidly,
13:00 it's an incredibly powerful tool.
13:01 -New artificial intelligence technology
13:03 makes it easy to create fake images
13:05 that can look very realistic, like these created by artist
13:08 and online trust and safety expert Tim Boucher.
13:11 -I think, absolutely, we are going to see the campaigns
13:13 use it against their opponents.
13:16 We can also see campaigns using it to bolster their own candidates,
13:19 to create images of them looking more heroic or taller,
13:22 for example.
13:23 But here's the other place that the candidates can use it.
13:27 Imagine now there's a hot mic of a candidate
13:30 or a sitting president saying something inappropriate
13:32 or illegal.
13:33 They don't have to cop to it anymore.
13:35 They can say it's fake.
13:37 And so they can also deny reality.
13:38 So the deepfake technology is a double-edged sword.
13:41 You can create harmful content,
13:43 but you can also dismiss real content
13:45 by simply saying it's fake and muddying the waters.
13:48 I think the most important thing right now is to remind people
13:51 to think before they share,
13:54 to be a bit skeptical of what they are consuming,
13:56 and to really try to pay attention to the source.
13:59 If the source is an authoritative news outlet, great.
14:03 I don't think we can rely anymore on which accounts
14:06 are blue-check verified accounts and which aren't,
14:09 because now we know that many of the social media platforms
14:12 allow anyone to purchase verification,
14:16 so the bar for verified accounts is significantly lower.
14:19 But I think that you should really be focused on
14:21 where the news is coming from, who is posting it,
14:24 what their motive may be,
14:25 and what sorts of perspectives they are including in the clip.
14:29 I think all of those things are able to help us,
14:32 especially in a very fast-moving news environment,
14:34 better calculate what is worth sharing versus what isn't
14:37 and help us better understand what we are consuming.
14:44 We do not want the government to step into these discussions
14:48 about truth and falsity.
14:50 We don't want the government to step into decisions about,
14:52 this is my opinion, this is your opinion,
14:54 this is the right opinion.
14:56 We want the government to stay out of it.
14:57 We have a concept in the United States
14:59 called the marketplace of ideas.
15:02 And what it says is we have a place
15:04 where we can all come together and buy and sell
15:06 not products or goods, but thought, ideas, values.
15:10 And in that marketplace of ideas,
15:12 we not only get to test our own thoughts
15:14 and maybe change what we believe,
15:17 we may be able to change other people's opinions,
15:19 but we're also reinforced in what we believe.
15:21 And the whole point, though,
15:23 is we get to sort it out for ourselves.
15:26 We all know the phrase,
15:28 a lie travels halfway around the world
15:30 before the truth can put its shoes on.
15:32 I think now the lie travels around the world
15:34 once, twice, three times
15:35 before the truth even wakes up, right?
15:39 We've dealt with technological changes before
15:40 throughout our nation's history.
15:41 The First Amendment remains unchanged.
15:43 It's been there all along and it's worked.
15:47 And I believe in the First Amendment
15:50 and I believe it's gonna work this year as well.
15:52 Thanks, just one last thing while we're rolling.
15:56 How do I know you're not deepfaking me right now?
15:58 That's one of my worries.
16:01 I mean, we're sitting here talking about deepfakes.
16:03 We could turn, I don't even know if I see this.
16:05 How will I even know whether you're deepfaking me?
16:07 (laughing)