Transcript
00:00 Sam Altman here, reporting from Straight Arrow News.
00:02 Standing in for Kennedy Felton today, we're taking a closer look at the impact of Sora on the future of AI-generated media.
00:08 Wait.
00:10 Hold on, something's not right.
00:12 No, I knew it wasn't going to work.
00:17 Okay, you got me.
00:19 No, that wasn't the real Sam Altman, but it was a video I created using OpenAI's new Sora 2 app.
00:30 The term deepfake first popped up back in 2017, coined by a Reddit user who used AI to swap celebrity faces into existing videos.
00:41 Since then, generative AI has evolved fast, from image apps like Midjourney to video creators like Sora, a new invite-only social platform where every clip is completely AI-generated.
00:56 It surpassed 1 million downloads during its first week.
01:00 People often tell me I'm a woman of many talents.
01:04 I mean, did you see my recent appearance at Fashion Week?
01:09 Pretty good, right?
01:10 I had to redeem myself after a failed time travel experiment that somehow became a big-budget movie.
01:17 Hi, I just popped up out of nowhere, don't know what this place is called, but if there's...
01:21 And yes, I even filmed a new cooking show last week.
01:25 Well, Sora did.
01:26 First things first, positive affirmations.
01:28 You're magnificent marbling.
01:30 You are worthy of a cast-iron throne.
01:32 You can create just about anything on the app.
01:35 Even the animation Sora 2 creates looks professionally made.
01:39 You're slowing down, Russ.
01:41 Try it.
01:42 All right, keep your hands where I can see them.
01:44 I don't want to do this, little guy.
01:44 Turn around for me.
01:45 I wasn't stealing.
01:46 Just playing the game.
01:47 Now, sure, some of the so-called AI slop looks fake, but a recent poll found more than 50% of people aren't confident they can detect whether something is made by AI or a human.
01:59 And while most of the AI, especially on Sora 2, is made to be comical, the silly can sometimes have serious consequences.
02:08 100% zany juice.
02:10 There was a very famous robocall that used the voice of Joe Biden to try to convince voters in New Hampshire and Vermont not to go and vote.
02:19 And that was manipulated by the other side to try to manipulate the election.
02:24 Northern Illinois University professor David Gunkel says the biggest danger is deception, and our laws aren't ready for it.
02:32 Technology moves at light speed.
02:35 Law and policy move at pen-and-paper speed.
02:38 So we are always playing catch-up.
02:40 We are always trying to make existing laws fit novel circumstances and then trying to write new laws to cover unanticipated opportunities and challenges that these technologies make available to us.
02:53 But Gunkel says generative AI isn't so much a turning point as it is an evolution of things that have been happening for years.
03:01 In photography, for instance, you can capture something real or you can use lighting, angles and editing to create a reality that doesn't exist.
03:11 AI, he says, is just the next step in that evolution, another tool that blurs the line between what's real and what's made.
03:19 Dozens of lawsuits are now testing those boundaries.
03:22 In a recent win for authors, AI company Anthropic agreed to pay $3,000 for each of an estimated half million books used to train its models without permission.
03:33 Even as apps try to limit abuse, their own rules are raising eyebrows.
03:38 Sora 2's strict content filters block certain requests and have become a running joke on the app.
03:43 Just put a nice little tree right over. Huh? What is this? I can't. It won't let me. Let me paint.
03:50 That's it. I'm coming for you, Sam Altman.
03:53 Even I was flagged for violating the terms and conditions.
03:56 When I instructed Sora to insert my likeness into a workout class, it flagged me for depictions of teens and children.
04:04 I guess that's a compliment. Maybe I should do my next story on my skincare routine.
04:08 Anyways, even OpenAI didn't realize how big of a problem these flagged requests would be.
04:15 Tonight, in this very arena, my dream is to make freedom ring.
04:20 But not everyone finds the app funny.
04:23 You just saw artist Bob Ross in a video someone created.
04:26 Other public figures like Robin Williams and Dr. Martin Luther King Jr. are being recreated,
04:31 which isn't against the terms of service since deceased figures aren't protected.
04:36 It's even prompting backlash from their families.
04:39 Robin Williams' daughter, Zelda, posted on Instagram begging people to stop sending her AI videos of her father.
04:46 She said, in part,
04:47 "To watch the legacies of real people be condensed down to this vaguely looks and sounds like them,
04:53 so that's enough, just so other people can churn out horrible TikTok slop puppeteering them is maddening."
04:59 Martin Luther King's daughter, Bernice King, echoed those concerns online, urging people to stop.
05:05 There's less of a risk of creating scary, hyper-realistic deepfakes that damage reputations or cause disruption
05:18 because they've implemented a lot of these safety guardrails to make it lean funny.
05:25 While safety guardrails have reduced the amount of inappropriate content,
05:30 not everyone is using the tech responsibly.
05:33 Some users are pushing the limits by making inappropriate or sexualized content,
05:37 and that's becoming a whole other issue in itself: AI porn.
05:41 Somebody was making all these videos of me and my clones, like, hanging out,
05:47 which I thought was so funny at first.
05:49 And then I see more and more videos, and he's trying to make my clones, like, make out.
05:54 And he does this with a lot of girls.
05:57 And you can read their prompts trying to get around these guardrails
06:00 because you can get around anything.
06:02 It's the Internet.
06:03 And to confuse an AI is not that hard.
06:08 Currently, not only are there no federal regulations governing generative AI,
06:12 but it's unclear who would be held responsible if something harmful happens.
06:17 Gunkel said we'll likely see a lawsuit over the next few years on the topic.
06:21 Usually when you use a tool to do something,
06:24 it is the user of the tool, and not the tool or the manufacturer of the tool,
06:29 who's held accountable for the good or bad outcomes.
06:33 But we are seeing this as a kind of moving target now.
06:38 While the technology might be scary,
06:41 AI is being woven into our daily lives more and more.
06:44 Gunkel wants to remind people that this sort of pushback happens
06:48 any time something new is introduced.
06:50 Case in point: Socrates once came out against writing when it was first introduced
06:55 because he thought it wouldn't be an effective means of communicating knowledge.
06:59 We are only three years out from ChatGPT being released.
07:03 That's really early on in a new technology.
07:07 And if there is a lot of hyperbole on both sides of the debate,
07:10 people are really excited about it, people are really afraid of it,
07:13 that's par for the course.
07:15 We've been here before.
07:16 And I think it is a matter of some thoughtful response to this technology,
07:22 some critical perspective,
07:24 and recognition that, you know, we've done this before
07:27 and we can be confident in the face of these new challenges.
07:30 So for now, Sam Altman won't be filling in for me.
07:34 No hard feelings, Kennedy.
07:38 But if my AI twin starts reporting from Cabo,
07:42 don't say I didn't warn you.
07:45 With Straight Arrow News, I'm Kennedy Felton.
07:47 For more on this story and others, visit san.com or download our mobile app.