On YouTube, Angela explains, AI has become the first line of defense against harmful content. The technology not only speeds up content moderation work; it makes the process more humane by filtering out problematic content before it ever reaches a human reviewer.
Transcript
00:00What inspired a chemical engineer to work in program management for trust and safety?
00:07Find out on today's episode.
00:09I'm Angela Nakalembe from YouTube, and you're listening to Me, Myself, and AI.
00:15Welcome to Me, Myself, and AI, a podcast from MIT Sloan Management Review,
00:21exploring the future of artificial intelligence.
00:24I'm Sam Ransbotham, professor of analytics at Boston College.
00:27I've been researching data, analytics, and AI at MIT SMR since 2014,
00:34with research articles, annual industry reports, case studies, and now 12 seasons of podcast episodes.
00:42On each episode, corporate leaders, cutting-edge researchers, and AI policymakers
00:47join us to break down what separates AI hype from AI success.
00:52Hey, listeners.
00:58Thanks for joining us again.
01:00Today, we've got Angela Nakalembe, Engineering Program Manager at YouTube, with us in the virtual studio.
01:06Angela, thanks for joining us.
01:08Thanks for having me, Sam.
01:09I'm excited to be here.
01:10Typically, we start with some background, but it's really hard to imagine any listener that doesn't know about YouTube.
01:16But still, can you give us a brief overview of YouTube, and in particular, what your role there is?
01:21YouTube is an online video-sharing platform started about 20 years ago as an online community for folks to share videos on any topic that they were interested in.
01:31It got acquired by Google shortly after, and since then, it's just grown into this behemoth of an online presence with some 2 to 3 billion users monthly,
01:43and created such an incredible online community for creators to connect and also just make a living as well.
01:50And what I do specifically at YouTube is I work behind the scenes within YouTube's trust and safety department, where, as an engineering program manager,
01:59I'm responsible for overseeing a lot of the tools and feature launches, a lot of them now using AI,
02:06that help keep the platform safe from things like hate speech, misinformation, graphic content, and the like.
02:12So it's pretty rewarding work, and I feel like as we're in this ever-evolving world of AI that we're seeing this technology get ever more integrated into our lives,
02:22it's really important that we find ways to not only use this technology to power our products and make it easier for us to send emails,
02:31but also to find ways to protect the people who are forming online communities.
02:37You teed it up. What is happening with artificial intelligence in YouTube right now?
02:41One of the ways we're using AI is to basically turbocharge our content moderation efforts and just make it a lot faster, but also a lot more humane.
02:52Imagine your job is to sit for eight hours a day, five days a week, reviewing videos filled with, you know, like, sexually explicit content or graphic violence or misinformation.
03:03That's the reality that exists for a lot of the human moderators that YouTube and other online social media platforms use to moderate content and keep that violative content off the platform.
03:16And doing that day in, day out can have such a huge emotional and psychological toll on our reviewers.
03:22But now with AI and machine learning, we're able to just really change that and evolve the role of the reviewer and have them work hand in hand, not just solo, but hand in hand with this AI tooling.
03:35And specifically, we're developing tools that can act as a first line of defense, right?
03:41That flag or catch harmful content before it ever reaches a person's eyes, whether on the platform or like a human reviewer.
03:49So while human reviewers are still needed as a step to kind of verify what these like AI models are doing,
03:56they no longer have to carry the weight and that burden alone.
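The triage flow described above, where an automated first pass absorbs the obvious violations and only the ambiguous cases reach a human reviewer, can be sketched roughly like this. Everything in the sketch (the scoring function, thresholds, and routing labels) is a hypothetical stand-in for illustration, not YouTube's actual tooling:

```python
# Rough sketch of an AI-first moderation triage step. The scoring
# function, thresholds, and labels are hypothetical stand-ins,
# not YouTube's real system.

def violation_score(video_id: str) -> float:
    """Pretend model score: likelihood (0..1) that a video violates policy."""
    # A real system would run a trained classifier on the video here.
    canned = {"v_gore": 0.98, "v_maybe": 0.55, "v_cats": 0.02}
    return canned.get(video_id, 0.0)

def triage(video_id: str, remove_at: float = 0.95, review_at: float = 0.30) -> str:
    """Route a video so clear-cut violations never reach human eyes."""
    score = violation_score(video_id)
    if score >= remove_at:
        return "auto_remove"    # obvious violation, handled by the model alone
    if score >= review_at:
        return "human_review"   # nuanced case, sent to a reviewer for judgment
    return "allow"              # clearly fine, published as normal

for vid in ("v_gore", "v_maybe", "v_cats"):
    print(vid, "->", triage(vid))
```

The point of the split is the one made in the conversation: the model absorbs the gory, obvious material, and reviewers see only the smaller set of cases that genuinely need human nuance.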
03:59Right. And I think to me, that's one of the most powerful uses of AI is not particularly replacing people, but helping protect them in a sense, you know,
04:09because now we are reporting much higher reviewer satisfaction scores in terms of well-being.
04:14We're also receiving a lot of positive feedback from our users or creators who notice a much faster handle time for a lot of issues or cases that they've reported.
04:23So things are getting taken down a lot faster or things are just not actually making it onto the platform.
04:28So people feel it's a lot safer.
04:30But it's been incredible to see how we're able to use this technology to kind of just overall make it a much better experience for everyone across the board, both within YouTube and out in the world.
04:40Yeah, that seems like a really interesting, you know, we had a family friend who's a policeman and he worked in the, I think, parts of the police force that dealt with things you just didn't want to see.
04:52And, you know, he talked about the turnover that that job has, even though they're doing and they believe in the job, it's a rough job.
04:59And it's the kind of thing that they end up taking home with them inadvertently, even if they don't want to.
05:04They can't just forget what they've seen all day. Until you mentioned it here,
05:09it really hadn't sunk in, this idea that you are in some sense replacing human workers.
05:16And that's a place where we really don't want humans to be doing something.
05:22Typically, with artificial intelligence, we think about dirty, dull and dangerous jobs.
05:26Yeah, there's still an incredible need for these human moderators, right?
05:29There's a lot of nuance that AI just can't understand, you know, especially when it comes to racial sensitivity and similar issues, where you need that extra human judgment.
05:40But I think AI is fantastic because it gives that first line of defense and just tackles a lot of the more gory, very obvious instances of policy guideline violation.
05:52But then that leaves space for humans to handle, you know, the work that needs a little bit more nuance.
05:57Right. And so there's still need for human support in there.
06:00Hey, folks, Stephen Johnson here, co-founder of Notebook LM.
06:05As an author, I've always been obsessed with how software could help organize ideas and make connections.
06:10So we built Notebook LM as an AI-first tool for anyone trying to make sense of complex information.
06:18Upload your documents and Notebook LM instantly becomes your personal expert, uncovering insights and helping you brainstorm.
06:26Try it at notebooklm.google.com.
06:29Where you bank matters.
06:35MIT Federal Credit Union believes in putting people over profits, which is why members save more with higher savings rates and lower loan rates, all while giving back to their community.
06:46Learn more at mitfcu.org/community.
06:50MIT Federal Credit Union is federally insured by NCUA.
06:59And at the same time, we're talking about the time spent curating content and you're not having a decreasing number of submissions.
07:07In fact, at the same time, we have artificial intelligence helping people produce content much more quickly.
07:14And so then you're getting much more content that you need to curate.
07:19But at the same time, I know you're having some struggles there with exactly what you do with AI content.
07:24Alphabet is working on developing these models.
07:27And at the same time, you've got an interesting position of curating them.
07:31What's the YouTube take on that?
07:33Yeah, that's a really good point.
07:35I do want to underscore that one of the benefits of AI has been frictionless content creation, right?
07:42It's helping people express themselves a lot easier across language barriers as well and scaling ideas that they previously weren't able to, right?
07:51But the downside of that becomes when anyone can generate high quality text or video or image instantly, we're flooding the internet with a lot of content that looks legitimate, but really isn't.
08:03And that's a big thing that we've been struggling with, not just YouTube, but in general with the rise in AI.
08:08Like a good example: last year was a really big election season globally, and we saw a huge uptick in AI-generated videos that people would create about political opponents saying, you know, the most rage-baity things you can think of, to get a rise out of people and spread all that misinformation.
08:26We've put a lot of time at YouTube on thinking about how we can prevent the spread of misinformation and just make sure people are more aware of the modality of creation of the content that they're consuming, right?
08:40A really good example I'd want to call out here is, at Google and YouTube, what we're working on is this big initiative called the C2PA.
08:48It's the Coalition for Content Provenance and Authenticity, or C2PA for short, basically an open technical standard for publishers, creators, and consumers to establish the origins of whatever content they're observing.
09:03It's a group of tech companies from across the world, Adobe and Microsoft, like a bunch of other companies are involved as well and are committed to using this and requiring, at least with YouTube,
09:14we're requiring our creators to validate whether their content was created with a camera or with AI.
09:20That's one of the ways we're mitigating it.
09:23We're also training our models to flag any misleading or low effort or manipulated content a lot faster.
09:31And the goal is really trying to restore context or also what we call cognitive security,
09:37which is people's ability to be able to distinguish what's real or not in a world where it's becoming increasingly difficult to tell the difference.
09:46Because back to the example I shared earlier, last year, it was somewhat easy to tell the difference between what was AI generated versus not in terms of those like political videos and such.
09:56But fast forward eight months later, Sam, there are some videos that I watched that I, first glance, second glance, could not tell that this was AI generated.
10:04It was very, very realistic, right?
10:06As the technology evolves, I think we as a company, Google, YouTube, we are trying our best to keep up with that and make sure that we can find ways to help people distinguish what's real versus what's not.
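The disclosure requirement Angela mentions, creators declaring whether content came from a camera or from AI, amounts to attaching a provenance label that viewers can surface. A minimal sketch of that idea follows; the manifest keys and label strings here are invented for the example, while the real C2PA standard defines cryptographically signed Content Credentials embedded in the media itself:

```python
# Minimal sketch of provenance-style disclosure in the spirit of C2PA.
# The manifest keys and label strings are invented for this example;
# real C2PA manifests are signed and embedded in the media file.

def origin_label(manifest: dict) -> str:
    """Turn a (hypothetical) provenance manifest into a viewer-facing label."""
    source = manifest.get("capture_source")
    if source == "camera":
        return "Captured with a camera"
    if source == "generative_ai":
        return "Altered or synthetic content"
    return "Origin undeclared"

print(origin_label({"capture_source": "generative_ai"}))
```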
10:18I'm glad you mentioned provenance, though, because I think in general, I mean, not just video, provenance is a big deal for any sorts of data that we're working with.
10:26And where data comes from, how it's used, what its source was, I think that's gotten much more difficult.
10:34I mean, it's wonderful that we have all this data and this video and content available, but at the same time, it has made it much harder to know where that came from.
10:43If I collected data myself, I knew where it came from.
10:46But if I use data from someone else, I'm one step removed.
10:50And I think, you know, provenance in general is something for everyone to start paying attention more to.
10:55In some sense, you know, there's such market power that YouTube has.
10:58A lot of the onus then comes to you and what you do ends up becoming, to some degree, a de facto standard in the marketplace.
11:06And, you know, not to put more pressure on you, but the things you're doing are pretty important there.
11:12Sure. I want to switch a little bit in here.
11:15Your title is engineering program manager, and you're actually an engineer.
11:19I always have to bring this out.
11:20I was a chemical engineer back in the day.
11:22You were a chemical engineer.
11:24Yes.
11:25I think for whatever reason, we have a preponderance of chemical engineers on the show.
11:31Tell us a little bit about how you ended up at YouTube.
11:34My background, as you mentioned, we're both chemical engineering majors, and I had a passion for chemistry, always, you know, a curious mind.
11:41And that's what drove me to it in the beginning.
11:43But I quickly realized, one, that there was not a lot of chemistry involved in chemical engineering.
11:49And then, two, as a chemical engineer or just as an engineer in general, you get to work on all these exciting projects and initiatives and problem solving.
11:57But you don't have a lot of autonomy in terms of the direction of where you want to take all of the work that you're doing.
12:03And I became increasingly hungry for the ability to be able to sway business strategy and decisions on where we took product.
12:14And so that's where I made the switch from engineering into management consulting of all places.
12:19But it was a fantastic time because it allowed me to basically create my own, what I would call, like, rotation program, right?
12:27Because I was able to work with front-facing, with clients.
12:31There are Fortune 50 companies across a wide range of industries doing all these sort of digital transformations and helping them scale their technologies, maximize their impact globally.
12:41I ended up learning that I enjoyed working in the tech space because it was just always evolving in terms of industries like consumer technologies, just always evolving.
12:53I love working on tooling or products that people get to use in their everyday life and having a direct impact on that.
12:59The joy from being able to turn your TV on and tell somebody that, hey, part of the work I do every day, you know, is to keep this app safe.
13:06Or, you know, like, I helped launch XYZ feature.
13:09It's very rewarding, right?
13:10And very nice when you can, like, easily pinpoint that.
13:13That's how I landed on working in technology and working on work that I could have a direct impact on.
13:20And so I started looking for platforms or, like, roles at companies that I got to use on an everyday basis.
13:27And YouTube was really high on my list because there's very few things that I watch or consume more than YouTube outside of, like, you know, books and the like.
13:33And so I landed at YouTube, fell into trust and safety after kind of, like, jumping around at different places and really just fell in love.
13:41I think I enjoyed the process of being able to build tools that keep the platform safe.
13:47Not just for its users, but also just, like, we really are the beating heart of trust and safety.
13:51It's really fulfilling work.
13:54And, like, as AI becomes more of a presence in the work that we're doing, which honestly has happened, like, two and a half, three years ago, it's been exciting to be at the forefront of that.
14:04Having to think ahead of the curve to make sure the app still stays delivering the level of quality experience for our users that it has been, even as this technology comes in and becomes enmeshed in our everyday lives.
14:17Thank you very much.
15:56which is taken from all disciplines, and as I said, it's not very uncommon in some parts of the world, but at that time in India it was quite a novelty, and we've seen very, very successful graduates make their mark in many parts of the world.
16:14What's the role of artificial intelligence in the study programs?
16:19We have emphasis on AI. Now it has taken on a larger dimension, but in terms of our research and our interest in anything related to these areas, we have been pursuing them for a long time. Very recently, we've set up the AI Center for Excellence, which has three areas of focus, and our aspiration is to lead in this space.
16:49We'll explore those three areas of focus in our next segment. Ananya, thanks for joining us today. Where should prospective students go to learn more about Shiv Nadar?
16:57We would encourage people to visit our website, snu.edu.in/home. That is where you would find everything.
17:09Thank you very much for having me. I hope you feel interested to visit our website, and if you come to India, please visit us.
17:29What I liked about that, as you were talking about the engineering, is that I saw some parallels. You know, one of the plights of the engineer is that there's never a headline in the newspaper saying the bridge doesn't fall for one more day, right? It's only when the bridge falls that it's in the headline. And this is what engineers around the world are constantly frustrated by: the fact that they do their job means that nobody knows about it. And just like you were saying, if you're on YouTube and no one sees content that's inappropriate, then they don't think about the fact that they didn't see it, and so it's hidden there.
18:02I've had to train myself to think of that as a sign that we're doing a good job, you know, and be like, you know what, if no one's talking about us, it means we're doing exactly what we need to do, and we should pat ourselves on the back. So yeah, very proud of the work that we're doing and that we're going to continue doing.
18:17I like that bridge analogy that you shared. The opposite, though, what's different about, let's say, traditional engineering, is that traditional engineering, like civil engineering and chemical engineering, has established principles. The physics isn't changing on you every day. The building materials, you know, there are improvements, and I'm not denigrating the fact that there are improvements in those industries, but it feels very different from how rapidly, as you said, over the last two and a half years, the tools have changed. So maybe some of the engineering analogy breaks down, and you have to respond so quickly.
18:54Yeah, 100%. I feel like there's never a dull moment, right? I think that's one of the reasons that I've enjoyed being in this space: you have to constantly be learning, and constantly evolving, and constantly looking at new ways to use the tooling or the skills that you have at your disposal, right? It used to be, hey, you just have a set of languages that you're really good at coding in, but now there's vibe coding coming into play, and now everyone's learning how to, you know, build things in half the amount of time using these LLMs, right? Which has been a very exciting opportunity for us.
19:29I played around and built an app over the weekend with one of the Google DeepMind Labs tools, and it was really exciting. These are things that I historically didn't think I'd be able to, you know, build out, or at least it would take me a really long time to. But it's been really cool to see all the things that you can sort of build out with this technology when you really focus on using it as a learning tool, as a tool for good.
19:53You know, and it goes back to what I was saying earlier about adaptability, right? Like my story itself, right? I started out in chemical engineering, and now I'm in trust and safety. As humans, we are very adaptable. I think that's one of our superpowers that we can harness. And as this technology becomes more prevalent in the workforce, I think we're better suited feeling the fear of the change that's coming with this tool, but then also guiding that fear towards a curiosity, and using that to learn more about this technology and how we can basically bulletproof, whether it's our careers or just our lives, or, you know, how we can use this technology for good.
20:33We got some lofty ideas there, but I have to go back and push you: what was the app? Come on, all the listeners want to know. What was the app that you made over the weekend? We're not gonna let that go so quickly.
20:45I was trying to figure out what I wanted to do next with my hair. It took me an embarrassingly long amount of time, but it was really fun, and I was able to generate a little app where I can just upload a photo and then scroll through a bunch of different hairstyles, you know, and figure out what it is I wanted to do: different hairstyles, hair colors. Like a fun little exercise.
21:05Okay, you responded well to me putting you on the spot. Thank you.
21:10I might go off on a bit of a tangent here, but, Sam, do you know what the biggest, the number one use case for AI was, as of 2025?
21:20Oh, if it's my class, it's doing homework.
21:23You know what, I think it's close. Let me actually pull up the picture, because someone sent this to me a couple weeks ago, and it was actually kind of mind-blowing. Homework is definitely up there.
21:34Yeah, in 2024 the number one use case was generating ideas. I think homework is probably somewhere around there, generating ideas. Therapy and companionship was number two, and then specific research. But in 2025, the biggest use case for generative AI is therapy and companionship, and number two is organizing life, and then number three is finding purpose. Which is so interesting: when you look at those top three things, and I mentioned earlier how our communities are becoming increasingly online, people are now also leaning on generative AI as some form of community.
22:10Right? I mean, that presents such an interesting dynamic, people using this tool for emotional support or for emotional connection. How do we remind people, as they use this tool, that AI is not sentient? It might appear to be, you know, but it really isn't. It's very convincing, but we need to be very cognizant of the parasocial relationships that could be formed from using it, and the cognitive vulnerabilities that could come as a result of becoming over-reliant on these tools, right?
22:40And at Google and YouTube, we're building guardrails to kind of help prevent this type of thing from happening: people becoming overly reliant on this technology, or developing some sort of, well, how do I put it? Basically, just putting safeguards against role play that could simulate sentience, and making sure that people are still very aware of the fact that when they're engaging with these tools, they're engaging with a machine, and this is not a real person. Because, back to the concept of cognitive security, it's crucially important for us, as we build with AI tooling, to make sure that AI remains helpful and human-centered. But we do not want it to pretend to be human, you know? And so how do we keep that at the forefront of all the work that we're doing?
23:28And that, you know, to tie back to what you started this by saying: learning more about the technology, and learning how it works, and learning what it is good for and not good for, is very related to understanding that it is in fact a machine, and not an actual person for a relationship. And what you speak to, I think, speaks of a potential growing divide between people who do learn more and get better at using these tools, and then people who rely on the tools, maybe overly rely, or use them in unhealthy ways. We've had a digital divide before, and now we have an even larger digital chasm, perhaps, between these people and the ways that these tools are used.
24:08So actually, one of the things that we have is a segment where I ask you some rapid-fire questions. We've already sort of touched on a lot of the interesting ones here, but what's moved faster or slower with artificial intelligence than you've expected?
24:21It was the text-to-video, or text-to-image, and the accuracy of it. I think we all remember a few of the images that went viral like a year ago, where it was like the Pope in a fancy winter coat and stuff like that, and how it was still very easy to distinguish that that wasn't real. But fast forward to 2025, and it's becoming increasingly difficult to distinguish between what's real and what's not, and to me, that's the thing that's kind of taking me aback.
24:47What do you wish that AI could do better?
24:50Sometimes I feel like I have to put in a lot of effort in my prompt engineering, and I think it's really just a matter of time until we get there, right? That I wouldn't need to be as detailed or as specific with my prompts, and it would be a lot easier for it to retain a lot more context, or just intuit, based on past conversations and such, what it is I'm trying to accomplish.
25:14What about AI frustrates you the most?
25:18Oh, it frustrates me. I don't know if it's artificial intelligence itself that frustrates me; I think it's how it's being used. And maybe it's not frustration, it's more concern. I did mention, you know, people using artificial intelligence as almost like an emotional support person, building those very parasocial relationships. It does kind of scare me a little bit.
25:41What's the best way you like to use AI? Was it making your app? What are other ways you personally like to use it?
25:47The app was a fun way. Another way is, I recently signed up for a triathlon, and I had no idea what I was doing. I basically talked to my LLM and said, hey, I have X number of months to train for this sprint triathlon, can you help me create a workout plan? I think life planning in general is a fantastic use for it, and so that's one of my favorite ways, at least, that I've currently been able to use AI. It gave me a fantastic week-by-week breakdown that I can use based on my current goals and fitness level.
26:19And then from a work standpoint, maybe ramping up onto a new effort and really trying to understand what it is we're trying to do: NotebookLM has been such a fantastic tool. It's one of the Google tools where you can just drop in a bunch of documents and such and have it synthesize a really fantastic summary for you. It can create a podcast for you based on any question you ask, and it's been a really good way to ingest information in a short amount of time and kind of just get up to speed on things. So yeah, I think those are my two favorite use cases.
26:51That's great. Actually, too bad it won't do the exercise for you, but maybe it will save you enough time by summarizing things that you have the time to exercise.
27:00Yeah.
27:02Thanks so much for joining us today. We've enjoyed talking with you, and thanks for coming on the show.
27:07Yeah, of course, it was great. Thanks for having me, Sam.
27:08Thanks for listening. On our next episode, I'll speak with Jeetu Patel, President and Chief Product Officer at Cisco. Please join us.
27:18Thanks for listening to Me, Myself, and AI. Our show is able to continue in large part due to listener support. Your streams and downloads make a big difference. If you have a moment, please consider leaving us an Apple Podcasts review or a rating on Spotify, and share our show with others you think might find it interesting and helpful.