  • 5/15/2025
Google drops the mind-blowing Gemini 2.5 update, shaking up the AI world like never before! From groundbreaking features to crazy new AI breakthroughs, this is the latest revolution you don’t want to miss. Stay tuned for all the exciting AI news! 🌐✨

#Google #GeminiUpdate #AIRevolution #ArtificialIntelligence #TechNews #Innovation #MachineLearning #AI #FutureTech #GoogleAI #TechUpdate #AIbreakthrough #NextGenAI #DeepLearning #TechTrends #AInews #Gemini25 #DigitalRevolution #CuttingEdge #AIcommunity
Transcript
00:00Google just dropped a surprise Gemini update that turns web app coding into a
00:06one-prompt magic trick weeks before I/O. Apple's secretly striking a deal to
00:12stuff that same AI into iPhones. Meanwhile, OpenAI is tearing up its
00:17corporate plans, slashing Microsoft's cut, and casually spending three billion to
00:23buy a coding startup. And that's not even the wild part. Because HeyGen just
00:29launched avatars so real they'll freak you out. Lightricks dropped a Hollywood
00:34level video model you can run on your laptop. And a new music AI just made four
00:40minute tracks in 20 seconds a reality. Yeah, the last few days in AI have been
00:45pure chaos in the best way possible. So let's talk about it. Hold up, before we go
00:51any further, this is big. We just launched our most advanced course yet inside the
00:57paid school community. And it's made for creators who actually want to build and
01:01cash in on AI avatars, influencers, and digital personas. If you're ready to level
01:06up and turn all this AI hype into something real and profitable, hit the
01:11link below. Don't miss it. So yesterday Google basically shouted surprise and
01:16pushed an early preview of Gemini 2.5 Pro, the I/O edition, out the door a
01:20couple of weeks ahead of schedule. People at Google's AI Studio are already calling it
01:25the WebDev Arena champ because it jumped 147 Elo points over the previous build.
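For scale, an Elo gap converts to an expected head-to-head win rate via the standard logistic formula; a quick sketch, using the 147-point jump claimed above:

```python
# Standard Elo expected-score formula: with a rating gap d,
# the higher-rated side is expected to win a fraction
# E = 1 / (1 + 10 ** (-d / 400)) of pairwise matchups.

def elo_expected_win_rate(gap: float) -> float:
    """Expected win rate for the side that is `gap` Elo points ahead."""
    return 1.0 / (1.0 + 10.0 ** (-gap / 400.0))

# A 147-point jump means the new build would be expected to win
# roughly 70% of blind side-by-side comparisons against the old one.
print(round(elo_expected_win_rate(147), 2))  # → 0.7
```

So the leaderboard jump isn't just cosmetic: it implies human raters preferred the new build's web apps in about seven out of ten matchups.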
01:31That Elo thing is basically a popularity contest judged by humans on how nice and
01:35functional your generated web apps look. And the new score plants Gemini on top of
01:41the leaderboard. It's also flexing an 84.8% on the Video-MME benchmark, which
01:46measures how well a model actually understands what's happening in video
01:50clips instead of just pretending. Michael Truell, the Cursor CEO who basically lives
01:54inside VS Code, said internally they're seeing far fewer botched tool
02:00calls. Meaning the model finally stops hallucinating that a function exists when
02:05it doesn't. Tulsi Doshi who runs product for Gemini claims they rushed the release
02:11because devs wouldn't stop begging for it. And I kind of believe her. If you're
02:15playing with the Gemini API right now, you get the new model automatically in Google
02:20AI Studio, Vertex AI, and the consumer Gemini app, where the canvas feature lives, so
02:25you can drag boxes around and have the bot spit React code on the fly. Oh, and the
02:31crazy part: the context window is still 1 million tokens, basically an hour of 4K
02:37video or 11 hours of audio, and Google says they're aiming at 2 million. Now while
02:42Google's busy leveling up Gemini, Apple's been watching from the sidelines
02:46thinking hmm maybe we borrow that for a minute. According to people familiar with
02:51the talks, Apple Intelligence on iOS 19 is set to integrate Gemini, at least
02:58temporarily. Remember Samsung's Galaxy S25 has already been bragging about Gemini in
03:03its camera app. So Cupertino doesn't want to look slow when the iPhone 17 lands this
03:10fall. Sundar Pichai hinted they are basically at the handshake stage. The idea is
03:15that Siri and all the fancy onboard models Apple's been teasing just aren't
03:20cooking fast enough so Gemini gives them a booster shot. Analysts figure Apple will
03:25revert to its own stack once those gaps close but for now you might actually get
03:30Google's large language mojo inside iOS the same way you already get Google Maps.
03:36It's a bit funny: Apple keeps bragging about privacy islands and running
03:40everything on device, yet here they are calling up Mountain View for reinforcements,
03:45because, well, competition. And for the retail side, think bigger baskets at
03:50checkout if your phone suddenly crafts shopping lists and augmented-reality
03:55product demos that don't lag. While Google and Apple trade high fives, OpenAI just
04:01ripped off its own corporate band-aid. Sam Altman wrote a letter to staff saying in
04:06effect: look, we tried flirting with the idea of a fully separate for-profit arm, but nah, the
04:12nonprofit stays in charge. You remember the November 2023 drama when Sam got booted for
04:20a weekend and everyone started sweating governance? That aftershock never really faded so Monday's
04:26statement locks in the nonprofit as the controlling shareholder of the public benefit corporation rather
04:32than spinning it out. Bret Taylor, who chairs the board, said they even worked with the attorneys
04:37general of Delaware and California to ensure everything stays aligned with OpenAI's original
04:42nonprofit mission just to avoid any accusations of straying off course. But Elon Musk predictably
04:50is still suing. He originally filed the lawsuit over OpenAI's plan to shift toward a
04:55for-profit model, and now that they've scrapped those plans entirely and doubled down on nonprofit
05:00control, he's still suing. He's clinging to a fight that no longer exists, and honestly it's
05:05starting to feel like a tantrum in slow motion. Sam Altman brushed it off basically saying we've got
05:11bigger things to deal with like scaling enough GPUs to meet global demand. Money still talks though
05:17and that's where OpenAI's second bombshell lands. According to leaked investor slides they're slicing
05:24Microsoft's revenue share. Under the current deal 20% of OpenAI's top line flows to Redmond through
05:312030 but OpenAI now says that drops to 10% by decade's end and it may shrink further if they hit
05:38certain volume tiers. Microsoft's cool with it publicly because they still want first dibs on the
05:43tech but you can feel the renegotiation tension simmering. Meanwhile OpenAI's trying to raise another
05:4940 billion at a 300 billion valuation, SoftBank-style, so they need that revenue margin any way
05:56they can carve it. Which brings us to the third headline. OpenAI is buying Windsurf. Yeah that's
06:02Codeium's rebrand, for about 3 billion bucks. Easily its biggest acquisition yet. Windsurf was last valued
06:09at 1.25 billion in August so that's a tasty markup. The tool's whole gimmick is real-time code
06:16completion plus a neat canvas view that lets you and the bot edit the same snippet side by side.
06:23By swallowing Windsurf, OpenAI beefs up ChatGPT's developer mode, competes head-on with GitHub
06:29Copilot, Anthropic's Claude-powered features, and Cursor's own IDE plugin. Right, remember, ChatGPT Pro
06:38already ships a code interpreter and a small-scale canvas collab space, but the Windsurf tech means
06:43broader language support and possibly a richer offline experience. OpenAI claims ChatGPT now has
06:50over 400 million weekly active users, up 100 million since December, so giving that crowd first-class
06:56coding toys matters if they want to monetize beyond the 9.99 subscription. Now let's jump to the fun
07:02stuff you'll actually see on screen. HeyGen just rolled out Avatar 4, and people are calling it the
07:07upload-one-selfie-and-watch-yourself-talk update. You literally feed it a single photo and a voice script,
07:12maybe a 10 second WAV file, and the new audio to expression engine maps your tone, rhythm, and pauses
07:18onto hyper real facial motion, real enough that early testers on Twitter or X, whatever, are dropping
07:26microfilms of themselves, their pets, even aliens, with lip sync that doesn't jitter. One reviewer said,
07:34no words, and posted a 30 frame clip that looks like a Hollywood ADR session. The bigger idea is
07:41that this isn't animation in the Pixar sense, it's direct expression transfer. If you hate being on
07:48camera, now you can send your avatar to present your quarterly slide deck while you sip tea off screen.
07:54Not to be outdone, Lightricks, those guys behind Facetune, just open-sourced LTX Video 13B,
08:01a 13 billion parameter video model that they claim runs on consumer GPUs. The original LTXV had only
08:08two billion parameters but made headlines last November for spitting out five second clips on a
08:13gaming laptop. The new release jumps in size yet somehow still flies thanks to an efficient
08:20Q8 kernel. It layers frames the way an artist starts with a pencil outline before dropping paint,
08:27a multi-scale rendering approach that lets you refine scenes step-by-step and speeds final rendering up
08:33to 30 times faster than similarly sized models. You can do camera motion curves, multi-shot sequencing,
08:41keyframe edits, and because it's open source, the weights sit on Hugging Face under a license that's free
08:47for orgs making under 10 million a year. Another key piece: Lightricks sourced its training set from
08:53Getty and Shutterstock, meaning you can ship the output commercially without sweating hidden copyright
08:59traps. For indie filmmakers or influencers on a budget, that's a huge yes, please. And the open-source party
09:05doesn't stop at video. ACE Studio just unveiled ACE-Step v1 3.5B, a music generation model that's 15 times
09:14faster than LLM-based approaches. Translation: it produces a four-minute track in about 20 seconds on an
09:21NVIDIA A100. And because it combines diffusion with a linear transformer conditioned by a deep compression
09:27autoencoder, it keeps melody, harmony, and rhythm coherent over the full length. On a desktop RTX 4090,
09:35the real-time factor jumps above 30, which is absurd, like GarageBand on rocket fuel. You can guide the
09:42structure with text, "give me a mellow lo-fi beat, chorus at 90 seconds, fade out strings at 3:30," and the model
09:49handles the timeline. They even list hardware benchmarks (A100, 4090, 3090, Mac M2 Max), so you
09:57know what to expect. It's Apache 2.0, meaning free for basically anything except disallowed uses like
10:04copyrighted track clones. There are caveats, generate past five minutes and the structure might drift,
10:10Chinese rap turns out wobbly, and vocals still sound kind of plastic, but for backing tracks or quick
10:16demos, it's borderline magical. Anyway, that's the whirlwind. Update your bookmarks, maybe clear
10:22out some SSD space for model weights, and keep your eyes on what drops at the actual Google I/O in a
10:28couple of weeks, because if Gemini's preview arrived early, you can bet the onstage demo will try to one
10:33up itself. And hey, if all this feels like drinking from a fire hose, welcome to 2025, the year creative
10:41tooling stopped waiting for humans to catch up. Thanks for watching, and I'll see you in the next one.
