In this episode, we dive deep into the world of AI image generation to explore a pressing question: will Nano Banana be overtaken? 🚀 Join us as we analyze the latest trends, key capabilities, and hands-on results surrounding OpenAI's new GPT-Image 2!

With AI image models constantly evolving, it's essential to stay ahead of the game. We'll put the newest contender through its paces and give you insights on how to navigate this fast-moving landscape! 📈

Don't miss out on the future of AI content creation!

Registration Link: [https://Tava.short.gy/aibanana]


Smash that like button if you enjoyed the video, subscribe for more insightful content, and don't forget to drop a comment with your thoughts on Nano Banana vs. competitors! 🔥💬

#NanoBanana
#GPTImage2
#OpenAI
#AIImageGeneration
#GenerativeAI
#AIart
#ContentCreation
Transcript
00:00Hello friends, so it is that time of the year again. OpenAI has blessed us with a new model,
00:05more specifically with a new image model called GPT-Image 2.0, which is the successor
00:11to GPT-Image 1.5. You may still remember it from I think over a year ago when I believe
00:17the first
00:18version of GPT-Image, i.e. 1.0, came out and led to this whole Studio Ghibli craze. Over time,
00:26Google caught up with the release of Nano Banana, I believe it was in
00:31August of last year, and has since released other models such as Nano Banana Pro and, most
00:38recently, Nano Banana 2, and these have basically been the de facto leaders in the AI image
00:45generation space since Nano Banana launched back in August. But with the release
00:51of GPT-Image 2, I think we now have a viable competitor, finally. And what makes GPT-Image
00:582 that powerful and that much of a viable competitor? Well, first of all, its text rendering capabilities
01:04are second to none. You can now create detailed infographics, you can create scribbles and
01:11everything. You'll see it in some of the generations that I'll show you. The text is insanely good.
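To make the workflow concrete, here is a rough sketch of how generations like these map onto OpenAI's Images API. This is an assumption-laden illustration, not the video's own code: the model identifier "gpt-image-2" is simply the name used in the video and may not match the real API string, while the size and quality values mirror the documented gpt-image-1 API.

```python
# Hypothetical sketch: mapping the settings discussed in the video
# (aspect ratio, quality) onto OpenAI Images API parameters.
# "gpt-image-2" is the name used in the video and may not be the real
# API identifier; sizes/qualities mirror the documented gpt-image-1 API.

SIZES = {
    "1:1": "1024x1024",
    "16:9": "1536x1024",   # closest landscape size the API offers
    "9:16": "1024x1536",   # closest portrait size (selfie/UGC shots)
}

def build_image_request(prompt, aspect="1:1", quality="medium"):
    """Build keyword arguments for client.images.generate(**...)."""
    if aspect not in SIZES:
        raise ValueError(f"unsupported aspect ratio: {aspect}")
    if quality not in ("low", "medium", "high"):
        raise ValueError(f"unsupported quality: {quality}")
    return {
        "model": "gpt-image-2",  # assumed identifier, see note above
        "prompt": prompt,
        "size": SIZES[aspect],
        "quality": quality,
    }

# Actual call (requires the openai package and OPENAI_API_KEY):
# from openai import OpenAI
# client = OpenAI()
# img = client.images.generate(**build_image_request(
#     "A detailed infographic about AI image models", aspect="16:9"))
```

The helper only assembles parameters, so you can reuse it for the 9:16 UGC shots and 16:9 infographics that come up later in the video.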
01:17What I sometimes don't like about Nano Banana is that whenever they create text, it's almost
01:21always the same font. With GPT-Image 2, the font variability is actually much, much better
01:27as well. The next thing that it's really, really good at, and that's really powerful, is its world-level
01:32understanding. Meaning, if you, for example, give it an image and say, okay, have this specific
01:37book cover or create a YouTube thumbnail, et cetera, et cetera, it will know about a specific
01:42thing that you tell it and then use that image and insert that image into the image that it
01:47generates. And then finally, the hyper-realism, i.e. the generation of people, is also really,
01:54really good. Now, in the previous generations, i.e. 1.0 and 1.5, the people that you generated
02:00looked very comic-like and weren't really usable for UGC-type content, especially then
02:06if you use that as a start frame for a video. So, without further ado, I'll stop talking and let's
02:10actually start and use the model and I'll show you a bunch of things and a bunch of prompts
02:14that I've played around with. So, the first one would be something, again, to show you
02:19the world-level understanding, which is a screenshot of the YouTube landing page with regards to
02:25showing marketing-related content. So, while this generates, let's do another one and let's,
02:32coming back to what I was saying around the generations, meaning the hyper-realism of
02:40people. Now, let's do this. And this would be an image of a UGC influencer, Asian mid-20s,
02:45showing off a facial cream called Eternal Youth, and it's taken in an apartment. Now,
02:48the one thing that I want to add is a 9:16 aspect, sorry, aspect ratio, because again, she takes
02:55a selfie, so we want 9:16 to be enabled. Now, while these load, the next thing that I want
03:00to show you is the other capability that it now has, because what OpenAI also added is tool calling.
03:09And what is tool calling? Well, what tool calling allows you to do is that, for example,
03:12you can now insert links and it will scrape those links, scrape the content of those pages
03:17and create the image graphics around it. So, what I want to do is, I want to ask it,
03:21please create a 16:9 infographic about the books Patrick Collison recommends. For those of you
03:33who don't know, Patrick Collison is the founder of Stripe. List the 20 best business books from the
03:41page. All right, and let's fire this off. Yeah, we said 16:9. So, let's just come back real
03:46quick
03:46to our generation. So, let's see. Here we have Neil Patel. That looks pretty damn good. Think Media
03:51is a very good marketing channel. Gary Vee. Nice, nice, nice. I like it quite a bit. Now, the one
03:58thing
03:58that you can maybe see, I mean, it depends on how much you're in the marketing game, but this is
04:03actually not Neil Patel. I mean, it kind of looks like Neil Patel, but it isn't really Neil Patel.
04:07And so, that is one of the things that OpenAI's GPT-Image 2, as well as Nano Banana, etc.,
04:15are still pretty bad at, which is character resemblance, meaning you give it an image of,
04:19let's say, a person, or you tell it, create an image of Sam Altman, etc., etc., and it's not really
04:23good at extrapolating from that image or from that knowledge and then creating other images of that
04:28particular person. But I do have a solution for you, and that is the quick self-plug of this video,
04:33because we, at our startup GenViral, are a content generation, as well as a content publishing tool,
04:41meaning you can create content and then publish it to various social media accounts.
04:44And what we have solved is basically consistent character generation. So, as you can see here,
04:48that's actually me. Hopefully, you can see it in the video. And what you can do is,
04:52when you go here into AI Studio, you can go into Use Cases and click on Consistent Characters,
04:56and you can select yourself. I'm not sure why this is currently happening. Just give me a quick
05:01second and refresh the page. And, yeah, there you go. Okay, so, basically, what you could do is,
05:06you can create an image of yourself. So, this is what I've basically done before. I told it,
05:10hey, please create an image of myself, just to show you here. I can use it as concept. Please create
05:15a picture of me, Victor, looking straight into the camera with a plain white background, etc., etc.,
05:18and then I selected me as a person. Now, what I want to show you is, just to, again, quickly
05:23refresh
05:24the page. I want to show you how you can then use these types of images with GPT-Image 2. So,
05:29now I've selected GPT-Image 2, 16:9. I want to select Quality, Medium. Yep, that's fine. And I will pick
05:34this image that I've generated of myself. So, let's pick this one here. And now I can say, okay,
05:39please create a thumbnail with the attached person in the center about the launch of GPT-Image 2. Make it like
05:54a very engaging YouTube thumbnail and include OpenAI-related things such as their logo. Okay,
06:07so let's see. Obviously, again, the prompt would need some refining, and how well it works really,
06:11really depends on the prompt and everything that you give it. But let's see what it comes up
06:16with. Now, coming back to our ChatGPT interface, we can see here, again, this is
06:25absolutely ridiculous. Like, really, really powerful because, again, these types of images are then
06:30being used if you are, for example, in the marketing space like me, if you want to promote your own
06:34apps
06:35and everything, these very same images will be used as a start and sometimes end frame for,
06:41for example, Seedance 2.0. And the better your images are, the better, again, your start and end frames are,
06:46the better the video generations are that you get. And so, as a result, this would be an amazing
06:51start frame for that particular AI influencer to then plug this product called Eternal Youth. Now,
06:58let's come back to the Patrick Collison example. And as we can see, Good to Great. I think that's,
07:03yeah, Crossing the Chasm, I've read that. That is definitely the cover. Let's see,
07:07Build, too, yeah, The Lean Startup. I mean, again, world-level understanding is second to none.
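The attach-a-reference-image workflow demonstrated earlier (selecting a generated picture of yourself and asking for a thumbnail with "the attached person") corresponds, on the API side, to the images edit endpoint. A minimal sketch, assuming the gpt-image-1-style edit call carries over to the new model; the model name, file path, and the prompt-builder helper here are all placeholders of mine, not anything from the video:

```python
# Hypothetical sketch of the reference-image thumbnail workflow.
# The model name "gpt-image-2" and the file "reference.png" are
# placeholders; the call shape follows the documented images.edit API.

def thumbnail_prompt(person, topic):
    """Assemble a prompt like the one dictated in the video."""
    return (
        f"Create an engaging YouTube thumbnail with the attached "
        f"person ({person}) in the center, about {topic}. "
        f"Include OpenAI-related elements such as their logo."
    )

# Actual call (requires the openai package and OPENAI_API_KEY):
# from openai import OpenAI
# client = OpenAI()
# result = client.images.edit(
#     model="gpt-image-2",                 # assumed identifier
#     image=open("reference.png", "rb"),   # your consistent-character shot
#     prompt=thumbnail_prompt("Victor", "the launch of GPT-Image 2"),
# )
```

Passing your own reference image this way is what sidesteps the character-resemblance weakness discussed above: the model composites the attached face rather than trying to recall a person from training data.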
07:13It knows from its training data, I'm honestly not even sure where exactly they get it from,
07:19but presumably the covers are, again, already in place from the training data.
07:24Now, what you can see is it's not gonna be always super perfect. So, as you can maybe see in
07:29the video,
07:30How to Win Friends and Influence People, it butchered this one here. So, it will also ultimately
07:35depend on how much text you generate in a given image. The more text you generate,
07:40and the more fine-grained that text is, the more it will struggle to keep everything readable.
07:45By the way, again, just coming back to our product. Let's see, the GPT-Image 2 thumbnail. All right,
07:52that is actually, okay, that's actually not too bad. I might as well use that for this video. So,
07:58let me know what you, if you like it and how you like it. Now, just real quick, by the
08:03way,
08:04you can also, in general, upscale images. So, you can either select 4K resolution, high quality here,
08:10or you can take this image and, as I normally do, upscale it so that it looks
08:13super,
08:14super crisp and can be used for, again, all kinds of different purposes, whether it is for product
08:20photography, et cetera, et cetera. So, let's just do one or two more things of the kind of like tools
08:26and
08:26everything that I want to show you and the capabilities. The last thing I want to do is,
08:30we have this app called PantryEye. This is our own cooking app, as you can see here. And basically,
08:36what I want to do is, let's again open a new chat, upload this link, and we are going to
08:41add a bunch
08:42of photos. And these photos are App Store screenshots from other apps that I like. And so,
08:47what I want to do is now, I want to ask it, please generate an app store screenshot preview in
08:56the
08:57attached style for our app PantryEye. Let's do a 16:9 aspect ratio and fire this off. Now, again,
09:09ultimately, a lot of the times when you use these types of tools, what's really, really important is
09:14that you give it references. Meaning, in this case, I gave it these App Store screenshots, because
09:20again, I wanted to emulate a certain style. Now, in this case, it might not really know exactly what
09:24I mean, because all of these images are really different in terms of their coloring and everything,
09:29their styling. So, in your case, if you want to do something a little bit more detailed and on the
09:34nose,
09:35so to say, you also need to do things such as, for example, attach the logo, maybe you add like
09:40just straight up screenshots from the app. It really, really is dependent on your specific use
09:46case. But as a general rule of thumb, you really want to make sure that you give the AI as
09:51much
09:51context as you need. One more thing that I actually forgot to mention, and I really want to point out,
09:56I'm not sure if OpenAI is going to change it, but you may or may
10:00not notice it at first,
10:01because there's actually nothing to see: there's no watermark here, which I found
10:04very, very surprising. So, what Gemini always does is that they add this like star watermark here in
10:09the bottom, but with OpenAI, for whatever reason, they opted against including such a thing. And so,
10:13you could theoretically, and probably that's what a lot of people do, you get unlimited generations
10:17on the pro plan like I'm on, and then you can use these images for, again, whatever other tooling
10:21it is that you use. So, let's go back and let's see on PantryEye, Featured and Fast Company. Okay,
10:25it's not too bad. Again, the reason being why these are kind of okay, but not really that good,
10:31to be honest, is because we didn't give it that much context. So, it obviously doesn't know
10:35what our app really looks like, and that was really the point that I tried to drive home,
10:39which is that yes, it can scrape links, it has good world-level understanding, but you do, with
10:45regards to any images that you generate, you really need to give it the appropriate context. And so,
10:50with that being said, again, it's a huge step change. I'm super happy that we finally have some
10:54competition to Google's Nano Banana 2, because that will then motivate Google to come up with
10:58something probably even better, and overall, obviously, inspire both companies and many,
11:02many others to follow suit. And so, with that being said, guys, thank you very much for watching this
11:07video. If you have any questions about any of the workflows that I have, also about our tool GenViral,
11:13please feel free to ask those in the comment section, and otherwise, I'll see you in the next video. Peace!