00:00 Alright, let's get the Qwen image model and its components set up in ComfyUI.
00:05 We'll start by grabbing the main model files.
00:08 Right now I'm on the ComfyUI examples page, but I'm going to navigate to Hugging Face.
00:14 First up, the GGUF model.
00:16 For 8GB VRAM, choose this option.
00:21 Press Download.
00:27 Now I'll click on the link for the text encoder model.
00:30 And we'll do the same for the VAE.
00:47 Okay, with all the models downloaded, let's get them organized into the correct ComfyUI folders.
00:52 I'll navigate into my ComfyUI folder, then Models, and then Diffusion Models.
01:02 Then I'll copy the GGUF file into the Diffusion Models folder.
01:11 I'll open the Text Encoders folder and drop the scaled safetensors file into it.
01:22 And finally, I'll go into the VAE folder and place the VAE safetensors file there.
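If you prefer to script this step, here's a small sketch of the same file placement in Python. The filenames in the dictionary are placeholders, not the exact names from Hugging Face; swap in whatever you actually downloaded, and point the two directory arguments at your own downloads and ComfyUI folders.

```python
import shutil
from pathlib import Path

# Hypothetical filenames -- replace with the files you actually downloaded.
# Maps each model file to its ComfyUI subfolder, per the steps above.
MODEL_DESTINATIONS = {
    "qwen-image.gguf": "models/diffusion_models",          # GGUF diffusion model
    "text_encoder_fp8_scaled.safetensors": "models/text_encoders",  # text encoder
    "qwen_image_vae.safetensors": "models/vae",            # VAE
}

def place_models(downloads_dir: Path, comfyui_dir: Path) -> list:
    """Copy each downloaded model file into its ComfyUI subfolder."""
    placed = []
    for filename, subfolder in MODEL_DESTINATIONS.items():
        src = downloads_dir / filename
        dest_dir = comfyui_dir / subfolder
        dest_dir.mkdir(parents=True, exist_ok=True)  # create folders if missing
        if src.exists():
            dest = dest_dir / filename
            shutil.copy2(src, dest)
            placed.append(dest)
    return placed
```

Dragging the files in the file manager, as shown in the video, does exactly the same thing.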
01:45 All the models are in place.
01:57 Now, before we do anything else, I'll open the ComfyUI Manager and click Update All to make sure ComfyUI itself is completely up to date.
02:06 I'll confirm the restart to apply the updates.
02:12 Now let's grab a sample workflow to get started.
02:26 I'm going back to the Hugging Face page, and I'll click on the Example Workflows link.
02:35 To download the workflow, I'll click Raw.
02:38 Then press Ctrl+S to save the JSON file.
02:41 Back in ComfyUI, I'll load that workflow by going to Workflow and then Open.
02:54 Select the workflow file.
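If you want to check what a downloaded workflow expects before opening it, a quick sketch like the one below can list its loader nodes. This assumes the exported "workflow" JSON format, which keeps nodes in a top-level `"nodes"` list with a `"type"` field per node; it's a convenience I'm adding, not part of the walkthrough.

```python
import json

def loader_node_types(workflow_path: str) -> list:
    """List the loader node types a ComfyUI workflow JSON references,
    so you can see which model slots you'll need to fill in."""
    with open(workflow_path) as f:
        wf = json.load(f)
    # Keep any node whose type mentions "Load" (e.g. loaders for UNet/VAE/CLIP).
    return [n["type"] for n in wf.get("nodes", []) if "Load" in n.get("type", "")]
```

Running it on the sample workflow should surface the three loaders we're about to configure.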
03:04 Okay, in the UNet Loader, we're going to choose the GGUF file we just downloaded.
03:11 For the Load VAE, we'll select the Qwen image VAE.
03:27 And for the Load CLIP, we'll select the FP8 scaled safetensors file.
03:37 It's time to test the workflow.
03:39 I'm pasting in a cyberpunk-themed prompt.
03:49 Now I'll click Run.
03:53 And there we have it.
03:54 The first image is generated successfully.
03:56 I'm going to enter another similar prompt.
04:10 While that's going on, let's check how much time it took to generate our last image.
04:17 A little over eight minutes.
04:19 Okay.
04:26 While we're at it: to refresh the front-end preview images for your templates in ComfyUI,
04:38 go into your main ComfyUI folder and run the update ComfyUI bat file.
04:45 This will open a terminal window and automatically pull the latest updates from the ComfyUI repository,
04:52 including front-end assets like template thumbnails,
04:55 in case you don't see the latest ones.
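Since the bat file, as described above, just pulls the latest commits from the ComfyUI repository, the equivalent step can be sketched in Python. This is my own rough equivalent, not the bat file's actual contents; by default it only returns the command it would run.

```python
import subprocess
from pathlib import Path

def update_comfyui(repo_dir: str, dry_run: bool = True):
    """Roughly what the update bat file does per the walkthrough above:
    pull the latest commits into the ComfyUI repository checkout."""
    cmd = ["git", "-C", str(Path(repo_dir)), "pull"]
    if dry_run:
        return cmd  # show the command without executing it
    return subprocess.run(cmd, capture_output=True, text=True)
```

Running the bundled bat file is the safer route on a portable install, since it uses the embedded Python environment.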
04:57 Back in ComfyUI with the update finished, a new image is ready.
05:03 A wolf in a neon-drenched city.
05:06 Looks great.
05:08 I was testing the CFG from 4.5 down to 3.7,
05:13 and found that at 3.7 it generated the images in under four minutes without any loss in quality.
05:19 So, for some reason my generations were all taking more than eight minutes,
05:37 and I decided to speed up the workflow.
05:39 I included Sage Attention and FP16 accumulation, along with a LoRA.
05:45 Sage Attention makes image generation faster and lighter on VRAM.
05:49 FP16 accumulation speeds things up by using lower-precision math.
05:56 You can check the description for the link to the video on how to install these.
06:00 Also, I'll include this workflow to make things easier.
06:03 The LoRA Loader applies a LoRA file to your base model and controls how strongly it affects the image.
06:09 Go back to the Hugging Face page, download the Qwen Image Lightning LoRA,
06:13 and place the file into the loras folder.
06:17 Then update ComfyUI again.
06:19 The Qwen Image Lightning took 4 minutes, 42 seconds to generate,
06:29 while the LightX2V took 3 minutes, 55 seconds.
06:35 These are while using the CFG at 3.7,
06:37 and the quality between them was still identical.
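As a quick sanity check on those two numbers, a few lines of Python (my own addition, not part of the video) convert the reported times to seconds and compute the relative speedup:

```python
def to_seconds(minutes: int, seconds: int) -> int:
    """Convert a minutes:seconds timing to total seconds."""
    return minutes * 60 + seconds

# Generation times reported above, both at CFG 3.7.
lightning_v10 = to_seconds(4, 42)  # Qwen Image Lightning: 4:42
lightx2v = to_seconds(3, 55)       # LightX2V: 3:55

# Relative speedup of LightX2V over the Lightning LoRA, as a percentage.
speedup_pct = (lightning_v10 - lightx2v) / lightning_v10 * 100
print(f"{speedup_pct:.1f}% faster")  # 16.7% faster
```

So at the same CFG, LightX2V shaves roughly a sixth off the generation time.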
06:42 I found that LightX2V released a new Qwen Image Lightning 1.1 LoRA model on their Hugging Face page.
06:51 Qwen Image Lightning 8 Steps V1.1 is faster and more stable than V1.0, with smoother textures and less fuzziness.
07:01 Compared to LightX2V, it keeps strong text clarity and realism.
07:07 From here, you'll press Download.
07:11 Click on ComfyUI, then Models, and place the file into the loras folder.
07:28 Go back to your ComfyUI.
07:46 Press Manager and Update All.
07:51 Now in the LoRA Loader model, you'll notice the Qwen Image Lightning 1.1 model.
08:18 Choose this option.
08:21 Now we'll press Run.
08:30 Wow, look at the difference here.
08:32 This took almost 8 minutes with the CFG at 3.7.
08:38 The results are absolutely stunning.
08:41 Every detail is razor sharp, and the image looks way cleaner and more polished than the 1.0 and LightX2V.
08:51 Let's run another similar prompt.
09:06 Just like the first, this second image came out flawless: crisp edges, vibrant colors, and every detail perfectly rendered.
09:16 I decided to adjust the CFG to 4 for this generation.
09:22 It took 7 minutes to generate.
09:24 Let's see how long it takes for the LightX2V to generate with the CFG at 4.
09:35 So, having the CFG at 4 using the LightX2V brought the generation time back to 5 and a half minutes.
09:44 I'm going to switch back to the 1.1 version, adjust the CFG to 4.5, and see the difference in results.
09:51 You can see it takes 5 and a half minutes to generate this way without losing any quality.
10:08 So, comparing the LightX2V with the Qwen Image Lightning 1.1, I would choose the Qwen Image Lightning 1.1 with the CFG at 4.5.
10:21 It brought the generation time to the same as the LightX2V while gaining a huge quality difference.
10:38 For more tutorials like this, smash that subscribe button and stay tuned.