Just released: Wan 2.2 — a powerful open-source image-to-video AI model now running smoothly in ComfyUI, even on setups with just 8GB of VRAM 💻

In this step-by-step tutorial, I walk you through the complete installation and execution process, including:
• Downloading and setting up Wan 2.2 GGUF models (5B & 14B)
• Installing the LoRA, VAE, and UMT5 text encoder
• Running official Hugging Face workflows
• Generating your first cinematic AI animation from a single image
• BONUS: Using Gemini CLI to fix a Python dependency automatically

📹 This guide is beginner-friendly, fully visual, and open-source.
Whether you're an AI artist, developer, or tech enthusiast, you'll be able to recreate the results on your own setup.

🎬 Watch the full setup + test results on YouTube here:
🔗 https://youtu.be/7hUO6KhUsvQ

#Wan22 #ComfyUI #AItools #ImageToVideo #AIVideo #GenerativeAI #AIworkflow #OpenSourceAI #AIArt #Tutorial
Transcript
00:00Alright, let's get WAN 2.2 fully installed in ComfyUI, including all the necessary models and dependencies.
00:08First, we're on the Hugging Face page for a GGUF version of the WAN 2.2 image-to-video model.
00:15I'm scrolling through the files here.
00:17For this setup, I'm going to download two of the 14-billion-parameter quantized models.
00:22I'll click the download icon for the WAN 2.2 image-to-video high-noise GGUF file first.
00:31And now I'll grab the low-noise file as well.
00:53Now that those are downloaded, let's move them to the right place.
00:57I have my Downloads folder open, and my main ComfyUI Portable Installation folder open in another window.
01:10For these specific image-to-video models to be recognized, they need to go into the ComfyUI models/unet folder.
01:22So, I'll just select both files from my Downloads and drag them right into the unet folder.
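For anyone working from a terminal instead of drag-and-drop, the moves above can be sketched roughly like this. The install path and the GGUF filename pattern are assumptions; match them to your own portable install and the exact files you downloaded:

```shell
# Run from your Downloads folder. COMFY is an assumed portable-install path --
# adjust it to wherever your ComfyUI actually lives.
COMFY="$HOME/ComfyUI_windows_portable/ComfyUI"
mkdir -p "$COMFY/models/unet"

# Hypothetical filename pattern for the high-noise and low-noise 14B GGUF files;
# the loop only moves files that actually exist.
for f in wan2.2_i2v_*noise_14B*.gguf; do
  if [ -e "$f" ]; then mv "$f" "$COMFY/models/unet/"; fi
done
```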
01:44Next up, many of these advanced workflows use a LoRA for extra control.
01:49I'm on the page for the LightX2V image-to-video LoRA, and I'm going to download the rank-32 safetensors file.
02:02Okay, the LoRA is downloaded.
02:05This one goes into the loras folder.
02:19And we'll move that LightX2V safetensors file from Downloads into the loras folder.
02:26A full WAN 2.2 workflow requires a few more key components.
02:42We'll start with the text encoders.
02:44I'll follow their link to the repackaged models on Hugging Face.
02:48This page has the text encoders we need.
02:50I'll download the UMT5 text encoder safetensors file.
03:20Okay, with the text encoders downloaded, it's time to move them.
03:25These files belong in the text encoders directory.
03:28I'll navigate to that folder now.
03:30Now I'll drag the safetensors file and drop it into the text encoders folder.
03:43The last essential piece we're missing is the VAE.
03:56Back on the GitHub page, click the link for the WAN 2.1 VAE safetensors model.
04:01I already have it downloaded, but if you don't, download it through the link.
04:05This file goes into the VAE folder.
04:13Okay, all our models should be in place.
04:28I've also downloaded the WAN 2.2 VAE safetensors file.
04:32This file goes into the VAE folder.
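Taken together, the placements so far map onto four model subfolders. A small sketch, assuming the same portable-install root as before (the path is an assumption; the folder names match the steps above):

```shell
# Expected layout after all the downloads. One folder per component:
COMFY="$HOME/ComfyUI_windows_portable/ComfyUI"
mkdir -p "$COMFY/models/unet"           # WAN 2.2 high-noise + low-noise GGUF files
mkdir -p "$COMFY/models/loras"          # LightX2V image-to-video LoRA
mkdir -p "$COMFY/models/text_encoders"  # UMT5 text encoder safetensors
mkdir -p "$COMFY/models/vae"            # WAN 2.1 / WAN 2.2 VAE safetensors
```

If a model doesn't show up in a node's dropdown later, a file sitting in the wrong one of these folders is the usual culprit.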
04:43Now let's grab a sample workflow to test everything.
05:13Now, inside ComfyUI, the first thing I'll do is open the Manager.
05:24I'm going to click Update All to make sure all the custom nodes are current.
05:32It will prompt for a restart, which I'll confirm.
05:43Let the browser reload.
05:56Okay, with a fresh, updated interface, I'll click Open, then open the workflow file we downloaded.
06:13If you don't have SageAttention installed, you will get a message like this.
06:27This means we're missing a Python dependency.
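Before handing the error to a CLI, you can confirm which module is missing with a one-liner. PYTHON here is a placeholder for whichever interpreter your ComfyUI actually uses; the portable build ships its own embedded Python, so point it there rather than at a system install:

```shell
# Probe for the sageattention module using ComfyUI's Python interpreter.
PYTHON=python3   # placeholder -- the portable build uses its own python_embeded interpreter
if "$PYTHON" -c "import sageattention" 2>/dev/null; then
  SAGE_STATUS=installed
else
  SAGE_STATUS=missing
fi
echo "sageattention: $SAGE_STATUS"
```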
06:31I'm going to bring up my Gemini CLI.
06:34I've given it the error message and access to my Comfy UI folder.
06:38It's immediately identified the missing SageAttention module.
06:42Gemini CLI is smart enough to search for a better solution.
06:46It's found an installation helper script on GitHub.
06:49I'll authorize it to download and run the script.
06:53It automatically downloads and installs all the correct versions of the dependencies like Triton and Torch.
07:08Make sure to copy and paste this link when needed for the CLI.
07:20I've also added all the links in the description for easier access to all the necessary files you'll need for this workflow.
07:38After the install, it found an error related to FP16 accumulation and diagnosed a PyTorch version issue.
07:52If you get the same message, just write it to the CLI.
07:55It's now fixing it by uninstalling the conflicting version and installing the correct nightly build of PyTorch 2.7.0.
08:04This saves a ton of manual troubleshooting.
08:06All error messages should be resolved.
08:09Restart ComfyUI one more time and everything should be good.
08:21I'm loading a different workflow now, one that uses an image of a wolf as the input.
08:26I'll then prompt for the wolf to stand on a cliff with dark clouds and lightning.
08:36A perfect animation of our wolf standing on a cliff with dark clouds and lightning.
08:43These generations take about four minutes each, which is pretty fast.
08:57Just to show another example, here's a workflow ready to go with a seal wearing sunglasses.
09:05The possibilities are endless now that the setup is complete.
09:11Look at the high quality of the video.
09:14Hairs and whiskers moving.
09:16Reflection on the glasses.
09:18Clothes flapping.
09:19This looks really good.
09:20And there you have it.
09:23A complete installation and successfully generated videos.
09:27I really hope this tutorial was helpful for you.
09:30If you enjoyed the video, please like and share.
09:32And be sure to subscribe to my channel for more tutorials just like this one.
09:36Thanks for watching.