Create AI Videos with WAN AI
Open Source Leader

Experience WAN 2.2 with revolutionary MoE architecture and 27B parameters. Generate 720p@24fps videos on consumer GPUs with SOTA performance in both text-to-video and image-to-video generation.

Describe your vision - WAN 2.2 will create cinematic videos with SOTA quality

30M+
Videos Generated
2M+
Happy Creators
4.9/5
User Rating

Trusted by Professionals and Creators from leading brands and companies

Join thousands of creators and professionals who trust our AI video generation platform

Endless Creative Possibilities

Explore videos created by our community across different styles and categories

Advanced WAN AI Features for Developers & Creators

Everything you need for professional video creation with WAN 2.2's open-source MoE architecture and consumer GPU compatibility

Introducing WAN 2.2 MoE

WAN 2.2 debuts the first open-source Mixture-of-Experts architecture for video diffusion models. With 27B total parameters but only 14B active per step, it delivers enhanced quality without added inference cost.
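As a rough illustration of the dual-expert idea (the function names, placeholder math, and the 0.5 boundary below are ours for illustration, not WAN 2.2's actual code), each denoising step routes the latent to exactly one expert based on the current noise level, so the per-step compute matches a single expert even though the model holds both:

```python
# Minimal sketch of a two-expert MoE denoiser selected by noise level.
# All names, the placeholder arithmetic, and the 0.5 boundary are
# illustrative assumptions, not WAN 2.2's real implementation.

def high_noise_expert(latent, sigma):
    """Expert used on early, high-noise steps: lays out global structure."""
    return [x * 0.9 for x in latent]  # placeholder denoising update

def low_noise_expert(latent, sigma):
    """Expert used on late, low-noise steps: refines fine detail."""
    return [x * 0.99 for x in latent]  # placeholder denoising update

def moe_denoise_step(latent, sigma, boundary=0.5):
    """Route each step to one expert: the model stores both experts
    (total parameters), but only one runs per step (active parameters).
    This is the sense in which 27B total params activate only 14B."""
    expert = high_noise_expert if sigma >= boundary else low_noise_expert
    return expert(latent, sigma)
```

The design choice to split by noise level mirrors the diffusion schedule itself: early steps decide layout and motion, late steps decide texture, so specializing one expert for each regime adds capacity without adding per-step cost.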

SOTA Performance & Consumer GPU Support

WAN 2.2 achieves state-of-the-art performance, ranking in the Top 3 on T2V and I2V leaderboards. Generate 720p@24fps videos on consumer GPUs like the RTX 4090, with only 8.19 GB of VRAM required.

Multi-Language Text Generation

WAN 2.1 is the first video model capable of generating both Chinese and English text. Features advanced Wan-VAE for encoding 1080P videos of any length while preserving temporal information.

Ready to experience open-source SOTA video generation?

Get Started with WAN

How to generate videos with WAN AI

Creating professional videos with WAN 2.2's open-source MoE architecture has never been more accessible. Experience SOTA quality on consumer hardware.

01

Choose Your Generation Mode

Select from text-to-video (T2V), image-to-video (I2V), or hybrid text-image-to-video (TI2V) modes. WAN 2.2 supports all generation types with its unified MoE architecture and advanced VAE system.

Text-to-Video (T2V)
Image-to-Video (I2V)
Hybrid TI2V mode
Multi-language text support
02

Configure & Generate

Set your WAN 2.2 preferences for resolution (480P-720P) and duration. The dual-expert MoE system uses high-noise for structure and low-noise for details, delivering movie-quality visuals with rich textures.

27B parameter MoE architecture
480P-720P resolution support
Movie-quality visuals
Enhanced training dataset
03

Download & Integrate

Your WAN 2.2 generated video is ready! Download in high quality or integrate using the open-source codebase. Access through ComfyUI, Diffusers, Hugging Face, and ModelScope for seamless workflows.

Open-source integration
ComfyUI & Diffusers support
Multiple platform access
Professional quality output
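For the Diffusers integration path, a text-to-video call looks roughly like the sketch below. The pipeline class and model id are assumptions based on the Wan-AI Hugging Face checkpoints and should be verified against the official model card before use:

```python
import torch
from diffusers import WanPipeline
from diffusers.utils import export_to_video

# Model id assumed from the Wan-AI Hugging Face org; check the model card.
pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.2-T2V-A14B-Diffusers", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# 81 frames at 24 fps is roughly a 3-second clip; adjust to taste.
frames = pipe(
    prompt="A cat walks on the grass, cinematic lighting",
    height=720,
    width=1280,
    num_frames=81,
).frames[0]

export_to_video(frames, "output.mp4", fps=24)
```

ComfyUI users get the same models through prebuilt workflow nodes instead of Python code; the Diffusers route is the natural fit when WAN needs to slot into an existing scripted pipeline.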

Ready to create your first WAN 2.2 video?

Join the open-source community using WAN's SOTA video generation technology with consumer GPU accessibility.

Start Creating with WAN
TESTIMONIALS

What Creators Say

Trusted by filmmakers, creators, and marketing teams worldwide

  • 5.0

    Crevas completely transformed my filmmaking workflow. I turned a rough script into a full shot list and generated cinematic-quality video in just one day. What used to take a production team a week, I can now achieve on my own.

Jack Smith, Independent Filmmaker

  • 4.9

    As a creative director, my biggest challenge was producing consistent style videos at scale. With Crevas’ multi-model integration and one-click export, we produced an entire season of campaign videos 5x faster than our usual workflow.

David Johnson, Creative Director

  • 5.0

    Our studio delivered its first AI-driven film project using Crevas. The built-in collaboration and version control made remote teamwork seamless, and our client was impressed by both the speed and the cinematic quality of the output.

Kevin Brown, Film Producer

Frequently Asked Questions

Everything you need to know about WAN 2.2's open-source MoE architecture and SOTA video generation capabilities

What is the Mixture-of-Experts architecture in WAN 2.2?

WAN 2.2 debuts the first open-source Mixture-of-Experts (MoE) architecture for video diffusion models. Its dual-expert system uses high-noise experts for initial structure and low-noise experts for refined details, with 27B total parameters but only 14B activated per step for enhanced efficiency.

How does WAN 2.2 perform compared to other models?

WAN 2.2 achieves SOTA (state-of-the-art) performance, ranking in the Top 3 on both the T2V and I2V leaderboards at Artificial Analysis. It consistently outperforms existing open-source models and matches or exceeds state-of-the-art commercial solutions.

Can I run WAN on a consumer GPU?

Yes! WAN is designed for consumer GPU accessibility. The T2V-1.3B model supports video generation on almost all consumer-grade GPUs, requiring only 8.19 GB of VRAM to produce a 5-second 480P video. WAN 2.2 can generate 720p@24fps videos on an RTX 4090.

Does WAN support multilingual text in videos?

WAN 2.1 is the first video model capable of generating both Chinese and English text within videos. It features the advanced Wan-VAE, which can encode and decode 1080P videos of any length while preserving temporal information, setting it apart from other models.

Is WAN open source, and how do I integrate it?

WAN is fully open source, with integration support for ComfyUI, Diffusers, Hugging Face, and ModelScope. The inference code and model weights are available on GitHub, making it easy to integrate into existing creative and development workflows.

What improvements does WAN 2.2's training data bring?

WAN 2.2 features a significantly expanded training dataset, with 65.6% more images and 83.2% more videos than previous versions. This enhanced training improves motion quality, semantic understanding, and visual fidelity while maintaining efficient generation speeds.