QuantStack/Wan2.2_T2V_A14B_4steps_25-09-28_Dyno_High_lightx2v-GGUF
Pipeline: Text-to-Video · Format: GGUF · Tags: t2v · License: apache-2.0
README.md exists but content is empty.
Downloads last month: 4,804
Model size: 14B params
Architecture: wan
Format: GGUF
Quantized variants and file sizes:

2-bit
  Q2_K     5.31 GB
3-bit
  Q3_K_S   6.52 GB
  Q3_K_M   7.18 GB
4-bit
  Q4_0     8.57 GB
  Q4_K_S   8.76 GB
  Q4_1     9.27 GB
  Q4_K_M   9.66 GB
5-bit
  Q5_K_S   10.1 GB
  Q5_0     10.3 GB
  Q5_K_M   10.8 GB
  Q5_1     11 GB
6-bit
  Q6_K     12 GB
8-bit
  Q8_0     15.4 GB
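The sizes above, combined with the stated 14B parameter count, give a rough effective bits-per-weight for each variant. A minimal sketch of that arithmetic follows; it assumes decimal GB (1 GB = 1e9 bytes) and that the whole file is weight data, so the numbers are approximations, and the nominal bit label of each quant type understates the real storage cost (scales and mixed-precision tensors add overhead):

```python
# Approximate average bits stored per parameter for each GGUF variant,
# using the file sizes from the table above and the card's "14B params".
PARAMS = 14e9  # assumption: exactly 14 billion parameters

sizes_gb = {
    "Q2_K": 5.31, "Q3_K_S": 6.52, "Q3_K_M": 7.18,
    "Q4_0": 8.57, "Q4_K_S": 8.76, "Q4_1": 9.27, "Q4_K_M": 9.66,
    "Q5_K_S": 10.1, "Q5_0": 10.3, "Q5_K_M": 10.8, "Q5_1": 11.0,
    "Q6_K": 12.0, "Q8_0": 15.4,
}

def bits_per_weight(size_gb: float, params: float = PARAMS) -> float:
    """Convert a file size in decimal GB to average bits per parameter."""
    return size_gb * 1e9 * 8 / params

for name, gb in sizes_gb.items():
    print(f"{name:7s} ~{bits_per_weight(gb):.2f} bits/weight")
```

For example, Q2_K works out to about 3.0 bits/weight and Q8_0 to about 8.8, which is why even the "2-bit" file is over 5 GB.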
Inference Providers (Text-to-Video): this model isn't deployed by any Inference Provider.
Model tree for QuantStack/Wan2.2_T2V_A14B_4steps_25-09-28_Dyno_High_lightx2v-GGUF:
  Base model: Wan-AI/Wan2.2-I2V-A14B
  Finetuned: lightx2v/Wan2.2-Lightning
  Quantized (1): this model
Collection including QuantStack/Wan2.2_T2V_A14B_4steps_25-09-28_Dyno_High_lightx2v-GGUF:
  lightx2v - Wan2.2 (collection, 2 items, updated 27 days ago)