Quantized GGUF versions of the Z-Image Turbo model by Tongyi-MAI, for use with stable-diffusion.cpp.
These are legacy, experimental quants. See https://huggingface.co/leejet/Z-Image-Turbo-GGUF for the official stable-diffusion.cpp quants.
For k-quants to use with ComfyUI, see https://huggingface.co/jayn7/Z-Image-Turbo-GGUF.
You can recreate these locally from the original files with something like:

```shell
./sd --mode convert --type q5_1 --model z_image_turbo_bf16.safetensors --output z_image_turbo-Q5_1.gguf
```
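If you want several quant levels at once, the same conversion can be wrapped in a loop. A minimal sketch, assuming the `./sd` binary from stable-diffusion.cpp and the original bf16 safetensors file are in the current directory; the list of quant types is illustrative:

```shell
#!/usr/bin/env bash
set -e

# Batch-convert the bf16 safetensors to several GGUF quant types.
# ${q^^} uppercases the type name to match the Q5_1-style filenames.
for q in q4_0 q5_1 q8_0; do
  ./sd --mode convert --type "$q" \
    --model z_image_turbo_bf16.safetensors \
    --output "z_image_turbo-${q^^}.gguf"
done
```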
## Model Information
See the original Z-Image Turbo model card.
## Usage
You need stable-diffusion.cpp release master-385-34a6fd4 or newer.
Example command:

```shell
./sd --diffusion-model z_image_turbo-Q5_1.gguf --vae ae-f16.gguf --llm qwen_3_4b-Q8_0.gguf --cfg-scale 1 --steps 8 -p "an apple"
```
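To compare outputs across runs, the same invocation can be repeated with different seeds. A sketch, assuming the files from the example above; `--seed` and `-o` are standard stable-diffusion.cpp options, and the prompt and filenames are placeholders:

```shell
#!/usr/bin/env bash
set -e

# Generate a few variants of the same prompt, one image per seed.
for seed in 1 2 3; do
  ./sd --diffusion-model z_image_turbo-Q5_1.gguf \
    --vae ae-f16.gguf --llm qwen_3_4b-Q8_0.gguf \
    --cfg-scale 1 --steps 8 --seed "$seed" \
    -p "an apple" -o "apple_${seed}.png"
done
```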
## Credits
- Original Model: Z-Image Turbo by Tongyi-MAI
- Safetensors files: ComfyUI
- Quantized with stable-diffusion.cpp
## License
Apache 2.0, same as the original Z-Image Turbo model.