This is a direct GGUF conversion of black-forest-labs/FLUX.2-dev.
The model files can be used in ComfyUI with the ComfyUI-GGUF custom node. Place the required model(s) in the following folders:
| Type | Name | Location | Download |
|---|---|---|---|
| Main Model | flux2-dev | `ComfyUI/models/diffusion_models` | GGUF (this repo) |
| Text Encoder | Mistral-Small-3.2-24B-Instruct-2506 | `ComfyUI/models/text_encoders` | Safetensors / GGUF (support TBA) |
| VAE | flux2 VAE | `ComfyUI/models/vae` | Safetensors |
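The folder layout above can be sketched as follows. This is a minimal illustration, assuming a ComfyUI install in the current directory; the commented download line uses an example quant filename, not a confirmed one from this repo.

```shell
# Create the model folders from the table above (no-op if they already exist).
mkdir -p ComfyUI/models/diffusion_models   # flux2-dev GGUF goes here
mkdir -p ComfyUI/models/text_encoders     # Mistral-Small-3.2-24B text encoder
mkdir -p ComfyUI/models/vae               # flux2 VAE

# Example download (filename is illustrative - pick the quant you want):
# huggingface-cli download city96/FLUX.2-dev-gguf flux2-dev-Q4_K_M.gguf \
#   --local-dir ComfyUI/models/diffusion_models
```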
## Example outputs

*Sample size of 1, not strictly representative.*
## Notes
- As with Qwen-Image, the Q5_K_M, Q4_K_M, Q3_K_M, Q3_S and Q2_K quants include some extra logic for deciding which blocks to keep in higher precision.
- This logic is based partly on guesswork and trial & error, and partly on the graph in the readme for Freepik/flux.1-lite-8B (which in turn cites a blog post by Ostris).
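The block-selection idea can be sketched as a simple rule over tensor names. This is an illustrative toy, not city96's actual logic; the name patterns and quant choices below are assumptions for demonstration only.

```python
# Toy sketch of mixed-precision quant selection (NOT the repo's real logic):
# sensitive blocks stay at a higher bit width, the rest use the base quant.
def pick_qtype(tensor_name: str, base: str = "Q4_K") -> str:
    # Hypothetical "sensitive" patterns: input/output projections and norms.
    high_precision_patterns = ("img_in", "txt_in", "final_layer", "norm")
    if any(p in tensor_name for p in high_precision_patterns):
        return "Q8_0"  # keep these in higher precision
    return base        # everything else gets the base quant

print(pick_qtype("img_in.weight"))                    # higher precision
print(pick_qtype("double_blocks.7.img_mlp.0.weight")) # base quant
```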
- As this is a quantized model and not a finetune, all the restrictions and original license terms of the base model still apply.