This is a direct GGUF conversion of black-forest-labs/FLUX.2-dev.

The model files can be used in ComfyUI with the ComfyUI-GGUF custom node. Place the required model(s) in the following folders:

| Type | Name | Location | Download |
|------|------|----------|----------|
| Main Model | flux2-dev | `ComfyUI/models/diffusion_models` | GGUF (this repo) |
| Text Encoder | Mistral-Small-3.2-24B-Instruct-2506 | `ComfyUI/models/text_encoders` | Safetensors / GGUF (support TBA) |
| VAE | flux2 VAE | `ComfyUI/models/vae` | Safetensors |
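If you prefer to script the download, a minimal sketch using `huggingface_hub` is below. The exact `.gguf` filename is an assumption; check the file listing in this repo and pick the quant that fits your hardware.

```python
# Minimal sketch: fetch a quant from this repo into the ComfyUI folder.
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="city96/FLUX.2-dev-gguf",
    filename="flux2-dev-Q4_K_M.gguf",  # assumed name, substitute your quant
    local_dir="ComfyUI/models/diffusion_models",
)
```

The text encoder and VAE can be fetched the same way into their respective folders from the table above.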

Example outputs (sample size of 1, not strictly representative):

*(sample image)*

## Notes

As with the Qwen-Image quants, the Q5_K_M, Q4_K_M, Q3_K_M, Q3_K_S and Q2_K files use some extra logic to decide which blocks to keep in higher precision.

This logic is based partly on guesswork, partly on trial and error, and partly on the graph found in the readme for Freepik/flux.1-lite-8B (which in turn cites a blog post by Ostris). A rough sketch of the idea is shown below.
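As an illustration only, per-tensor quant selection might look something like the following. The block count, tensor name patterns, and the `pick_quant` helper are all hypothetical and do not reflect the exact rules used for these files:

```python
import re

# Hypothetical sketch of mixed-precision quant selection.
# Keeps non-block tensors and the first/last few transformer blocks
# at a higher-precision type, and quantizes the middle blocks with
# the base type. Patterns and ranges here are illustrative guesses.

NUM_BLOCKS = 48  # assumed block count, for illustration only

def pick_quant(tensor_name: str, base: str = "Q4_K", high: str = "Q8_0") -> str:
    # Non-block tensors (embeddings, final layers) stay in high precision.
    m = re.match(r".*blocks\.(\d+)\.", tensor_name)
    if m is None:
        return high
    idx = int(m.group(1))
    # First and last few blocks stay in high precision; the rest get the base type.
    if idx < 2 or idx >= NUM_BLOCKS - 2:
        return high
    return base

print(pick_quant("model.blocks.0.attn.qkv.weight"))   # Q8_0
print(pick_quant("model.blocks.24.attn.qkv.weight"))  # Q4_K
```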

As this is a quantized version of the model and not a finetune, all of the original license terms and restrictions still apply.
