FLUX GGUF quantized files
The license of the quantized files follows the license of the original model:
- FLUX.1-schnell: apache-2.0
 
These files were converted using https://github.com/leejet/stable-diffusion.cpp
You can run FLUX with stable-diffusion.cpp on a GPU with as little as 6 GB, or even 4 GB, of VRAM; see https://github.com/leejet/stable-diffusion.cpp/blob/master/docs/flux.md for details, and the example command below.
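A minimal sketch of an invocation, following the flux.md guide linked above: the file names and paths are placeholders (pick any quantization from this repo), and the VAE and text encoders (ae, clip_l, t5xxl) are not part of this repository and must be downloaded separately. Flag names follow the stable-diffusion.cpp docs; verify against your build's `sd --help`.

```bash
# Example only: adjust paths to wherever you placed the weights.
./build/bin/sd \
  --diffusion-model models/flux1-schnell-q4_0.gguf \
  --vae models/ae.safetensors \
  --clip_l models/clip_l.safetensors \
  --t5xxl models/t5xxl_fp16.safetensors \
  -p "a lovely cat holding a sign that says 'flux.cpp'" \
  --cfg-scale 1.0 \
  --sampling-method euler \
  --steps 4 \
  -o flux-schnell.png \
  -v
```

Because FLUX.1-schnell is a guidance-distilled model, a cfg-scale of 1.0 and as few as 4 sampling steps are typically sufficient.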
Quantized files are provided at multiple precisions, including 2-bit, 4-bit, and 8-bit GGUF quantizations.
Base model: black-forest-labs/FLUX.1-schnell