Disclaimer: This model has only adapted the thinking/responding style of GLM 4.6; no knowledge transfer took place. Also, do not expect a 4B model to produce results comparable to the original, which has 357B effective parameters.

Please use a temperature of 0.6 or lower to avoid repetitions.
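As a sketch of how the temperature recommendation above might be applied, here is a llama.cpp invocation; the quantized filename is an assumption and should be replaced with the actual GGUF file you downloaded:

```shell
# Run the model with llama.cpp, capping temperature at 0.6 as recommended.
# The model filename below is a placeholder assumption, not the exact file name.
llama-cli \
  -m ./Qwen3-4B-Thinking-2507-GLM-4.6-Distill-Q4_K_M.gguf \
  --temp 0.6 \
  -p "Explain the difference between a stack and a queue."
```

Any runtime that exposes a sampling temperature (llama.cpp, Ollama, LM Studio, llama-cpp-python) can apply the same setting; only the flag name differs.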

Model details
- Downloads last month: 160
- Format: GGUF
- Model size: 4B params
- Architecture: qwen3
- Quantization: 4-bit

Model tree for Liontix/Qwen3-4B-Thinking-2507-GLM-4.6-Distill-GGUF

Quantized (3): this model

Dataset used to train Liontix/Qwen3-4B-Thinking-2507-GLM-4.6-Distill-GGUF