YaRN extension of the context window to 160k, since the native 40k is pretty limiting for coding these days.
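For reference, a minimal sketch of loading one of the quants below at the extended window with llama-cpp-python. The file name, context size and GPU offload are assumptions rather than a prescribed setup, and it presumes the YaRN rope-scaling parameters are already baked into the GGUF metadata; if they are not, the `rope_scaling_type` / `yarn_orig_ctx` arguments (or the matching llama.cpp CLI flags) can set them explicitly.

```python
from llama_cpp import Llama

# Hypothetical local path: any of the quants listed below, e.g. Q4_K_M.
MODEL_PATH = "Qwen3-Nemotron-32B-160k-Q4_K_M.gguf"

llm = Llama(
    model_path=MODEL_PATH,
    n_ctx=163840,      # full 160k window; shrink this if the KV cache doesn't fit
    n_gpu_layers=-1,   # offload all layers if VRAM allows
)

out = llm(
    "Review the following code and point out any bugs:\n\n<paste a long file here>",
    max_tokens=512,
)
print(out["choices"][0]["text"])
```

Note that at the full 160k window the KV cache for a 32B model is very large, so a smaller `n_ctx` (or a quantized KV cache) is often the practical choice.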

Quant Perplexity

| Quant  | PPL              |
|--------|------------------|
| Q8_0   | 5.6355 ± 0.13322 |
| Q6_K_M | 5.6169 ± 0.13250 |
| Q5_K_M | 5.6270 ± 0.13270 |
| Q4_K_M | 5.6435 ± 0.13298 |
| IQ4_NL | 5.6717 ± 0.13443 |
| IQ3_XS | 5.8865 ± 0.13868 |
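A quick way to read the table: the relative PPL increase of each quant over the Q8_0 baseline, computed straight from the numbers above.

```python
# PPL values copied from the table above; deltas are relative to Q8_0.
ppl = {
    "Q8_0":   5.6355,
    "Q6_K_M": 5.6169,
    "Q5_K_M": 5.6270,
    "Q4_K_M": 5.6435,
    "IQ4_NL": 5.6717,
    "IQ3_XS": 5.8865,
}

baseline = ppl["Q8_0"]
for name, value in ppl.items():
    delta = 100.0 * (value - baseline) / baseline
    print(f"{name:7s} {value:.4f}  ({delta:+.2f}% vs Q8_0)")
```

Everything down to IQ4_NL stays within about 0.6% of Q8_0 (and well inside the stated ± bounds), while IQ3_XS is roughly 4.5% worse.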
Base model: Qwen/Qwen3-32B