Llama.cpp hybrid layer quantization of Qwen3-VL-32B-Instruct by Qwen

Original model: https://huggingface.co/Qwen/Qwen3-VL-32B-Instruct

The hybrid quant employs different quantization levels on a per-layer basis to enable both high performance and small file size at the same time. This particular quant was optimized for high performance across a set of test prompts at approximately IQ4_XS size. The model scored 100% on a set of curated test prompts evaluating reasoning ability and showed no signs of repetition failures (rep fails) with greedy sampling. It exhibits extremely high solution efficiency, possibly higher than any model I have worked with to date, suggesting much higher relative intelligence than other models which may eventually reach a correct answer but struggle to get there. This model produced the correct answers across the set of test prompts quickly and efficiently, with no laborious reflections (wait... hold on... etc.). By contrast, all of the other non-thinking models evaluated in the Qwen3-VL lineup show many rep fails with greedy sampling (non-convergence/infinite repeat loops), with failures getting worse as model size decreases. This 32B dense model showed no rep fails at all with greedy sampling across the set of prompts evaluated, since it aced the test questions with essentially zero reflections.

The quants employed are all K quants, to avoid slow processing of IQ quants on CPUs or older GPUs. For this file the layer quants are as follows:

Q4_K_L : attn_v = Q6_K, attn_o = Q6_K, ffn_d = Q6_K
Q5_K_L : attn_v = Q8_0, attn_o = Q6_K, ffn_d = Q6_K
Q6_K_S : Q6_K
Q6_K_M : attn_v = Q8_0, ffn_d = Q8_0
Q6_K_L : attn_v = Q8_0, attn_o = Q8_0, ffn_d = Q8_0

LAYER_TYPES='[
   [0 ,"Q4_K_M"],[1 ,"Q4_K_S"],[2 ,"Q3_K_L"],[3 ,"Q3_K_M"],[4 ,"Q3_K_M"],[5 ,"Q3_K_M"],[6 ,"Q3_K_M"],[7 ,"Q3_K_M"],
   [8 ,"Q3_K_M"],[9 ,"Q3_K_M"],[10,"Q3_K_M"],[11,"Q3_K_M"],[12,"Q3_K_M"],[13,"Q3_K_M"],[14,"Q3_K_M"],[15,"Q3_K_M"],
   [16,"Q3_K_L"],[17,"Q3_K_M"],[18,"Q3_K_L"],[19,"Q3_K_M"],[20,"Q3_K_L"],[21,"Q3_K_M"],[22,"Q3_K_L"],[23,"Q3_K_M"],
   [24,"Q3_K_L"],[25,"Q3_K_L"],[26,"Q3_K_L"],[27,"Q3_K_L"],[28,"Q3_K_L"],[29,"Q3_K_L"],[30,"Q3_K_L"],[31,"Q3_K_L"],
   [32,"Q3_K_L"],[33,"Q3_K_L"],[34,"Q3_K_L"],[35,"Q3_K_L"],[36,"Q3_K_L"],[37,"Q3_K_L"],[38,"Q3_K_L"],[39,"Q3_K_L"],
   [40,"Q4_K_S"],[41,"Q3_K_L"],[42,"Q4_K_S"],[43,"Q3_K_L"],[44,"Q4_K_S"],[45,"Q3_K_L"],[46,"Q4_K_S"],[47,"Q3_K_L"],
   [48,"Q4_K_S"],[49,"Q4_K_S"],[50,"Q4_K_S"],[51,"Q4_K_S"],[52,"Q4_K_S"],[53,"Q4_K_S"],[54,"Q4_K_S"],[55,"Q4_K_S"],
   [56,"Q4_K_M"],[57,"Q4_K_S"],[58,"Q4_K_M"],[59,"Q4_K_L"],[60,"Q5_K_S"],[61,"Q5_K_M"],[62,"Q5_K_L"],[63,"Q6_K_S"]
   ]'
   FLAGS="--token-embedding-type Q4_K --output-tensor-type Q6_K --layer-types-high"

Comparison:

Quant    Size     PPL   Comment
IQ4_XS   17.9e9   6.8   Q6_K with default embedding and output
Q4_K_H   18.0e9   6.8   hybrid quant with Q4_K embedding, Q6_K output
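
Perplexity numbers of this kind can be measured with the stock llama-perplexity tool. A minimal sketch; the evaluation text used for the figures above is not specified here, so wiki.test.raw is an assumption:

   # Measure perplexity of the quantized model with llama-perplexity.
   # The evaluation corpus (wiki.test.raw) is an assumption; substitute as needed.
   ./llama-perplexity -m Qwen3-VL-32B-Instruct.Q4_K_H.gguf \
      -f wiki.test.raw -ngl 99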

Usage:

Qwen3-VL-32B-Instruct is a vision-capable model. It can be used together with its multimedia projector layers to process image and text inputs and generate text outputs. The mmproj file is made available in this repository. To test vision mode, follow the docs in the mtmd readme in the tools directory of the source tree: https://github.com/ggml-org/llama.cpp/blob/master/tools/mtmd/README.md. A minimal example is sketched below.
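
As a quick start, a sketch using the llama-mtmd-cli tool described in that readme (the image file name and prompt are illustrative):

   # Vision-mode sketch with llama-mtmd-cli from llama.cpp.
   # Image file name and prompt are illustrative.
   ./llama-mtmd-cli -m Qwen3-VL-32B-Instruct.Q4_K_H.gguf \
      --mmproj Qwen3-VL-32B-Instruct.mmproj.gguf \
      --image test.jpg -p "Describe this image."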

The model can be speculated with Qwen3 0.6B as a draft model if the inference platform supports dynamic vocab translation between draft and target. On a 2x 4070 setup with RPC, generation rates vary between 30 and 40 tokens/s on general (non-code) prompts using a downstream llama.cpp server with a custom speculator. A stock-server sketch is given below.
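
Stock llama-server also supports speculative decoding, but only when the draft and target vocabularies are directly compatible (it does no vocab translation). A sketch of that setup, with the draft file name and draft parameters as assumptions:

   # Speculative decoding sketch with stock llama-server. Requires draft/target
   # vocab compatibility; draft file name and parameters are assumptions.
   ./llama-server -m Qwen3-VL-32B-Instruct.Q4_K_H.gguf \
      -md Qwen3-0.6B.Q8_0.gguf \
      --draft-max 16 --draft-min 1 \
      -ngl 99 -ngld 99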

The minimum llama.cpp version to run the Qwen3-VL series is build 6915, with build 6936 and above recommended.

Benchmarks:

A full set of vision benchmarks for the model will eventually be given here: https://huggingface.co/spaces/steampunque/benchlm

Download the files below:

Link                               Type    Size/e9 B  Notes
Qwen3-VL-32B-Instruct.Q4_K_H.gguf  Q4_K_H  18.0       ~IQ4_XS size
Qwen3-VL-32B-Instruct.mmproj.gguf  F16     1.2        multimedia projector

A discussion thread about the hybrid layer quant approach can be found on the llama.cpp GitHub repository:

https://github.com/ggml-org/llama.cpp/discussions/13040
