These are quantizations of the model Huihui-Qwen3-VL-2B-Instruct-abliterated.

They have been updated to use an importance matrix (imatrix) I created from combined_all_large and harmful.txt, in order to take advantage of the model's abliterated nature.

Original model: https://huggingface.co/huihui-ai/Huihui-Qwen3-VL-2B-Instruct-abliterated

Download the latest llama.cpp to use them.

Use the best quality quantization you can run.
For the mmproj, prefer the F32 version, as it produces the best results.
Quality order: F32 > BF16 > F16
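As a sketch of how the model and mmproj files fit together, llama.cpp's multimodal CLI takes them as separate arguments. The filenames below are placeholders; substitute the quant and mmproj variant you actually downloaded.

```shell
# Hypothetical filenames — adjust to the files you downloaded from this repo.
./llama-mtmd-cli \
  -m Huihui-Qwen3-VL-2B-Instruct-abliterated-Q8_0.gguf \
  --mmproj mmproj-F32.gguf \
  --image photo.jpg \
  -p "Describe this image."
```

The `-m` flag points at the quantized language model, while `--mmproj` supplies the vision projector; per the note above, the F32 projector is preferred even when the main model is heavily quantized.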
