Original Model by: Qwen
Original Model: Qwen3-14B

For more information about the model, I highly recommend checking out the original model page, and the creator's other work while you're at it.

ExLlamaV3 v0.0.6 Quantizations (bpw = bits per weight; 8hb / 6hb = 8-bit or 6-bit output head):
8.0bpw: 8hb | 6hb
7.5bpw: 8hb | 6hb
7.0bpw: 8hb | 6hb
6.5bpw: 8hb | 6hb
6.0bpw: 8hb | 6hb
5.5bpw: 8hb | 6hb
5.0bpw: 8hb | 6hb
4.5bpw: 8hb | 6hb
4.25bpw: 8hb | 6hb
4.0bpw: 8hb | 6hb
3.75bpw: 8hb | 6hb
3.5bpw: 8hb | 6hb
3.0bpw: 8hb | 6hb
2.75bpw: 8hb | 6hb
2.5bpw: 8hb | 6hb
2.25bpw: 8hb | 6hb
2.0bpw: 8hb | 6hb
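To pick a bpw that fits your GPU, a rough weight-size estimate is parameters × bpw / 8 bytes. This is a minimal sketch, assuming Qwen3-14B has roughly 14.8 billion parameters; it ignores head-layer bits, KV cache, and runtime overhead, so treat the numbers as lower bounds.

```python
def weight_size_gib(num_params: float, bpw: float) -> float:
    """Approximate quantized weight size in GiB: params * bits-per-weight / 8 bytes."""
    return num_params * bpw / 8 / 1024**3

# Approximate parameter count for Qwen3-14B (assumption, not an exact figure).
PARAMS = 14.8e9

for bpw in (8.0, 6.0, 4.0, 2.0):
    print(f"{bpw}bpw ~ {weight_size_gib(PARAMS, bpw):.1f} GiB")
```

Add a couple of GiB on top of the printed figure for context/KV cache before choosing a quant.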

If you need a specific model quantized or particular bits per weight, please let me know. I'm happy to help.

This is my first ExLlamaV3 quantization! Feedback and suggestions are always welcome; they help me improve and make future quantizations better for everyone.

Special thanks to turboderp for developing the tools that made these quantizations possible. Your contributions are greatly appreciated!


Model tree for TheMelonGod/Qwen3-14B-exl3: quantized from Qwen/Qwen3-14B.