
Q6_K quantization for Qwen3-14B-GGUF

#1
by bdesnos - opened

Hello, and thank you for the Qwen3-14B GGUF releases—they’re very helpful.
Are you planning to add a Q6_K quantization variant? If so, do you have an approximate timeline? I’d be happy to help test it and report results on memory-constrained setups.
