Impressed

#1
by aliquis-pe - opened

I'm surprised this model isn't more popular. I tried the Q6_K variant. I'm not a scientific tester or anything like that, but on the tasks I gave it, it performed as well as or better than Qwen3:32b at Q4_K_M. On top of that, it ran faster and fit comfortably in 32GB of VRAM with a 20k context, which Qwen3 at the same quantization couldn't do.
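In case it helps anyone who wants to try a similar setup, here is a minimal sketch of loading a Q6_K GGUF with a roughly 20k context and full GPU offload via llama-cpp-python. The model filename is just a placeholder (point it at whichever Q6_K file you downloaded), and the exact offload/context settings you can fit will depend on your GPU.

```python
# Minimal sketch, assuming llama-cpp-python is installed and the Q6_K GGUF
# has already been downloaded locally. The filename below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="model-Q6_K.gguf",  # placeholder path to the Q6_K quant
    n_ctx=20480,                   # ~20k-token context window
    n_gpu_layers=-1,               # offload all layers to the GPU
)

out = llm("Summarize the pros and cons of Q6_K vs Q4_K_M quantization.", max_tokens=256)
print(out["choices"][0]["text"])
```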
