Are there any quantization models, such as GGUF? Can it run with 16GB of VRAM?

#2 by yoolv - opened

ChatLLM.cpp supports it (at least it's listed — I haven't tested it myself).
Llama.cpp still doesn't support this model.
And yes, you’ll need around 11–12 GB of VRAM to run the model in Q4 with essentially no context.
Also, there are no quants available yet, so you’ll have to quantize it yourself (or use vLLM in 4-bit mode).
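
For reference, something like this is what I mean by "vLLM in 4-bit mode": in-flight bitsandbytes quantization. This is just a sketch, not something I've run against this model; the repo id is a placeholder and the exact flags can differ between vLLM versions, so check the docs for the version you have installed.

```python
# Sketch: load an unquantized HF checkpoint and quantize it to 4-bit on the fly
# via vLLM's bitsandbytes path. "your-org/your-moe-model" is a placeholder.
from vllm import LLM, SamplingParams

llm = LLM(
    model="your-org/your-moe-model",  # placeholder HF repo id
    quantization="bitsandbytes",      # in-flight 4-bit quantization
    load_format="bitsandbytes",       # needed alongside the flag above on older vLLM releases
    max_model_len=4096,               # keep the KV cache small so everything fits in 16 GB
    gpu_memory_utilization=0.90,
)

outputs = llm.generate(["Hello!"], SamplingParams(max_tokens=64))
print(outputs[0].outputs[0].text)
```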

You don't need that much VRAM for MoE models. I can run qwen3-30b-a3b at Q4 in 8 GB of VRAM.

Is it true that you can fit the active experts into 8 GB of VRAM and offload the rest to the CPU?
But then the speed decreases roughly in proportion to your RAM speed (rough numbers below).
Also, vLLM doesn't support offloading to the CPU (if I'm wrong, I'd be happy to learn).
And I haven't checked whether chatllm.cpp supports it either (my mistake; I should have at least quickly verified).
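
To put rough numbers on the active-experts point (purely illustrative, assuming a ~30B-total / ~3B-active MoE at a typical Q4 quant; these are not measurements of this model):

```python
# Back-of-envelope memory math for a Q4-quantized MoE.
# Parameter counts and bits/weight are assumptions, not measurements.
BYTES_PER_PARAM_Q4 = 4.5 / 8       # a typical Q4_K quant is ~4.5 bits per weight

total_params = 30e9                # assumed total parameters (all experts)
active_params = 3e9                # assumed parameters active per token

full_weights_gb = total_params * BYTES_PER_PARAM_Q4 / 1e9
active_weights_gb = active_params * BYTES_PER_PARAM_Q4 / 1e9

print(f"all weights in Q4:  ~{full_weights_gb:.1f} GB")    # ~16.9 GB, spills past 8 GB of VRAM
print(f"active path in Q4:  ~{active_weights_gb:.1f} GB")  # ~1.7 GB, fits easily; the rest sits in RAM
# KV cache and activations come on top of this, but only the experts that are
# actually routed to get read per token, which is why RAM bandwidth (not VRAM
# capacity) becomes the limiting factor.
```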

Yes, the speed will decrease if you offload to RAM, but it's not nearly as severe as with dense models.
I tried a dense 32B model and it was unusable with 8 GB of VRAM.
And I only tried the latest llama.cpp, so I can't speak for other frameworks.
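
For anyone who wants to reproduce that kind of split, here's roughly what it looks like through the llama-cpp-python bindings. The GGUF path and layer count are placeholders I haven't tuned; the idea is just to push as many layers to the GPU as fit in 8 GB and leave the rest in system RAM.

```python
# Sketch: split a Q4 GGUF between GPU and CPU with llama-cpp-python.
# model_path and n_gpu_layers are placeholders; raise n_gpu_layers until
# you run out of VRAM, and the remaining layers run from system RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen3-30b-a3b-q4_k_m.gguf",  # placeholder local GGUF file
    n_gpu_layers=20,   # only part of the layers go to the 8 GB GPU
    n_ctx=4096,        # modest context keeps the KV cache small
    n_threads=8,       # CPU threads for the layers left in RAM
)

out = llm("Q: What is a mixture-of-experts model?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```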
