A 4-bit GPTQ-quantized version of EVA-Qwen2.5-14B-v0.2 for inference with the Private LLM app.
Base model: EVA-Qwen2.5-14B-v0.2
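
For reference only, here is a minimal sketch of how a 4-bit GPTQ checkpoint like this one is typically loaded with the Hugging Face `transformers` stack (with `optimum` and an auto-gptq backend installed). The repo id below is a placeholder, not the actual Hub path, and the card itself targets the Private LLM app rather than this workflow.

```python
# Minimal sketch: loading a 4-bit GPTQ-quantized causal LM with transformers.
# Assumes `transformers`, `accelerate`, `optimum`, and a GPTQ backend are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "EVA-Qwen2.5-14B-v0.2-GPTQ"  # hypothetical repo id, replace with the real one

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# The GPTQ quantization config stored in the checkpoint is picked up automatically.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```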