demo-qwen-sft-gguf

This is a GGUF conversion of evalstate/demo-qwen-sft, which is a LoRA fine-tuned version of Qwen/Qwen2.5-0.5B.

Model Details

  • Base Model: Qwen/Qwen2.5-0.5B
  • Fine-tuned Model: evalstate/demo-qwen-sft
  • Training: Supervised Fine-Tuning (SFT) with TRL (a reproduction sketch follows this list)
  • Format: GGUF (for llama.cpp, Ollama, LM Studio, etc.)
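
The exact dataset and hyperparameters of the SFT run are not documented here. For orientation only, a comparable LoRA fine-tune could be launched with the TRL CLI; the dataset name and LoRA settings below are placeholders, not the values used for this model.

trl sft \
  --model_name_or_path Qwen/Qwen2.5-0.5B \
  --dataset_name trl-lib/Capybara \
  --use_peft \
  --lora_r 16 \
  --lora_alpha 32 \
  --output_dir demo-qwen-sft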

Available Quantizations

| File | Quant | Size | Description | Use Case |
|------|-------|------|-------------|----------|
| demo-qwen-sft-f16.gguf | F16 | ~1 GB | Full precision | Best quality, slower |
| demo-qwen-sft-q8_0.gguf | Q8_0 | ~500 MB | 8-bit | High quality |
| demo-qwen-sft-q5_k_m.gguf | Q5_K_M | ~350 MB | 5-bit medium | Good quality, smaller |
| demo-qwen-sft-q4_k_m.gguf | Q4_K_M | ~300 MB | 4-bit medium | Recommended: good balance |
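
Each file can be fetched individually with the Hugging Face CLI; substitute any filename from the table. For example, the 8-bit variant:

huggingface-cli download evalstate/demo-qwen-sft-gguf demo-qwen-sft-q8_0.gguf --local-dir .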

Usage

With llama.cpp

# Download model
huggingface-cli download evalstate/demo-qwen-sft-gguf demo-qwen-sft-q4_k_m.gguf --local-dir .

# Run with llama.cpp
./llama-cli -m demo-qwen-sft-q4_k_m.gguf -p "Your prompt here"
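
Beyond one-off prompts, llama.cpp also ships llama-server, which exposes the model over an OpenAI-compatible HTTP API. A minimal sketch (8080 is the server's default port, made explicit here):

# Start an OpenAI-compatible server
./llama-server -m demo-qwen-sft-q4_k_m.gguf --port 8080

# Query it from another shell
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Your prompt here"}]}'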

With Ollama

  1. Create a Modelfile:
FROM ./demo-qwen-sft-q4_k_m.gguf
  2. Create the model:
ollama create my-model -f Modelfile
ollama run my-model
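
Once created, the model can also be queried programmatically through Ollama's local REST API (it listens on port 11434 by default):

curl http://localhost:11434/api/generate \
  -d '{"model": "my-model", "prompt": "Your prompt here", "stream": false}'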

With LM Studio

  1. Download the .gguf file
  2. Import into LM Studio
  3. Start chatting!
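
LM Studio can likewise serve the model through its local OpenAI-compatible endpoint (port 1234 by default). The model identifier below is an assumption; check the name LM Studio assigns after import:

curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "demo-qwen-sft-gguf", "messages": [{"role": "user", "content": "Hello!"}]}'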

License

Inherits the license from the base model: Qwen/Qwen2.5-0.5B (Apache 2.0).

Citation

@misc{demo_qwen_sft_gguf,
  author = {evalstate},
  title = {demo-qwen-sft-gguf},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/evalstate/demo-qwen-sft-gguf}
}

Converted to GGUF format using llama.cpp
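
For reference, a pipeline along the following lines reproduces the files listed above. This is a sketch: it assumes the merged fp16 checkpoint is in ./demo-qwen-sft and that you are inside a built llama.cpp checkout.

# Convert the Hugging Face checkpoint to full-precision GGUF
python convert_hf_to_gguf.py ./demo-qwen-sft --outfile demo-qwen-sft-f16.gguf --outtype f16

# Quantize the f16 file down to the smaller variants
./llama-quantize demo-qwen-sft-f16.gguf demo-qwen-sft-q8_0.gguf Q8_0
./llama-quantize demo-qwen-sft-f16.gguf demo-qwen-sft-q5_k_m.gguf Q5_K_M
./llama-quantize demo-qwen-sft-f16.gguf demo-qwen-sft-q4_k_m.gguf Q4_K_M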
