# deepseek-r1-among-them-gguf-sampled

GGUF quantized versions of the Among Them SFT fine-tuned model.

## Available Quantizations

| Quantization | File | Description |
|---|---|---|
| F16 | `deepseek-r1-among-them-gguf-sampled-f16.gguf` | Full precision (largest, highest quality) |

## Usage with llama.cpp

```shell
./llama-cli -m deepseek-r1-among-them-gguf-sampled-f16.gguf -p "Your prompt here"
```
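For scripted or batch generation, the same invocation can be assembled from Python. The helper below is an illustrative sketch (the function name is ours, and it assumes a built `llama-cli` binary in the working directory); `-m`, `-p`, and `-n` are standard `llama-cli` flags for the model path, the prompt, and the number of tokens to generate.

```python
import shlex

def build_llama_cmd(model_path: str, prompt: str, n_predict: int = 256) -> list[str]:
    """Assemble a llama-cli command line (hypothetical helper).

    -m: path to the GGUF model file
    -p: prompt text
    -n: maximum number of tokens to generate
    """
    return ["./llama-cli", "-m", model_path, "-p", prompt, "-n", str(n_predict)]

cmd = build_llama_cmd(
    "deepseek-r1-among-them-gguf-sampled-f16.gguf",
    "Your prompt here",
)
print(shlex.join(cmd))
# To actually run it (requires a built llama.cpp):
# import subprocess; subprocess.run(cmd, check=True)
```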

## Usage with Ollama

```shell
ollama run hf.co/Luncenok/deepseek-r1-among-them-gguf-sampled
```
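Alternatively, a locally downloaded GGUF file can be registered with Ollama through a Modelfile. The fragment below is a minimal sketch; the parameter value is illustrative, not tuned for this model.

```
# Modelfile: point Ollama at the local F16 GGUF
FROM ./deepseek-r1-among-them-gguf-sampled-f16.gguf
PARAMETER temperature 0.7
```

Build and run it under a local name of your choosing (`among-them` here is arbitrary): `ollama create among-them -f Modelfile`, then `ollama run among-them`.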

## Base Model

This model was fine-tuned from `deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B` for the Among Them game environment.

## Model Details

- Format: GGUF
- Model size: 2B params
- Architecture: qwen2
