# DeepSeek Sunfall Merged - GGUF Quantized Models
This repository contains multiple quantized GGUF variants of the merged DeepSeek + Sunfall model, compatible with llama.cpp.
## 🧠 Available Quantized Formats
| Format | File Name | Description |
|---|---|---|
| Q3_K_M | deepseek_sunfall_merged_Model.Q3_K_M.gguf | Smallest size, fastest inference |
| Q4_K_M | deepseek_sunfall_merged_Model.Q4_K_M.gguf | Balanced speed & performance |
| Q5_K_M | deepseek_sunfall_merged_Model.Q5_K_M.gguf | Better quality, slower |
| Q6_K | deepseek_sunfall_merged_Model.Q6_K.gguf | Near full precision |
| Q8_0 | deepseek_sunfall_merged_Model.Q8_0.gguf | Almost no compression loss |
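If you only need one variant, the `huggingface_hub` client can download a single file rather than the whole repository. A minimal sketch; the `repo_id` below is a placeholder, so substitute this repository's actual ID:

```python
from huggingface_hub import hf_hub_download

# Download only the Q4_K_M variant instead of cloning the whole repo.
# NOTE: the repo_id below is a placeholder, not this repository's real ID.
model_path = hf_hub_download(
    repo_id="your-username/deepseek-sunfall-merged-gguf",  # placeholder
    filename="deepseek_sunfall_merged_Model.Q4_K_M.gguf",
)
print(model_path)  # local path to the cached file
```

Fetching a single file this way avoids pulling every quantization level at once.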
## 🔧 Usage (Python)
Install llama-cpp-python:

```bash
pip install llama-cpp-python
```
```python
from llama_cpp import Llama

# Load any of the quantized variants (Q3_K_M, Q5_K_M, etc.)
model = Llama(model_path="deepseek_sunfall_merged_Model.Q4_K_M.gguf")

# max_tokens raises the default generation limit so the story isn't truncated.
output = model("Tell me a story about stars.", max_tokens=256)
print(output["choices"][0]["text"])
```
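For multi-turn prompting, llama-cpp-python also exposes an OpenAI-style chat API. A minimal sketch; the system prompt and sampling parameters are illustrative, not tuned for this merge:

```python
from llama_cpp import Llama

model = Llama(model_path="deepseek_sunfall_merged_Model.Q4_K_M.gguf")

# OpenAI-style chat completion; llama-cpp-python applies the chat template
# embedded in the GGUF metadata when one is available.
response = model.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Tell me a story about stars."},
    ],
    max_tokens=256,
    temperature=0.7,  # illustrative value, not tuned for this model
)
print(response["choices"][0]["message"]["content"])
```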