Produced by Antigma Labs, Antigma Quantize Space
Follow Antigma Labs on X: https://x.com/antigma_labs
Antigma's GitHub homepage: https://github.com/AntigmaLabs
llama.cpp quantization
Quantized using llama.cpp release b5223. Original model: https://huggingface.co/unsloth/Devstral-Small-2505. Run the files directly with llama.cpp, or with any other llama.cpp-based project.
Prompt format
<s>[SYSTEM_PROMPT]{system_prompt}[/SYSTEM_PROMPT][INST]{prompt}[/INST]
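For a quick smoke test, here is a minimal sketch using llama.cpp's llama-cli binary (shipped with release b5223); the quant filename and the -ngl value are illustrative, so adjust them to your download and your hardware:

```sh
# One-shot generation; -ngl offloads layers to the GPU, -c sets the context size.
llama-cli -m ./devstral-small-2505-q4_k_m.gguf -ngl 99 -c 8192 \
  -p "Write a Python function that reverses a linked list."
```

In interactive use, llama-cli's conversation mode applies the chat template embedded in the GGUF, so you normally don't need to hand-build the prompt format above.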
Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split |
|---|---|---|---|
| devstral-small-2505-q2_k.gguf | Q2_K | 8.28 GB | No |
| devstral-small-2505-q3_k_l.gguf | Q3_K_L | 11.55 GB | No |
| devstral-small-2505-q4_k_m.gguf | Q4_K_M | 13.35 GB | No |
| devstral-small-2505-q5_k_m.gguf | Q5_K_M | 15.61 GB | No |
| devstral-small-2505-q6_k.gguf | Q6_K | 18.02 GB | No |
| devstral-small-2505-q8_0.gguf | Q8_0 | 23.33 GB | No |
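Once you have downloaded one of these files (instructions below), you can also serve it over HTTP with llama.cpp's llama-server; a minimal sketch, with the port and quant choice purely illustrative:

```sh
# Serve the Q4_K_M quant behind an OpenAI-compatible API on localhost:8080.
llama-server -m ./devstral-small-2505-q4_k_m.gguf -ngl 99 -c 8192 --port 8080
```

Any OpenAI-compatible client can then point at http://localhost:8080/v1/chat/completions; the server applies the model's embedded chat template for you.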
Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
pip install -U "huggingface_hub[cli]"
Then, you can target the specific file you want:
huggingface-cli download https://huggingface.co/Antigma/Devstral-Small-2505-GGUF --include "devstral-small-2505-q2_k.gguf" --local-dir ./
If a model is bigger than 50 GB, it will have been split into multiple files. To download them all to a local folder, run:
huggingface-cli download Antigma/Devstral-Small-2505-GGUF --include "devstral-small-2505-q2_k.gguf/*" --local-dir ./
You can either specify a new --local-dir or download everything in place (./).
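None of the files in this repo are split (the Split column above is No throughout), so this only applies to larger uploads. If you do end up with a split GGUF, llama.cpp loads the remaining shards automatically when you point it at the first one; a sketch, with hypothetical shard names:

```sh
# Pass only the first shard; llama.cpp discovers -00002-of-00002 etc. itself.
llama-cli -m ./devstral-small-2505-q8_0-00001-of-00002.gguf -ngl 99 -c 8192 \
  -p "Hello"
```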
Model tree for Antigma/Devstral-Small-2505-GGUF
- Base model: mistralai/Devstral-Small-2505
- Finetuned: unsloth/Devstral-Small-2505