Darkhn-Quants / M3.2-24B-Animus-V6-Exp-GGUF
Tags: GGUF · llama.cpp · bf16
License: MIT
M3.2-24B-Animus-V6-Exp-GGUF · 96.8 GB · 1 contributor · History: 38 commits
Latest commit: Darkhn — "Update README.md" (884f852, 4 months ago)
| File | Size | Last commit | Updated |
|---|---|---|---|
| .gitattributes | 2.43 kB | Add Q2_K GGUF quant: M3.2-24B-Animus-V6-Exp-Q2_K.gguf | 4 months ago |
| M3.2-24B-Animus-V6-Exp-Q2_K.gguf | 8.89 GB | Add Q2_K GGUF quant: M3.2-24B-Animus-V6-Exp-Q2_K.gguf | 4 months ago |
| M3.2-24B-Animus-V6-Exp-Q3_K_L.gguf | 12.4 GB | Add Q3_K_L GGUF quant: M3.2-24B-Animus-V6-Exp-Q3_K_L.gguf | 4 months ago |
| M3.2-24B-Animus-V6-Exp-Q4_K_M.gguf | 14.3 GB | Add Q4_K_M GGUF quant: M3.2-24B-Animus-V6-Exp-Q4_K_M.gguf | 4 months ago |
| M3.2-24B-Animus-V6-Exp-Q5_K_M.gguf | 16.8 GB | Add Q5_K_M GGUF quant: M3.2-24B-Animus-V6-Exp-Q5_K_M.gguf | 4 months ago |
| M3.2-24B-Animus-V6-Exp-Q6_K.gguf | 19.3 GB | Add Q6_K GGUF quant: M3.2-24B-Animus-V6-Exp-Q6_K.gguf | 4 months ago |
| M3.2-24B-Animus-V6-Exp-Q8_0.gguf | 25.1 GB | Upload M3.2-24B-Animus-V6-Exp-Q8_0.gguf | 4 months ago |
| README.md | 401 Bytes | Update README.md | 4 months ago |
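The files above are served from the repo's `main` branch, so each one is reachable at a direct `resolve` URL of the form `https://huggingface.co/<repo_id>/resolve/<revision>/<filename>`. A minimal sketch of building that URL for one of the listed quants (the helper name `resolve_url` is an illustration, not part of any library):

```python
REPO_ID = "Darkhn-Quants/M3.2-24B-Animus-V6-Exp-GGUF"
FILENAME = "M3.2-24B-Animus-V6-Exp-Q4_K_M.gguf"  # 14.3 GB quant from the table above

def resolve_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build the direct-download URL Hugging Face serves for a file in a model repo."""
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

print(resolve_url(REPO_ID, FILENAME))
# https://huggingface.co/Darkhn-Quants/M3.2-24B-Animus-V6-Exp-GGUF/resolve/main/M3.2-24B-Animus-V6-Exp-Q4_K_M.gguf
```

In practice you would usually fetch the file with `huggingface_hub.hf_hub_download(repo_id=REPO_ID, filename=FILENAME)` instead of hand-building the URL, since that handles caching and resumable downloads.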