Darkhn-Quants / M3.2-24B-Animus-V7.1-GGUF
Tags: GGUF, llama.cpp, iq1-m, imatrix, conversational
License: mit
M3.2-24B-Animus-V7.1-GGUF (87.9 GB, 1 contributor, 20 commits)
Latest commit f191f86 by Darkhn, 4 months ago: Add Q3_K_L GGUF quant: M3.2-24B-Animus-V7.1-Q3_K_L.gguf
Files:
.gitattributes                         2.2 kB      Add Q3_K_L GGUF quant: M3.2-24B-Animus-V7.1-Q3_K_L.gguf (4 months ago)
M3.2-24B-Animus-V7.1-Q3_K_L.gguf      12.4 GB      Add Q3_K_L GGUF quant: M3.2-24B-Animus-V7.1-Q3_K_L.gguf (4 months ago)
M3.2-24B-Animus-V7.1-Q4_K_M.gguf      14.3 GB      Add Q4_K_M GGUF quant: M3.2-24B-Animus-V7.1-Q4_K_M.gguf (4 months ago)
M3.2-24B-Animus-V7.1-Q5_K_M.gguf      16.8 GB      Add Q5_K_M GGUF quant: M3.2-24B-Animus-V7.1-Q5_K_M.gguf (4 months ago)
M3.2-24B-Animus-V7.1-Q6_K.gguf        19.3 GB      Add Q6_K GGUF quant: M3.2-24B-Animus-V7.1-Q6_K.gguf (4 months ago)
M3.2-24B-Animus-V7.1-Q8_0.gguf        25.1 GB      Add Q8_0 GGUF quant: M3.2-24B-Animus-V7.1-Q8_0.gguf (4 months ago)
README.md                            548 Bytes     Upload README.md with huggingface_hub (4 months ago)
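
The quants listed above are standalone GGUF files, so each one can be pulled individually and loaded by a llama.cpp build. A minimal sketch, assuming the Q4_K_M file and a locally installed llama-cli (both choices are illustrative, not part of this repo):

    # Sketch: download one GGUF quant from this repo via huggingface_hub.
    # The Q4_K_M file is chosen here only as an example; any file name from
    # the listing above works the same way.
    from huggingface_hub import hf_hub_download

    # Downloads (about 14.3 GB) into the local Hugging Face cache and
    # returns the path to the cached file.
    model_path = hf_hub_download(
        repo_id="Darkhn-Quants/M3.2-24B-Animus-V7.1-GGUF",
        filename="M3.2-24B-Animus-V7.1-Q4_K_M.gguf",
    )
    print(model_path)

    # The resulting file can then be loaded by llama.cpp, e.g.:
    #   llama-cli -m <model_path> -cnv
    # (flags shown are illustrative; check the options of your llama.cpp version)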