Darkhn-Quants/M3.2-24B-Animus-V7.1-GGUF
Tags: GGUF · llama.cpp · iq1-m · imatrix · conversational
License: mit
Branch: main · 110 GB total · 1 contributor · 53 commits
Latest commit: Darkhn, "Update README.md" (d67475c, 4 months ago)
File | Size | Last commit | Updated
.gitattributes | 3.23 kB | Add IQ1_M GGUF quant: M3.2-24B-Animus-V7.1-IQ1_M.gguf | 4 months ago
M3.2-24B-Animus-V7.1-IQ4_NL.gguf | 13.5 GB | Add IQ4_NL GGUF quant: M3.2-24B-Animus-V7.1-IQ4_NL.gguf | 4 months ago
M3.2-24B-Animus-V7.1-Q2_K.gguf | 8.89 GB | Add Q2_K GGUF quant: M3.2-24B-Animus-V7.1-Q2_K.gguf | 4 months ago
M3.2-24B-Animus-V7.1-Q3_K_L.gguf | 12.4 GB | Add Q3_K_L GGUF quant: M3.2-24B-Animus-V7.1-Q3_K_L.gguf | 4 months ago
M3.2-24B-Animus-V7.1-Q4_K_M.gguf | 14.3 GB | Add Q4_K_M GGUF quant: M3.2-24B-Animus-V7.1-Q4_K_M.gguf | 4 months ago
M3.2-24B-Animus-V7.1-Q5_K_M.gguf | 16.8 GB | Add Q5_K_M GGUF quant: M3.2-24B-Animus-V7.1-Q5_K_M.gguf | 4 months ago
M3.2-24B-Animus-V7.1-Q6_K.gguf | 19.3 GB | Add Q6_K GGUF quant: M3.2-24B-Animus-V7.1-Q6_K.gguf | 4 months ago
M3.2-24B-Animus-V7.1-Q8_0.gguf | 25.1 GB | Add Q8_0 GGUF quant: M3.2-24B-Animus-V7.1-Q8_0.gguf | 4 months ago
README.md | 601 Bytes | Update README.md | 4 months ago