ModelCloud/MiniMax-M2-GPTQMODEL-W4A16
Text Generation · Safetensors · English
Tags: minimax, gptqmodel, modelcloud, chat, glm4.6, glm, instruct, int4, gptq, 4bit, w4a16, conversational, custom_code, 4-bit precision
License: modelcloud
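The tags above describe a 4-bit GPTQ (W4A16) quantization produced with GPTQModel. As a minimal sketch that is not taken from the model card, such a checkpoint is typically loaded through the standard transformers API; the custom_code tag suggests trust_remote_code=True is required, and everything else below (device placement, prompt, generation settings) is an illustrative assumption.

# Minimal sketch, not from the model card: loading the W4A16 GPTQ checkpoint
# with transformers. Assumes a GPTQ backend (e.g. the gptqmodel package) is installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ModelCloud/MiniMax-M2-GPTQMODEL-W4A16"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",       # spread the 4-bit weights across available GPUs
    trust_remote_code=True,  # repo is tagged custom_code, so custom modeling code is expected
)

messages = [{"role": "user", "content": "Hello, who are you?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))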
Files and versions
MiniMax-M2-GPTQMODEL-W4A16 · 2.14 kB · 1 contributor · History: 4 commits
Latest commit: c06b04e (verified) · Qubitium · Update README.md · 16 days ago
.gitattributes    1.52 kB    initial commit      16 days ago
LICENSE           0 Bytes    initial commit      16 days ago
README.md         618 Bytes  Update README.md    16 days ago