cturan/MiniMax-M2-GGUF · 15 likes
Tags: Text Generation · Transformers · GGUF · conversational
License: mit
Community discussions (2)
"I can't download the Q8"
🤯 1 · 4 replies
#2 opened 12 days ago by gopi87
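For anyone hitting the Q8 download issue, a sketch of pulling only that quant with the `huggingface-cli` tool from `huggingface_hub` (the `"*Q8*"` glob is an assumption about this repo's file naming; adjust it to the actual filenames shown under Files and versions):

```shell
# Download only the Q8 file(s) from this repo into a local directory.
# The --include pattern assumes the quant level appears in the filename,
# which is the common convention for GGUF repos.
huggingface-cli download cturan/MiniMax-M2-GGUF \
  --include "*Q8*" \
  --local-dir ./MiniMax-M2-GGUF
```

If the download keeps stalling, re-running the same command resumes from the already-fetched chunks rather than starting over.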
"Actual tests show it works well: the Q4_K quantized model maintains a decoding speed of around 27 tokens/s even after multiple turns of casual conversation."
❤️ 1 · 10 replies
#1 opened 12 days ago by goodgame
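A decoding-speed figure like the one reported above can be measured with llama.cpp's bundled `llama-bench` tool; a minimal sketch, assuming a local llama.cpp build and a hypothetical Q4_K filename for this repo:

```shell
# llama-bench reports prompt-processing and token-generation throughput (t/s).
# -n 128 generates 128 tokens per run; the GGUF path below is a placeholder,
# not a confirmed filename from this repo.
llama-bench -m ./MiniMax-M2-GGUF/minimax-m2-Q4_K.gguf -n 128
```

Note that single-run generation throughput differs from multi-turn chat speed, since a growing KV cache slows decoding as the conversation gets longer.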