Will upload GGUFs slowly.

Format: GGUF
Model size: 229B params
Architecture: minimax-m2

Available quantizations: 2-bit, 3-bit, 4-bit, 8-bit
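
To grab one of the quantized files locally, something like the sketch below should work with `huggingface_hub`. The filename is an assumption for illustration; check the repository's file listing for the actual GGUF names (larger quants may be split into parts).

```python
# Minimal sketch: download a single quant file from this repo.
# The filename is hypothetical -- look up the real name in the repo's file list.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="ilintar/MiniMax-M2-GGUF",
    filename="MiniMax-M2-Q4_K_M.gguf",  # assumed filename, adjust as needed
)
print(path)  # local cache path; pass this to your GGUF runtime, e.g. llama.cpp's -m flag
```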

