beezu/zerofata_GLM-4.5-Iceblink-106B-A12B-MLX-MXFP4
Text Generation · MLX · Safetensors · 4 datasets · glm4_moe · conversational · 4-bit precision · License: mit
Community (1 discussion)
Could you please upload a 99GB-100GB MLX quantization of this model so that it can be deployed locally on a 128GB RAM Mac? Thank you very much!
#1 opened 7 days ago by mimeng1990
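
For anyone who wants to build such a quant themselves while waiting, mlx-lm exposes a `convert()` function that re-quantizes upstream weights at a chosen bit-width. The sketch below is illustrative only: the `hf_path` is an assumption (substitute the actual full-precision upstream repo), and the size arithmetic is approximate.

```python
# Minimal sketch: re-quantize GLM-4.5-Iceblink-106B-A12B with mlx-lm.
# The hf_path below is a hypothetical upstream repo -- point it at the
# actual full-precision (bf16) weights.
from mlx_lm import convert

# Rough weight-size arithmetic for a ~106B-parameter model:
#   6-bit: 106e9 * 6 / 8 bytes ~= 80 GB
#   8-bit: 106e9 * 8 / 8 bytes ~= 106 GB
# Hitting the requested 99-100 GB exactly would need a mixed or
# non-standard bit-width; 6-bit is the nearest standard option that
# still leaves headroom for the KV cache on a 128 GB machine.
convert(
    hf_path="zerofata/GLM-4.5-Iceblink-106B-A12B",   # hypothetical upstream repo
    mlx_path="GLM-4.5-Iceblink-106B-A12B-MLX-6bit",  # local output directory
    quantize=True,
    q_bits=6,         # bits per weight
    q_group_size=64,  # weights per quantization group (adds scale overhead)
)
```

The same conversion can be run from the shell via the `mlx_lm.convert` entry point with the matching `--q-bits` and `--q-group-size` flags.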