beezu/zerofata_GLM-4.5-Iceblink-106B-A12B-MLX-MXFP4

Tags: Text Generation · MLX · Safetensors · glm4_moe · conversational · 4-bit precision
Community (1 discussion)
Could you please upload a 99–100 GB MLX quantization of this model, so it can be deployed locally on a Mac with 128 GB of RAM? Thank you very much!

#1 · opened 7 days ago by mimeng1990
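As a rough sanity check on the size requested above, here is an illustrative back-of-the-envelope sketch (not from the thread): assuming quantized checkpoint size scales roughly linearly with bits per weight, and ignoring quantization scales, metadata, and any layers left unquantized, a ~100 GB target for a 106B-parameter model corresponds to roughly 7.5 bits per weight — i.e. close to an 8-bit quant.

```python
# Back-of-the-envelope quant sizing for a 106B-parameter model.
# Assumption: size scales ~linearly with bits per weight; overhead
# from scales/metadata and unquantized layers is ignored.
PARAMS = 106e9  # parameter count from the model name (106B)

def quant_size_gb(bits_per_weight: float) -> float:
    """Approximate checkpoint size in GB at a given bit width."""
    return PARAMS * bits_per_weight / 8 / 1e9

def bits_for_target(target_gb: float) -> float:
    """Approximate bit width needed to hit a target size in GB."""
    return target_gb * 1e9 * 8 / PARAMS

print(f"4-bit : ~{quant_size_gb(4):.0f} GB")   # ~53 GB
print(f"6-bit : ~{quant_size_gb(6):.0f} GB")   # ~80 GB
print(f"8-bit : ~{quant_size_gb(8):.0f} GB")   # ~106 GB
print(f"bits for a 100 GB target: ~{bits_for_target(100):.1f}")  # ~7.5
```

Under these assumptions a plain 8-bit quant (~106 GB) would be slightly over the requested range, which is why the request effectively amounts to a ~6–8-bit (mixed or grouped) quantization rather than the existing 4-bit MXFP4 one.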