nightmedia/GLM-4.5-Air-REAP-82B-A12B-qx64g-hi-mlx

Tags: Text Generation · MLX · Safetensors · English · glm4_moe · glm · MOE · pruning · compression · conversational · 6-bit

Community (2 discussions)

Could you please upload a 99-100 GB version of the MLX quantization so that it can be deployed locally on a Mac with 128 GB of RAM? Thank you very much!
#2 · opened 9 days ago by mimeng1990 · 8 replies · 1 ❤️
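
For anyone who wants to size this request while waiting on an upload, here is a rough back-of-the-envelope sketch in Python; the 82B parameter count comes from the model name, and the overhead remark is an assumption rather than a measured figure.

```python
# Rough sizing of quantized weights for an 82B-parameter model, in decimal GB.
# Per-group scales/biases and any tensors kept at higher precision are not
# counted here (see trailing note).

TOTAL_PARAMS = 82e9  # parameter count implied by "82B" in the model name

def approx_weight_size_gb(bits_per_weight: float) -> float:
    """Approximate size of the quantized weights alone, in decimal GB."""
    return TOTAL_PARAMS * bits_per_weight / 8 / 1e9

for bits in (4, 6, 8):
    print(f"{bits}-bit: ~{approx_weight_size_gb(bits):.0f} GB")
# 4-bit: ~41 GB, 6-bit: ~62 GB, 8-bit: ~82 GB.
# Group scales/biases and higher-precision tensors add a few GB on top of
# these figures, so an 8-bit conversion is the largest standard option that
# still fits a 128 GB machine with headroom for the KV cache and the OS.
```

If no official upload appears, a conversion along these lines can usually be produced locally with the mlx-lm tooling, for example `mlx_lm.convert --hf-path <source-repo> --mlx-path <output-dir> -q --q-bits 8 --q-group-size 64`; the repo paths here are placeholders, and the exact flags should be checked against the installed mlx-lm version.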

25% smaller !?!
#1 · opened 15 days ago by bobig · 5 replies