#2 — "Could you please upload a 99–100 GB MLX quantized version of the model so that it can be deployed locally on a Mac with 128 GB of RAM? Thank you very much!" — opened 9 days ago by mimeng1990

#1 — "25% smaller!?!" — opened 15 days ago by bobig