Could you please upload a 99GB-100GB MLX quantized version of the model so that it can be deployed locally on a Mac with 128GB of RAM? Thank you very much!
#3 opened 17 days ago by mimeng1990
Thank you
#1 opened 3 months ago by AliceThirty