Could you please upload a 99-100 GB MLX-quantized version of the model so that it can be deployed locally on a Mac with 128 GB of RAM? Thank you very much!
#3 opened 8 days ago by mimeng1990
Thank you
#1 opened 2 months ago by AliceThirty