Tags: Text Generation · Transformers · Safetensors · minimax_m2 · conversational · custom_code · fp8
MiniMax-AI / prince-canuma committed
Commit 197d677 · verified · 1 Parent(s): db1917a

Improve docs (#23)


- Improve docs (361cfd8ed9ffd7f960182e1e67a66e222ff9cc72)


Co-authored-by: Prince Canuma <prince-canuma@users.noreply.huggingface.co>

Files changed (1)
  1. docs/mlx_deploy_guide.md +2 -2
docs/mlx_deploy_guide.md CHANGED
````diff
@@ -12,7 +12,7 @@ Run, serve, and fine-tune [**MiniMax-M2**](https://huggingface.co/MiniMaxAI/Mini
 Install the `mlx-lm` package via pip:
 
 ```bash
-pip install mlx-lm
+pip install -U mlx-lm
 ```
 
 **CLI**
@@ -62,7 +62,7 @@ print(response)
 ```
 
 **Tips**
-- **Model variants**: Check [Hugging Face](https://huggingface.co/collections/mlx-community/minimax-m2) for `MiniMax-M2-4bit`, `6bit`, `8bit`, or `bfloat16` versions.
+- **Model variants**: Check this [MLX community collection on Hugging Face](https://huggingface.co/collections/mlx-community/minimax-m2) for `MiniMax-M2-4bit`, `6bit`, `8bit`, or `bfloat16` versions.
 - **Fine-tuning**: Use `mlx-lm.lora` for efficient parameter-efficient fine-tuning (PEFT).
 
 **Resources**
````
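The patched instructions can be exercised end to end with the commands below. This is a sketch, not part of the commit: the `-U` flag simply upgrades an existing `mlx-lm` install, and the exact model name `mlx-community/MiniMax-M2-4bit` is an assumption based on the variant names and collection linked in the diff.

```shell
# Upgrade (or install) mlx-lm, matching the patched line in the guide
pip install -U mlx-lm

# Generate with a quantized variant from the MLX community collection
# (model name assumed from the `MiniMax-M2-4bit` variant mentioned in the diff)
mlx_lm.generate --model mlx-community/MiniMax-M2-4bit --prompt "Hello" --max-tokens 64
```

Downloading and running the model requires an Apple Silicon machine with enough memory for the chosen quantization, so this is best treated as a smoke test of the docs rather than something to run in CI.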