# SiddhJagani/gpt-oss-safeguard-20b-mlx-2Bit

The Model [SiddhJagani/gpt-oss-safeguard-20b-mlx-2Bit](https://huggingface.co/SiddhJagani/gpt-oss-safeguard-20b-mlx-Q2) was converted to MLX format from [openai/gpt-oss-safeguard-20b](https://huggingface.co/openai/gpt-oss-safeguard-20b) using mlx-lm version **0.28.2**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("SiddhJagani/gpt-oss-safeguard-20b-mlx-Q2")

prompt = "hello"

# Apply the chat template when the tokenizer provides one
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```