Update README.md
README.md CHANGED
@@ -23,7 +23,7 @@ base_model_relation: finetune
`gpt-oss-safeguard-120b` and `gpt-oss-safeguard-20b` are safety reasoning models built upon gpt-oss. With these models, you can classify text content based on safety policies that you provide and perform a suite of foundational safety tasks. These models are intended for safety use cases. For other applications, we recommend using [gpt-oss models](https://huggingface.co/collections/openai/gpt-oss).
-This model `gpt-oss-safeguard-20b` (21B parameters with 3.6B active parameters) fits into GPUs with 16GB of VRAM. Check out [`gpt-oss-safeguard-
+This model `gpt-oss-safeguard-20b` (21B parameters with 3.6B active parameters) fits into GPUs with 16GB of VRAM. Check out [`gpt-oss-safeguard-120b`](https://huggingface.co/openai/gpt-oss-safeguard-120b) (117B parameters with 5.1B active parameters) for the larger model.
Both models were trained on our [harmony response format](https://github.com/openai/harmony) and should only be used with the harmony format, as they will not work correctly otherwise.
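
To make the policy-based classification described above concrete, here is a minimal sketch using the `transformers` pipeline, assuming a recent release with gpt-oss support. The policy text and labels are illustrative, not an official policy; the checkpoint's chat template is expected to render the harmony format from plain role/content messages.

```python
# Minimal sketch: classify text against a user-provided safety policy.
# The policy below is a hypothetical example, not an official one.
from transformers import pipeline

policy = """You are a content classifier. Policy: SPAM
Definition: unsolicited bulk or deceptive promotional content.
Label the user text as VIOLATES or SAFE, then briefly explain why."""

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-safeguard-20b",
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # spread layers across available GPUs
)

messages = [
    {"role": "system", "content": policy},          # your safety policy
    {"role": "user", "content": "WIN A FREE IPHONE!!! Click here now!!!"},
]

# The chat template shipped with the checkpoint handles the harmony
# formatting, so plain role/content messages suffice here.
output = generator(messages, max_new_tokens=512)
print(output[0]["generated_text"][-1])  # the assistant's classification
```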
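If you run your own inference stack rather than a library that applies the chat template for you, the harmony requirement means prompts must be rendered explicitly. A sketch based on the API documented in the [harmony](https://github.com/openai/harmony) repository (the policy text is again illustrative):

```python
# Sketch: render a policy-classification prompt in the harmony format
# with the openai-harmony package (pip install openai-harmony).
from openai_harmony import (
    Conversation,
    DeveloperContent,
    HarmonyEncodingName,
    Message,
    Role,
    SystemContent,
    load_harmony_encoding,
)

encoding = load_harmony_encoding(HarmonyEncodingName.HARMONY_GPT_OSS)

# Hypothetical example policy; supply your own in practice.
policy = "Label the user text as VIOLATES or SAFE under the spam policy."

convo = Conversation.from_messages([
    Message.from_role_and_content(Role.SYSTEM, SystemContent.new()),
    # The policy goes in the developer message; the content to
    # classify goes in the user message.
    Message.from_role_and_content(
        Role.DEVELOPER, DeveloperContent.new().with_instructions(policy)
    ),
    Message.from_role_and_content(Role.USER, "WIN A FREE IPHONE!!!"),
])

# Token IDs ready to feed to your inference engine as the prompt.
prompt_tokens = encoding.render_conversation_for_completion(convo, Role.ASSISTANT)

# After generation, completion tokens can be parsed back into messages:
# messages = encoding.parse_messages_from_completion_tokens(
#     completion_tokens, Role.ASSISTANT)
```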