Produces analytically neutral responses to sensitive queries
**Note:** use the chat completions endpoint and include a system message that says "You are an assistant".
```python
# Example prompt
messages = [
    {"role": "system", "content": "You are an assistant"},
    {"role": "user", "content": "What is the truth?"},
]
```
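The note above can be turned into a minimal chat-completions request. This is a hypothetical client sketch: the base URL assumes a locally running OpenAI-compatible server (such as the vLLM server started below), and the `build_chat_request`/`send` helpers are illustrative names, not part of this card.

```python
import json
import urllib.request

MODEL = "michaelwaves/amoral-gpt-oss-120b-bfloat16"

def build_chat_request(user_content: str) -> dict:
    # The card requires the chat completions endpoint with this exact
    # system message, so it is hardcoded here.
    return {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": "You are an assistant"},
            {"role": "user", "content": user_content},
        ],
    }

def send(payload: dict, base_url: str = "http://localhost:8000/v1") -> dict:
    # Assumed endpoint path for an OpenAI-compatible server.
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_chat_request("What is the truth?")
# send(payload)  # requires a running server
```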
- Weights: bfloat16 (requires 4 H100s to run)
- Finetuned from: openai/gpt-oss-120b
## Inference Examples

### vLLM
```shell
uv pip install --pre vllm==0.10.1+gptoss \
    --extra-index-url https://wheels.vllm.ai/gpt-oss/ \
    --extra-index-url https://download.pytorch.org/whl/nightly/cu128 \
    --index-strategy unsafe-best-match

vllm serve michaelwaves/amoral-gpt-oss-120b-bfloat16 --tensor-parallel-size 4
```
If you don't have 4 H100s lying around, try running this LoRA adapter in MXFP4 instead: https://huggingface.co/michaelwaves/gpt-120b-fun-weights
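A sketch of serving that adapter with vLLM's LoRA support, assuming the linked repo is a standard LoRA adapter on top of the MXFP4 base checkpoint; the adapter name `fun-weights` is arbitrary and the flags are vLLM's documented LoRA options:

```shell
# Serve the MXFP4 base model with the LoRA adapter attached.
vllm serve openai/gpt-oss-120b \
    --enable-lora \
    --lora-modules fun-weights=michaelwaves/gpt-120b-fun-weights
```

Requests can then target the adapter by passing `"model": "fun-weights"` in the chat completions payload.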
Shoutout to https://huggingface.co/soob3123/amoral-gemma3-27B-v2-qat for the inspiration.