Text Generation · Transformers · Safetensors · English · gpt_oss

Tags: shining-valiant, shining-valiant-3, valiant, valiant-labs, gpt, gpt-oss, gpt-oss-20b, openai, 20b, reasoning, code, code-reasoning, science, science-reasoning, physics, biology, chemistry, earth-science, astronomy, machine-learning, artificial-intelligence, compsci, computer-science, information-theory, ML-Ops, math, cuda, deep-learning, agentic, LLM, neuromorphic, self-improvement, complex-systems, cognition, linguistics, philosophy, logic, epistemology, simulation, game-theory, knowledge-management, creativity, problem-solving, architect, engineer, developer, creative, analytical, expert, rationality, conversational, chat, instruct
Support our open-source dataset and model releases!
Shining Valiant 3: Qwen3-1.7B, Qwen3-4B, Qwen3-8B, gpt-oss-20b
Shining Valiant 3 is a science, AI design, and general reasoning specialist built on gpt-oss-20b.
- Finetuned on our newest science reasoning data generated with Deepseek R1 0528!
- AI to build AI: our high-difficulty AI reasoning data makes Shining Valiant 3 your friend for building with current AI tech and discovering new innovations and improvements!
- Improved general and creative reasoning to supplement problem-solving and general chat performance.
- Small model sizes allow running on local desktop and mobile, plus super-fast server inference!
 
Prompting Guide
Shining Valiant 3 uses the gpt-oss-20b prompt format.
Shining Valiant 3 is a reasoning finetune; reasoning level high is generally recommended.
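One documented convention for gpt-oss models is to state the reasoning level in a system message. The minimal sketch below just renders the prompt so you can confirm the level is picked up; the example question is only a placeholder, and the "Reasoning: high" phrasing follows the gpt-oss-20b base model card rather than anything specific to this finetune:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ValiantLabs/gpt-oss-20b-ShiningValiant3")

messages = [
    # gpt-oss models read the reasoning level (low / medium / high) from the system prompt.
    {"role": "system", "content": "Reasoning: high"},
    {"role": "user", "content": "Placeholder question goes here."},
]

# Render the prompt (no generation) to check that the reasoning level appears,
# then pass the same `messages` list to the pipeline in the example script below.
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))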
NOTE: This release of Shining Valiant 3 currently uses bf16 for all parameters. If you don't want to run in bf16, consider a quantized variant instead.
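For reference, bf16 weights for a roughly 20B-parameter model come to roughly 40 GB, which is why quantization is attractive on smaller GPUs. If no prebuilt quantization fits your setup, one option is on-the-fly 4-bit loading with bitsandbytes. This is a sketch, not an officially supported path for this model; verify that it behaves well with the gpt_oss MoE architecture on your library versions before relying on it:

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

model_id = "ValiantLabs/gpt-oss-20b-ShiningValiant3"

# Assumption: bitsandbytes 4-bit (NF4) loading works for this architecture with your
# transformers/bitsandbytes versions; treat this as a starting point, not a guarantee.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

Community quantizations of Shining Valiant 3, where they exist, are usually the simpler route.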
To get started, here is an example inference script adapted from the gpt-oss-20b model card:
from transformers import pipeline
import torch

model_id = "ValiantLabs/gpt-oss-20b-ShiningValiant3"

# Load the model in its checkpoint dtype (bf16) and spread it across available devices.
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Reversible Cellular Automata (RCAs) are CAs that have an inverse rule, allowing the simulation to run backward in time. Explain the theoretical significance of RCAs in the context of modeling physical laws that are time-symmetric. Describe the additional constraints that must be placed on a rule set to ensure it is reversible and discuss the challenges in constructing non-trivial reversible rules."},
]

# Reasoning traces can be long, so leave plenty of room for new tokens.
outputs = pipe(
    messages,
    max_new_tokens=12000,
)

# The pipeline returns the full chat history; the last entry is the assistant's reply
# (a dict with "role" and "content").
print(outputs[0]["generated_text"][-1])
Shining Valiant 3 is created by Valiant Labs.
Check out our HuggingFace page to see all of our models!
We care about open source. For everyone to use.
Model tree for ronx-labs/affine-081314: base model openai/gpt-oss-20b