Model Card for AIPlans/qwen3-0.6b-base-hl-RM
A reward model to evaluate and score AI outputs.
Model Details
Model Description
This is the model card for a reward model fine-tuned from the Qwen3 model using Anthropic's hh-rlhf dataset. The model is trained and evaluated on the harmless-base subset of the dataset.
- Developed by: AI Plans
- Model type: Reward Model
- Finetuned from model: Qwen/Qwen3-0.6B-Base
Model Sources [optional]
- Repository: [More Information Needed]
- Paper [optional]: [More Information Needed]
- Demo [optional]: [More Information Needed]
Uses
Direct Use
[More Information Needed]
Downstream Use [optional]
[More Information Needed]
Out-of-Scope Use
[More Information Needed]
Bias, Risks, and Limitations
[More Information Needed]
Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
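In the meantime, here is a minimal sketch, assuming the checkpoint loads as a single-logit sequence-classification (reward) head via `AutoModelForSequenceClassification`; the prompt/response formatting below is illustrative only and may not match the template used during training.

```python
# A minimal sketch, assuming a standard sequence-classification reward head
# with one logit. Higher scores indicate a more preferred response.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "AIPlans/qwen3-0.6b-base-hl-RM"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, torch_dtype=torch.float16)
model.eval()

# Illustrative hh-rlhf-style formatting; the exact training format is not documented here.
prompt = "Human: How do I politely decline a meeting?\n\nAssistant:"
response = " You could thank them for the invitation and explain that you have a conflict."

inputs = tokenizer(prompt + response, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits[0].item()
print(f"Reward score: {score:.4f}")
```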
Training Details
Training Data
The model is trained on the harmless-base subset of Anthropic's hh-rlhf dataset, as described above.
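A minimal loading sketch, assuming the Anthropic/hh-rlhf dataset on the Hugging Face Hub, which exposes the harmless-base subset through the `data_dir` argument:

```python
# Illustrative only: loads the harmless-base subset of Anthropic's hh-rlhf dataset.
from datasets import load_dataset

dataset = load_dataset("Anthropic/hh-rlhf", data_dir="harmless-base")
print(dataset)                         # train and test splits of preference pairs
print(dataset["train"][0]["chosen"])   # each example has "chosen" and "rejected" texts
```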
Training Procedure
Preprocessing [optional]
[More Information Needed]
Training Hyperparameters
- Training regime: fp16 mixed precision
- Batch size: 16
- Gradient accumulation steps: 32
- Epochs: 2
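The training script is not published with this card; the snippet below is only a sketch of how the hyperparameters above could be expressed with `transformers` `TrainingArguments`. Everything other than fp16, batch size, gradient accumulation, and epochs (e.g. `output_dir`, learning rate, the trainer itself) is an assumption and is not documented here.

```python
# Illustrative only: maps the documented hyperparameters onto TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="qwen3-0.6b-base-hl-RM",  # hypothetical output path
    fp16=True,                           # fp16 mixed-precision training
    per_device_train_batch_size=16,      # batch size = 16
    gradient_accumulation_steps=32,      # gradient accumulation = 32
    num_train_epochs=2,                  # epochs = 2
)
```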
Speeds, Sizes, Times [optional]
Testing Data
The harmless-base test split of Anthropic's hh-rlhf dataset.
Results
Final-epoch results on the harmless-base test split:
| Training loss | Validation loss | Accuracy |
|---|---|---|
| 0.5975 | 0.5518 | 0.7177 |
Model Examination [optional]
[More Information Needed]
Environmental Impact
- Hardware Type: NVIDIA A100 GPU
- Hours used: ~2.5 hours