Tags: Text Classification · Transformers · Safetensors · English · qwen2 · nvidia · qwen2.5 · reward-model · text-generation-inference
zhilinw committed · verified · Commit e5e73e4 · 1 Parent(s): 9f7b650

Update README.md

Files changed (1): README.md +137 -40
README.md CHANGED
@@ -21,32 +21,33 @@ library_name: transformers
  # Model Overview
 
  ## Description
 
- **Qwen-2.5-Nemotron-32B-Reward** is a mid-sized reward model built on **Qwen2.5-32B-Instruct** and fine-tuned using Bradley–Terry pairwise comparisons from the HelpSteer2 and HelpSteer3 datasets.
-
- This model balances parameter count, inference latency, and scoring fidelity, making it suitable for production deployments.
-
- **Purpose:**
- A specialized reward model that assigns a numerical “reward” score to assess the quality of LLM-generated responses.
-
- **Architecture & Training:**
-   - **Base:** Built on top of the Qwen2.5-32B-Instruct checkpoint.
-   - **Reward Framework:** Bradley–Terry pairwise comparison methodology.
-   - **Training Data:** Human-annotated comparisons from HelpSteer2 & HelpSteer3.
-
- **How It Works:**
-   - **Input:** An English dialogue (user ↔ assistant) of up to 8,192 tokens.
-   - **Output:** A single “reward” value assessing the last assistant response.
-   - **Usage:** Use `AutoModelForSequenceClassification` for scoring (a sketch follows this section).
-
- > **Note:** Scores are only directly comparable for different answers to the *same* prompt. A higher reward on one conversation indicates better performance within that context, but does *not* translate across unrelated prompts.
-
- ## License/Terms of Use
-
- Use of this model is governed by the [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/).
-
- ---
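
For illustration, the scoring flow described above might look like the following minimal sketch. It assumes the checkpoint loads with a single-logit classification head (as the `AutoModelForSequenceClassification` usage implies); the dialogue strings are placeholders.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "nvidia/Qwen-2.5-Nemotron-32B-Reward"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

# Placeholder dialogue; the reward scores the last assistant turn.
messages = [
    {"role": "user", "content": "What is 2+2?"},
    {"role": "assistant", "content": "2+2 equals 4."},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

with torch.no_grad():
    # The single classification logit is the scalar reward (higher is better).
    reward = model(input_ids=input_ids).logits[0][0].item()
print(reward)
```
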
  ## RM-Bench Leaderboard
 
@@ -66,27 +67,39 @@ As of 29 May 2025, Qwen-2.5-Nemotron-32B-Reward is slightly lower on [RM-Bench](
  | **Qwen-2.5-Nemotron-32B-Reward** | 61.7 | 74.5 | 76.2 | 82.1 | 70.3 |
  | [Llama-3.3-Nemotron-70B-Reward](https://huggingface.co/nvidia/Llama-3.3-Nemotron-70B-Reward) | 70.8 | 76.5 | 82.1 | 66.7 | 73.7 |
 
- ---
-
- ## Use Case
-
- Qwen-2.5-Nemotron-32B-Reward labels an LLM-generated response to a user query with a reward score.
-
- ---
-
- ## References
-
- * [HelpSteer3-Preference](https://arxiv.org/abs/2505.11475)
- * [HelpSteer2-Preference](https://arxiv.org/abs/2410.01257)
- * [Qwen2.5 Model Card](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct)
-
- ---
-
- ## Model Architecture
-
- **Architecture Type:** Transformer
- **Network:** Qwen2.5, 32B parameters
- **Training:** Bradley–Terry preference training (loss sketched below)
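
The Bradley–Terry objective referenced above can be summarized in a few lines. The following is an illustrative sketch of the pairwise loss, not the exact training code: the model is fit so that the sigmoid of the reward gap matches the human preference.

```python
import torch
import torch.nn.functional as F

def bradley_terry_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """-log sigmoid(r_chosen - r_rejected), averaged over a batch of preference pairs."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Toy batch: pairs where the chosen response outscores the rejected one give low loss.
loss = bradley_terry_loss(torch.tensor([2.0, 1.5]), torch.tensor([-1.0, 0.5]))
print(loss.item())
```
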
  ## Quick Start
 
@@ -120,17 +133,100 @@ for response in [good_response, bad_response]:
  # reward for bad_response = -7.53515625
  ```
 
  ## Training Datasets:
 
- **Dataset Name:**
- HelpSteer-2/-3
-
- **Dataset Links:**
- https://huggingface.co/datasets/nvidia/HelpSteer2
- https://huggingface.co/datasets/nvidia/HelpSteer3
 
  ## Ethical Considerations:
  NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
 
  Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
 
  ## Citation
@@ -148,12 +244,13 @@ If you find this model useful, please cite the following works:
      url={https://arxiv.org/abs/2505.11475},
  }
 
- @misc{wang2024helpsteer2,
-     title={Help{S}teer2: Open-source dataset for training top-performing reward models},
-     author={Zhilin Wang and Yi Dong and Olivier Delalleau and Jiaqi Zeng and Gerald Shen and Daniel Egert and Jimmy J. Zhang and Makesh Narsimhan Sreedhar and Oleksii Kuchaiev},
-     year={2024},
-     eprint={2406.08673},
      archivePrefix={arXiv},
-     primaryClass={cs.CL}
  }
  ```
 
  # Model Overview
 
  ## Description
+ Qwen-2.5-Nemotron-32B-Reward is a reward model that assigns a numerical “reward” score to evaluate the quality of LLM-generated responses. A higher reward on one conversation indicates better performance within that context, but does *not* translate across unrelated prompts.
 
+ This model is ready for commercial/non-commercial use.
 
+ ## License/Terms of Use
+
+ Use of this model is governed by the [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/).
 
+ ### Deployment Geography
 
+ Global
 
+ ## Use Case
 
+ Qwen-2.5-Nemotron-32B-Reward labels an LLM-generated response to a user query with a reward score.
 
+ ## Release Date:
 
+ HuggingFace 06/27/2025 via https://huggingface.co/nvidia/Qwen-2.5-Nemotron-32B-Reward
 
+ ## References
+
+ * [HelpSteer3-Preference](https://arxiv.org/abs/2505.11475)
+ * [HelpSteer2-Preference](https://arxiv.org/abs/2410.01257)
+ * [Qwen2.5 Model Card](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct)
 
  ## RM-Bench Leaderboard
 
  | **Qwen-2.5-Nemotron-32B-Reward** | 61.7 | 74.5 | 76.2 | 82.1 | 70.3 |
  | [Llama-3.3-Nemotron-70B-Reward](https://huggingface.co/nvidia/Llama-3.3-Nemotron-70B-Reward) | 70.8 | 76.5 | 82.1 | 66.7 | 73.7 |
 
+ ## Model Architecture
 
+ **Architecture Type:** Transformer <br>
+ **Network Architecture:** Qwen2.5 <br>
 
+ We developed this model using [Qwen-2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) as its foundation. This model contains 32 billion parameters.
 
+ ## Input:
+ **Input Type(s):** Text <br>
+ **Input Format:** String <br>
+ **Input Parameters:** One-Dimensional (1D) <br>
+ **Other Properties Related to Input:** Max of 128k tokens (but trained only on conversations up to 8K tokens; see the length-check sketch just below) <br>
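
Because conversations beyond the trained 8K-token range may score less reliably, one might count tokens before scoring. A small sketch (the threshold mirrors the note above; the dialogue turns are placeholders):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nvidia/Qwen-2.5-Nemotron-32B-Reward")

messages = [
    {"role": "user", "content": "..."},       # placeholder turns
    {"role": "assistant", "content": "..."},
]
# With the default tokenize=True, apply_chat_template returns a list of token ids.
num_tokens = len(tokenizer.apply_chat_template(messages))
if num_tokens > 8192:
    print(f"warning: {num_tokens} tokens exceeds the 8K range seen in training")
```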
 
+ ## Output:
+ **Output Type(s):** Float <br>
+ **Output Format:** One Single Float <br>
+ **Output Parameters:** One-Dimensional (1D) <br>
+ **Other Properties Related to Output:** The float value represents the quality of the response, with a higher value representing higher quality. <br>
 
+ Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions. <br>
 
+ ## Software Integration:
+ **Runtime Engine(s):** <br>
+ * [NeMo - 24.05.llama.3.1] <br>
+
+ **Supported Hardware Microarchitecture Compatibility:** <br>
+ * NVIDIA Ampere <br>
+ * NVIDIA Hopper <br>
+ * NVIDIA Turing <br>
+
+ **Supported Operating System(s):** Linux <br>
 
  ## Quick Start
 
  # reward for bad_response = -7.53515625
  ```
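
Since reward values are only comparable between answers to the same prompt, a common pattern is ranking several candidate responses per prompt. A sketch reusing the Quick Start setup (`tokenizer` and `model` as loaded there; `score_response` is an illustrative helper, not part of the model card):

```python
import torch

def score_response(prompt: str, response: str) -> float:
    # Illustrative helper wrapping the scoring call from the Quick Start above.
    messages = [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": response},
    ]
    input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
    with torch.no_grad():
        return model(input_ids=input_ids).logits[0][0].item()

prompt = "Explain overfitting in one sentence."
candidates = [
    "Overfitting is bad.",
    "Overfitting is when a model memorizes its training data and fails to generalize.",
]
best = max(candidates, key=lambda r: score_response(prompt, r))
```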
 
+ ## Model Version:
+ v1.0
+
+ # Training, Testing and Evaluation Datasets:
+
  ## Training Datasets:
 
+ **Dataset Name:** HelpSteer3 <br>
+ **Dataset Link:** https://huggingface.co/datasets/nvidia/HelpSteer3
+
+ **Data Collection Method by dataset** <br>
+ * [Hybrid: Human, Synthetic] <br>
+
+ **Labeling Method by dataset** <br>
+ * [Human] <br>
+
+ **Properties:** <br>
+ * 38,459 prompts, each with a pair of responses as well as human preferences between the pair of responses.
+
+ **Dataset Name:** HelpSteer2 <br>
+ **Dataset Link:** https://huggingface.co/datasets/nvidia/HelpSteer2
+
+ **Data Collection Method by dataset** <br>
+ * [Hybrid: Human, Synthetic] <br>
+
+ **Labeling Method by dataset** <br>
+ * [Human] <br>
+
+ **Properties:** <br>
+ * 6,766 prompts, each with a pair of responses as well as human preferences between the pair of responses.
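
For orientation, one way to inspect a preference record is sketched below. The exact field names are not reproduced here, so the snippet only prints the schema (assumes the `datasets` library and public access; if the dataset exposes multiple configurations, a config name may be required):

```python
from datasets import load_dataset

# Each HelpSteer3 record pairs two responses to one prompt/context with a
# human preference between them; print the field names rather than assuming them.
ds = load_dataset("nvidia/HelpSteer3", split="train")
print(ds.num_rows)
print(ds[0].keys())
```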
 
+ ## Testing Datasets:
+
+ **Dataset Name:** HelpSteer3 <br>
+ **Dataset Link:** https://huggingface.co/datasets/nvidia/HelpSteer3
+
+ **Data Collection Method by dataset** <br>
+ * [Hybrid: Human, Synthetic] <br>
+
+ **Labeling Method by dataset** <br>
+ * [Human] <br>
+
+ **Properties:** <br>
+ * 2,017 prompts, each with a pair of responses as well as human preferences between the pair of responses.
+
+ **Dataset Name:** HelpSteer2 <br>
+ **Dataset Link:** https://huggingface.co/datasets/nvidia/HelpSteer2
+
+ **Data Collection Method by dataset** <br>
+ * [Hybrid: Human, Synthetic] <br>
+
+ **Labeling Method by dataset** <br>
+ * [Human] <br>
+
+ **Properties:** <br>
+ * 352 prompts, each with a pair of responses as well as human preferences between the pair of responses.
+
+ ## Evaluation Datasets:
+
+ **Dataset Name:** RM-Bench <br>
+ **Dataset Link:** https://huggingface.co/datasets/THU-KEG/RM-Bench
+
+ **Data Collection Method by dataset** <br>
+ * [Hybrid: Human, Synthetic] <br>
+
+ **Labeling Method by dataset** <br>
+ * [Hybrid: Human, Synthetic] <br>
+
+ **Properties:** <br>
+ * 1,327 prompts, each with three pairs of responses as well as preferences within each pair of responses.
+
+ **Dataset Name:** JudgeBench <br>
+ **Dataset Link:** https://huggingface.co/datasets/ScalerLab/JudgeBench
+
+ **Data Collection Method by dataset** <br>
+ * [Hybrid: Human, Synthetic] <br>
+
+ **Labeling Method by dataset** <br>
+ * [Hybrid: Human, Synthetic] <br>
+
+ **Properties:** <br>
+ * 350 prompts, each with a pair of responses as well as preferences between the pair of responses.
+
+ # Inference:
+ **Engine:** PyTorch <br>
+ **Test Hardware:** H100, A100 80GB, A100 40GB <br>
 
  ## Ethical Considerations:
  NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
+ For more detailed information on ethical considerations for this model, please see the Model Card++ [Explainability](explainability.md), [Bias](bias.md), [Safety & Security](safety.md), and [Privacy](privacy.md) Subcards.
+
  Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
 
  ## Citation
 
      url={https://arxiv.org/abs/2505.11475},
  }
 
+ @misc{wang2025helpsteer2preferencecomplementingratingspreferences,
+     title={HelpSteer2-Preference: Complementing Ratings with Preferences},
+     author={Zhilin Wang and Alexander Bukharin and Olivier Delalleau and Daniel Egert and Gerald Shen and Jiaqi Zeng and Oleksii Kuchaiev and Yi Dong},
+     year={2025},
+     eprint={2410.01257},
      archivePrefix={arXiv},
+     primaryClass={cs.LG},
+     url={https://arxiv.org/abs/2410.01257},
  }
  ```