---
pipeline_tag: text-generation
library_name: transformers
license: apache-2.0
---

# Rethinking Reward Models for Multi-Domain Test-Time Scaling

The `gORM-14B-merged` model is a generative outcome reward model presented in the paper [Rethinking Reward Models for Multi-Domain Test-Time Scaling](https://huggingface.co/papers/2510.00492). It is designed to evaluate reasoning trajectories and final answers produced by large language models across diverse domains.

## Abstract

The reliability of large language models (LLMs) during test-time scaling is often assessed with *external verifiers* or *reward models* that distinguish correct reasoning from flawed logic. Prior work generally assumes that process reward models (PRMs), which score every intermediate reasoning step, outperform outcome reward models (ORMs) that assess only the final answer. This view is based mainly on evidence from narrow, math-adjacent domains. We present the first unified evaluation of four reward model variants, discriminative ORM and PRM (DisORM, DisPRM) and generative ORM and PRM (GenORM, GenPRM), across 14 diverse domains. Contrary to conventional wisdom, we find that (i) DisORM performs on par with DisPRM, (ii) GenPRM is not competitive, and (iii) overall, GenORM is the most robust, yielding significant and consistent gains across every tested domain. We attribute this to PRM-style stepwise scoring, which inherits label noise from LLM auto-labeling and has difficulty evaluating long reasoning trajectories, including those involving self-correcting reasoning. Our theoretical analysis shows that step-wise aggregation compounds errors as reasoning length grows, and our empirical observations confirm this effect. These findings challenge the prevailing assumption that fine-grained supervision is always better and support generative outcome verification for multi-domain deployment. We publicly release [our code, datasets, and checkpoints](https://github.com/db-Lee/Multi-RM) to facilitate future research in multi-domain settings.

## Code

The official code and other checkpoints can be found in the [GitHub repository](https://github.com/db-Lee/Multi-RM).

## Usage

This model is compatible with the `transformers` library and can be used for generative reward inference.

### Installation

First, create a conda environment and install the required dependencies, as described in the GitHub repository:

```bash
conda create -n multi-rm python=3.10.14
conda activate multi-rm
pip install -r requirements.txt
pip install flash-attn --no-build-isolation
```

### Inference (Reward)

To perform inference with the `gORM-14B-merged` model, you can use the script provided in the GitHub repository. Replace `[TEST]` with a specific test dataset (e.g., `test`, `test_smollm`, `test_qwen`, `test_gemma`, `test_llama`) as listed in the [Datasets section of the GitHub README](https://github.com/db-Lee/Multi-RM#datasets).
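If you only want a quick, self-contained check of the checkpoint, you can also load it directly with `transformers` as a causal LM and prompt it to judge a candidate solution. The sketch below is illustrative only: the verification prompt and the Yes/No framing are assumptions, not the exact template used by `generative.get_reward`, so rely on the official script (shown afterwards) to reproduce the paper's reward computation.

```python
# Minimal sketch: direct generative reward inference with transformers.
# NOTE: the prompt wording below is an assumption for illustration; the official
# prompt construction lives in the generative module of the Multi-RM repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dongboklee/gORM-14B-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

question = "What is 17 * 24?"
solution = "17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408. The answer is 408."

# Chat-style verification prompt (assumed format, not the official template).
messages = [{
    "role": "user",
    "content": (
        "You are given a problem and a candidate solution.\n\n"
        f"Problem: {question}\n\nSolution: {solution}\n\n"
        "Is the final answer correct? Answer Yes or No, then briefly explain."
    ),
}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=256, do_sample=False)

judgment = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(judgment)  # e.g. a Yes/No verdict followed by a short rationale
```

To compute rewards as in the paper, run the official script from the repository: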
```bash
# Generative outcome reward model (gORM) inference.
# Checkpoint: dongboklee/gORM-14B-merged, task type: gORM
python -m generative.get_reward \
    --data_path dongboklee/[TEST] \
    --model_id dongboklee/gORM-14B-merged \
    --output_dir ./[REWARD_RESULTS]/gORM-14B-[TEST] \
    --task_type gORM \
    --category all
```

## Citation

If you find our work useful, please cite our paper:

```bibtex
@article{multi-rm,
  title   = {Rethinking Reward Models for Multi-Domain Test-Time Scaling},
  author  = {Lee, Dong Bok and Lee, Seanie and Park, Sangwoo and Kang, Minki and Baek, Jinheon and Kim, Dongki and Wagner, Dominik and Jin, Jiongdao and Lee, Heejun and Bocklet, Tobias and Wang, Jinyu and Fu, Jingjing and Hwang, Sung Ju and Bian, Jiang and Song, Lei},
  journal = {arXiv preprint arXiv:2510.00492},
  year    = {2025}
}
```