arxiv:2510.18672

Reasoning Language Model Inference Serving Unveiled: An Empirical Study

Published on Oct 21
· Submitted by Qi Li on Oct 30
Authors:
Qi Li et al.
Abstract

AI-generated summary

A study on the serving performance and behavior of reasoning large language models (RLLMs) reveals distinct differences from traditional LLMs and evaluates the effectiveness of various inference optimization techniques.

Reasoning large language models (RLLMs) have proven competitive with general LLMs on complex reasoning tasks such as mathematics and coding. However, the serving performance and behavior of RLLMs remain largely unexplored, which may hinder their deployment and utilization in real-world scenarios. To close this gap, we conduct a comprehensive study of RLLM serving. We first perform a pilot study comparing the serving performance of RLLMs and traditional LLMs, revealing several distinct differences in serving behavior: (1) significant memory usage and fluctuation; (2) straggler requests; (3) adaptive running time; (4) domain preference. We then investigate whether existing inference optimization techniques remain effective for RLLMs. Our main takeaways are that model quantization and speculative decoding can improve serving efficiency with little compromise to RLLM accuracy, while prefix caching and KV cache quantization may even degrade accuracy or serving performance for small RLLMs. Lastly, we evaluate under real-world workloads modeled by a Gamma distribution to verify our findings; results across different datasets align with our main findings on RLLM serving. We hope our work provides the research community and industry with insights to advance RLLM inference serving.
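
The abstract mentions evaluation under real-world workloads modeled by a Gamma distribution. As a minimal sketch of what such workload generation can look like (not taken from the paper; the function name, parameters, and example values below are hypothetical), one common approach is to draw request inter-arrival times from a Gamma distribution and replay the resulting timestamps against a serving endpoint:

```python
import numpy as np

def gamma_arrival_times(num_requests: int, mean_interval_s: float,
                        cv: float, seed: int = 0) -> np.ndarray:
    """Generate request arrival timestamps with Gamma-distributed
    inter-arrival times.

    mean_interval_s: average gap between requests (1 / request rate).
    cv: coefficient of variation of the gaps; cv = 1 recovers a
        Poisson process, cv > 1 gives burstier traffic.
    """
    rng = np.random.default_rng(seed)
    shape = 1.0 / (cv ** 2)              # Gamma shape parameter k
    scale = mean_interval_s / shape      # Gamma scale parameter theta
    intervals = rng.gamma(shape, scale, size=num_requests)
    return np.cumsum(intervals)          # absolute send times in seconds

# Example: 100 requests at roughly 2 req/s with bursty (cv = 2) arrivals.
timestamps = gamma_arrival_times(num_requests=100, mean_interval_s=0.5, cv=2.0)
print(timestamps[:5])
```

A load generator would then sleep until each timestamp, issue the corresponding request to the serving engine, and record latency and throughput.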

Community

Paper author · Paper submitter

We hope our work can provide the research community and industry with insightful perspectives to help advance studies in efficient RLLM serving. To the best of our knowledge, we are the first to dissect RLLM serving performance.

