FREESON: Retriever-Free Retrieval-Augmented Reasoning via Corpus-Traversing MCTS
Abstract
FREESON, a novel framework that unifies the retrieval and reasoning roles within a single LRM via CT-MCTS, improves multi-step reasoning performance on QA tasks by removing the retriever's representation bottleneck.
Large Reasoning Models (LRMs) have demonstrated remarkable capabilities in multi-step reasoning and in calling search engines at appropriate steps. However, existing retrieval-augmented reasoning approaches rely on separate retrieval models, limiting the LRM's role in retrieval to deciding when to retrieve and how to query. This separation not only increases hardware and operational costs but also leads to errors in the retrieval process due to the representation bottleneck, a phenomenon where the retriever's embedding space is not expressive enough to meet the generator's requirements. To address this, we shift our perspective from sequence-to-sequence matching to locating answer-containing paths within the corpus, and propose a novel framework called FREESON (Retriever-FREE Retrieval-Augmented ReaSONing). This framework enables LRMs to retrieve relevant knowledge on their own by acting as both generator and retriever. To achieve this, we introduce a variant of the MCTS algorithm specialized for the retrieval task, which we call CT-MCTS (Corpus-Traversing Monte Carlo Tree Search). In this algorithm, LRMs traverse the corpus toward answer-containing regions. Our results on five open-domain QA benchmarks, covering single-hop and multi-hop questions, show that FREESON achieves an average improvement of 14.4% in EM and F1 over four multi-step reasoning models with a separate retriever, and it also performs comparably to the strongest baseline, surpassing it by 3% on PopQA and 2WikiMultihopQA.
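The abstract does not specify how CT-MCTS traverses the corpus, but the general shape of an MCTS-based retrieval search can be illustrated with a toy sketch. The corpus graph, the link structure, and the reward function below are all illustrative assumptions: in a real system, an LRM would score whether a passage supports the answer, whereas here a simple substring check stands in for it.

```python
import math
import random

# Hypothetical toy corpus: passages with links to related passages.
# The paper's actual corpus structure and LRM-guided expansion are
# not described in the abstract.
CORPUS = {
    "p0": {"text": "Paris is a major city.", "links": ["p1", "p2"]},
    "p1": {"text": "France is a country in Europe.", "links": ["p3"]},
    "p2": {"text": "The Eiffel Tower stands in Paris.", "links": ["p3"]},
    "p3": {"text": "The capital of France is Paris.", "links": []},
}

class Node:
    """One passage in the search tree."""
    def __init__(self, pid, parent=None):
        self.pid = pid
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0

def uct(node, c=1.4):
    """Upper Confidence bound for Trees: balance exploitation and exploration."""
    if node.visits == 0:
        return float("inf")
    exploit = node.value / node.visits
    explore = c * math.sqrt(math.log(node.parent.visits) / node.visits)
    return exploit + explore

def reward(pid, answer):
    """Stand-in for an LRM judging whether a passage supports the answer."""
    return 1.0 if answer.lower() in CORPUS[pid]["text"].lower() else 0.0

def ct_mcts_sketch(root_pid, answer, iters=100, seed=0):
    """Run MCTS over the passage graph; return the best passage found."""
    random.seed(seed)
    root = Node(root_pid)
    best_score, best_pid = reward(root_pid, answer), root_pid
    for _ in range(iters):
        node = root
        # Selection: descend by UCT while the node is fully expanded.
        while node.children and len(node.children) == len(CORPUS[node.pid]["links"]):
            node = max(node.children, key=uct)
        # Expansion: add one not-yet-tried linked passage.
        tried = {c.pid for c in node.children}
        untried = [p for p in CORPUS[node.pid]["links"] if p not in tried]
        if untried:
            child = Node(random.choice(untried), parent=node)
            node.children.append(child)
            node = child
        # Evaluation + backpropagation.
        r = reward(node.pid, answer)
        if r > best_score:
            best_score, best_pid = r, node.pid
        while node is not None:
            node.visits += 1
            node.value += r
            node = node.parent
    return best_pid
```

Under these assumptions, `ct_mcts_sketch("p0", "capital")` walks outward from the starting passage and settles on the passage whose text contains the target, mirroring the idea of steering traversal toward answer-containing regions rather than embedding-space matching.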
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Memory-Aware and Uncertainty-Guided Retrieval for Multi-Hop Question Answering (2025)
- Search and Refine During Think: Autonomous Retrieval-Augmented Reasoning of LLMs (2025)
- ARise: Towards Knowledge-Augmented Reasoning via Risk-Adaptive Search (2025)
- PropRAG: Guiding Retrieval with Beam Search over Proposition Paths (2025)
- Training a Utility-based Retriever Through Shared Context Attribution for Retrieval-Augmented Language Models (2025)
- ELITE: Embedding-Less retrieval with Iterative Text Exploration (2025)
- CRAFT: Training-Free Cascaded Retrieval for Tabular QA (2025)