Add comprehensive model card for LimRank-7B

#1 opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +43 -0
README.md ADDED
---
license: mit
library_name: transformers
pipeline_tag: text-ranking
---

# LimRank: Less is More for Reasoning-Intensive Information Reranking

This repository contains the `limrank-7b` model, based on `Qwen2.5-7B`, presented in the paper [LimRank: Less is More for Reasoning-Intensive Information Reranking](https://huggingface.co/papers/2510.23544).

LimRank demonstrates an efficient approach to adapting modern Large Language Models (LLMs) for reasoning-intensive information reranking. The key ingredient is `LIMRANK-SYNTHESIZER`, a reusable, open-source pipeline that generates minimal yet high-quality synthetic supervision data. Using less than 5% of the data typically required by prior methods, LimRank achieves competitive performance on challenging benchmarks such as BRIGHT and FollowIR, and generalizes well to downstream tasks including scientific literature search and retrieval-augmented generation.

## Links
- **Paper**: [LimRank: Less is More for Reasoning-Intensive Information Reranking](https://huggingface.co/papers/2510.23544)
- **Code/GitHub Repository**: [https://github.com/SighingSnow/LimRank](https://github.com/SighingSnow/LimRank)
- **Trained LimRank Model**: [songtingyu/limrank-7b](https://huggingface.co/songtingyu/limrank)
- **Training Datasets**: [songtingyu/limrank-data](https://huggingface.co/datasets/songtingyu/limrank-data)
- **Evaluation Results**: [songtingyu/limrank-results](https://huggingface.co/datasets/songtingyu/limrank-results)
- **Running Files for Reproduction**: [songtingyu/limrank-run-files](https://huggingface.co/datasets/songtingyu/limrank-run-files)

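## Usage

A minimal usage sketch with the `transformers` library is shown below. This is illustrative only: the prompt template in `build_prompt` and the yes/no next-token scoring are assumptions, not the official LimRank inference procedure — consult the GitHub repository for the exact inference code and prompt format.

```python
# Hypothetical usage sketch: the prompt template and scoring below are
# illustrative assumptions, not the official LimRank inference code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "songtingyu/limrank-7b"  # assumed repo id; see the Links section


def build_prompt(query: str, passage: str) -> str:
    """Format a (query, passage) pair for a relevance judgment (assumed template)."""
    return (
        f"Query: {query}\n"
        f"Passage: {passage}\n"
        "Is this passage relevant to the query? Answer yes or no."
    )


def rerank(model, tokenizer, query: str, passages: list[str]) -> list[tuple[str, float]]:
    """Score each passage by the probability of 'yes' as the next token, sorted descending."""
    yes_id = tokenizer.encode("yes", add_special_tokens=False)[0]
    scored = []
    for passage in passages:
        inputs = tokenizer(build_prompt(query, passage), return_tensors="pt").to(model.device)
        with torch.no_grad():
            logits = model(**inputs).logits[0, -1]
        scored.append((passage, torch.softmax(logits, dim=-1)[yes_id].item()))
    return sorted(scored, key=lambda x: x[1], reverse=True)


# Example (downloads the 7B model; a GPU is recommended):
# tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto", device_map="auto")
# for passage, score in rerank(model, tokenizer, "why is the sky blue", ["passage A", "passage B"]):
#     print(f"{score:.3f}  {passage}")
```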
## Citation
If you find our paper useful, please cite our work:

```bibtex
@misc{song2025limrankreasoningintensiveinformationreranking,
      title={LimRank: Less is More for Reasoning-Intensive Information Reranking},
      author={Tingyu Song and Yilun Zhao and Siyue Zhang and Chen Zhao and Arman Cohan},
      year={2025},
      eprint={2510.23544},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2510.23544},
}
```

## Acknowledgements
We would like to thank the authors of the following papers and repositories for their open-source contributions:
* [rank1](https://github.com/orionw/rank1)
* [ReasonIR](https://github.com/facebookresearch/ReasonIR)
* [MTEB](https://github.com/embeddings-benchmark/mteb)

## License
The model is released under the [MIT License](https://github.com/SighingSnow/LimRank/blob/main/LICENSE).