arxiv:2506.05176

Qwen3 Embedding: Advancing Text Embedding and Reranking Through Foundation Models

Published on Jun 5 · Submitted by thenlper on Jun 6
#2 Paper of the day

AI-generated summary

The Qwen3 Embedding series, built on Qwen3 foundation models, offers advanced text embedding and reranking capabilities through a multi-stage training pipeline, achieving state-of-the-art performance across multilingual and retrieval benchmarks.

Abstract

In this work, we introduce the Qwen3 Embedding series, a significant advancement over its predecessor, the GTE-Qwen series, in text embedding and reranking capabilities, built upon the Qwen3 foundation models. Leveraging the Qwen3 LLMs' robust capabilities in multilingual text understanding and generation, our innovative multi-stage training pipeline combines large-scale unsupervised pre-training with supervised fine-tuning on high-quality datasets. Effective model merging strategies further ensure the robustness and adaptability of the Qwen3 Embedding series. During the training process, the Qwen3 LLMs serve not only as backbone models but also play a crucial role in synthesizing high-quality, rich, and diverse training data across multiple domains and languages, thus enhancing the training pipeline. The Qwen3 Embedding series offers a spectrum of model sizes (0.6B, 4B, 8B) for both embedding and reranking tasks, addressing diverse deployment scenarios where users can optimize for either efficiency or effectiveness. Empirical evaluations demonstrate that the Qwen3 Embedding series achieves state-of-the-art results across diverse benchmarks. Notably, it excels on the multilingual evaluation benchmark MTEB for text embedding, as well as in various retrieval tasks, including code retrieval, cross-lingual retrieval and multilingual retrieval. To facilitate reproducibility and promote community-driven research and development, the Qwen3 Embedding models are publicly available under the Apache 2.0 license.
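The abstract describes a two-stage setup: an embedding model for first-stage retrieval and a reranker for re-scoring a shortlist. A minimal sketch of that retrieve-then-rerank pattern, using made-up 3-dimensional vectors in place of real model outputs (a deployment would obtain embeddings from one of the 0.6B/4B/8B checkpoints, and the reranking stage here reuses cosine similarity purely as a stand-in for a reranker that scores query and document text jointly):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "embeddings" standing in for embedding-model outputs.
DOC_VECTORS = {
    "doc_python":  [0.9, 0.1, 0.0],
    "doc_rust":    [0.1, 0.9, 0.0],
    "doc_cooking": [0.0, 0.1, 0.9],
}

def retrieve(query_vec, doc_vectors, top_k=2):
    """Stage 1: rank every document by cosine similarity, keep a shortlist."""
    scored = sorted(doc_vectors.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:top_k]]

def rerank(query_vec, candidates, doc_vectors):
    """Stage 2: re-score only the shortlist (a real reranker would see the
    query and document text together; cosine is a placeholder here)."""
    return sorted(candidates,
                  key=lambda d: cosine(query_vec, doc_vectors[d]),
                  reverse=True)

query = [0.8, 0.2, 0.0]  # pretend embedding of a programming-related query
shortlist = retrieve(query, DOC_VECTORS, top_k=2)
final = rerank(query, shortlist, DOC_VECTORS)
print(final[0])  # doc_python
```

The split matters for cost: stage 1 can run against a precomputed index over the whole corpus, while the more expensive reranker only touches the shortlist.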

Community

Paper author · Paper submitter

🚀 We introduce the Qwen3-Embedding and Qwen3-Reranker series: setting new standards in multilingual text embedding and reranking!

✨ Highlights:
✅ Available in 0.6B / 4B / 8B versions
✅ Supports 119 languages
✅ State-of-the-art performance on MMTEB, MTEB, and MTEB-Code
✅ Open-source on Hugging Face, GitHub & ModelScope
✅ Ready to use via API on Alibaba Cloud

๐Ÿ” Empowering use cases:
Document retrieval, RAG, classification, sentiment analysis, code search & more!
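One of the use cases above, classification, can be built directly on embeddings via nearest-centroid matching: average the embeddings of labeled examples per class, then assign new text to the closest centroid. A toy sketch with hypothetical 2-dimensional vectors (a real system would embed both the labeled examples and the input with one of the Qwen3-Embedding models):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Hypothetical embeddings of labeled sentiment examples.
LABELED = {
    "positive": [[0.9, 0.1], [0.8, 0.3]],
    "negative": [[0.1, 0.9], [0.2, 0.8]],
}

def centroid(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

CENTROIDS = {label: centroid(vs) for label, vs in LABELED.items()}

def classify(embedding):
    """Assign the label whose centroid is most similar to the embedding."""
    return max(CENTROIDS, key=lambda label: cosine(embedding, CENTROIDS[label]))

print(classify([0.85, 0.2]))  # positive
```

This needs no task-specific training beyond a handful of labeled examples, which is what makes embedding models convenient for lightweight classification and sentiment tasks.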

🔗 Explore now:
Hugging Face
Qwen3-Embedding: https://huggingface.co/collections/Qwen/qwen3-embedding-6841b2055b99c44d9a4c371f
Qwen3-Reranker: https://huggingface.co/collections/Qwen/qwen3-reranker-6841b22d0192d7ade9cdefea

GitHub : https://github.com/QwenLM/Qwen3-Embedding
Blog : https://qwenlm.github.io/blog/qwen3-embedding/

Models citing this paper 26

Datasets citing this paper 0

Spaces citing this paper 14

Collections including this paper 10