Qwen3 Embedding: Advancing Text Embedding and Reranking Through Foundation Models
Abstract
In this work, we introduce the Qwen3 Embedding series, a significant advancement over its predecessor, the GTE-Qwen series, in text embedding and reranking capabilities, built upon the Qwen3 foundation models. Leveraging the Qwen3 LLMs' robust capabilities in multilingual text understanding and generation, our innovative multi-stage training pipeline combines large-scale unsupervised pre-training with supervised fine-tuning on high-quality datasets. Effective model merging strategies further ensure the robustness and adaptability of the Qwen3 Embedding series. During the training process, the Qwen3 LLMs serve not only as backbone models but also play a crucial role in synthesizing high-quality, rich, and diverse training data across multiple domains and languages, thus enhancing the training pipeline. The Qwen3 Embedding series offers a spectrum of model sizes (0.6B, 4B, 8B) for both embedding and reranking tasks, addressing diverse deployment scenarios where users can optimize for either efficiency or effectiveness. Empirical evaluations demonstrate that the Qwen3 Embedding series achieves state-of-the-art results across diverse benchmarks. Notably, it excels on the multilingual evaluation benchmark MTEB for text embedding, as well as in various retrieval tasks, including code retrieval, cross-lingual retrieval and multilingual retrieval. To facilitate reproducibility and promote community-driven research and development, the Qwen3 Embedding models are publicly available under the Apache 2.0 license.
Community
🚀 We introduce the Qwen3-Embedding and Qwen3-Reranker series: setting new standards in multilingual text embedding and reranking!
✨ Highlights:
✅ Available in 0.6B / 4B / 8B versions
✅ Supports 119 languages
✅ State-of-the-art performance on MMTEB, MTEB, and MTEB-Code
✅ Open-source on Hugging Face, GitHub & ModelScope
✅ Ready-to-use via API on Alibaba Cloud
📌 Empowering use cases:
Document retrieval, RAG, classification, sentiment analysis, code search & more!
🔗 Explore now:
Hugging Face
Qwen3-Embedding: https://huggingface.co/collections/Qwen/qwen3-embedding-6841b2055b99c44d9a4c371f
Qwen3-Reranker: https://huggingface.co/collections/Qwen/qwen3-reranker-6841b22d0192d7ade9cdefea
GitHub: https://github.com/QwenLM/Qwen3-Embedding
Blog : https://qwenlm.github.io/blog/qwen3-embedding/
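For the document-retrieval use case above, the flow is: embed the query and candidate documents, then rank documents by cosine similarity. The sketch below keeps the ranking logic offline-runnable; the commented-out loading snippet assumes the `sentence-transformers` interface and the `Qwen/Qwen3-Embedding-0.6B` checkpoint from the collection linked above, and is an illustration rather than a verified invocation.

```python
# Minimal retrieval sketch: rank documents by cosine similarity to a query.
# The embedding backend is separated out so the ranking logic runs offline.
import numpy as np

def rank_documents(query_vec, doc_vecs):
    """Return document indices sorted by descending cosine similarity, plus scores."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q  # cosine similarity of each document to the query
    return np.argsort(-scores), scores

# With a real Qwen3-Embedding model (assumed sentence-transformers interface):
#   from sentence_transformers import SentenceTransformer
#   model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")
#   query_vec = model.encode("What is retrieval-augmented generation?")
#   doc_vecs = model.encode(documents)

# Offline demo with toy 2-D vectors standing in for real embeddings:
query = np.array([1.0, 0.0])
docs = np.array([[0.9, 0.1],   # similar to the query
                 [0.0, 1.0]])  # orthogonal to the query
order, scores = rank_documents(query, docs)
print(order[0])  # → 0 (the first document is most similar)
```

The same scoring step is where a Qwen3-Reranker model would slot in: instead of ranking by embedding similarity alone, the top-k candidates can be re-scored jointly with the query for higher precision.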