Update README.md with TEI support
#26 by alvarobartt (HF Staff)

README.md CHANGED
@@ -7,6 +7,7 @@ tags:
 - sentence-transformers
 - sentence-similarity
 - feature-extraction
+- text-embeddings-inference
 ---
 # Qwen3-Embedding-0.6B

@@ -23,6 +24,7 @@ The Qwen3 Embedding model series is the latest proprietary model of the Qwen fam
 **Comprehensive Flexibility**: The Qwen3 Embedding series offers a full spectrum of sizes (from 0.6B to 8B) for both embedding and reranking models, catering to diverse use cases that prioritize efficiency and effectiveness. Developers can seamlessly combine these two modules. Additionally, the embedding model allows for flexible vector definitions across all dimensions, and both embedding and reranking models support user-defined instructions to enhance performance for specific tasks, languages, or scenarios.

 **Multilingual Capability**: The Qwen3 Embedding series offers support for over 100 languages, thanks to the multilingual capabilities of Qwen3 models. This includes various programming languages, and provides robust multilingual, cross-lingual, and code retrieval capabilities.
+
 ## Model Overview

 **Qwen3-Embedding-0.6B** has the following features:

@@ -203,6 +205,29 @@ print(scores.tolist())

 📌 **Tip**: We recommend that developers customize the `instruct` according to their specific scenarios, tasks, and languages. Our tests have shown that in most retrieval scenarios, not using an `instruct` on the query side can lead to a drop in retrieval performance by approximately 1% to 5%.

+### Text Embeddings Inference (TEI) Usage
+
+You can either run / deploy TEI on NVIDIA GPUs as:
+
+```bash
+docker run --gpus all -p 8080:80 -v hf_cache:/data --pull always ghcr.io/huggingface/text-embeddings-inference:1.7.2 --model-id Qwen/Qwen3-Embedding-0.6B --dtype float16
+```
+
+Or on CPU devices as:
+
+```bash
+docker run -p 8080:80 -v hf_cache:/data --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.7.2 --model-id Qwen/Qwen3-Embedding-0.6B
+```
+
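If you script against the container, you may want to block until it reports ready before sending requests. A minimal sketch, assuming TEI's `/health` route and the default `localhost:8080` port mapping from the commands above (standard library only):

```python
import time
import urllib.error
import urllib.request


def wait_until_ready(base_url: str = "http://localhost:8080",
                     timeout: float = 120.0) -> bool:
    """Poll TEI's /health route until the server reports ready."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # /health returns HTTP 200 once the model is loaded and serving
            with urllib.request.urlopen(f"{base_url}/health", timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # container still starting up; retry
        time.sleep(1)
    return False
```

Returning `False` on expiry lets callers fail fast instead of hanging on their first `/embed` request.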
+And then, generate the embeddings by sending an HTTP POST request as:
+
+```bash
+curl http://localhost:8080/embed \
+    -X POST \
+    -d '{"inputs": ["Instruct: Given a web search query, retrieve relevant passages that answer the query\nQuery: What is the capital of China?", "Instruct: Given a web search query, retrieve relevant passages that answer the query\nQuery: Explain gravity"]}' \
+    -H "Content-Type: application/json"
+```
+
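The `Instruct: ...\nQuery: ...` strings in the payload above can be built programmatically instead of typed by hand. A minimal sketch, where `detailed_instruct` is a hypothetical helper that mirrors the query format used in the curl example (documents, by contrast, are embedded without an instruction):

```python
import json


# Hypothetical helper: reproduces the query format from the curl example,
# "Instruct: {task}\nQuery: {query}".
def detailed_instruct(task: str, query: str) -> str:
    return f"Instruct: {task}\nQuery: {query}"


task = "Given a web search query, retrieve relevant passages that answer the query"
payload = json.dumps({
    "inputs": [
        detailed_instruct(task, "What is the capital of China?"),
        detailed_instruct(task, "Explain gravity"),
    ]
})
# POST `payload` to http://localhost:8080/embed with
# Content-Type: application/json to receive one embedding per input.
print(payload)
```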
 ## Evaluation

 ### MTEB (Multilingual)