Update README.md
Note that the embeddings are not normalized, so you will need to normalize them before use.

Retrieval performance for TREC DL 2021-2023, MSMARCO v2 Dev, and the Raggy Queries can be found below, with BM25 as a baseline. For both systems, retrieval is at the segment level and Doc Score = max(passage score).

Retrieval is done via a dot product and happens in BF16.
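The segment-to-document aggregation described above (Doc Score = max passage score) can be sketched as follows; the passage IDs and scores here are toy values, not actual results:

```python
from collections import defaultdict

# Hypothetical passage-level results as (doc_id, passage_score) pairs.
passage_scores = [("d1", 0.42), ("d1", 0.77), ("d2", 0.55), ("d2", 0.31)]

# Doc Score = max(passage score): keep the best passage score per document.
doc_scores = defaultdict(float)
for doc_id, score in passage_scores:
    doc_scores[doc_id] = max(doc_scores[doc_id], score)

# Rank documents by their best passage.
ranked = sorted(doc_scores.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)  # → [('d1', 0.77), ('d2', 0.55)]
```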
NDCG @ 10

| Dataset            | BM25   | GTE-Large-v1.5 |
|--------------------|--------|----------------|
| Deep Learning 2021 | 0.5778 | 0.7193         |
| Deep Learning 2022 | 0.3576 | 0.5358         |
| Deep Learning 2023 | 0.3356 | 0.4642         |
| msmarcov2-dev      | N/A    | 0.3538         |
| msmarcov2-dev2     | N/A    | 0.3470         |
| Raggy Queries      | 0.4227 | 0.5678         |
| TREC RAG (eval)    | N/A    | 0.5676         |

Recall @ 100

| Dataset            | BM25   | GTE-Large-v1.5 |
|--------------------|--------|----------------|
| Deep Learning 2021 | 0.3811 | 0.4156         |
| Deep Learning 2022 | 0.233  | 0.31173        |
| Deep Learning 2023 | 0.3049 | 0.35236        |
| msmarcov2-dev      | 0.6683 | 0.85135        |
| msmarcov2-dev2     | 0.6771 | 0.84333        |
| Raggy Queries      | 0.2807 | 0.35125        |
| TREC RAG (eval)    | N/A    | 0.25223        |

Recall @ 1000

| Dataset            | BM25   | GTE-Large-v1.5 |
|--------------------|--------|----------------|
| Deep Learning 2021 | 0.7115 | 0.73185        |
| Deep Learning 2022 | 0.479  | 0.55174        |
| Deep Learning 2023 | 0.5852 | 0.6167         |
| msmarcov2-dev      | 0.8528 | 0.93549        |
| msmarcov2-dev2     | 0.8577 | 0.93997        |
| Raggy Queries      | 0.5745 | 0.63515        |
| TREC RAG (eval)    | N/A    | 0.63133        |
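For reference, recall@k is the fraction of a query's relevant documents that appear in the top k retrieved results. A minimal sketch (toy document IDs, not actual qrels):

```python
def recall_at_k(retrieved, relevant, k):
    """Fraction of relevant doc IDs found in the top-k retrieved list."""
    if not relevant:
        return 0.0
    hits = len(set(retrieved[:k]) & set(relevant))
    return hits / len(relevant)

# Toy example: 2 of the 3 relevant docs appear in the top-4 results.
retrieved = ["d3", "d1", "d9", "d7", "d2"]
relevant = {"d1", "d2", "d7"}
print(recall_at_k(retrieved, relevant, 4))  # → 0.666...
```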
## Loading the dataset

### Loading the document embeddings

```python
from datasets import load_dataset
import numpy as np

top_k = 100

# Stream the collection so the full set of embeddings does not have to be
# downloaded up front.
docs_stream = load_dataset(
    "spacemanidol/msmarco-v2.1-gte-large-en-v1.5", split="train", streaming=True
)

docs = []
doc_embeddings = []
```
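Since the snippet above only sets up the stream and the accumulators, a minimal sketch of the dot-product retrieval step might look like the following. The array shapes and the query vector are illustrative stand-ins: in practice `doc_embeddings` would hold vectors collected from the stream, and the query embedding would come from the same GTE model (check the dataset card for the actual field names).

```python
import numpy as np

# Stand-in data: 1000 hypothetical document vectors and one query vector.
doc_embeddings = np.random.rand(1000, 8).astype(np.float32)
query_embedding = np.random.rand(8).astype(np.float32)

top_k = 100

# The dataset's embeddings are not normalized; for cosine similarity,
# divide by the L2 norm first, e.g.:
# doc_embeddings /= np.linalg.norm(doc_embeddings, axis=1, keepdims=True)

# Dot-product similarity of the query against every document vector.
scores = doc_embeddings @ query_embedding

# Indices of the top_k highest-scoring documents, best first.
top_idx = np.argsort(-scores)[:top_k]
top_scores = scores[top_idx]
```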