rajatgupta99924 committed on
Commit 28a0b7b · verified · 1 Parent(s): 4ff6141

Add new SentenceTransformer model

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 1024,
+   "pooling_mode_cls_token": true,
+   "pooling_mode_mean_tokens": false,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
README.md ADDED
@@ -0,0 +1,694 @@
+ ---
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - dense
+ - generated_from_trainer
+ - dataset_size:160
+ - loss:MatryoshkaLoss
+ - loss:MultipleNegativesRankingLoss
+ base_model: Snowflake/snowflake-arctic-embed-l
+ widget:
+ - source_sentence: Why might ChatGPT's answers change as the date approaches the holidays?
+ sentences:
+ - 'There’s now a fascinating ecosystem of people training their own models on top
+ of these foundations, publishing those models, building fine-tuning datasets and
+ sharing those too.
+
+ The Hugging Face Open LLM Leaderboard is one place that tracks these. I can’t
+ even attempt to count them, and any count would be out-of-date within a few hours.
+
+ The best overall openly licensed LLM at any time is rarely a foundation model:
+ instead, it’s whichever fine-tuned community model has most recently discovered
+ the best combination of fine-tuning data.
+
+ This is a huge advantage for open over closed models: the closed, hosted models
+ don’t have thousands of researchers and hobbyists around the world collaborating
+ and competing to improve them.'
+ - 'On the one hand, we keep on finding new things that LLMs can do that we didn’t
+ expect—and that the people who trained the models didn’t expect either. That’s
+ usually really fun!
+
+ But on the other hand, the things you sometimes have to do to get the models to
+ behave are often incredibly dumb.
+
+ Does ChatGPT get lazy in December, because its hidden system prompt includes the
+ current date and its training data shows that people provide less useful answers
+ coming up to the holidays?
+
+ The honest answer is “maybe”! No-one is entirely sure, but if you give it a different
+ date its answers may skew slightly longer.'
+ - 'Getting back to models that beat GPT-4: Anthropic’s Claude 3 series launched
+ in March, and Claude 3 Opus quickly became my new favourite daily-driver. They
+ upped the ante even more in June with the launch of Claude 3.5 Sonnet—a model
+ that is still my favourite six months later (though it got a significant upgrade
+ on October 22, confusingly keeping the same 3.5 version number. Anthropic fans
+ have since taken to calling it Claude 3.6).'
+ - source_sentence: What significance did the year 2024 have in relation to the word
+ "slop"?
+ sentences:
+ - 'Intuitively, one would expect that systems this powerful would take millions
+ of lines of complex code. Instead, it turns out a few hundred lines of Python
+ is genuinely enough to train a basic version!
+
+ What matters most is the training data. You need a lot of data to make these
+ things work, and the quantity and quality of the training data appears to be the
+ most important factor in how good the resulting model is.
+
+ If you can gather the right data, and afford to pay for the GPUs to train it,
+ you can build an LLM.'
+ - 'The year of slop
+
+ 2024 was the year that the word "slop" became a term of art. I wrote about this
+ in May, expanding on this tweet by @deepfates:'
+ - 'On the other hand, as software engineers we are better placed to take advantage
+ of this than anyone else. We’ve all been given weird coding interns—we can use
+ our deep knowledge to prompt them to solve coding problems more effectively than
+ anyone else can.
+
+ The ethics of this space remain diabolically complex
+
+ In September last year Andy Baio and I produced the first major story on the unlicensed
+ training data behind Stable Diffusion.
+
+ Since then, almost every major LLM (and most of the image generation models) have
+ also been trained on unlicensed data.'
+ - source_sentence: Why does the author find large language models (LLMs) infuriating
+ as a computer scientist and software engineer?
+ sentences:
+ - 'Stuff we figured out about AI in 2023
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ Simon Willison’s Weblog
+
+ Subscribe
+
+
+
+
+
+
+
+ Stuff we figured out about AI in 2023
+
+ 31st December 2023
+
+ 2023 was the breakthrough year for Large Language Models (LLMs). I think it’s
+ OK to call these AI—they’re the latest and (currently) most interesting development
+ in the academic field of Artificial Intelligence that dates back to the 1950s.
+
+ Here’s my attempt to round up the highlights in one place!'
+ - 'The May 13th announcement of GPT-4o included a demo of a brand new voice mode,
+ where the true multi-modal GPT-4o (the o is for “omni”) model could accept audio
+ input and output incredibly realistic sounding speech without needing separate
+ TTS or STT models.
+
+ The demo also sounded conspicuously similar to Scarlett Johansson... and after
+ she complained the voice from the demo, Skye, never made it to a production product.
+
+ The delay in releasing the new voice mode after the initial demo caused quite
+ a lot of confusion. I wrote about that in ChatGPT in “4o” mode is not running
+ the new features yet.'
+ - 'Still, I’m surprised that no-one has beaten the now almost year old GPT-4 by
+ now. OpenAI clearly have some substantial tricks that they haven’t shared yet.
+
+ Vibes Based Development
+
+ As a computer scientist and software engineer, LLMs are infuriating.
+
+ Even the openly licensed ones are still the world’s most convoluted black boxes.
+ We continue to have very little idea what they can do, how exactly they work and
+ how best to control them.
+
+ I’m used to programming where the computer does exactly what I tell it to do.
+ Prompting an LLM is decidedly not that!
+
+ The worst part is the challenge of evaluating them.
+
+ There are plenty of benchmarks, but no benchmark is going to tell you if an LLM
+ actually “feels” right when you try it for a given task.'
+ - source_sentence: How did Google’s NotebookLM enhance audio output in its September
+ release?
+ sentences:
+ - 'Your browser does not support the audio element.
+
+
+ OpenAI aren’t the only group with a multi-modal audio model. Google’s Gemini also
+ accepts audio input, and the Google Gemini apps can speak in a similar way to
+ ChatGPT now. Amazon also pre-announced voice mode for Amazon Nova, but that’s
+ meant to roll out in Q1 of 2025.
+
+ Google’s NotebookLM, released in September, took audio output to a new level by
+ producing spookily realistic conversations between two “podcast hosts” about anything
+ you fed into their tool. They later added custom instructions, so naturally I
+ turned them into pelicans:
+
+
+
+ Your browser does not support the audio element.'
+ - 'If you think about what they do, this isn’t such a big surprise. The grammar
+ rules of programming languages like Python and JavaScript are massively less complicated
+ than the grammar of Chinese, Spanish or English.
+
+ It’s still astonishing to me how effective they are though.
+
+ One of the great weaknesses of LLMs is their tendency to hallucinate—to imagine
+ things that don’t correspond to reality. You would expect this to be a particularly
+ bad problem for code—if an LLM hallucinates a method that doesn’t exist, the code
+ should be useless.'
+ - 'I think people who complain that LLM improvement has slowed are often missing
+ the enormous advances in these multi-modal models. Being able to run prompts against
+ images (and audio and video) is a fascinating new way to apply these models.
+
+ Voice and live camera mode are science fiction come to life
+
+ The audio and live video modes that have started to emerge deserve a special mention.
+
+ The ability to talk to ChatGPT first arrived in September 2023, but it was mostly
+ an illusion: OpenAI used their excellent Whisper speech-to-text model and a new
+ text-to-speech model (creatively named tts-1) to enable conversations with the
+ ChatGPT mobile apps, but the actual model just saw text.'
+ - source_sentence: What type of dish is shown in the photo and what does it contain?
+ sentences:
+ - 'Against this photo of butterflies at the California Academy of Sciences:
+
+
+
+ A shallow dish, likely a hummingbird or butterfly feeder, is red. Pieces of orange
+ slices of fruit are visible inside the dish.
+
+ Two butterflies are positioned in the feeder, one is a dark brown/black butterfly
+ with white/cream-colored markings. The other is a large, brown butterfly with
+ patterns of lighter brown, beige, and black markings, including prominent eye
+ spots. The larger brown butterfly appears to be feeding on the fruit.'
+ - 'Except... you can run generated code to see if it’s correct. And with patterns
+ like ChatGPT Code Interpreter the LLM can execute the code itself, process the
+ error message, then rewrite it and keep trying until it works!
+
+ So hallucination is a much lesser problem for code generation than for anything
+ else. If only we had the equivalent of Code Interpreter for fact-checking natural
+ language!
+
+ How should we feel about this as software engineers?
+
+ On the one hand, this feels like a threat: who needs a programmer if ChatGPT can
+ write code for you?'
+ - 'On the other hand, as software engineers we are better placed to take advantage
+ of this than anyone else. We’ve all been given weird coding interns—we can use
+ our deep knowledge to prompt them to solve coding problems more effectively than
+ anyone else can.
+
+ The ethics of this space remain diabolically complex
+
+ In September last year Andy Baio and I produced the first major story on the unlicensed
+ training data behind Stable Diffusion.
+
+ Since then, almost every major LLM (and most of the image generation models) have
+ also been trained on unlicensed data.'
+ pipeline_tag: sentence-similarity
+ library_name: sentence-transformers
+ metrics:
+ - cosine_accuracy@1
+ - cosine_accuracy@3
+ - cosine_accuracy@5
+ - cosine_accuracy@10
+ - cosine_precision@1
+ - cosine_precision@3
+ - cosine_precision@5
+ - cosine_precision@10
+ - cosine_recall@1
+ - cosine_recall@3
+ - cosine_recall@5
+ - cosine_recall@10
+ - cosine_ndcg@10
+ - cosine_mrr@10
+ - cosine_map@100
+ model-index:
+ - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
+ results:
+ - task:
+ type: information-retrieval
+ name: Information Retrieval
+ dataset:
+ name: Unknown
+ type: unknown
+ metrics:
+ - type: cosine_accuracy@1
+ value: 0.95
+ name: Cosine Accuracy@1
+ - type: cosine_accuracy@3
+ value: 1.0
+ name: Cosine Accuracy@3
+ - type: cosine_accuracy@5
+ value: 1.0
+ name: Cosine Accuracy@5
+ - type: cosine_accuracy@10
+ value: 1.0
+ name: Cosine Accuracy@10
+ - type: cosine_precision@1
+ value: 0.95
+ name: Cosine Precision@1
+ - type: cosine_precision@3
+ value: 0.33333333333333326
+ name: Cosine Precision@3
+ - type: cosine_precision@5
+ value: 0.20000000000000004
+ name: Cosine Precision@5
+ - type: cosine_precision@10
+ value: 0.10000000000000002
+ name: Cosine Precision@10
+ - type: cosine_recall@1
+ value: 0.95
+ name: Cosine Recall@1
+ - type: cosine_recall@3
+ value: 1.0
+ name: Cosine Recall@3
+ - type: cosine_recall@5
+ value: 1.0
+ name: Cosine Recall@5
+ - type: cosine_recall@10
+ value: 1.0
+ name: Cosine Recall@10
+ - type: cosine_ndcg@10
+ value: 0.9815464876785729
+ name: Cosine Ndcg@10
+ - type: cosine_mrr@10
+ value: 0.975
+ name: Cosine Mrr@10
+ - type: cosine_map@100
+ value: 0.975
+ name: Cosine Map@100
+ ---
+
+ # SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 1024 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertModel'})
+   (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+   (2): Normalize()
+ )
+ ```
+
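+ The same three-module stack can also be assembled by hand from `sentence_transformers.models` building blocks. The sketch below is illustrative only (normally you just load the published checkpoint, as in the Usage section that follows); it assumes the base model listed above:
+
+ ```python
+ from sentence_transformers import SentenceTransformer, models
+
+ # BERT-large sized backbone with a 512-token window
+ transformer = models.Transformer("Snowflake/snowflake-arctic-embed-l", max_seq_length=512)
+ # CLS-token pooling over the 1024-dim token embeddings (see 1_Pooling/config.json)
+ pooling = models.Pooling(transformer.get_word_embedding_dimension(), pooling_mode="cls")
+ # L2-normalise so that dot product equals cosine similarity
+ normalize = models.Normalize()
+
+ model = SentenceTransformer(modules=[transformer, pooling, normalize])
+ print(model)  # should mirror the architecture printout above
+ ```
+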
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("rajatgupta99924/AIE6-S09-eca4bfc6-eb64-44a4-a71d-e09bf2b78f50")
+ # Run inference
+ queries = [
+     "What type of dish is shown in the photo and what does it contain?",
+ ]
+ documents = [
+     'Against this photo of butterflies at the California Academy of Sciences:\n\n\nA shallow dish, likely a hummingbird or butterfly feeder, is red. Pieces of orange slices of fruit are visible inside the dish.\nTwo butterflies are positioned in the feeder, one is a dark brown/black butterfly with white/cream-colored markings. The other is a large, brown butterfly with patterns of lighter brown, beige, and black markings, including prominent eye spots. The larger brown butterfly appears to be feeding on the fruit.',
+     'Except... you can run generated code to see if it’s correct. And with patterns like ChatGPT Code Interpreter the LLM can execute the code itself, process the error message, then rewrite it and keep trying until it works!\nSo hallucination is a much lesser problem for code generation than for anything else. If only we had the equivalent of Code Interpreter for fact-checking natural language!\nHow should we feel about this as software engineers?\nOn the one hand, this feels like a threat: who needs a programmer if ChatGPT can write code for you?',
+     'On the other hand, as software engineers we are better placed to take advantage of this than anyone else. We’ve all been given weird coding interns—we can use our deep knowledge to prompt them to solve coding problems more effectively than anyone else can.\nThe ethics of this space remain diabolically complex\nIn September last year Andy Baio and I produced the first major story on the unlicensed training data behind Stable Diffusion.\nSince then, almost every major LLM (and most of the image generation models) have also been trained on unlicensed data.',
+ ]
+ query_embeddings = model.encode_query(queries)
+ document_embeddings = model.encode_document(documents)
+ print(query_embeddings.shape, document_embeddings.shape)
+ # [1, 1024] [3, 1024]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(query_embeddings, document_embeddings)
+ print(similarities)
+ # tensor([[ 0.4179, -0.0420, 0.0399]])
+ ```
+
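+ Because training used `MatryoshkaLoss` over nested dimensions (see Training Details below), the 1024-dimensional output can also be truncated to a shorter prefix with only a modest quality trade-off. A minimal sketch; the choice of 256 dimensions here is just an example:
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Keep only the first 256 embedding dimensions
+ model = SentenceTransformer(
+     "rajatgupta99924/AIE6-S09-eca4bfc6-eb64-44a4-a71d-e09bf2b78f50",
+     truncate_dim=256,
+ )
+ emb = model.encode_query(["How did Google’s NotebookLM enhance audio output?"])
+ print(emb.shape)  # (1, 256)
+ ```
+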
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ ## Evaluation
+
+ ### Metrics
+
+ #### Information Retrieval
+
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
+
+ | Metric              | Value      |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1   | 0.95       |
+ | cosine_accuracy@3   | 1.0        |
+ | cosine_accuracy@5   | 1.0        |
+ | cosine_accuracy@10  | 1.0        |
+ | cosine_precision@1  | 0.95       |
+ | cosine_precision@3  | 0.3333     |
+ | cosine_precision@5  | 0.2        |
+ | cosine_precision@10 | 0.1        |
+ | cosine_recall@1     | 0.95       |
+ | cosine_recall@3     | 1.0        |
+ | cosine_recall@5     | 1.0        |
+ | cosine_recall@10    | 1.0        |
+ | **cosine_ndcg@10**  | **0.9815** |
+ | cosine_mrr@10       | 0.975      |
+ | cosine_map@100      | 0.975      |
+
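+ The figures above were produced with `InformationRetrievalEvaluator`. A minimal sketch of running a comparable evaluation on your own split; the `queries`, `corpus` and `relevant_docs` entries below are placeholders:
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+ from sentence_transformers.evaluation import InformationRetrievalEvaluator
+
+ model = SentenceTransformer("rajatgupta99924/AIE6-S09-eca4bfc6-eb64-44a4-a71d-e09bf2b78f50")
+
+ # id -> text for queries and corpus, plus query id -> set of relevant corpus ids
+ queries = {"q1": "What type of dish is shown in the photo and what does it contain?"}
+ corpus = {"d1": "Against this photo of butterflies at the California Academy of Sciences: ..."}
+ relevant_docs = {"q1": {"d1"}}
+
+ evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="my-eval")
+ results = evaluator(model)
+ print(results["my-eval_cosine_ndcg@10"])
+ ```
+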
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### Unnamed Dataset
+
+ * Size: 160 training samples
+ * Columns: <code>sentence_0</code> and <code>sentence_1</code>
+ * Approximate statistics based on the first 160 samples:
+   |         | sentence_0                                                                         | sentence_1                                                                            |
+   |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
+   | type    | string                                                                               | string                                                                                 |
+   | details | <ul><li>min: 12 tokens</li><li>mean: 20.58 tokens</li><li>max: 33 tokens</li></ul>  | <ul><li>min: 34 tokens</li><li>mean: 133.43 tokens</li><li>max: 214 tokens</li></ul>  |
+ * Samples:
+   | sentence_0 | sentence_1 |
+   |:-----------|:-----------|
+   | <code>What topics are covered in the articles related to large language models (LLMs) and AI development in the provided context?</code> | <code>Embeddings: What they are and why they matter<br>61.7k<br>79.3k<br><br><br>Catching up on the weird world of LLMs<br>61.6k<br>85.9k<br><br><br>llamafile is the new best way to run an LLM on your own computer<br>52k<br>66k<br><br><br>Prompt injection explained, with video, slides, and a transcript<br>51k<br>61.9k<br><br><br>AI-enhanced development makes me more ambitious with my projects<br>49.6k<br>60.1k<br><br><br>Understanding GPT tokenizers<br>49.5k<br>61.1k<br><br><br>Exploring GPTs: ChatGPT in a trench coat?<br>46.4k<br>58.5k<br><br><br>Could you train a ChatGPT-beating model for $85,000 and run it in a browser?<br>40.5k<br>49.2k<br><br><br>How to implement Q&A against your documentation with GPT3, embeddings and Datasette<br>37.3k<br>44.9k<br><br><br>Lawyer cites fake cases invented by ChatGPT, judge is not amused<br>37.1k<br>47.4k</code> |
+   | <code>Which article discusses the potential cost and feasibility of training a ChatGPT-beating model to run in a browser?</code> | <code>Embeddings: What they are and why they matter<br>61.7k<br>79.3k<br><br><br>Catching up on the weird world of LLMs<br>61.6k<br>85.9k<br><br><br>llamafile is the new best way to run an LLM on your own computer<br>52k<br>66k<br><br><br>Prompt injection explained, with video, slides, and a transcript<br>51k<br>61.9k<br><br><br>AI-enhanced development makes me more ambitious with my projects<br>49.6k<br>60.1k<br><br><br>Understanding GPT tokenizers<br>49.5k<br>61.1k<br><br><br>Exploring GPTs: ChatGPT in a trench coat?<br>46.4k<br>58.5k<br><br><br>Could you train a ChatGPT-beating model for $85,000 and run it in a browser?<br>40.5k<br>49.2k<br><br><br>How to implement Q&A against your documentation with GPT3, embeddings and Datasette<br>37.3k<br>44.9k<br><br><br>Lawyer cites fake cases invented by ChatGPT, judge is not amused<br>37.1k<br>47.4k</code> |
+   | <code>What are some of the capabilities of Large Language Models mentioned in the context?</code> | <code>Here’s the sequel to this post: Things we learned about LLMs in 2024.<br>Large Language Models<br>In the past 24-36 months, our species has discovered that you can take a GIANT corpus of text, run it through a pile of GPUs, and use it to create a fascinating new kind of software.<br>LLMs can do a lot of things. They can answer questions, summarize documents, translate from one language to another, extract information and even write surprisingly competent code.<br>They can also help you cheat at your homework, generate unlimited streams of fake content and be used for all manner of nefarious purposes.</code> |
+ * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
+   ```json
+   {
+       "loss": "MultipleNegativesRankingLoss",
+       "matryoshka_dims": [
+           768,
+           512,
+           256,
+           128,
+           64
+       ],
+       "matryoshka_weights": [
+           1,
+           1,
+           1,
+           1,
+           1
+       ],
+       "n_dims_per_step": -1
+   }
+   ```
+
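+ A minimal sketch of how this loss setup is typically wired together; the dataset rows and trainer arguments below are illustrative, not a record of the actual run:
+
+ ```python
+ from datasets import Dataset
+ from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
+ from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
+
+ model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")
+
+ # Illustrative (question, passage) pairs; the real dataset had 160 such rows
+ train_dataset = Dataset.from_dict({
+     "sentence_0": ["What significance did the year 2024 have in relation to the word 'slop'?"],
+     "sentence_1": ["2024 was the year that the word 'slop' became a term of art."],
+ })
+
+ # In-batch negatives ranking loss, wrapped so it is applied at each nested dimension
+ inner_loss = MultipleNegativesRankingLoss(model)
+ loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])
+
+ trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
+ trainer.train()
+ ```
+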
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: steps
+ - `per_device_train_batch_size`: 10
+ - `per_device_eval_batch_size`: 10
+ - `num_train_epochs`: 10
+ - `multi_dataset_batch_sampler`: round_robin
+
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 10
+ - `per_device_eval_batch_size`: 10
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 5e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1
+ - `num_train_epochs`: 10
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.0
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `parallelism_config`: None
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch_fused
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: None
+ - `hub_always_push`: False
+ - `hub_revision`: None
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `include_for_metrics`: []
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`:
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `liger_kernel_config`: None
+ - `eval_use_gather_object`: False
+ - `average_tokens_across_devices`: False
+ - `prompts`: None
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: round_robin
+ - `router_mapping`: {}
+ - `learning_rate_mapping`: {}
+
+ </details>
+
+ ### Training Logs
+ | Epoch | Step | cosine_ndcg@10 |
+ |:-----:|:----:|:--------------:|
+ | 1.0   | 16   | 0.9815         |
+ | 2.0   | 32   | 0.9815         |
+ | 1.0   | 16   | 0.9815         |
+ | 2.0   | 32   | 0.9815         |
+ | 3.0   | 48   | 0.9815         |
+ | 3.125 | 50   | 0.9815         |
+ | 4.0   | 64   | 0.9815         |
+ | 5.0   | 80   | 0.9815         |
+ | 6.0   | 96   | 0.9815         |
+ | 6.25  | 100  | 0.9815         |
+ | 7.0   | 112  | 0.9815         |
+ | 8.0   | 128  | 0.9815         |
+ | 9.0   | 144  | 0.9815         |
+ | 9.375 | 150  | 0.9815         |
+ | 10.0  | 160  | 0.9815         |
+
+
+ ### Framework Versions
+ - Python: 3.13.7
+ - Sentence Transformers: 5.1.0
+ - Transformers: 4.56.1
+ - PyTorch: 2.8.0+cpu
+ - Accelerate: 1.10.1
+ - Datasets: 4.0.0
+ - Tokenizers: 0.22.0
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### MatryoshkaLoss
+ ```bibtex
+ @misc{kusupati2024matryoshka,
+     title={Matryoshka Representation Learning},
+     author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
+     year={2024},
+     eprint={2205.13147},
+     archivePrefix={arXiv},
+     primaryClass={cs.LG}
+ }
+ ```
+
+ #### MultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+     title={Efficient Natural Language Response Suggestion for Smart Reply},
+     author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+     year={2017},
+     eprint={1705.00652},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "dtype": "float32",
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 1024,
+   "initializer_range": 0.02,
+   "intermediate_size": 4096,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 16,
+   "num_hidden_layers": 24,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "transformers_version": "4.56.1",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,14 @@
+ {
+   "__version__": {
+     "sentence_transformers": "5.1.0",
+     "transformers": "4.56.1",
+     "pytorch": "2.8.0+cpu"
+   },
+   "prompts": {
+     "query": "Represent this sentence for searching relevant passages: ",
+     "document": ""
+   },
+   "default_prompt_name": null,
+   "model_type": "SentenceTransformer",
+   "similarity_fn_name": "cosine"
+ }
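The `prompts` block above is why `model.encode_query(...)` prepends the arctic-embed retrieval prefix to queries while documents are embedded verbatim. A minimal sketch of the equivalent explicit call (illustrative only):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("rajatgupta99924/AIE6-S09-eca4bfc6-eb64-44a4-a71d-e09bf2b78f50")

# encode_query uses the "query" prompt from this config; prompt_name makes that explicit
q_implicit = model.encode_query("the year of slop")
q_explicit = model.encode("the year of slop", prompt_name="query")
```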
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8711c49a1bee20d28b62e5a5391f7ff98932c2ec2f6df0a374a691b843ac524b
+ size 1336413848
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,63 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_lower_case": true,
+   "extra_special_tokens": {},
+   "mask_token": "[MASK]",
+   "max_length": 512,
+   "model_max_length": 512,
+   "pad_to_multiple_of": null,
+   "pad_token": "[PAD]",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
+   "sep_token": "[SEP]",
+   "stride": 0,
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff