---
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - dense
  - generated_from_trainer
  - dataset_size:160
  - loss:MatryoshkaLoss
  - loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
  - source_sentence: Why might ChatGPT's answers change as the date approaches the holidays?
    sentences:
      - >-
        There’s now a fascinating ecosystem of people training their own models
        on top of these foundations, publishing those models, building
        fine-tuning datasets and sharing those too.

        The Hugging Face Open LLM Leaderboard is one place that tracks these. I
        can’t even attempt to count them, and any count would be out-of-date
        within a few hours.

        The best overall openly licensed LLM at any time is rarely a foundation
        model: instead, it’s whichever fine-tuned community model has most
        recently discovered the best combination of fine-tuning data.

        This is a huge advantage for open over closed models: the closed, hosted
        models don’t have thousands of researchers and hobbyists around the
        world collaborating and competing to improve them.
      - >-
        On the one hand, we keep on finding new things that LLMs can do that we
        didn’t expect—and that the people who trained the models didn’t expect
        either. That’s usually really fun!

        But on the other hand, the things you sometimes have to do to get the
        models to behave are often incredibly dumb.

        Does ChatGPT get lazy in December, because its hidden system prompt
        includes the current date and its training data shows that people
        provide less useful answers coming up to the holidays?

        The honest answer is “maybe”! No-one is entirely sure, but if you give
        it a different date its answers may skew slightly longer.
      - >-
        Getting back to models that beat GPT-4: Anthropic’s Claude 3 series
        launched in March, and Claude 3 Opus quickly became my new favourite
        daily-driver. They upped the ante even more in June with the launch of
        Claude 3.5 Sonnet—a model that is still my favourite six months later
        (though it got a significant upgrade on October 22, confusingly keeping
        the same 3.5 version number. Anthropic fans have since taken to calling
        it Claude 3.6).
  - source_sentence: What significance did the year 2024 have in relation to the word "slop"?
    sentences:
      - >-
        Intuitively, one would expect that systems this powerful would take
        millions of lines of complex code. Instead, it turns out a few hundred
        lines of Python is genuinely enough to train a basic version!

        What matters most is the training data. You need a lot of data to make
        these things work, and the quantity and quality of the training data
        appears to be the most important factor in how good the resulting model
        is.

        If you can gather the right data, and afford to pay for the GPUs to
        train it, you can build an LLM.
      - >-
        The year of slop

        2024 was the year that the word "slop" became a term of art. I wrote
        about this in May, expanding on this tweet by @deepfates:
      - >-
        On the other hand, as software engineers we are better placed to take
        advantage of this than anyone else. We’ve all been given weird coding
        interns—we can use our deep knowledge to prompt them to solve coding
        problems more effectively than anyone else can.

        The ethics of this space remain diabolically complex

        In September last year Andy Baio and I produced the first major story on
        the unlicensed training data behind Stable Diffusion.

        Since then, almost every major LLM (and most of the image generation
        models) have also been trained on unlicensed data.
  - source_sentence: >-
      Why does the author find large language models (LLMs) infuriating as a
      computer scientist and software engineer?
    sentences:
      - >-
        Stuff we figured out about AI in 2023

        Simon Willison’s Weblog

        31st December 2023

        2023 was the breakthrough year for Large Language Models (LLMs). I think
        it’s OK to call these AI—they’re the latest and (currently) most
        interesting development in the academic field of Artificial Intelligence
        that dates back to the 1950s.

        Here’s my attempt to round up the highlights in one place!
      - >-
        The May 13th announcement of GPT-4o included a demo of a brand new voice
        mode, where the true multi-modal GPT-4o (the o is for “omni”) model
        could accept audio input and output incredibly realistic sounding speech
        without needing separate TTS or STT models.

        The demo also sounded conspicuously similar to Scarlett Johansson... and
        after she complained the voice from the demo, Skye, never made it to a
        production product.

        The delay in releasing the new voice mode after the initial demo caused
        quite a lot of confusion. I wrote about that in ChatGPT in “4o” mode is
        not running the new features yet.
      - >-
        Still, I’m surprised that no-one has beaten the now almost year old
        GPT-4 by now. OpenAI clearly have some substantial tricks that they
        haven’t shared yet.

        Vibes Based Development

        As a computer scientist and software engineer, LLMs are infuriating.

        Even the openly licensed ones are still the world’s most convoluted
        black boxes. We continue to have very little idea what they can do, how
        exactly they work and how best to control them.

        I’m used to programming where the computer does exactly what I tell it
        to do. Prompting an LLM is decidedly not that!

        The worst part is the challenge of evaluating them.

        There are plenty of benchmarks, but no benchmark is going to tell you if
        an LLM actually “feels” right when you try it for a given task.
  - source_sentence: How did Google’s NotebookLM enhance audio output in its September release?
    sentences:
      - >-
        Your browser does not support the audio element.


        OpenAI aren’t the only group with a multi-modal audio model. Google’s
        Gemini also accepts audio input, and the Google Gemini apps can speak in
        a similar way to ChatGPT now. Amazon also pre-announced voice mode for
        Amazon Nova, but that’s meant to roll out in Q1 of 2025.

        Google’s NotebookLM, released in September, took audio output to a new
        level by producing spookily realistic conversations between two “podcast
        hosts” about anything you fed into their tool. They later added custom
        instructions, so naturally I turned them into pelicans:



        Your browser does not support the audio element.
      - >-
        If you think about what they do, this isn’t such a big surprise. The
        grammar rules of programming languages like Python and JavaScript are
        massively less complicated than the grammar of Chinese, Spanish or
        English.

        It’s still astonishing to me how effective they are though.

        One of the great weaknesses of LLMs is their tendency to hallucinate—to
        imagine things that don’t correspond to reality. You would expect this
        to be a particularly bad problem for code—if an LLM hallucinates a
        method that doesn’t exist, the code should be useless.
      - >-
        I think people who complain that LLM improvement has slowed are often
        missing the enormous advances in these multi-modal models. Being able to
        run prompts against images (and audio and video) is a fascinating new
        way to apply these models.

        Voice and live camera mode are science fiction come to life

        The audio and live video modes that have started to emerge deserve a
        special mention.

        The ability to talk to ChatGPT first arrived in September 2023, but it
        was mostly an illusion: OpenAI used their excellent Whisper
        speech-to-text model and a new text-to-speech model (creatively named
        tts-1) to enable conversations with the ChatGPT mobile apps, but the
        actual model just saw text.
  - source_sentence: What type of dish is shown in the photo and what does it contain?
    sentences:
      - >-
        Against this photo of butterflies at the California Academy of Sciences:



        A shallow dish, likely a hummingbird or butterfly feeder, is red. 
        Pieces of orange slices of fruit are visible inside the dish.

        Two butterflies are positioned in the feeder, one is a dark brown/black
        butterfly with white/cream-colored markings.  The other is a large,
        brown butterfly with patterns of lighter brown, beige, and black
        markings, including prominent eye spots. The larger brown butterfly
        appears to be feeding on the fruit.
      - >-
        Except... you can run generated code to see if it’s correct. And with
        patterns like ChatGPT Code Interpreter the LLM can execute the code
        itself, process the error message, then rewrite it and keep trying until
        it works!

        So hallucination is a much lesser problem for code generation than for
        anything else. If only we had the equivalent of Code Interpreter for
        fact-checking natural language!

        How should we feel about this as software engineers?

        On the one hand, this feels like a threat: who needs a programmer if
        ChatGPT can write code for you?
      - >-
        On the other hand, as software engineers we are better placed to take
        advantage of this than anyone else. We’ve all been given weird coding
        interns—we can use our deep knowledge to prompt them to solve coding
        problems more effectively than anyone else can.

        The ethics of this space remain diabolically complex

        In September last year Andy Baio and I produced the first major story on
        the unlicensed training data behind Stable Diffusion.

        Since then, almost every major LLM (and most of the image generation
        models) have also been trained on unlicensed data.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
model-index:
  - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: Unknown
          type: unknown
        metrics:
          - type: cosine_accuracy@1
            value: 0.95
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 1
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 1
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 1
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.95
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.33333333333333326
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.20000000000000004
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.10000000000000002
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.95
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 1
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 1
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 1
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.9815464876785729
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.975
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.975
            name: Cosine Map@100
---

SentenceTransformer based on Snowflake/snowflake-arctic-embed-l

This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-l. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Snowflake/snowflake-arctic-embed-l
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
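
Because Normalize() is the final module, every embedding comes out unit length, so cosine similarity and dot product produce identical rankings. A quick sanity-check sketch (the test sentence is just an illustration; assumes the library is installed as described under Usage below):

from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("rajatgupta99924/AIE6-S09-eca4bfc6-eb64-44a4-a71d-e09bf2b78f50")
embedding = model.encode(["a quick test sentence"])

# Should print a value very close to 1.0, thanks to the final Normalize() module
print(np.linalg.norm(embedding[0]))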

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("rajatgupta99924/AIE6-S09-eca4bfc6-eb64-44a4-a71d-e09bf2b78f50")
# Run inference
queries = [
    "What type of dish is shown in the photo and what does it contain?",
]
documents = [
    'Against this photo of butterflies at the California Academy of Sciences:\n\n\nA shallow dish, likely a hummingbird or butterfly feeder, is red.  Pieces of orange slices of fruit are visible inside the dish.\nTwo butterflies are positioned in the feeder, one is a dark brown/black butterfly with white/cream-colored markings.  The other is a large, brown butterfly with patterns of lighter brown, beige, and black markings, including prominent eye spots. The larger brown butterfly appears to be feeding on the fruit.',
    'Except... you can run generated code to see if it’s correct. And with patterns like ChatGPT Code Interpreter the LLM can execute the code itself, process the error message, then rewrite it and keep trying until it works!\nSo hallucination is a much lesser problem for code generation than for anything else. If only we had the equivalent of Code Interpreter for fact-checking natural language!\nHow should we feel about this as software engineers?\nOn the one hand, this feels like a threat: who needs a programmer if ChatGPT can write code for you?',
    'On the other hand, as software engineers we are better placed to take advantage of this than anyone else. We’ve all been given weird coding interns—we can use our deep knowledge to prompt them to solve coding problems more effectively than anyone else can.\nThe ethics of this space remain diabolically complex\nIn September last year Andy Baio and I produced the first major story on the unlicensed training data behind Stable Diffusion.\nSince then, almost every major LLM (and most of the image generation models) have also been trained on unlicensed data.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 1024] [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[ 0.4179, -0.0420,  0.0399]])
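
To turn these scores into a simple semantic search, you can rank the documents by similarity for each query. A minimal follow-up sketch that continues from the variables above:

import torch

# For each query, sort document indices from most to least similar
for q_idx, query in enumerate(queries):
    order = torch.argsort(similarities[q_idx], descending=True)
    print(query)
    for d_idx in order.tolist():
        score = similarities[q_idx][d_idx].item()
        print(f"  {score:.4f}  {documents[d_idx][:60]}...")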

Evaluation

Metrics

Information Retrieval

Metric Value
cosine_accuracy@1 0.95
cosine_accuracy@3 1.0
cosine_accuracy@5 1.0
cosine_accuracy@10 1.0
cosine_precision@1 0.95
cosine_precision@3 0.3333
cosine_precision@5 0.2
cosine_precision@10 0.1
cosine_recall@1 0.95
cosine_recall@3 1.0
cosine_recall@5 1.0
cosine_recall@10 1.0
cosine_ndcg@10 0.9815
cosine_mrr@10 0.975
cosine_map@100 0.975
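
The table above reports standard information-retrieval metrics at several cutoffs. As a hedged sketch of how this kind of evaluation can be run with the library's InformationRetrievalEvaluator, the query and corpus entries below are illustrative placeholders rather than the actual evaluation set:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("rajatgupta99924/AIE6-S09-eca4bfc6-eb64-44a4-a71d-e09bf2b78f50")

# Placeholder evaluation data: IDs mapped to text, plus relevance judgments
queries = {"q1": "What significance did the year 2024 have in relation to the word slop?"}
corpus = {
    "d1": "2024 was the year that the word slop became a term of art.",
    "d2": "Claude 3 Opus quickly became my new favourite daily-driver.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="example")
results = evaluator(model)
print(results)  # includes keys such as example_cosine_ndcg@10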

Training Details

Training Dataset

Unnamed Dataset

  • Size: 160 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 160 samples:
    • sentence_0: string; min: 12 tokens, mean: 20.58 tokens, max: 33 tokens
    • sentence_1: string; min: 34 tokens, mean: 133.43 tokens, max: 214 tokens
  • Samples:
    • sentence_0: What topics are covered in the articles related to large language models (LLMs) and AI development in the provided context?
      sentence_1: A list of article titles with view counts: Embeddings: What they are and why they matter (61.7k / 79.3k); Catching up on the weird world of LLMs (61.6k / 85.9k); llamafile is the new best way to run an LLM on your own computer (52k / 66k); Prompt injection explained, with video, slides, and a transcript (51k / 61.9k); AI-enhanced development makes me more ambitious with my projects (49.6k / 60.1k); Understanding GPT tokenizers (49.5k / 61.1k); Exploring GPTs: ChatGPT in a trench coat? (46.4k / 58.5k); Could you train a ChatGPT-beating model for $85,000 and run it in a browser? (40.5k / 49.2k); How to implement Q&A against your documentation with GPT3, embeddings and Datasette (37.3k / 44.9k); Lawyer cites fake cases invented by ChatGPT, judge is not amused (37.1k / 47.4k)
    • sentence_0: Which article discusses the potential cost and feasibility of training a ChatGPT-beating model to run in a browser?
      sentence_1: The same list of article titles and view counts as in the previous sample.
    • sentence_0: What are some of the capabilities of Large Language Models mentioned in the context?
      sentence_1: Here’s the sequel to this post: Things we learned about LLMs in 2024. Large Language Models. In the past 24-36 months, our species has discovered that you can take a GIANT corpus of text, run it through a pile of GPUs, and use it to create a fascinating new kind of software. LLMs can do a lot of things. They can answer questions, summarize documents, translate from one language to another, extract information and even write surprisingly competent code. They can also help you cheat at your homework, generate unlimited streams of fake content and be used for all manner of nefarious purposes.
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
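
Because the model was trained with MatryoshkaLoss at dimensions 768, 512, 256, 128 and 64, its embeddings can be truncated to any of those sizes with relatively little quality loss. A sketch using the library's standard truncate_dim option (256 is an arbitrary choice here):

from sentence_transformers import SentenceTransformer

# Keep only the first 256 dimensions of every embedding
model = SentenceTransformer(
    "rajatgupta99924/AIE6-S09-eca4bfc6-eb64-44a4-a71d-e09bf2b78f50",
    truncate_dim=256,
)
embedding = model.encode(["Matryoshka embeddings trade dimensions for speed"])
print(embedding.shape)  # (1, 256)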
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • num_train_epochs: 10
  • multi_dataset_batch_sampler: round_robin
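
As a hedged sketch, a training script consistent with these settings might look like the following; the single example pair is a placeholder standing in for the real 160-sample dataset:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Placeholder (question, passage) pair standing in for the actual training data
train_dataset = Dataset.from_dict({
    "sentence_0": ["What was the year of slop?"],
    "sentence_1": ["2024 was the year that the word slop became a term of art."],
})

# MultipleNegativesRankingLoss wrapped in MatryoshkaLoss, as listed above
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])

# eval_strategy="steps" from the original run is omitted here because it
# also requires an evaluator or eval dataset to be supplied
args = SentenceTransformerTrainingArguments(
    output_dir="output",
    num_train_epochs=10,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()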

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch Step cosine_ndcg@10
1.0 16 0.9815
2.0 32 0.9815
3.0 48 0.9815
3.125 50 0.9815
4.0 64 0.9815
5.0 80 0.9815
6.0 96 0.9815
6.25 100 0.9815
7.0 112 0.9815
8.0 128 0.9815
9.0 144 0.9815
9.375 150 0.9815
10.0 160 0.9815

Framework Versions

  • Python: 3.13.7
  • Sentence Transformers: 5.1.0
  • Transformers: 4.56.1
  • PyTorch: 2.8.0+cpu
  • Accelerate: 1.10.1
  • Datasets: 4.0.0
  • Tokenizers: 0.22.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}