SentenceTransformer

This is a sentence-transformers model fine-tuned from all-mpnet-base-v2 on text drawn from Data Science and Machine Learning reference books. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Maximum Sequence Length: 384 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False, 'architecture': 'MPNetModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
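
The stack above is the standard MPNet encoder followed by mean pooling and L2 normalization. As a rough illustration of what the Pooling and Normalize modules do, here is a sketch using the plain Hugging Face transformers API; it assumes the repository follows the usual sentence-transformers layout with the transformer weights at the repo root.

# Sketch of modules (1) Pooling and (2) Normalize from the architecture above,
# reproduced with the plain transformers API (assumed repo layout).
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

model_id = "DigitalAsocial/all-mpnet-base-v2-ds-rag-17r"
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModel.from_pretrained(model_id)

batch = tokenizer(
    ["Semantic search over data-science textbooks."],
    padding=True, truncation=True, max_length=384, return_tensors="pt",
)

with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq_len, 768)

# (1) Pooling: mean over non-padding tokens (pooling_mode_mean_tokens=True)
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

# (2) Normalize: L2-normalize so that dot product equals cosine similarity
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 768])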

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("DigitalAsocial/all-mpnet-base-v2-ds-rag-17r")
# Run inference
sentences = [
    '180, 181, 185–189, 194\nrisk Consider a hypothesis h that is used to predict the label y of a data point based on\nits features x.',
    'We measure the quality of a particular prediction using a loss function L((x, y), h).',
    'Before formally defining these heuristics, we need to have a mech-\nanism for formally defining supervised learning problems.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.6953, 0.2131],
#         [0.6953, 1.0000, 0.2814],
#         [0.2131, 0.2814, 1.0000]])
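
Because the model is intended for retrieval-style workflows (e.g. RAG over data-science material), a small retrieval example may be useful. The corpus and query below are invented purely for illustration.

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("DigitalAsocial/all-mpnet-base-v2-ds-rag-17r")

# Toy corpus and query, invented for this example
corpus = [
    "Gradient descent iteratively updates parameters in the direction of the negative gradient.",
    "A confusion matrix summarizes classification errors per class.",
    "Q-learning estimates the value of state-action pairs from experience.",
]
query = "How does gradient descent optimize model parameters?"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])

# Embeddings are normalized, so cosine similarity ranks the corpus for the query
scores = model.similarity(query_embedding, corpus_embeddings)  # shape (1, 3)
best = scores.argmax().item()
print(corpus[best], float(scores[0, best]))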

Evaluation

Metrics

Semantic Similarity

Metric Value
pearson_cosine nan
spearman_cosine nan
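
pearson_cosine and spearman_cosine are the Pearson and Spearman correlations between the model's cosine similarities and gold similarity scores; the val_spearman_cosine column in the training logs suggests they were produced by sentence-transformers' EmbeddingSimilarityEvaluator with name "val". A nan value typically means the gold scores in the validation split were constant (e.g. every pair labeled as a positive), in which case the correlation is undefined. A minimal sketch with invented pairs and gold scores:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("DigitalAsocial/all-mpnet-base-v2-ds-rag-17r")

# Hypothetical evaluation pairs with human similarity scores in [0, 1];
# if all gold scores were identical, both correlations would be nan.
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=[
        "Overfitting means fitting noise in the training data.",
        "Bayes' rule updates a prior with observed evidence.",
        "A validation set is held out to estimate generalization.",
    ],
    sentences2=[
        "A model that memorizes noise generalizes poorly.",
        "The softmax function maps logits to probabilities.",
        "Held-out data gives an unbiased estimate of test error.",
    ],
    scores=[0.9, 0.1, 0.8],
    name="val",
)
print(evaluator(model))  # includes val_pearson_cosine and val_spearman_cosine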

Training Details

Training Dataset

Training Data

The model was fine-tuned on sentences drawn from 17 reference books in Data Science and Machine Learning, listed below. All source books were preprocessed using PyMuPDF, an open-source tool for extracting and structuring text from PDF documents: the raw PDF files were converted into structured text and segmented into sentences before being used for training, which ensured consistent formatting and reliable sentence boundaries across the dataset. (A rough sketch of this preprocessing pipeline follows the list below.)

  1. Aßenmacher, Matthias. Multimodal Deep Learning. Self-published, 2023.
  2. Bertsekas, Dimitri P. A Course in Reinforcement Learning. Arizona State University.
  3. Boykis, Vicki. What are Embeddings. Self-published, 2023.
  4. Bruce, Peter, and Andrew Bruce. Practical Statistics for Data Scientists: 50 Essential Concepts. O’Reilly Media, 2017.
  5. Daumé III, Hal. A Course in Machine Learning. Self-published.
  6. Deisenroth, Marc Peter, A. Aldo Faisal, and Cheng Soon Ong. Mathematics for Machine Learning. Cambridge University Press, 2020.
  7. Kunin, Daniel, Jingru Guo, Tyler Devlin, and Daniel Xiang. Seeing Theory. Self-published.
  8. Gutmann, Michael U. Pen & Paper: Exercises in Machine Learning. Self-published.
  9. Jung, Alexander. Machine Learning: The Basics. Springer, 2022.
  10. Langr, Jakub, and Vladimir Bok. GANs in Action: Deep Learning with Generative Adversarial Networks. Manning Publications, 2019.
  11. MacKay, David J.C. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2003.
  12. Montgomery, Douglas C., Cheryl L. Jennings, and Murat Kulahci. Introduction to Time Series Analysis and Forecasting. 2nd Edition, Wiley, 2015.
  13. Nilsson, Nils J. Introduction to Machine Learning: An Early Draft of a Proposed Textbook. Stanford University, 1996.
  14. Prince, Simon J.D. Understanding Deep Learning. Draft Edition, 2024.
  15. Shashua, Amnon. Introduction to Machine Learning. The Hebrew University of Jerusalem, 2008.
  16. Sutton, Richard S., and Andrew G. Barto. Reinforcement Learning: An Introduction. 2nd Edition, MIT Press, 2018.
  17. Alpaydin, Ethem. Introduction to Machine Learning. 3rd Edition, MIT Press, 2014.

⚠️ Note: Due to copyright restrictions, the full text of these books is not included in this repository. Only the fine-tuned model weights are shared.
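
The preprocessing described above can be sketched roughly as follows. The file path is a placeholder, and the use of NLTK for sentence splitting is an assumption; the card does not state which sentence segmenter was used.

# Rough sketch of the described pipeline: extract text from a PDF with
# PyMuPDF, then split it into sentences. NLTK is an assumed choice of
# segmenter (requires nltk.download("punkt")); the path is hypothetical.
import fitz  # PyMuPDF
from nltk.tokenize import sent_tokenize

def pdf_to_sentences(path: str) -> list[str]:
    doc = fitz.open(path)
    text = " ".join(page.get_text() for page in doc)
    text = " ".join(text.split())  # collapse hard line breaks from the PDF layout
    return sent_tokenize(text)

sentences = pdf_to_sentences("some_reference_book.pdf")  # hypothetical file
print(len(sentences))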

Unnamed Dataset

  • Size: 193,902 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 1000 samples:
    sentence_0: type string; min: 7 tokens, mean: 38.64 tokens, max: 384 tokens
    sentence_1: type string; min: 7 tokens, mean: 37.46 tokens, max: 384 tokens
  • Samples:
    sentence_0: "For example it holds even when wk has nonzero mean."
    sentence_1: "This is an important part of the RL methodology, which we will discuss later in this chapter, and in more detail in Chapter 2."
    sentence_0: "Consider a huge collection of outdoor pictures you have taken during your last adventure trip."
    sentence_1: "You want to organize these pictures as three categories (or classes) dog, bird and fish."
    sentence_0: "Universities use regression to predict students’ GPA based on their SAT scores."
    sentence_1: "A regression model that fits the data well is set up such that changes in X lead to changes in Y."
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false
    }
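
    MultipleNegativesRankingLoss treats each (sentence_0, sentence_1) pair as a positive and uses the other sentences in the same batch as in-batch negatives, scaling the cosine similarities by the scale factor before a cross-entropy objective. A minimal sketch of constructing the loss with the parameters listed above; the checkpoint is the presumed all-mpnet-base-v2 starting point:

# Sketch of the loss configuration above; the checkpoint is the presumed base model.
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
loss = losses.MultipleNegativesRankingLoss(
    model,
    scale=20.0,                   # temperature applied to the similarity scores
    similarity_fct=util.cos_sim,  # cosine similarity, matching "cos_sim" above
)
# For each (anchor, positive) pair in a batch, the positives of all other
# pairs in that batch act as negatives.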
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 6
  • fp16: True
  • multi_dataset_batch_sampler: round_robin
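
Under Sentence Transformers 5.x these settings map onto SentenceTransformerTrainingArguments. A hedged sketch of a comparable training run; the base checkpoint, output directory, and the toy dataset are placeholders (the real run used roughly 194k sentence pairs from the books listed above):

# Sketch of a training run with the non-default hyperparameters above.
# Checkpoint, output_dir, and the tiny dataset are illustrative placeholders.
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import MultiDatasetBatchSamplers

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Toy stand-in for the real sentence_0 / sentence_1 pair dataset
train_dataset = Dataset.from_dict({
    "sentence_0": ["Gradient descent minimizes a loss function."],
    "sentence_1": ["It updates parameters along the negative gradient."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="output/all-mpnet-base-v2-ds-rag-17r",  # hypothetical path
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=6,
    fp16=True,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=MultipleNegativesRankingLoss(model),
)
trainer.train()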

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 6
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch Step Training Loss val_spearman_cosine
0.0413 500 1.6444 -
0.0825 1000 1.4038 -
0.1238 1500 1.2286 -
0.1650 2000 1.1638 -
0.2063 2500 1.0558 -
0.2475 3000 1.0104 -
0.2888 3500 1.0025 -
0.3301 4000 0.9369 -
0.3713 4500 0.8901 -
0.4126 5000 0.8522 -
0.4538 5500 0.8362 -
0.4951 6000 0.8342 -
0.5363 6500 0.7747 -
0.5776 7000 0.7395 -
0.6189 7500 0.7245 -
0.6601 8000 0.7039 -
0.7014 8500 0.6576 -
0.7426 9000 0.6487 -
0.7839 9500 0.6461 -
0.8252 10000 0.635 -
0.8664 10500 0.6133 -
0.9077 11000 0.5723 -
0.9489 11500 0.5687 -
0.9902 12000 0.556 -
1.0 12119 - nan

Framework Versions

  • Python: 3.11.7
  • Sentence Transformers: 5.1.1
  • Transformers: 4.57.0
  • PyTorch: 2.5.1+cu121
  • Accelerate: 1.12.0
  • Datasets: 4.4.1
  • Tokenizers: 0.22.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

If you use this model, please cite:

@misc{aghakhani2025synergsticrag,
  author       = {Danial Aghakhani Zadeh},
  title        = {Fine-tuned all-mpnet-base-v2 for Data Science RAG},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/DigitalAsocial/all-mpnet-base-v2-ds-rag-17r}}
}