SentenceTransformer

This is an answerdotai/ModernBERT-base model fine-tuned on the code_search_net dataset with MultipleNegativesRankingLoss using in-batch negatives. The model can be used for code retrieval and reranking.

Performance on code retrieval benchmarks

RTEB

As of 14.10.2025, the model ranks 6th on the RTEB leaderboard among models with fewer than 500M parameters.


Performance per task:

| Model | AppsRetrieval | Code1Retrieval (Private) | DS1000Retrieval | FreshStackRetrieval | HumanEvalRetrieval | JapaneseCode1Retrieval (Private) | MBPPRetrieval | WikiSQLRetrieval |
|---|---|---|---|---|---|---|---|---|
| english_code_retriever | 8.04 | 75.36 | 32.42 | 18.30 | 71.82 | 46.59 | 72.06 | 87.92 |

COIR:

| Model | AppsRetrieval | COIRCodeSearchNetRetrieval | CodeFeedbackMT | CodeFeedbackST | CodeSearchNetCCRetrieval | CodeTransOceanContest | CodeTransOceanDL | CosQA | StackOverflowQA | SyntheticText2SQL |
|---|---|---|---|---|---|---|---|---|---|---|
| english_code_retriever | 8.04 | 74.23 | 44.01 | 57.79 | 42.71 | 60.68 | 35.16 | 25.56 | 56.53 | 42.79 |

More information can be found on the MTEB leaderboard.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 768
  • Similarity Function: Cosine Similarity
  • Pooling: Mean pooling
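
These properties can be checked directly on the loaded model; a quick sketch using standard Sentence Transformers attributes (expected values shown as comments):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("fyaronskiy/english_code_retriever")
print(model.max_seq_length)                      # 8192
print(model.get_sentence_embedding_dimension())  # 768
print(model.similarity_fn_name)                  # "cosine"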

Usage

Usage is straightforward with Sentence Transformers.

Note that the model was trained with the prefix 'search_query' for queries and 'search_document' for code documents, so using these prefixes improves retrieval quality.

import torch
from sentence_transformers import SentenceTransformer, util

device = "cuda" if torch.cuda.is_available() else "cpu"
model = SentenceTransformer("fyaronskiy/english_code_retriever").to(device)

queries = [
    "Write a Python function that calculates the factorial of a number recursively.",
    "How to check if a given string reads the same backward and forward?",
    "Combine two sorted lists into a single sorted list."
]

corpus = [
    # Relevant for Q1
    """def factorial(n):
    if n == 0:
        return 1
    return n * factorial(n-1)""",

    # Hard negative for Q1 (similar structure but computes sum)
    """def sum_recursive(n):
    if n == 0:
        return 0
    return n + sum_recursive(n-1)""",

    # Relevant for Q2
    """def is_palindrome(s: str) -> bool:
    s = s.lower().replace(" ", "")
    return s == s[::-1]""",

    # Hard negative for Q2 (string reverse but not palindrome check)
    """def reverse_string(s: str) -> str:
    return s[::-1]""",

    # Relevant for Q3
    """def merge_sorted_lists(a, b):
    result = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            result.append(a[i])
            i += 1
        else:
            result.append(b[j])
            j += 1
    result.extend(a[i:])
    result.extend(b[j:])
    return result""",

    # Hard negative for Q3 (similar iteration but sums two lists elementwise)
    """def add_lists(a, b):
    return [x + y for x, y in zip(a, b)]"""
]


doc_embeddings = model.encode(corpus, prompt_name='search_document', convert_to_tensor=True, device=device)
query_embeddings = model.encode(queries, prompt_name='search_query', convert_to_tensor=True, device=device)

# Compute cosine similarity and retrieve top-1
for i, query in enumerate(queries):
    scores = util.cos_sim(query_embeddings[i], doc_embeddings)[0]
    best_idx = torch.argmax(scores).item()
    print(f"\n Query {i+1}: {query}")
    print(f"Top-1 match (score={scores[best_idx]:.4f}):\n{corpus[best_idx]}")

''' Query 1: Write a Python function that calculates the factorial of a number recursively.
Top-1 match (score=0.5983):
def factorial(n):
    if n == 0:
        return 1
    return n * factorial(n-1)

 Query 2: How to check if a given string reads the same backward and forward?
Top-1 match (score=0.4925):
def is_palindrome(s: str) -> bool:
    s = s.lower().replace(" ", "")
    return s == s[::-1]

 Query 3: Combine two sorted lists into a single sorted list.
Top-1 match (score=0.6524):
def merge_sorted_lists(a, b):
    result = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            result.append(a[i])
            i += 1
        else:
            result.append(b[j])
            j += 1
    result.extend(a[i:])
    result.extend(b[j:])
    return result
'''
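
To retrieve more than one candidate per query, the util.semantic_search helper from Sentence Transformers can be reused with the queries and embeddings computed above; a minimal sketch (top_k=3 is an arbitrary illustrative choice):

from sentence_transformers import util

# Retrieve the top-3 corpus entries for every query at once
hits = util.semantic_search(query_embeddings, doc_embeddings, top_k=3)

for i, query_hits in enumerate(hits):
    print(f"\nQuery {i+1}: {queries[i]}")
    for hit in query_hits:
        print(f"  score={hit['score']:.4f}  doc_id={hit['corpus_id']}")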

Usage with Transformers

import torch
from transformers import AutoTokenizer, AutoModel

device = "cuda" if torch.cuda.is_available() else "cpu"

model_name = "fyaronskiy/english_code_retriever"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name).to(device)
model.eval()



queries = [
    "function of addition of two numbers",
    "finding the maximum element in an array",
    "sorting a list in ascending order"
]

corpus = [
    "def add(a, b): return a + b",
    "def find_max(arr): return max(arr)",
    "def sort_list(lst): return sorted(lst)"
]

def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # (batch_size, seq_len, hidden_dim)
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return (token_embeddings * input_mask_expanded).sum(1) / input_mask_expanded.sum(1).clamp(min=1e-9)

def encode_texts(texts):
    encoded = tokenizer(
        texts,
        padding=True,
        truncation=True,
        return_tensors="pt",
        max_length=8192
    ).to(device)
    with torch.no_grad():
        model_output = model(**encoded)
    return mean_pooling(model_output, encoded["attention_mask"])

doc_embeddings = encode_texts(["search_document: " + document for document in corpus])
query_embeddings = encode_texts(["search_query: " + query for query in queries])

# Normalize embeddings for cosine similarity
doc_embeddings = torch.nn.functional.normalize(doc_embeddings, p=2, dim=1)
query_embeddings = torch.nn.functional.normalize(query_embeddings, p=2, dim=1)

# Compute cosine similarity and retrieve top-1
for i, query in enumerate(queries):
    scores = torch.matmul(query_embeddings[i], doc_embeddings.T)
    best_idx = torch.argmax(scores).item()
    print(f"\n Query {i+1}: {query}")
    print(f"Top-1 match (score={scores[best_idx]:.4f}):\n{corpus[best_idx]}")

''' Query 1: function of addition of two numbers
Top-1 match (score=0.6047):
def add(a, b): return a + b

 Query 2: finding the maximum element in an array
Top-1 match (score=0.7772):
def find_max(arr): return max(arr)

 Query 3: sorting a list in ascending order
Top-1 match (score=0.7389):
def sort_list(lst): return sorted(lst)
'''
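
For corpora that are too large to score against every query directly, the normalized embeddings can be placed in a vector index. A minimal sketch using FAISS, reusing the variables from the example above (the faiss-cpu package is an assumed extra dependency, not a requirement of the model; IndexFlatIP is an exact inner-product index, which equals cosine similarity on L2-normalized vectors):

import faiss

dim = doc_embeddings.shape[1]
index = faiss.IndexFlatIP(dim)            # exact inner-product search
index.add(doc_embeddings.cpu().numpy())   # embeddings are already L2-normalized float32

scores, ids = index.search(query_embeddings.cpu().numpy(), 3)
for i, query in enumerate(queries):
    print(f"\nQuery: {query}")
    for score, doc_id in zip(scores[i], ids[i]):
        print(f"  score={score:.4f}  {corpus[doc_id]}")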

Evaluation

Metrics

Information Retrieval

| Metric | Value |
|---|---|
| cosine_accuracy@1 | 0.8926 |
| cosine_accuracy@3 | 0.9454 |
| cosine_accuracy@5 | 0.9545 |
| cosine_accuracy@10 | 0.9638 |
| cosine_precision@1 | 0.8926 |
| cosine_precision@3 | 0.3151 |
| cosine_precision@5 | 0.1909 |
| cosine_precision@10 | 0.0964 |
| cosine_recall@1 | 0.8926 |
| cosine_recall@3 | 0.9454 |
| cosine_recall@5 | 0.9545 |
| cosine_recall@10 | 0.9638 |
| cosine_ndcg@10 | 0.9313 |
| cosine_mrr@10 | 0.9206 |
| cosine_map@100 | 0.9212 |
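
These are the standard information-retrieval metrics reported by Sentence Transformers. A minimal sketch of how such numbers can be computed with InformationRetrievalEvaluator (the toy queries, corpus, and relevance mapping below are purely illustrative, not the actual evaluation split):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("fyaronskiy/english_code_retriever")

# Illustrative data: id -> text for queries and corpus, plus relevance judgements
queries = {"q1": "search_query: calculate the factorial of a number recursively"}
corpus = {
    "d1": "search_document: def factorial(n): return 1 if n == 0 else n * factorial(n - 1)",
    "d2": "search_document: def add(a, b): return a + b",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="code-retrieval")
print(evaluator(model))  # dict with accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100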

Training Details

Training Dataset

code_search_net

  • Dataset: the train split of code_search_net
  • Size: 1,880,853 training samples
  • Queries: function docstrings in English; relevant documents: the corresponding function code
  • Negatives: sampled in-batch
  • Distribution of programming languages:

[Figure: distribution of programming languages in the training data]

Training Hyperparameters

Non-Default Hyperparameters

  • batch_size: 64
  • learning_rate: 2e-05
  • num_epochs: 2
  • warmup_ratio: 0.1
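
A minimal training sketch consistent with these hyperparameters, using the Sentence Transformers trainer API (the dataset config, column names, and output directory are assumptions; the original training script may differ, e.g. in how the 'search_query'/'search_document' prefixes are applied and how the dataset is loaded):

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

model = SentenceTransformer("answerdotai/ModernBERT-base")

# (anchor, positive) pairs: English docstring as the query, function code as the document
train_dataset = (
    load_dataset("code_search_net", "all", split="train")
    .select_columns(["func_documentation_string", "func_code_string"])
    .rename_columns({"func_documentation_string": "anchor", "func_code_string": "positive"})
)

# In-batch negatives: every other positive in the batch serves as a negative
loss = losses.MultipleNegativesRankingLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="english_code_retriever",
    per_device_train_batch_size=64,
    learning_rate=2e-5,
    num_train_epochs=2,
    warmup_ratio=0.1,
)

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()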

Framework Versions

  • Python: 3.10.11
  • Sentence Transformers: 5.1.0
  • Transformers: 4.52.3
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.10.0
  • Datasets: 3.6.0
  • Tokenizers: 0.21.4