---
license: mit
language:
- fr
tags:
- life-sciences
- clinical
- biomedical
- bio
- medical
- biology
- synthetic
pretty_name: TransCorpus-bio-fr
size_categories:
- 10M<n<100M
---

# TransCorpus-bio-fr
TransCorpus-bio-fr is a large-scale, parallel biomedical corpus consisting of French synthetic translations of PubMed abstracts. This dataset was created using the TransCorpus framework and is designed to enable high-quality French biomedical language modeling and downstream NLP research.
## Dataset Details
- Source: PubMed abstracts (English)
- Target: French (synthetic, machine-translated)
- Translation Model: M2M-100 (1.2B), run with the TransCorpus Toolkit
- Size: 22 million abstracts, 36.4 GB of text (see the streaming sketch after this list)
- Domain: Biomedical, clinical, life sciences
- Format: one abstract per line
- Pre-Trained Model: TransBERT-bio-fr 🤗
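Given the size listed above, it may be easier to iterate over the corpus without downloading it in full. Here is a minimal sketch using 🤗 Datasets streaming mode (standard `load_dataset` options, nothing dataset-specific assumed):

```python
from datasets import load_dataset

# Stream records lazily instead of downloading all ~36 GB up front.
stream = load_dataset("jknafou/TransCorpus-bio-fr", split="train", streaming=True)

for i, record in enumerate(stream):
    print(record["text"][:100])  # each record holds one abstract
    if i == 2:
        break
```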
## Motivation
French is a low-resource language for biomedical NLP, with limited availability of large, high-quality corpora. TransCorpus-bio-fr bridges this gap by leveraging state-of-the-art neural machine translation to generate a massive, high-quality synthetic corpus, enabling robust pretraining and evaluation of French biomedical language models.
## Usage

```python
from datasets import load_dataset

dataset = load_dataset("jknafou/TransCorpus-bio-fr", split="train")
print(dataset)
# Output:
# Dataset({
#     features: ['text'],
#     num_rows: 21567136
# })

print(dataset[0])
# Output: {'text': " [Études biochimiques sur les composants de la camomille/III. Des études in vitro sur l'activité antipeptique de (--)-alpha-bisabolol (transl de l'auteur)].\t(--)-l'alpha-bisabolol a une action antipeptique primaire en fonction de la posologie, qui n'est pas causée par une altération de la valeur du pH. L'activité protéolytique de la pepsine est réduite de 50 % par l'ajout de bisabolol dans un rapport de 1/0,5. L'action antipeptique du bisabolol n'a lieu qu'en cas de contact direct. En cas de contact préalable avec le substrat, l'effet inhibiteur est perdu."}
```
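As the sample record above suggests, the title and abstract body appear to be separated by a tab character. Below is a minimal sketch for splitting the two fields, assuming this layout holds across the corpus; the `split_record` helper is hypothetical, not part of any released tooling:

```python
from datasets import load_dataset

dataset = load_dataset("jknafou/TransCorpus-bio-fr", split="train")

# Hypothetical helper: split a record into (title, body), assuming the
# tab-separated "title\tabstract" layout seen in the sample record above.
def split_record(text: str) -> tuple[str, str]:
    title, sep, body = text.strip().partition("\t")
    # If no tab is present, treat the whole record as the body.
    return (title, body) if sep else ("", title)

title, body = split_record(dataset[0]["text"])
print(title)      # the bracketed translated title
print(body[:80])  # first characters of the abstract body
```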
## Benchmark Results
TransBERT-bio-fr, pretrained on TransCorpus-bio-fr, achieves state-of-the-art results on the French biomedical benchmark DrBenchmark, outperforming both general-domain and previous domain-specific models on classification, NER, POS, and STS tasks. See TransBERT-bio-fr for details.
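For reference, here is a minimal sketch of loading the pretrained model for a fill-mask probe with 🤗 Transformers. The `jknafou/TransBERT-bio-fr` model ID is an assumption based on this dataset's namespace; check the linked model card for the exact name and intended usage:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Model ID assumed from this dataset's namespace; verify on the model card.
model_name = "jknafou/TransBERT-bio-fr"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Fill-mask probe on a French biomedical sentence (illustrative only).
text = f"Le patient présente une {tokenizer.mask_token} aiguë."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Decode the top prediction at the masked position.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_pos].argmax(-1)))
```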
## Why Synthetic Translation?
- Scalable: Enables creation of large-scale corpora for any language with a strong MT system (see the translation sketch after this list).
- Effective: Supports state-of-the-art performance on downstream tasks.
- Accessible: Makes domain-specific NLP feasible for any language.
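To illustrate the general recipe (a sketch only, not the exact TransCorpus pipeline, which additionally handles batching and corpus-scale processing), here is English-to-French translation with the M2M-100 1.2B checkpoint via 🤗 Transformers:

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

# The 1.2B M2M-100 checkpoint, matching the model size listed above.
model_name = "facebook/m2m100_1.2B"
tokenizer = M2M100Tokenizer.from_pretrained(model_name)
model = M2M100ForConditionalGeneration.from_pretrained(model_name)

# Translate one English abstract into French.
tokenizer.src_lang = "en"
abstract = "Biochemical studies on camomile components."
encoded = tokenizer(abstract, return_tensors="pt")
generated = model.generate(
    **encoded,
    forced_bos_token_id=tokenizer.get_lang_id("fr"),  # force French output
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```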
## Citation
If you use this corpus, please cite:
```bibtex
@inproceedings{knafou2025transbert,
  title={Trans{BERT}: A Framework for Synthetic Translation in Domain-Specific Language Modeling},
  author={Julien Knafou and Luc Mottin and Ana{\"\i}s Mottaz and Alexandre Flament and Patrick Ruch},
  booktitle={The 2025 Conference on Empirical Methods in Natural Language Processing},
  year={2025},
  url={https://transbert.s3.text-analytics.ch/TransBERT.pdf}
}
```