# Talk2Ref: A Dataset for Reference Prediction from Scientific Talks
Scientific talks are a growing medium for disseminating research, and automatically identifying relevant literature that
grounds or enriches a talk would be highly valuable for researchers and students alike. We introduce Reference
Prediction from Talks (RPT), a new task that maps long and unstructured scientific presentations to relevant papers.
To support research on RPT, we present Talk2Ref, the first large-scale dataset of its kind, containing 6,279 talks
and 43,429 cited papers (26 per talk on average), where relevance is approximated by the papers cited in the talk’s
corresponding source publication.
We establish strong baselines by evaluating state-of-the-art text embedding
models in zero-shot retrieval scenarios and propose a dual-encoder architecture trained on Talk2Ref. We further
explore strategies for handling long transcripts and training for domain adaptation.
Our results show that fine-tuning on Talk2Ref significantly improves citation prediction performance, demonstrating both
the challenges of the task and the effectiveness of our dataset for learning semantic representations from spoken scientific content.
The dataset and trained models are released under an open license to foster future research on integrating spoken scientific communication into citation recommendation systems.
## Dataset Summary
To the best of our knowledge, no existing dataset supports research on Reference Prediction from Talks (RPT). Talk2Ref is the first large-scale resource pairing scientific presentations with their corresponding relevant papers. Relevance is modeled using the citations in each talk’s source publication.
Talk2Ref includes:
- 6,279 scientific talks
- 43,429 cited papers
- ≈26 references per talk
- Spanning 2017–2022
- Covering ACL, NAACL, and EMNLP conferences
This dataset provides a foundation for systematically studying reference prediction from spoken scientific content at scale.
## Dataset Structure
| Split | Conferences | Years | Talks | Avg. Length (min) | Avg. Words | Avg. References | Total References |
|---|---|---|---|---|---|---|---|
| Train | ACL, NAACL, EMNLP | 2017–2021 | 3,971 | 12.1 | 1,615 | 26.75 | 31,064 |
| Dev | ACL | 2022 | 882 | 9.9 | 1,327 | 26.05 | 11,805 |
| Test | EMNLP, NAACL | 2022 | 1,426 | 9.1 | 1,186 | 25.66 | 16,935 |
| Total | ACL, NAACL, EMNLP | 2017–2022 | 6,279 | 11.1 | 1,478 | 26.4 | 43,429 |
Talks are partitioned chronologically by conference year.
Earlier years form the training split (2017–2021), and later years (2022) are used for development and testing,
ensuring temporal consistency between splits.
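The chronological partition described above can be sketched in Python. This is an illustrative helper, not part of the released loader; the function name is an assumption:

```python
def assign_split(conference: str, year: int) -> str:
    """Assign a talk to a Talk2Ref split by conference year (chronological partition)."""
    if 2017 <= year <= 2021:
        return "train"  # all conferences, 2017-2021
    if year == 2022:
        if conference == "ACL":
            return "dev"  # ACL 2022
        if conference in ("NAACL", "EMNLP"):
            return "test"  # EMNLP and NAACL 2022
    raise ValueError(f"Talk outside the dataset's range: {conference} {year}")
```

For example, `assign_split("ACL", 2019)` returns `"train"`, while `assign_split("EMNLP", 2022)` returns `"test"`.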
## Dataset Fields
| Field | Type | Description |
|---|---|---|
| video_path | string | URL or path to the original conference talk video. |
| audio | audio | Audio waveform of the talk segment with sampling rate information. |
| sr | int | Sampling rate (Hz) of the audio recording. |
| abstract | string | Abstract of the corresponding scientific paper. |
| language | string | Language of the talk (English). |
| split | string | Split name ("train", "dev", or "test"). |
| duration | float | Duration of the audio in seconds. |
| conference | string | Conference name (ACL, NAACL, or EMNLP). |
| year | string | Year of the conference. |
| transcription | string | Automatic speech recognition (ASR) transcript of the talk. |
| title | string | Paper title associated with the talk. |
| references | list | List of structured metadata for cited papers, including title, authors, abstract, and year. |
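A single record under the schema above can be sketched as follows. All values here are invented placeholders for illustration, not real dataset content, and the access pattern shown is an assumption rather than an official API:

```python
# Illustrative record following the Talk2Ref field schema; every value
# below is a placeholder, not actual dataset content.
example = {
    "video_path": "https://example.org/talks/talk_0001.mp4",
    "audio": [0.0, 0.01, -0.02],  # waveform samples (placeholder)
    "sr": 16000,                  # sampling rate in Hz
    "abstract": "We study reference prediction from scientific talks.",
    "language": "English",
    "split": "train",
    "duration": 726.0,            # seconds
    "conference": "ACL",
    "year": "2019",
    "transcription": "Hello, today I will present our work on ...",
    "title": "An Example Paper Title",
    "references": [
        {
            "title": "A Cited Paper",
            "authors": ["A. Author"],
            "abstract": "Prior work on citation recommendation ...",
            "year": 2017,
        },
    ],
}

# Typical use for the RPT task: pair the spoken transcript with the
# titles of its gold (cited) references.
transcript = example["transcription"]
gold_titles = [ref["title"] for ref in example["references"]]
```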
## Data Collection and Processing
- **Source Acquisition:** Conference talks and associated papers were obtained from the ACL Anthology.
- **Audio Extraction:** Audio tracks were extracted from the videos and converted to `.wav` format using FFmpeg.
- **Transcription:** Speech was transcribed using Whisper-Large-v3.
- **Reference Extraction:** The corresponding paper PDFs were parsed with GROBID to extract all cited references and their metadata.
- **Abstract Retrieval:** Missing abstracts were filled by querying CrossRef, arXiv, OpenAlex, and Semantic Scholar.
- **Filtering:** Invalid or placeholder abstracts were removed.
This process results in a rich dataset linking each talk to its cited papers, including audio, transcript, and metadata.
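The final filtering step can be sketched as below. The placeholder markers and the minimum-length threshold are assumptions for illustration; the exact criteria used to build the dataset may differ:

```python
from typing import Optional

# Hypothetical markers for invalid or placeholder abstracts; these are
# illustrative assumptions, not the dataset authors' exact rules.
PLACEHOLDER_MARKERS = ("n/a", "no abstract available", "abstract not found")

def is_valid_abstract(abstract: Optional[str], min_words: int = 10) -> bool:
    """Return True if an abstract looks like real text rather than a placeholder."""
    if not abstract:
        return False
    text = abstract.strip().lower()
    if text in PLACEHOLDER_MARKERS:
        return False
    return len(text.split()) >= min_words

refs = [
    {"title": "Kept", "abstract": "This paper studies reference prediction "
                                  "from scientific talks using dual encoders."},
    {"title": "Dropped", "abstract": "N/A"},
    {"title": "Also dropped", "abstract": None},
]
kept = [r for r in refs if is_valid_abstract(r["abstract"])]
```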
## Use Cases
Talk2Ref supports research on:
- Reference Prediction from Spoken Content
- Speech-to-Text and Speech-to-Abstract Generation
- Retrieval and Representation Learning
## Licensing
The dataset is distributed under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
Users are free to share and adapt the dataset with appropriate attribution.
## Citation
If you use this dataset, please cite the following paper:
```bibtex
@misc{broy2025talk2refdatasetreferenceprediction,
  title         = {Talk2Ref: A Dataset for Reference Prediction from Scientific Talks},
  author        = {Frederik Broy and Maike Züfle and Jan Niehues},
  year          = {2025},
  eprint        = {2510.24478},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CL},
  url           = {https://arxiv.org/abs/2510.24478}
}
```