SVCC 2025 Dataset

About the Challenge

The Singing Voice Conversion Challenge (SVCC) 2025 focuses on advancing singing style conversion (SSC) technology. Unlike singing voice conversion (SVC), which converts only singer identity, SSC aims to transform how a singer performs a song by changing the singing style while preserving the linguistic content and identity of the source singer. This sits at the intersection of speech and music processing, making it a novel and challenging research field.

If you plan to use this dataset or the test set samples from other systems, please cite the following paper. We will also add the papers from other systems as soon as they are released by the authors.

# main findings paper
@article{violeta2025svcc25,
      title={{The Singing Voice Conversion Challenge 2025: From Singer Identity Conversion To Singing Style Conversion}},
      author={Lester Phillip Violeta and Xueyao Zhang and Jiatong Shi and Yusuke Yasuda and Wen-Chin Huang and Zhizheng Wu and Tomoki Toda},
      journal={arXiv preprint arXiv:2509.15629},
      year={2025},
      url={https://arxiv.org/abs/2509.15629},
}

# baseline 1
@inproceedings{violeta2025serenade,
      title={{Serenade: A Singing Style Conversion Framework Based On Audio Infilling}}, 
      author={Lester Phillip Violeta and Wen-Chin Huang and Tomoki Toda},
      booktitle={Proc. EUSIPCO},
      year={2025},
      pages={411--415},
}

# baseline 2
@article{zhang2025vevo2,
  title={{Vevo2: Bridging Controllable Speech and Singing Voice Generation via Unified Prosody Learning}},
  author={Zhang, Xueyao and Zhang, Junan and Wang, Yuancheng and Wang, Chaoren and Chen, Yuanzhe and Jia, Dongya and Chen, Zhuo and Wu, Zhizheng},
  journal={arXiv preprint arXiv:2508.16332},
  year={2025}
}

# baseline 3
@inproceedings{yamamoto2023svcc,
  title={{A Comparative Study of Voice Conversion Models with Large-Scale Speech and Singing Data: The T13 Systems for the Singing Voice Conversion Challenge 2023}},
  author={Yamamoto, Ryuichi and Yoneyama, Reo and Violeta, Lester Phillip and Huang, Wen-Chin and Toda, Tomoki},
  booktitle={Proc. ASRU},
  year={2023}
}

System Submissions

We also release samples from the baseline systems, along with some of the system submissions. These can be used by researchers to compare new systems with previous submissions. Please access them from the link here.

Notes

  • groundtruth_*_0.9 and groundtruth_*_1.1 are the pitch-shifted samples, which are to be used for the singing style similarity XAB test (a pitch-shifting sketch follows this note).
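
The exact procedure used to create these samples is not documented in this card. As a rough sketch only, assuming the 0.9/1.1 suffix denotes a frequency-scaling factor (an assumption, not a confirmed detail) and using illustrative file names, an equivalent shift could be produced with librosa:

import numpy as np
import librosa
import soundfile as sf

# ASSUMPTION: the filename suffix is a frequency-scaling factor; file names are illustrative.
factor = 0.9
y, sr = librosa.load("groundtruth_example.wav", sr=None)       # keep the native sampling rate
n_steps = 12 * np.log2(factor)                                 # frequency ratio -> semitones
y_shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)
sf.write("groundtruth_example_0.9.wav", y_shifted, sr)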

Dataset Overview

This dataset is provided for participants of the SVCC 2025. It contains singing voice recordings in 7 different singing styles:

  • Breathy
  • Falsetto
  • Mixed Voice
  • Pharyngeal
  • Glissando
  • Vibrato
  • Control style

Dataset structure

  • train/Language/singer/style/song/group/0000.*.wav, .TextGrid, .musicxml, .json, .h5 # contains the official training set, stored in SVCC2025.tar.gz (a path-parsing sketch follows this list)
  • test/singer/0000_Style.wav # contains the official test set
  • test_gt/singer/0000_Style.wav # contains the ground truth target samples, to be used only for evaluation
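
If you work directly from the raw SVCC2025.tar.gz extraction instead of the loader shown in the next section, the per-utterance metadata can be recovered from the directory layout above. A minimal sketch, with illustrative path values:

from pathlib import Path

def parse_train_path(wav_path):
    # train/Language/singer/style/song/group/0000.wav -> metadata fields
    parts = Path(wav_path).parts
    language, singer, style, song, group = parts[-6:-1]
    return {"language": language, "singer": singer, "style": style,
            "song": song, "group": group, "utt_id": Path(wav_path).stem}

# Example (the song/group directory names below are illustrative):
meta = parse_train_path("train/English/SingerA-EN-Tenor-1/Breathy/song1/group1/0000.wav")
print(meta["singer"], meta["style"], meta["utt_id"])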

Loading

from datasets import load_dataset

# Paths-only (fast)
ds_train = load_dataset("lestervioleta/svcc2025", name="default", split="train")
# With inline text of TextGrid/MusicXML/JSON
ds_train_rich = load_dataset("lestervioleta/svcc2025", name="with_text", split="train")

row = ds_train[0]
audio = row["audio"]  # dict with 'array' and 'sampling_rate' after access
print(row["language"], row["singer"], row["style"], row["song"], row["group"])
print(row["wav_path"], row["textgrid_path"], row["musicxml_path"], row["json_path"], row["h5_path"])

Tasks

The challenge consists of two main tasks:

Task 1: In-Domain Singing Style Conversion

  • Convert source singer A's singing style from style 1 to style 2
  • Singer A is included in the training dataset (located in train/English/SingerA-EN-Tenor-1/)
  • Reference singing voice in style 2 from singer A is provided in the training dataset

Task 2: Zero-Shot Singing Style Conversion

  • Convert source singer B's singing style from style 1 to style 2
  • Singer B is NOT included in the training dataset
  • No reference singing voice from singer B will be provided
  • Participants need to use reference singing voices in style 2 from different singers in the training dataset

Style Conversion Pairs

  • Please follow the guide below and convert each source style into each of the listed target styles.
  • For each phrase, there are four source singing styles.
  • You will be tasked with converting each of them into three different target styles.
  • Thus, you need to submit a total of 96 converted samples for each task (8 phrases × 4 sources × 3 targets; see the enumeration sketch after the phrase list).
  • The style conversion pairs are the same for both Task 1 and Task 2.

Phrase 0000:
  Sources: ['Breathy', 'Falsetto', 'Mixed', 'Control']
  Targets: ['Glissando', 'Pharyngeal', 'Vibrato']

Phrase 0001:
  Sources: ['Glissando', 'Pharyngeal', 'Vibrato', 'Control']
  Targets: ['Breathy', 'Falsetto', 'Mixed']

Phrase 0002:
  Sources: ['Breathy', 'Falsetto', 'Glissando', 'Control']
  Targets: ['Mixed', 'Pharyngeal', 'Vibrato']

Phrase 0003:
  Sources: ['Breathy', 'Mixed', 'Pharyngeal', 'Control']
  Targets: ['Falsetto', 'Glissando', 'Vibrato']

Phrase 0004:
  Sources: ['Falsetto', 'Mixed', 'Vibrato', 'Control']
  Targets: ['Breathy', 'Glissando', 'Pharyngeal']

Phrase 0005:
  Sources: ['Breathy', 'Falsetto', 'Pharyngeal', 'Control']
  Targets: ['Mixed', 'Glissando', 'Vibrato']

Phrase 0006:
  Sources: ['Breathy', 'Falsetto', 'Vibrato', 'Control']
  Targets: ['Mixed', 'Glissando', 'Pharyngeal']

Phrase 0007:
  Sources: ['Breathy', 'Mixed', 'Glissando', 'Control']
  Targets: ['Falsetto', 'Pharyngeal', 'Vibrato']
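
For bookkeeping, the full set of (phrase, source, target) conversions per task can be enumerated directly from the list above; the mapping below is copied verbatim from it:

from itertools import product

PAIRS = {
    "0000": (["Breathy", "Falsetto", "Mixed", "Control"], ["Glissando", "Pharyngeal", "Vibrato"]),
    "0001": (["Glissando", "Pharyngeal", "Vibrato", "Control"], ["Breathy", "Falsetto", "Mixed"]),
    "0002": (["Breathy", "Falsetto", "Glissando", "Control"], ["Mixed", "Pharyngeal", "Vibrato"]),
    "0003": (["Breathy", "Mixed", "Pharyngeal", "Control"], ["Falsetto", "Glissando", "Vibrato"]),
    "0004": (["Falsetto", "Mixed", "Vibrato", "Control"], ["Breathy", "Glissando", "Pharyngeal"]),
    "0005": (["Breathy", "Falsetto", "Pharyngeal", "Control"], ["Mixed", "Glissando", "Vibrato"]),
    "0006": (["Breathy", "Falsetto", "Vibrato", "Control"], ["Mixed", "Glissando", "Pharyngeal"]),
    "0007": (["Breathy", "Mixed", "Glissando", "Control"], ["Falsetto", "Pharyngeal", "Vibrato"]),
}

conversions = [
    (phrase, src, tgt)
    for phrase, (sources, targets) in PAIRS.items()
    for src, tgt in product(sources, targets)
]
assert len(conversions) == 96  # 8 phrases x 4 sources x 3 targets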

Test set description

  • There are two singers in the test dataset.
  • The audio is sampled at 24 kHz (a resampling sketch follows this list).
  • For Task 1: Use Singer A. This singer is equivalent to English-Tenor1 in the SVCC 2025 training set. The goal will be to evaluate the model's ability to convert into a target singing style.
  • For Task 2: Use Singer B. This singer is NOT in the SVCC 2025 training dataset. The goal will be to evaluate both the model's ability to convert into a target singing style and retain the source singer's identity.
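
If your system produces audio at a different sampling rate, the outputs can be resampled to 24 kHz for consistency with the test set. A minimal sketch using librosa and soundfile, with illustrative file names (a convenience, not an official submission requirement):

from pathlib import Path
import librosa
import soundfile as sf

TARGET_SR = 24000  # matches the test set sampling rate

Path("converted_24k").mkdir(exist_ok=True)
y, sr = librosa.load("converted/0000_Vibrato.wav", sr=None)    # keep the native rate
if sr != TARGET_SR:
    y = librosa.resample(y, orig_sr=sr, target_sr=TARGET_SR)
sf.write("converted_24k/0000_Vibrato.wav", y, TARGET_SR)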

Rules and Important Notes

  • Please read the full challenge rules at the website before using this dataset.
  • This dataset is a subset of the GTSinger dataset. As per challenge rules, participants are NOT allowed to use the full GTSinger dataset for training.
  • Any other datasets are allowed for training (open-sourced or not). However, we will distinguish in the final paper if participants only used open-sourced datasets for training.

Dataset License

This dataset is a subset of the GTSinger dataset and is subject to the GTSinger license terms. By using this dataset, you agree to comply with the GTSinger license conditions. For more details on the GTSinger license, please visit the GTSinger GitHub repository.

Please read the license terms carefully. This dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0). For more details, visit: this LICENSE page.

Please cite their original paper if you use this dataset:

@article{zhang2024gtsinger,
  title={{GTSinger: A Global Multi-Technique Singing Corpus with Realistic Music Scores for All Singing Tasks}},
  author={Zhang, Yu and Pan, Changhao and Guo, Wenxiang and Li, Ruiqi and Zhu, Zhiyuan and Wang, Jialei and Xu, Wenhao and Lu, Jingyu and Hong, Zhiqing and Wang, Chuxin and others},
  journal={arXiv preprint arXiv:2409.13832},
  year={2024}
}

Contact

For any questions or issues regarding the dataset, please contact: svcc2025@vc-challenge.org
