| source (stringclasses, 470 values) | url (stringlengths, 49–167) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1–512) | chunk_id (stringlengths, 5–9) |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/#initialising-speechencoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder
|
.md
|
[`SpeechEncoderDecoderModel`] can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained Transformer-based speech model, *e.g.* [Wav2Vec2](wav2vec2) or [Hubert](hubert), can serve as the encoder, while pretrained auto-encoding models (*e.g.* BERT), pretrained causal language models (*e.g.* GPT2), and the pretrained decoder part of sequence-to-sequence models (*e.g.* the decoder of BART) can all be used as the decoder.
|
168_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/#initialising-speechencoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder
|
.md
|
Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized.
Initializing [`SpeechEncoderDecoderModel`] from a pretrained encoder and decoder checkpoint requires the model to be fine-tuned on a downstream task, as has been shown in [the *Warm-starting-encoder-decoder blog post*](https://huggingface.co/blog/warm-starting-encoder-decoder).
|
168_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/#initialising-speechencoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder
|
.md
|
To do so, the `SpeechEncoderDecoderModel` class provides a [`SpeechEncoderDecoderModel.from_encoder_decoder_pretrained`] method.
```python
>>> from transformers import SpeechEncoderDecoderModel
|
168_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/#initialising-speechencoderdecodermodel-from-a-pretrained-encoder-and-a-pretrained-decoder
|
.md
|
>>> model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
... "facebook/hubert-large-ll60k", "google-bert/bert-base-uncased"
... )
```
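As with any model in Transformers, the warm-started encoder-decoder can then be saved to disk and reloaded as a single checkpoint; a minimal sketch (the local path `"./hubert-2-bert"` is just an illustrative example):

```python
>>> # save the warm-started model locally (the path is illustrative)
>>> model.save_pretrained("./hubert-2-bert")

>>> # later, reload it as a single SpeechEncoderDecoderModel
>>> model = SpeechEncoderDecoderModel.from_pretrained("./hubert-2-bert")
```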
|
168_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/#loading-an-existing-speechencoderdecodermodel-checkpoint-and-perform-inference
|
.md
|
To load fine-tuned checkpoints of the `SpeechEncoderDecoderModel` class, [`SpeechEncoderDecoderModel`] provides the `from_pretrained(...)` method just like any other model architecture in Transformers.
To perform inference, one uses the [`generate`] method, which autoregressively generates text. This method supports various forms of decoding, such as greedy decoding, beam search and multinomial sampling.
```python
>>> from transformers import Wav2Vec2Processor, SpeechEncoderDecoderModel
|
168_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/#loading-an-existing-speechencoderdecodermodel-checkpoint-and-perform-inference
|
.md
|
```python
>>> from transformers import Wav2Vec2Processor, SpeechEncoderDecoderModel
>>> from datasets import load_dataset
>>> import torch
|
168_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/#loading-an-existing-speechencoderdecodermodel-checkpoint-and-perform-inference
|
.md
|
>>> # load a fine-tuned speech translation model and corresponding processor
>>> model = SpeechEncoderDecoderModel.from_pretrained("facebook/wav2vec2-xls-r-300m-en-to-15")
>>> processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-xls-r-300m-en-to-15")
|
168_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/#loading-an-existing-speechencoderdecodermodel-checkpoint-and-perform-inference
|
.md
|
>>> # let's perform inference on a piece of English speech (which we'll translate to German)
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values
|
168_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/#loading-an-existing-speechencoderdecodermodel-checkpoint-and-perform-inference
|
.md
|
>>> # autoregressively generate transcription (uses greedy decoding by default)
>>> generated_ids = model.generate(input_values)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
>>> print(generated_text)
Mr. Quilter ist der Apostel der Mittelschicht und wir freuen uns, sein Evangelium willkommen heißen zu können.
```
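Greedy decoding is only the default; other decoding strategies can be selected through the usual [`generate`] keyword arguments. A minimal sketch, with hyperparameter values that are purely illustrative:

```python
>>> # beam search with 5 beams
>>> generated_ids = model.generate(input_values, num_beams=5, max_new_tokens=128)

>>> # multinomial sampling
>>> generated_ids = model.generate(input_values, do_sample=True, top_k=50, max_new_tokens=128)
```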
|
168_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/#training
|
.md
|
Once the model is created, it can be fine-tuned similarly to BART, T5 or any other encoder-decoder model on a dataset of (speech, text) pairs.
As you can see, only two inputs are required for the model to compute a loss: `input_values` (the
speech inputs) and `labels` (the `input_ids` of the encoded target sequence).
```python
>>> from transformers import AutoTokenizer, AutoFeatureExtractor, SpeechEncoderDecoderModel
>>> from datasets import load_dataset
|
168_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/#training
|
.md
|
>>> encoder_id = "facebook/wav2vec2-base-960h" # acoustic model encoder
>>> decoder_id = "google-bert/bert-base-uncased" # text decoder
>>> feature_extractor = AutoFeatureExtractor.from_pretrained(encoder_id)
>>> tokenizer = AutoTokenizer.from_pretrained(decoder_id)
>>> # Combine pre-trained encoder and pre-trained decoder to form a Seq2Seq model
>>> model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id)
|
168_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/#training
|
.md
|
>>> model.config.decoder_start_token_id = tokenizer.cls_token_id
>>> model.config.pad_token_id = tokenizer.pad_token_id
>>> # load an audio input and pre-process (normalise mean/std to 0/1)
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> input_values = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt").input_values
|
168_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/#training
|
.md
|
>>> # load its corresponding transcription and tokenize to generate labels
>>> labels = tokenizer(ds[0]["text"], return_tensors="pt").input_ids
>>> # the forward function automatically creates the correct decoder_input_ids
>>> loss = model(input_values=input_values, labels=labels).loss
>>> loss.backward()
```
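The returned loss plugs into a standard PyTorch training loop; a minimal sketch with an optimizer (the optimizer choice and learning rate are illustrative assumptions, not taken from the original docs):

```python
>>> from torch.optim import AdamW

>>> optimizer = AdamW(model.parameters(), lr=5e-5)  # illustrative learning rate

>>> # one optimization step on the single (speech, text) pair from above
>>> loss = model(input_values=input_values, labels=labels).loss
>>> loss.backward()
>>> optimizer.step()
>>> optimizer.zero_grad()
```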
|
168_5_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/#speechencoderdecoderconfig
|
.md
|
[`SpeechEncoderDecoderConfig`] is the configuration class to store the configuration of a
[`SpeechEncoderDecoderModel`]. It is used to instantiate an Encoder Decoder model according to the specified
arguments, defining the encoder and decoder configs.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
kwargs (*optional*):
Dictionary of keyword arguments. Notably:
|
168_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/#speechencoderdecoderconfig
|
.md
|
Args:
kwargs (*optional*):
Dictionary of keyword arguments. Notably:
- **encoder** ([`PretrainedConfig`], *optional*) -- An instance of a configuration object that defines
the encoder config.
- **decoder** ([`PretrainedConfig`], *optional*) -- An instance of a configuration object that defines
the decoder config.
Examples:
```python
>>> from transformers import BertConfig, Wav2Vec2Config, SpeechEncoderDecoderConfig, SpeechEncoderDecoderModel
|
168_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/#speechencoderdecoderconfig
|
.md
|
>>> # Initializing a Wav2Vec2 & BERT style configuration
>>> config_encoder = Wav2Vec2Config()
>>> config_decoder = BertConfig()
>>> config = SpeechEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
>>> # Initializing a Wav2Vec2 & BERT model from the Wav2Vec2 & google-bert/bert-base-uncased style configurations
>>> model = SpeechEncoderDecoderModel(config=config)
|
168_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/#speechencoderdecoderconfig
|
.md
|
>>> # Accessing the model configuration
>>> config_encoder = model.config.encoder
>>> config_decoder = model.config.decoder
>>> # set decoder config to causal lm
>>> config_decoder.is_decoder = True
>>> config_decoder.add_cross_attention = True
>>> # Saving the model, including its configuration
>>> model.save_pretrained("my-model")
|
168_6_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/#speechencoderdecoderconfig
|
.md
|
>>> # Saving the model, including its configuration
>>> model.save_pretrained("my-model")
>>> # loading model and config from pretrained folder
>>> encoder_decoder_config = SpeechEncoderDecoderConfig.from_pretrained("my-model")
>>> model = SpeechEncoderDecoderModel.from_pretrained("my-model", config=encoder_decoder_config)
```
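The **encoder** and **decoder** keyword arguments described in the Args section can also be passed directly as nested config dicts when constructing the configuration; a small sketch (assuming the sub-configs are serialized with `to_dict()`):

```python
>>> # a sketch: build the joint config directly from nested encoder/decoder dicts
>>> config = SpeechEncoderDecoderConfig(
...     encoder=Wav2Vec2Config().to_dict(), decoder=BertConfig().to_dict()
... )
```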
|
168_6_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/#speechencoderdecodermodel
|
.md
|
This class can be used to initialize a speech-sequence-to-text-sequence model with any pretrained speech
autoencoding model as the encoder and any pretrained text autoregressive model as the decoder. The encoder is
loaded via the [`~AutoModel.from_pretrained`] function and the decoder is loaded via the
[`~AutoModelForCausalLM.from_pretrained`] function. Cross-attention layers are automatically added to the decoder
and should be fine-tuned on a downstream generative task, like summarization.
|
168_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/#speechencoderdecodermodel
|
.md
|
and should be fine-tuned on a downstream generative task, like summarization.
The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation
tasks was shown in [Leveraging Pre-trained Checkpoints for Sequence Generation
Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, and Aliaksei Severyn.
Additionally, in [Large-Scale Self- and Semi-Supervised Learning for Speech
|
168_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/#speechencoderdecodermodel
|
.md
|
Additionally, in [Large-Scale Self- and Semi-Supervised Learning for Speech
Translation](https://arxiv.org/abs/2104.06678) it is shown how leveraging large pretrained speech models for speech
translation yields a significant performance improvement.
After such a Speech-Encoder-Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other
model (see the examples for more information).
|
168_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/#speechencoderdecodermodel
|
.md
|
models (see the examples for more information).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
|
168_7_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/#speechencoderdecodermodel
|
.md
|
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`SpeechEncoderDecoderConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
|
168_7_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/#speechencoderdecodermodel
|
.md
|
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
[`SpeechEncoderDecoderModel`] is a generic model class that will be instantiated as a transformer architecture with
one of the base model classes of the library as encoder and another one as decoder when created with the
[`~AutoModel.from_pretrained`] class method for the encoder and the
[`~AutoModelForCausalLM.from_pretrained`] class method for the decoder.
Methods: forward
|
168_7_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/#speechencoderdecodermodel
|
.md
|
[`~AutoModelForCausalLM.from_pretrained`] class method for the decoder.
Methods: forward
- from_encoder_decoder_pretrained
|
168_7_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speech-encoder-decoder.md
|
https://huggingface.co/docs/transformers/en/model_doc/speech-encoder-decoder/#flaxspeechencoderdecodermodel
|
.md
|
No docstring available for FlaxSpeechEncoderDecoderModel
Methods: __call__
- from_encoder_decoder_pretrained
|
168_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/
|
.md
|
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
169_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
169_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/#overview
|
.md
|
The Mimi model was proposed in [Moshi: a speech-text foundation model for real-time dialogue](https://kyutai.org/Moshi.pdf) by Alexandre Défossez, Laurent Mazaré, Manu Orsini, Amélie Royer, Patrick Pérez, Hervé Jégou, Edouard Grave and Neil Zeghidour. Mimi is a high-fidelity audio codec model developed by the Kyutai team, which combines semantic and acoustic information into audio tokens running at 12 Hz and a bitrate of 1.1 kbps. In other words, it can be used to map audio waveforms into “audio tokens”, known
|
169_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/#overview
|
.md
|
running at 12Hz and a bitrate of 1.1kbps. In other words, it can be used to map audio waveforms into “audio tokens”, known as “codebooks”.
|
169_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/#overview
|
.md
|
The abstract from the paper is the following:
|
169_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/#overview
|
.md
|
*We introduce Moshi, a speech-text foundation model and full-duplex spoken dialogue framework. Current systems for spoken dialogue rely on pipelines of independent components, namely voice activity detection, speech recognition, textual dialogue and text-to-speech. Such frameworks cannot emulate the experience of real conversations. First, their complexity induces a latency of several seconds between interactions. Second, text being the intermediate modality for dialogue, non-linguistic information that
|
169_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/#overview
|
.md
|
seconds between interactions. Second, text being the intermediate modality for dialogue, non-linguistic information that modifies meaning— such as emotion or non-speech sounds— is lost in the interaction. Finally, they rely on a segmentation into speaker turns, which does not take into account overlapping speech, interruptions and interjections. Moshi solves these independent issues altogether by casting spoken dialogue as speech-to-speech generation. Starting from a text language model backbone, Moshi
|
169_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/#overview
|
.md
|
altogether by casting spoken dialogue as speech-to-speech generation. Starting from a text language model backbone, Moshi generates speech as tokens from the residual quantizer of a neural audio codec, while modeling separately its own speech and that of the user into parallel streams. This allows for the removal of explicit speaker turns, and the modeling of arbitrary conversational dynamics. We moreover extend the hierarchical semantic-to-acoustic token generation of previous work to first predict
|
169_1_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/#overview
|
.md
|
dynamics. We moreover extend the hierarchical semantic-to-acoustic token generation of previous work to first predict time-aligned text tokens as a prefix to audio tokens. Not only this “Inner Monologue” method significantly improves the linguistic quality of generated speech, but we also illustrate how it can provide streaming speech recognition and text-to-speech. Our resulting model is the first real-time full-duplex spoken large language model, with a theoretical latency of 160ms, 200ms in practice,
|
169_1_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/#overview
|
.md
|
model is the first real-time full-duplex spoken large language model, with a theoretical latency of 160ms, 200ms in practice, and is available at github.com/kyutai-labs/moshi.*
|
169_1_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/#overview
|
.md
|
Its architecture is based on [Encodec](model_doc/encodec) with several major differences:
* it uses a much lower frame rate.
* it uses additional transformers for encoding and decoding for better latent contextualization.
* it uses a different quantization scheme: one codebook is dedicated to semantic projection.
169_1_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/#usage-example
|
.md
|
Here is a quick example of how to encode and decode an audio sample using this model:
```python
>>> from datasets import load_dataset, Audio
>>> from transformers import MimiModel, AutoFeatureExtractor
>>> librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> # load model and feature extractor
>>> model = MimiModel.from_pretrained("kyutai/mimi")
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("kyutai/mimi")
|
169_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/#usage-example
|
.md
|
>>> # load audio sample
>>> librispeech_dummy = librispeech_dummy.cast_column("audio", Audio(sampling_rate=feature_extractor.sampling_rate))
>>> audio_sample = librispeech_dummy[-1]["audio"]["array"]
>>> inputs = feature_extractor(raw_audio=audio_sample, sampling_rate=feature_extractor.sampling_rate, return_tensors="pt")
|
169_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/#usage-example
|
.md
|
>>> encoder_outputs = model.encode(inputs["input_values"], inputs["padding_mask"])
>>> audio_values = model.decode(encoder_outputs.audio_codes, inputs["padding_mask"])[0]
>>> # or the equivalent with a forward pass
>>> audio_values = model(inputs["input_values"], inputs["padding_mask"]).audio_values
```
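To get a feel for the token representation, the returned codes can be inspected directly; a quick sketch continuing the example above (the exact shape layout is our assumption, it is not spelled out in the original docs):

```python
>>> # the discrete codes: one integer index per codebook per frame
>>> # (we assume a (batch, num_quantizers, frames) layout here)
>>> print(encoder_outputs.audio_codes.shape)
>>> print(encoder_outputs.audio_codes.dtype)
```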
This model was contributed by [Yoach Lacombe (ylacombe)](https://huggingface.co/ylacombe).
The original code can be found [here](https://github.com/kyutai-labs/moshi).
|
169_2_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/#mimiconfig
|
.md
|
This is the configuration class to store the configuration of a [`MimiModel`]. It is used to instantiate a
Mimi model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the
[kyutai/mimi](https://huggingface.co/kyutai/mimi) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
169_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/#mimiconfig
|
.md
|
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
sampling_rate (`int`, *optional*, defaults to 24000):
The sampling rate at which the audio waveform should be digitized, expressed in hertz (Hz).
frame_rate (`float`, *optional*, defaults to 12.5):
Framerate of the model.
audio_channels (`int`, *optional*, defaults to 1):
|
169_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/#mimiconfig
|
.md
|
frame_rate (`float`, *optional*, defaults to 12.5):
Framerate of the model.
audio_channels (`int`, *optional*, defaults to 1):
Number of channels in the audio data. Either 1 for mono or 2 for stereo.
hidden_size (`int`, *optional*, defaults to 512):
Intermediate representation dimension.
num_filters (`int`, *optional*, defaults to 64):
Number of convolution kernels in the first `MimiConv1d` downsampling layer.
num_residual_layers (`int`, *optional*, defaults to 1):
Number of residual layers.
|
169_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/#mimiconfig
|
.md
|
num_residual_layers (`int`, *optional*, defaults to 1):
Number of residual layers.
upsampling_ratios (`Sequence[int]`, *optional*):
Kernel size and stride ratios. The encoder uses downsampling ratios instead of upsampling ratios, so it
applies the ratios given here in reverse order; these must match the decoder order.
If not specified, defaults to `[8, 6, 5, 4]`.
kernel_size (`int`, *optional*, defaults to 7):
Kernel size for the initial convolution.
|
169_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/#mimiconfig
|
.md
|
kernel_size (`int`, *optional*, defaults to 7):
Kernel size for the initial convolution.
last_kernel_size (`int`, *optional*, defaults to 3):
Kernel size for the last convolution layer.
residual_kernel_size (`int`, *optional*, defaults to 3):
Kernel size for the residual layers.
dilation_growth_rate (`int`, *optional*, defaults to 2):
How much to increase the dilation with each layer.
use_causal_conv (`bool`, *optional*, defaults to `True`):
Whether to use fully causal convolution.
|
169_3_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/#mimiconfig
|
.md
|
use_causal_conv (`bool`, *optional*, defaults to `True`):
Whether to use fully causal convolution.
pad_mode (`str`, *optional*, defaults to `"constant"`):
Padding mode for the convolutions.
compress (`int`, *optional*, defaults to 2):
Reduced dimensionality in residual branches.
trim_right_ratio (`float`, *optional*, defaults to 1.0):
Ratio for trimming at the right of the transposed convolution under the `use_causal_conv = True` setup. If
equal to 1.0, it means that all the trimming is done at the right.
|
169_3_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/#mimiconfig
|
.md
|
equal to 1.0, it means that all the trimming is done at the right.
codebook_size (`int`, *optional*, defaults to 2048):
Number of discrete codes in each codebook.
codebook_dim (`int`, *optional*, defaults to 256):
Dimension of the unquantized codebook vectors. If not defined, uses `hidden_size`.
num_quantizers (`int`, *optional*, defaults to 32):
Number of quantizer channels, or codebooks, in the quantizer.
use_conv_shortcut (`bool`, *optional*, defaults to `False`):
|
169_3_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/#mimiconfig
|
.md
|
Number of quantizer channels, or codebooks, in the quantizer.
use_conv_shortcut (`bool`, *optional*, defaults to `False`):
Whether to use a convolutional layer as the 'skip' connection in the `MimiResnetBlock` block. If False,
an identity function will be used, giving a generic residual connection.
vector_quantization_hidden_dimension (`int`, *optional*, defaults to 256):
Intermediate representation dimension in the residual vector quantization space.
|
169_3_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/#mimiconfig
|
.md
|
Intermediate representation dimension in the residual vector quantization space.
num_semantic_quantizers (`int`, *optional*, defaults to 1):
Number of semantic quantizer channels, or codebooks, in the semantic quantizer. Must be lower than `num_quantizers`.
upsample_groups (`int`, *optional*, defaults to 512):
If `frame_rate!=encodec_frame_rate`, indicates the number of groups used in the upsampling operation to go from one rate to another.
num_hidden_layers (`int`, *optional*, defaults to 8):
|
169_3_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/#mimiconfig
|
.md
|
num_hidden_layers (`int`, *optional*, defaults to 8):
Number of hidden layers in the Transformer models.
intermediate_size (`int`, *optional*, defaults to 2048):
Dimension of the MLP representations.
num_attention_heads (`int`, *optional*, defaults to 8):
Number of attention heads for each attention layer in the Transformer encoder.
num_key_value_heads (`int`, *optional*, defaults to 8):
This is the number of key_value heads that should be used to implement Grouped Query Attention. If
|
169_3_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/#mimiconfig
|
.md
|
This is the number of key_value heads that should be used to implement Grouped Query Attention. If
`num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if
`num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
by meanpooling all the original heads within that group. For more details check out [this
|
169_3_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/#mimiconfig
|
.md
|
by meanpooling all the original heads within that group. For more details check out [this
paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, defaults to `8`.
head_dim (`int`, *optional*, defaults to `hidden_size // num_attention_heads`):
The attention head dimension.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the decoder.
max_position_embeddings (`int`, *optional*, defaults to 8000):
|
169_3_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/#mimiconfig
|
.md
|
max_position_embeddings (`int`, *optional*, defaults to 8000):
The maximum sequence length that this model might ever be used with. Mimi's sliding window attention
allows sequences of up to 8000 tokens.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the LayerNorm normalization layers.
|
169_3_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/#mimiconfig
|
.md
|
norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the LayerNorm normalization layers.
use_cache (`bool`, *optional*, defaults to `False`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
rope_theta (`float`, *optional*, defaults to 10000.0):
The base period of the RoPE embeddings.
sliding_window (`int`, *optional*, defaults to 250):
|
169_3_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/#mimiconfig
|
.md
|
The base period of the RoPE embeddings.
sliding_window (`int`, *optional*, defaults to 250):
Sliding window attention window size. If not specified, will default to `250`.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
layer_scale_initial_scale (`float`, *optional*, defaults to 0.01):
Initial scale of the residual rescaling operation done in the Transformer models.
attention_bias (`bool`, *optional*, defaults to `False`):
|
169_3_14
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/#mimiconfig
|
.md
|
attention_bias (`bool`, *optional*, defaults to `False`):
Whether to use a bias in the query, key, value and output projection layers during self-attention.
Example:
```python
>>> from transformers import MimiModel, MimiConfig
|
169_3_15
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/#mimiconfig
|
.md
|
>>> # Initializing a "kyutai/mimi" style configuration
>>> configuration = MimiConfig()
>>> # Initializing a model (with random weights) from the "kyutai/mimi" style configuration
>>> model = MimiModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
|
169_3_16
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/#mimimodel
|
.md
|
The Mimi neural audio codec model.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
|
169_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mimi.md
|
https://huggingface.co/docs/transformers/en/model_doc/mimi/#mimimodel
|
.md
|
and behavior.
Parameters:
config ([`MimiConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: decode
- encode
- forward
|
169_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/
|
.md
|
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
170_0_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
|
170_0_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#overview
|
.md
|
The MMS model was proposed in [Scaling Speech Technology to 1,000+ Languages](https://arxiv.org/abs/2305.13516)
by Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, Alexei Baevski, Yossi Adi, Xiaohui Zhang, Wei-Ning Hsu, Alexis Conneau, Michael Auli
The abstract from the paper is the following:
|
170_1_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#overview
|
.md
|
The abstract from the paper is the following:
*Expanding the language coverage of speech technology has the potential to improve access to information for many more people.
However, current speech technology is restricted to about one hundred languages which is a small fraction of the over 7,000
languages spoken around the world.
The Massively Multilingual Speech (MMS) project increases the number of supported languages by 10-40x, depending on the task.
|
170_1_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#overview
|
.md
|
The Massively Multilingual Speech (MMS) project increases the number of supported languages by 10-40x, depending on the task.
The main ingredients are a new dataset based on readings of publicly available religious texts and effectively leveraging
self-supervised learning. We built pre-trained wav2vec 2.0 models covering 1,406 languages,
a single multilingual automatic speech recognition model for 1,107 languages, speech synthesis models
|
170_1_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#overview
|
.md
|
a single multilingual automatic speech recognition model for 1,107 languages, speech synthesis models
for the same number of languages, as well as a language identification model for 4,017 languages.
Experiments show that our multilingual speech recognition model more than halves the word error rate of
Whisper on 54 languages of the FLEURS benchmark while being trained on a small fraction of the labeled data.*
|
170_1_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#overview
|
.md
|
Whisper on 54 languages of the FLEURS benchmark while being trained on a small fraction of the labeled data.*
Here are the different models open-sourced in the MMS project. The models and code were originally released [here](https://github.com/facebookresearch/fairseq/tree/main/examples/mms). We have added them to the `transformers` framework, making them easier to use.
|
170_1_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#automatic-speech-recognition-asr
|
.md
|
The ASR model checkpoints can be found here: [mms-1b-fl102](https://huggingface.co/facebook/mms-1b-fl102), [mms-1b-l1107](https://huggingface.co/facebook/mms-1b-l1107), [mms-1b-all](https://huggingface.co/facebook/mms-1b-all). For best accuracy, use the `mms-1b-all` model.
Tips:
- All ASR models accept a float array corresponding to the raw waveform of the speech signal. The raw waveform should be pre-processed with [`Wav2Vec2FeatureExtractor`].
|
170_2_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#automatic-speech-recognition-asr
|
.md
|
- The models were trained using connectionist temporal classification (CTC) so the model output has to be decoded using
[`Wav2Vec2CTCTokenizer`].
- You can load different language adapter weights for different languages via [`~Wav2Vec2PreTrainedModel.load_adapter`]. Language adapters only consist of roughly 2 million parameters
and can therefore be efficiently loaded on the fly when needed.
|
170_2_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#loading
|
.md
|
By default MMS loads adapter weights for English. If you want to load adapter weights for another language,
make sure to specify `target_lang=<your-chosen-target-lang>` as well as `ignore_mismatched_sizes=True`.
The `ignore_mismatched_sizes=True` keyword has to be passed to allow the language model head to be resized according
to the vocabulary of the specified language.
Similarly, the processor should be loaded with the same target language:
```py
from transformers import Wav2Vec2ForCTC, AutoProcessor
|
170_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#loading
|
.md
|
model_id = "facebook/mms-1b-all"
target_lang = "fra"
|
170_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#loading
|
.md
|
processor = AutoProcessor.from_pretrained(model_id, target_lang=target_lang)
model = Wav2Vec2ForCTC.from_pretrained(model_id, target_lang=target_lang, ignore_mismatched_sizes=True)
```
<Tip>
You can safely ignore a warning such as:
```text
Some weights of Wav2Vec2ForCTC were not initialized from the model checkpoint at facebook/mms-1b-all and are newly initialized because the shapes did not match:
|
170_3_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#loading
|
.md
|
- lm_head.bias: found shape torch.Size([154]) in the checkpoint and torch.Size([314]) in the model instantiated
- lm_head.weight: found shape torch.Size([154, 1280]) in the checkpoint and torch.Size([314, 1280]) in the model instantiated
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
</Tip>
If you want to use the ASR pipeline, you can load your chosen target language as such:
```py
from transformers import pipeline
|
170_3_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#loading
|
.md
|
model_id = "facebook/mms-1b-all"
target_lang = "fra"
pipe = pipeline(model=model_id, model_kwargs={"target_lang": "fra", "ignore_mismatched_sizes": True})
```
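The pipeline can then be called directly on raw audio; a minimal sketch (here `audio_array` stands for any 16 kHz mono waveform loaded as a NumPy array, e.g. via `datasets` as in the inference section below):

```py
# assuming `audio_array` is a 16 kHz mono waveform (NumPy array)
result = pipe(audio_array)
print(result["text"])
```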
|
170_3_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#inference
|
.md
|
Next, let's look at how we can run MMS for inference and change adapter layers after having called [`~PreTrainedModel.from_pretrained`].
First, we load audio data in different languages using the [Datasets](https://github.com/huggingface/datasets) library.
```py
from datasets import load_dataset, Audio
|
170_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#inference
|
.md
|
# English
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
en_sample = next(iter(stream_data))["audio"]["array"]
|
170_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#inference
|
.md
|
# French
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "fr", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
fr_sample = next(iter(stream_data))["audio"]["array"]
```
Next, we load the model and processor
```py
from transformers import Wav2Vec2ForCTC, AutoProcessor
import torch
model_id = "facebook/mms-1b-all"
|
170_4_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#inference
|
.md
|
model_id = "facebook/mms-1b-all"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
```
Now we process the audio data, pass the processed audio data to the model and transcribe the model output,
just like we usually do for [`Wav2Vec2ForCTC`].
```py
inputs = processor(en_sample, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs).logits
|
170_4_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#inference
|
.md
|
ids = torch.argmax(outputs, dim=-1)[0]
transcription = processor.decode(ids)
# 'joe keton disapproved of films and buster also had reservations about the media'
```
We can now keep the same model in memory and simply switch out the language adapters by
calling the convenient [`~Wav2Vec2ForCTC.load_adapter`] function for the model and [`~Wav2Vec2CTCTokenizer.set_target_lang`] for the tokenizer.
We pass the target language as an input - `"fra"` for French.
```py
|
170_4_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#inference
|
.md
|
We pass the target language as an input - `"fra"` for French.
```py
processor.tokenizer.set_target_lang("fra")
model.load_adapter("fra")
|
170_4_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#inference
|
.md
|
inputs = processor(fr_sample, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs).logits
|
170_4_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#inference
|
.md
|
ids = torch.argmax(outputs, dim=-1)[0]
transcription = processor.decode(ids)
# "ce dernier est volé tout au long de l'histoire romaine"
```
In the same way, the language can be switched out for all other supported languages. Please have a look at:
```py
processor.tokenizer.vocab.keys()
```
to see all supported languages.
To further improve performance from ASR models, language model decoding can be used. See the documentation [here](https://huggingface.co/facebook/mms-1b-all) for further details.
|
170_4_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#speech-synthesis-tts
|
.md
|
MMS-TTS uses the same model architecture as VITS, which was added to 🤗 Transformers in v4.33. MMS trains a separate
model checkpoint for each of the 1100+ languages in the project. All available checkpoints can be found on the Hugging
Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts), and the inference
documentation under [VITS](https://huggingface.co/docs/transformers/main/en/model_doc/vits).
|
170_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#inference
|
.md
|
To use the MMS model, first update to the latest version of the Transformers library:
```bash
pip install --upgrade transformers accelerate
```
Since the flow-based model in VITS is non-deterministic, it is good practice to set a seed to ensure reproducibility of
the outputs.
- For languages with a Roman alphabet, such as English or French, the tokenizer can be used directly to
pre-process the text inputs. The following code example runs a forward pass using the MMS-TTS English checkpoint:
```python
|
170_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#inference
|
.md
|
pre-process the text inputs. The following code example runs a forward pass using the MMS-TTS English checkpoint:
```python
import torch
from transformers import VitsTokenizer, VitsModel, set_seed
|
170_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#inference
|
.md
|
tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")
model = VitsModel.from_pretrained("facebook/mms-tts-eng")
inputs = tokenizer(text="Hello - my dog is cute", return_tensors="pt")
set_seed(555) # make deterministic
with torch.no_grad():
outputs = model(**inputs)
waveform = outputs.waveform[0]
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy
|
170_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#inference
|
.md
|
waveform = outputs.waveform[0]
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy
scipy.io.wavfile.write("synthesized_speech.wav", rate=model.config.sampling_rate, data=waveform)
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
|
170_6_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#inference
|
.md
|
Audio(waveform, rate=model.config.sampling_rate)
```
For certain languages with non-Roman alphabets, such as Arabic, Mandarin or Hindi, the [`uroman`](https://github.com/isi-nlp/uroman)
perl package is required to pre-process the text inputs to the Roman alphabet.
You can check whether you require the `uroman` package for your language by inspecting the `is_uroman` attribute of
the pre-trained `tokenizer`:
```python
from transformers import VitsTokenizer
|
170_6_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#inference
|
.md
|
tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")
print(tokenizer.is_uroman)
```
If required, you should apply the uroman package to your text inputs **prior** to passing them to the `VitsTokenizer`,
since currently the tokenizer does not support performing the pre-processing itself.
To do this, first clone the uroman repository to your local machine and set the bash variable `UROMAN` to the local path:
```bash
git clone https://github.com/isi-nlp/uroman.git
cd uroman
|
170_6_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#inference
|
.md
|
```bash
git clone https://github.com/isi-nlp/uroman.git
cd uroman
export UROMAN=$(pwd)
```
You can then pre-process the text input using the following code snippet. You can either rely on using the bash variable
`UROMAN` to point to the uroman repository, or you can pass the uroman directory as an argument to the `uromanize` function:
```python
import torch
from transformers import VitsTokenizer, VitsModel, set_seed
import os
import subprocess
|
170_6_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#inference
|
.md
|
tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-kor")
model = VitsModel.from_pretrained("facebook/mms-tts-kor")
def uromanize(input_string, uroman_path):
"""Convert non-Roman strings to Roman using the `uroman` perl package."""
script_path = os.path.join(uroman_path, "bin", "uroman.pl")
command = ["perl", script_path]
|
170_6_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#inference
|
.md
|
command = ["perl", script_path]
process = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# Execute the perl command
stdout, stderr = process.communicate(input=input_string.encode())
if process.returncode != 0:
raise ValueError(f"Error {process.returncode}: {stderr.decode()}")
# Return the output as a string and skip the new-line character at the end
return stdout.decode()[:-1]
|
170_6_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#inference
|
.md
|
# Return the output as a string and skip the new-line character at the end
return stdout.decode()[:-1]
text = "이봐 무슨 일이야"
uromanized_text = uromanize(text, uroman_path=os.environ["UROMAN"])
inputs = tokenizer(text=uromanized_text, return_tensors="pt")
set_seed(555) # make deterministic
with torch.no_grad():
outputs = model(inputs["input_ids"])
|
170_6_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#inference
|
.md
|
waveform = outputs.waveform[0]
```
**Tips:**
|
170_6_10
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#inference
|
.md
|
```
**Tips:**
* The MMS-TTS checkpoints are trained on lower-cased, un-punctuated text. By default, the `VitsTokenizer` *normalizes* the inputs by removing any casing and punctuation, to avoid passing out-of-vocabulary characters to the model. Hence, the model is agnostic to casing and punctuation, so these should be avoided in the text prompt. You can disable normalisation by setting `normalize=False` in the call to the tokenizer, but this will lead to unexpected behaviour and is discouraged.
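To see the effect of this normalization, you can compare the default call with `normalize=False`; a quick sketch (the example sentence is arbitrary):

```python
from transformers import VitsTokenizer

tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")

# default: the input is lower-cased and punctuation is stripped before tokenization
default_inputs = tokenizer(text="Hello, World!")
# normalize=False skips this clean-up, which the checkpoints were not trained on (discouraged)
raw_inputs = tokenizer(text="Hello, World!", normalize=False)

print(default_inputs["input_ids"])
print(raw_inputs["input_ids"])
```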
|
170_6_11
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#inference
|
.md
|
* The speaking rate can be varied by setting the attribute `model.speaking_rate` to a chosen value. Likewise, the randomness of the noise is controlled by `model.noise_scale`:
```python
import torch
from transformers import VitsTokenizer, VitsModel, set_seed
|
170_6_12
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#inference
|
.md
|
tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")
model = VitsModel.from_pretrained("facebook/mms-tts-eng")
inputs = tokenizer(text="Hello - my dog is cute", return_tensors="pt")
# make deterministic
set_seed(555)
# make speech faster and more noisy
model.speaking_rate = 1.5
model.noise_scale = 0.8
with torch.no_grad():
outputs = model(**inputs)
```
|
170_6_13
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#language-identification-lid
|
.md
|
Different LID models are available based on the number of languages they can recognize - [126](https://huggingface.co/facebook/mms-lid-126), [256](https://huggingface.co/facebook/mms-lid-256), [512](https://huggingface.co/facebook/mms-lid-512), [1024](https://huggingface.co/facebook/mms-lid-1024), [2048](https://huggingface.co/facebook/mms-lid-2048), [4017](https://huggingface.co/facebook/mms-lid-4017).
|
170_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#inference
|
.md
|
First, we install transformers and some other libraries
```bash
pip install torch accelerate datasets[audio]
pip install --upgrade transformers
```
Next, we load a couple of audio samples via `datasets`. Make sure that the audio data is sampled to 16,000 Hz.
```py
from datasets import load_dataset, Audio
|
170_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#inference
|
.md
|
# English
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
en_sample = next(iter(stream_data))["audio"]["array"]
|
170_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mms.md
|
https://huggingface.co/docs/transformers/en/model_doc/mms/#inference
|
.md
|
# Arabic
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "ar", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
ar_sample = next(iter(stream_data))["audio"]["array"]
```
Next, we load the model and processor
```py
from transformers import Wav2Vec2ForSequenceClassification, AutoFeatureExtractor
import torch
model_id = "facebook/mms-lid-126"
|
170_8_2
|