source (stringclasses, 470 values) | url (stringlengths, 49-167) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1-512) | chunk_id (stringlengths, 5-9)
---|---|---|---|---
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md | https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#switchtransformersconfig | .md | Number of dense hidden layers in the Transformer encoder layer.
num_sparse_encoder_layers (`int`, *optional*, defaults to 3):
Number of sparse (MoE) dense hidden layers in the Transformer encoder layer.
num_decoder_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer decoder. Will use the same value as `num_layers` if not set.
num_sparse_decoder_layers (`int`, *optional*, defaults to 3):
Number of sparse (MoE) dense hidden layers in the Transformer decoder layer. | 112_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md | https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#switchtransformersconfig | .md | Number of sparse (MoE) dense hidden layers in the Transformer decoder layer.
num_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
num_experts (`int`, *optional*, defaults to 8):
Number of experts for each SwitchTransformer layer.
router_bias (`bool`, *optional*, defaults to `False`):
Whether to add a bias to the router.
router_jitter_noise (`float`, *optional*, defaults to 0.01):
Amount of noise to add to the router. | 112_4_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md | https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#switchtransformersconfig | .md | router_jitter_noise (`float`, *optional*, defaults to 0.01):
Amount of noise to add to the router.
router_dtype (`str`, *optional*, defaults to `"float32"`):
The `dtype` used for the routers. It is preferable to keep the `dtype` to `"float32"` as specified in the
*selective precision* discussion in [the paper](https://arxiv.org/abs/2101.03961).
router_ignore_padding_tokens (`bool`, *optional*, defaults to `False`):
Whether to ignore padding tokens when routing. | 112_4_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md | https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#switchtransformersconfig | .md | router_ignore_padding_tokens (`bool`, *optional*, defaults to `False`):
Whether to ignore padding tokens when routing.
relative_attention_num_buckets (`int`, *optional*, defaults to 32):
The number of buckets to use for each attention layer.
relative_attention_max_distance (`int`, *optional*, defaults to 128):
The maximum distance of the longer sequences for the bucket separation.
dropout_rate (`float`, *optional*, defaults to 0.1):
The ratio for all dropout layers. | 112_4_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md | https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#switchtransformersconfig | .md | dropout_rate (`float`, *optional*, defaults to 0.1):
The ratio for all dropout layers.
layer_norm_eps (`float`, *optional*, defaults to 1e-6):
The epsilon used by the layer normalization layers.
router_z_loss_coef (`float`, *optional*, defaults to 0.001):
The z loss factor for the total loss.
router_aux_loss_coef (`float`, *optional*, defaults to 0.001):
The aux loss factor for the total loss.
initializer_factor (`float`, *optional*, defaults to 1.0): | 112_4_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md | https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#switchtransformersconfig | .md | The aux loss factor for the total loss.
initializer_factor (`float`, *optional*, defaults to 1.0):
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
dense_act_fn (`string`, *optional*, defaults to `"relu"`):
Type of feed forward layer to be used. Should be one of `"relu"` or `"gated-gelu"`. SwitchTransformers v1.1
uses the `"gated-gelu"` feed forward projection. Original SwitchTransformers uses `"relu"`. | 112_4_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md | https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#switchtransformersconfig | .md | uses the `"gated-gelu"` feed forward projection. Original SwitchTransformers uses `"relu"`.
add_router_probs (`bool`, *optional*, defaults to `False`):
Whether to output router probabilities to compute router auxiliary loss.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). | 112_4_10 |
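The configuration arguments above can be exercised directly. The following is a minimal sketch (the values are illustrative, not recommended hyperparameters) showing how a `SwitchTransformersConfig` with a few of the documented arguments builds a randomly initialized model:
```python
from transformers import SwitchTransformersConfig, SwitchTransformersForConditionalGeneration

# Build a small configuration; argument names follow the docstring above,
# values are purely illustrative.
config = SwitchTransformersConfig(
    num_experts=8,
    num_sparse_encoder_layers=3,
    num_sparse_decoder_layers=3,
    router_jitter_noise=0.01,
    router_dtype="float32",
)

# Initializing from a configuration gives random weights, not pretrained ones.
model = SwitchTransformersForConditionalGeneration(config)
print(model.config.num_experts)
```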
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md | https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#switchtransformerstop1router | .md | Router in which each token chooses its top-1 expert assignment.
This router uses the same mechanism as in Switch Transformer (https://arxiv.org/abs/2101.03961) and V-MoE
(https://arxiv.org/abs/2106.05974): tokens choose their top experts. Items are sorted by router_probs and then
routed to their choice of expert until the expert's expert_capacity is reached. **There is no guarantee that each
token is processed by an expert**, or that each expert receives at least one token.
Methods: _compute_router_probabilities | 112_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md | https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#switchtransformerstop1router | .md | token is processed by an expert**, or that each expert receives at least one token.
Methods: _compute_router_probabilities
- forward | 112_5_1 |
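To make the routing mechanism concrete, here is a small, self-contained sketch of top-1 routing with an expert capacity. It is not the exact Transformers implementation; tensor names and shapes are simplified assumptions for illustration:
```python
import torch

torch.manual_seed(0)
num_tokens, hidden_size, num_experts, expert_capacity = 8, 16, 4, 3

hidden_states = torch.randn(num_tokens, hidden_size)
router_weights = torch.randn(hidden_size, num_experts)

# Router probabilities are computed in float32 ("selective precision").
router_logits = hidden_states.float() @ router_weights.float()
router_probs = torch.softmax(router_logits, dim=-1)

# Each token picks its top-1 expert.
expert_index = router_probs.argmax(dim=-1)

# Enforce the capacity: count how many tokens were already routed to each expert
# and drop (leave unprocessed) any token that exceeds expert_capacity.
one_hot = torch.nn.functional.one_hot(expert_index, num_experts)
position_in_expert = one_hot.cumsum(dim=0) * one_hot
kept = (position_in_expert <= expert_capacity).all(dim=-1)
print(expert_index.tolist(), kept.tolist())
```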
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md | https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#switchtransformerssparsemlp | .md | Implementation of the Switch Transformers Sparse MLP module.
Methods: forward | 112_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md | https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#switchtransformersmodel | .md | The bare SWITCH_TRANSFORMERS Model transformer outputting raw hidden-states without any specific head on top.
The SWITCH_TRANSFORMERS model was proposed in [Switch Transformers: Scaling to Trillion Parameter Models with
Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) by [William
Fedus](https://arxiv.org/search/cs?searchtype=author&query=Fedus%2C+W), [Barret
Zoph](https://arxiv.org/search/cs?searchtype=author&query=Zoph%2C+B), and [Noam | 112_7_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md | https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#switchtransformersmodel | .md | Zoph](https://arxiv.org/search/cs?searchtype=author&query=Zoph%2C+B), and [Noam
Shazeer](https://arxiv.org/search/cs?searchtype=author&query=Shazeer%2C+N). It's an encoder-decoder T5-like model
with sparse feed-forward layers based on a Mixture of Experts (MoE) architecture.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.) | 112_7_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md | https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#switchtransformersmodel | .md | library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`SwitchTransformersConfig`]): Model configuration class with all the parameters of the model. | 112_7_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md | https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#switchtransformersmodel | .md | Parameters:
config ([`SwitchTransformersConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 112_7_3 |
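A hedged usage sketch for the bare model is shown below; it assumes `google/switch-base-8` is an available Hub checkpoint and feeds the decoder start token explicitly, since the bare encoder-decoder model does not generate on its own:
```python
import torch
from transformers import AutoTokenizer, SwitchTransformersModel

tokenizer = AutoTokenizer.from_pretrained("google/switch-base-8")
model = SwitchTransformersModel.from_pretrained("google/switch-base-8")

inputs = tokenizer("Studies have shown that owning a dog is good for you.", return_tensors="pt")
# The bare encoder-decoder model needs decoder inputs; here we feed only the start token.
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])

with torch.no_grad():
    outputs = model(input_ids=inputs.input_ids, decoder_input_ids=decoder_input_ids)
print(outputs.last_hidden_state.shape)
```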
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md | https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#switchtransformersforconditionalgeneration | .md | SWITCH_TRANSFORMERS Model with a `language modeling` head on top.
The SWITCH_TRANSFORMERS model was proposed in [Switch Transformers: Scaling to Trillion Parameter Models with
Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) by [William
Fedus](https://arxiv.org/search/cs?searchtype=author&query=Fedus%2C+W), [Barret
Zoph](https://arxiv.org/search/cs?searchtype=author&query=Zoph%2C+B), and [Noam | 112_8_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md | https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#switchtransformersforconditionalgeneration | .md | Zoph](https://arxiv.org/search/cs?searchtype=author&query=Zoph%2C+B), and [Noam
Shazeer](https://arxiv.org/search/cs?searchtype=author&query=Shazeer%2C+N). It's an encoder-decoder T5-like model
with sparse feed-forward layers based on a Mixture of Experts (MoE) architecture.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.) | 112_8_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md | https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#switchtransformersforconditionalgeneration | .md | library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`SwitchTransformersConfig`]): Model configuration class with all the parameters of the model. | 112_8_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md | https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#switchtransformersforconditionalgeneration | .md | Parameters:
config ([`SwitchTransformersConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 112_8_3 |
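Below is a hedged generation sketch. It assumes the `google/switch-base-8` checkpoint and uses a T5-style sentinel prompt, since Switch Transformers checkpoints are trained with span corruption:
```python
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/switch-base-8")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-base-8")

# T5-style text-to-text prompt with a sentinel token.
input_ids = tokenizer("A <extra_id_0> walks into a bar.", return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```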
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md | https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#switchtransformersencodermodel | .md | The bare SWITCH_TRANSFORMERS Model transformer outputting encoder's raw hidden-states without any specific head on top.
The SWITCH_TRANSFORMERS model was proposed in [Switch Transformers: Scaling to Trillion Parameter Models with
Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) by [William
Fedus](https://arxiv.org/search/cs?searchtype=author&query=Fedus%2C+W), [Barret
Zoph](https://arxiv.org/search/cs?searchtype=author&query=Zoph%2C+B), and [Noam | 112_9_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md | https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#switchtransformersencodermodel | .md | Zoph](https://arxiv.org/search/cs?searchtype=author&query=Zoph%2C+B), and [Noam
Shazeer](https://arxiv.org/search/cs?searchtype=author&query=Shazeer%2C+N). It's an encoder-decoder T5-like model
with sparse feed-forward layers based on a Mixture of Experts (MoE) architecture.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.) | 112_9_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md | https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#switchtransformersencodermodel | .md | library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`SwitchTransformersConfig`]): Model configuration class with all the parameters of the model. | 112_9_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/switch_transformers.md | https://huggingface.co/docs/transformers/en/model_doc/switch_transformers/#switchtransformersencodermodel | .md | Parameters:
config ([`SwitchTransformersConfig`]): Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 112_9_3 |
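A short, hedged sketch for the encoder-only variant, again assuming the `google/switch-base-8` checkpoint:
```python
from transformers import AutoTokenizer, SwitchTransformersEncoderModel

# Use only the encoder stack to obtain token-level hidden states.
tokenizer = AutoTokenizer.from_pretrained("google/switch-base-8")
model = SwitchTransformersEncoderModel.from_pretrained("google/switch-base-8")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, d_model)
```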
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/ | .md | <!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 113_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
--> | 113_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#overview | .md | The VITS model was proposed in [Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech](https://arxiv.org/abs/2106.06103) by Jaehyeon Kim, Jungil Kong, Juhee Son.
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational | 113_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#overview | .md | speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers, | 113_1_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#overview | .md | text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text. | 113_1_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#overview | .md | synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the | 113_1_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#overview | .md | inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
The abstract from the paper is the following: | 113_1_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#overview | .md | *Several recent end-to-end text-to-speech (TTS) models enabling single-stage training and parallel sampling have been proposed, but their sample quality does not match that of two-stage TTS systems. In this work, we present a parallel end-to-end TTS method that generates more natural sounding audio than current two-stage models. Our method adopts variational inference augmented with normalizing flows and an adversarial training process, which improves the expressive power of generative modeling. We also | 113_1_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#overview | .md | with normalizing flows and an adversarial training process, which improves the expressive power of generative modeling. We also propose a stochastic duration predictor to synthesize speech with diverse rhythms from input text. With the uncertainty modeling over latent variables and the stochastic duration predictor, our method expresses the natural one-to-many relationship in which a text input can be spoken in multiple ways with different pitches and rhythms. A subjective human evaluation (mean opinion | 113_1_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#overview | .md | a text input can be spoken in multiple ways with different pitches and rhythms. A subjective human evaluation (mean opinion score, or MOS) on the LJ Speech, a single speaker dataset, shows that our method outperforms the best publicly available TTS systems and achieves a MOS comparable to ground truth.* | 113_1_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#overview | .md | This model can also be used with TTS checkpoints from [Massively Multilingual Speech (MMS)](https://arxiv.org/abs/2305.13516)
as these checkpoints use the same architecture and a slightly modified tokenizer.
This model was contributed by [Matthijs](https://huggingface.co/Matthijs) and [sanchit-gandhi](https://huggingface.co/sanchit-gandhi). The original code can be found [here](https://github.com/jaywalnut310/vits). | 113_1_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#usage-examples | .md | Both the VITS and MMS-TTS checkpoints can be used with the same API. Since the flow-based model is non-deterministic, it
is good practice to set a seed to ensure reproducibility of the outputs. For languages with a Roman alphabet,
such as English or French, the tokenizer can be used directly to pre-process the text inputs. The following code example
runs a forward pass using the MMS-TTS English checkpoint:
```python
import torch
from transformers import VitsTokenizer, VitsModel, set_seed | 113_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#usage-examples | .md | tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")
model = VitsModel.from_pretrained("facebook/mms-tts-eng")
inputs = tokenizer(text="Hello - my dog is cute", return_tensors="pt")
set_seed(555) # make deterministic
with torch.no_grad():
outputs = model(**inputs)
waveform = outputs.waveform[0]
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy | 113_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#usage-examples | .md | waveform = outputs.waveform[0]
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=waveform)
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio | 113_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#usage-examples | .md | Audio(waveform, rate=model.config.sampling_rate)
```
For certain languages with a non-Roman alphabet, such as Arabic, Mandarin or Hindi, the [`uroman`](https://github.com/isi-nlp/uroman)
perl package is required to pre-process the text inputs to the Roman alphabet.
You can check whether you require the `uroman` package for your language by inspecting the `is_uroman` attribute of
the pre-trained `tokenizer`:
```python
from transformers import VitsTokenizer | 113_2_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#usage-examples | .md | tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")
print(tokenizer.is_uroman)
```
If the `is_uroman` attribute is `True`, the tokenizer will automatically apply the `uroman` package to your text inputs, but you need to install `uroman` first if it is not already installed:
```
pip install --upgrade uroman
```
Note: Python >= `3.10` is required to use `uroman` as a Python package.
You can use the tokenizer as usual without any additional preprocessing steps:
```python
import torch | 113_2_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#usage-examples | .md | You can use the tokenizer as usual without any additional preprocessing steps:
```python
import torch
from transformers import VitsTokenizer, VitsModel, set_seed
import os
import subprocess | 113_2_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#usage-examples | .md | tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-kor")
model = VitsModel.from_pretrained("facebook/mms-tts-kor")
text = "이봐 무슨 일이야"
inputs = tokenizer(text=text, return_tensors="pt")
set_seed(555) # make deterministic
with torch.no_grad():
outputs = model(inputs["input_ids"]) | 113_2_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#usage-examples | .md | waveform = outputs.waveform[0]
```
If you don't want to upgrade to Python >= `3.10`, you can instead use the `uroman` perl package to pre-process the text inputs to the Roman alphabet.
To do this, first clone the uroman repository to your local machine and set the bash variable `UROMAN` to the local path:
```bash
git clone https://github.com/isi-nlp/uroman.git
cd uroman
export UROMAN=$(pwd)
``` | 113_2_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#usage-examples | .md | ```bash
git clone https://github.com/isi-nlp/uroman.git
cd uroman
export UROMAN=$(pwd)
```
You can then pre-process the text input using the following code snippet. You can either rely on using the bash variable
`UROMAN` to point to the uroman repository, or you can pass the uroman directory as an argument to the `uromanize` function:
```python
import torch
from transformers import VitsTokenizer, VitsModel, set_seed
import os
import subprocess | 113_2_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#usage-examples | .md | tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-kor")
model = VitsModel.from_pretrained("facebook/mms-tts-kor")
def uromanize(input_string, uroman_path):
"""Convert non-Roman strings to Roman using the `uroman` perl package."""
script_path = os.path.join(uroman_path, "bin", "uroman.pl")
command = ["perl", script_path] | 113_2_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#usage-examples | .md | command = ["perl", script_path]
process = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# Execute the perl command
stdout, stderr = process.communicate(input=input_string.encode())
if process.returncode != 0:
raise ValueError(f"Error {process.returncode}: {stderr.decode()}")
# Return the output as a string and skip the new-line character at the end
return stdout.decode()[:-1] | 113_2_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#usage-examples | .md | # Return the output as a string and skip the new-line character at the end
return stdout.decode()[:-1]
text = "이봐 무슨 일이야"
uromanized_text = uromanize(text, uroman_path=os.environ["UROMAN"])
inputs = tokenizer(text=uromanized_text, return_tensors="pt")
set_seed(555) # make deterministic
with torch.no_grad():
outputs = model(inputs["input_ids"])
waveform = outputs.waveform[0]
``` | 113_2_11 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#vitsconfig | .md | This is the configuration class to store the configuration of a [`VitsModel`]. It is used to instantiate a VITS
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the VITS
[facebook/mms-tts-eng](https://huggingface.co/facebook/mms-tts-eng) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the | 113_3_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#vitsconfig | .md | Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 38):
Vocabulary size of the VITS model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed to the forward method of [`VitsModel`].
hidden_size (`int`, *optional*, defaults to 192):
Dimensionality of the text encoder layers. | 113_3_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#vitsconfig | .md | hidden_size (`int`, *optional*, defaults to 192):
Dimensionality of the text encoder layers.
num_hidden_layers (`int`, *optional*, defaults to 6):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 2):
Number of attention heads for each attention layer in the Transformer encoder.
window_size (`int`, *optional*, defaults to 4):
Window size for the relative positional embeddings in the attention layers of the Transformer encoder. | 113_3_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#vitsconfig | .md | Window size for the relative positional embeddings in the attention layers of the Transformer encoder.
use_bias (`bool`, *optional*, defaults to `True`):
Whether to use bias in the key, query, value projection layers in the Transformer encoder.
ffn_dim (`int`, *optional*, defaults to 768):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
layerdrop (`float`, *optional*, defaults to 0.1): | 113_3_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#vitsconfig | .md | layerdrop (`float`, *optional*, defaults to 0.1):
The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
ffn_kernel_size (`int`, *optional*, defaults to 3):
Kernel size of the 1D convolution layers used by the feed-forward network in the Transformer encoder.
flow_size (`int`, *optional*, defaults to 192):
Dimensionality of the flow layers.
spectrogram_bins (`int`, *optional*, defaults to 513): | 113_3_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#vitsconfig | .md | Dimensionality of the flow layers.
spectrogram_bins (`int`, *optional*, defaults to 513):
Number of frequency bins in the target spectrogram.
hidden_act (`str` or `function`, *optional*, defaults to `"relu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` are supported.
hidden_dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings and encoder. | 113_3_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#vitsconfig | .md | The dropout probability for all fully connected layers in the embeddings and encoder.
attention_dropout (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
activation_dropout (`float`, *optional*, defaults to 0.1):
The dropout ratio for activations inside the fully connected layer.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices. | 113_3_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#vitsconfig | .md | The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-05):
The epsilon used by the layer normalization layers.
use_stochastic_duration_prediction (`bool`, *optional*, defaults to `True`):
Whether to use the stochastic duration prediction module or the regular duration predictor.
num_speakers (`int`, *optional*, defaults to 1):
Number of speakers if this is a multi-speaker model. | 113_3_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#vitsconfig | .md | num_speakers (`int`, *optional*, defaults to 1):
Number of speakers if this is a multi-speaker model.
speaker_embedding_size (`int`, *optional*, defaults to 0):
Number of channels used by the speaker embeddings. Is zero for single-speaker models.
upsample_initial_channel (`int`, *optional*, defaults to 512):
The number of input channels into the HiFi-GAN upsampling network.
upsample_rates (`Tuple[int]` or `List[int]`, *optional*, defaults to `[8, 8, 2, 2]`): | 113_3_8 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#vitsconfig | .md | upsample_rates (`Tuple[int]` or `List[int]`, *optional*, defaults to `[8, 8, 2, 2]`):
A tuple of integers defining the stride of each 1D convolutional layer in the HiFi-GAN upsampling network.
The length of `upsample_rates` defines the number of convolutional layers and has to match the length of
`upsample_kernel_sizes`.
upsample_kernel_sizes (`Tuple[int]` or `List[int]`, *optional*, defaults to `[16, 16, 4, 4]`): | 113_3_9 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#vitsconfig | .md | `upsample_kernel_sizes`.
upsample_kernel_sizes (`Tuple[int]` or `List[int]`, *optional*, defaults to `[16, 16, 4, 4]`):
A tuple of integers defining the kernel size of each 1D convolutional layer in the HiFi-GAN upsampling
network. The length of `upsample_kernel_sizes` defines the number of convolutional layers and has to match
the length of `upsample_rates`.
resblock_kernel_sizes (`Tuple[int]` or `List[int]`, *optional*, defaults to `[3, 7, 11]`): | 113_3_10 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#vitsconfig | .md | the length of `upsample_rates`.
resblock_kernel_sizes (`Tuple[int]` or `List[int]`, *optional*, defaults to `[3, 7, 11]`):
A tuple of integers defining the kernel sizes of the 1D convolutional layers in the HiFi-GAN
multi-receptive field fusion (MRF) module.
resblock_dilation_sizes (`Tuple[Tuple[int]]` or `List[List[int]]`, *optional*, defaults to `[[1, 3, 5], [1, 3, 5], [1, 3, 5]]`):
A nested tuple of integers defining the dilation rates of the dilated 1D convolutional layers in the | 113_3_11 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#vitsconfig | .md | A nested tuple of integers defining the dilation rates of the dilated 1D convolutional layers in the
HiFi-GAN multi-receptive field fusion (MRF) module.
leaky_relu_slope (`float`, *optional*, defaults to 0.1):
The angle of the negative slope used by the leaky ReLU activation.
depth_separable_channels (`int`, *optional*, defaults to 2):
Number of channels to use in each depth-separable block.
depth_separable_num_layers (`int`, *optional*, defaults to 3): | 113_3_12 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#vitsconfig | .md | Number of channels to use in each depth-separable block.
depth_separable_num_layers (`int`, *optional*, defaults to 3):
Number of convolutional layers to use in each depth-separable block.
duration_predictor_flow_bins (`int`, *optional*, defaults to 10):
Number of channels to map using the unconstrained rational spline in the duration predictor model.
duration_predictor_tail_bound (`float`, *optional*, defaults to 5.0): | 113_3_13 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#vitsconfig | .md | duration_predictor_tail_bound (`float`, *optional*, defaults to 5.0):
Value of the tail bin boundary when computing the unconstrained rational spline in the duration predictor
model.
duration_predictor_kernel_size (`int`, *optional*, defaults to 3):
Kernel size of the 1D convolution layers used in the duration predictor model.
duration_predictor_dropout (`float`, *optional*, defaults to 0.5):
The dropout ratio for the duration predictor model. | 113_3_14 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#vitsconfig | .md | duration_predictor_dropout (`float`, *optional*, defaults to 0.5):
The dropout ratio for the duration predictor model.
duration_predictor_num_flows (`int`, *optional*, defaults to 4):
Number of flow stages used by the duration predictor model.
duration_predictor_filter_channels (`int`, *optional*, defaults to 256):
Number of channels for the convolution layers used in the duration predictor model.
prior_encoder_num_flows (`int`, *optional*, defaults to 4): | 113_3_15 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#vitsconfig | .md | prior_encoder_num_flows (`int`, *optional*, defaults to 4):
Number of flow stages used by the prior encoder flow model.
prior_encoder_num_wavenet_layers (`int`, *optional*, defaults to 4):
Number of WaveNet layers used by the prior encoder flow model.
posterior_encoder_num_wavenet_layers (`int`, *optional*, defaults to 16):
Number of WaveNet layers used by the posterior encoder model.
wavenet_kernel_size (`int`, *optional*, defaults to 5):
Kernel size of the 1D convolution layers used in the WaveNet model. | 113_3_16 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#vitsconfig | .md | wavenet_kernel_size (`int`, *optional*, defaults to 5):
Kernel size of the 1D convolution layers used in the WaveNet model.
wavenet_dilation_rate (`int`, *optional*, defaults to 1):
Dilation rates of the dilated 1D convolutional layers used in the WaveNet model.
wavenet_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the WaveNet layers.
speaking_rate (`float`, *optional*, defaults to 1.0):
Speaking rate. Larger values give faster synthesised speech. | 113_3_17 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#vitsconfig | .md | speaking_rate (`float`, *optional*, defaults to 1.0):
Speaking rate. Larger values give faster synthesised speech.
noise_scale (`float`, *optional*, defaults to 0.667):
How random the speech prediction is. Larger values create more variation in the predicted speech.
noise_scale_duration (`float`, *optional*, defaults to 0.8):
How random the duration prediction is. Larger values create more variation in the predicted durations.
sampling_rate (`int`, *optional*, defaults to 16000): | 113_3_18 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#vitsconfig | .md | sampling_rate (`int`, *optional*, defaults to 16000):
The sampling rate at which the output audio waveform is digitalized, expressed in hertz (Hz).
Example:
```python
>>> from transformers import VitsModel, VitsConfig | 113_3_19 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#vitsconfig | .md | >>> # Initializing a "facebook/mms-tts-eng" style configuration
>>> configuration = VitsConfig()
>>> # Initializing a model (with random weights) from the "facebook/mms-tts-eng" style configuration
>>> model = VitsModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
``` | 113_3_20 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#vitstokenizer | .md | Construct a VITS tokenizer. Also supports MMS-TTS.
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
language (`str`, *optional*):
Language identifier.
add_blank (`bool`, *optional*, defaults to `True`):
Whether to insert token id 0 in between the other tokens.
normalize (`bool`, *optional*, defaults to `True`): | 113_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#vitstokenizer | .md | Whether to insert token id 0 in between the other tokens.
normalize (`bool`, *optional*, defaults to `True`):
Whether to normalize the input text by removing all casing and punctuation.
phonemize (`bool`, *optional*, defaults to `True`):
Whether to convert the input text into phonemes.
is_uroman (`bool`, *optional*, defaults to `False`):
Whether the `uroman` Romanizer needs to be applied to the input text prior to tokenizing.
Methods: __call__
- save_vocabulary | 113_4_1 |
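A brief, hedged sketch of the tokenizer's `__call__`; with `add_blank=True` (the default) token id 0 is interleaved between the character tokens, so the encoded sequence is roughly twice the character length:
```python
from transformers import VitsTokenizer

tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")
encoding = tokenizer(text="Hello world", return_tensors="pt")
print(encoding.input_ids.shape)
print(tokenizer.is_uroman)  # False for the English checkpoint
```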
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#vitsmodel | .md | The complete VITS model, for text-to-speech synthesis.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | 113_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#vitsmodel | .md | etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`VitsConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the | 113_5_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vits.md | https://huggingface.co/docs/transformers/en/model_doc/vits/#vitsmodel | .md | load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward | 113_5_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md | https://huggingface.co/docs/transformers/en/model_doc/rag/ | .md | <!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 114_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md | https://huggingface.co/docs/transformers/en/model_doc/rag/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | 114_0_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md | https://huggingface.co/docs/transformers/en/model_doc/rag/#rag | .md | <div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=rag">
<img alt="Models" src="https://img.shields.io/badge/All_model_pages-rag-blueviolet">
</a>
</div> | 114_1_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md | https://huggingface.co/docs/transformers/en/model_doc/rag/#overview | .md | Retrieval-augmented generation ("RAG") models combine the powers of pretrained dense retrieval (DPR) and
sequence-to-sequence models. RAG models retrieve documents, pass them to a seq2seq model, then marginalize to generate
outputs. The retriever and seq2seq modules are initialized from pretrained models, and fine-tuned jointly, allowing
both retrieval and generation to adapt to downstream tasks. | 114_2_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md | https://huggingface.co/docs/transformers/en/model_doc/rag/#overview | .md | both retrieval and generation to adapt to downstream tasks.
It is based on the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir
Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela.
The abstract from the paper is the following: | 114_2_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md | https://huggingface.co/docs/transformers/en/model_doc/rag/#overview | .md | The abstract from the paper is the following:
*Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve
state-of-the-art results when fine-tuned on downstream NLP tasks. However, their ability to access and precisely
manipulate knowledge is still limited, and hence on knowledge-intensive tasks, their performance lags behind
task-specific architectures. Additionally, providing provenance for their decisions and updating their world knowledge | 114_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md | https://huggingface.co/docs/transformers/en/model_doc/rag/#overview | .md | task-specific architectures. Additionally, providing provenance for their decisions and updating their world knowledge
remain open research problems. Pre-trained models with a differentiable access mechanism to explicit nonparametric
memory can overcome this issue, but have so far been only investigated for extractive downstream tasks. We explore a
general-purpose fine-tuning recipe for retrieval-augmented generation (RAG) — models which combine pre-trained | 114_2_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md | https://huggingface.co/docs/transformers/en/model_doc/rag/#overview | .md | general-purpose fine-tuning recipe for retrieval-augmented generation (RAG) — models which combine pre-trained
parametric and non-parametric memory for language generation. We introduce RAG models where the parametric memory is a
pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a
pre-trained neural retriever. We compare two RAG formulations, one which conditions on the same retrieved passages | 114_2_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md | https://huggingface.co/docs/transformers/en/model_doc/rag/#overview | .md | pre-trained neural retriever. We compare two RAG formulations, one which conditions on the same retrieved passages
across the whole generated sequence, the other can use different passages per token. We fine-tune and evaluate our
models on a wide range of knowledge-intensive NLP tasks and set the state-of-the-art on three open domain QA tasks,
outperforming parametric seq2seq models and task-specific retrieve-and-extract architectures. For language generation | 114_2_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md | https://huggingface.co/docs/transformers/en/model_doc/rag/#overview | .md | outperforming parametric seq2seq models and task-specific retrieve-and-extract architectures. For language generation
tasks, we find that RAG models generate more specific, diverse and factual language than a state-of-the-art
parametric-only seq2seq baseline.*
This model was contributed by [ola13](https://huggingface.co/ola13). | 114_2_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md | https://huggingface.co/docs/transformers/en/model_doc/rag/#usage-tips | .md | Retrieval-augmented generation ("RAG") models combine the powers of pretrained dense retrieval (DPR) and Seq2Seq models.
RAG models retrieve docs, pass them to a seq2seq model, then marginalize to generate outputs. The retriever and seq2seq
modules are initialized from pretrained models, and fine-tuned jointly, allowing both retrieval and generation to adapt
to downstream tasks. | 114_3_0 |
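The following hedged end-to-end sketch puts the pieces together; `use_dummy_dataset=True` keeps the index download small and would be replaced by the full `wiki_dpr` index in a real setup:
```python
import torch
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration

tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
)
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)

inputs = tokenizer("who holds the record in 100m freestyle", return_tensors="pt")
with torch.no_grad():
    generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```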
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md | https://huggingface.co/docs/transformers/en/model_doc/rag/#ragconfig | .md | [`RagConfig`] stores the configuration of a *RagModel*. Configuration objects inherit from [`PretrainedConfig`] and
can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information.
Args:
title_sep (`str`, *optional*, defaults to `" / "`):
Separator inserted between the title and the text of the retrieved document when calling [`RagRetriever`].
doc_sep (`str`, *optional*, defaults to `" // "`): | 114_4_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md | https://huggingface.co/docs/transformers/en/model_doc/rag/#ragconfig | .md | doc_sep (`str`, *optional*, defaults to `" // "`):
Separator inserted between the text of the retrieved document and the original input when calling
[`RagRetriever`].
n_docs (`int`, *optional*, defaults to 5):
Number of documents to retrieve.
max_combined_length (`int`, *optional*, defaults to 300):
Max length of contextualized input returned by [`~RagRetriever.__call__`].
retrieval_vector_size (`int`, *optional*, defaults to 768):
Dimensionality of the document embeddings indexed by [`RagRetriever`]. | 114_4_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md | https://huggingface.co/docs/transformers/en/model_doc/rag/#ragconfig | .md | Dimensionality of the document embeddings indexed by [`RagRetriever`].
retrieval_batch_size (`int`, *optional*, defaults to 8):
Retrieval batch size, defined as the number of queries issued concurrently to the faiss index encapsulated by
[`RagRetriever`].
dataset (`str`, *optional*, defaults to `"wiki_dpr"`):
A dataset identifier of the indexed dataset in HuggingFace Datasets (list all available datasets and ids
using `datasets.list_datasets()`).
dataset_split (`str`, *optional*, defaults to `"train"`): | 114_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md | https://huggingface.co/docs/transformers/en/model_doc/rag/#ragconfig | .md | using `datasets.list_datasets()`).
dataset_split (`str`, *optional*, defaults to `"train"`):
Which split of the `dataset` to load.
index_name (`str`, *optional*, defaults to `"compressed"`):
The index name of the index associated with the `dataset`. One can choose between `"legacy"`, `"exact"` and
`"compressed"`.
index_path (`str`, *optional*):
The path to the serialized faiss index on disk.
passages_path (`str`, *optional*):
A path to text passages compatible with the faiss index. Required if using | 114_4_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md | https://huggingface.co/docs/transformers/en/model_doc/rag/#ragconfig | .md | passages_path (`str`, *optional*):
A path to text passages compatible with the faiss index. Required if using
[`~models.rag.retrieval_rag.LegacyIndex`]
use_dummy_dataset (`bool`, *optional*, defaults to `False`):
Whether to load a "dummy" variant of the dataset specified by `dataset`.
label_smoothing (`float`, *optional*, defaults to 0.0):
Only relevant if `return_loss` is set to `True`. Controls the `epsilon` parameter value for label smoothing | 114_4_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md | https://huggingface.co/docs/transformers/en/model_doc/rag/#ragconfig | .md | Only relevant if `return_loss` is set to `True`. Controls the `epsilon` parameter value for label smoothing
in the loss calculation. If set to 0, no label smoothing is performed.
do_marginalize (`bool`, *optional*, defaults to `False`):
If `True`, the logits are marginalized over all documents by making use of
`torch.nn.functional.log_softmax`.
reduce_loss (`bool`, *optional*, defaults to `False`):
Whether or not to reduce the NLL loss using the `torch.Tensor.sum` operation. | 114_4_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md | https://huggingface.co/docs/transformers/en/model_doc/rag/#ragconfig | .md | Whether or not to reduce the NLL loss using the `torch.Tensor.sum` operation.
do_deduplication (`bool`, *optional*, defaults to `True`):
Whether or not to deduplicate the generations from different context documents for a given input. Has to be
set to `False` if used while training with distributed backend.
exclude_bos_score (`bool`, *optional*, defaults to `False`):
Whether or not to disregard the BOS token when computing the loss.
output_retrieved (`bool`, *optional*, defaults to `False`): | 114_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md | https://huggingface.co/docs/transformers/en/model_doc/rag/#ragconfig | .md | Whether or not to disregard the BOS token when computing the loss.
output_retrieved (`bool`, *optional*, defaults to `False`):
If set to `True`, `retrieved_doc_embeds`, `retrieved_doc_ids`, `context_input_ids` and
`context_attention_mask` are returned. See returned tensors for more detail.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models).
forced_eos_token_id (`int`, *optional*): | 114_4_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md | https://huggingface.co/docs/transformers/en/model_doc/rag/#ragconfig | .md | forced_eos_token_id (`int`, *optional*):
The id of the token to force as the last generated token when `max_length` is reached. Usually set to
`eos_token_id`. | 114_4_8 |
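A small, hedged sketch showing how to load and tweak a RAG configuration from an existing checkpoint (here `facebook/rag-token-nq`), touching a few of the fields documented above:
```python
from transformers import RagConfig

# Load the configuration of an existing RAG checkpoint and inspect / override
# a few of the retrieval-related fields.
config = RagConfig.from_pretrained("facebook/rag-token-nq")
print(config.n_docs, config.retrieval_vector_size, config.index_name)

config.n_docs = 10            # retrieve more documents per query
config.do_marginalize = True  # marginalize logits over documents
```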
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md | https://huggingface.co/docs/transformers/en/model_doc/rag/#ragtokenizer | .md | No docstring available for RagTokenizer | 114_5_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md | https://huggingface.co/docs/transformers/en/model_doc/rag/#rag-specific-outputs | .md | models.rag.modeling_rag.RetrievAugLMMarginOutput
Base class for retriever augmented marginalized models outputs.
Args:
loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
Language modeling loss.
logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
Prediction scores of the language modeling head. The score is possibly marginalized over all documents for
each vocabulary token. | 114_6_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md | https://huggingface.co/docs/transformers/en/model_doc/rag/#rag-specific-outputs | .md | each vocabulary token.
doc_scores (`torch.FloatTensor` of shape `(batch_size, config.n_docs)`):
Score between each retrieved document embeddings (see `retrieved_doc_embeds`) and
`question_encoder_last_hidden_state`.
past_key_values (`List[torch.FloatTensor]`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
List of `torch.FloatTensor` of length `config.n_layers`, with each tensor of shape `(2, batch_size,
num_heads, sequence_length, embed_size_per_head)`). | 114_6_1 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md | https://huggingface.co/docs/transformers/en/model_doc/rag/#rag-specific-outputs | .md | num_heads, sequence_length, embed_size_per_head)`).
Contains precomputed hidden-states (key and values in the attention blocks) of the decoder that can be used
(see `past_key_values` input) to speed up sequential decoding.
retrieved_doc_embeds (`torch.FloatTensor` of shape `(batch_size, config.n_docs, hidden_size)`, *optional*, returned when *output_retrieved=True*):
Embedded documents retrieved by the retriever. Is used with `question_encoder_last_hidden_state` to compute
the `doc_scores`. | 114_6_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md | https://huggingface.co/docs/transformers/en/model_doc/rag/#rag-specific-outputs | .md | Embedded documents retrieved by the retriever. Is used with `question_encoder_last_hidden_state` to compute
the `doc_scores`.
retrieved_doc_ids (`torch.LongTensor` of shape `(batch_size, config.n_docs)`, *optional*, returned when *output_retrieved=True*):
The indexes of the embedded documents retrieved by the retriever.
context_input_ids (`torch.LongTensor` of shape `(batch_size * config.n_docs, config.max_combined_length)`, *optional*, returned when *output_retrieved=True*): | 114_6_3 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md | https://huggingface.co/docs/transformers/en/model_doc/rag/#rag-specific-outputs | .md | Input ids post-processed from the retrieved documents and the question encoder input_ids by the retriever.
context_attention_mask (`torch.LongTensor` of shape `(batch_size * config.n_docs, config.max_combined_length)`, *optional*, returned when *output_retrieved=True*):
Attention mask post-processed from the retrieved documents and the question encoder `input_ids` by the
retriever.
question_encoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): | 114_6_4 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md | https://huggingface.co/docs/transformers/en/model_doc/rag/#rag-specific-outputs | .md | question_encoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
Sequence of hidden states at the output of the last layer of the question encoder of the model.
question_enc_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings and one for the output of each layer) of | 114_6_5 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md | https://huggingface.co/docs/transformers/en/model_doc/rag/#rag-specific-outputs | .md | Tuple of `torch.FloatTensor` (one for the output of the embeddings and one for the output of each layer) of
shape `(batch_size, sequence_length, hidden_size)`.
Hidden states of the question encoder at the output of each layer plus the initial embedding outputs.
question_enc_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): | 114_6_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/rag.md | https://huggingface.co/docs/transformers/en/model_doc/rag/#rag-specific-outputs | .md | Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights of the question encoder, after the attention softmax, used to compute the weighted
average in the self-attention heads.
generator_enc_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
Sequence of hidden-states at the output of the last layer of the generator encoder of the model. | 114_6_7 |
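To clarify how `logits` and `doc_scores` interact, here is a hedged sketch of the document marginalization described above (shapes are illustrative and the library's actual implementation differs in some details):
```python
import torch
import torch.nn.functional as F

batch_size, n_docs, seq_len, vocab_size = 2, 5, 7, 50265

# Generator output per (query, document) pair and retrieval scores per document.
seq_logits = torch.randn(batch_size * n_docs, seq_len, vocab_size)
doc_scores = torch.randn(batch_size, n_docs)

seq_logprobs = F.log_softmax(seq_logits, dim=-1).view(batch_size, n_docs, seq_len, vocab_size)
doc_logprobs = F.log_softmax(doc_scores, dim=-1)[:, :, None, None]

# log p(y_t | x) = logsumexp_d [ log p(d | x) + log p(y_t | x, d) ]
marginalized = torch.logsumexp(seq_logprobs + doc_logprobs, dim=1)  # (batch_size, seq_len, vocab_size)
print(marginalized.shape)
```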