source: stringclasses (470 values)
url: stringlengths (49-167)
file_type: stringclasses (1 value)
chunk: stringlengths (1-512)
chunk_id: stringlengths (5-9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#dinov2config
.md
If used as backbone, list of features to output. Can be any of `"stem"`, `"stage1"`, `"stage2"`, etc. (depending on how many stages the model has). If unset and `out_indices` is set, will default to the corresponding stages. If unset and `out_indices` is unset, will default to the last stage. Must be in the same order as defined in the `stage_names` attribute. out_indices (`List[int]`, *optional*): If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how
402_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#dinov2config
.md
If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how many stages the model has). If unset and `out_features` is set, will default to the corresponding stages. If unset and `out_features` is unset, will default to the last stage. Must be in the same order as defined in the `stage_names` attribute. apply_layernorm (`bool`, *optional*, defaults to `True`): Whether to apply layer normalization to the feature maps in case the model is used as backbone.
402_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#dinov2config
.md
Whether to apply layer normalization to the feature maps in case the model is used as backbone. reshape_hidden_states (`bool`, *optional*, defaults to `True`): Whether to reshape the feature maps to 4D tensors of shape `(batch_size, hidden_size, height, width)` in case the model is used as backbone. If `False`, the feature maps will be 3D tensors of shape `(batch_size, seq_len, hidden_size)`. Example: ```python >>> from transformers import Dinov2Config, Dinov2Model
402_4_9
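The backbone arguments described above (`out_features`, `out_indices`, `apply_layernorm`, `reshape_hidden_states`) are easiest to see with [`Dinov2Backbone`]. A minimal sketch; the chosen stage names are illustrative and depend on how many layers the configuration has:

```python
>>> import torch
>>> from transformers import Dinov2Config, Dinov2Backbone

>>> # Select intermediate feature maps by stage name (out_indices could be used instead).
>>> config = Dinov2Config(out_features=["stage2", "stage4"], reshape_hidden_states=True)
>>> backbone = Dinov2Backbone(config)

>>> pixel_values = torch.randn(1, 3, config.image_size, config.image_size)
>>> outputs = backbone(pixel_values)
>>> # With reshape_hidden_states=True each feature map is 4D: (batch_size, hidden_size, height, width).
>>> [feature_map.shape for feature_map in outputs.feature_maps]
```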
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#dinov2config
.md
>>> # Initializing a Dinov2 dinov2-base-patch16-224 style configuration >>> configuration = Dinov2Config() >>> # Initializing a model (with random weights) from the dinov2-base-patch16-224 style configuration >>> model = Dinov2Model(configuration) >>> # Accessing the model configuration >>> configuration = model.config ``` <frameworkcontent> <pt>
402_4_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#dinov2model
.md
The bare DINOv2 Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Dinov2Config`]): Model configuration class with all the parameters of the model.
402_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#dinov2model
.md
behavior. Parameters: config ([`Dinov2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
402_5_1
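As a quick illustration of the `forward` method, a short sketch of extracting hidden states with a pretrained checkpoint (the `facebook/dinov2-base` checkpoint name and image URL are assumptions, not part of the docstring above):

```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, Dinov2Model

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> processor = AutoImageProcessor.from_pretrained("facebook/dinov2-base")
>>> model = Dinov2Model.from_pretrained("facebook/dinov2-base")

>>> inputs = processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> outputs.last_hidden_state.shape  # (batch_size, sequence_length, hidden_size)
```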
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#dinov2forimageclassification
.md
Dinov2 Model transformer with an image classification head on top (a linear layer on top of the final hidden state of the [CLS] token) e.g. for ImageNet. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Dinov2Config`]): Model configuration class with all the parameters of the model.
402_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#dinov2forimageclassification
.md
behavior. Parameters: config ([`Dinov2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward </pt> <jax>
402_6_1
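A corresponding sketch for the classification head; the `facebook/dinov2-small-imagenet1k-1-layer` checkpoint name is an assumption, and any DINOv2 checkpoint fine-tuned with a classification head works the same way:

```python
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, Dinov2ForImageClassification

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> processor = AutoImageProcessor.from_pretrained("facebook/dinov2-small-imagenet1k-1-layer")
>>> model = Dinov2ForImageClassification.from_pretrained("facebook/dinov2-small-imagenet1k-1-layer")

>>> inputs = processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> model.config.id2label[logits.argmax(-1).item()]
```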
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#flaxdinov2model
.md
No docstring available for FlaxDinov2Model Methods: __call__
402_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinov2.md
https://huggingface.co/docs/transformers/en/model_doc/dinov2/#flaxdinov2forimageclassification
.md
No docstring available for FlaxDinov2ForImageClassification Methods: __call__ </jax> </frameworkcontent>
402_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
403_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
403_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#overview
.md
The UnivNet model was proposed in [UnivNet: A Neural Vocoder with Multi-Resolution Spectrogram Discriminators for High-Fidelity Waveform Generation](https://arxiv.org/abs/2106.07889) by Won Jang, Dan Lim, Jaesam Yoon, Bongwan Kang, and Juntae Kim.
403_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#overview
.md
The UnivNet model is a generative adversarial network (GAN) trained to synthesize high fidelity speech waveforms. The UnivNet model shared in `transformers` is the *generator*, which maps a conditioning log-mel spectrogram and optional noise sequence to a speech waveform (e.g. a vocoder). Only the generator is required for inference. The *discriminator* used to train the `generator` is not implemented. The abstract from the paper is the following:
403_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#overview
.md
*Most neural vocoders employ band-limited mel-spectrograms to generate waveforms. If full-band spectral features are used as the input, the vocoder can be provided with as much acoustic information as possible. However, in some models employing full-band mel-spectrograms, an over-smoothing problem occurs as part of which non-sharp spectrograms are generated. To address this problem, we propose UnivNet, a neural vocoder that synthesizes high-fidelity waveforms in real time. Inspired by works in the field of
403_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#overview
.md
we propose UnivNet, a neural vocoder that synthesizes high-fidelity waveforms in real time. Inspired by works in the field of voice activity detection, we added a multi-resolution spectrogram discriminator that employs multiple linear spectrogram magnitudes computed using various parameter sets. Using full-band mel-spectrograms as input, we expect to generate high-resolution signals by adding a discriminator that employs spectrograms of multiple resolutions as the input. In an evaluation on a dataset
403_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#overview
.md
signals by adding a discriminator that employs spectrograms of multiple resolutions as the input. In an evaluation on a dataset containing information on hundreds of speakers, UnivNet obtained the best objective and subjective results among competing models for both seen and unseen speakers. These results, including the best subjective score for text-to-speech, demonstrate the potential for fast adaptation to new speakers without a need for training from scratch.*
403_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#overview
.md
Tips:
403_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#overview
.md
- The `noise_sequence` argument for [`UnivNetModel.forward`] should be standard Gaussian noise (such as from `torch.randn`) of shape `([batch_size], noise_length, model.config.model_in_channels)`, where `noise_length` should match the length dimension (dimension 1) of the `input_features` argument. If not supplied, it will be randomly generated; a `torch.Generator` can be supplied to the `generator` argument so that the forward pass can be reproduced. (Note that [`UnivNetFeatureExtractor`] will return
403_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#overview
.md
to the `generator` argument so that the forward pass can be reproduced. (Note that [`UnivNetFeatureExtractor`] will return generated noise by default, so it shouldn't be necessary to generate `noise_sequence` manually.)
403_1_7
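A minimal sketch of the reproducibility point above, passing the same seeded `torch.Generator` to two forward passes; the input shape is illustrative and the random `input_features` stand in for a real log-mel spectrogram:

```python
>>> import torch
>>> from transformers import UnivNetModel

>>> model = UnivNetModel.from_pretrained("dg845/univnet-dev")

>>> # (batch_size, noise_length, num_mel_bins) conditioning log-mel spectrogram (random here, just for shape).
>>> input_features = torch.randn(1, 100, model.config.num_mel_bins)

>>> with torch.no_grad():
...     out1 = model(input_features=input_features, generator=torch.Generator().manual_seed(0))
...     out2 = model(input_features=input_features, generator=torch.Generator().manual_seed(0))
>>> torch.allclose(out1.waveforms, out2.waveforms)  # True: same seed, same noise, same waveform
```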
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#overview
.md
- Padding added by [`UnivNetFeatureExtractor`] can be removed from the [`UnivNetModel`] output through the [`UnivNetFeatureExtractor.batch_decode`] method, as shown in the usage example below. - Padding the end of each waveform with silence can reduce artifacts at the end of the generated audio sample. This can be done by supplying `pad_end = True` to [`UnivNetFeatureExtractor.__call__`]. See [this issue](https://github.com/seungwonpark/melgan/issues/8) for more details. Usage Example: ```python
403_1_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#overview
.md
Usage Example: ```python import torch from scipy.io.wavfile import write from datasets import Audio, load_dataset
403_1_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#overview
.md
from transformers import UnivNetFeatureExtractor, UnivNetModel model_id_or_path = "dg845/univnet-dev" model = UnivNetModel.from_pretrained(model_id_or_path) feature_extractor = UnivNetFeatureExtractor.from_pretrained(model_id_or_path)
403_1_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#overview
.md
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") # Resample the audio to the model and feature extractor's sampling rate. ds = ds.cast_column("audio", Audio(sampling_rate=feature_extractor.sampling_rate)) # Pad the end of the converted waveforms to reduce artifacts at the end of the output audio samples. inputs = feature_extractor( ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], pad_end=True, return_tensors="pt" )
403_1_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#overview
.md
with torch.no_grad(): audio = model(**inputs)
403_1_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#overview
.md
# Remove the extra padding at the end of the output. audio = feature_extractor.batch_decode(**audio)[0] # Convert to wav file write("sample_audio.wav", feature_extractor.sampling_rate, audio) ``` This model was contributed by [dg845](https://huggingface.co/dg845).
403_1_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#overview
.md
``` This model was contributed by [dg845](https://huggingface.co/dg845). To the best of my knowledge, there is no official code release, but an unofficial implementation can be found at [maum-ai/univnet](https://github.com/maum-ai/univnet) with pretrained checkpoints [here](https://github.com/maum-ai/univnet#pre-trained-model).
403_1_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#univnetconfig
.md
This is the configuration class to store the configuration of a [`UnivNetModel`]. It is used to instantiate a UnivNet vocoder model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the UnivNet [dg845/univnet-dev](https://huggingface.co/dg845/univnet-dev) architecture, which corresponds to the 'c32'
403_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#univnetconfig
.md
[dg845/univnet-dev](https://huggingface.co/dg845/univnet-dev) architecture, which corresponds to the 'c32' architecture in [maum-ai/univnet](https://github.com/maum-ai/univnet/blob/master/config/default_c32.yaml). Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: model_in_channels (`int`, *optional*, defaults to 64):
403_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#univnetconfig
.md
documentation from [`PretrainedConfig`] for more information. Args: model_in_channels (`int`, *optional*, defaults to 64): The number of input channels for the UnivNet residual network. This should correspond to `noise_sequence.shape[1]` and the value used in the [`UnivNetFeatureExtractor`] class. model_hidden_channels (`int`, *optional*, defaults to 32): The number of hidden channels of each residual block in the UnivNet residual network. num_mel_bins (`int`, *optional*, defaults to 100):
403_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#univnetconfig
.md
num_mel_bins (`int`, *optional*, defaults to 100): The number of frequency bins in the conditioning log-mel spectrogram. This should correspond to the value used in the [`UnivNetFeatureExtractor`] class. resblock_kernel_sizes (`Tuple[int]` or `List[int]`, *optional*, defaults to `[3, 3, 3]`): A tuple of integers defining the kernel sizes of the 1D convolutional layers in the UnivNet residual network. The length of `resblock_kernel_sizes` defines the number of resnet blocks and should match that of
403_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#univnetconfig
.md
network. The length of `resblock_kernel_sizes` defines the number of resnet blocks and should match that of `resblock_stride_sizes` and `resblock_dilation_sizes`. resblock_stride_sizes (`Tuple[int]` or `List[int]`, *optional*, defaults to `[8, 8, 4]`): A tuple of integers defining the stride sizes of the 1D convolutional layers in the UnivNet residual network. The length of `resblock_stride_sizes` should match that of `resblock_kernel_sizes` and `resblock_dilation_sizes`.
403_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#univnetconfig
.md
network. The length of `resblock_stride_sizes` should match that of `resblock_kernel_sizes` and `resblock_dilation_sizes`. resblock_dilation_sizes (`Tuple[Tuple[int]]` or `List[List[int]]`, *optional*, defaults to `[[1, 3, 9, 27], [1, 3, 9, 27], [1, 3, 9, 27]]`): A nested tuple of integers defining the dilation rates of the dilated 1D convolutional layers in the UnivNet residual network. The length of `resblock_dilation_sizes` should match that of
403_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#univnetconfig
.md
UnivNet residual network. The length of `resblock_dilation_sizes` should match that of `resblock_kernel_sizes` and `resblock_stride_sizes`. The length of each nested list in `resblock_dilation_sizes` defines the number of convolutional layers per resnet block. kernel_predictor_num_blocks (`int`, *optional*, defaults to 3): The number of residual blocks in the kernel predictor network, which calculates the kernel and bias for each location variable convolution layer in the UnivNet residual network.
403_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#univnetconfig
.md
each location variable convolution layer in the UnivNet residual network. kernel_predictor_hidden_channels (`int`, *optional*, defaults to 64): The number of hidden channels for each residual block in the kernel predictor network. kernel_predictor_conv_size (`int`, *optional*, defaults to 3): The kernel size of each 1D convolutional layer in the kernel predictor network. kernel_predictor_dropout (`float`, *optional*, defaults to 0.0):
403_2_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#univnetconfig
.md
kernel_predictor_dropout (`float`, *optional*, defaults to 0.0): The dropout probability for each residual block in the kernel predictor network. initializer_range (`float`, *optional*, defaults to 0.01): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. leaky_relu_slope (`float`, *optional*, defaults to 0.2): The angle of the negative slope used by the leaky ReLU activation. Example: ```python >>> from transformers import UnivNetModel, UnivNetConfig
403_2_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#univnetconfig
.md
>>> # Initializing a Tortoise TTS style configuration >>> configuration = UnivNetConfig() >>> # Initializing a model (with random weights) from the Tortoise TTS style configuration >>> model = UnivNetModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
403_2_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#univnetfeatureextractor
.md
Constructs a UnivNet feature extractor. This class extracts log-mel-filter bank features from raw speech using the short time Fourier Transform (STFT). The STFT implementation follows that of TacoTron 2 and Hifi-GAN. This feature extractor inherits from [`~feature_extraction_sequence_utils.SequenceFeatureExtractor`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: feature_size (`int`, *optional*, defaults to 1):
403_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#univnetfeatureextractor
.md
Args: feature_size (`int`, *optional*, defaults to 1): The feature dimension of the extracted features. sampling_rate (`int`, *optional*, defaults to 24000): The sampling rate at which the audio files should be digitized, expressed in hertz (Hz). padding_value (`float`, *optional*, defaults to 0.0): The value to pad with when applying the padding strategy defined by the `padding` argument to [`UnivNetFeatureExtractor.__call__`]. Should correspond to audio silence. The `pad_end` argument to
403_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#univnetfeatureextractor
.md
[`UnivNetFeatureExtractor.__call__`]. Should correspond to audio silence. The `pad_end` argument to `__call__` will also use this padding value. do_normalize (`bool`, *optional*, defaults to `False`): Whether to perform Tacotron 2 normalization on the input. Normalizing can help to significantly improve the performance for some models. num_mel_bins (`int`, *optional*, defaults to 100): The number of mel-frequency bins in the extracted spectrogram features. This should match
403_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#univnetfeatureextractor
.md
The number of mel-frequency bins in the extracted spectrogram features. This should match `UnivNetModel.config.num_mel_bins`. hop_length (`int`, *optional*, defaults to 256): The direct number of samples between sliding windows. Otherwise referred to as "shift" in many papers. Note that this is different from other audio feature extractors such as [`SpeechT5FeatureExtractor`] which take the `hop_length` in ms. win_length (`int`, *optional*, defaults to 1024):
403_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#univnetfeatureextractor
.md
the `hop_length` in ms. win_length (`int`, *optional*, defaults to 1024): The direct number of samples for each sliding window. Note that this is different from other audio feature extractors such as [`SpeechT5FeatureExtractor`] which take the `win_length` in ms. win_function (`str`, *optional*, defaults to `"hann_window"`): Name for the window function used for windowing, must be accessible via `torch.{win_function}` filter_length (`int`, *optional*, defaults to 1024):
403_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#univnetfeatureextractor
.md
filter_length (`int`, *optional*, defaults to 1024): The number of FFT components to use. If `None`, this is determined using `transformers.audio_utils.optimal_fft_length`. max_length_s (`int`, *optional*, defaults to 10): The maximum input length of the model in seconds. This is used to pad the audio. fmin (`float`, *optional*, defaults to 0.0): Minimum mel frequency in Hz. fmax (`float`, *optional*): Maximum mel frequency in Hz. If not set, defaults to `sampling_rate / 2`.
403_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#univnetfeatureextractor
.md
fmax (`float`, *optional*): Maximum mel frequency in Hz. If not set, defaults to `sampling_rate / 2`. mel_floor (`float`, *optional*, defaults to 1e-09): Minimum value of mel frequency banks. Note that the way [`UnivNetFeatureExtractor`] uses `mel_floor` is different than in [`transformers.audio_utils.spectrogram`]. center (`bool`, *optional*, defaults to `False`): Whether to pad the waveform so that frame `t` is centered around time `t * hop_length`. If `False`, frame
403_3_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#univnetfeatureextractor
.md
Whether to pad the waveform so that frame `t` is centered around time `t * hop_length`. If `False`, frame `t` will start at time `t * hop_length`. compression_factor (`float`, *optional*, defaults to 1.0): The multiplicative compression factor for dynamic range compression during spectral normalization. compression_clip_val (`float`, *optional*, defaults to 1e-05): The clip value applied to the waveform before applying dynamic range compression during spectral normalization.
403_3_7
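For reference, `compression_factor` and `compression_clip_val` enter the spectral normalization through the usual dynamic range compression used by HiFi-GAN-style vocoders. A sketch of that computation (an approximation of the idea, not the library's exact code):

```python
import torch

def dynamic_range_compression(spectrogram, compression_factor=1.0, clip_val=1e-5):
    # Clamp to avoid log(0), scale by the compression factor, then take the log.
    return torch.log(torch.clamp(spectrogram, min=clip_val) * compression_factor)
```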
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#univnetfeatureextractor
.md
The clip value applied to the waveform before applying dynamic range compression during spectral normalization. normalize_min (`float`, *optional*, defaults to -11.512925148010254): The min value used for Tacotron 2-style linear normalization. The default is the original value from the Tacotron 2 implementation. normalize_max (`float`, *optional*, defaults to 2.3143386840820312): The max value used for Tacotron 2-style linear normalization. The default is the original value from the
403_3_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#univnetfeatureextractor
.md
The max value used for Tacotron 2-style linear normalization. The default is the original value from the Tacotron 2 implementation. model_in_channels (`int`, *optional*, defaults to 64): The number of input channels to the [`UnivNetModel`] model. This should match `UnivNetModel.config.model_in_channels`. pad_end_length (`int`, *optional*, defaults to 10): If padding the end of each waveform, the number of spectrogram frames worth of samples to append. The
403_3_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#univnetfeatureextractor
.md
If padding the end of each waveform, the number of spectrogram frames worth of samples to append. The number of appended samples will be `pad_end_length * hop_length`. return_attention_mask (`bool`, *optional*, defaults to `True`): Whether or not [`~UnivNetFeatureExtractor.__call__`] should return `attention_mask`. Methods: __call__
403_3_10
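Tying the arguments above together, a short sketch of a feature-extractor call; the one-second silent input is purely illustrative. With `pad_end=True`, roughly `pad_end_length * hop_length` extra samples are appended before the spectrogram is computed:

```python
>>> import numpy as np
>>> from transformers import UnivNetFeatureExtractor

>>> feature_extractor = UnivNetFeatureExtractor.from_pretrained("dg845/univnet-dev")

>>> # One second of silence at the expected sampling rate.
>>> raw_audio = np.zeros(feature_extractor.sampling_rate, dtype=np.float32)
>>> inputs = feature_extractor(
...     raw_audio, sampling_rate=feature_extractor.sampling_rate, pad_end=True, return_tensors="pt"
... )
>>> inputs["input_features"].shape  # (batch_size, num_frames, num_mel_bins)
>>> inputs["noise_sequence"].shape  # (batch_size, num_frames, model_in_channels)
```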
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#univnetmodel
.md
UnivNet GAN vocoder. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
403_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/univnet.md
https://huggingface.co/docs/transformers/en/model_doc/univnet/#univnetmodel
.md
and behavior. Parameters: config ([`UnivNetConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
403_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
404_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
404_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukebox
.md
<Tip warning={true}> This model is in maintenance mode only, we don't accept any new PRs changing its code. If you run into any issues running this model, please reinstall the last version that supported this model: v4.40.2. You can do so by running the following command: `pip install -U transformers==4.40.2`. </Tip>
404_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#overview
.md
The Jukebox model was proposed in [Jukebox: A generative model for music](https://arxiv.org/pdf/2005.00341.pdf) by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever. It introduces a generative music model which can produce minute-long samples that can be conditioned on an artist, genres and lyrics. The abstract from the paper is the following:
404_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#overview
.md
*We introduce Jukebox, a model that generates music with singing in the raw audio domain. We tackle the long context of raw audio using a multiscale VQ-VAE to compress it to discrete codes, and modeling those using autoregressive Transformers. We show that the combined model at scale can generate high-fidelity and diverse songs with coherence up to multiple minutes. We can condition on artist and genre to steer the musical and vocal style, and on unaligned lyrics to make the singing more controllable. We
404_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#overview
.md
on artist and genre to steer the musical and vocal style, and on unaligned lyrics to make the singing more controllable. We are releasing thousands of non cherry-picked samples, along with model weights and code.*
404_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#overview
.md
As shown in the following figure, Jukebox is made of 3 `priors` which are decoder-only models. They follow the architecture described in [Generating Long Sequences with Sparse Transformers](https://arxiv.org/abs/1904.10509), modified to support longer context length.
404_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#overview
.md
First, an autoencoder is used to encode the text lyrics. Next, the first (also called `top_prior`) prior attends to the last hidden states extracted from the lyrics encoder. The priors are linked to the previous priors respectively via an `AudioConditioner` module. The `AudioConditioner` upsamples the outputs of the previous prior to raw tokens at a certain audio frames-per-second resolution.
404_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#overview
.md
The metadata such as *artist, genre and timing* are passed to each prior, in the form of a start token and positional embedding for the timing data. The hidden states are mapped to the closest codebook vector from the VQVAE in order to convert them to raw audio. ![JukeboxModel](https://gist.githubusercontent.com/ArthurZucker/92c1acaae62ebf1b6a951710bdd8b6af/raw/c9c517bf4eff61393f6c7dec9366ef02bdd059a3/jukebox.svg) This model was contributed by [Arthur Zucker](https://huggingface.co/ArthurZ).
404_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#overview
.md
This model was contributed by [Arthur Zucker](https://huggingface.co/ArthurZ). The original code can be found [here](https://github.com/openai/jukebox).
404_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#usage-tips
.md
- This model only supports inference. This is for a few reasons, mostly because it requires a crazy amount of memory to train. Feel free to open a PR and add what's missing to have a full integration with the Hugging Face trainer! - This model is very slow, and takes 8h to generate a minute-long audio sample using the 5b top prior on a V100 GPU. To automatically handle the device on which the model should execute, use `accelerate`.
404_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#usage-tips
.md
- Contrary to the paper, the order of the priors goes from `0` to `1` as it felt more intuitive: we sample starting from `0`. - Primed sampling (conditioning the sampling on raw audio) requires more memory than ancestral sampling and should be used with `fp16` set to `True`.
404_3_1
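A hedged sketch of ancestral sampling along the lines of the tips above; the checkpoint name, metadata values and `sample_length` are illustrative, and the exact keyword arguments accepted by `ancestral_sample` should be checked against the [`JukeboxModel`] API reference:

```python
>>> from transformers import JukeboxModel, JukeboxTokenizer, set_seed

>>> # min_duration=0 allows very short samples; only inference is supported, hence .eval().
>>> model = JukeboxModel.from_pretrained("openai/jukebox-1b-lyrics", min_duration=0).eval()
>>> tokenizer = JukeboxTokenizer.from_pretrained("openai/jukebox-1b-lyrics")

>>> metas = tokenizer(artist="Zac Brown Band", genres="Country", lyrics="I met a traveller from an antique land")
>>> set_seed(0)
>>> # Ancestral sampling: the priors are sampled in order, starting from prior 0 (the top prior).
>>> music_tokens = model.ancestral_sample(metas["input_ids"], sample_length=400)
>>> # The resulting music tokens can then be decoded back to raw audio with the model's VQ-VAE.
```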
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxconfig
.md
This is the configuration class to store the configuration of a [`JukeboxModel`]. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Instantiating a configuration with the defaults will yield a similar configuration to that of [openai/jukebox-1b-lyrics](https://huggingface.co/openai/jukebox-1b-lyrics) architecture.
404_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxconfig
.md
[openai/jukebox-1b-lyrics](https://huggingface.co/openai/jukebox-1b-lyrics) architecture. The downsampling and stride are used to determine downsampling of the input sequence. For example, `downsampling = (5, 3)` and `strides = (2, 2)` will downsample the audio by 2^5 = 32 to get the first level of codes, and by 2^(5+3) = 256 to get the second level of codes. This is mostly true for training the top level prior and the upsamplers. Args: vqvae_config (`JukeboxVQVAEConfig`, *optional*):
404_4_1
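A tiny worked example of the downsampling arithmetic described above (illustration only, not library code):

```python
# downsampling = (5, 3) layers per level with a stride of 2 per layer, as in the example above.
downsampling, stride = (5, 3), 2

total = 1
for level, num_layers in enumerate(downsampling, start=1):
    total *= stride ** num_layers
    print(f"level {level}: raw audio compressed by a factor of {total}")
# level 1: raw audio compressed by a factor of 32
# level 2: raw audio compressed by a factor of 256
```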
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxconfig
.md
Args: vqvae_config (`JukeboxVQVAEConfig`, *optional*): Configuration for the `JukeboxVQVAE` model. prior_config_list (`List[JukeboxPriorConfig]`, *optional*): List of the configs for each of the `JukeboxPrior` of the model. The original architecture uses 3 priors. nb_priors (`int`, *optional*, defaults to 3): Number of prior models that will sequentially sample tokens. Each prior is a conditional autoregressive
404_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxconfig
.md
Number of prior models that will sequentially sample tokens. Each prior is a conditional autoregressive (decoder) model, apart from the top prior, which can include a lyric encoder. The available models were trained using a top prior and 2 upsampler priors. sampling_rate (`int`, *optional*, defaults to 44100): Sampling rate of the raw audio. timing_dims (`int`, *optional*, defaults to 64): Dimensions of the JukeboxRangeEmbedding layer which is equivalent to traditional positional embedding
404_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxconfig
.md
Dimensions of the JukeboxRangeEmbedding layer which is equivalent to traditional positional embedding layer. The timing embedding layer converts the absolute and relative position in the currently sampled audio to a tensor of length `timing_dims` that will be added to the music tokens. min_duration (`int`, *optional*, defaults to 0): Minimum duration of the audios to generate max_duration (`float`, *optional*, defaults to 600.0): Maximum duration of the audios to generate
404_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxconfig
.md
max_duration (`float`, *optional*, defaults to 600.0): Maximum duration of the audios to generate max_nb_genres (`int`, *optional*, defaults to 5): Maximum number of genres that can be used to condition a single sample. metadata_conditioning (`bool`, *optional*, defaults to `True`): Whether or not to use metadata conditioning, corresponding to the artist, the genre and the min/maximum duration. Example: ```python >>> from transformers import JukeboxModel, JukeboxConfig
404_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxconfig
.md
>>> # Initializing a Jukebox configuration >>> configuration = JukeboxConfig() >>> # Initializing a model from the configuration >>> model = JukeboxModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
404_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxpriorconfig
.md
This is the configuration class to store the configuration of a [`JukeboxPrior`]. It is used to instantiate a `JukeboxPrior` according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the top level prior from the [openai/jukebox-1b-lyrics](https://huggingface.co/openai/jukebox-1b-lyrics) architecture.
404_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxpriorconfig
.md
[openai/jukebox-1b-lyrics](https://huggingface.co/openai/jukebox-1b-lyrics) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: act_fn (`str`, *optional*, defaults to `"quick_gelu"`): Activation function. alignment_head (`int`, *optional*, defaults to 2): Head that is responsible for the alignment between lyrics and music. Only used to compute the lyric to audio
404_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxpriorconfig
.md
Head that is responsible for the alignment between lyrics and music. Only used to compute the lyric to audio alignment. alignment_layer (`int`, *optional*, defaults to 68): Index of the layer that is responsible for the alignment between lyrics and music. Only used to compute the lyric to audio alignment. attention_multiplier (`float`, *optional*, defaults to 0.25): Multiplier coefficient used to define the hidden dimension of the attention layers. 0.25 means that 0.25*width of the model will be used.
404_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxpriorconfig
.md
0.25*width of the model will be used. attention_pattern (`str`, *optional*, defaults to `"enc_dec_with_lyrics"`): Which attention pattern to use for the decoder. attn_dropout (`int`, *optional*, defaults to 0): Dropout probability for the post-attention layer dropout in the decoder. attn_res_scale (`bool`, *optional*, defaults to `False`): Whether or not to scale the residuals in the attention conditioner block. blocks (`int`, *optional*, defaults to 64):
404_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxpriorconfig
.md
Whether or not to scale the residuals in the attention conditioner block. blocks (`int`, *optional*, defaults to 64): Number of blocks used in the `block_attn`. A sequence of length seq_len is factored as `[blocks, seq_len // blocks]` in the `JukeboxAttention` layer. conv_res_scale (`int`, *optional*): Whether or not to scale the residuals in the conditioner block. Since the top level prior does not have a conditioner, the default value is `None` and should not be modified.
404_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxpriorconfig
.md
conditioner, the default value is `None` and should not be modified. num_layers (`int`, *optional*, defaults to 72): Number of layers of the transformer architecture. emb_dropout (`int`, *optional*, defaults to 0): Embedding dropout used in the lyric decoder. encoder_config (`JukeboxPriorConfig`, *optional*): Configuration of the encoder which models the prior on the lyrics. encoder_loss_fraction (`float`, *optional*, defaults to 0.4): Multiplication factor used in front of the lyric encoder loss.
404_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxpriorconfig
.md
encoder_loss_fraction (`float`, *optional*, defaults to 0.4): Multiplication factor used in front of the lyric encoder loss. hidden_size (`int`, *optional*, defaults to 2048): Hidden dimension of the attention layers. init_scale (`float`, *optional*, defaults to 0.2): Initialization scales for the prior modules. is_encoder_decoder (`bool`, *optional*, defaults to `True`): Whether or not the prior is an encoder-decoder model. In case it is not, and `nb_relevant_lyric_tokens` is
404_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxpriorconfig
.md
Whether or not the prior is an encoder-decoder model. In case it is not, and `nb_relevant_lyric_tokens` is greater than 0, the `encoder` args should be specified for the lyric encoding. mask (`bool`, *optional*, defaults to `False`): Whether or not to mask the previous positions in the attention. max_duration (`int`, *optional*, defaults to 600): Maximum supported duration of the generated song in seconds. max_nb_genres (`int`, *optional*, defaults to 1):
404_5_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxpriorconfig
.md
Maximum supported duration of the generated song in seconds. max_nb_genres (`int`, *optional*, defaults to 1): Maximum number of genres that can be used to condition the model. merged_decoder (`bool`, *optional*, defaults to `True`): Whether or not the decoder and the encoder inputs are merged. This is used for the separated encoder-decoder architecture. metadata_conditioning (`bool`, *optional*, defaults to `True`): Whether or not to condition on the artist and genre metadata.
404_5_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxpriorconfig
.md
metadata_conditioning (`bool`, *optional*, defaults to `True`): Whether or not to condition on the artist and genre metadata. metadata_dims (`List[int]`, *optional*, defaults to `[604, 7898]`): Number of genres and the number of artists that were used to train the embedding layers of the prior models. min_duration (`int`, *optional*, defaults to 0): Minimum duration of the generated audio on which the model was trained. mlp_multiplier (`float`, *optional*, defaults to 1.0):
404_5_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxpriorconfig
.md
Minimum duration of the generated audio on which the model was trained. mlp_multiplier (`float`, *optional*, defaults to 1.0): Multiplier coefficient used to define the hidden dimension of the MLP layers. 0.25 means that 0.25*width of the model will be used. music_vocab_size (`int`, *optional*, defaults to 2048): Number of different music tokens. Should be similar to the `JukeboxVQVAEConfig.nb_discrete_codes`. n_ctx (`int`, *optional*, defaults to 6144):
404_5_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxpriorconfig
.md
n_ctx (`int`, *optional*, defaults to 6144): Number of context tokens for each prior. The context tokens are the music tokens that are attended to when generating music tokens. n_heads (`int`, *optional*, defaults to 2): Number of attention heads. nb_relevant_lyric_tokens (`int`, *optional*, defaults to 384): Number of lyric tokens that are used when sampling a single window of length `n_ctx` res_conv_depth (`int`, *optional*, defaults to 3):
404_5_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxpriorconfig
.md
res_conv_depth (`int`, *optional*, defaults to 3): Depth of the `JukeboxDecoderConvBock` used to upsample the previously sampled audio in the `JukeboxMusicTokenConditioner`. res_conv_width (`int`, *optional*, defaults to 128): Width of the `JukeboxDecoderConvBock` used to upsample the previously sampled audio in the `JukeboxMusicTokenConditioner`. res_convolution_multiplier (`int`, *optional*, defaults to 1): Multiplier used to scale the `hidden_dim` of the `JukeboxResConv1DBlock`.
404_5_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxpriorconfig
.md
Multiplier used to scale the `hidden_dim` of the `JukeboxResConv1DBlock`. res_dilation_cycle (`int`, *optional*): Dilation cycle used to define the `JukeboxMusicTokenConditioner`. Usually similar to the ones used in the corresponding level of the VQVAE. The first prior does not use it as it is not conditioned on upper level tokens. res_dilation_growth_rate (`int`, *optional*, defaults to 1): Dilation growth rate used between each convolutional block of the `JukeboxMusicTokenConditioner`
404_5_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxpriorconfig
.md
Dilation growth rate used between each convolutional block of the `JukeboxMusicTokenConditioner`. res_downs_t (`List[int]`, *optional*, defaults to `[3, 2, 2]`): Downsampling rates used in the audio conditioning network. res_strides_t (`List[int]`, *optional*, defaults to `[2, 2, 2]`): Striding used in the audio conditioning network. resid_dropout (`int`, *optional*, defaults to 0): Residual dropout used in the attention pattern. sampling_rate (`int`, *optional*, defaults to 44100):
404_5_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxpriorconfig
.md
Residual dropout used in the attention pattern. sampling_rate (`int`, *optional*, defaults to 44100): Sampling rate used for training. spread (`int`, *optional*): Spread used in the `summary_spread_attention` pattern timing_dims (`int`, *optional*, defaults to 64): Dimension of the timing embedding. zero_out (`bool`, *optional*, defaults to `False`): Whether or not to zero out convolution weights when initializing.
404_5_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxvqvaeconfig
.md
This is the configuration class to store the configuration of a [`JukeboxVQVAE`]. It is used to instantiate a `JukeboxVQVAE` according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the VQVAE from [openai/jukebox-1b-lyrics](https://huggingface.co/openai/jukebox-1b-lyrics) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
404_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxvqvaeconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: act_fn (`str`, *optional*, defaults to `"relu"`): Activation function of the model. nb_discrete_codes (`int`, *optional*, defaults to 2048): Number of codes of the VQVAE. commit (`float`, *optional*, defaults to 0.02): Commit loss multiplier. conv_input_shape (`int`, *optional*, defaults to 1): Number of audio channels.
404_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxvqvaeconfig
.md
Commit loss multiplier. conv_input_shape (`int`, *optional*, defaults to 1): Number of audio channels. conv_res_scale (`bool`, *optional*, defaults to `False`): Whether or not to scale the residuals of the `JukeboxResConv1DBlock`. embed_dim (`int`, *optional*, defaults to 64): Embedding dimension of the codebook vectors. hop_fraction (`List[float]`, *optional*, defaults to `[0.125, 0.5, 0.5]`): Fraction of non-intersecting window used when continuing the sampling process.
404_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxvqvaeconfig
.md
Fraction of non-intersecting window used when continuing the sampling process. levels (`int`, *optional*, defaults to 3): Number of hierarchical levels used in the VQVAE. lmu (`float`, *optional*, defaults to 0.99): Used in the codebook update, exponential moving average coefficient. For more detail refer to Appendix A.1 of the original [VQVAE paper](https://arxiv.org/pdf/1711.00937v2.pdf). multipliers (`List[int]`, *optional*, defaults to `[2, 1, 1]`):
404_6_3
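For context, the exponential moving average codebook update that `lmu` controls is, following Appendix A.1 of the VQ-VAE paper cited above (writing λ for `lmu`, n_i for the number of encoder outputs assigned to code i, and z_{i,j} for those outputs):

$$
N_i^{(t)} = \lambda\, N_i^{(t-1)} + (1-\lambda)\, n_i^{(t)}, \qquad
m_i^{(t)} = \lambda\, m_i^{(t-1)} + (1-\lambda) \sum_j z_{i,j}^{(t)}, \qquad
e_i^{(t)} = \frac{m_i^{(t)}}{N_i^{(t)}}
$$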
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxvqvaeconfig
.md
multipliers (`List[int]`, *optional*, defaults to `[2, 1, 1]`): Depth and width multipliers used for each level. Used on the `res_conv_width` and `res_conv_depth` res_conv_depth (`int`, *optional*, defaults to 4): Depth of the encoder and decoder block. If no `multipliers` are used, this is the same for each level. res_conv_width (`int`, *optional*, defaults to 32): Width of the encoder and decoder block. If no `multipliers` are used, this is the same for each level.
404_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxvqvaeconfig
.md
Width of the encoder and decoder block. If no `multipliers` are used, this is the same for each level. res_convolution_multiplier (`int`, *optional*, defaults to 1): Scaling factor of the hidden dimension used in the `JukeboxResConv1DBlock`. res_dilation_cycle (`int`, *optional*): Dilation cycle value used in the `JukeboxResnet`. If an int is used, each new Conv1 block will have a depth reduced by a power of `res_dilation_cycle`. res_dilation_growth_rate (`int`, *optional*, defaults to 3):
404_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxvqvaeconfig
.md
reduced by a power of `res_dilation_cycle`. res_dilation_growth_rate (`int`, *optional*, defaults to 3): Resnet dilation growth rate used in the VQVAE (dilation_growth_rate ** depth) res_downs_t (`List[int]`, *optional*, defaults to `[3, 2, 2]`): Downsampling rate for each level of the hierarchical VQ-VAE. res_strides_t (`List[int]`, *optional*, defaults to `[2, 2, 2]`): Stride used for each level of the hierarchical VQ-VAE. sample_length (`int`, *optional*, defaults to 1058304):
404_6_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxvqvaeconfig
.md
Stride used for each level of the hierarchical VQ-VAE. sample_length (`int`, *optional*, defaults to 1058304): Provides the max input shape of the VQVAE. Is used to compute the input shape of each level. init_scale (`float`, *optional*, defaults to 0.2): Initialization scale. zero_out (`bool`, *optional*, defaults to `False`): Whether or not to zero out convolution weights when initializing.
404_6_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxtokenizer
.md
Constructs a Jukebox tokenizer. Jukebox can be conditioned on 3 different inputs: - Artists, unique ids are associated with each artist from the provided dictionary. - Genres, unique ids are associated with each genre from the provided dictionary. - Lyrics, character-based tokenization. Must be initialized with the list of characters that are inside the vocabulary. This tokenizer does not require training. It should be able to process a different number of inputs:
404_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxtokenizer
.md
vocabulary. This tokenizer does not require training. It should be able to process a different number of inputs, as the conditioning of the model can be done on the three different queries. If `None` is provided, default values will be used, depending on the number of genres on which the model should be conditioned (`n_genres`). ```python >>> from transformers import JukeboxTokenizer
404_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxtokenizer
.md
>>> tokenizer = JukeboxTokenizer.from_pretrained("openai/jukebox-1b-lyrics") >>> tokenizer("Alan Jackson", "Country Rock", "old town road")["input_ids"] [tensor([[ 0, 0, 0, 6785, 546, 41, 38, 30, 76, 46, 41, 49, 40, 76, 44, 41, 27, 30]]), tensor([[ 0, 0, 0, 145, 0]]), tensor([[ 0, 0, 0, 145, 0]])] ``` You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer or when you
404_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxtokenizer
.md
``` You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer or when you call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance. <Tip> If nothing is provided, the genres and the artist will either be selected randomly or set to None </Tip> This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to:
404_7_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxtokenizer
.md
</Tip> This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. However, the code does not allow that and only supports composing from various genres. Args: artists_file (`str`): Path to the vocabulary file which contains a mapping between artists and ids. The default file supports both "v2" and "v3" genres_file (`str`):
404_7_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxtokenizer
.md
both "v2" and "v3" genres_file (`str`): Path to the vocabulary file which contain a mapping between genres and ids. lyrics_file (`str`): Path to the vocabulary file which contains the accepted characters for the lyrics tokenization. version (`List[str]`, `optional`, default to `["v3", "v2", "v2"]`) : List of the tokenizer versions. The `5b-lyrics`'s top level prior model was trained using `v3` instead of `v2`. n_genres (`int`, `optional`, defaults to 1): Maximum number of genres to use for composition.
404_7_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/jukebox.md
https://huggingface.co/docs/transformers/en/model_doc/jukebox/#jukeboxtokenizer
.md
`v2`. n_genres (`int`, *optional*, defaults to 1): Maximum number of genres to use for composition. max_n_lyric_tokens (`int`, *optional*, defaults to 512): Maximum number of lyric tokens to keep. unk_token (`str`, *optional*, defaults to `"<|endoftext|>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. Methods: save_vocabulary
404_7_6
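A short sketch of the `save_vocabulary` method listed above; the target directory is illustrative:

```python
>>> import os
>>> from transformers import JukeboxTokenizer

>>> tokenizer = JukeboxTokenizer.from_pretrained("openai/jukebox-1b-lyrics")
>>> os.makedirs("jukebox-tokenizer", exist_ok=True)
>>> # Writes the artists, genres and lyrics vocabulary files into the directory and returns their paths.
>>> tokenizer.save_vocabulary("jukebox-tokenizer")
```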