## PhimoeForSequenceClassification

The sequence classification head uses the last token in order to do the classification, as other causal language models do. If a
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
each row of the batch).
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`PhimoeConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
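As a concrete illustration of the padding behavior described above, here is a minimal sketch. The checkpoint name and the two-label setup are assumptions for illustration only, and the classification head is newly initialized, so the logits are untrained:

```python
import torch
from transformers import AutoTokenizer, PhimoeForSequenceClassification

# Assumed checkpoint and label count, for illustration only.
model_id = "microsoft/Phi-3.5-MoE-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = PhimoeForSequenceClassification.from_pretrained(model_id, num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id  # so padded positions are skipped

# The shorter sequence is padded; its logits are read from its last non-padding token.
inputs = tokenizer(
    ["A short sentence.", "A somewhat longer example sentence for comparison."],
    padding=True,
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch_size, num_labels)
```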
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 121_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speecht5.md | https://huggingface.co/docs/transformers/en/model_doc/speecht5/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# SpeechT5

## Overview

The SpeechT5 model was proposed in [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.
The abstract from the paper is the following:
*Motivated by the success of T5 (Text-To-Text Transfer Transformer) in pre-trained natural language processing models, we propose a unified-modal SpeechT5 framework that explores the encoder-decoder pre-training for self-supervised speech/text representation learning. The SpeechT5 framework consists of a shared encoder-decoder network and six modal-specific (speech/text) pre/post-nets. After preprocessing the input speech/text through the pre-nets, the shared encoder-decoder network models the sequence-to-sequence transformation, and then the post-nets generate the output in the speech/text modality based on the output of the decoder. Leveraging large-scale unlabeled speech and text data, we pre-train SpeechT5 to learn a unified-modal representation, hoping to improve the modeling capability for both speech and text. To align the textual and speech information into this unified semantic space, we propose a cross-modal vector quantization approach that randomly mixes up speech/text states with latent units as the interface between encoder and decoder. Extensive evaluations show the superiority of the proposed SpeechT5 framework on a wide variety of spoken language processing tasks, including automatic speech recognition, speech synthesis, speech translation, voice conversion, speech enhancement, and speaker identification.*
This model was contributed by [Matthijs](https://huggingface.co/Matthijs). The original code can be found [here](https://github.com/microsoft/SpeechT5).
## SpeechT5Config

This is the configuration class to store the configuration of a [`SpeechT5Model`]. It is used to instantiate a
SpeechT5 model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the SpeechT5
[microsoft/speecht5_asr](https://huggingface.co/microsoft/speecht5_asr) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 81):
Vocabulary size of the SpeechT5 model. Defines the number of different tokens that can be represented by
the `input_ids` passed to the forward method of [`SpeechT5Model`].
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
encoder_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
encoder_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
encoder_ffn_dim (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. | 121_2_2 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/speecht5.md | https://huggingface.co/docs/transformers/en/model_doc/speecht5/#speecht5config | .md | Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
encoder_layerdrop (`float`, *optional*, defaults to 0.1):
The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
decoder_layers (`int`, *optional*, defaults to 6):
Number of hidden layers in the Transformer decoder.
decoder_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer decoder.
decoder_layerdrop (`float`, *optional*, defaults to 0.1):
The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"selu"` and `"gelu_new"` are supported.
positional_dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for the text position encoding layers.
hidden_dropout (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
activation_dropout (`float`, *optional*, defaults to 0.1):
The dropout ratio for activations inside the fully connected layer.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-5):
The epsilon used by the layer normalization layers.
scale_embedding (`bool`, *optional*, defaults to `False`):
Scale embeddings by dividing by sqrt(d_model).
feat_extract_norm (`str`, *optional*, defaults to `"group"`):
The norm to be applied to 1D convolutional layers in the speech encoder pre-net. One of `"group"` for group
normalization of only the first 1D convolutional layer or `"layer"` for layer normalization of all 1D
convolutional layers.
feat_proj_dropout (`float`, *optional*, defaults to 0.0):
The dropout probability for the output of the speech encoder pre-net.
feat_extract_activation (`str`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the 1D convolutional layers of the feature
extractor. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
conv_dim (`Tuple[int]` or `List[int]`, *optional*, defaults to `(512, 512, 512, 512, 512, 512, 512)`):
A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the
speech encoder pre-net. The length of *conv_dim* defines the number of 1D convolutional layers.
conv_stride (`Tuple[int]` or `List[int]`, *optional*, defaults to `(5, 2, 2, 2, 2, 2, 2)`):
A tuple of integers defining the stride of each 1D convolutional layer in the speech encoder pre-net. The
length of *conv_stride* defines the number of convolutional layers and has to match the length of
*conv_dim*.
conv_kernel (`Tuple[int]` or `List[int]`, *optional*, defaults to `(10, 3, 3, 3, 3, 3, 3)`):
A tuple of integers defining the kernel size of each 1D convolutional layer in the speech encoder pre-net.
The length of *conv_kernel* defines the number of convolutional layers and has to match the length of
*conv_dim*.
conv_bias (`bool`, *optional*, defaults to `False`):
Whether the 1D convolutional layers have a bias.
num_conv_pos_embeddings (`int`, *optional*, defaults to 128):
Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional
embeddings layer.
num_conv_pos_embedding_groups (`int`, *optional*, defaults to 16):
Number of groups of 1D convolutional positional embeddings layer.
apply_spec_augment (`bool`, *optional*, defaults to `True`):
Whether to apply *SpecAugment* data augmentation to the outputs of the speech encoder pre-net. For
reference see [SpecAugment: A Simple Data Augmentation Method for Automatic Speech
Recognition](https://arxiv.org/abs/1904.08779).
mask_time_prob (`float`, *optional*, defaults to 0.05):
Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking
procedure generates `mask_time_prob*len(time_axis)/mask_time_length` independent masks over the axis. If
reasoning from the probability of each feature vector to be chosen as the start of the vector span to be
masked, *mask_time_prob* should be `prob_vector_start*mask_time_length`. Note that overlap may decrease the
actual percentage of masked vectors. This is only relevant if `apply_spec_augment` is `True`.
mask_time_length (`int`, *optional*, defaults to 10):
Length of vector span along the time axis.
mask_time_min_masks (`int`, *optional*, defaults to 2):
The minimum number of masks of length `mask_time_length` generated along the time axis, each time step,
irrespective of `mask_time_prob`. Only relevant if `mask_time_prob*len(time_axis)/mask_time_length <
mask_time_min_masks`.
mask_feature_prob (`float`, *optional*, defaults to 0.0):
Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The
masking procedure generates `mask_feature_prob*len(feature_axis)/mask_feature_length` independent masks over
the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector
span to be masked, *mask_feature_prob* should be `prob_vector_start*mask_feature_length`. Note that overlap
may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment` is
`True`.
mask_feature_length (`int`, *optional*, defaults to 10):
Length of vector span along the feature axis.
mask_feature_min_masks (`int`, *optional*, defaults to 0):
The minimum number of masks of length `mask_feature_length` generated along the feature axis, each time
step, irrespective of `mask_feature_prob`. Only relevant if
`mask_feature_prob*len(feature_axis)/mask_feature_length < mask_feature_min_masks`.
num_mel_bins (`int`, *optional*, defaults to 80):
Number of mel features used per input frame. Used by the speech decoder pre-net. Should correspond to
the value used in the [`SpeechT5Processor`] class.
speech_decoder_prenet_layers (`int`, *optional*, defaults to 2):
Number of layers in the speech decoder pre-net.
speech_decoder_prenet_units (`int`, *optional*, defaults to 256):
Dimensionality of the layers in the speech decoder pre-net.
speech_decoder_prenet_dropout (`float`, *optional*, defaults to 0.5):
The dropout probability for the speech decoder pre-net layers.
speaker_embedding_dim (`int`, *optional*, defaults to 512):
Dimensionality of the *XVector* embedding vectors.
speech_decoder_postnet_layers (`int`, *optional*, defaults to 5):
Number of layers in the speech decoder post-net.
speech_decoder_postnet_units (`int`, *optional*, defaults to 256):
Dimensionality of the layers in the speech decoder post-net.
speech_decoder_postnet_kernel (`int`, *optional*, defaults to 5):
Kernel size of the 1D convolutional layers in the speech decoder post-net.
speech_decoder_postnet_dropout (`float`, *optional*, defaults to 0.5):
The dropout probability for the speech decoder post-net layers.
reduction_factor (`int`, *optional*, defaults to 2):
Spectrogram length reduction factor for the speech decoder inputs.
max_speech_positions (`int`, *optional*, defaults to 4000):
The maximum sequence length of speech features that this model might ever be used with.
max_text_positions (`int`, *optional*, defaults to 450):
The maximum sequence length of text features that this model might ever be used with.
encoder_max_relative_position (`int`, *optional*, defaults to 160):
Maximum distance for relative position embedding in the encoder.
use_guided_attention_loss (`bool`, *optional*, defaults to `True`):
Whether to apply guided attention loss while training the TTS model.
guided_attention_loss_num_heads (`int`, *optional*, defaults to 2):
Number of attention heads the guided attention loss will be applied to. Use -1 to apply this loss to all
attention heads.
guided_attention_loss_sigma (`float`, *optional*, defaults to 0.4):
Standard deviation for guided attention loss.
guided_attention_loss_scale (`float`, *optional*, defaults to 10.0):
Scaling coefficient for guided attention loss (also known as lambda).
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models).
Example:
```python
>>> from transformers import SpeechT5Model, SpeechT5Config

>>> # Initializing a "microsoft/speecht5_asr" style configuration
>>> configuration = SpeechT5Config()
>>> # Initializing a model (with random weights) from the "microsoft/speecht5_asr" style configuration
>>> model = SpeechT5Model(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
## SpeechT5HifiGanConfig

This is the configuration class to store the configuration of a [`SpeechT5HifiGan`] model. It is used to instantiate
a SpeechT5 HiFi-GAN vocoder model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the SpeechT5
[microsoft/speecht5_hifigan](https://huggingface.co/microsoft/speecht5_hifigan) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
model_in_dim (`int`, *optional*, defaults to 80):
The number of frequency bins in the input log-mel spectrogram.
sampling_rate (`int`, *optional*, defaults to 16000):
The sampling rate at which the output audio will be generated, expressed in hertz (Hz).
upsample_initial_channel (`int`, *optional*, defaults to 512):
The number of input channels into the upsampling network.
upsample_rates (`Tuple[int]` or `List[int]`, *optional*, defaults to `[4, 4, 4, 4]`):
A tuple of integers defining the stride of each 1D convolutional layer in the upsampling network. The
length of *upsample_rates* defines the number of convolutional layers and has to match the length of
*upsample_kernel_sizes*.
upsample_kernel_sizes (`Tuple[int]` or `List[int]`, *optional*, defaults to `[8, 8, 8, 8]`):
A tuple of integers defining the kernel size of each 1D convolutional layer in the upsampling network. The
length of *upsample_kernel_sizes* defines the number of convolutional layers and has to match the length of
*upsample_rates*.
resblock_kernel_sizes (`Tuple[int]` or `List[int]`, *optional*, defaults to `[3, 7, 11]`):
A tuple of integers defining the kernel sizes of the 1D convolutional layers in the multi-receptive field
fusion (MRF) module.
resblock_dilation_sizes (`Tuple[Tuple[int]]` or `List[List[int]]`, *optional*, defaults to `[[1, 3, 5], [1, 3, 5], [1, 3, 5]]`):
A nested tuple of integers defining the dilation rates of the dilated 1D convolutional layers in the
multi-receptive field fusion (MRF) module.
initializer_range (`float`, *optional*, defaults to 0.01):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
leaky_relu_slope (`float`, *optional*, defaults to 0.1):
The angle of the negative slope used by the leaky ReLU activation.
normalize_before (`bool`, *optional*, defaults to `True`):
Whether or not to normalize the spectrogram before vocoding using the vocoder's learned mean and variance.
Example:
```python
>>> from transformers import SpeechT5HifiGan, SpeechT5HifiGanConfig

>>> # Initializing a "microsoft/speecht5_hifigan" style configuration
>>> configuration = SpeechT5HifiGanConfig()
>>> # Initializing a model (with random weights) from the "microsoft/speecht5_hifigan" style configuration
>>> model = SpeechT5HifiGan(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
## SpeechT5Tokenizer

Construct a SpeechT5 tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece).
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
[SentencePiece](https://github.com/google/sentencepiece) file (generally has a *.spm* extension) that
contains the vocabulary necessary to instantiate a tokenizer.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning-of-sequence token.
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end-of-sequence token.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
normalize (`bool`, *optional*, defaults to `False`):
Whether to convert numeric quantities in the text to their spelled-out English counterparts.
sp_model_kwargs (`dict`, *optional*):
Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for
SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things,
to set:
- `enable_sampling`: Enable subword regularization.
- `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout.
- `nbest_size = {0,1}`: No sampling is performed.
- `nbest_size > 1`: samples from the nbest_size results.
- `nbest_size < 0`: assuming that nbest_size is infinite and samples from all hypotheses (lattice)
using forward-filtering-and-backward-sampling algorithm.
- `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
Attributes:
sp_model (`SentencePieceProcessor`):
The *SentencePiece* processor that is used for every conversion (string, tokens and IDs).
Methods: __call__
- save_vocabulary
- decode
- batch_decode
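A minimal usage sketch (assuming the tokenizer bundled with the `microsoft/speecht5_tts` checkpoint):

```python
from transformers import SpeechT5Tokenizer

tokenizer = SpeechT5Tokenizer.from_pretrained("microsoft/speecht5_tts")

# Encode text to token IDs and decode them back.
ids = tokenizer("Hello, world!").input_ids
print(ids)
print(tokenizer.decode(ids))
```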
## SpeechT5FeatureExtractor

Constructs a SpeechT5 feature extractor.
This class can pre-process a raw speech signal by (optionally) normalizing to zero-mean unit-variance, for use by
the SpeechT5 speech encoder prenet.
This class can also extract log-mel filter bank features from raw speech, for use by the SpeechT5 speech decoder
prenet.
This feature extractor inherits from [`~feature_extraction_sequence_utils.SequenceFeatureExtractor`] which contains
most of the main methods. Users should refer to this superclass for more information regarding those methods.
Args:
feature_size (`int`, *optional*, defaults to 1):
The feature dimension of the extracted features.
sampling_rate (`int`, *optional*, defaults to 16000):
The sampling rate at which the audio files should be digitized, expressed in hertz (Hz).
padding_value (`float`, *optional*, defaults to 0.0):
The value that is used to fill the padding values.
do_normalize (`bool`, *optional*, defaults to `False`):
Whether or not to zero-mean unit-variance normalize the input. Normalizing can help to significantly
improve the performance for some models.
num_mel_bins (`int`, *optional*, defaults to 80):
The number of mel-frequency bins in the extracted spectrogram features.
hop_length (`int`, *optional*, defaults to 16):
Number of ms between windows. Otherwise referred to as "shift" in many papers.
win_length (`int`, *optional*, defaults to 64):
Number of ms per window.
win_function (`str`, *optional*, defaults to `"hann_window"`):
Name for the window function used for windowing, must be accessible via `torch.{win_function}`.
frame_signal_scale (`float`, *optional*, defaults to 1.0):
Constant multiplied in creating the frames before applying DFT. This argument is deprecated.
fmin (`float`, *optional*, defaults to 80):
Minimum mel frequency in Hz.
fmax (`float`, *optional*, defaults to 7600):
Maximum mel frequency in Hz.
mel_floor (`float`, *optional*, defaults to 1e-10):
Minimum value of mel frequency banks.
reduction_factor (`int`, *optional*, defaults to 2):
Spectrogram length reduction factor. This argument is deprecated.
return_attention_mask (`bool`, *optional*, defaults to `True`):
Whether or not [`~SpeechT5FeatureExtractor.__call__`] should return `attention_mask`.
Methods: __call__
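A minimal sketch of both pre-processing paths, assuming the `microsoft/speecht5_asr` checkpoint; the silent placeholder waveform stands in for real 16 kHz speech:

```python
import numpy as np
from transformers import SpeechT5FeatureExtractor

feature_extractor = SpeechT5FeatureExtractor.from_pretrained("microsoft/speecht5_asr")

# Placeholder: one second of silence at 16 kHz; substitute real speech.
waveform = np.zeros(16000, dtype=np.float32)

# `audio` produces waveform features for the speech encoder prenet...
inputs = feature_extractor(audio=waveform, sampling_rate=16000, return_tensors="pt")
print(inputs["input_values"].shape)

# ...while `audio_target` produces log-mel features for the speech decoder prenet.
targets = feature_extractor(audio_target=waveform, sampling_rate=16000, return_tensors="pt")
print(targets["input_values"].shape)  # (batch, frames, num_mel_bins)
```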
## SpeechT5Processor

Constructs a SpeechT5 processor which wraps a feature extractor and a tokenizer into a single processor.
[`SpeechT5Processor`] offers all the functionalities of [`SpeechT5FeatureExtractor`] and [`SpeechT5Tokenizer`]. See
the docstring of [`~SpeechT5Processor.__call__`] and [`~SpeechT5Processor.decode`] for more information.
Args:
feature_extractor (`SpeechT5FeatureExtractor`):
An instance of [`SpeechT5FeatureExtractor`]. The feature extractor is a required input.
tokenizer (`SpeechT5Tokenizer`):
An instance of [`SpeechT5Tokenizer`]. The tokenizer is a required input.
Methods: __call__
- pad
- from_pretrained
- save_pretrained
- batch_decode
- decode
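A minimal sketch of the two entry points (placeholder audio; the `microsoft/speecht5_asr` checkpoint is one of several that bundle this processor):

```python
import numpy as np
from transformers import SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_asr")

# Text goes through the wrapped tokenizer...
text_inputs = processor(text="Hello, world!", return_tensors="pt")

# ...and audio goes through the wrapped feature extractor.
waveform = np.zeros(16000, dtype=np.float32)  # placeholder 16 kHz audio
audio_inputs = processor(audio=waveform, sampling_rate=16000, return_tensors="pt")
```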
## SpeechT5Model

The bare SpeechT5 Encoder-Decoder Model outputting raw hidden-states without any specific pre- or post-nets.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`SpeechT5Config`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
encoder ([`SpeechT5EncoderWithSpeechPrenet`] or [`SpeechT5EncoderWithTextPrenet`] or `None`):
The Transformer encoder module that applies the appropriate speech or text encoder prenet. If `None`,
[`SpeechT5EncoderWithoutPrenet`] will be used and the `input_values` are assumed to be hidden states.
decoder ([`SpeechT5DecoderWithSpeechPrenet`] or [`SpeechT5DecoderWithTextPrenet`] or `None`):
The Transformer decoder module that applies the appropriate speech or text decoder prenet. If `None`,
[`SpeechT5DecoderWithoutPrenet`] will be used and the `decoder_input_values` are assumed to be hidden
states.
Methods: forward
## SpeechT5ForSpeechToText

SpeechT5 Model with a speech encoder and a text decoder.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`SpeechT5Config`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
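A minimal transcription sketch, assuming the `microsoft/speecht5_asr` checkpoint; the placeholder waveform stands in for real 16 kHz speech:

```python
import numpy as np
from transformers import SpeechT5Processor, SpeechT5ForSpeechToText

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_asr")
model = SpeechT5ForSpeechToText.from_pretrained("microsoft/speecht5_asr")

# Placeholder: one second of silence; substitute a real 16 kHz waveform.
waveform = np.zeros(16000, dtype=np.float32)
inputs = processor(audio=waveform, sampling_rate=16000, return_tensors="pt")

# Autoregressively decode text token IDs and turn them back into a string.
predicted_ids = model.generate(**inputs, max_length=100)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
print(transcription)
```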
## SpeechT5ForTextToSpeech

SpeechT5 Model with a text encoder and a speech decoder.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`SpeechT5Config`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
- generate
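A minimal synthesis sketch, assuming the `microsoft/speecht5_tts` checkpoint; the zero speaker embedding is a placeholder, and in practice you would use a real 512-dimensional x-vector:

```python
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello, my dog is cute.", return_tensors="pt")

# Placeholder x-vector; substitute a real speaker embedding for natural output.
speaker_embeddings = torch.zeros((1, 512))

# Generate a spectrogram and vocode it to a 16 kHz waveform in one call.
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
print(speech.shape)
```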
## SpeechT5ForSpeechToSpeech

SpeechT5 Model with a speech encoder and a speech decoder.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`SpeechT5Config`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
- generate_speech
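A minimal voice-conversion sketch, assuming the `microsoft/speecht5_vc` checkpoint; both the source waveform and the target-speaker embedding are placeholders:

```python
import numpy as np
import torch
from transformers import SpeechT5Processor, SpeechT5ForSpeechToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_vc")
model = SpeechT5ForSpeechToSpeech.from_pretrained("microsoft/speecht5_vc")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Placeholder source speech; substitute a real 16 kHz waveform.
waveform = np.zeros(16000, dtype=np.float32)
inputs = processor(audio=waveform, sampling_rate=16000, return_tensors="pt")

# Placeholder x-vector for the target voice.
speaker_embeddings = torch.zeros((1, 512))

speech = model.generate_speech(inputs["input_values"], speaker_embeddings, vocoder=vocoder)
```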
## SpeechT5HifiGan

HiFi-GAN vocoder.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`SpeechT5HifiGanConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
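A minimal standalone sketch; the random spectrogram is a placeholder with the expected `(sequence_length, num_mel_bins)` shape, so the resulting audio is noise:

```python
import torch
from transformers import SpeechT5HifiGan

vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Placeholder log-mel spectrogram: (sequence_length, num_mel_bins).
spectrogram = torch.randn(140, 80)
with torch.no_grad():
    waveform = vocoder(spectrogram)
print(waveform.shape)  # 1D waveform at config.sampling_rate
```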
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 122_0_0 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/mobilenet_v1.md | https://huggingface.co/docs/transformers/en/model_doc/mobilenet_v1/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# MobileNet V1

## Overview

The MobileNet model was proposed in [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam.
The abstract from the paper is the following:
*We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization.*
This model was contributed by [matthijs](https://huggingface.co/Matthijs). The original code and weights can be found [here](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md).
## Usage tips

- The checkpoints are named **mobilenet\_v1\_*depth*\_*size***, for example **mobilenet\_v1\_1.0\_224**, where **1.0** is the depth multiplier (sometimes also referred to as "alpha" or the width multiplier) and **224** is the resolution of the input images the model was trained on.
- Even though the checkpoint is trained on images of a specific size, the model will work on images of any size. The smallest supported image size is 32x32.
- One can use [`MobileNetV1ImageProcessor`] to prepare images for the model.
- The available image classification checkpoints are pre-trained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k) (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). However, the model predicts 1001 classes: the 1000 classes from ImageNet plus an extra “background” class (index 0).
- The original TensorFlow checkpoints use different padding rules than PyTorch, requiring the model to determine the padding amount at inference time, since this depends on the input image size. To use native PyTorch padding behavior, create a [`MobileNetV1Config`] with `tf_padding = False`, as in the sketch below.
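For example, a minimal sketch of opting out of the TensorFlow padding rules (the model built this way has random weights; it only illustrates the config flag):

```python
from transformers import MobileNetV1Config, MobileNetV1Model

# Use native PyTorch padding instead of the TensorFlow-style rules
# baked into the original checkpoints.
config = MobileNetV1Config(tf_padding=False)
model = MobileNetV1Model(config)
```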
Unsupported features:
- The [`MobileNetV1Model`] outputs a globally pooled version of the last hidden state. In the original model it is possible to use a 7x7 average pooling layer with stride 2 instead of global pooling. For larger inputs, this gives a pooled output that is larger than 1x1 pixel. The HuggingFace implementation does not support this.
- It is currently not possible to specify an `output_stride`. For smaller output strides, the original model invokes dilated convolution to prevent the spatial resolution from being reduced further. The output stride of the HuggingFace model is always 32.
- The original TensorFlow checkpoints include quantized models. We do not support these models as they include additional "FakeQuantization" operations to unquantize the weights.
- It's common to extract the output from the pointwise layers at indices 5, 11, 12, 13 for downstream purposes. Using `output_hidden_states=True` returns the output from all intermediate layers. There is currently no way to limit this to specific layers.
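To make the classification pipeline concrete, here is a minimal sketch; the checkpoint name and the test image URL are illustrative, and remember that index 0 is the extra background class:

```python
import torch
import requests
from PIL import Image
from transformers import MobileNetV1ImageProcessor, MobileNetV1ForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = MobileNetV1ImageProcessor.from_pretrained("google/mobilenet_v1_1.0_224")
model = MobileNetV1ForImageClassification.from_pretrained("google/mobilenet_v1_1.0_224")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Index 0 is the extra "background" class; the remaining 1000 are ImageNet classes.
predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```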
## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with MobileNetV1.
<PipelineTag pipeline="image-classification"/>
- [`MobileNetV1ForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
- See also: [Image classification task guide](../tasks/image_classification)

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
## MobileNetV1Config

This is the configuration class to store the configuration of a [`MobileNetV1Model`]. It is used to instantiate a
MobileNetV1 model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the MobileNetV1
[google/mobilenet_v1_1.0_224](https://huggingface.co/google/mobilenet_v1_1.0_224) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
num_channels (`int`, *optional*, defaults to 3):
The number of input channels.
image_size (`int`, *optional*, defaults to 224):
The size (resolution) of each image.
depth_multiplier (`float`, *optional*, defaults to 1.0):
Shrinks or expands the number of channels in each layer. Default is 1.0, which starts the network with 32
channels. This is sometimes also called "alpha" or "width multiplier".
min_depth (`int`, *optional*, defaults to 8):
All layers will have at least this many channels.
hidden_act (`str` or `function`, *optional*, defaults to `"relu6"`):
The non-linear activation function (function or string) in the Transformer encoder and convolution layers.
tf_padding (`bool`, *optional*, defaults to `True`):
Whether to use TensorFlow padding rules on the convolution layers.
classifier_dropout_prob (`float`, *optional*, defaults to 0.999):
The dropout ratio for attached classifiers.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 0.001):
The epsilon used by the layer normalization layers.
Example:
```python
>>> from transformers import MobileNetV1Config, MobileNetV1Model

>>> # Initializing a "mobilenet_v1_1.0_224" style configuration
>>> configuration = MobileNetV1Config()
>>> # Initializing a model from the "mobilenet_v1_1.0_224" style configuration
>>> model = MobileNetV1Model(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
## MobileNetV1FeatureExtractor

No docstring available for MobileNetV1FeatureExtractor.
Methods: preprocess
## MobileNetV1ImageProcessor

Constructs a MobileNetV1 image processor.
Args:
do_resize (`bool`, *optional*, defaults to `True`):
Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by
`do_resize` in the `preprocess` method.
size (`Dict[str, int]`, *optional*, defaults to `{"shortest_edge": 256}`):
Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with
the longest edge resized to keep the input aspect ratio. Can be overridden by `size` in the `preprocess`
method.
resample (`PILImageResampling`, *optional*, defaults to `PILImageResampling.BILINEAR`):
Resampling filter to use if resizing the image. Can be overridden by the `resample` parameter in the
`preprocess` method.
do_center_crop (`bool`, *optional*, defaults to `True`):