Dataset columns:
- source: string, 470 distinct values
- url: string, 49–167 characters
- file_type: string, 1 distinct value
- chunk: string, 1–512 characters
- chunk_id: string, 5–9 characters
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
https://huggingface.co/docs/transformers/en/model_doc/deit/#deitconfig
.md
layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. image_size (`int`, *optional*, defaults to 224): The size (resolution) of each image. patch_size (`int`, *optional*, defaults to 16): The size (resolution) of each patch. num_channels (`int`, *optional*, defaults to 3): The number of input channels. qkv_bias (`bool`, *optional*, defaults to `True`): Whether to add a bias to the queries, keys and values.
313_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
https://huggingface.co/docs/transformers/en/model_doc/deit/#deitconfig
.md
qkv_bias (`bool`, *optional*, defaults to `True`): Whether to add a bias to the queries, keys and values. encoder_stride (`int`, *optional*, defaults to 16): Factor to increase the spatial resolution by in the decoder head for masked image modeling. Example: ```python >>> from transformers import DeiTConfig, DeiTModel
313_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
https://huggingface.co/docs/transformers/en/model_doc/deit/#deitconfig
.md
>>> # Initializing a DeiT deit-base-distilled-patch16-224 style configuration >>> configuration = DeiTConfig() >>> # Initializing a model (with random weights) from the deit-base-distilled-patch16-224 style configuration >>> model = DeiTModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
313_5_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
https://huggingface.co/docs/transformers/en/model_doc/deit/#deitfeatureextractor
.md
No docstring available for DeiTFeatureExtractor Methods: __call__
313_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
https://huggingface.co/docs/transformers/en/model_doc/deit/#deitimageprocessor
.md
Constructs a DeiT image processor. Args: do_resize (`bool`, *optional*, defaults to `True`): Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by `do_resize` in `preprocess`. size (`Dict[str, int]`, *optional*, defaults to `{"height": 256, "width": 256}`): Size of the image after `resize`. Can be overridden by `size` in `preprocess`. resample (`PILImageResampling` filter, *optional*, defaults to `Resampling.BICUBIC`):
313_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
https://huggingface.co/docs/transformers/en/model_doc/deit/#deitimageprocessor
.md
resample (`PILImageResampling` filter, *optional*, defaults to `Resampling.BICUBIC`): Resampling filter to use if resizing the image. Can be overridden by `resample` in `preprocess`. do_center_crop (`bool`, *optional*, defaults to `True`): Whether to center crop the image. If the input size is smaller than `crop_size` along any edge, the image is padded with 0's and then center cropped. Can be overridden by `do_center_crop` in `preprocess`.
313_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
https://huggingface.co/docs/transformers/en/model_doc/deit/#deitimageprocessor
.md
is padded with 0's and then center cropped. Can be overridden by `do_center_crop` in `preprocess`. crop_size (`Dict[str, int]`, *optional*, defaults to `{"height": 224, "width": 224}`): Desired output size when applying center-cropping. Can be overridden by `crop_size` in `preprocess`. rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): Scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the `preprocess` method.
313_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
https://huggingface.co/docs/transformers/en/model_doc/deit/#deitimageprocessor
.md
Scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the `preprocess` method. do_rescale (`bool`, *optional*, defaults to `True`): Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the `do_rescale` parameter in the `preprocess` method. do_normalize (`bool`, *optional*, defaults to `True`): Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess` method.
313_7_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
https://huggingface.co/docs/transformers/en/model_doc/deit/#deitimageprocessor
.md
Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess` method. image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`): Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method. image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`):
313_7_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
https://huggingface.co/docs/transformers/en/model_doc/deit/#deitimageprocessor
.md
image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`): Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method. Methods: preprocess <frameworkcontent> <pt>
313_7_5
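As a quick illustration of the preprocessing pipeline described in the `DeiTImageProcessor` docstring above (resize to 256×256, center crop to 224×224, rescale and normalize), here is a minimal sketch; the checkpoint name and the sample image URL are assumptions commonly used in the Transformers docs, not part of the original text.

```python
>>> from transformers import DeiTImageProcessor
>>> from PIL import Image
>>> import requests

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = DeiTImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224")
>>> inputs = image_processor(images=image, return_tensors="pt")
>>> # resized to 256x256, center cropped to 224x224, rescaled by 1/255 and normalized
>>> list(inputs["pixel_values"].shape)
[1, 3, 224, 224]
```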
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
https://huggingface.co/docs/transformers/en/model_doc/deit/#deitmodel
.md
The bare DeiT Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`DeiTConfig`]): Model configuration class with all the parameters of the model.
313_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
https://huggingface.co/docs/transformers/en/model_doc/deit/#deitmodel
.md
behavior. Parameters: config ([`DeiTConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
313_8_1
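A minimal forward-pass sketch for the bare `DeiTModel` described above, assuming the `facebook/deit-base-distilled-patch16-224` checkpoint; the expected sequence length (196 patches plus the [CLS] and distillation tokens) and hidden size follow from that base configuration.

```python
>>> import torch
>>> from PIL import Image
>>> import requests
>>> from transformers import AutoImageProcessor, DeiTModel

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224")
>>> model = DeiTModel.from_pretrained("facebook/deit-base-distilled-patch16-224")

>>> inputs = image_processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # 196 patch tokens + [CLS] + distillation token, hidden size 768 for the base model
>>> list(outputs.last_hidden_state.shape)
[1, 198, 768]
```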
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
https://huggingface.co/docs/transformers/en/model_doc/deit/#deitformaskedimagemodeling
.md
DeiT Model with a decoder on top for masked image modeling, as proposed in [SimMIM](https://arxiv.org/abs/2111.09886). <Tip> Note that we provide a script to pre-train this model on custom data in our [examples directory](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining). </Tip> This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
313_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
https://huggingface.co/docs/transformers/en/model_doc/deit/#deitformaskedimagemodeling
.md
</Tip> This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`DeiTConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
313_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
https://huggingface.co/docs/transformers/en/model_doc/deit/#deitformaskedimagemodeling
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
313_9_2
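A minimal sketch of masked image modeling with `DeiTForMaskedImageModeling`: a random boolean patch mask is passed via `bool_masked_pos`, and the model returns a reconstruction of the pixel values. The checkpoint name and random mask are assumptions for illustration; the decoder head is newly initialized when loading a classification checkpoint.

```python
>>> import torch
>>> from PIL import Image
>>> import requests
>>> from transformers import AutoImageProcessor, DeiTForMaskedImageModeling

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224")
>>> model = DeiTForMaskedImageModeling.from_pretrained("facebook/deit-base-distilled-patch16-224")

>>> pixel_values = image_processor(images=image, return_tensors="pt").pixel_values
>>> num_patches = (model.config.image_size // model.config.patch_size) ** 2
>>> # randomly mask roughly half of the patches
>>> bool_masked_pos = torch.randint(low=0, high=2, size=(1, num_patches)).bool()

>>> outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
>>> loss, reconstruction = outputs.loss, outputs.reconstruction
>>> list(reconstruction.shape)
[1, 3, 224, 224]
```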
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
https://huggingface.co/docs/transformers/en/model_doc/deit/#deitforimageclassification
.md
DeiT Model transformer with an image classification head on top (a linear layer on top of the final hidden state of the [CLS] token) e.g. for ImageNet. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`DeiTConfig`]): Model configuration class with all the parameters of the model.
313_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
https://huggingface.co/docs/transformers/en/model_doc/deit/#deitforimageclassification
.md
behavior. Parameters: config ([`DeiTConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
313_10_1
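A short classification sketch for `DeiTForImageClassification`, assuming the distilled base checkpoint; note that loading a distilled checkpoint into this single-head model leaves the distillation classifier unused, so a weight-initialization warning is expected.

```python
>>> import torch
>>> from PIL import Image
>>> import requests
>>> from transformers import AutoImageProcessor, DeiTForImageClassification

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224")
>>> model = DeiTForImageClassification.from_pretrained("facebook/deit-base-distilled-patch16-224")

>>> inputs = image_processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_idx = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_class_idx])
```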
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
https://huggingface.co/docs/transformers/en/model_doc/deit/#deitforimageclassificationwithteacher
.md
DeiT Model transformer with image classification heads on top (a linear layer on top of the final hidden state of the [CLS] token and a linear layer on top of the final hidden state of the distillation token) e.g. for ImageNet. Warning: this model supports inference only. Fine-tuning with distillation (i.e. with a teacher) is not yet supported. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
313_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
https://huggingface.co/docs/transformers/en/model_doc/deit/#deitforimageclassificationwithteacher
.md
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`DeiTConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
313_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
https://huggingface.co/docs/transformers/en/model_doc/deit/#deitforimageclassificationwithteacher
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward </pt> <tf>
313_11_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
https://huggingface.co/docs/transformers/en/model_doc/deit/#tfdeitmodel
.md
No docstring available for TFDeiTModel Methods: call
313_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
https://huggingface.co/docs/transformers/en/model_doc/deit/#tfdeitformaskedimagemodeling
.md
No docstring available for TFDeiTForMaskedImageModeling Methods: call
313_13_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
https://huggingface.co/docs/transformers/en/model_doc/deit/#tfdeitforimageclassification
.md
No docstring available for TFDeiTForImageClassification Methods: call
313_14_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/deit.md
https://huggingface.co/docs/transformers/en/model_doc/deit/#tfdeitforimageclassificationwithteacher
.md
No docstring available for TFDeiTForImageClassificationWithTeacher Methods: call </tf> </frameworkcontent>
313_15_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus_x.md
https://huggingface.co/docs/transformers/en/model_doc/pegasus_x/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
314_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus_x.md
https://huggingface.co/docs/transformers/en/model_doc/pegasus_x/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
314_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus_x.md
https://huggingface.co/docs/transformers/en/model_doc/pegasus_x/#overview
.md
The PEGASUS-X model was proposed in [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao and Peter J. Liu. PEGASUS-X (PEGASUS eXtended) extends the PEGASUS models for long input summarization through additional long input pretraining and using staggered block-local attention with global tokens in the encoder. The abstract from the paper is the following:
314_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus_x.md
https://huggingface.co/docs/transformers/en/model_doc/pegasus_x/#overview
.md
*While large pretrained Transformer models have proven highly capable at tackling natural language tasks, handling long sequence inputs continues to be a significant challenge. One such task is long input summarization, where inputs are longer than the maximum input context of most pretrained models. Through an extensive set of experiments, we investigate what model architectural changes and pretraining paradigms can most efficiently adapt a pretrained Transformer for long input summarization. We find that
314_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus_x.md
https://huggingface.co/docs/transformers/en/model_doc/pegasus_x/#overview
.md
and pretraining paradigms can most efficiently adapt a pretrained Transformer for long input summarization. We find that a staggered, block-local Transformer with global encoder tokens strikes a good balance of performance and efficiency, and that an additional pretraining phase on long sequences meaningfully improves downstream summarization performance. Based on our findings, we introduce PEGASUS-X, an extension of the PEGASUS model with additional long input pretraining to handle inputs of up to 16K
314_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus_x.md
https://huggingface.co/docs/transformers/en/model_doc/pegasus_x/#overview
.md
we introduce PEGASUS-X, an extension of the PEGASUS model with additional long input pretraining to handle inputs of up to 16K tokens. PEGASUS-X achieves strong performance on long input summarization tasks comparable with much larger models while adding few additional parameters and not requiring model parallelism to train.*
314_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus_x.md
https://huggingface.co/docs/transformers/en/model_doc/pegasus_x/#overview
.md
This model was contributed by [zphang](https://huggingface.co/zphang). The original code can be found [here](https://github.com/google-research/pegasus).
314_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus_x.md
https://huggingface.co/docs/transformers/en/model_doc/pegasus_x/#documentation-resources
.md
- [Translation task guide](../tasks/translation) - [Summarization task guide](../tasks/summarization) <Tip> PEGASUS-X uses the same tokenizer as [PEGASUS](pegasus). </Tip>
314_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus_x.md
https://huggingface.co/docs/transformers/en/model_doc/pegasus_x/#pegasusxconfig
.md
This is the configuration class to store the configuration of a [`PegasusXModel`]. It is used to instantiate a PEGASUS-X model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the PEGASUS-X [google/pegasus-x-large](https://huggingface.co/google/pegasus-x-large) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
314_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus_x.md
https://huggingface.co/docs/transformers/en/model_doc/pegasus_x/#pegasusxconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 96103): Vocabulary size of the PEGASUS-X model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`PegasusXModel`]. d_model (`int`, *optional*, defaults to 1024): Dimension of the layers and the pooler layer.
314_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus_x.md
https://huggingface.co/docs/transformers/en/model_doc/pegasus_x/#pegasusxconfig
.md
d_model (`int`, *optional*, defaults to 1024): Dimension of the layers and the pooler layer. encoder_layers (`int`, *optional*, defaults to 16): Number of encoder layers. decoder_layers (`int`, *optional*, defaults to 16): Number of decoder layers. encoder_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer encoder. decoder_attention_heads (`int`, *optional*, defaults to 16):
314_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus_x.md
https://huggingface.co/docs/transformers/en/model_doc/pegasus_x/#pegasusxconfig
.md
decoder_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer decoder. decoder_ffn_dim (`int`, *optional*, defaults to 4096): Dimension of the "intermediate" (often named feed-forward) layer in the decoder. encoder_ffn_dim (`int`, *optional*, defaults to 4096): Dimension of the "intermediate" (often named feed-forward) layer in the encoder. activation_function (`str` or `function`, *optional*, defaults to `"gelu"`):
314_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus_x.md
https://huggingface.co/docs/transformers/en/model_doc/pegasus_x/#pegasusxconfig
.md
activation_function (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. dropout (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities.
314_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus_x.md
https://huggingface.co/docs/transformers/en/model_doc/pegasus_x/#pegasusxconfig
.md
attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. activation_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for activations inside the fully connected layer. max_position_embeddings (`int`, *optional*, defaults to 16384): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). init_std (`float`, *optional*, defaults to 0.02):
314_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus_x.md
https://huggingface.co/docs/transformers/en/model_doc/pegasus_x/#pegasusxconfig
.md
just in case (e.g., 512 or 1024 or 2048). init_std (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. encoder_layerdrop (`float`, *optional*, defaults to 0.0): The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details. decoder_layerdrop (`float`, *optional*, defaults to 0.0):
314_3_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus_x.md
https://huggingface.co/docs/transformers/en/model_doc/pegasus_x/#pegasusxconfig
.md
for more details. decoder_layerdrop (`float`, *optional*, defaults to 0.0): The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details. use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should return the last key/values attentions (not used by all models). forced_eos_token_id (`int`, *optional*, defaults to 1): The id of the token to force as the last generated token when `max_length` is reached. Usually set to
314_3_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus_x.md
https://huggingface.co/docs/transformers/en/model_doc/pegasus_x/#pegasusxconfig
.md
The id of the token to force as the last generated token when `max_length` is reached. Usually set to `eos_token_id`. num_global_tokens (`int`, *optional*, defaults to 128): Number of global tokens to use for the encoder. block_size (`int`, *optional*, defaults to 512): Block size for encoder local attention. Sequence length should be an exact multiple of block size. `block_size` must be a multiple of 2 if `stagger_local_block` is True. stagger_local_block (`bool`, *optional*, defaults to `True`):
314_3_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus_x.md
https://huggingface.co/docs/transformers/en/model_doc/pegasus_x/#pegasusxconfig
.md
`block_size` must be a multiple of 2 if `stagger_local_block` is True. stagger_local_block (`bool`, *optional*, defaults to `True`): Whether to stagger every other local attention by half a block. Example: ```python >>> from transformers import PegasusXConfig, PegasusXModel
314_3_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus_x.md
https://huggingface.co/docs/transformers/en/model_doc/pegasus_x/#pegasusxconfig
.md
>>> # Initializing a PEGASUS google/pegasus-x-large style configuration >>> configuration = PegasusXConfig() >>> # Initializing a model (with random weights) from the google/pegasus-x-large style configuration >>> model = PegasusXModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
314_3_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus_x.md
https://huggingface.co/docs/transformers/en/model_doc/pegasus_x/#pegasusxmodel
.md
The bare PEGASUS-X Model outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
314_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus_x.md
https://huggingface.co/docs/transformers/en/model_doc/pegasus_x/#pegasusxmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`PegasusXConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
314_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus_x.md
https://huggingface.co/docs/transformers/en/model_doc/pegasus_x/#pegasusxmodel
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
314_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus_x.md
https://huggingface.co/docs/transformers/en/model_doc/pegasus_x/#pegasusxforconditionalgeneration
.md
The PEGASUS-X for conditional generation (e.g. summarization). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
314_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus_x.md
https://huggingface.co/docs/transformers/en/model_doc/pegasus_x/#pegasusxforconditionalgeneration
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`PegasusXConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
314_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pegasus_x.md
https://huggingface.co/docs/transformers/en/model_doc/pegasus_x/#pegasusxforconditionalgeneration
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
314_5_2
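A minimal summarization sketch for `PegasusXForConditionalGeneration`, assuming the `google/pegasus-x-base` checkpoint; the input article and generation settings are placeholders for illustration.

```python
>>> from transformers import AutoTokenizer, PegasusXForConditionalGeneration

>>> tokenizer = AutoTokenizer.from_pretrained("google/pegasus-x-base")
>>> model = PegasusXForConditionalGeneration.from_pretrained("google/pegasus-x-base")

>>> ARTICLE = (
...     "PG&E stated it scheduled the blackouts in response to forecasts for high winds "
...     "amid dry conditions. The aim is to reduce the risk of wildfires."
... )
>>> # PEGASUS-X supports long inputs, up to 16K tokens by default
>>> inputs = tokenizer(ARTICLE, max_length=16384, truncation=True, return_tensors="pt")

>>> summary_ids = model.generate(inputs["input_ids"], max_new_tokens=32)
>>> print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```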
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
315_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
315_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#overview
.md
The Wav2Vec2-BERT model was proposed in [Seamless: Multilingual Expressive and Streaming Speech Translation](https://ai.meta.com/research/publications/seamless-multilingual-expressive-and-streaming-speech-translation/) by the Seamless Communication team from Meta AI. This model was pre-trained on 4.5M hours of unlabeled audio data covering more than 143 languages. It requires finetuning to be used for downstream tasks such as Automatic Speech Recognition (ASR), or Audio Classification.
315_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#overview
.md
The official results of the model can be found in Section 3.2.1 of the paper. The abstract from the paper is the following:
315_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#overview
.md
*Recent advancements in automatic speech translation have dramatically expanded language coverage, improved multimodal capabilities, and enabled a wide range of tasks and functionalities. That said, large-scale automatic speech translation systems today lack key features that help machine-mediated communication feel seamless when compared to human-to-human dialogue. In this work, we introduce a family of models that enable end-to-end expressive and multilingual translations in a streaming fashion. First,
315_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#overview
.md
we introduce a family of models that enable end-to-end expressive and multilingual translations in a streaming fashion. First, we contribute an improved version of the massively multilingual and multimodal SeamlessM4T model—SeamlessM4T v2. This newer model, incorporating an updated UnitY2 framework, was trained on more low-resource language data. The expanded version of SeamlessAlign adds 114,800 hours of automatically aligned data for a total of 76 languages. SeamlessM4T v2 provides the foundation on
315_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#overview
.md
adds 114,800 hours of automatically aligned data for a total of 76 languages. SeamlessM4T v2 provides the foundation on which our two newest models, SeamlessExpressive and SeamlessStreaming, are initiated. SeamlessExpressive enables translation that preserves vocal styles and prosody. Compared to previous efforts in expressive speech research, our work addresses certain underexplored aspects of prosody, such as speech rate and pauses, while also preserving the style of one’s voice. As for
315_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#overview
.md
underexplored aspects of prosody, such as speech rate and pauses, while also preserving the style of one’s voice. As for SeamlessStreaming, our model leverages the Efficient Monotonic Multihead Attention (EMMA) mechanism to generate low-latency target translations without waiting for complete source utterances. As the first of its kind, SeamlessStreaming enables simultaneous speech-to-speech/text translation for multiple source and target languages. To understand the performance of these models, we
315_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#overview
.md
speech-to-speech/text translation for multiple source and target languages. To understand the performance of these models, we combined novel and modified versions of existing automatic metrics to evaluate prosody, latency, and robustness. For human evaluations, we adapted existing protocols tailored for measuring the most relevant attributes in the preservation of meaning, naturalness, and expressivity. To ensure that our models can be used safely and responsibly, we implemented the first known red-teaming
315_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#overview
.md
and expressivity. To ensure that our models can be used safely and responsibly, we implemented the first known red-teaming effort for multimodal machine translation, a system for the detection and mitigation of added toxicity, a systematic evaluation of gender bias, and an inaudible localized watermarking mechanism designed to dampen the impact of deepfakes. Consequently, we bring major components from SeamlessExpressive and SeamlessStreaming together to form Seamless, the first publicly available system
315_1_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#overview
.md
major components from SeamlessExpressive and SeamlessStreaming together to form Seamless, the first publicly available system that unlocks expressive cross-lingual communication in real-time. In sum, Seamless gives us a pivotal look at the technical foundation needed to turn the Universal Speech Translator from a science fiction concept into a real-world technology. Finally, contributions in this work—including models, code, and a watermark detector—are publicly released and accessible at the link below.*
315_1_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#overview
.md
in this work—including models, code, and a watermark detector—are publicly released and accessible at the link below.*
315_1_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#overview
.md
This model was contributed by [ylacombe](https://huggingface.co/ylacombe). The original code can be found [here](https://github.com/facebookresearch/seamless_communication).
315_1_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#usage-tips
.md
- Wav2Vec2-BERT follows the same architecture as Wav2Vec2-Conformer, but employs a causal depthwise convolutional layer and uses as input a mel-spectrogram representation of the audio instead of the raw waveform. - Wav2Vec2-BERT can use either no relative position embeddings, Shaw-like position embeddings, Transformer-XL-like position embeddings, or rotary position embeddings by setting the correct `config.position_embeddings_type`.
315_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#usage-tips
.md
rotary position embeddings by setting the correct `config.position_embeddings_type`. - Wav2Vec2-BERT also introduces a Conformer-based adapter network instead of a simple convolutional network.
315_2_1
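A minimal sketch of the usage tip above on selecting the position embedding scheme via `config.position_embeddings_type`; the value `"rotary"` is an arbitrary choice here, and the model is randomly initialized rather than loaded from a pretrained checkpoint.

```python
>>> from transformers import Wav2Vec2BertConfig, Wav2Vec2BertModel

>>> # options per the config docs: None, "relative", "relative_key" (Shaw-like), or "rotary"
>>> config = Wav2Vec2BertConfig(position_embeddings_type="rotary")
>>> model = Wav2Vec2BertModel(config)  # random weights
```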
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#resources
.md
<PipelineTag pipeline="automatic-speech-recognition"/> - [`Wav2Vec2BertForCTC`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
315_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#resources
.md
- You can also adapt these notebooks on [how to finetune a speech recognition model in English](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/speech_recognition.ipynb), and [how to finetune a speech recognition model in any language](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multi_lingual_speech_recognition.ipynb). <PipelineTag pipeline="audio-classification"/>
315_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#resources
.md
<PipelineTag pipeline="audio-classification"/> - [`Wav2Vec2BertForSequenceClassification`] can be used by adapting this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification). - See also: [Audio classification task guide](../tasks/audio_classification)
315_3_2
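For ASR inference with a fine-tuned [`Wav2Vec2BertForCTC`] checkpoint, a `pipeline` call is the shortest path; the repository id below is a hypothetical placeholder for a model produced with the speech-recognition example script linked above (the pre-trained base model requires fine-tuning first, as noted in the overview).

```python
>>> from transformers import pipeline

>>> # "your-username/wav2vec2-bert-ctc-finetuned" is a hypothetical placeholder checkpoint
>>> asr = pipeline("automatic-speech-recognition", model="your-username/wav2vec2-bert-ctc-finetuned")
>>> asr("path/to/audio.wav")["text"]
```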
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertconfig
.md
This is the configuration class to store the configuration of a [`Wav2Vec2BertModel`]. It is used to instantiate an Wav2Vec2Bert model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Wav2Vec2Bert [facebook/wav2vec2-bert-rel-pos-large](https://huggingface.co/facebook/wav2vec2-bert-rel-pos-large) architecture.
315_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertconfig
.md
[facebook/wav2vec2-bert-rel-pos-large](https://huggingface.co/facebook/wav2vec2-bert-rel-pos-large) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*): Vocabulary size of the Wav2Vec2Bert model. Defines the number of different tokens that can be
315_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertconfig
.md
vocab_size (`int`, *optional*): Vocabulary size of the Wav2Vec2Bert model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`Wav2Vec2BertModel`]. hidden_size (`int`, *optional*, defaults to 1024): Dimensionality of the encoder layers and the pooler layer.
315_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertconfig
.md
hidden_size (`int`, *optional*, defaults to 1024): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 24): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 4096): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
315_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertconfig
.md
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. feature_projection_input_dim (`int`, *optional*, defaults to 160): Input dimension of this model, i.e the dimension after processing input audios with [`SeamlessM4TFeatureExtractor`] or [`Wav2Vec2BertProcessor`]. hidden_act (`str` or `function`, *optional*, defaults to `"swish"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
315_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertconfig
.md
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"`, `"swish"` and `"gelu_new"` are supported. hidden_dropout (`float`, *optional*, defaults to 0.0): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. activation_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for activations inside the fully connected layer. attention_dropout (`float`, *optional*, defaults to 0.0):
315_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertconfig
.md
The dropout ratio for activations inside the fully connected layer. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. feat_proj_dropout (`float`, *optional*, defaults to 0.0): The dropout probability for the feature projection. final_dropout (`float`, *optional*, defaults to 0.1): The dropout probability for the final projection layer of [`Wav2Vec2BertForCTC`]. layerdrop (`float`, *optional*, defaults to 0.1):
315_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertconfig
.md
layerdrop (`float`, *optional*, defaults to 0.1): The LayerDrop probability. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the layer normalization layers. apply_spec_augment (`bool`, *optional*, defaults to `True`):
315_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertconfig
.md
The epsilon used by the layer normalization layers. apply_spec_augment (`bool`, *optional*, defaults to `True`): Whether to apply *SpecAugment* data augmentation to the outputs of the feature encoder. For reference see [SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition](https://arxiv.org/abs/1904.08779). mask_time_prob (`float`, *optional*, defaults to 0.05): Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking
315_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertconfig
.md
Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking procedure generates `mask_time_prob*len(time_axis)/mask_time_length` independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, *mask_time_prob* should be `prob_vector_start*mask_time_length`. Note that overlap may decrease the
315_4_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertconfig
.md
masked, *mask_time_prob* should be `prob_vector_start*mask_time_length`. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment is True`. mask_time_length (`int`, *optional*, defaults to 10): Length of vector span along the time axis. mask_time_min_masks (`int`, *optional*, defaults to 2): The minimum number of masks of length `mask_time_length` generated along the time axis, each time step,
315_4_10
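A worked instance of the mask-count formula above, using the documented defaults; the sequence length is an arbitrary assumption for illustration.

```python
>>> # number of independent time-axis masks for a hypothetical 1,000-frame sequence,
>>> # using the defaults mask_time_prob=0.05 and mask_time_length=10
>>> mask_time_prob, mask_time_length, num_frames = 0.05, 10, 1_000
>>> int(mask_time_prob * num_frames / mask_time_length)
5
```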
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertconfig
.md
The minimum number of masks of length `mask_time_length` generated along the time axis, each time step, irrespectively of `mask_time_prob`. Only relevant if `mask_time_prob*len(time_axis)/mask_time_length < mask_time_min_masks`. mask_feature_prob (`float`, *optional*, defaults to 0.0): Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The masking procedure generates `mask_feature_prob*len(feature_axis)/mask_feature_length` independent masks over
315_4_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertconfig
.md
masking procedure generates `mask_feature_prob*len(feature_axis)/mask_feature_length` independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, *mask_feature_prob* should be `prob_vector_start*mask_feature_length`. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment is True`. mask_feature_length (`int`, *optional*, defaults to 10):
315_4_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertconfig
.md
True`. mask_feature_length (`int`, *optional*, defaults to 10): Length of vector span along the feature axis. mask_feature_min_masks (`int`, *optional*, defaults to 0): The minimum number of masks of length `mask_feature_length` generated along the feature axis, each time step, irrespectively of `mask_feature_prob`. Only relevant if `mask_feature_prob*len(feature_axis)/mask_feature_length < mask_feature_min_masks`. ctc_loss_reduction (`str`, *optional*, defaults to `"sum"`):
315_4_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertconfig
.md
ctc_loss_reduction (`str`, *optional*, defaults to `"sum"`): Specifies the reduction to apply to the output of `torch.nn.CTCLoss`. Only relevant when training an instance of [`Wav2Vec2BertForCTC`]. ctc_zero_infinity (`bool`, *optional*, defaults to `False`): Whether to zero infinite losses and the associated gradients of `torch.nn.CTCLoss`. Infinite losses mainly occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance of [`Wav2Vec2BertForCTC`].
315_4_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertconfig
.md
of [`Wav2Vec2BertForCTC`]. use_weighted_layer_sum (`bool`, *optional*, defaults to `False`): Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an instance of [`Wav2Vec2BertForSequenceClassification`]. classifier_proj_size (`int`, *optional*, defaults to 768): Dimensionality of the projection before token mean-pooling for classification. tdnn_dim (`Tuple[int]` or `List[int]`, *optional*, defaults to `(512, 512, 512, 512, 1500)`):
315_4_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertconfig
.md
tdnn_dim (`Tuple[int]` or `List[int]`, *optional*, defaults to `(512, 512, 512, 512, 1500)`): A tuple of integers defining the number of output channels of each 1D convolutional layer in the *TDNN* module of the *XVector* model. The length of *tdnn_dim* defines the number of *TDNN* layers. tdnn_kernel (`Tuple[int]` or `List[int]`, *optional*, defaults to `(5, 3, 3, 1, 1)`): A tuple of integers defining the kernel size of each 1D convolutional layer in the *TDNN* module of the
315_4_16
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertconfig
.md
A tuple of integers defining the kernel size of each 1D convolutional layer in the *TDNN* module of the *XVector* model. The length of *tdnn_kernel* has to match the length of *tdnn_dim*. tdnn_dilation (`Tuple[int]` or `List[int]`, *optional*, defaults to `(1, 2, 3, 1, 1)`): A tuple of integers defining the dilation factor of each 1D convolutional layer in *TDNN* module of the *XVector* model. The length of *tdnn_dilation* has to match the length of *tdnn_dim*.
315_4_17
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertconfig
.md
*XVector* model. The length of *tdnn_dilation* has to match the length of *tdnn_dim*. xvector_output_dim (`int`, *optional*, defaults to 512): Dimensionality of the *XVector* embedding vectors. pad_token_id (`int`, *optional*, defaults to 0): The id of the _padding_ token. bos_token_id (`int`, *optional*, defaults to 1): The id of the _beginning-of-stream_ token. eos_token_id (`int`, *optional*, defaults to 2): The id of the _end-of-stream_ token. add_adapter (`bool`, *optional*, defaults to `False`):
315_4_18
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertconfig
.md
add_adapter (`bool`, *optional*, defaults to `False`): Whether a convolutional attention network should be stacked on top of the Wav2Vec2Bert Encoder. Can be very useful for warm-starting Wav2Vec2Bert for SpeechEncoderDecoder models. adapter_kernel_size (`int`, *optional*, defaults to 3): Kernel size of the convolutional layers in the adapter network. Only relevant if `add_adapter is True`. adapter_stride (`int`, *optional*, defaults to 2):
315_4_19
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertconfig
.md
adapter_stride (`int`, *optional*, defaults to 2): Stride of the convolutional layers in the adapter network. Only relevant if `add_adapter is True`. num_adapter_layers (`int`, *optional*, defaults to 1): Number of convolutional layers that should be used in the adapter network. Only relevant if `add_adapter is True`. adapter_act (`str` or `function`, *optional*, defaults to `"relu"`): The non-linear activation function (function or string) in the adapter layers. If string, `"gelu"`,
315_4_20
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertconfig
.md
The non-linear activation function (function or string) in the adapter layers. If string, `"gelu"`, `"relu"`, `"selu"`, `"swish"` and `"gelu_new"` are supported. use_intermediate_ffn_before_adapter (`bool`, *optional*, defaults to `False`): Whether an intermediate feed-forward block should be stacked on top of the Wav2Vec2Bert Encoder and before the adapter network. Only relevant if `add_adapter is True`. output_hidden_size (`int`, *optional*):
315_4_21
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertconfig
.md
Only relevant if `add_adapter is True`. output_hidden_size (`int`, *optional*): Dimensionality of the encoder output layer. If not defined, this defaults to *hidden-size*. Only relevant if `add_adapter is True`. position_embeddings_type (`str`, *optional*, defaults to `"relative_key"`): Can be set to: - `rotary`, for rotary position embeddings. - `relative`, for relative position embeddings. - `relative_key`, for relative position embeddings as defined by Shaw in [Self-Attention
315_4_22
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertconfig
.md
- `relative_key`, for relative position embeddings as defined by Shaw in [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155). If left to `None`, no relative position embeddings are applied. rotary_embedding_base (`int`, *optional*, defaults to 10000): If `"rotary"` position embeddings are used, defines the size of the embedding base. max_source_positions (`int`, *optional*, defaults to 5000):
315_4_23
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertconfig
.md
max_source_positions (`int`, *optional*, defaults to 5000): If `"relative"` position embeddings are used, defines the maximum source input positions. left_max_position_embeddings (`int`, *optional*, defaults to 64): If `"relative_key"` (aka Shaw) position embeddings are used, defines the left clipping value for relative positions. right_max_position_embeddings (`int`, *optional*, defaults to 8):
315_4_24
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertconfig
.md
right_max_position_embeddings (`int`, *optional*, defaults to 8): If `"relative_key"` (aka Shaw) position embeddings are used, defines the right clipping value for relative positions. conv_depthwise_kernel_size (`int`, *optional*, defaults to 31): Kernel size of convolutional depthwise 1D layer in Conformer blocks. conformer_conv_dropout (`float`, *optional*, defaults to 0.1): The dropout probability for all convolutional layers in Conformer blocks. Example: ```python
315_4_25
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertconfig
.md
The dropout probability for all convolutional layers in Conformer blocks. Example: ```python >>> from transformers import Wav2Vec2BertConfig, Wav2Vec2BertModel
315_4_26
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertconfig
.md
>>> # Initializing a Wav2Vec2Bert facebook/wav2vec2-bert-rel-pos-large style configuration >>> configuration = Wav2Vec2BertConfig() >>> # Initializing a model (with random weights) from the facebook/wav2vec2-bert-rel-pos-large style configuration >>> model = Wav2Vec2BertModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
315_4_27
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertprocessor
.md
Constructs a Wav2Vec2-BERT processor which wraps a Wav2Vec2-BERT feature extractor and a Wav2Vec2 CTC tokenizer into a single processor. [`Wav2Vec2BertProcessor`] offers all the functionalities of [`SeamlessM4TFeatureExtractor`] and [`PreTrainedTokenizer`]. See the docstring of [`~Wav2Vec2BertProcessor.__call__`] and [`~Wav2Vec2BertProcessor.decode`] for more information. Args: feature_extractor (`SeamlessM4TFeatureExtractor`):
315_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertprocessor
.md
Args: feature_extractor (`SeamlessM4TFeatureExtractor`): An instance of [`SeamlessM4TFeatureExtractor`]. The feature extractor is a required input. tokenizer ([`PreTrainedTokenizer`]): An instance of [`PreTrainedTokenizer`]. The tokenizer is a required input. Methods: __call__ - pad - from_pretrained - save_pretrained - batch_decode - decode
315_5_1
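A minimal sketch of assembling and using the processor from its two required parts; the checkpoint id is a hypothetical placeholder for a fine-tuned repository that ships both a feature extractor and a CTC tokenizer, and the silent audio array is a stand-in for real speech.

```python
>>> import numpy as np
>>> from transformers import SeamlessM4TFeatureExtractor, Wav2Vec2CTCTokenizer, Wav2Vec2BertProcessor

>>> # hypothetical fine-tuned checkpoint that provides both components
>>> checkpoint = "your-username/wav2vec2-bert-ctc-finetuned"
>>> feature_extractor = SeamlessM4TFeatureExtractor.from_pretrained(checkpoint)
>>> tokenizer = Wav2Vec2CTCTokenizer.from_pretrained(checkpoint)
>>> processor = Wav2Vec2BertProcessor(feature_extractor=feature_extractor, tokenizer=tokenizer)

>>> raw_speech = np.zeros(16_000, dtype=np.float32)  # one second of silence at 16 kHz
>>> # audio is forwarded to the feature extractor, text to the tokenizer
>>> inputs = processor(audio=raw_speech, sampling_rate=16_000, return_tensors="pt")
>>> labels = processor(text="hello world", return_tensors="pt").input_ids
```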
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertmodel
.md
The bare Wav2Vec2Bert Model transformer outputting raw hidden-states without any specific head on top. Wav2Vec2Bert was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.).
315_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertmodel
.md
library implements for all its model (such as downloading or saving etc.). This model is a PyTorch [nn.Module](https://pytorch.org/docs/stable/nn.html#nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`Wav2Vec2BertConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
315_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-bert.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert/#wav2vec2bertmodel
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
315_6_2
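A minimal forward-pass sketch for the bare `Wav2Vec2BertModel`; it assumes the publicly released `facebook/w2v-bert-2.0` checkpoint and a small LibriSpeech demo split for the input audio, both assumptions for illustration.

```python
>>> import torch
>>> from datasets import load_dataset
>>> from transformers import AutoFeatureExtractor, Wav2Vec2BertModel

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> audio = ds[0]["audio"]["array"]

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/w2v-bert-2.0")
>>> model = Wav2Vec2BertModel.from_pretrained("facebook/w2v-bert-2.0")

>>> # the feature extractor turns the raw waveform into mel-spectrogram input features
>>> inputs = feature_extractor(audio, sampling_rate=16_000, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> outputs.last_hidden_state.shape  # (batch, sequence_length, hidden_size)
```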