source
stringclasses
470 values
url
stringlengths
49
167
file_type
stringclasses
1 value
chunk
stringlengths
1
512
chunk_id
stringlengths
5
9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaconfig
.md
The standard deviation of the truncated_normal_initializer for initializing all weight matrices. ce_ignore_index (`int`, *optional*, defaults to -100): Cross entropy index to ignore. mim_weight (`float`, *optional*, defaults to 1.0): Weight to be assigned to the MIM (Masked Image Modeling) unimodal loss. mlm_weight (`float`, *optional*, defaults to 1.0): Weight to be assigned to the MLM (Masked Language Modeling) unimodal loss. global_contrastive_weight (`float`, *optional*, defaults to 1.0):
276_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaconfig
.md
global_contrastive_weight (`float`, *optional*, defaults to 1.0): Weight to be assigned to global contrastive cross-alignment loss. itm_weight (`float`, *optional*, defaults to 1.0): Weight to be assigned to image-text matching multimodal loss. mmm_image_weight (`float`, *optional*, defaults to 1.0): Weight to be assigned to MMM loss's image part. mmm_text_weight (`float`, *optional*, defaults to 1.0): Weight to be assigned to MMM loss's text part.
276_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaconfig
.md
mmm_text_weight (`float`, *optional*, defaults to 1.0): Weight to be assigned to MMM loss's text part. global_backprop_contrastive (`bool`, *optional*, defaults to `True`): Whether to use global backpropagation through all workers in the contrastive loss. skip_unmasked_multimodal_encoder (`bool`, *optional*, defaults to `True`): Whether to skip running the unmasked multimodal encoder, whose outputs are not used by the FLAVA losses. return_loss (`bool`, *optional*, defaults to `True`): Whether to return the loss or not.
276_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaconfig
.md
return_loss (`bool`, *optional*, defaults to `True`): Whether to return the loss or not. kwargs (*optional*): Dictionary of keyword arguments. Example: ```python >>> from transformers import FlavaConfig, FlavaModel, FlavaForPreTraining
276_2_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaconfig
.md
>>> # Initializing a FlavaConfig with the facebook/flava-full style configuration
>>> configuration = FlavaConfig()

>>> # Initializing a FlavaModel and a FlavaForPreTraining model (with random weights) from that configuration
>>> model = FlavaModel(configuration)
>>> model_pre = FlavaForPreTraining(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
>>> configuration_pre = model_pre.config
```
276_2_8
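The loss-weight arguments documented above can also be overridden when the configuration is constructed. A minimal sketch (the specific weight values are illustrative only):

```python
from transformers import FlavaConfig, FlavaForPreTraining

# Down-weight the unimodal MIM/MLM losses relative to the multimodal ones;
# these keyword arguments are the FlavaConfig arguments listed above.
configuration = FlavaConfig(mim_weight=0.5, mlm_weight=0.5, itm_weight=1.0)

model_pre = FlavaForPreTraining(configuration)
print(model_pre.config.mim_weight, model_pre.config.mlm_weight)
```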
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavatextconfig
.md
This is the configuration class to store the configuration of a [`FlavaTextModel`]. It is used to instantiate a FLAVA model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the FLAVA [facebook/flava-full](https://huggingface.co/facebook/flava-full) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
276_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavatextconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 30522): Vocabulary size of the BERT model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling [`FlavaTextModel`]. type_vocab_size (`int`, *optional*, defaults to 2):
276_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavatextconfig
.md
`input_ids` passed when calling [`FlavaTextModel`]. type_vocab_size (`int`, *optional*, defaults to 2): The vocabulary size of the `token_type_ids` passed when calling [`FlavaTextModel`]. Note that even though the text encoder allows `token_type_ids` values up to 2, only 1 is used for text-only pretraining and fine-tuning, similar to RoBERTa. max_position_embeddings (`int`, *optional*, defaults to 512): The maximum sequence length that this model might ever be used with. Typically set this to something large
276_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavatextconfig
.md
The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512, 1024 or 2048). For vision-language tasks, the `max_length` passed to the model is 77. position_embedding_type (`str`, *optional*, defaults to `"absolute"`): Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to
276_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavatextconfig
.md
positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155). For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658). hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer.
276_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavatextconfig
.md
hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
276_3_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavatextconfig
.md
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
276_3_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavatextconfig
.md
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers.
276_3_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavatextconfig
.md
layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. image_size (`int`, *optional*, defaults to 224): The size (resolution) of each image. patch_size (`int`, *optional*, defaults to 16): The size (resolution) of each patch. num_channels (`int`, *optional*, defaults to 3): The number of input channels. qkv_bias (`bool`, *optional*, defaults to `True`): Whether to add a bias to the queries, keys and values. Example: ```python
276_3_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavatextconfig
.md
Whether to add a bias to the queries, keys and values. Example: ```python >>> from transformers import FlavaTextConfig, FlavaTextModel
276_3_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavatextconfig
.md
>>> # Initializing a FlavaTextConfig with the facebook/flava-full style configuration
>>> configuration = FlavaTextConfig()

>>> # Initializing a FlavaTextModel (with random weights) from that configuration
>>> model = FlavaTextModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
276_3_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaimageconfig
.md
This is the configuration class to store the configuration of a [`FlavaImageModel`]. It is used to instantiate a FLAVA model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the FLAVA [facebook/flava-full](https://huggingface.co/facebook/flava-full) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
276_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaimageconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12):
276_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaimageconfig
.md
Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
276_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaimageconfig
.md
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02):
276_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaimageconfig
.md
The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. image_size (`int`, *optional*, defaults to 224): The size (resolution) of each image. patch_size (`int`, *optional*, defaults to 16): The size (resolution) of each patch.
276_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaimageconfig
.md
The size (resolution) of each image. patch_size (`int`, *optional*, defaults to 16): The size (resolution) of each patch. num_channels (`int`, *optional*, defaults to 3): The number of input channels. qkv_bias (`bool`, *optional*, defaults to `True`): Whether to add a bias to the queries, keys and values. mask_token (`bool`, *optional*, defaults to `True`): Whether to use a mask token or not. Used in MIM (Masked Image Modeling) loss for FLAVA. vocab_size (`int`, *optional*, defaults to 8192):
276_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaimageconfig
.md
vocab_size (`int`, *optional*, defaults to 8192): Vocabulary size of the [`FlavaImageCodebook`] used in conjunction with [`FlavaImageModel`] for MIM (Masked Image Modeling) loss for FLAVA. Example: ```python >>> from transformers import FlavaImageConfig, FlavaImageModel
276_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaimageconfig
.md
>>> # Initializing a FlavaImageConfig with the facebook/flava-full style configuration
>>> configuration = FlavaImageConfig()

>>> # Initializing a FlavaImageModel (with random weights) from that configuration
>>> model = FlavaImageModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
276_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavamultimodalconfig
.md
This is the configuration class to store the configuration of a [`FlavaMultimodalModel`]. It is used to instantiate a FLAVA model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the FLAVA [facebook/flava-full](https://huggingface.co/facebook/flava-full) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
276_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavamultimodalconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 6): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12):
276_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavamultimodalconfig
.md
Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
276_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavamultimodalconfig
.md
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02):
276_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavamultimodalconfig
.md
The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. qkv_bias (`bool`, *optional*, defaults to `True`): Whether to add a bias to the queries, keys and values. use_cls_token (`bool`, *optional*, defaults to `True`):
276_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavamultimodalconfig
.md
Whether to add a bias to the queries, keys and values. use_cls_token (`bool`, *optional*, defaults to `True`): Whether to use an extra CLS token for multimodal settings. Usually needed by the FLAVA model. Example: ```python >>> from transformers import FlavaMultimodalConfig, FlavaMultimodalModel
276_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavamultimodalconfig
.md
>>> # Initializing a FlavaMultimodalConfig with the facebook/flava-full style configuration
>>> configuration = FlavaMultimodalConfig()

>>> # Initializing a FlavaMultimodalModel (with random weights) from that configuration
>>> model = FlavaMultimodalModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
276_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaimagecodebookconfig
.md
No docstring available for FlavaImageCodebookConfig
276_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaprocessor
.md
Constructs a FLAVA processor which wraps a FLAVA image processor and a FLAVA tokenizer into a single processor. [`FlavaProcessor`] offers all the functionalities of [`FlavaImageProcessor`] and [`BertTokenizerFast`]. See the [`~FlavaProcessor.__call__`] and [`~FlavaProcessor.decode`] for more information. Args: image_processor ([`FlavaImageProcessor`], *optional*): The image processor is a required input. tokenizer ([`BertTokenizerFast`], *optional*): The tokenizer is a required input.
276_7_0
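A short usage sketch of the processor, assuming the public `facebook/flava-full` checkpoint and a sample COCO image URL:

```python
import requests
from PIL import Image
from transformers import FlavaProcessor

processor = FlavaProcessor.from_pretrained("facebook/flava-full")

# Prepare paired text and image inputs in a single call.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["a photo of two cats"], images=image, return_tensors="pt", padding=True)

print(list(inputs.keys()))  # e.g. input_ids, token_type_ids, attention_mask, pixel_values
```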
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavafeatureextractor
.md
No docstring available for FlavaFeatureExtractor
276_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaimageprocessor
.md
Constructs a Flava image processor. Args: do_resize (`bool`, *optional*, defaults to `True`): Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by the `do_resize` parameter in `preprocess`. size (`Dict[str, int]` *optional*, defaults to `{"height": 224, "width": 224}`): Size of the image after resizing. Can be overridden by the `size` parameter in `preprocess`. resample (`PILImageResampling`, *optional*, defaults to `PILImageResampling.BICUBIC`):
276_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaimageprocessor
.md
resample (`PILImageResampling`, *optional*, defaults to `PILImageResampling.BICUBIC`): Resampling filter to use if resizing the image. Can be overridden by the `resample` parameter in `preprocess`. do_center_crop (`bool`, *optional*, defaults to `True`): Whether to center crop the images. Can be overridden by the `do_center_crop` parameter in `preprocess`. crop_size (`Dict[str, int]` *optional*, defaults to `{"height": 224, "width": 224}`):
276_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaimageprocessor
.md
crop_size (`Dict[str, int]` *optional*, defaults to `{"height": 224, "width": 224}`): Size of image after the center crop `(crop_size["height"], crop_size["width"])`. Can be overridden by the `crop_size` parameter in `preprocess`. do_rescale (`bool`, *optional*, defaults to `True`): Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the `do_rescale` parameter in `preprocess`. rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
276_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaimageprocessor
.md
parameter in `preprocess`. rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): Scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in `preprocess`. do_normalize (`bool`, *optional*, defaults to `True`): Whether to normalize the image. Can be overridden by the `do_normalize` parameter in `preprocess`. image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`):
276_9_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaimageprocessor
.md
image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`): Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method. image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`): Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
276_9_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaimageprocessor
.md
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method. return_image_mask (`bool`, *optional*, defaults to `False`): Whether to return the image mask. Can be overridden by the `return_image_mask` parameter in `preprocess`. input_size_patches (`int`, *optional*, defaults to 14):
276_9_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaimageprocessor
.md
input_size_patches (`int`, *optional*, defaults to 14): Number of patches in the image in height and width direction. 14x14 = 196 total patches. Can be overridden by the `input_size_patches` parameter in `preprocess`. total_mask_patches (`int`, *optional*, defaults to 75): Total number of patches that should be masked. Can be overridden by the `total_mask_patches` parameter in `preprocess`. mask_group_min_patches (`int`, *optional*, defaults to 16):
276_9_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaimageprocessor
.md
`preprocess`. mask_group_min_patches (`int`, *optional*, defaults to 16): Minimum number of patches that should be masked. Can be overridden by the `mask_group_min_patches` parameter in `preprocess`. mask_group_max_patches (`int`, *optional*): Maximum number of patches that should be masked. Can be overridden by the `mask_group_max_patches` parameter in `preprocess`. mask_group_min_aspect_ratio (`float`, *optional*, defaults to 0.3):
276_9_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaimageprocessor
.md
parameter in `preprocess`. mask_group_min_aspect_ratio (`float`, *optional*, defaults to 0.3): Minimum aspect ratio of the mask window. Can be overridden by the `mask_group_min_aspect_ratio` parameter in `preprocess`. mask_group_max_aspect_ratio (`float`, *optional*): Maximum aspect ratio of the mask window. Can be overridden by the `mask_group_max_aspect_ratio` parameter in `preprocess`. codebook_do_resize (`bool`, *optional*, defaults to `True`):
276_9_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaimageprocessor
.md
in `preprocess`. codebook_do_resize (`bool`, *optional*, defaults to `True`): Whether to resize the input for the codebook to a certain `codebook_size`. Can be overridden by the `codebook_do_resize` parameter in `preprocess`. codebook_size (`Dict[str, int]`, *optional*, defaults to `{"height": 224, "width": 224}`): Resize the input for codebook to the given size. Can be overridden by the `codebook_size` parameter in `preprocess`.
276_9_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaimageprocessor
.md
Resize the input for codebook to the given size. Can be overridden by the `codebook_size` parameter in `preprocess`. codebook_resample (`PILImageResampling`, *optional*, defaults to `PILImageResampling.LANCZOS`): Resampling filter to use if resizing the codebook image. Can be overridden by the `codebook_resample` parameter in `preprocess`. codebook_do_center_crop (`bool`, *optional*, defaults to `True`): Whether to crop the input for codebook at the center. If the input size is smaller than
276_9_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaimageprocessor
.md
Whether to crop the input for codebook at the center. If the input size is smaller than `codebook_crop_size` along any edge, the image is padded with 0's and then center cropped. Can be overridden by the `codebook_do_center_crop` parameter in `preprocess`. codebook_crop_size (`Dict[str, int]`, *optional*, defaults to `{"height": 224, "width": 224}`): Desired output size for codebook input when applying center-cropping. Can be overridden by the `codebook_crop_size` parameter in `preprocess`.
276_9_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaimageprocessor
.md
`codebook_crop_size` parameter in `preprocess`. codebook_do_rescale (`bool`, *optional*, defaults to `True`): Whether to rescale the input for codebook by the specified scale `codebook_rescale_factor`. Can be overridden by the `codebook_do_rescale` parameter in `preprocess`. codebook_rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): Defines the scale factor to use if rescaling the codebook image. Can be overridden by the `codebook_rescale_factor` parameter in `preprocess`.
276_9_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaimageprocessor
.md
`codebook_rescale_factor` parameter in `preprocess`. codebook_do_map_pixels (`bool`, *optional*, defaults to `True`): Whether to map the pixel values of the codebook input to (1 - 2e)x + e. Can be overridden by the `codebook_do_map_pixels` parameter in `preprocess`. codebook_do_normalize (`bool`, *optional*, defaults to `True`): Whether or not to normalize the input for codebook with `codebook_image_mean` and `codebook_image_std`. Can be overridden by the `codebook_do_normalize` parameter in `preprocess`.
276_9_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaimageprocessor
.md
be overridden by the `codebook_do_normalize` parameter in `preprocess`. codebook_image_mean (`Optional[Union[float, Iterable[float]]]`, *optional*, defaults to `[0, 0, 0]`): The sequence of means for each channel, to be used when normalizing images for codebook. Can be overridden by the `codebook_image_mean` parameter in `preprocess`. codebook_image_std (`Optional[Union[float, Iterable[float]]]`, *optional*, defaults to `[0.5, 0.5, 0.5]`):
276_9_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaimageprocessor
.md
codebook_image_std (`Optional[Union[float, Iterable[float]]]`, *optional*, defaults to `[0.5, 0.5, 0.5]`): The sequence of standard deviations for each channel, to be used when normalizing images for codebook. Can be overridden by the `codebook_image_std` parameter in `preprocess`. Methods: preprocess
276_9_15
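A minimal sketch of calling the image processor with the masking options described above; the `return_codebook_pixels` flag and the exact output key names (e.g. `bool_masked_pos`, `codebook_pixel_values`) are assumptions based on the MIM/codebook setup:

```python
import numpy as np
from transformers import FlavaImageProcessor

image_processor = FlavaImageProcessor.from_pretrained("facebook/flava-full")

# A dummy 256x256 RGB image stands in for real data.
image = np.uint8(np.random.rand(256, 256, 3) * 255)

# return_image_mask enables the patch masking controlled by the
# total_mask_patches / mask_group_* arguments documented above.
outputs = image_processor(image, return_image_mask=True, return_codebook_pixels=True, return_tensors="pt")
print(list(outputs.keys()))
```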
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaforpretraining
.md
The FLAVA model for pretraining which outputs losses, embeddings, logits and transformer outputs. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`FlavaConfig`]): Model configuration class with all the parameters of the model.
276_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaforpretraining
.md
behavior. Parameters: config ([`FlavaConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Parameters: image_codebook ([`nn.Module`]): If passed, the image codebook will be set to this. Otherwise, it will
276_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaforpretraining
.md
Parameters: image_codebook ([`nn.Module`]): If passed, the image codebook will be set to this. Otherwise, it will be initialized using the `image_codebook_config` defined in the config. Methods: forward
276_10_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavamodel
.md
The bare FLAVA Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`FlavaConfig`]): Model configuration class with all the parameters of the model.
276_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavamodel
.md
behavior. Parameters: config ([`FlavaConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward - get_text_features - get_image_features
276_11_1
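A usage sketch for the `get_text_features` / `get_image_features` methods listed above, assuming the public `facebook/flava-full` checkpoint:

```python
import requests
import torch
from PIL import Image
from transformers import FlavaModel, FlavaProcessor

processor = FlavaProcessor.from_pretrained("facebook/flava-full")
model = FlavaModel.from_pretrained("facebook/flava-full")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

text_inputs = processor(text=["a photo of two cats"], return_tensors="pt", padding=True)
image_inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    text_embeddings = model.get_text_features(**text_inputs)
    image_embeddings = model.get_image_features(**image_inputs)

print(text_embeddings.shape, image_embeddings.shape)
```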
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaimagecodebook
.md
FLAVA's image codebook model, inspired by DALL-E's original encoder. It outputs raw hidden states and can be used to generate image tokens for an image based on DALL-E's vocab. Used to generate labels for MIM. Use `get_codebook_indices` to get image tokens for an image. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
276_12_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaimagecodebook
.md
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`FlavaImageCodebookConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward - get_codebook_indices - get_codebook_probs
276_12_1
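A sketch of generating MIM labels with `get_codebook_indices`; the separate `facebook/flava-image-codebook` checkpoint name and the `codebook_pixel_values` output key are assumptions:

```python
import numpy as np
import torch
from transformers import FlavaImageCodebook, FlavaImageProcessor

image_processor = FlavaImageProcessor.from_pretrained("facebook/flava-full")
codebook = FlavaImageCodebook.from_pretrained("facebook/flava-image-codebook")

# Dummy image; request the codebook-specific preprocessing explicitly.
image = np.uint8(np.random.rand(256, 256, 3) * 255)
inputs = image_processor(image, return_codebook_pixels=True, return_tensors="pt")

with torch.no_grad():
    image_tokens = codebook.get_codebook_indices(inputs["codebook_pixel_values"])

print(image_tokens.shape)  # one discrete token id per codebook patch
```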
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavatextmodel
.md
The bare FLAVA Text Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`FlavaTextConfig`]): Model configuration class with all the parameters of the model.
276_13_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavatextmodel
.md
behavior. Parameters: config ([`FlavaTextConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
276_13_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaimagemodel
.md
The bare FLAVA Image Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`FlavaImageConfig`]): Model configuration class with all the parameters of the model.
276_14_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavaimagemodel
.md
behavior. Parameters: config ([`FlavaImageConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
276_14_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavamultimodalmodel
.md
The bare FLAVA Multimodal Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`FlavaMultimodalConfig`]): Model configuration class with all the parameters of the model.
276_15_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/flava.md
https://huggingface.co/docs/transformers/en/model_doc/flava/#flavamultimodalmodel
.md
behavior. Parameters: config ([`FlavaMultimodalConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
276_15_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinat.md
https://huggingface.co/docs/transformers/en/model_doc/dinat/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
277_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinat.md
https://huggingface.co/docs/transformers/en/model_doc/dinat/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
277_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinat.md
https://huggingface.co/docs/transformers/en/model_doc/dinat/#overview
.md
DiNAT was proposed in [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Ali Hassani and Humphrey Shi. It extends [NAT](nat) by adding a Dilated Neighborhood Attention pattern to capture global context, and shows significant performance improvements over it. The abstract from the paper is the following: *Transformers are quickly becoming one of the most heavily applied deep learning architectures across modalities,
277_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinat.md
https://huggingface.co/docs/transformers/en/model_doc/dinat/#overview
.md
*Transformers are quickly becoming one of the most heavily applied deep learning architectures across modalities, domains, and tasks. In vision, on top of ongoing efforts into plain transformers, hierarchical transformers have also gained significant attention, thanks to their performance and easy integration into existing frameworks. These models typically employ localized attention mechanisms, such as the sliding-window Neighborhood Attention (NA)
277_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinat.md
https://huggingface.co/docs/transformers/en/model_doc/dinat/#overview
.md
These models typically employ localized attention mechanisms, such as the sliding-window Neighborhood Attention (NA) or Swin Transformer's Shifted Window Self Attention. While effective at reducing self attention's quadratic complexity, local attention weakens two of the most desirable properties of self attention: long range inter-dependency modeling, and global receptive field. In this paper, we introduce Dilated Neighborhood Attention (DiNA), a natural, flexible and
277_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinat.md
https://huggingface.co/docs/transformers/en/model_doc/dinat/#overview
.md
and global receptive field. In this paper, we introduce Dilated Neighborhood Attention (DiNA), a natural, flexible and efficient extension to NA that can capture more global context and expand receptive fields exponentially at no additional cost. NA's local attention and DiNA's sparse global attention complement each other, and therefore we introduce Dilated Neighborhood Attention Transformer (DiNAT), a new hierarchical vision transformer built upon both.
277_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinat.md
https://huggingface.co/docs/transformers/en/model_doc/dinat/#overview
.md
introduce Dilated Neighborhood Attention Transformer (DiNAT), a new hierarchical vision transformer built upon both. DiNAT variants enjoy significant improvements over strong baselines such as NAT, Swin, and ConvNeXt. Our large model is faster and ahead of its Swin counterpart by 1.5% box AP in COCO object detection, 1.3% mask AP in COCO instance segmentation, and 1.1% mIoU in ADE20K semantic segmentation.
277_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinat.md
https://huggingface.co/docs/transformers/en/model_doc/dinat/#overview
.md
1.3% mask AP in COCO instance segmentation, and 1.1% mIoU in ADE20K semantic segmentation. Paired with new frameworks, our large variant is the new state of the art panoptic segmentation model on COCO (58.2 PQ) and ADE20K (48.5 PQ), and instance segmentation model on Cityscapes (44.5 AP) and ADE20K (35.4 AP) (no extra data). It also matches the state of the art specialized semantic segmentation models on ADE20K (58.2 mIoU), and ranks second on Cityscapes (84.5 mIoU) (no extra data). * <img
277_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinat.md
https://huggingface.co/docs/transformers/en/model_doc/dinat/#overview
.md
and ranks second on Cityscapes (84.5 mIoU) (no extra data). * <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/dilated-neighborhood-attention-pattern.jpg" alt="drawing" width="600"/> <small> Neighborhood Attention with different dilation values. Taken from the <a href="https://arxiv.org/abs/2209.15001">original paper</a>.</small> This model was contributed by [Ali Hassani](https://huggingface.co/alihassanijr).
277_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinat.md
https://huggingface.co/docs/transformers/en/model_doc/dinat/#overview
.md
This model was contributed by [Ali Hassani](https://huggingface.co/alihassanijr). The original code can be found [here](https://github.com/SHI-Labs/Neighborhood-Attention-Transformer).
277_1_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinat.md
https://huggingface.co/docs/transformers/en/model_doc/dinat/#usage-tips
.md
DiNAT can be used as a *backbone*. When `output_hidden_states = True`, it will output both `hidden_states` and `reshaped_hidden_states`. The `reshaped_hidden_states` have a shape of `(batch, num_channels, height, width)` rather than `(batch_size, height, width, num_channels)`. Notes: - DiNAT depends on [NATTEN](https://github.com/SHI-Labs/NATTEN/)'s implementation of Neighborhood Attention and Dilated Neighborhood Attention.
277_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinat.md
https://huggingface.co/docs/transformers/en/model_doc/dinat/#usage-tips
.md
You can install it with pre-built wheels for Linux by referring to [shi-labs.com/natten](https://shi-labs.com/natten), or build it on your system by running `pip install natten`. Note that the latter will likely take time to compile. NATTEN does not support Windows devices yet. - Only a patch size of 4 is supported at the moment.
277_2_1
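A minimal sketch of the `reshaped_hidden_states` output described above, using a randomly initialized model (NATTEN must be installed for the model to run):

```python
import torch
from transformers import DinatConfig, DinatModel

config = DinatConfig()
model = DinatModel(config)

pixel_values = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    outputs = model(pixel_values, output_hidden_states=True)

# Channels-first feature maps of shape (batch, num_channels, height, width).
for feature_map in outputs.reshaped_hidden_states:
    print(feature_map.shape)
```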
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinat.md
https://huggingface.co/docs/transformers/en/model_doc/dinat/#resources
.md
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DiNAT. <PipelineTag pipeline="image-classification"/> - [`DinatForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
277_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinat.md
https://huggingface.co/docs/transformers/en/model_doc/dinat/#resources
.md
- See also: [Image classification task guide](../tasks/image_classification) If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
277_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinat.md
https://huggingface.co/docs/transformers/en/model_doc/dinat/#dinatconfig
.md
This is the configuration class to store the configuration of a [`DinatModel`]. It is used to instantiate a Dinat model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Dinat [shi-labs/dinat-mini-in1k-224](https://huggingface.co/shi-labs/dinat-mini-in1k-224) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
277_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinat.md
https://huggingface.co/docs/transformers/en/model_doc/dinat/#dinatconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: patch_size (`int`, *optional*, defaults to 4): The size (resolution) of each patch. NOTE: Only patch size of 4 is supported at the moment. num_channels (`int`, *optional*, defaults to 3): The number of input channels. embed_dim (`int`, *optional*, defaults to 64): Dimensionality of patch embedding.
277_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinat.md
https://huggingface.co/docs/transformers/en/model_doc/dinat/#dinatconfig
.md
The number of input channels. embed_dim (`int`, *optional*, defaults to 64): Dimensionality of patch embedding. depths (`List[int]`, *optional*, defaults to `[3, 4, 6, 5]`): Number of layers in each level of the encoder. num_heads (`List[int]`, *optional*, defaults to `[2, 4, 8, 16]`): Number of attention heads in each layer of the Transformer encoder. kernel_size (`int`, *optional*, defaults to 7): Neighborhood Attention kernel size.
277_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinat.md
https://huggingface.co/docs/transformers/en/model_doc/dinat/#dinatconfig
.md
kernel_size (`int`, *optional*, defaults to 7): Neighborhood Attention kernel size. dilations (`List[List[int]]`, *optional*, defaults to `[[1, 8, 1], [1, 4, 1, 4], [1, 2, 1, 2, 1, 2], [1, 1, 1, 1, 1]]`): Dilation value of each NA layer in the Transformer encoder. mlp_ratio (`float`, *optional*, defaults to 3.0): Ratio of MLP hidden dimensionality to embedding dimensionality. qkv_bias (`bool`, *optional*, defaults to `True`): Whether or not a learnable bias should be added to the queries, keys and values.
277_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinat.md
https://huggingface.co/docs/transformers/en/model_doc/dinat/#dinatconfig
.md
Whether or not a learnable bias should be added to the queries, keys and values. hidden_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout probability for all fully connected layers in the embeddings and encoder. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. drop_path_rate (`float`, *optional*, defaults to 0.1): Stochastic depth rate. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
277_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinat.md
https://huggingface.co/docs/transformers/en/model_doc/dinat/#dinatconfig
.md
Stochastic depth rate. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the layer normalization layers.
277_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinat.md
https://huggingface.co/docs/transformers/en/model_doc/dinat/#dinatconfig
.md
layer_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the layer normalization layers. layer_scale_init_value (`float`, *optional*, defaults to 0.0): The initial value for the layer scale. Disabled if <=0. out_features (`List[str]`, *optional*): If used as backbone, list of features to output. Can be any of `"stem"`, `"stage1"`, `"stage2"`, etc. (depending on how many stages the model has). If unset and `out_indices` is set, will default to the
277_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinat.md
https://huggingface.co/docs/transformers/en/model_doc/dinat/#dinatconfig
.md
(depending on how many stages the model has). If unset and `out_indices` is set, will default to the corresponding stages. If unset and `out_indices` is unset, will default to the last stage. Must be in the same order as defined in the `stage_names` attribute. out_indices (`List[int]`, *optional*): If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how many stages the model has). If unset and `out_features` is set, will default to the corresponding stages.
277_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinat.md
https://huggingface.co/docs/transformers/en/model_doc/dinat/#dinatconfig
.md
many stages the model has). If unset and `out_features` is set, will default to the corresponding stages. If unset and `out_features` is unset, will default to the last stage. Must be in the same order as defined in the `stage_names` attribute. Example: ```python >>> from transformers import DinatConfig, DinatModel
277_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinat.md
https://huggingface.co/docs/transformers/en/model_doc/dinat/#dinatconfig
.md
>>> # Initializing a Dinat shi-labs/dinat-mini-in1k-224 style configuration
>>> configuration = DinatConfig()

>>> # Initializing a model (with random weights) from the shi-labs/dinat-mini-in1k-224 style configuration
>>> model = DinatModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
277_4_9
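A sketch of using the `out_features` argument described above through the backbone class; the `DinatBackbone` class and the `"stage1"`-style stage names follow the usual transformers backbone conventions and are assumptions here:

```python
import torch
from transformers import DinatBackbone, DinatConfig

# Request the feature maps of the last two stages only.
config = DinatConfig(out_features=["stage3", "stage4"])
backbone = DinatBackbone(config)

pixel_values = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    outputs = backbone(pixel_values)

for feature_map in outputs.feature_maps:
    print(feature_map.shape)
```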
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinat.md
https://huggingface.co/docs/transformers/en/model_doc/dinat/#dinatmodel
.md
The bare Dinat Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`DinatConfig`]): Model configuration class with all the parameters of the model.
277_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinat.md
https://huggingface.co/docs/transformers/en/model_doc/dinat/#dinatmodel
.md
behavior. Parameters: config ([`DinatConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
277_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinat.md
https://huggingface.co/docs/transformers/en/model_doc/dinat/#dinatforimageclassification
.md
Dinat Model transformer with an image classification head on top (a linear layer on top of the final hidden state of the [CLS] token) e.g. for ImageNet. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config ([`DinatConfig`]): Model configuration class with all the parameters of the model.
277_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/dinat.md
https://huggingface.co/docs/transformers/en/model_doc/dinat/#dinatforimageclassification
.md
behavior. Parameters: config ([`DinatConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
277_6_1
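A short inference sketch with the `shi-labs/dinat-mini-in1k-224` checkpoint mentioned above (NATTEN required):

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, DinatForImageClassification

image_processor = AutoImageProcessor.from_pretrained("shi-labs/dinat-mini-in1k-224")
model = DinatForImageClassification.from_pretrained("shi-labs/dinat-mini-in1k-224")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

print(model.config.id2label[logits.argmax(-1).item()])
```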
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
278_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
278_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#overview
.md
The Wav2Vec2-Conformer was added to an updated version of [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino. The official results of the model can be found in Table 3 and Table 4 of the paper.
278_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#overview
.md
The official results of the model can be found in Table 3 and Table 4 of the paper. The Wav2Vec2-Conformer weights were released by the Meta AI team within the [Fairseq library](https://github.com/pytorch/fairseq/blob/main/examples/wav2vec/README.md#pre-trained-models). This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The original code can be found [here](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec).
278_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#overview
.md
The original code can be found [here](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec). Note: Meta (FAIR) released a new version of [Wav2Vec2-BERT 2.0](https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert) - it's pretrained on 4.5M hours of audio. We especially recommend using it for fine-tuning tasks, e.g. as per [this guide](https://huggingface.co/blog/fine-tune-w2v2-bert).
278_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#usage-tips
.md
- Wav2Vec2-Conformer follows the same architecture as Wav2Vec2, but replaces the *Attention*-block with a *Conformer*-block as introduced in [Conformer: Convolution-augmented Transformer for Speech Recognition](https://arxiv.org/abs/2005.08100). - For the same number of layers, Wav2Vec2-Conformer requires more parameters than Wav2Vec2, but also yields an improved word error rate. - Wav2Vec2-Conformer uses the same tokenizer and feature extractor as Wav2Vec2.
278_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#usage-tips
.md
an improved word error rate. - Wav2Vec2-Conformer uses the same tokenizer and feature extractor as Wav2Vec2. - Wav2Vec2-Conformer can use either no relative position embeddings, Transformer-XL-like position embeddings, or rotary position embeddings by setting the correct `config.position_embeddings_type`.
278_2_1
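A minimal sketch of selecting the position embedding variant via `config.position_embeddings_type`, as described in the tip above; the `"rotary"` value used here is an illustrative choice:

```python
from transformers import Wav2Vec2ConformerConfig, Wav2Vec2ConformerModel

# Randomly initialized Conformer model using rotary position embeddings.
config = Wav2Vec2ConformerConfig(position_embeddings_type="rotary")
model = Wav2Vec2ConformerModel(config)

print(model.config.position_embeddings_type)
```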
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#resources
.md
- [Audio classification task guide](../tasks/audio_classification) - [Automatic speech recognition task guide](../tasks/asr)
278_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2-conformer.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-conformer/#wav2vec2conformerconfig
.md
This is the configuration class to store the configuration of a [`Wav2Vec2ConformerModel`]. It is used to instantiate a Wav2Vec2Conformer model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Wav2Vec2Conformer [facebook/wav2vec2-conformer-rel-pos-large](https://huggingface.co/facebook/wav2vec2-conformer-rel-pos-large) architecture.
278_4_0