# ViViT

## VivitConfig
        Number of attention heads for each attention layer in the Transformer encoder.
    intermediate_size (`int`, *optional*, defaults to 3072):
        Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
    hidden_act (`str` or `function`, *optional*, defaults to `"gelu_fast"`):
        The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
        `"relu"`, `"selu"`, `"gelu_fast"` and `"gelu_new"` are supported.
    hidden_dropout_prob (`float`, *optional*, defaults to 0.0):
        The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
    attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0):
        The dropout ratio for the attention probabilities.
    initializer_range (`float`, *optional*, defaults to 0.02):
        The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
    layer_norm_eps (`float`, *optional*, defaults to 1e-06):
        The epsilon used by the layer normalization layers.
    qkv_bias (`bool`, *optional*, defaults to `True`):
        Whether to add a bias to the queries, keys and values.

Example:

```python
>>> from transformers import VivitConfig, VivitModel

>>> # Initializing a ViViT google/vivit-b-16x2-kinetics400 style configuration
>>> configuration = VivitConfig()

>>> # Initializing a model (with random weights) from the google/vivit-b-16x2-kinetics400 style configuration
>>> model = VivitModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
## VivitImageProcessor
Constructs a Vivit image processor.

Args:
    do_resize (`bool`, *optional*, defaults to `True`):
        Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by
        the `do_resize` parameter in the `preprocess` method.
    size (`Dict[str, int]`, *optional*, defaults to `{"shortest_edge": 256}`):
        Size of the output image after resizing. The shortest edge of the image will be resized to
        `size["shortest_edge"]` while maintaining the aspect ratio of the original image. Can be overridden by
        `size` in the `preprocess` method.
    resample (`PILImageResampling`, *optional*, defaults to `Resampling.BILINEAR`):
        Resampling filter to use if resizing the image. Can be overridden by the `resample` parameter in the
        `preprocess` method.
    do_center_crop (`bool`, *optional*, defaults to `True`):
        Whether to center crop the image to the specified `crop_size`. Can be overridden by the `do_center_crop`
        parameter in the `preprocess` method.
    crop_size (`Dict[str, int]`, *optional*, defaults to `{"height": 224, "width": 224}`):
        Size of the image after applying the center crop. Can be overridden by the `crop_size` parameter in the
        `preprocess` method.
    do_rescale (`bool`, *optional*, defaults to `True`):
        Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the
        `do_rescale` parameter in the `preprocess` method.
    rescale_factor (`int` or `float`, *optional*, defaults to `1/127.5`):
        Defines the scale factor to use if rescaling the image. Can be overridden by the `rescale_factor`
        parameter in the `preprocess` method.
    offset (`bool`, *optional*, defaults to `True`):
        Whether to scale the image in both negative and positive directions. Can be overridden by the `offset`
        parameter in the `preprocess` method.
    do_normalize (`bool`, *optional*, defaults to `True`):
        Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess`
        method.
    image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`):
        Mean to use if normalizing the image. This is a float or list of floats the length of the number of
        channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
    image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`):
        Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
        number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess`
        method.

Methods: preprocess
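To see how these defaults compose, here is a minimal sketch of running the processor on a synthetic clip. The random frames, the bare `VivitImageProcessor()` instantiation, and the printed shape are illustrative assumptions rather than part of the original docstring:

```python
>>> import numpy as np
>>> from transformers import VivitImageProcessor

>>> # Dummy clip: 32 random RGB frames (height, width, channels)
>>> video = [np.random.randint(0, 256, (360, 640, 3), dtype=np.uint8) for _ in range(32)]

>>> processor = VivitImageProcessor()  # defaults: resize shortest edge to 256, center crop to 224x224
>>> inputs = processor(video, return_tensors="pt")

>>> # With rescale_factor=1/127.5 and offset=True, pixel values land in [-1, 1]
>>> inputs["pixel_values"].shape
torch.Size([1, 32, 3, 224, 224])
```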
## VivitModel
The bare ViViT Transformer model outputting raw hidden-states without any specific head on top.

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters:
    config ([`VivitConfig`]): Model configuration class with all the parameters of the model.
        Initializing with a config file does not load the weights associated with the model, only the
        configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
## VivitForVideoClassification
ViViT Transformer model with a video classification head on top (a linear layer on top of the final hidden state of the [CLS] token), e.g. for Kinetics-400.

<Tip>

Note that it's possible to fine-tune ViViT on higher-resolution frames than the ones it has been trained on, by setting `interpolate_pos_encoding` to `True` in the forward of the model. This will interpolate the pre-trained position embeddings to the higher resolution.

</Tip>

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters:
    config ([`VivitConfig`]): Model configuration class with all the parameters of the model.
        Initializing with a config file does not load the weights associated with the model, only the
        configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
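As a usage sketch (not part of the original doc), classification of a clip with the `google/vivit-b-16x2-kinetics400` checkpoint referenced above might look as follows; the random frames stand in for a real video:

```python
>>> import numpy as np
>>> import torch
>>> from transformers import VivitImageProcessor, VivitForVideoClassification

>>> # Dummy clip standing in for a real 32-frame video
>>> video = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(32)]

>>> processor = VivitImageProcessor.from_pretrained("google/vivit-b-16x2-kinetics400")
>>> model = VivitForVideoClassification.from_pretrained("google/vivit-b-16x2-kinetics400")

>>> inputs = processor(video, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # For higher-resolution frames, pass interpolate_pos_encoding=True in the forward
>>> print(model.config.id2label[int(logits.argmax(-1))])
```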
# ResNet

## Overview
The ResNet model was proposed in [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren and Jian Sun. Our implementation follows the small changes made by [Nvidia](https://catalog.ngc.nvidia.com/orgs/nvidia/resources/resnet_50_v1_5_for_pytorch): we apply `stride=2` for downsampling in the bottleneck's `3x3` conv rather than in the first `1x1`. This variant is generally known as "ResNet v1.5".

ResNet introduced residual connections, which make it possible to train networks with a previously unseen number of layers (up to 1,000). ResNet won the 2015 ILSVRC & COCO competitions, an important milestone in deep computer vision.

The abstract from the paper is the following:
*Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.

The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.*
The figure below illustrates the architecture of ResNet. Taken from the [original paper](https://arxiv.org/abs/1512.03385).

<img width="600" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/resnet_architecture.png"/>

This model was contributed by [Francesco](https://huggingface.co/Francesco). The TensorFlow version of this model was added by [amyeroberts](https://huggingface.co/amyeroberts). The original code can be found [here](https://github.com/KaimingHe/deep-residual-networks).
## Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ResNet.

<PipelineTag pipeline="image-classification"/>

- [`ResNetForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
- See also: [Image classification task guide](../tasks/image_classification)

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
## ResNetConfig
This is the configuration class to store the configuration of a [`ResNetModel`]. It is used to instantiate a ResNet model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the ResNet [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) architecture.

Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information.

Args:
    num_channels (`int`, *optional*, defaults to 3):
        The number of input channels.
    embedding_size (`int`, *optional*, defaults to 64):
        Dimensionality (hidden size) for the embedding layer.
    hidden_sizes (`List[int]`, *optional*, defaults to `[256, 512, 1024, 2048]`):
        Dimensionality (hidden size) at each stage.
    depths (`List[int]`, *optional*, defaults to `[3, 4, 6, 3]`):
        Depth (number of layers) for each stage.
    layer_type (`str`, *optional*, defaults to `"bottleneck"`):
        The layer to use, either `"basic"` (used for smaller models, like resnet-18 or resnet-34) or
        `"bottleneck"` (used for larger models like resnet-50 and above).
    hidden_act (`str`, *optional*, defaults to `"relu"`):
        The non-linear activation function in each block. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"`
        are supported.
    downsample_in_first_stage (`bool`, *optional*, defaults to `False`):
        If `True`, the first stage will downsample the inputs using a `stride` of 2.
    downsample_in_bottleneck (`bool`, *optional*, defaults to `False`):
        If `True`, the first conv 1x1 in ResNetBottleNeckLayer will downsample the inputs using a `stride` of 2.
    out_features (`List[str]`, *optional*):
        If used as backbone, list of features to output. Can be any of `"stem"`, `"stage1"`, `"stage2"`, etc.
        (depending on how many stages the model has). If unset and `out_indices` is set, will default to the
        corresponding stages. If unset and `out_indices` is unset, will default to the last stage. Must be in the
        same order as defined in the `stage_names` attribute.
    out_indices (`List[int]`, *optional*):
        If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how
        many stages the model has). If unset and `out_features` is set, will default to the corresponding stages.
        If unset and `out_features` is unset, will default to the last stage. Must be in the same order as defined
        in the `stage_names` attribute.

Example:

```python
>>> from transformers import ResNetConfig, ResNetModel

>>> # Initializing a ResNet resnet-50 style configuration
>>> configuration = ResNetConfig()

>>> # Initializing a model (with random weights) from the resnet-50 style configuration
>>> model = ResNetModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
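Since `out_features` and `out_indices` only matter when ResNet is used as a backbone, a minimal sketch may help; the `ResNetBackbone` usage and the printed shapes below are our illustration with a randomly initialized model, not part of the original page:

```python
>>> import torch
>>> from transformers import ResNetConfig, ResNetBackbone

>>> # Expose the stem and the last stage as feature maps
>>> config = ResNetConfig(out_features=["stem", "stage4"])
>>> backbone = ResNetBackbone(config)

>>> outputs = backbone(torch.randn(1, 3, 224, 224))
>>> [tuple(f.shape) for f in outputs.feature_maps]
[(1, 64, 56, 56), (1, 2048, 7, 7)]
```

Here `"stem"` is the output of the initial convolution and pooling block (stride 4 overall), while `"stage4"` is the final 2048-channel feature map at stride 32.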
<frameworkcontent>
<pt>
## ResNetModel
The bare ResNet model outputting raw features without any specific head on top.

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters:
    config ([`ResNetConfig`]): Model configuration class with all the parameters of the model.
        Initializing with a config file does not load the weights associated with the model, only the
        configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
## ResNetForImageClassification
ResNet Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for ImageNet.

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters:
    config ([`ResNetConfig`]): Model configuration class with all the parameters of the model.
        Initializing with a config file does not load the weights associated with the model, only the
        configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
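As a usage sketch (not from the original page), inference with the [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) checkpoint mentioned above could look like this; the `huggingface/cats-image` dataset is just a convenient sample image:

```python
>>> import torch
>>> from datasets import load_dataset
>>> from transformers import AutoImageProcessor, ResNetForImageClassification

>>> # Any RGB image works in place of this dataset sample
>>> image = load_dataset("huggingface/cats-image", split="test")[0]["image"]

>>> processor = AutoImageProcessor.from_pretrained("microsoft/resnet-50")
>>> model = ResNetForImageClassification.from_pretrained("microsoft/resnet-50")

>>> inputs = processor(image, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> print(model.config.id2label[int(logits.argmax(-1))])
```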
</pt>
<tf>
## TFResNetModel
No docstring available for TFResNetModel.

Methods: call
## TFResNetForImageClassification
No docstring available for TFResNetForImageClassification.

Methods: call

</tf>
<jax>
## FlaxResNetModel
No docstring available for FlaxResNetModel.

Methods: __call__
## FlaxResNetForImageClassification
No docstring available for FlaxResNetForImageClassification.

Methods: __call__

</jax>
</frameworkcontent>
# VAN
<Tip warning={true}>

This model is in maintenance mode only, we don't accept any new PRs changing its code. If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0. You can do so by running the following command: `pip install -U transformers==4.30.0`.

</Tip>
## Overview
The VAN model was proposed in [Visual Attention Network](https://arxiv.org/abs/2202.09741) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.

This paper introduces a new attention layer based on convolution operations that is able to capture both local and distant relationships. This is done by combining normal and large-kernel convolution layers. The latter uses a dilated convolution to capture distant correlations.

The abstract from the paper is the following:
*While originally designed for natural language processing tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structures. (2) The quadratic complexity is too expensive for high-resolution images. (3) It only captures spatial adaptability but ignores channel adaptability. In this paper, we propose a novel large kernel attention (LKA) module to enable self-adaptive and long-range correlations in self-attention while avoiding the above issues. We further introduce a novel neural network based on LKA, namely Visual Attention Network (VAN). While extremely simple, VAN outperforms the state-of-the-art vision transformers and convolutional neural networks with a large margin in extensive experiments, including image classification, object detection, semantic segmentation, instance segmentation, etc. Code is available at [this https URL](https://github.com/Visual-Attention-Network/VAN-Classification).*
Tips:

- VAN does not have an embedding layer, thus the `hidden_states` will have a length equal to the number of stages (see the sketch below).

The figure below illustrates the architecture of a Visual Attention Layer. Taken from the [original paper](https://arxiv.org/abs/2202.09741).

<img width="600" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/van_architecture.png"/>

This model was contributed by [Francesco](https://huggingface.co/Francesco). The original code can be found [here](https://github.com/Visual-Attention-Network/VAN-Classification).
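As a quick check of that tip, a minimal sketch with a randomly initialized model; the default four-stage configuration and the printed length are our assumptions, not from the original doc:

```python
>>> import torch
>>> from transformers import VanConfig, VanModel

>>> model = VanModel(VanConfig())  # the default configuration has 4 stages
>>> outputs = model(torch.randn(1, 3, 224, 224), output_hidden_states=True)
>>> len(outputs.hidden_states)  # one entry per stage, no extra embedding-layer entry
4
```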
## Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with VAN.

<PipelineTag pipeline="image-classification"/>

- [`VanForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
- See also: [Image classification task guide](../tasks/image_classification)

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
## VanConfig
This is the configuration class to store the configuration of a [`VanModel`]. It is used to instantiate a VAN model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the VAN [Visual-Attention-Network/van-base](https://huggingface.co/Visual-Attention-Network/van-base) architecture.

Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information.

Args:
    image_size (`int`, *optional*, defaults to 224):
        The size (resolution) of each image.
    num_channels (`int`, *optional*, defaults to 3):
        The number of input channels.
    patch_sizes (`List[int]`, *optional*, defaults to `[7, 3, 3, 3]`):
        Patch size to use in each stage's embedding layer.
    strides (`List[int]`, *optional*, defaults to `[4, 2, 2, 2]`):
        Stride size to use in each stage's embedding layer to downsample the input.
    hidden_sizes (`List[int]`, *optional*, defaults to `[64, 128, 320, 512]`):
        Dimensionality (hidden size) at each stage.
    depths (`List[int]`, *optional*, defaults to `[3, 3, 12, 3]`):
        Depth (number of layers) for each stage.
    mlp_ratios (`List[int]`, *optional*, defaults to `[8, 8, 4, 4]`):
        The expansion ratio for the MLP layer at each stage.
    hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
        The non-linear activation function (function or string) in each layer. If string, `"gelu"`, `"relu"`,
        `"selu"` and `"gelu_new"` are supported.
    initializer_range (`float`, *optional*, defaults to 0.02):
        The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
    layer_norm_eps (`float`, *optional*, defaults to 1e-06):
        The epsilon used by the layer normalization layers.
    layer_scale_init_value (`float`, *optional*, defaults to 0.01):
        The initial value for layer scaling.
    drop_path_rate (`float`, *optional*, defaults to 0.0):
        The dropout probability for stochastic depth.
    dropout_rate (`float`, *optional*, defaults to 0.0):
        The dropout probability for dropout.

Example:

```python
>>> from transformers import VanModel, VanConfig

>>> # Initializing a VAN van-base style configuration
>>> configuration = VanConfig()

>>> # Initializing a model from the van-base style configuration
>>> model = VanModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
## VanModel
The bare VAN model outputting raw features without any specific head on top. Note that VAN does not have an embedding layer.

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters:
    config ([`VanConfig`]): Model configuration class with all the parameters of the model.
        Initializing with a config file does not load the weights associated with the model, only the
        configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
## VanForImageClassification
VAN Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for ImageNet.

This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters:
    config ([`VanConfig`]): Model configuration class with all the parameters of the model.
        Initializing with a config file does not load the weights associated with the model, only the
        configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.

Methods: forward
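By analogy with the ResNet example above, a usage sketch; this is our assumption of how inference with the [Visual-Attention-Network/van-base](https://huggingface.co/Visual-Attention-Network/van-base) checkpoint looks, and per the maintenance note it should be run with `transformers==4.30.0`:

```python
>>> import torch
>>> from datasets import load_dataset
>>> from transformers import AutoImageProcessor, VanForImageClassification

>>> image = load_dataset("huggingface/cats-image", split="test")[0]["image"]

>>> processor = AutoImageProcessor.from_pretrained("Visual-Attention-Network/van-base")
>>> model = VanForImageClassification.from_pretrained("Visual-Attention-Network/van-base")

>>> inputs = processor(image, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> print(model.config.id2label[int(logits.argmax(-1))])
```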
# FlauBERT
<div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=flaubert">
<img alt="Models" src="https://img.shields.io/badge/All_model_pages-flaubert-blueviolet">
</a>
<a href="https://huggingface.co/spaces/docs-demos/flaubert_small_cased">
<img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue">
</a>
</div>
## Overview
The FlauBERT model was proposed in the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le et al. It's a transformer model pretrained using a masked language modeling (MLM) objective (like BERT).

The abstract from the paper is the following:

*Language models have become a key step to achieve state-of-the-art results in many different Natural Language Processing (NLP) tasks. Leveraging the huge amount of unlabeled texts nowadays available, they provide an efficient way to pre-train continuous word representations that can be fine-tuned for a downstream task, along with their contextualization at the sentence level. This has been widely demonstrated for English using contextualized representations (Dai and Le, 2015; Peters et al., 2018; Howard and Ruder, 2018; Radford et al., 2018; Devlin et al., 2019; Yang et al., 2019b). In this paper, we introduce and share FlauBERT, a model learned on a very large and heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for Scientific Research) Jean Zay supercomputer. We apply our French language models to diverse NLP tasks (text classification, paraphrasing, natural language inference, parsing, word sense disambiguation) and show that most of the time they outperform other pretraining approaches. Different versions of FlauBERT as well as a unified evaluation protocol for the downstream tasks, called FLUE (French Language Understanding Evaluation), are shared to the research community for further reproducible experiments in French NLP.*

This model was contributed by [formiel](https://huggingface.co/formiel). The original code can be found [here](https://github.com/getalp/Flaubert).

Tips:

- Like RoBERTa, FlauBERT was trained without the sentence ordering prediction objective (so it is trained on the MLM objective only).
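To make the MLM objective concrete, here is a minimal sketch (our illustration, not from the paper or the original page) of filling a masked French token with the `flaubert/flaubert_base_uncased` checkpoint:

```python
>>> import torch
>>> from transformers import FlaubertTokenizer, FlaubertWithLMHeadModel

>>> tokenizer = FlaubertTokenizer.from_pretrained("flaubert/flaubert_base_uncased")
>>> model = FlaubertWithLMHeadModel.from_pretrained("flaubert/flaubert_base_uncased")

>>> # FlauBERT's mask token is "<special1>" (see the tokenizer section below)
>>> inputs = tokenizer(f"Le camembert est {tokenizer.mask_token} !", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # Decode the highest-scoring prediction at the masked position
>>> mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
>>> print(tokenizer.decode(logits[0, mask_pos].argmax(-1)))
```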
## Resources
- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)
## FlaubertConfig
This is the configuration class to store the configuration of a [`FlaubertModel`] or a [`TFFlaubertModel`]. It is used to instantiate a FlauBERT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the FlauBERT [flaubert/flaubert_base_uncased](https://huggingface.co/flaubert/flaubert_base_uncased) architecture.

Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information.

Args:
    pre_norm (`bool`, *optional*, defaults to `False`):
        Whether to apply the layer normalization before or after the feed forward layer following the attention
        in each layer (Vaswani et al., Tensor2Tensor for Neural Machine Translation. 2018).
    layerdrop (`float`, *optional*, defaults to 0.0):
        Probability to drop layers during training (Fan et al., Reducing Transformer Depth on Demand with
        Structured Dropout. ICLR 2020).
    vocab_size (`int`, *optional*, defaults to 30145):
        Vocabulary size of the FlauBERT model. Defines the number of different tokens that can be represented by
        the `inputs_ids` passed when calling [`FlaubertModel`] or [`TFFlaubertModel`].
    emb_dim (`int`, *optional*, defaults to 2048):
        Dimensionality of the encoder layers and the pooler layer.
    n_layer (`int`, *optional*, defaults to 12):
        Number of hidden layers in the Transformer encoder.
    n_head (`int`, *optional*, defaults to 16):
        Number of attention heads for each attention layer in the Transformer encoder.
    dropout (`float`, *optional*, defaults to 0.1):
        The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
    attention_dropout (`float`, *optional*, defaults to 0.1):
        The dropout probability for the attention mechanism.
    gelu_activation (`bool`, *optional*, defaults to `True`):
        Whether or not to use a *gelu* activation instead of *relu*.
    sinusoidal_embeddings (`bool`, *optional*, defaults to `False`):
        Whether or not to use sinusoidal positional embeddings instead of absolute positional embeddings.
    causal (`bool`, *optional*, defaults to `False`):
        Whether or not the model should behave in a causal manner. Causal models use a triangular attention mask
        in order to only attend to the left-side context instead of a bidirectional context.
    asm (`bool`, *optional*, defaults to `False`):
        Whether or not to use an adaptive log softmax projection layer instead of a linear layer for the
        prediction layer.
    n_langs (`int`, *optional*, defaults to 1):
        The number of languages the model handles. Set to 1 for monolingual models.
    use_lang_emb (`bool`, *optional*, defaults to `True`):
        Whether to use language embeddings. Some models use additional language embeddings, see
        [the multilingual models page](http://huggingface.co/transformers/multilingual.html#xlm-language-embeddings)
        for information on how to use them.
    max_position_embeddings (`int`, *optional*, defaults to 512):
        The maximum sequence length that this model might ever be used with. Typically set this to something
        large just in case (e.g., 512 or 1024 or 2048).
    embed_init_std (`float`, *optional*, defaults to 2048^-0.5):
        The standard deviation of the truncated_normal_initializer for initializing the embedding matrices.
    init_std (`float`, *optional*, defaults to 0.02):
        The standard deviation of the truncated_normal_initializer for initializing all weight matrices except
        the embedding matrices.
    layer_norm_eps (`float`, *optional*, defaults to 1e-12):
        The epsilon used by the layer normalization layers.
    bos_index (`int`, *optional*, defaults to 0):
        The index of the beginning of sentence token in the vocabulary.
    eos_index (`int`, *optional*, defaults to 1):
        The index of the end of sentence token in the vocabulary.
    pad_index (`int`, *optional*, defaults to 2):
        The index of the padding token in the vocabulary.
    unk_index (`int`, *optional*, defaults to 3):
        The index of the unknown token in the vocabulary.
    mask_index (`int`, *optional*, defaults to 5):
        The index of the masking token in the vocabulary.
    is_encoder (`bool`, *optional*, defaults to `True`):
        Whether or not the initialized model should be a transformer encoder or decoder as seen in Vaswani et al.
    summary_type (`str`, *optional*, defaults to `"first"`):
        Argument used when doing sequence summary. Used in the sequence classification and multiple choice
        models. Has to be one of the following options:

        - `"last"`: Take the last token hidden state (like XLNet).
        - `"first"`: Take the first token hidden state (like BERT).
        - `"mean"`: Take the mean of all tokens hidden states.
        - `"cls_index"`: Supply a Tensor of classification token position (like GPT/GPT-2).
        - `"attn"`: Not implemented now, use multi-head attention.
    summary_use_proj (`bool`, *optional*, defaults to `True`):
        Argument used when doing sequence summary. Used in the sequence classification and multiple choice
        models. Whether or not to add a projection after the vector extraction.
    summary_activation (`str`, *optional*):
        Argument used when doing sequence summary. Used in the sequence classification and multiple choice
        models. Pass `"tanh"` for a tanh activation to the output, any other value will result in no activation.
    summary_proj_to_labels (`bool`, *optional*, defaults to `True`):
        Used in the sequence classification and multiple choice models. Whether the projection outputs should
        have `config.num_labels` or `config.hidden_size` classes.
    summary_first_dropout (`float`, *optional*, defaults to 0.1):
        Used in the sequence classification and multiple choice models. The dropout ratio to be used after the
        projection and activation.
    start_n_top (`int`, *optional*, defaults to 5):
        Used in the SQuAD evaluation script.
    end_n_top (`int`, *optional*, defaults to 5):
        Used in the SQuAD evaluation script.
    mask_token_id (`int`, *optional*, defaults to 0):
        Model agnostic parameter to identify masked tokens when generating text in an MLM context.
    lang_id (`int`, *optional*, defaults to 1):
        The ID of the language used by the model. This parameter is used when generating text in a given
        language.
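The original section has no usage example; by analogy with the configuration examples for the other models above, a minimal sketch might be:

```python
>>> from transformers import FlaubertConfig, FlaubertModel

>>> # Initializing a FlauBERT configuration with the defaults
>>> configuration = FlaubertConfig()

>>> # Initializing a model (with random weights) from that configuration
>>> model = FlaubertModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```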
## FlaubertTokenizer
Construct a Flaubert tokenizer. Based on Byte-Pair Encoding. The tokenization process is the following:

- Moses preprocessing and tokenization.
- Normalizing all input text.
- The argument `special_tokens` and the function `set_special_tokens` can be used to add additional symbols (like "__classify__") to a vocabulary.
- The argument `do_lowercase` controls lower casing (automatically set for pretrained vocabularies).

This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

Args:
    vocab_file (`str`):
        Vocabulary file.
    merges_file (`str`):
        Merges file.
    do_lowercase (`bool`, *optional*, defaults to `False`):
        Controls lower casing.
    unk_token (`str`, *optional*, defaults to `"<unk>"`):
        The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be
        this token instead.
    bos_token (`str`, *optional*, defaults to `"<s>"`):
        The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier
        token.

        <Tip>

        When building a sequence using special tokens, this is not the token that is used for the beginning of
        sequence. The token used is the `cls_token`.

        </Tip>

    sep_token (`str`, *optional*, defaults to `"</s>"`):
        The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences
        for sequence classification or for a text and a question for question answering. It is also used as the
        last token of a sequence built with special tokens.
    pad_token (`str`, *optional*, defaults to `"<pad>"`):
        The token used for padding, for example when batching sequences of different lengths.
    cls_token (`str`, *optional*, defaults to `"</s>"`):
        The classifier token which is used when doing sequence classification (classification of the whole
        sequence instead of per-token classification). It is the first token of the sequence when built with
        special tokens.
    mask_token (`str`, *optional*, defaults to `"<special1>"`):
        The token used for masking values. This is the token used when training this model with masked language
        modeling. This is the token which the model will try to predict.
    additional_special_tokens (`List[str]`, *optional*, defaults to `['<special0>', '<special1>', '<special2>', '<special3>', '<special4>', '<special5>', '<special6>', '<special7>', '<special8>', '<special9>']`):
        List of additional special tokens.
    lang2id (`Dict[str, int]`, *optional*):
        Dictionary mapping language string identifiers to their IDs.
    id2lang (`Dict[int, str]`, *optional*):
        Dictionary mapping language IDs to their string identifiers.
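A short usage sketch for the tokenizer (our illustration, assuming the `flaubert/flaubert_base_uncased` checkpoint):

```python
>>> from transformers import FlaubertTokenizer

>>> tokenizer = FlaubertTokenizer.from_pretrained("flaubert/flaubert_base_uncased")
>>> encoded = tokenizer("Bonjour, mon chien est mignon.")
>>> tokenizer.decode(encoded["input_ids"])  # special tokens <s> ... </s> are added automatically
```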
<frameworkcontent>
<pt>