source (stringclasses, 470 values)
url (stringlengths, 49–167)
file_type (stringclasses, 1 value)
chunk (stringlengths, 1–512)
chunk_id (stringlengths, 5–9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xclip.md
https://huggingface.co/docs/transformers/en/model_doc/xclip/#xclipvisionconfig
.md
The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. initializer_factor (`float`, *optional*, defaults to 1): A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing). drop_path_rate (`float`, *optional*, defaults to 0.0): Stochastic depth rate. Example: ```python
138_6_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xclip.md
https://huggingface.co/docs/transformers/en/model_doc/xclip/#xclipvisionconfig
.md
testing). drop_path_rate (`float`, *optional*, defaults to 0.0): Stochastic depth rate. Example: ```python >>> from transformers import XCLIPVisionModel, XCLIPVisionConfig
138_6_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xclip.md
https://huggingface.co/docs/transformers/en/model_doc/xclip/#xclipvisionconfig
.md
>>> # Initializing a XCLIPVisionModel with microsoft/xclip-base-patch32 style configuration >>> configuration = XCLIPVisionConfig() >>> # Initializing a XCLIPVisionModel model from the microsoft/xclip-base-patch32 style configuration >>> model = XCLIPVisionModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
138_6_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xclip.md
https://huggingface.co/docs/transformers/en/model_doc/xclip/#xclipmodel
.md
This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`XCLIPConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
138_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xclip.md
https://huggingface.co/docs/transformers/en/model_doc/xclip/#xclipmodel
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward - get_text_features - get_video_features
138_7_1
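A minimal usage sketch for the `forward` / `get_text_features` / `get_video_features` methods listed above, assuming the `microsoft/xclip-base-patch32` checkpoint mentioned earlier and its paired `AutoProcessor`; the dummy 8-frame clip is a placeholder for a decoded video.

```python
import numpy as np
import torch
from transformers import AutoProcessor, XCLIPModel

processor = AutoProcessor.from_pretrained("microsoft/xclip-base-patch32")
model = XCLIPModel.from_pretrained("microsoft/xclip-base-patch32")

# Dummy clip: 8 RGB frames (the default number of frames for this checkpoint).
video = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(8)]

text_inputs = processor(text=["playing sports", "cooking"], return_tensors="pt", padding=True)
video_inputs = processor(videos=video, return_tensors="pt")

with torch.no_grad():
    text_features = model.get_text_features(**text_inputs)     # one embedding per text prompt
    video_features = model.get_video_features(**video_inputs)  # one embedding per video
print(text_features.shape, video_features.shape)
```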
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xclip.md
https://huggingface.co/docs/transformers/en/model_doc/xclip/#xcliptextmodel
.md
No docstring available for XCLIPTextModel Methods: forward
138_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/xclip.md
https://huggingface.co/docs/transformers/en/model_doc/xclip/#xclipvisionmodel
.md
No docstring available for XCLIPVisionModel Methods: forward
138_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
139_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
139_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#overview
.md
The VideoMAE model was proposed in [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang. VideoMAE extends masked auto encoders ([MAE](vit_mae)) to video, claiming state-of-the-art performance on several video classification benchmarks. The abstract from the paper is the following:
139_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#overview
.md
*Pre-training video transformers on extra large-scale datasets is generally required to achieve premier performance on relatively small datasets. In this paper, we show that video masked autoencoders (VideoMAE) are data-efficient learners for self-supervised video pre-training (SSVP). We are inspired by the recent ImageMAE and propose customized video tube masking and reconstruction. These simple designs turn out to be effective for overcoming information leakage caused by the temporal correlation during
139_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#overview
.md
These simple designs turn out to be effective for overcoming information leakage caused by the temporal correlation during video reconstruction. We obtain three important findings on SSVP: (1) An extremely high proportion of masking ratio (i.e., 90% to 95%) still yields favorable performance of VideoMAE. The temporally redundant video content enables higher masking ratio than that of images. (2) VideoMAE achieves impressive results on very small datasets (i.e., around 3k-4k videos) without using any extra
139_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#overview
.md
of images. (2) VideoMAE achieves impressive results on very small datasets (i.e., around 3k-4k videos) without using any extra data. This is partially ascribed to the challenging task of video reconstruction to enforce high-level structure learning. (3) VideoMAE shows that data quality is more important than data quantity for SSVP. Domain shift between pre-training and target datasets are important issues in SSVP. Notably, our VideoMAE with the vanilla ViT backbone can achieve 83.9% on Kinetics-400, 75.3%
139_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#overview
.md
are important issues in SSVP. Notably, our VideoMAE with the vanilla ViT backbone can achieve 83.9% on Kinetics-400, 75.3% on Something-Something V2, 90.8% on UCF101, and 61.1% on HMDB51 without using any extra data.*
139_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#overview
.md
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/videomae_architecture.jpeg" alt="drawing" width="600"/> <small> VideoMAE pre-training. Taken from the <a href="https://arxiv.org/abs/2203.12602">original paper</a>. </small> This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/MCG-NJU/VideoMAE).
139_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#using-scaled-dot-product-attention-sdpa
.md
PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the [official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention) page for more information.
139_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#using-scaled-dot-product-attention-sdpa
.md
page for more information. SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set `attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used.

```python
import torch
from transformers import VideoMAEForVideoClassification

model = VideoMAEForVideoClassification.from_pretrained(
    "MCG-NJU/videomae-base-finetuned-kinetics",
    attn_implementation="sdpa",
    torch_dtype=torch.float16,
)
...
```
139_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#using-scaled-dot-product-attention-sdpa
.md
... ``` For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`). On a local benchmark (A100-40GB, PyTorch 2.3.0, OS Ubuntu 22.04) with `float32` and `MCG-NJU/videomae-base-finetuned-kinetics` model, we saw the following speedups during inference. | Batch size | Average inference time (ms), eager mode | Average inference time (ms), sdpa mode | Speed up, Sdpa / Eager (x) |
139_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#using-scaled-dot-product-attention-sdpa
.md
|--------------|-------------------------------------------|-------------------------------------------|------------------------------|
| 1 | 37 | 10 | 3.7 |
| 2 | 24 | 18 | 1.33 |
139_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#using-scaled-dot-product-attention-sdpa
.md
| 4 | 43 | 32 | 1.34 |
| 8 | 84 | 60 | 1.4 |
139_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#resources
.md
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with VideoMAE. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. **Video classification** - [A notebook](https://github.com/huggingface/notebooks/blob/main/examples/video_classification.ipynb) that shows how
139_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#resources
.md
- [A notebook](https://github.com/huggingface/notebooks/blob/main/examples/video_classification.ipynb) that shows how to fine-tune a VideoMAE model on a custom dataset. - [Video classification task guide](../tasks/video_classification) - [A 🤗 Space](https://huggingface.co/spaces/sayakpaul/video-classification-ucf101-subset) showing how to perform inference with a video classification model.
139_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#videomaeconfig
.md
This is the configuration class to store the configuration of a [`VideoMAEModel`]. It is used to instantiate a VideoMAE model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the VideoMAE [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
139_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#videomaeconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: image_size (`int`, *optional*, defaults to 224): The size (resolution) of each image. patch_size (`int`, *optional*, defaults to 16): The size (resolution) of each patch. num_channels (`int`, *optional*, defaults to 3): The number of input channels. num_frames (`int`, *optional*, defaults to 16):
139_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#videomaeconfig
.md
num_channels (`int`, *optional*, defaults to 3): The number of input channels. num_frames (`int`, *optional*, defaults to 16): The number of frames in each video. tubelet_size (`int`, *optional*, defaults to 2): The temporal size of each tubelet, i.e. the number of frames grouped into a single tube token. hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12):
139_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#videomaeconfig
.md
Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
139_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#videomaeconfig
.md
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02):
139_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#videomaeconfig
.md
The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. qkv_bias (`bool`, *optional*, defaults to `True`): Whether to add a bias to the queries, keys and values. use_mean_pooling (`bool`, *optional*, defaults to `True`):
139_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#videomaeconfig
.md
Whether to add a bias to the queries, keys and values. use_mean_pooling (`bool`, *optional*, defaults to `True`): Whether to mean pool the final hidden states instead of using the final hidden state of the [CLS] token. decoder_num_attention_heads (`int`, *optional*, defaults to 6): Number of attention heads for each attention layer in the decoder. decoder_hidden_size (`int`, *optional*, defaults to 384): Dimensionality of the decoder. decoder_num_hidden_layers (`int`, *optional*, defaults to 4):
139_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#videomaeconfig
.md
Dimensionality of the decoder. decoder_num_hidden_layers (`int`, *optional*, defaults to 4): Number of hidden layers in the decoder. decoder_intermediate_size (`int`, *optional*, defaults to 1536): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the decoder. norm_pix_loss (`bool`, *optional*, defaults to `True`): Whether to normalize the target patch pixels. Example: ```python >>> from transformers import VideoMAEConfig, VideoMAEModel
139_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#videomaeconfig
.md
>>> # Initializing a VideoMAE videomae-base style configuration >>> configuration = VideoMAEConfig() >>> # Randomly initializing a model from the configuration >>> model = VideoMAEModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
139_4_8
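As a complement to the example above, a hedged sketch (not taken from the docs) of how the arguments just listed determine the number of tube tokens the encoder processes; the values are the documented defaults.

```python
from transformers import VideoMAEConfig, VideoMAEModel

config = VideoMAEConfig(image_size=224, patch_size=16, num_frames=16, tubelet_size=2)

# Each frame contributes (image_size / patch_size)^2 patches; tubelet_size frames share one token.
patches_per_frame = (config.image_size // config.patch_size) ** 2                  # 14 * 14 = 196
num_tube_tokens = (config.num_frames // config.tubelet_size) * patches_per_frame   # 8 * 196 = 1568
print(num_tube_tokens)

model = VideoMAEModel(config)  # randomly initialized weights
```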
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#videomaefeatureextractor
.md
No docstring available for VideoMAEFeatureExtractor Methods: __call__
139_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#videomaeimageprocessor
.md
Constructs a VideoMAE image processor. Args: do_resize (`bool`, *optional*, defaults to `True`): Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by the `do_resize` parameter in the `preprocess` method. size (`Dict[str, int]` *optional*, defaults to `{"shortest_edge": 224}`): Size of the output image after resizing. The shortest edge of the image will be resized to
139_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#videomaeimageprocessor
.md
Size of the output image after resizing. The shortest edge of the image will be resized to `size["shortest_edge"]` while maintaining the aspect ratio of the original image. Can be overridden by `size` in the `preprocess` method. resample (`PILImageResampling`, *optional*, defaults to `Resampling.BILINEAR`): Resampling filter to use if resizing the image. Can be overridden by the `resample` parameter in the `preprocess` method. do_center_crop (`bool`, *optional*, defaults to `True`):
139_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#videomaeimageprocessor
.md
`preprocess` method. do_center_crop (`bool`, *optional*, defaults to `True`): Whether to center crop the image to the specified `crop_size`. Can be overridden by the `do_center_crop` parameter in the `preprocess` method. crop_size (`Dict[str, int]`, *optional*, defaults to `{"height": 224, "width": 224}`): Size of the image after applying the center crop. Can be overridden by the `crop_size` parameter in the `preprocess` method. do_rescale (`bool`, *optional*, defaults to `True`):
139_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#videomaeimageprocessor
.md
`preprocess` method. do_rescale (`bool`, *optional*, defaults to `True`): Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the `do_rescale` parameter in the `preprocess` method. rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): Defines the scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the `preprocess` method. do_normalize (`bool`, *optional*, defaults to `True`):
139_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#videomaeimageprocessor
.md
in the `preprocess` method. do_normalize (`bool`, *optional*, defaults to `True`): Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess` method. image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`): Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
139_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#videomaeimageprocessor
.md
channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method. image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`): Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method. Methods: preprocess
139_6_5
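A minimal preprocessing sketch based on the defaults documented above (resize, center crop to 224x224, rescale, normalize); the random frames are placeholders for a decoded 16-frame clip.

```python
import numpy as np
from transformers import VideoMAEImageProcessor

processor = VideoMAEImageProcessor()  # default arguments as documented above

# Placeholder clip: 16 RGB frames of size 360x640 (height, width, channels).
video = [np.random.randint(0, 256, (360, 640, 3), dtype=np.uint8) for _ in range(16)]

inputs = processor(video, return_tensors="pt")
print(inputs["pixel_values"].shape)  # (batch, num_frames, channels, height, width) = (1, 16, 3, 224, 224)
```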
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#videomaemodel
.md
The bare VideoMAE Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`VideoMAEConfig`]): Model configuration class with all the parameters of the model.
139_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#videomaemodel
.md
behavior. Parameters: config ([`VideoMAEConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
139_7_1
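A hedged forward-pass sketch for the bare model, assuming the `MCG-NJU/videomae-base` checkpoint referenced in the configuration docs; a real pipeline would decode 16 frames from a video file instead of using random data.

```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEModel

processor = VideoMAEImageProcessor.from_pretrained("MCG-NJU/videomae-base")
model = VideoMAEModel.from_pretrained("MCG-NJU/videomae-base")

video = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(16)]
inputs = processor(video, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, 1568, 768): one hidden state per tube token
```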
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#videomaeforpretraining
.md
`VideoMAEForPreTraining` includes the decoder on top for self-supervised pre-training. The VideoMAE Model transformer with the decoder on top for self-supervised pre-training. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
139_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#videomaeforpretraining
.md
behavior. Parameters: config ([`VideoMAEConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
139_8_1
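A hedged sketch of the self-supervised objective described above: mask roughly 90% of the tube tokens with `bool_masked_pos` and let the decoder reconstruct them. The checkpoint name and masking ratio follow the overview above; the random clip is a placeholder.

```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForPreTraining

processor = VideoMAEImageProcessor.from_pretrained("MCG-NJU/videomae-base")
model = VideoMAEForPreTraining.from_pretrained("MCG-NJU/videomae-base")

video = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(16)]
pixel_values = processor(video, return_tensors="pt").pixel_values

patches_per_frame = (model.config.image_size // model.config.patch_size) ** 2
seq_length = (model.config.num_frames // model.config.tubelet_size) * patches_per_frame
bool_masked_pos = torch.rand(1, seq_length) < 0.9  # True = masked (~90% masking ratio)

outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
print(outputs.loss)  # reconstruction loss on the masked tube tokens
```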
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#videomaeforvideoclassification
.md
VideoMAE Model transformer with a video classification head on top (a linear layer on top of the average pooled hidden states of all tokens) e.g. for Kinetics-400. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`VideoMAEConfig`]): Model configuration class with all the parameters of the model.
139_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/videomae.md
https://huggingface.co/docs/transformers/en/model_doc/videomae/#videomaeforvideoclassification
.md
behavior. Parameters: config ([`VideoMAEConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
139_9_1
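A hedged inference sketch with the Kinetics-400 fine-tuned checkpoint used in the SDPA benchmark above; the random frames stand in for a real 16-frame clip.

```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

processor = VideoMAEImageProcessor.from_pretrained("MCG-NJU/videomae-base-finetuned-kinetics")
model = VideoMAEForVideoClassification.from_pretrained("MCG-NJU/videomae-base-finetuned-kinetics")

video = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(16)]
inputs = processor(video, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one score per Kinetics-400 class
print(model.config.id2label[logits.argmax(-1).item()])
```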
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/
.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
140_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
140_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#overview
.md
The Vision Transformer (ViT) model was proposed in [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. It's the first paper that successfully trains a Transformer encoder on ImageNet, attaining
140_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#overview
.md
Uszkoreit, Neil Houlsby. It's the first paper that successfully trains a Transformer encoder on ImageNet, attaining very good results compared to familiar convolutional architectures. The abstract from the paper is the following: *While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with
140_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#overview
.md
applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of
140_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#overview
.md
sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.*
140_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#overview
.md
substantially fewer computational resources to train.* <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/vit_architecture.jpg" alt="drawing" width="600"/> <small> ViT architecture. Taken from the <a href="https://arxiv.org/abs/2010.11929">original paper.</a> </small> Following the original Vision Transformer, some follow-up works have been made:
140_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#overview
.md
Following the original Vision Transformer, some follow-up works have been made: - [DeiT](deit) (Data-efficient Image Transformers) by Facebook AI. DeiT models are distilled vision transformers. The authors of DeiT also released more efficiently trained ViT models, which you can directly plug into [`ViTModel`] or [`ViTForImageClassification`]. There are 4 variants available (in 3 different sizes): *facebook/deit-tiny-patch16-224*,
140_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#overview
.md
[`ViTForImageClassification`]. There are 4 variants available (in 3 different sizes): *facebook/deit-tiny-patch16-224*, *facebook/deit-small-patch16-224*, *facebook/deit-base-patch16-224* and *facebook/deit-base-patch16-384*. Note that one should use [`DeiTImageProcessor`] in order to prepare images for the model. - [BEiT](beit) (BERT pre-training of Image Transformers) by Microsoft Research. BEiT models outperform supervised pre-trained
140_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#overview
.md
- [BEiT](beit) (BERT pre-training of Image Transformers) by Microsoft Research. BEiT models outperform supervised pre-trained vision transformers using a self-supervised method inspired by BERT (masked image modeling) and based on a VQ-VAE. - DINO (a method for self-supervised training of Vision Transformers) by Facebook AI. Vision Transformers trained using the DINO method show very interesting properties not seen with convolutional models. They are capable of segmenting
140_1_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#overview
.md
the DINO method show very interesting properties not seen with convolutional models. They are capable of segmenting objects, without having ever been trained to do so. DINO checkpoints can be found on the [hub](https://huggingface.co/models?other=dino). - [MAE](vit_mae) (Masked Autoencoders) by Facebook AI. By pre-training Vision Transformers to reconstruct pixel values for a high portion
140_1_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#overview
.md
(75%) of masked patches (using an asymmetric encoder-decoder architecture), the authors show that this simple method outperforms supervised pre-training after fine-tuning. This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code (written in JAX) can be found [here](https://github.com/google-research/vision_transformer). Note that we converted the weights from Ross Wightman's [timm library](https://github.com/rwightman/pytorch-image-models),
140_1_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#overview
.md
Note that we converted the weights from Ross Wightman's [timm library](https://github.com/rwightman/pytorch-image-models), who already converted the weights from JAX to PyTorch. Credits go to him!
140_1_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#usage-tips
.md
- To feed images to the Transformer encoder, each image is split into a sequence of fixed-size non-overlapping patches, which are then linearly embedded. A [CLS] token is added to serve as a representation of an entire image, which can be used for classification. The authors also add absolute position embeddings, and feed the resulting sequence of vectors to a standard Transformer encoder. - As the Vision Transformer expects each image to be of the same size (resolution), one can use
140_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#usage-tips
.md
- As the Vision Transformer expects each image to be of the same size (resolution), one can use [`ViTImageProcessor`] to resize (or rescale) and normalize images for the model. - Both the patch resolution and image resolution used during pre-training or fine-tuning are reflected in the name of each checkpoint. For example, `google/vit-base-patch16-224` refers to a base-sized architecture with patch
140_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#usage-tips
.md
each checkpoint. For example, `google/vit-base-patch16-224` refers to a base-sized architecture with patch resolution of 16x16 and fine-tuning resolution of 224x224. All checkpoints can be found on the [hub](https://huggingface.co/models?search=vit). - The available checkpoints are either (1) pre-trained on [ImageNet-21k](http://www.image-net.org/) (a collection of
140_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#usage-tips
.md
- The available checkpoints are either (1) pre-trained on [ImageNet-21k](http://www.image-net.org/) (a collection of 14 million images and 21k classes) only, or (2) also fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/) (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). - The Vision Transformer was pre-trained using a resolution of 224x224. During fine-tuning, it is often beneficial to
140_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#usage-tips
.md
- The Vision Transformer was pre-trained using a resolution of 224x224. During fine-tuning, it is often beneficial to use a higher resolution than pre-training [(Touvron et al., 2019)](https://arxiv.org/abs/1906.06423), [(Kolesnikov et al., 2020)](https://arxiv.org/abs/1912.11370). In order to fine-tune at higher resolution, the authors perform 2D interpolation of the pre-trained position embeddings, according to their location in the original image.
140_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#usage-tips
.md
2D interpolation of the pre-trained position embeddings, according to their location in the original image. - The best results are obtained with supervised pre-training, which is not the case in NLP. The authors also performed an experiment with a self-supervised pre-training objective, namely masked patch prediction (inspired by masked language modeling). With this approach, the smaller ViT-B/16 model achieves 79.9% accuracy on ImageNet, a significant
140_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#usage-tips
.md
language modeling). With this approach, the smaller ViT-B/16 model achieves 79.9% accuracy on ImageNet, a significant improvement of 2% over training from scratch, but still 4% behind supervised pre-training.
140_2_6
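Putting the tips above together, a hedged end-to-end sketch: `ViTImageProcessor` resizes and normalizes the image, and `google/vit-base-patch16-224` (patch resolution 16, fine-tuning resolution 224, fine-tuned on ImageNet) classifies it. The COCO image URL is only an illustrative example input.

```python
import requests
import torch
from PIL import Image
from transformers import ViTForImageClassification, ViTImageProcessor

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # illustrative test image
image = Image.open(requests.get(url, stream=True).raw)

processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # one score per ImageNet-1k class
print(model.config.id2label[logits.argmax(-1).item()])
```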
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#using-scaled-dot-product-attention-sdpa
.md
PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the [official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention) page for more information.
140_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#using-scaled-dot-product-attention-sdpa
.md
page for more information. SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set `attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used.

```python
import torch
from transformers import ViTForImageClassification

model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224",
    attn_implementation="sdpa",
    torch_dtype=torch.float16,
)
...
```
140_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#using-scaled-dot-product-attention-sdpa
.md
... ``` For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`). On a local benchmark (A100-40GB, PyTorch 2.3.0, OS Ubuntu 22.04) with `float32` and `google/vit-base-patch16-224` model, we saw the following speedups during inference. | Batch size | Average inference time (ms), eager mode | Average inference time (ms), sdpa mode | Speed up, Sdpa / Eager (x) |
140_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#using-scaled-dot-product-attention-sdpa
.md
|--------------|-------------------------------------------|-------------------------------------------|------------------------------|
| 1 | 7 | 6 | 1.17 |
| 2 | 8 | 6 | 1.33 |
140_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#using-scaled-dot-product-attention-sdpa
.md
| 4 | 8 | 6 | 1.33 |
| 8 | 8 | 6 | 1.33 |
140_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#resources
.md
Demo notebooks regarding inference as well as fine-tuning ViT on custom data can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/VisionTransformer).
140_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#resources
.md
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. `ViTForImageClassification` is supported by: <PipelineTag pipeline="image-classification"/>
140_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#resources
.md
`ViTForImageClassification` is supported by: <PipelineTag pipeline="image-classification"/> - A blog post on how to [Fine-Tune ViT for Image Classification with Hugging Face Transformers](https://huggingface.co/blog/fine-tune-vit) - A blog post on [Image Classification with Hugging Face Transformers and `Keras`](https://www.philschmid.de/image-classification-huggingface-transformers-keras)
140_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#resources
.md
- A notebook on [Fine-tuning for Image Classification with Hugging Face Transformers](https://github.com/huggingface/notebooks/blob/main/examples/image_classification.ipynb) - A notebook on how to [Fine-tune the Vision Transformer on CIFAR-10 with the Hugging Face Trainer](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_the_%F0%9F%A4%97_Trainer.ipynb)
140_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#resources
.md
- A notebook on how to [Fine-tune the Vision Transformer on CIFAR-10 with PyTorch Lightning](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb) ⚗️ Optimization - A blog post on how to [Accelerate Vision Transformer (ViT) with Quantization using Optimum](https://www.philschmid.de/optimizing-vision-transformer) ⚡️ Inference
140_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#resources
.md
⚡️ Inference - A notebook on [Quick demo: Vision Transformer (ViT) by Google Brain](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Quick_demo_of_HuggingFace_version_of_Vision_Transformer_inference.ipynb) 🚀 Deploy - A blog post on [Deploying Tensorflow Vision Models in Hugging Face with TF Serving](https://huggingface.co/blog/tf-serving-vision) - A blog post on [Deploying Hugging Face ViT on Vertex AI](https://huggingface.co/blog/deploy-vertex-ai)
140_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#resources
.md
- A blog post on [Deploying Hugging Face ViT on Vertex AI](https://huggingface.co/blog/deploy-vertex-ai) - A blog post on [Deploying Hugging Face ViT on Kubernetes with TF Serving](https://huggingface.co/blog/deploy-tfserving-kubernetes)
140_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#vitconfig
.md
This is the configuration class to store the configuration of a [`ViTModel`]. It is used to instantiate a ViT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the ViT [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
140_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#vitconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12):
140_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#vitconfig
.md
Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
140_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#vitconfig
.md
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02):
140_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#vitconfig
.md
The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. image_size (`int`, *optional*, defaults to 224): The size (resolution) of each image. patch_size (`int`, *optional*, defaults to 16): The size (resolution) of each patch.
140_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#vitconfig
.md
The size (resolution) of each image. patch_size (`int`, *optional*, defaults to 16): The size (resolution) of each patch. num_channels (`int`, *optional*, defaults to 3): The number of input channels. qkv_bias (`bool`, *optional*, defaults to `True`): Whether to add a bias to the queries, keys and values. encoder_stride (`int`, *optional*, defaults to 16): Factor to increase the spatial resolution by in the decoder head for masked image modeling. Example: ```python
140_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#vitconfig
.md
Factor to increase the spatial resolution by in the decoder head for masked image modeling. Example: ```python >>> from transformers import ViTConfig, ViTModel
140_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#vitconfig
.md
>>> # Initializing a ViT vit-base-patch16-224 style configuration >>> configuration = ViTConfig() >>> # Initializing a model (with random weights) from the vit-base-patch16-224 style configuration >>> model = ViTModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
140_5_7
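In addition to the default example above, a hedged sketch of a smaller, randomly initialized variant built by overriding the documented arguments; the chosen values are illustrative, not an official configuration.

```python
from transformers import ViTConfig, ViTModel

small_config = ViTConfig(
    image_size=224,
    patch_size=32,         # (224 / 32)^2 = 49 patches instead of 196
    hidden_size=384,
    num_hidden_layers=6,
    num_attention_heads=6,
    intermediate_size=1536,
)
model = ViTModel(small_config)  # randomly initialized weights
print(sum(p.numel() for p in model.parameters()))  # rough parameter count of this variant
```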
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#vitfeatureextractor
.md
No docstring available for ViTFeatureExtractor Methods: __call__
140_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#vitimageprocessor
.md
Constructs a ViT image processor. Args: do_resize (`bool`, *optional*, defaults to `True`): Whether to resize the image's (height, width) dimensions to the specified `(size["height"], size["width"])`. Can be overridden by the `do_resize` parameter in the `preprocess` method. size (`dict`, *optional*, defaults to `{"height": 224, "width": 224}`): Size of the output image after resizing. Can be overridden by the `size` parameter in the `preprocess` method.
140_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#vitimageprocessor
.md
Size of the output image after resizing. Can be overridden by the `size` parameter in the `preprocess` method. resample (`PILImageResampling`, *optional*, defaults to `Resampling.BILINEAR`): Resampling filter to use if resizing the image. Can be overridden by the `resample` parameter in the `preprocess` method. do_rescale (`bool`, *optional*, defaults to `True`): Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the `do_rescale` parameter in the `preprocess` method.
140_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#vitimageprocessor
.md
parameter in the `preprocess` method. rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): Scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the `preprocess` method. do_normalize (`bool`, *optional*, defaults to `True`): Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess` method. image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`):
140_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#vitimageprocessor
.md
method. image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`): Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method. image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`): Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
140_7_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#vitimageprocessor
.md
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method. do_convert_rgb (`bool`, *optional*): Whether to convert the image to RGB. Methods: preprocess
140_7_4
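A minimal preprocessing sketch using the defaults documented above (resize to 224x224, rescale by 1/255, normalize with the ImageNet mean/std); the random array is a placeholder image.

```python
import numpy as np
from transformers import ViTImageProcessor

processor = ViTImageProcessor()  # default arguments as documented above

image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # placeholder RGB image
inputs = processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # (1, 3, 224, 224)
```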
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#vitimageprocessorfast
.md
Constructs a ViT image processor. Args: do_resize (`bool`, *optional*, defaults to `True`): Whether to resize the image's (height, width) dimensions to the specified `(size["height"], size["width"])`. Can be overridden by the `do_resize` parameter in the `preprocess` method. size (`dict`, *optional*, defaults to `{"height": 224, "width": 224}`): Size of the output image after resizing. Can be overridden by the `size` parameter in the `preprocess` method.
140_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#vitimageprocessorfast
.md
Size of the output image after resizing. Can be overridden by the `size` parameter in the `preprocess` method. resample (`PILImageResampling`, *optional*, defaults to `Resampling.BILINEAR`): Resampling filter to use if resizing the image. Can be overridden by the `resample` parameter in the `preprocess` method. do_rescale (`bool`, *optional*, defaults to `True`): Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the `do_rescale` parameter in the `preprocess` method.
140_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#vitimageprocessorfast
.md
parameter in the `preprocess` method. rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): Scale factor to use if rescaling the image. Can be overridden by the `rescale_factor` parameter in the `preprocess` method. do_normalize (`bool`, *optional*, defaults to `True`): Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess` method. image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`):
140_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#vitimageprocessorfast
.md
method. image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`): Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method. image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`): Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
140_8_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#vitimageprocessorfast
.md
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method. do_convert_rgb (`bool`, *optional*): Whether to convert the image to RGB. Methods: preprocess <frameworkcontent> <pt>
140_8_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#vitmodel
.md
The bare ViT Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`ViTConfig`]): Model configuration class with all the parameters of the model.
140_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#vitmodel
.md
behavior. Parameters: config ([`ViTConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
140_9_1
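A hedged feature-extraction sketch for the bare model. The `google/vit-base-patch16-224-in21k` checkpoint (ImageNet-21k pre-training only, as described in the usage tips) and the COCO image URL are assumptions chosen for illustration.

```python
import requests
import torch
from PIL import Image
from transformers import ViTImageProcessor, ViTModel

processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
model = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # illustrative test image
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, 197, 768): 196 patch tokens + 1 [CLS] token
```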
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#vitformaskedimagemodeling
.md
ViT Model with a decoder on top for masked image modeling, as proposed in [SimMIM](https://arxiv.org/abs/2111.09886). <Tip> Note that we provide a script to pre-train this model on custom data in our [examples directory](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining). </Tip> This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
140_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#vitformaskedimagemodeling
.md
</Tip> This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`ViTConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
140_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/vit.md
https://huggingface.co/docs/transformers/en/model_doc/vit/#vitformaskedimagemodeling
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
140_10_2
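A hedged sketch of the SimMIM-style masked image modeling setup: mark patches as masked via `bool_masked_pos` and let the decoder head reconstruct the pixels. The model here is randomly initialized from a default config, purely for illustration.

```python
import torch
from transformers import ViTConfig, ViTForMaskedImageModeling

config = ViTConfig()
model = ViTForMaskedImageModeling(config)  # randomly initialized, illustration only

num_patches = (config.image_size // config.patch_size) ** 2  # 14 * 14 = 196
pixel_values = torch.randn(1, config.num_channels, config.image_size, config.image_size)
bool_masked_pos = torch.rand(1, num_patches) < 0.5  # True = masked patch

outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
print(outputs.loss, outputs.reconstruction.shape)  # reconstruction: (1, 3, 224, 224)
```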