source: stringclasses (470 values)
url: stringlengths (49-167)
file_type: stringclasses (1 value)
chunk: stringlengths (1-512)
chunk_id: stringlengths (5-9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nat.md
https://huggingface.co/docs/transformers/en/model_doc/nat/#resources
.md
- See also: [Image classification task guide](../tasks/image_classification)

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
361_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nat.md
https://huggingface.co/docs/transformers/en/model_doc/nat/#natconfig
.md
This is the configuration class to store the configuration of a [`NatModel`]. It is used to instantiate a Nat model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Nat [shi-labs/nat-mini-in1k-224](https://huggingface.co/shi-labs/nat-mini-in1k-224) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
361_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nat.md
https://huggingface.co/docs/transformers/en/model_doc/nat/#natconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: patch_size (`int`, *optional*, defaults to 4): The size (resolution) of each patch. NOTE: Only patch size of 4 is supported at the moment. num_channels (`int`, *optional*, defaults to 3): The number of input channels. embed_dim (`int`, *optional*, defaults to 64): Dimensionality of patch embedding.
361_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nat.md
https://huggingface.co/docs/transformers/en/model_doc/nat/#natconfig
.md
The number of input channels. embed_dim (`int`, *optional*, defaults to 64): Dimensionality of patch embedding. depths (`List[int]`, *optional*, defaults to `[3, 4, 6, 5]`): Number of layers in each level of the encoder. num_heads (`List[int]`, *optional*, defaults to `[2, 4, 8, 16]`): Number of attention heads in each layer of the Transformer encoder. kernel_size (`int`, *optional*, defaults to 7): Neighborhood Attention kernel size. mlp_ratio (`float`, *optional*, defaults to 3.0):
361_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nat.md
https://huggingface.co/docs/transformers/en/model_doc/nat/#natconfig
.md
Neighborhood Attention kernel size. mlp_ratio (`float`, *optional*, defaults to 3.0): Ratio of MLP hidden dimensionality to embedding dimensionality. qkv_bias (`bool`, *optional*, defaults to `True`): Whether or not a learnable bias should be added to the queries, keys and values. hidden_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout probability for all fully connected layers in the embeddings and encoder. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0):
361_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nat.md
https://huggingface.co/docs/transformers/en/model_doc/nat/#natconfig
.md
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. drop_path_rate (`float`, *optional*, defaults to 0.1): Stochastic depth rate. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. initializer_range (`float`, *optional*, defaults to 0.02):
361_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nat.md
https://huggingface.co/docs/transformers/en/model_doc/nat/#natconfig
.md
`"selu"` and `"gelu_new"` are supported. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the layer normalization layers. layer_scale_init_value (`float`, *optional*, defaults to 0.0): The initial value for the layer scale. Disabled if <=0. out_features (`List[str]`, *optional*):
361_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nat.md
https://huggingface.co/docs/transformers/en/model_doc/nat/#natconfig
.md
The initial value for the layer scale. Disabled if <=0. out_features (`List[str]`, *optional*): If used as backbone, list of features to output. Can be any of `"stem"`, `"stage1"`, `"stage2"`, etc. (depending on how many stages the model has). If unset and `out_indices` is set, will default to the corresponding stages. If unset and `out_indices` is unset, will default to the last stage. Must be in the same order as defined in the `stage_names` attribute. out_indices (`List[int]`, *optional*):
361_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nat.md
https://huggingface.co/docs/transformers/en/model_doc/nat/#natconfig
.md
same order as defined in the `stage_names` attribute. out_indices (`List[int]`, *optional*): If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how many stages the model has). If unset and `out_features` is set, will default to the corresponding stages. If unset and `out_features` is unset, will default to the last stage. Must be in the same order as defined in the `stage_names` attribute. Example: ```python
361_5_7
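To make the `out_features`/`out_indices` selection above concrete, here is a minimal sketch (not part of the original docstring); the stage names and the index/name alignment shown in the comments are assumptions based on the description above.

```python
from transformers import NatConfig

# Select backbone outputs by stage name; the matching indices are filled in automatically.
config = NatConfig(out_features=["stage3", "stage4"])
print(config.stage_names)   # e.g. ["stem", "stage1", "stage2", "stage3", "stage4"]
print(config.out_indices)   # indices aligned with the requested stage names

# Equivalently, select by index and let the stage names be derived.
config = NatConfig(out_indices=[3, 4])
print(config.out_features)
```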
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nat.md
https://huggingface.co/docs/transformers/en/model_doc/nat/#natconfig
.md
same order as defined in the `stage_names` attribute. Example: ```python >>> from transformers import NatConfig, NatModel
361_5_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nat.md
https://huggingface.co/docs/transformers/en/model_doc/nat/#natconfig
.md
>>> # Initializing a Nat shi-labs/nat-mini-in1k-224 style configuration
>>> configuration = NatConfig()

>>> # Initializing a model (with random weights) from the shi-labs/nat-mini-in1k-224 style configuration
>>> model = NatModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
361_5_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nat.md
https://huggingface.co/docs/transformers/en/model_doc/nat/#natmodel
.md
The bare Nat Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`NatConfig`]): Model configuration class with all the parameters of the model.
361_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nat.md
https://huggingface.co/docs/transformers/en/model_doc/nat/#natmodel
.md
behavior. Parameters: config ([`NatConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
361_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nat.md
https://huggingface.co/docs/transformers/en/model_doc/nat/#natforimageclassification
.md
Nat Model transformer with an image classification head on top (a linear layer on top of the final hidden state of the [CLS] token) e.g. for ImageNet. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`NatConfig`]): Model configuration class with all the parameters of the model.
361_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/nat.md
https://huggingface.co/docs/transformers/en/model_doc/nat/#natforimageclassification
.md
behavior. Parameters: config ([`NatConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
361_7_1
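As a quick illustration of the classification head described above, here is a minimal inference sketch (not part of the original page). It assumes the `shi-labs/nat-mini-in1k-224` checkpoint referenced earlier and that the `natten` dependency required by Nat is installed.

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, NatForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("shi-labs/nat-mini-in1k-224")
model = NatForImageClassification.from_pretrained("shi-labs/nat-mini-in1k-224")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The checkpoint is trained on ImageNet-1k, so the label comes from that label set.
print(model.config.id2label[logits.argmax(-1).item()])
```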
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
362_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
362_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#overview
.md
The Pixtral model was released by the Mistral AI team in a [blog post](https://mistral.ai/news/pixtral-12b/). Pixtral is a multimodal version of [Mistral](mistral), incorporating a 400 million parameter vision encoder trained from scratch. The intro from the blog says the following:
362_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#overview
.md
*Pixtral is trained to understand both natural images and documents, achieving 52.5% on the MMMU reasoning benchmark, surpassing a number of larger models. The model shows strong abilities in tasks such as chart and figure understanding, document question answering, multimodal reasoning and instruction following. Pixtral is able to ingest images at their natural resolution and aspect ratio, giving the user flexibility on the number of tokens used to process an image. Pixtral is also able to process any
362_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#overview
.md
aspect ratio, giving the user flexibility on the number of tokens used to process an image. Pixtral is also able to process any number of images in its long context window of 128K tokens. Unlike previous open-source models, Pixtral does not compromise on text benchmark performance to excel in multimodal tasks.*
362_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#overview
.md
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/pixtral_architecture.webp" alt="drawing" width="600"/> <small> Pixtral architecture. Taken from the <a href="https://mistral.ai/news/pixtral-12b/">blog post.</a> </small> Tips: - Pixtral is a multimodal model, taking images and text as input, and producing text as output.
362_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#overview
.md
Tips:

- Pixtral is a multimodal model, taking images and text as input, and producing text as output.
- This model follows the [Llava](llava) architecture. The model uses [`PixtralVisionModel`] for its vision encoder, and [`MistralForCausalLM`] for its language decoder.
- The main contribution is the 2D RoPE (rotary position embeddings) on the images, and support for arbitrary image sizes (the images are not padded together nor are they resized).
362_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#overview
.md
- Similar to [Llava](llava), the model internally replaces the `[IMG]` token placeholders with image embeddings from the vision encoder. The format for one or multiple prompts is the following:

```
"<s>[INST][IMG]\nWhat are the things I should be cautious about when I visit this place?[/INST]"
```
362_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#overview
.md
"<s>[INST][IMG]\nWhat are the things I should be cautious about when I visit this place?[/INST]" ``` Then, the processor will replace each `[IMG]` token with a number of `[IMG]` tokens that depend on the height and the width of each image. Each *row* of the image is separated by an `[IMG_BREAK]` token, and each image is separated by an `[IMG_END]` token. It's advised to use the `apply_chat_template` method of the processor, which takes care of all of this. See the [usage section](#usage) for more info.
362_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#overview
.md
This model was contributed by [amyeroberts](https://huggingface.co/amyeroberts) and [ArthurZ](https://huggingface.co/ArthurZ). The original code can be found [here](https://github.com/vllm-project/vllm/pull/8377).
362_1_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#usage
.md
At inference time, it's advised to use the processor's `apply_chat_template` method, which correctly formats the prompt for the model:

```python
from transformers import AutoProcessor, LlavaForConditionalGeneration
from PIL import Image

model_id = "mistral-community/pixtral-12b"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id).to("cuda")
362_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#usage
.md
url_dog = "https://picsum.photos/id/237/200/300" url_mountain = "https://picsum.photos/seed/picsum/200/300" chat = [ { "role": "user", "content": [ {"type": "text", "content": "Can this animal"}, {"type": "image"}, {"type": "text", "content": "live here?"}, {"type": "image"} ] } ]
362_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#usage
.md
prompt = processor.apply_chat_template(chat)
inputs = processor(text=prompt, images=[url_dog, url_mountain], return_tensors="pt").to(model.device)
generate_ids = model.generate(**inputs, max_new_tokens=500)
output = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
```
362_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#pixtralvisionconfig
.md
This is the configuration class to store the configuration of a [`PixtralVisionModel`]. It is used to instantiate a Pixtral vision encoder according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to the vision encoder used by Pixtral-12B. e.g. [pixtral-hf/pixtral-9b](https://huggingface.co/pixtral-hf/pixtral-9b)
362_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#pixtralvisionconfig
.md
e.g. [pixtral-hf/pixtral-9b](https://huggingface.co/pixtral-hf/pixtral-9b) Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: hidden_size (`int`, *optional*, defaults to 1024): Dimension of the hidden representations. intermediate_size (`int`, *optional*, defaults to 4096): Dimension of the MLP representations. num_hidden_layers (`int`, *optional*, defaults to 24):
362_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#pixtralvisionconfig
.md
Dimension of the MLP representations. num_hidden_layers (`int`, *optional*, defaults to 24): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads in the Transformer encoder. num_channels (`int`, *optional*, defaults to 3): Number of input channels in the input images. image_size (`int`, *optional*, defaults to 1024): Max dimension of the input images. patch_size (`int`, *optional*, defaults to 16): Size of the image patches.
362_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#pixtralvisionconfig
.md
Max dimension of the input images. patch_size (`int`, *optional*, defaults to 16): Size of the image patches. hidden_act (`str`, *optional*, defaults to `"gelu"`): Activation function used in the hidden layers. attention_dropout (`float`, *optional*, defaults to 0.0): Dropout probability for the attention layers. rope_theta (`float`, *optional*, defaults to 10000.0): The base period of the RoPE embeddings. initializer_range (`float`, *optional*, defaults to 0.02):
362_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#pixtralvisionconfig
.md
The base period of the RoPE embeddings. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. Example: ```python >>> from transformers import PixtralVisionModel, PixtralVisionConfig
362_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#pixtralvisionconfig
.md
>>> # Initializing a Pixtral-12B style configuration
>>> configuration = PixtralVisionConfig()

>>> # Initializing a model (with randomly initialized weights) from the configuration
>>> model = PixtralVisionModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
362_3_5
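As a small follow-up (not in the original doc), the default `image_size` and `patch_size` above imply the maximum patch grid per image; this sketch just spells out that arithmetic.

```python
from transformers import PixtralVisionConfig

config = PixtralVisionConfig()  # image_size=1024, patch_size=16 by default
patches_per_side = config.image_size // config.patch_size
print(patches_per_side)                     # 64
print(patches_per_side * patches_per_side)  # 4096 patches for the largest square input
```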
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#pixtralvisionmodel
.md
The bare Pixtral vision encoder outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
362_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#pixtralvisionmodel
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`PixtralVisionConfig`]): Model configuration class with all the parameters of the vision encoder. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
362_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#pixtralvisionmodel
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
362_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#pixtralimageprocessor
.md
Constructs a Pixtral image processor. Args: do_resize (`bool`, *optional*, defaults to `True`): Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by `do_resize` in the `preprocess` method. size (`Dict[str, int]` *optional*, defaults to `{"longest_edge": 1024}`): Size of the maximum dimension of either the height or width dimension of the image. Used to control how
362_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#pixtralimageprocessor
.md
Size of the maximum dimension of either the height or width dimension of the image. Used to control how images are resized. If either the height or width is greater than `size["longest_edge"]`, then both the height and width are rescaled by `height / ratio`, `width / ratio`, where `ratio = max(height / longest_edge, width / longest_edge)` patch_size (`Dict[str, int]`, *optional*, defaults to `{"height": 16, "width": 16}`):
362_5_1
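To make the resizing rule above concrete, here is a small sketch (not taken from the library) of the described computation; the actual image processor may round differently (e.g. to multiples of the patch size).

```python
def resize_to_longest_edge(height: int, width: int, longest_edge: int = 1024) -> tuple[int, int]:
    """Shrink (height, width) so neither side exceeds `longest_edge`, keeping the aspect ratio."""
    ratio = max(height / longest_edge, width / longest_edge)
    if ratio > 1:
        height = int(height / ratio)
        width = int(width / ratio)
    return height, width

print(resize_to_longest_edge(3000, 2000))  # (1024, 682) with this rounding choice
```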
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#pixtralimageprocessor
.md
patch_size (`Dict[str, int]` *optional*, defaults to `{"height": 16, "width": 16}`): Size of the patches in the model, used to calculate the output image size. Can be overridden by `patch_size` in the `preprocess` method. resample (`PILImageResampling`, *optional*, defaults to `Resampling.BICUBIC`): Resampling filter to use if resizing the image. Can be overridden by `resample` in the `preprocess` method. do_rescale (`bool`, *optional*, defaults to `True`):
362_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#pixtralimageprocessor
.md
do_rescale (`bool`, *optional*, defaults to `True`): Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by `do_rescale` in the `preprocess` method. rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): Scale factor to use if rescaling the image. Can be overridden by `rescale_factor` in the `preprocess` method. do_normalize (`bool`, *optional*, defaults to `True`):
362_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#pixtralimageprocessor
.md
method. do_normalize (`bool`, *optional*, defaults to `True`): Whether to normalize the image. Can be overridden by `do_normalize` in the `preprocess` method. image_mean (`float` or `List[float]`, *optional*, defaults to `[0.48145466, 0.4578275, 0.40821073]`): Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
362_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#pixtralimageprocessor
.md
channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method. image_std (`float` or `List[float]`, *optional*, defaults to `[0.26862954, 0.26130258, 0.27577711]`): Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method.
362_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#pixtralimageprocessor
.md
Can be overridden by the `image_std` parameter in the `preprocess` method. do_convert_rgb (`bool`, *optional*, defaults to `True`): Whether to convert the image to RGB. Methods: preprocess
362_5_6
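A minimal usage sketch for the `preprocess` entry point described above (not part of the original doc); it assumes the `mistral-community/pixtral-12b` checkpoint from the usage section ships this image processor configuration.

```python
import requests
from PIL import Image
from transformers import AutoImageProcessor

image_processor = AutoImageProcessor.from_pretrained("mistral-community/pixtral-12b")

url = "https://picsum.photos/id/237/200/300"
image = Image.open(requests.get(url, stream=True).raw)

inputs = image_processor(images=image, return_tensors="pt")
print(list(inputs.keys()))  # processed pixel values (and related metadata) ready for the model
```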
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#pixtralimageprocessorfast
.md
Constructs a fast Pixtral image processor that leverages torchvision. Args: do_resize (`bool`, *optional*, defaults to `True`): Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by `do_resize` in the `preprocess` method. size (`Dict[str, int]` *optional*, defaults to `{"longest_edge": 1024}`): Size of the maximum dimension of either the height or width dimension of the image. Used to control how
362_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#pixtralimageprocessorfast
.md
Size of the maximum dimension of either the height or width dimension of the image. Used to control how images are resized. If either the height or width is greater than `size["longest_edge"]`, then both the height and width are rescaled by `height / ratio`, `width / ratio`, where `ratio = max(height / longest_edge, width / longest_edge)` patch_size (`Dict[str, int]`, *optional*, defaults to `{"height": 16, "width": 16}`):
362_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#pixtralimageprocessorfast
.md
patch_size (`Dict[str, int]` *optional*, defaults to `{"height": 16, "width": 16}`): Size of the patches in the model, used to calculate the output image size. Can be overridden by `patch_size` in the `preprocess` method. resample (`PILImageResampling`, *optional*, defaults to `Resampling.BICUBIC`): Resampling filter to use if resizing the image. Can be overridden by `resample` in the `preprocess` method. do_rescale (`bool`, *optional*, defaults to `True`):
362_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#pixtralimageprocessorfast
.md
do_rescale (`bool`, *optional*, defaults to `True`): Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by `do_rescale` in the `preprocess` method. rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): Scale factor to use if rescaling the image. Can be overridden by `rescale_factor` in the `preprocess` method. do_normalize (`bool`, *optional*, defaults to `True`):
362_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#pixtralimageprocessorfast
.md
method. do_normalize (`bool`, *optional*, defaults to `True`): Whether to normalize the image. Can be overridden by `do_normalize` in the `preprocess` method. image_mean (`float` or `List[float]`, *optional*, defaults to `[0.48145466, 0.4578275, 0.40821073]`): Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
362_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#pixtralimageprocessorfast
.md
channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method. image_std (`float` or `List[float]`, *optional*, defaults to `[0.26862954, 0.26130258, 0.27577711]`): Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method.
362_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#pixtralimageprocessorfast
.md
Can be overridden by the `image_std` parameter in the `preprocess` method. do_convert_rgb (`bool`, *optional*, defaults to `True`): Whether to convert the image to RGB. Methods: preprocess
362_6_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#pixtralprocessor
.md
Constructs a Pixtral processor which wraps a Pixtral image processor and a Pixtral tokenizer into a single processor. [`PixtralProcessor`] offers all the functionalities of [`PixtralImageProcessor`] and [`LlamaTokenizerFast`]. See the [`~PixtralProcessor.__call__`] and [`~PixtralProcessor.decode`] for more information. Args: image_processor ([`PixtralImageProcessor`], *optional*): The image processor is a required input. tokenizer ([`LlamaTokenizerFast`], *optional*): The tokenizer is a required input.
362_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#pixtralprocessor
.md
The image processor is a required input. tokenizer ([`LlamaTokenizerFast`], *optional*): The tokenizer is a required input. patch_size (`int`, *optional*, defaults to 16): Patch size from the vision tower. chat_template (`str`, *optional*): A Jinja template which will be used to convert lists of messages in a chat into a tokenizable string. image_token (`str`, *optional*, defaults to `"[IMG]"`): Special token used to denote image location. image_break_token (`str`, *optional*, defaults to `"[IMG_BREAK]"`):
362_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/pixtral.md
https://huggingface.co/docs/transformers/en/model_doc/pixtral/#pixtralprocessor
.md
Special token used to denote image location. image_break_token (`str`, *optional*, defaults to `"[IMG_BREAK]"`): Special token used to denote the end of a line of pixels in an image. image_end_token (`str`, *optional*, defaults to `"[IMG_END]"`): Special token used to denote the end of an image input.
362_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything_v2.md
https://huggingface.co/docs/transformers/en/model_doc/depth_anything_v2/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
363_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything_v2.md
https://huggingface.co/docs/transformers/en/model_doc/depth_anything_v2/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
363_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything_v2.md
https://huggingface.co/docs/transformers/en/model_doc/depth_anything_v2/#overview
.md
Depth Anything V2 was introduced in [the paper of the same name](https://arxiv.org/abs/2406.09414) by Lihe Yang et al. It uses the same architecture as the original [Depth Anything model](depth_anything), but uses synthetic data and a larger-capacity teacher model to achieve much finer and more robust depth predictions. The abstract from the paper is the following:
363_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything_v2.md
https://huggingface.co/docs/transformers/en/model_doc/depth_anything_v2/#overview
.md
*This work presents Depth Anything V2. Without pursuing fancy techniques, we aim to reveal crucial findings to pave the way towards building a powerful monocular depth estimation model. Notably, compared with V1, this version produces much finer and more robust depth predictions through three key practices: 1) replacing all labeled real images with synthetic images, 2) scaling up the capacity of our teacher model, and 3) teaching student models via the bridge of large-scale pseudo-labeled real images.
363_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything_v2.md
https://huggingface.co/docs/transformers/en/model_doc/depth_anything_v2/#overview
.md
up the capacity of our teacher model, and 3) teaching student models via the bridge of large-scale pseudo-labeled real images. Compared with the latest models built on Stable Diffusion, our models are significantly more efficient (more than 10x faster) and more accurate. We offer models of different scales (ranging from 25M to 1.3B params) to support extensive scenarios. Benefiting from their strong generalization capability, we fine-tune them with metric depth labels to obtain our metric depth models. In
363_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything_v2.md
https://huggingface.co/docs/transformers/en/model_doc/depth_anything_v2/#overview
.md
from their strong generalization capability, we fine-tune them with metric depth labels to obtain our metric depth models. In addition to our models, considering the limited diversity and frequent noise in current test sets, we construct a versatile evaluation benchmark with precise annotations and diverse scenes to facilitate future research.*
363_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything_v2.md
https://huggingface.co/docs/transformers/en/model_doc/depth_anything_v2/#overview
.md
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/depth_anything_overview.jpg" alt="drawing" width="600"/> <small> Depth Anything overview. Taken from the <a href="https://arxiv.org/abs/2401.10891">original paper</a>.</small> The Depth Anything models were contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/DepthAnything/Depth-Anything-V2).
363_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything_v2.md
https://huggingface.co/docs/transformers/en/model_doc/depth_anything_v2/#usage-example
.md
There are two main ways to use Depth Anything V2: either use the pipeline API, which abstracts away all the complexity for you, or use the `DepthAnythingForDepthEstimation` class yourself.
363_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything_v2.md
https://huggingface.co/docs/transformers/en/model_doc/depth_anything_v2/#pipeline-api
.md
The pipeline allows you to use the model in a few lines of code:

```python
>>> from transformers import pipeline
>>> from PIL import Image
>>> import requests

>>> # load pipe
>>> pipe = pipeline(task="depth-estimation", model="depth-anything/Depth-Anything-V2-Small-hf")

>>> # load image
>>> url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> # inference
>>> depth = pipe(image)["depth"]
```
363_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything_v2.md
https://huggingface.co/docs/transformers/en/model_doc/depth_anything_v2/#using-the-model-yourself
.md
If you want to do the pre- and post-processing yourself, here's how to do that:

```python
>>> from transformers import AutoImageProcessor, AutoModelForDepthEstimation
>>> import torch
>>> import numpy as np
>>> from PIL import Image
>>> import requests

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
363_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything_v2.md
https://huggingface.co/docs/transformers/en/model_doc/depth_anything_v2/#using-the-model-yourself
.md
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image_processor = AutoImageProcessor.from_pretrained("depth-anything/Depth-Anything-V2-Small-hf") >>> model = AutoModelForDepthEstimation.from_pretrained("depth-anything/Depth-Anything-V2-Small-hf") >>> # prepare image for the model >>> inputs = image_processor(images=image, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs)
363_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything_v2.md
https://huggingface.co/docs/transformers/en/model_doc/depth_anything_v2/#using-the-model-yourself
.md
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # interpolate to original size and visualize the prediction
>>> post_processed_output = image_processor.post_process_depth_estimation(
...     outputs,
...     target_sizes=[(image.height, image.width)],
... )
363_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything_v2.md
https://huggingface.co/docs/transformers/en/model_doc/depth_anything_v2/#using-the-model-yourself
.md
>>> predicted_depth = post_processed_output[0]["predicted_depth"]
>>> depth = (predicted_depth - predicted_depth.min()) / (predicted_depth.max() - predicted_depth.min())
>>> depth = depth.detach().cpu().numpy() * 255
>>> depth = Image.fromarray(depth.astype("uint8"))
```
363_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything_v2.md
https://huggingface.co/docs/transformers/en/model_doc/depth_anything_v2/#resources
.md
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Depth Anything.

- [Monocular depth estimation task guide](../tasks/monocular_depth_estimation)
- [Depth Anything V2 demo](https://huggingface.co/spaces/depth-anything/Depth-Anything-V2).
363_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything_v2.md
https://huggingface.co/docs/transformers/en/model_doc/depth_anything_v2/#resources
.md
- [Depth Anything V2 demo](https://huggingface.co/spaces/depth-anything/Depth-Anything-V2).
- A notebook showcasing inference with [`DepthAnythingForDepthEstimation`] can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/Depth%20Anything/Predicting_depth_in_an_image_with_Depth_Anything.ipynb). 🌎
- [Core ML conversion of the `small` variant for use on Apple Silicon](https://huggingface.co/apple/coreml-depth-anything-v2-small).
363_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything_v2.md
https://huggingface.co/docs/transformers/en/model_doc/depth_anything_v2/#resources
.md
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
363_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything_v2.md
https://huggingface.co/docs/transformers/en/model_doc/depth_anything_v2/#depthanythingconfig
.md
This is the configuration class to store the configuration of a [`DepthAnythingModel`]. It is used to instantiate a DepthAnything model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the DepthAnything [LiheYoung/depth-anything-small-hf](https://huggingface.co/LiheYoung/depth-anything-small-hf) architecture.
363_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything_v2.md
https://huggingface.co/docs/transformers/en/model_doc/depth_anything_v2/#depthanythingconfig
.md
[LiheYoung/depth-anything-small-hf](https://huggingface.co/LiheYoung/depth-anything-small-hf) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: backbone_config (`Union[Dict[str, Any], PretrainedConfig]`, *optional*): The configuration of the backbone model. Only used in case `is_hybrid` is `True` or in case you want to leverage the [`AutoBackbone`] API.
363_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything_v2.md
https://huggingface.co/docs/transformers/en/model_doc/depth_anything_v2/#depthanythingconfig
.md
leverage the [`AutoBackbone`] API. backbone (`str`, *optional*): Name of backbone to use when `backbone_config` is `None`. If `use_pretrained_backbone` is `True`, this will load the corresponding pretrained weights from the timm or transformers library. If `use_pretrained_backbone` is `False`, this loads the backbone's config and uses that to initialize the backbone with random weights. use_pretrained_backbone (`bool`, *optional*, defaults to `False`): Whether to use pretrained weights for the backbone.
363_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything_v2.md
https://huggingface.co/docs/transformers/en/model_doc/depth_anything_v2/#depthanythingconfig
.md
use_pretrained_backbone (`bool`, *optional*, defaults to `False`): Whether to use pretrained weights for the backbone. use_timm_backbone (`bool`, *optional*, defaults to `False`): Whether or not to use the `timm` library for the backbone. If set to `False`, will use the [`AutoBackbone`] API. backbone_kwargs (`dict`, *optional*): Keyword arguments to be passed to AutoBackbone when loading from a checkpoint e.g. `{'out_indices': (0, 1, 2, 3)}`. Cannot be specified if `backbone_config` is set.
363_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything_v2.md
https://huggingface.co/docs/transformers/en/model_doc/depth_anything_v2/#depthanythingconfig
.md
e.g. `{'out_indices': (0, 1, 2, 3)}`. Cannot be specified if `backbone_config` is set. patch_size (`int`, *optional*, defaults to 14): The size of the patches to extract from the backbone features. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. reassemble_hidden_size (`int`, *optional*, defaults to 384): The number of input channels of the reassemble layers.
363_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything_v2.md
https://huggingface.co/docs/transformers/en/model_doc/depth_anything_v2/#depthanythingconfig
.md
reassemble_hidden_size (`int`, *optional*, defaults to 384): The number of input channels of the reassemble layers. reassemble_factors (`List[int]`, *optional*, defaults to `[4, 2, 1, 0.5]`): The up/downsampling factors of the reassemble layers. neck_hidden_sizes (`List[int]`, *optional*, defaults to `[48, 96, 192, 384]`): The hidden sizes to project to for the feature maps of the backbone. fusion_hidden_size (`int`, *optional*, defaults to 64): The number of channels before fusion.
363_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything_v2.md
https://huggingface.co/docs/transformers/en/model_doc/depth_anything_v2/#depthanythingconfig
.md
fusion_hidden_size (`int`, *optional*, defaults to 64): The number of channels before fusion. head_in_index (`int`, *optional*, defaults to -1): The index of the features to use in the depth estimation head. head_hidden_size (`int`, *optional*, defaults to 32): The number of output channels in the second convolution of the depth estimation head. depth_estimation_type (`str`, *optional*, defaults to `"relative"`): The type of depth estimation to use. Can be one of `["relative", "metric"]`.
363_6_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything_v2.md
https://huggingface.co/docs/transformers/en/model_doc/depth_anything_v2/#depthanythingconfig
.md
The type of depth estimation to use. Can be one of `["relative", "metric"]`. max_depth (`float`, *optional*): The maximum depth to use for the "metric" depth estimation head. 20 should be used for indoor models and 80 for outdoor models. For "relative" depth estimation, this value is ignored. Example: ```python >>> from transformers import DepthAnythingConfig, DepthAnythingForDepthEstimation
363_6_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything_v2.md
https://huggingface.co/docs/transformers/en/model_doc/depth_anything_v2/#depthanythingconfig
.md
>>> # Initializing a DepthAnything small style configuration
>>> configuration = DepthAnythingConfig()

>>> # Initializing a model from the DepthAnything small style configuration
>>> model = DepthAnythingForDepthEstimation(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
363_6_8
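Building on the `depth_estimation_type` and `max_depth` arguments documented above, here is a hedged sketch (not from the original page) of a metric-depth configuration; the value 80 follows the outdoor guidance quoted above, and the model is randomly initialized.

```python
from transformers import DepthAnythingConfig, DepthAnythingForDepthEstimation

# "relative" (the default) ignores max_depth; "metric" uses it as the depth scale.
metric_config = DepthAnythingConfig(depth_estimation_type="metric", max_depth=80)
model = DepthAnythingForDepthEstimation(metric_config)  # random weights, configuration only
print(model.config.depth_estimation_type, model.config.max_depth)
```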
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything_v2.md
https://huggingface.co/docs/transformers/en/model_doc/depth_anything_v2/#depthanythingfordepthestimation
.md
Depth Anything Model with a depth estimation head on top (consisting of 3 convolutional layers) e.g. for KITTI, NYUv2. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`DepthAnythingConfig`]): Model configuration class with all the parameters of the model.
363_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/depth_anything_v2.md
https://huggingface.co/docs/transformers/en/model_doc/depth_anything_v2/#depthanythingfordepthestimation
.md
behavior. Parameters: config ([`DepthAnythingConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
363_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/
.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
364_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
364_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#albert
.md
<div class="flex flex-wrap space-x-1"> <a href="https://huggingface.co/models?filter=albert"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-albert-blueviolet"> </a> <a href="https://huggingface.co/spaces/docs-demos/albert-base-v2"> <img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"> </a> </div>
364_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#overview
.md
The ALBERT model was proposed in [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942) by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. It presents two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT:

- Splitting the embedding matrix into two smaller matrices.
- Using repeating layers split among groups.

The abstract from the paper is the following:
364_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#overview
.md
- Using repeating layers split among groups. The abstract from the paper is the following: *Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks. However, at some point further model increases become harder due to GPU/TPU memory limitations, longer training times, and unexpected model degradation. To address these problems, we present two parameter-reduction
364_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#overview
.md
longer training times, and unexpected model degradation. To address these problems, we present two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT. Comprehensive empirical evidence shows that our proposed methods lead to models that scale much better compared to the original BERT. We also use a self-supervised loss that focuses on modeling inter-sentence coherence, and show it consistently helps downstream tasks
364_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#overview
.md
self-supervised loss that focuses on modeling inter-sentence coherence, and show it consistently helps downstream tasks with multi-sentence inputs. As a result, our best model establishes new state-of-the-art results on the GLUE, RACE, and SQuAD benchmarks while having fewer parameters compared to BERT-large.* This model was contributed by [lysandre](https://huggingface.co/lysandre). The Jax version of this model was contributed by
364_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#overview
.md
This model was contributed by [lysandre](https://huggingface.co/lysandre). The Jax version of this model was contributed by [kamalkraj](https://huggingface.co/kamalkraj). The original code can be found [here](https://github.com/google-research/ALBERT).
364_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#usage-tips
.md
- ALBERT is a model with absolute position embeddings, so it's usually advised to pad the inputs on the right rather than the left.
- ALBERT uses repeating layers, which results in a small memory footprint; however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same number of (repeating) layers.
364_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#usage-tips
.md
number of (repeating) layers.

- Embedding size E is different from hidden size H, which is justified because the embeddings are context independent (one embedding vector represents one token), whereas hidden states are context dependent (one hidden state represents a sequence of tokens), so it's more logical to have H >> E. Also, the embedding matrix is large since it's V x E (V being the vocab size). If E < H, it has fewer parameters, as the worked example below shows.
- Layers are split in groups that share parameters (to save memory).
364_3_1
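To spell out the parameter-count argument in the tip above, here is a tiny worked example (not from the original page); the vocabulary, hidden, and embedding sizes are illustrative only.

```python
V = 30_000  # vocabulary size (illustrative)
H = 4_096   # hidden size (illustrative)
E = 128     # factorized embedding size (illustrative)

untied = V * H              # a single V x H embedding matrix
factorized = V * E + E * H  # a V x E lookup followed by an E x H projection
print(f"{untied:,} vs {factorized:,} parameters")  # 122,880,000 vs 4,364,288
```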
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#usage-tips
.md
- Layers are split in groups that share parameters (to save memory).
- Next sentence prediction is replaced by a sentence ordering prediction: in the inputs, we have two sentences A and B (that are consecutive) and we either feed A followed by B or B followed by A. The model must predict if they have been swapped or not.
364_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#using-scaled-dot-product-attention-sdpa
.md
PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the [official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention) page for more information.
364_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#using-scaled-dot-product-attention-sdpa
.md
page for more information. SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set `attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used.

```
import torch
from transformers import AlbertModel

model = AlbertModel.from_pretrained("albert/albert-base-v1", torch_dtype=torch.float16, attn_implementation="sdpa")
...
```

For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`).
364_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#using-scaled-dot-product-attention-sdpa
.md
... ``` For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`). On a local benchmark (GeForce RTX 2060-8GB, PyTorch 2.3.1, OS Ubuntu 20.04) with `float16`, we saw the following speedups during training and inference.
364_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#training-for-100-iterations
.md
|batch_size|seq_len|Time per batch (eager - s)|Time per batch (sdpa - s)|Speedup (%)|Eager peak mem (MB)|sdpa peak mem (MB)|Mem saving (%)|
|----------|-------|--------------------------|--------------------------|------------|--------------------|-------------------|---------------|
|2 |256 |0.028 |0.024 |14.388 |358.411 |321.088 |11.624 |
364_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#training-for-100-iterations
.md
|2 |512 |0.049 |0.041 |17.681 |753.458 |602.660 |25.022 |
|4 |256 |0.044 |0.039 |12.246 |679.534 |602.660 |12.756 |
|4 |512 |0.090 |0.076 |18.472 |1434.820 |1134.140 |26.512 |
364_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#training-for-100-iterations
.md
|8 |256 |0.081 |0.072 |12.664 |1283.825 |1134.140 |13.198 |
|8 |512 |0.170 |0.143 |18.957 |2820.398 |2219.695 |27.062 |
364_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#inference-with-50-batches
.md
|batch_size|seq_len|Per token latency eager (ms)|Per token latency SDPA (ms)|Speedup (%)|Mem eager (MB)|Mem BT (MB)|Mem saved (%)|
|----------|-------|----------------------------|---------------------------|------------|--------------|-----------|-------------|
|4 |128 |0.083 |0.071 |16.967 |48.319 |48.45 |-0.268 |
364_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/albert.md
https://huggingface.co/docs/transformers/en/model_doc/albert/#inference-with-50-batches
.md
|4 |256 |0.148 |0.127 |16.37 |63.4 |63.922 |-0.817 |
|4 |512 |0.31 |0.247 |25.473 |110.092 |94.343 |16.693 |
|8 |128 |0.137 |0.124 |11.102 |63.4 |63.66 |-0.409 |
364_6_1