source: stringclasses (470 values)
url: stringlengths (49–167)
file_type: stringclasses (1 value)
chunk: stringlengths (1–512)
chunk_id: stringlengths (5–9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2model
.md
The bare Qwen2 Model outputting raw hidden-states without any specific head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
190_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2model
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Qwen2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
190_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2model
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`Qwen2DecoderLayer`] Args: config: Qwen2Config Methods: forward
190_7_2
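A minimal sketch of running the bare model to obtain hidden states; the checkpoint name `Qwen/Qwen2-7B` is used here for illustration and can be swapped for any Qwen2 checkpoint:

```python
>>> from transformers import AutoTokenizer, Qwen2Model

>>> tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-7B")
>>> model = Qwen2Model.from_pretrained("Qwen/Qwen2-7B")

>>> inputs = tokenizer("Hello, world!", return_tensors="pt")
>>> outputs = model(**inputs)
>>> list(outputs.last_hidden_state.shape)  # [batch_size, sequence_length, hidden_size]
```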
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2forcausallm
.md
No docstring available for Qwen2ForCausalLM Methods: forward
190_8_0
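As no docstring is available, here is a minimal generation sketch; the instruct checkpoint name is an assumption chosen for illustration:

```python
>>> from transformers import AutoTokenizer, Qwen2ForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-7B-Instruct")
>>> model = Qwen2ForCausalLM.from_pretrained("Qwen/Qwen2-7B-Instruct")

>>> inputs = tokenizer("Give me a one-sentence summary of the Qwen2 model.", return_tensors="pt")
>>> generated_ids = model.generate(**inputs, max_new_tokens=40)
>>> print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```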
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2forsequenceclassification
.md
The Qwen2 Model transformer with a sequence classification head on top (linear layer). [`Qwen2ForSequenceClassification`] uses the last token in order to do the classification, as other causal models (e.g. GPT-2) do. Since it does classification on the last token, it needs to know the position of the last token. If a `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
190_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2forsequenceclassification
.md
`pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in each row of the batch). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
190_9_1
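The last-non-padding-token rule described above can be sketched in a few lines of plain PyTorch; this assumes right-padding and may differ in detail from the library's actual implementation:

```python
import torch

pad_token_id = 0
input_ids = torch.tensor([[11, 12, 13, 0, 0],     # two padding tokens at the end
                          [21, 22, 23, 24, 25]])  # no padding

# index of the last non-padding token in each row (assumes right-padding)
last_token_index = (input_ids != pad_token_id).sum(dim=-1) - 1
print(last_token_index)  # tensor([2, 4]) -> classification logits are read at these positions
```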
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2forsequenceclassification
.md
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters:
190_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2forsequenceclassification
.md
and behavior. Parameters: config ([`Qwen2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
190_9_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2fortokenclassification
.md
The Qwen2 Model transformer with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
190_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2fortokenclassification
.md
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Qwen2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not
190_10_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2fortokenclassification
.md
Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
190_10_2
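A minimal sketch of the token classification head; the base checkpoint name is an assumption, and the classification head is randomly initialized here, so in practice it would be fine-tuned first:

```python
>>> from transformers import AutoTokenizer, Qwen2ForTokenClassification

>>> tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B")
>>> # the token classification head is newly initialized and would normally be fine-tuned
>>> model = Qwen2ForTokenClassification.from_pretrained("Qwen/Qwen2-0.5B", num_labels=5)

>>> inputs = tokenizer("Hugging Face is based in New York City", return_tensors="pt")
>>> logits = model(**inputs).logits          # (batch_size, sequence_length, num_labels)
>>> predicted_class_ids = logits.argmax(-1)
```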
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2forquestionanswering
.md
The Qwen2 Model transformer with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`). This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
190_11_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2forquestionanswering
.md
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Qwen2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not
190_11_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/qwen2.md
https://huggingface.co/docs/transformers/en/model_doc/qwen2/#qwen2forquestionanswering
.md
Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
190_11_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
191_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
191_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#overview
.md
The ZoeDepth model was proposed in [ZoeDepth: Zero-shot Transfer by Combining Relative and Metric Depth](https://arxiv.org/abs/2302.12288) by Shariq Farooq Bhat, Reiner Birkl, Diana Wofk, Peter Wonka, Matthias Müller. ZoeDepth extends the [DPT](dpt) framework for metric (also called absolute) depth estimation. ZoeDepth is pre-trained on 12 datasets using relative depth and fine-tuned on two domains (NYU and KITTI) using metric depth. A lightweight head is used with a novel bin adjustment design called
191_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#overview
.md
on two domains (NYU and KITTI) using metric depth. A lightweight head is used with a novel bin adjustment design called metric bins module for each domain. During inference, each input image is automatically routed to the appropriate head using a latent classifier.
191_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#overview
.md
The abstract from the paper is the following:
191_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#overview
.md
*This paper tackles the problem of depth estimation from a single image. Existing work either focuses on generalization performance disregarding metric scale, i.e. relative depth estimation, or state-of-the-art results on specific datasets, i.e. metric depth estimation. We propose the first approach that combines both worlds, leading to a model with excellent generalization performance while maintaining metric scale. Our flagship model, ZoeD-M12-NK, is pre-trained on 12 datasets using relative depth and
191_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#overview
.md
while maintaining metric scale. Our flagship model, ZoeD-M12-NK, is pre-trained on 12 datasets using relative depth and fine-tuned on two datasets using metric depth. We use a lightweight head with a novel bin adjustment design called metric bins module for each domain. During inference, each input image is automatically routed to the appropriate head using a latent classifier. Our framework admits multiple configurations depending on the datasets used for relative depth pre-training and metric
191_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#overview
.md
Our framework admits multiple configurations depending on the datasets used for relative depth pre-training and metric fine-tuning. Without pre-training, we can already significantly improve the state of the art (SOTA) on the NYU Depth v2 indoor dataset. Pre-training on twelve datasets and fine-tuning on the NYU Depth v2 indoor dataset, we can further improve SOTA for a total of 21% in terms of relative absolute error (REL). Finally, ZoeD-M12-NK is the first model that can jointly train on multiple
191_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#overview
.md
of 21% in terms of relative absolute error (REL). Finally, ZoeD-M12-NK is the first model that can jointly train on multiple datasets (NYU Depth v2 and KITTI) without a significant drop in performance and achieve unprecedented zero-shot generalization performance to eight unseen datasets from both indoor and outdoor domains.*
191_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#overview
.md
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/zoedepth_architecture_bis.png" alt="drawing" width="600"/> <small> ZoeDepth architecture. Taken from the <a href="https://arxiv.org/abs/2302.12288">original paper.</a> </small> This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/isl-org/ZoeDepth).
191_1_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#usage-tips
.md
- ZoeDepth is an absolute (also called metric) depth estimation model, unlike DPT which is a relative depth estimation model. This means that ZoeDepth is able to estimate depth in metric units like meters. The easiest way to perform inference with ZoeDepth is by leveraging the [pipeline API](../main_classes/pipelines.md): ```python >>> from transformers import pipeline >>> from PIL import Image >>> import requests
191_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#usage-tips
.md
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> pipe = pipeline(task="depth-estimation", model="Intel/zoedepth-nyu-kitti") >>> result = pipe(image) >>> depth = result["depth"] ``` Alternatively, one can also perform inference using the classes: ```python >>> from transformers import AutoImageProcessor, ZoeDepthForDepthEstimation >>> import torch >>> import numpy as np >>> from PIL import Image >>> import requests
191_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#usage-tips
.md
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image_processor = AutoImageProcessor.from_pretrained("Intel/zoedepth-nyu-kitti") >>> model = ZoeDepthForDepthEstimation.from_pretrained("Intel/zoedepth-nyu-kitti") >>> # prepare image for the model >>> inputs = image_processor(images=image, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(pixel_values)
191_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#usage-tips
.md
>>> with torch.no_grad(): ... outputs = model(pixel_values) >>> # interpolate to original size and visualize the prediction >>> ## ZoeDepth dynamically pads the input image. Thus we pass the original image size as argument >>> ## to `post_process_depth_estimation` to remove the padding and resize to original dimensions. >>> post_processed_output = image_processor.post_process_depth_estimation( ... outputs, ... source_sizes=[(image.height, image.width)], ... )
191_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#usage-tips
.md
>>> predicted_depth = post_processed_output[0]["predicted_depth"] >>> depth = (predicted_depth - predicted_depth.min()) / (predicted_depth.max() - predicted_depth.min()) >>> depth = depth.detach().cpu().numpy() * 255 >>> depth = Image.fromarray(depth.astype("uint8")) ``` <Tip>
191_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#usage-tips
.md
>>> depth = Image.fromarray(depth.astype("uint8")) ``` <Tip> <p>In the <a href="https://github.com/isl-org/ZoeDepth/blob/edb6daf45458569e24f50250ef1ed08c015f17a7/zoedepth/models/depth_model.py#L131">original implementation</a>, the ZoeDepth model performs inference on both the original and flipped images and averages out the results. The <code>post_process_depth_estimation</code> function can handle this for us by passing the flipped outputs to the optional <code>outputs_flipped</code> argument:</p>
191_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#usage-tips
.md
<pre><code class="language-Python">&gt;&gt;&gt; with torch.no_grad(): ... outputs = model(pixel_values) ... outputs_flipped = model(pixel_values=torch.flip(inputs.pixel_values, dims=[3])) &gt;&gt;&gt; post_processed_output = image_processor.post_process_depth_estimation( ... outputs, ... source_sizes=[(image.height, image.width)], ... outputs_flipped=outputs_flipped, ... ) </code></pre> </Tip>
191_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#resources
.md
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ZoeDepth. - A demo notebook regarding inference with ZoeDepth models can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/ZoeDepth). 🌎
191_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#zoedepthconfig
.md
This is the configuration class to store the configuration of a [`ZoeDepthForDepthEstimation`]. It is used to instantiate a ZoeDepth model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the ZoeDepth [Intel/zoedepth-nyu](https://huggingface.co/Intel/zoedepth-nyu) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
191_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#zoedepthconfig
.md
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: backbone_config (`Union[Dict[str, Any], PretrainedConfig]`, *optional*, defaults to `BeitConfig()`): The configuration of the backbone model. backbone (`str`, *optional*): Name of backbone to use when `backbone_config` is `None`. If `use_pretrained_backbone` is `True`, this
191_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#zoedepthconfig
.md
Name of backbone to use when `backbone_config` is `None`. If `use_pretrained_backbone` is `True`, this will load the corresponding pretrained weights from the timm or transformers library. If `use_pretrained_backbone` is `False`, this loads the backbone's config and uses that to initialize the backbone with random weights. use_pretrained_backbone (`bool`, *optional*, defaults to `False`): Whether to use pretrained weights for the backbone. backbone_kwargs (`dict`, *optional*):
191_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#zoedepthconfig
.md
Whether to use pretrained weights for the backbone. backbone_kwargs (`dict`, *optional*): Keyword arguments to be passed to AutoBackbone when loading from a checkpoint e.g. `{'out_indices': (0, 1, 2, 3)}`. Cannot be specified if `backbone_config` is set. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
191_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#zoedepthconfig
.md
`"relu"`, `"selu"` and `"gelu_new"` are supported. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. batch_norm_eps (`float`, *optional*, defaults to 1e-05): The epsilon used by the batch normalization layers. readout_type (`str`, *optional*, defaults to `"project"`): The readout type to use when processing the readout token (CLS token) of the intermediate hidden states of
191_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#zoedepthconfig
.md
The readout type to use when processing the readout token (CLS token) of the intermediate hidden states of the ViT backbone. Can be one of [`"ignore"`, `"add"`, `"project"`]. - "ignore" simply ignores the CLS token. - "add" passes the information from the CLS token to all other tokens by adding the representations. - "project" passes information to the other tokens by concatenating the readout to all other tokens before projecting the
191_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#zoedepthconfig
.md
- "project" passes information to the other tokens by concatenating the readout to all other tokens before projecting the representation to the original feature dimension D using a linear layer followed by a GELU non-linearity. reassemble_factors (`List[int]`, *optional*, defaults to `[4, 2, 1, 0.5]`): The up/downsampling factors of the reassemble layers. neck_hidden_sizes (`List[str]`, *optional*, defaults to `[96, 192, 384, 768]`): The hidden sizes to project to for the feature maps of the backbone.
191_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#zoedepthconfig
.md
The hidden sizes to project to for the feature maps of the backbone. fusion_hidden_size (`int`, *optional*, defaults to 256): The number of channels before fusion. head_in_index (`int`, *optional*, defaults to -1): The index of the features to use in the heads. use_batch_norm_in_fusion_residual (`bool`, *optional*, defaults to `False`): Whether to use batch normalization in the pre-activate residual units of the fusion blocks. use_bias_in_fusion_residual (`bool`, *optional*, defaults to `True`):
191_4_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#zoedepthconfig
.md
use_bias_in_fusion_residual (`bool`, *optional*, defaults to `True`): Whether to use bias in the pre-activate residual units of the fusion blocks. num_relative_features (`int`, *optional*, defaults to 32): The number of features to use in the relative depth estimation head. add_projection (`bool`, *optional*, defaults to `False`): Whether to add a projection layer before the depth estimation head. bottleneck_features (`int`, *optional*, defaults to 256): The number of features in the bottleneck layer.
191_4_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#zoedepthconfig
.md
bottleneck_features (`int`, *optional*, defaults to 256): The number of features in the bottleneck layer. num_attractors (`List[int]`, *optional*, defaults to `[16, 8, 4, 1]`): The number of attractors to use in each stage. bin_embedding_dim (`int`, *optional*, defaults to 128): The dimension of the bin embeddings. attractor_alpha (`int`, *optional*, defaults to 1000): The alpha value to use in the attractor. attractor_gamma (`int`, *optional*, defaults to 2): The gamma value to use in the attractor.
191_4_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#zoedepthconfig
.md
attractor_gamma (`int`, *optional*, defaults to 2): The gamma value to use in the attractor. attractor_kind (`str`, *optional*, defaults to `"mean"`): The kind of attractor to use. Can be one of [`"mean"`, `"sum"`]. min_temp (`float`, *optional*, defaults to 0.0212): The minimum temperature value to consider. max_temp (`float`, *optional*, defaults to 50.0): The maximum temperature value to consider. bin_centers_type (`str`, *optional*, defaults to `"softplus"`):
191_4_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#zoedepthconfig
.md
The maximum temperature value to consider. bin_centers_type (`str`, *optional*, defaults to `"softplus"`): Activation type used for bin centers. Can be "normed" or "softplus". For "normed" bin centers, a linear normalization trick is applied, which results in bounded bin centers. For "softplus", a softplus activation is used and the bin centers are thus unbounded. bin_configurations (`List[dict]`, *optional*, defaults to `[{'n_bins': 64, 'min_depth': 0.001, 'max_depth': 10.0}]`): Configuration for each of the bin heads.
191_4_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#zoedepthconfig
.md
Configuration for each of the bin heads. Each configuration should consist of the following keys: - name (`str`): The name of the bin head - only required in case of multiple bin configurations. - `n_bins` (`int`): The number of bins to use. - `min_depth` (`float`): The minimum depth value to consider. - `max_depth` (`float`): The maximum depth value to consider. In case only a single configuration is passed, the model will use a single head with the specified configuration.
191_4_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#zoedepthconfig
.md
In case only a single configuration is passed, the model will use a single head with the specified configuration. In case multiple configurations are passed, the model will use multiple heads with the specified configurations. num_patch_transformer_layers (`int`, *optional*): The number of transformer layers to use in the patch transformer. Only used in case of multiple bin configurations. patch_transformer_hidden_size (`int`, *optional*):
191_4_13
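As an illustration of the multi-head case, a configuration with two bin heads could be built as sketched below; the names and depth ranges are illustrative values, not copied from a released checkpoint:

```python
>>> from transformers import ZoeDepthConfig

>>> config = ZoeDepthConfig(
...     bin_configurations=[
...         {"name": "nyu", "n_bins": 64, "min_depth": 0.001, "max_depth": 10.0},
...         {"name": "kitti", "n_bins": 64, "min_depth": 0.001, "max_depth": 80.0},
...     ],
...     # the patch transformer arguments below are only used with multiple bin configurations
...     num_patch_transformer_layers=4,
...     patch_transformer_hidden_size=128,
...     patch_transformer_intermediate_size=1024,
...     patch_transformer_num_attention_heads=4,
... )
```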
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#zoedepthconfig
.md
patch_transformer_hidden_size (`int`, *optional*): The hidden size to use in the patch transformer. Only used in case of multiple bin configurations. patch_transformer_intermediate_size (`int`, *optional*): The intermediate size to use in the patch transformer. Only used in case of multiple bin configurations. patch_transformer_num_attention_heads (`int`, *optional*): The number of attention heads to use in the patch transformer. Only used in case of multiple bin configurations. Example: ```python
191_4_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#zoedepthconfig
.md
Example: ```python >>> from transformers import ZoeDepthConfig, ZoeDepthForDepthEstimation
191_4_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#zoedepthconfig
.md
>>> # Initializing a ZoeDepth zoedepth-large style configuration >>> configuration = ZoeDepthConfig() >>> # Initializing a model from the zoedepth-large style configuration >>> model = ZoeDepthForDepthEstimation(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
191_4_16
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#zoedepthimageprocessor
.md
Constructs a ZoeDepth image processor. Args: do_pad (`bool`, *optional*, defaults to `True`): Whether to pad the input. do_rescale (`bool`, *optional*, defaults to `True`): Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by `do_rescale` in `preprocess`. rescale_factor (`int` or `float`, *optional*, defaults to `1/255`): Scale factor to use if rescaling the image. Can be overridden by `rescale_factor` in `preprocess`.
191_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#zoedepthimageprocessor
.md
Scale factor to use if rescaling the image. Can be overidden by `rescale_factor` in `preprocess`. do_normalize (`bool`, *optional*, defaults to `True`): Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess` method. image_mean (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_MEAN`): Mean to use if normalizing the image. This is a float or list of floats the length of the number of
191_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#zoedepthimageprocessor
.md
Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method. image_std (`float` or `List[float]`, *optional*, defaults to `IMAGENET_STANDARD_STD`): Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method.
191_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#zoedepthimageprocessor
.md
number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method. do_resize (`bool`, *optional*, defaults to `True`): Whether to resize the image's (height, width) dimensions. Can be overridden by `do_resize` in `preprocess`. size (`Dict[str, int]`, *optional*, defaults to `{"height": 384, "width": 512}`): Size of the image after resizing. If `keep_aspect_ratio` is `True`,
191_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#zoedepthimageprocessor
.md
Size of the image after resizing. If `keep_aspect_ratio` is `True`, the image is resized by choosing the smaller of the height and width scaling factors and using it for both dimensions. If `ensure_multiple_of` is also set, the image is further resized to a size that is a multiple of this value. Can be overridden by `size` in `preprocess`. resample (`PILImageResampling`, *optional*, defaults to `Resampling.BILINEAR`):
191_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#zoedepthimageprocessor
.md
Can be overridden by `size` in `preprocess`. resample (`PILImageResampling`, *optional*, defaults to `Resampling.BILINEAR`): Defines the resampling filter to use if resizing the image. Can be overridden by `resample` in `preprocess`. keep_aspect_ratio (`bool`, *optional*, defaults to `True`): If `True`, the image is resized by choosing the smaller of the height and width scaling factors and using it for both dimensions. This ensures that the image is scaled down as little as possible while still fitting
191_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#zoedepthimageprocessor
.md
for both dimensions. This ensures that the image is scaled down as little as possible while still fitting within the desired output size. In case `ensure_multiple_of` is also set, the image is further resized to a size that is a multiple of this value by flooring the height and width to the nearest multiple of this value. Can be overridden by `keep_aspect_ratio` in `preprocess`. ensure_multiple_of (`int`, *optional*, defaults to 32):
191_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#zoedepthimageprocessor
.md
Can be overridden by `keep_aspect_ratio` in `preprocess`. ensure_multiple_of (`int`, *optional*, defaults to 32): If `do_resize` is `True`, the image is resized to a size that is a multiple of this value. Works by flooring the height and width to the nearest multiple of this value. Works both with and without `keep_aspect_ratio` being set to `True`. Can be overridden by `ensure_multiple_of` in `preprocess`. Methods: preprocess
191_5_7
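Putting the `keep_aspect_ratio` and `ensure_multiple_of` rules together, the output size can be sketched as follows; this is a simplified reading of the documented behavior, not the library's internal helper:

```python
import math

def sketch_output_size(height, width, target_height=384, target_width=512,
                       keep_aspect_ratio=True, ensure_multiple_of=32):
    scale_h = target_height / height
    scale_w = target_width / width
    if keep_aspect_ratio:
        # use the smaller scaling factor for both dimensions
        scale_h = scale_w = min(scale_h, scale_w)
    # floor height and width to the nearest multiple of `ensure_multiple_of`
    new_h = math.floor(height * scale_h / ensure_multiple_of) * ensure_multiple_of
    new_w = math.floor(width * scale_w / ensure_multiple_of) * ensure_multiple_of
    return new_h, new_w

print(sketch_output_size(480, 640))  # (384, 512)
```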
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#zoedepthfordepthestimation
.md
ZoeDepth model with one or multiple metric depth estimation head(s) on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`ZoeDepthConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
191_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/zoedepth.md
https://huggingface.co/docs/transformers/en/model_doc/zoedepth/#zoedepthfordepthestimation
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
191_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
192_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
192_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#overview
.md
The Idefics3 model was proposed in [Building and better understanding vision-language models: insights and future directions](https://huggingface.co/papers/2408.12637) by Hugo Laurençon, Andrés Marafioti, Victor Sanh, and Léo Tronchon. Idefics3 is an adaptation of the Idefics2 model with three main differences: - It uses Llama3 for the text model. - It uses an updated processing logic for the images. - It removes the perceiver. The abstract from the paper is the following:
192_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#overview
.md
*The field of vision-language models (VLMs), which take images and texts as inputs and output texts, is rapidly evolving and has yet to reach consensus on several key aspects of the development pipeline, including data, architecture, and training methods. This paper can be seen as a tutorial for building a VLM. We begin by providing a comprehensive overview of the current state-of-the-art approaches, highlighting the strengths and weaknesses of each, addressing the major challenges in the field, and
192_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#overview
.md
approaches, highlighting the strengths and weaknesses of each, addressing the major challenges in the field, and suggesting promising research directions for underexplored areas. We then walk through the practical steps to build Idefics3-8B, a powerful VLM that significantly outperforms its predecessor Idefics2-8B, while being trained efficiently, exclusively on open datasets, and using a straightforward pipeline. These steps include the creation of Docmatix, a dataset for improving document understanding
192_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#overview
.md
using a straightforward pipeline. These steps include the creation of Docmatix, a dataset for improving document understanding capabilities, which is 240 times larger than previously available datasets. We release the model along with the datasets created for its training.*
192_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#usage-tips
.md
Input images are processed either by upsampling (if resizing is enabled) or at their original resolution. The resizing behavior depends on two parameters: `do_resize` and `size`. If `do_resize` is set to `True`, the model resizes images so that the longest edge is 4*364 pixels by default. The default resizing behavior can be customized by passing a dictionary to the `size` parameter. For example, `{"longest_edge": 4 * 364}` is the default, but you can change it to a different value if needed.
192_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#usage-tips
.md
Here’s how to control resizing and set a custom size: ```python image_processor = Idefics3ImageProcessor(do_resize=True, size={"longest_edge": 2 * 364}, max_image_size=364) ``` Additionally, the `max_image_size` parameter, which controls the size of each square patch the image is decomposed into, is set to 364 by default but can be adjusted as needed. After resizing (if applicable), the image processor decomposes the images into square patches based on the `max_image_size` parameter.
192_2_1
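A rough sketch of the documented "longest edge" rule; rounding details may differ from the actual image processor:

```python
def sketch_resize_longest_edge(height, width, longest_edge=4 * 364):
    # scale so the longest edge matches `longest_edge`, keeping the aspect ratio
    scale = longest_edge / max(height, width)
    return round(height * scale), round(width * scale)

print(sketch_resize_longest_edge(768, 1024))  # (1092, 1456)
```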
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#usage-tips
.md
This model was contributed by [amyeroberts](https://huggingface.co/amyeroberts) and [andimarafioti](https://huggingface.co/andito).
192_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3config
.md
This is the configuration class to store the configuration of an [`Idefics3Model`]. It is used to instantiate an Idefics3 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Idefics3 [HuggingFaceM4/Idefics3-8B-Llama3](https://huggingface.co/HuggingFaceM4/Idefics3-8B-Llama3) architecture.
192_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3config
.md
[HuggingFaceM4/Idefics3-8B-Llama3](https://huggingface.co/HuggingFaceM4/Idefics3-8B-Llama3) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: use_cache (`bool`, *optional*, defaults to `True`): Whether or not the model should cache the key/value pairs of the attention mechanism. Only relevant if `config.is_decoder=True`.
192_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3config
.md
relevant if `config.is_decoder=True`. image_token_id (`int`, *optional*, defaults to 128257): The id of the "image" token. tie_word_embeddings (`bool`, *optional*, defaults to `False`): Whether or not to tie the word embeddings with the token embeddings. vision_config (`IdeficsVisionConfig` or `dict`, *optional*, defaults to `IdeficsVisionConfig`): Custom vision config or dict for the vision tower text_config (`PretrainedConfig` or `dict`, *optional*, defaults to `LlamaConfig`):
192_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3config
.md
text_config (`PretrainedConfig` or `dict`, *optional*, defaults to `LlamaConfig`): Custom text config or dict for the text model scale_factor (`int`, *optional*, defaults to 2): The scale factor for the image encoder. pad_token_id (`int`, *optional*, defaults to 128002): The id of the padding token. Example: ```python >>> from transformers import Idefics3Model, Idefics3Config >>> # Initializing configuration >>> configuration = Idefics3Config() >>> # Initializing a model from the configuration
192_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3config
.md
>>> # Initializing configuration >>> configuration = Idefics3Config() >>> # Initializing a model from the configuration >>> model = Idefics3Model(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
192_3_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3visionconfig
.md
This is the configuration class to store the configuration of an [`Idefics3VisionModel`]. It is used to instantiate an Idefics3 vision encoder according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the SigLIP checkpoint [google/siglip-base-patch16-224](https://huggingface.co/google/siglip-base-patch16-224) used in the Idefics3 model
192_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3visionconfig
.md
[google/siglip-base-patch16-224](https://huggingface.co/google/siglip-base-patch16-224) used in the Idefics3 model [HuggingFaceM4/Idefics3-8B-Llama3](https://huggingface.co/HuggingFaceM4/Idefics3-8B-Llama3). Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: hidden_size (`int`, *optional*, defaults to 1152): Dimensionality of the encoder layers and the pooler layer.
192_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3visionconfig
.md
Args: hidden_size (`int`, *optional*, defaults to 1152): Dimensionality of the encoder layers and the pooler layer. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 16): Number of attention heads for each attention layer in the Transformer encoder.
192_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3visionconfig
.md
Number of attention heads for each attention layer in the Transformer encoder. num_channels (`int`, *optional*, defaults to 3): Number of channels in the input images. image_size (`int`, *optional*, defaults to 224): The size (resolution) of each image. patch_size (`int`, *optional*, defaults to 32): The size (resolution) of each patch. hidden_act (`str` or `function`, *optional*, defaults to `"gelu_pytorch_tanh"`):
192_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3visionconfig
.md
The size (resolution) of each patch. hidden_act (`str` or `function`, *optional*, defaults to `"gelu_pytorch_tanh"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"`, `"gelu_new"` and `"quick_gelu"` are supported. layer_norm_eps (`float`, *optional*, defaults to 1e-06): The epsilon used by the layer normalization layers. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities.
192_4_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3visionconfig
.md
attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. Example: ```python >>> from transformers.models.idefics3.modeling_idefics3 import Idefics3VisionTransformer >>> from transformers.models.idefics3.configuration_idefics3 import Idefics3VisionConfig
192_4_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3visionconfig
.md
>>> # Initializing an Idefics3VisionConfig with google/siglip-base-patch16-224 style configuration >>> configuration = Idefics3VisionConfig() >>> # Initializing an Idefics3VisionTransformer (with random weights) from the google/siglip-base-patch16-224 style configuration >>> model = Idefics3VisionTransformer(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```
192_4_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3visiontransformer
.md
The Idefics3 Vision Transformer Model outputting raw image embeddings. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
192_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3visiontransformer
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Idefics3VisionConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
192_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3visiontransformer
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
192_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3model
.md
Idefics3 model consisting of a SigLIP vision encoder and a Llama3 language decoder. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
192_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3model
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Idefics3Config`] or [`Idefics3VisionConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
192_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3model
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
192_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3forconditionalgeneration
.md
The Idefics3 Model with a language modeling head. It is made up of a SigLIP vision encoder, with a language modeling head on top. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
192_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3forconditionalgeneration
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Idefics3Config`] or [`Idefics3VisionConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the
192_7_1
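A minimal generation sketch using the checkpoint referenced above; the chat message format is assumed to follow the processor's standard chat-template API, so treat this as a sketch rather than the canonical example:

```python
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoProcessor, Idefics3ForConditionalGeneration

>>> processor = AutoProcessor.from_pretrained("HuggingFaceM4/Idefics3-8B-Llama3")
>>> model = Idefics3ForConditionalGeneration.from_pretrained("HuggingFaceM4/Idefics3-8B-Llama3")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> messages = [
...     {"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "Describe this image."}]}
... ]
>>> prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
>>> inputs = processor(text=prompt, images=[image], return_tensors="pt")

>>> generated_ids = model.generate(**inputs, max_new_tokens=50)
>>> print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```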
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3forconditionalgeneration
.md
load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward Constructs an Idefics3 image processor. Args: do_convert_rgb (`bool`, *optional*, defaults to `True`): Whether to convert the image to RGB. This is useful if the input image is of a different format e.g. RGBA. Only has an effect if the input image is in the PIL format. do_resize (`bool`, *optional*, defaults to `True`):
192_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3forconditionalgeneration
.md
Only has an effect if the input image is in the PIL format. do_resize (`bool`, *optional*, defaults to `True`): Whether to resize the image. The longest edge of the image is resized to be <= `size["longest_edge"]`, with the shortest edge resized to keep the input aspect ratio. size (`Dict`, *optional*, defaults to `{"longest_edge": 4 * 364}`): Controls the size of the output image. This is a dictionary containing the key "longest_edge".
192_7_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3forconditionalgeneration
.md
Controls the size of the output image. This is a dictionary containing the key "longest_edge". The image will be resized such that the longest edge is <= `size["longest_edge"]` and the shortest edge is resized to keep the input aspect ratio. resample (`Resampling`, *optional*, defaults to `Resampling.LANCZOS`): Resampling filter to use when resizing the image. do_image_splitting (`bool`, *optional*, defaults to `True`):
192_7_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3forconditionalgeneration
.md
Resampling filter to use when resizing the image. do_image_splitting (`bool`, *optional*, defaults to `True`): Whether to split the image into sub-images concatenated with the original image. They are split into patches such that each patch has a size of `max_image_size["height"]` x `max_image_size["width"]`. max_image_size (`Dict`, *optional*, defaults to `{"longest_edge": 364}`): Maximum resolution of the patches of images accepted by the model. This is a dictionary containing the key "longest_edge".
192_7_5
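As a back-of-the-envelope sketch (not the library code), the number of sub-images produced by the splitting step can be estimated as:

```python
import math

def sketch_num_subimages(height, width, patch_edge=364):
    rows = math.ceil(height / patch_edge)
    cols = math.ceil(width / patch_edge)
    # the sub-images are concatenated with the original image
    return rows * cols + 1

print(sketch_num_subimages(4 * 364, 3 * 364))  # 4 * 3 + 1 = 13
```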
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3forconditionalgeneration
.md
Maximum resolution of the patches of images accepted by the model. This is a dictionary containing the key "longest_edge". do_rescale (`bool`, *optional*, defaults to `True`): Whether to rescale the image. If set to `True`, the image is rescaled to have pixel values between 0 and 1. rescale_factor (`float`, *optional*, defaults to `1/255`): Rescale factor to rescale the image by if `do_rescale` is set to `True`. do_normalize (`bool`, *optional*, defaults to `True`):
192_7_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3forconditionalgeneration
.md
Rescale factor to rescale the image by if `do_rescale` is set to `True`. do_normalize (`bool`, *optional*, defaults to `True`): Whether to normalize the image. If set to `True`, the image is normalized to have a mean of `image_mean` and a standard deviation of `image_std`. image_mean (`float` or `List[float]`, *optional*, defaults to `IDEFICS_STANDARD_MEAN`): Mean to use if normalizing the image. This is a float or list of floats the length of the number of
192_7_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3forconditionalgeneration
.md
Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method. image_std (`float` or `List[float]`, *optional*, defaults to `IDEFICS_STANDARD_STD`): Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
192_7_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3forconditionalgeneration
.md
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method. do_pad (`bool`, *optional*, defaults to `True`): Whether or not to pad the images to the largest height and width in the batch and number of images per
192_7_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3forconditionalgeneration
.md
Whether or not to pad the images to the largest height and width in the batch and number of images per sample in the batch, such that the returned tensor is of shape (batch_size, max_num_images, num_channels, max_height, max_width). Methods: preprocess Constructs an Idefics3 processor which wraps a Llama tokenizer and an Idefics3 image processor into a single processor. [`Idefics3Processor`] offers all the functionalities of [`Idefics3ImageProcessor`] and [`Idefics3TokenizerFast`]. See
192_7_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3forconditionalgeneration
.md
[`Idefics3Processor`] offers all the functionalities of [`Idefics3ImageProcessor`] and [`Idefics3TokenizerFast`]. See the docstring of [`~IdeficsProcessor.__call__`] and [`~IdeficsProcessor.decode`] for more information. Args: image_processor (`Idefics3ImageProcessor`): An instance of [`Idefics3ImageProcessor`]. The image processor is a required input. tokenizer (`PreTrainedTokenizerBase`, *optional*):
192_7_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3forconditionalgeneration
.md
tokenizer (`PreTrainedTokenizerBase`, *optional*): An instance of [`PreTrainedTokenizerBase`]. This should correspond with the model's text model. The tokenizer is a required input. image_seq_len (`int`, *optional*, defaults to 169): The length of the image sequence i.e. the number of <image> tokens per image in the input. This parameter is used to build the string from the input prompt and image tokens and should match the
192_7_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/idefics3.md
https://huggingface.co/docs/transformers/en/model_doc/idefics3/#idefics3forconditionalgeneration
.md
This parameter is used to build the string from the input prompt and image tokens and should match the value the model used. It is computed as: image_seq_len = int(((image_size // patch_size) ** 2) / (scale_factor**2)) chat_template (`str`, *optional*): A Jinja template which will be used to convert lists of messages in a chat into a tokenizable string. Methods: __call__
192_7_13
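As a worked example of the documented formula, the illustrative values below (not necessarily the checkpoint's actual ones) reproduce the default of 169:

```python
>>> image_size, patch_size, scale_factor = 364, 14, 2  # illustrative values
>>> image_seq_len = int(((image_size // patch_size) ** 2) / (scale_factor**2))
>>> image_seq_len
169
```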